June 22, 2017

One of my preferred developer tools is a web application called Compiler Explorer. The tool itself is excellent and useful when trying to optimize your code.
The author describes it in the GitHub repository as:

Compiler Explorer is an interactive compiler. The left-hand pane shows editable C/C++/Rust/Go/D/Haskell code. The right, the assembly output of having compiled the code with a given compiler and settings. Multiple compilers are supported, and the UI layout is configurable (the Golden Layout library is used for this). There is also an ispc compiler for a C variant with extensions for SPMD.

The main problem I found with the tool is that it does not let you write Qt code: I had to remove all the Qt includes and modify or remove a lot of code…

So I decided to modify the tool so that it can find the Qt headers. First of all, we need to clone the source code:

git clone git@github.com:mattgodbolt/compiler-explorer.git

The application is written using node.js, so make sure you have it installed before starting.

The next step is to modify the options line in etc/config/c++.defaults.properties:

-fPIC -std=c++14 -isystem /opt/qtbase_dev/include -isystem /opt/qtbase_dev/include/QtCore

You need to replace /opt/qtbase_dev with your own Qt build path.

Then simply call make in the root folder, and the application starts running on port 10240 (by default).
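The configuration step can be sketched as a small shell session. This is only an illustration: the `options=` key name and the `/opt/qtbase_dev` path are assumptions about the config format, and it patches a stand-in copy of the file rather than a real checkout:

```shell
# Illustrative only: append -isystem flags so the compiler can find Qt headers.
# Adjust QT_DIR to your own Qt build path.
QT_DIR=/opt/qtbase_dev

# Stand-in for etc/config/c++.defaults.properties from the checkout:
cat > /tmp/c++.defaults.properties <<'EOF'
options=-fPIC -std=c++14
EOF

# Append the include paths to the options line:
sed -i "s|^options=.*|& -isystem $QT_DIR/include -isystem $QT_DIR/include/QtCore|" \
    /tmp/c++.defaults.properties

cat /tmp/c++.defaults.properties
```

After editing the real file in your checkout, `make` in the root folder serves the tool on port 10240 as described above.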

And the mandatory screenshots:



The post Using Compiler Explorer with Qt appeared first on Qt Blog.

Sixth month and sixth podcast. We reach the halfway point of the year with a perfect record of regularity in audiovisual material. I am pleased to present Plasma 5.10 y Akademy 2017 de Almería, the twentieth podcast of KDE España, recorded on June 20. I hope you enjoy it.

Plasma 5.10 y Akademy 2017 de Almería, twentieth podcast of KDE España

The sixth video podcast of KDE España's third season, titled Plasma 5.10 y Akademy 2017 de Almería, was recorded on a hot afternoon and streamed live for anyone in the world who wanted to watch.

Plasma 5.10 y Akademy 2017 de Almería

The participants in the twentieth video podcast were:

  • Ruben Gómez Antolí, member of KDE España, who once again acted as host.
  • Aleix Pol @AleixPol, former president of KDE España (http://www.kde-espana.org/) and vice president of KDE e.V.
  • Albert Astals @tsdgeos, former president of KDE España.

Over the almost one hour and twenty minutes of the video podcast, everything related to two very important events for the KDE Community was discussed: the release and new features of the most modern desktop they offer, that is, Plasma 5.10, and the event everyone in the KDE world awaits impatiently, Akademy and Akademy-es (its Spanish edition).

In addition, thanks to the work of VictorHck (don't miss his blog), the podcast will soon be available on archive.org.

Plasma 5.10 y Akademy 2017 de Almería

I hope you liked it; if so, you know the drill: thumbs up, share, and don't forget to visit and subscribe to the KDE España YouTube channel.

As always, we look forward to your comments, which I assure you are very valuable to the developers, including constructive criticism (the other kind is never good for anyone). We would also like to know which topics you would like us to cover in upcoming podcasts.

I would also like to take this opportunity to invite you to subscribe to the KDE España podcast channel on Ivoox, which will soon be up to date.

June 21, 2017

In this post, I am going to discuss how a submarine works and my thought process in implementing the three basic features of a submarine in the “Pilot a Submarine” activity for the Qt version of GCompris, which are:

  • The Engine
  • The Ballast tanks and
  • The Diving Planes

The Engine

The engines of most submarines are either nuclear-powered or diesel-electric; they drive an electric motor which, in turn, powers the submarine's propellers. In this implementation, we will have two buttons: one for increasing and one for decreasing the power generated by the submarine.

Ballast Tanks

The ballast tanks are spaces in the submarine that can be filled with either water or air. They let the submarine dive and resurface using the concept of buoyancy. If the tanks are filled with water, the submarine dives underwater; if they are filled with air, it floats back to the surface.

Diving Planes

Once underwater, the diving planes of a submarine help to accurately control its depth. They are very similar to the fins on a shark's body, which help it swim and dive. When the planes are pointed downwards, the water flowing over them generates more pressure on the top surface than on the bottom surface, forcing the submarine to dive deeper. This lets the driver control the depth and the angle of the submarine.


In this section, I will go through how I implemented the submarine in QML. For the physics, I used Box2D.

The Submarine

The submarine is a QML Item element, designed as follows:

Item {
    id: submarine

    z: 1

    property point initialPosition: Qt.point(0,0)
    property bool isHit: false
    property int terminalVelocityIndex: 100
    property int resetVerticalSpeed: 500

    /* Maximum depth the submarine can dive when ballast tank is full */
    property real maximumDepthOnFullTanks: (background.height * 0.6) / 2

    /* Engine properties */
    property point velocity
    property int maximumXVelocity: 5

    /* Wings property */
    property int wingsAngle
    property int initialWingsAngle: 0
    property int maxWingsAngle: 2
    property int minWingsAngle: -2

    function destroySubmarine() {
        isHit = true
    }

    function resetSubmarine() {
        isHit = false

        x = initialPosition.x
        y = initialPosition.y

        velocity = Qt.point(0,0)
        wingsAngle = initialWingsAngle
    }

    function increaseHorizontalVelocity(amt) {
        if (submarine.velocity.x + amt <= submarine.maximumXVelocity) {
            submarine.velocity.x += amt
        }
    }

    function decreaseHorizontalVelocity(amt) {
        if (submarine.velocity.x - amt >= 0) {
            submarine.velocity.x -= amt
        }
    }

    function increaseWingsAngle(amt) {
        if (wingsAngle + amt <= maxWingsAngle) {
            wingsAngle += amt
        } else {
            wingsAngle = maxWingsAngle
        }
    }

    function decreaseWingsAngle(amt) {
        if (wingsAngle - amt >= minWingsAngle) {
            wingsAngle -= amt
        } else {
            wingsAngle = minWingsAngle
        }
    }

    function changeVerticalVelocity() {
        /*
         * Movement due to planes
         * Movement is affected only when the submarine is moving forward
         * When the submarine is on the surface, the planes cannot be used
         */
        if (submarineImage.y > 0) {
            submarine.velocity.y = (submarine.velocity.x) > 0 ? wingsAngle : 0
        } else {
            submarine.velocity.y = 0
        }

        /* Movement due to Ballast tanks */
        if (wingsAngle == 0 || submarine.velocity.x == 0) {
            var yPosition = submarineImage.currentWaterLevel / submarineImage.totalWaterLevel * submarine.maximumDepthOnFullTanks

            speed.duration = submarine.terminalVelocityIndex * Math.abs(submarineImage.y - yPosition) // terminal velocity
            submarineImage.y = yPosition
        }
    }

    BallastTank {
        id: leftBallastTank

        initialWaterLevel: 0
        maxWaterLevel: 500
    }

    BallastTank {
        id: rightBallastTank

        initialWaterLevel: 0
        maxWaterLevel: 500
    }

    BallastTank {
        id: centralBallastTank

        initialWaterLevel: 0
        maxWaterLevel: 500
    }

    Image {
        id: submarineImage
        source: url + "submarine.png"

        property int currentWaterLevel: bar.level < 7 ? centralBallastTank.waterLevel : leftBallastTank.waterLevel + centralBallastTank.waterLevel + rightBallastTank.waterLevel
        property int totalWaterLevel: bar.level < 7 ? centralBallastTank.maxWaterLevel : leftBallastTank.maxWaterLevel + centralBallastTank.maxWaterLevel + rightBallastTank.maxWaterLevel

        width: background.width / 9
        height: background.height / 9

        function broken() {
            source = url + "submarine-broken.png"
        }

        function reset() {
            source = url + "submarine.png"
            speed.duration = submarine.resetVerticalSpeed
            x = submarine.initialPosition.x
            y = submarine.initialPosition.y
        }

        Behavior on y {
            NumberAnimation {
                id: speed
                duration: 500
            }
        }

        onXChanged: {
            if (submarineImage.x >= background.width) {
                /* submarine reached the right edge of the screen (handling omitted in this excerpt) */
            }
        }
    }

    Body {
        id: submarineBody
        target: submarineImage
        bodyType: Body.Dynamic
        fixedRotation: true
        linearDamping: 0
        linearVelocity: submarine.isHit ? Qt.point(0,0) : submarine.velocity

        fixtures: Box {
            id: submarineFixer
            width: submarineImage.width
            height: submarineImage.height
            categories: items.submarineCategory
            collidesWith: Fixture.All
            density: 1
            friction: 0
            restitution: 0
            onBeginContact: {
                var collidedObject = other.getBody().target

                if (collidedObject == whale) {
                    /* hit the whale (handling omitted in this excerpt) */
                }
                if (collidedObject == crown) {
                    /* picked up the crown (handling omitted) */
                } else {
                    /* any other collision (handling omitted) */
                }
            }
        }
    }

    Timer {
        id: updateVerticalVelocity
        interval: 50
        running: true
        repeat: true

        onTriggered: submarine.changeVerticalVelocity()
    }
}

The Item is a parent object that holds all the different components of the submarine (the Image, the BallastTank instances and the Box2D Body). It also contains the functions and variables that are global to the submarine.

The Engine

The engine is a very straightforward implementation via the linearVelocity component of the Box2D element. We have two variables global to the submarine for handling the engine component, defined as follows:

property point velocity
property int maximumXVelocity: 5

These are pretty much self-explanatory: velocity holds the current velocity of the submarine, both horizontal and vertical, and maximumXVelocity holds the maximum horizontal speed the submarine can achieve.

For increasing or decreasing the velocity of the submarine, we have two functions global to the submarine, as follows:

function increaseHorizontalVelocity(amt) {
    if (submarine.velocity.x + amt <= submarine.maximumXVelocity) {
        submarine.velocity.x += amt
    }
}

function decreaseHorizontalVelocity(amt) {
    if (submarine.velocity.x - amt >= 0) {
        submarine.velocity.x -= amt
    }
}

Both take the amount by which the velocity.x component should be increased or decreased, check that the result stays within the allowed range, and update it accordingly.
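Since QML functions are plain JavaScript, the clamping behaviour can be illustrated in isolation. This is a standalone sketch with illustrative names, not the activity's actual code:

```javascript
// Standalone sketch of the velocity clamping described above.
const maximumXVelocity = 5;
let velocityX = 0;

function increaseHorizontalVelocity(amt) {
    // only apply the change if the result stays within [0, maximumXVelocity]
    if (velocityX + amt <= maximumXVelocity) velocityX += amt;
}

function decreaseHorizontalVelocity(amt) {
    if (velocityX - amt >= 0) velocityX -= amt;
}

increaseHorizontalVelocity(3); // accepted: velocityX is now 3
increaseHorizontalVelocity(3); // rejected: 6 would exceed the maximum of 5
decreaseHorizontalVelocity(4); // rejected: -1 would fall below 0
console.log(velocityX); // 3
```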

The actual applying of the velocity is very straightforward, which takes place in the Body component of the submarine as follows:

Body {
    linearVelocity: submarine.isHit ? Qt.point(0,0) : submarine.velocity
}

The submarine.isHit property, as the name suggests, holds whether the submarine has been hit by any object (except the pickups). If so, the velocity is reset to (0,0).

Thus, for increasing or decreasing the engine power, we just have to call one of the two functions anywhere from the code:

submarine.increaseHorizontalVelocity(1); /* For increasing H velocity */
submarine.decreaseHorizontalVelocity(1); /* For decreasing H velocity */

The Ballast Tanks

The ballast tanks are implemented separately in BallastTank.qml, since they are instantiated more than once. It looks like the following:

Item {
    property int initialWaterLevel
    property int waterLevel: 0
    property int maxWaterLevel
    property int waterRate: 10
    property bool waterFilling: false
    property bool waterFlushing: false

    function fillBallastTanks() {
        waterFilling = !waterFilling

        if (waterFilling) {
            fillBallastTanks.start()
        } else {
            fillBallastTanks.stop()
        }
    }

    function flushBallastTanks() {
        waterFlushing = !waterFlushing

        if (waterFlushing) {
            flushBallastTanks.start()
        } else {
            flushBallastTanks.stop()
        }
    }

    function updateWaterLevel(isInflow) {
        if (isInflow) {
            if (waterLevel < maxWaterLevel) {
                waterLevel += waterRate
            }
        } else {
            if (waterLevel > 0) {
                waterLevel -= waterRate
            }
        }

        if (waterLevel > maxWaterLevel) {
            waterLevel = maxWaterLevel
        }

        if (waterLevel < 0) {
            waterLevel = 0
        }
    }

    function resetBallastTanks() {
        waterFilling = false
        waterFlushing = false

        waterLevel = initialWaterLevel
    }

    Timer {
        id: fillBallastTanks
        interval: 500
        running: false
        repeat: true

        onTriggered: updateWaterLevel(true)
    }

    Timer {
        id: flushBallastTanks
        interval: 500
        running: false
        repeat: true

        onTriggered: updateWaterLevel(false)
    }
}

What they essentially do is:

  • fillBallastTanks: fills the ballast tanks up to maxWaterLevel. It toggles the waterFilling flag; when set to true, the fillBallastTanks timer is started, which increases the water level in the tank every 500 milliseconds.
  • flushBallastTanks: flushes the ballast tanks down to 0. It toggles the waterFlushing flag; when set to true, the flushBallastTanks timer is started, which decreases the water level in the tank every 500 milliseconds.
  • resetBallastTanks: resets the water level in the ballast tanks to its initial value.
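The timer-driven water-level update can be sketched in plain JavaScript. The values mirror the QML above, and the loop stands in for the 500 ms Timer ticks:

```javascript
// Sketch of updateWaterLevel: each call corresponds to one Timer tick.
const maxWaterLevel = 500;
const waterRate = 10;
let waterLevel = 0;

function updateWaterLevel(isInflow) {
    waterLevel += isInflow ? waterRate : -waterRate;
    // clamp to [0, maxWaterLevel], exactly what the QML checks achieve
    waterLevel = Math.min(maxWaterLevel, Math.max(0, waterLevel));
}

// Three fill ticks (1.5 s of the fillBallastTanks timer):
for (let i = 0; i < 3; i++) updateWaterLevel(true);
console.log(waterLevel); // 30

// One flush tick:
updateWaterLevel(false);
console.log(waterLevel); // 20
```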

In the Submarine Item, we simply use three instances of the BallastTank object, for the left, right and central ballast tanks, setting their initial and maximum water levels.

BallastTank {
    id: leftBallastTank

    initialWaterLevel: 0
    maxWaterLevel: 500
}

BallastTank {
    id: rightBallastTank

    initialWaterLevel: 0
    maxWaterLevel: 500
}

BallastTank {
    id: centralBallastTank

    initialWaterLevel: 0
    maxWaterLevel: 500
}

For filling up or flushing the ballast tanks (centralBallastTank in this case), we just have to call either of the following two functions:

centralBallastTank.fillBallastTanks() /* For filling */
centralBallastTank.flushBallastTanks() /* For flushing */

I will discuss how the depth is maintained using the ballast tanks in the next section.

The Diving Planes

The diving planes control the depth of the submarine once it is moving underwater, and they need to be effectively integrated with the ballast tanks. Both concerns are handled in the changeVerticalVelocity() function, discussed below:

/*
 * Movement due to planes
 * Movement is affected only when the submarine is moving forward
 * When the submarine is on the surface, the planes cannot be used
 */
if (submarineImage.y > 0) {
    submarine.velocity.y = (submarine.velocity.x) > 0 ? wingsAngle : 0
} else {
    submarine.velocity.y = 0
}

However, when either of the following holds:

  • the angle of the planes is reduced to 0, or
  • the horizontal velocity of the submarine is 0,

the ballast tanks take over. This is implemented as:

/* Movement due to Ballast tanks */
if (wingsAngle == 0 || submarine.velocity.x == 0) {
    var yPosition = submarineImage.currentWaterLevel / submarineImage.totalWaterLevel * submarine.maximumDepthOnFullTanks

    speed.duration = submarine.terminalVelocityIndex * Math.abs(submarineImage.y - yPosition) // terminal velocity
    submarineImage.y = yPosition
}

yPosition is derived from the fraction of the tanks filled with water, which determines the depth to which the submarine dives. speed.duration is the duration of the transition animation; it depends directly on how far the submarine has to travel along the Y axis, to avoid a steep rise or fall of the submarine.
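To make the relationship concrete, here is a plain-JavaScript sketch of the same computation, with made-up values for the geometry (maximumDepthOnFullTanks is normally derived from background.height):

```javascript
// Sketch of the ballast-driven depth computation described above.
const maximumDepthOnFullTanks = 150; // stand-in for (background.height * 0.6) / 2
const terminalVelocityIndex = 100;

function targetDepth(currentWaterLevel, totalWaterLevel) {
    // the fraction of the tanks filled determines the target depth
    return currentWaterLevel / totalWaterLevel * maximumDepthOnFullTanks;
}

// Half-full tanks dive to half the maximum depth:
const yPosition = targetDepth(250, 500);
console.log(yPosition); // 75

// The animation duration grows with the vertical distance to cover,
// avoiding a steep rise or fall:
const currentY = 0;
const duration = terminalVelocityIndex * Math.abs(currentY - yPosition);
console.log(duration); // 7500
```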

For increasing or decreasing the angle of the diving planes, we just need to call either of the following two functions:

submarine.increaseWingsAngle(1) /* For increasing */
submarine.decreaseWingsAngle(1) /* For decreasing */


That’s it for now! The two major goals to complete next are the rotation of the submarine (in case more than one tank is used and they are unequally filled) and the UI for controlling the submarine. I will provide an update once they are completed.

It has been a long time since I posted on my blog and, frankly, I missed it. I’ve been busy with school: courses, tons of homework, projects and presentations.

Since I had a great experience with GCompris and KDE in general last year, I decided to apply to this year’s GSoC as well; only this time, I chose another KDE project: Minuet.


Minuet is part of KDE Edu, and its goal is to help teachers and students, both novice and experienced, teach and, respectively, learn and exercise their music skills. It is primarily focused on ear-training exercises; other areas will be available soon.

Minuet includes a virtual piano keyboard, displayed at the bottom of the screen, on which users can visualize the exercises. A piano keyboard is a good starting point for anyone who wants to learn the basics of music theory: intervals, chords, scales, etc. Minuet currently bases all its ear-training exercises on the piano keyboard. While this is a great feature, some may find it not quite suited to their own musical instrument.



My project aims to deliver to the user a framework which will support the implementation of multiple instrument views as Minuet plugins. Furthermore, apart from the piano keyboard, I will implement another instrument for playing the exercise questions and user’s answers.

This mechanism should allow new instruments to be integrated as Minuet plugins. After downloading the preferred instrument plugin, the user would then be able to switch between instruments. It will allow him to enhance his musical knowledge by training his skills using that particular instrument.

By the end of summer, I intend to have changed the current architecture into a multiple-instrument visualization framework and to have refactored the piano keyboard view into a separate plugin. I also intend to have implemented a plugin for at least one new instrument: a guitar.

A mockup of the new guitar view is shown below.

I hope it will be a great summer for me, my mentor and the users of Minuet, to whom I want to offer a better experience through my work.

Sounds like déjà vu? You are right! We used to have Facebook Event sync in KOrganizer back in KDE 4 days thanks to Martin Klapetek. The Facebook Akonadi resource, unfortunately, did not survive through Facebook API changes and our switch to KF5/Qt5.

I’m using a Facebook event sync app on my Android phone, which is very convenient: I get to see all the events I am attending, interested in or just invited to directly in my phone’s calendar, and I can schedule my other events with those in mind. I finally grew tired of having to check my phone or Facebook whenever I wanted to schedule an event through KOrganizer, so I spent a few evenings writing a brand new Facebook Event resource.

Inspired by the Android app the new resource creates several calendars – for events you are attending, events you are interested in, events you have declined and invitations you have not responded to yet. You can configure if you want to receive reminders for each of those.

Additionally, the resource fetches a list of all your friends’ birthdays (at least of those who have their birthday visible to their friends) and puts them into a Birthday calendar. You can configure reminders for those separately as well.

The Facebook Sync resource will be available in the next KDE Applications feature release in August.

Hello readers

I’m glad to share that I have been selected for a Google Summer of Code project under KDE for the second time. It’s my second consecutive year working with the digiKam team.

digiKam is an advanced digital photo management application which enables users to view, manage, edit, organise, tag and share photographs on Linux systems. digiKam can also search items by similarity. This requires computing image fingerprints, which are stored in the main database. These data can take a lot of disk space, especially with huge collections; they bloat the main database and increase the complexity of backing it up, since the main database includes all the core information for each registered item: tags, labels, comments, etc.

The goal of this proposal is to store the similarity fingerprints in a dedicated database. This would be a big relief for end users, as image fingerprints are a few KB of raw data per image; storing all of them takes huge disk space and increases time latency for huge collections.

Thus, to overcome all the above issues, a new database interface will be created (this has already been done for thumbnails and face fingerprints). From a backup point of view, it is also easier to have separate files to optimise.

I’ll keep you updated on my work in upcoming posts.

Till then, I encourage you to use the software. It’s easy to install and use. (You can find a cheat sheet to build digiKam in my previous post!)

Happy digiKaming!






Following the 5th release 5.5.0 published in March 2017, the digiKam team is proud to announce the new release 5.6.0 of the digiKam Software Collection. With this version, the HTML gallery and video slideshow tools are back, database shrinking (e.g. purging stale thumbnails) is also supported on MySQL, the grouping-items feature has been improved, support for custom sidecar MIME types has been added, the geolocation bookmarks received fixes to be fully functional with bundles, and of course lots of bugs have been fixed.

HTML Gallery Tool

The HTML gallery is accessible through the Tools menu in the main bar of both digiKam and showFoto. It allows you to create a web gallery from a selection of photos or a set of albums that you can open in any web browser. There are many themes to choose from, and you can create your own as well. JavaScript support is also available.

Video Slideshow Tool

The video slideshow tool is also accessible through the Tools menu in the main bar of both digiKam and showFoto. It allows you to create a video slideshow from a selection of photos or albums. The generated video file can be viewed in any media player, including phones, tablets, Blu-ray players, etc. There are many settings to customize the format, the codec, the resolution, and the transitions (for example, the famous Ken Burns effect).

Database Integrity Tool

Already in the 5.5.0 release, the tool dedicated to testing database integrity and cleaning obsolete information was improved. Besides obvious data-safety improvements, this can free up quite a lot of space in the digiKam databases. For technical reasons, only SQLite databases could be shrunk in the 5.5.0 release; with 5.6.0 this is now also possible for MySQL databases.

Items Grouping Features

Earlier changes to the grouping behaviour proved that digiKam users have quite diverse workflows - so with the current change we try to represent that diversity.

Originally grouped items were basically hidden away. Due to requests to include grouped items in certain operations, this was changed entirely to include grouped items in (almost) all operations. Needless to say, this wasn’t such a good idea either. So now you can choose which operations should be performed on all images in a group or just the first one.

The corresponding settings live in the configuration wizard under Miscellaneous in the Grouping tab. By default all operations are set to Ask, which will open a dialog whenever you perform this operation and grouped items are involved.

Extra Sidecars Support

Another new capability is recognising additional sidecars. Under the new Sidecars tab in the Metadata part of the configuration wizard, you can specify any additional extension that you want digiKam to recognise as a sidecar. These files will neither be read from nor written to, but they will be moved/renamed/deleted/… together with the item they belong to.

Geolocation Bookmarks

Another important change done for this new version is to restore the geolocation bookmarks feature which did not work with bundle versions of digiKam (AppImage, MacOS, and Windows). The new bookmarker has been fully re-written and is still compatible with previous geolocation bookmarks settings. It is now able to display the bookmark GPS information over a map for a better usability while editing your collection.

Google Summer of Code 2017 Students

This summer the team is proud to assist 4 students to work on separate projects:

Swati Lodha is back in the team. As in 2016, she will work on improving the database interface. After having fixed and improved digiKam's MySQL support, her task this year is to isolate all the database contents dedicated to managing the similarity fingerprints. As with thumbnails and face recognition, these elements will be stored in a new dedicated database. The goal is to reduce the core database size, simplify maintenance and reduce core database latency.

Yingjie Liu is a Chinese student specializing in math and algorithms, who will add a new, efficient face recognition algorithm and try to introduce an AI solution to simplify the face-tagging workflow.

Ahmed Fathi is an Egyptian student who will work on restoring and improving DLNA support in digiKam, to be able to stream collection contents over the network to compatible UPnP devices such as smart TVs, tablets or phones.

Shaza Ismail is another Egyptian student, who will work on an ambitious project: an image editor tool for healing image stains by painting over them with another part of the image, mainly tested on dust spots but usable for hiding other artifacts as well.

Final Words

The next main digiKam version 6.0.0 is planned for the end of this year, when all Google Summer of Code projects will be ready to be backported for a beta release. In September, we will release a maintenance version 5.7.0 with a set of bugfixes as usual.

For further information about 5.6.0, take a look at the list of more than 81 issues closed in Bugzilla.

digiKam 5.6.0 Software collection source code tarball, Linux 32/64 bits AppImage bundles, MacOS package, and Windows 32/64 bits installers can be downloaded from this repository.

Happy digiKaming this summer!

Although KDE Blog is a personal blog, it is always open to contributions from third parties. This is the case with this new article by writer Edith Gómez, editor at Gananci, passionate about digital marketing, specialized in online communication, and becoming a regular on the blog. On this occasion she presents “5 tricks to earn money as a KDE programmer”.

5 tricks to earn money as a KDE programmer

If you are wondering what KDE is, it is because you are not yet a programmer with experience in this world. KDE refers to a community of people who create free software for other people.

But if that is the case, how do you earn money when the whole point is to create free software? That is a good question. There are several ways to raise money as a KDE programmer, but the important thing is knowing how to use these tools and making your best effort to raise money.

Not everyone in the KDE world is a programmer; there are also designers, translators, promoters, and more. All these people come together and work to create functional, modern software and devices. If you are one of them and want to start your own business, you need to know how to make yourself valuable.

With that in mind, if you want to earn money as part of the KDE community, follow these tricks:

1. Become a freelancer: if you have been building websites, writing scripts and designing web pages for a while, then offering your services independently seems simple. However, in the freelance world you must work hard and leave a good impression on your clients.

Make sure your KDE community knows you are taking on outside work; maybe someone needs your help. Also join sites like Freelancer, where you can publish your services and receive offers. And if you like social networks, there is nothing wrong with creating a page for your services, or using LinkedIn to offer them.

2. Selling ads: it is true, you can sell advertising online. However, you need a website and you need to generate traffic. Sometimes just building a good SEO strategy, or offering something for free, generates enough traffic.

You can offer interesting information and promote it in your communities; as long as people click on your ads, you will earn money.

3. Web developer: you already have experience building websites, so why not offer your services? This is one of the most profitable ways to earn money, since you can build a good reputation and be recommended by members of your community.

If you don't want to spend all your time building new websites, you can buy an old website, renovate it, and sell it for more money.

4. Sell your own program or product: if you are a good programmer, you surely have products to sell. You need to launch the product, make it known, and promote it.

You can team up with another programmer or freelancer to help you put the finishing touches on your product, work out its price, and then sell it.

5. Donations: this is the most common way to raise money in the KDE world. But as common as it is, it is also hard to predict, since not everyone is willing to donate.

In any case, that is how the KDE world works. One of the best ways to encourage donations is to create free templates that people find interesting; when you release them to the world, you will get many visits and donations because people will want more of your products.

As you gain traffic, you can also set up an affiliate or advertising system and earn even more money. Programmers tend to produce scripts and templates by the dozen, so there is no limit and you can keep attracting visitors.

As you can see, earning money as a programmer takes some work, but it is not impossible. If you are willing to work hard with these tricks, you can earn some extra income without giving up what you love most: creating free software.

These options are especially recommended for those who are new to the industry or looking to start their own business. You don't need a full-time job if you know how to find money elsewhere.

June 20, 2017

I got an opportunity to represent KDE in FOSSASIA 2017 held in mid-March at Science Center, Singapore. There were many communities showcasing their hardware, designs, graphics, and software.

 Science Center, Singapore
I talked about KDE: what it aims at, the various programs KDE organizes to help budding developers, and how these are mentored. I walked through all the prerequisites to start contributing to KDE, introducing the audience to KDE Bugzilla, the IRC channels, the various application domains, and the SoK (Season of KDE) proposal format.


Then I shared my journey in KDE and talked briefly about my projects under Season of KDE and Google Summer of Code. The audience was really enthusiastic and curious to start contributing to KDE. I sincerely thank FOSSASIA for giving me this wonderful opportunity.

Overall, working in KDE has been a very enriching experience. I wish to continue contributing to KDE and to share my experiences to help budding developers get started.

As the first subject for this animation blog series, we will be taking a look at animation curves.

Curves or, better, easing curves are one of the first concepts we are exposed to when dealing with the subject of animation in the QML space.

What are they?

Well, in simplistic terms, they are a description of an X position over a time axis that starts at (0, 0) and ends at (1, 1). These curves are …
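As an illustration (not from the original post), the quadratic ease-in-out curve is one such mapping: it takes a normalized time $t \in [0,1]$ to a progress value $f(t)$, with $f(0)=0$ and $f(1)=1$:

```latex
f(t) =
\begin{cases}
  2t^{2},           & 0 \le t < \tfrac{1}{2},\\
  1 - 2(1 - t)^{2}, & \tfrac{1}{2} \le t \le 1.
\end{cases}
```

Linear interpolation is the special case $f(t) = t$; every other easing curve simply reshapes how quickly progress accumulates along the way (note that $f(\tfrac{1}{2}) = \tfrac{1}{2}$ from either branch, so the curve is continuous).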

The post A tale of 2 curves appeared first on KDAB.

A while ago I presented RSS Indicator, an RSS reader for our Plasma desktop that can help us follow some of the websites that interest us most. The KDE Community is not content with just one alternative, and today, as part of the KDE Plasmoids series, I am pleased to present Ttrss pocket, an RSS and Pocket reader for your Plasma desktop that is sure to win over more than a few users.

Ttrss pocket, another RSS reader for your Plasma desktop – KDE Plasmoids (77)

From Kinta comes ttrss pocket, an RSS and Pocket news reader especially designed to be used full-screen on a touch device, although it works without problems on any computer.

In fact, its creator has rescued the code of an old project and brought it up to date. He has just released version 1.1, which includes a juicy new feature: its default engine is now WebEngineView, replacing WebKit.

Kinta warns us that this first version may not be as stable as one would wish, so keep an eye out for updates. He also recommends going back to version 1.0.1 if you run into too many problems.

Ttrss pocket

In short, an excellent alternative that is well worth trying if you are still an avid reader of news via RSS.

More information: KDE.Store

What are plasmoids?

For those not initiated in this blog, the word plasmoid may sound a bit odd, but it is simply the name given to the widgets for KDE's Plasma desktop.

In other words, plasmoids are just small applications that, placed on the desktop or on one of its panels, extend its functionality or simply decorate it.

We are very happy to announce the first AppImage of the next generation Kdenlive. We have been working since the first days of 2017 to cleanup and improve the architecture of Kdenlive’s code to make it more robust and clean. This also marked a move to QML for the display of the timeline.

This first AppImage is only provided for testing purposes. It crashes a lot because many features have yet to be ported to the new code, but you can already get a glimpse of the new timeline, move clips and compositions, group items and add some effects. This first AppImage can be downloaded from the KDE servers. Just download the AppImage, make the file executable and run it. This version is not appropriate for production use and, due to file format changes, will not properly open previous Kdenlive project files. We are hoping to provide reliable nightly build AppImages so that our users can follow the development and provide feedback before the final release.

Today is also our 18th Kdenlive Café, so you can meet us tonight, the 20th of June, at 9pm (CEST) in the #kdenlive channel to discuss the evolution of and issues around Kdenlive.

I will also be presenting the progress of this Kdenlive version this summer (22nd of July) at Akademy in Almería, Spain, so feel free to come and visit the KDE Community at this great event.

Set up the arcanist for Koko

  • It was quite easy to install. On my Arch Linux, the command below did the job:

    yaourt -S arcanist-git

  • Then I had to add .arcconfig to the Koko repository so that arc knows where to publish the changes:

    { "phabricator.uri": "https://phabricator.kde.org/" }

  • The only problem is with the SSL certificates, as the university campus wireless network uses its own self-signed certificate. This creates problems accessing SSL-encrypted web content, which is pretty much everything related to development :P
  • Also, the university campus network does not allow SSH, which will prevent me from pushing the changes to the git repository.
  • Hence, to use arcanist, every time I will have to check curl.cainfo in /etc/php/php.ini and set/unset the environment variable GIT_SSL_CAINFO depending on the network I am using.
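Given that, a small shell helper along these lines could toggle the variable per network. The CA bundle path is a placeholder of mine, not from the original post:

```shell
# Hypothetical helper: point Git (and arcanist) at the campus CA bundle
# while on the university network, and unset it elsewhere.
set_git_ca() {
    if [ "$1" = "campus" ]; then
        # Placeholder path; substitute the actual campus CA certificate.
        export GIT_SSL_CAINFO=/etc/ssl/certs/campus-ca.pem
    else
        unset GIT_SSL_CAINFO
    fi
}

set_git_ca campus
echo "${GIT_SSL_CAINFO:-unset}"   # → /etc/ssl/certs/campus-ca.pem
```

A function like this could be sourced from the shell profile, so switching networks is a single command rather than editing files by hand.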

June 19, 2017

KRuler, in case you don't know it, is a simple software ruler to measure lengths on your desktop. It is one of the oldest KDE tools, its first commit dating from November 4th, 2000. Yes, it's almost old enough to vote.

I am a long time KRuler user. It gets the job done, but I have often found myself saying "one day I'll fix this or that". And never doing it.

Hidpi screens really hurt the poor app, so I finally decided to do something about it and spend some time on it during my daily commute.

This is what it looked like on my screen when I started working on it:

KRuler Before

As any developer would, I expected it would not be more than a week of work... Of course it took way longer than that, because there was always something odd here and there, preventing me from pushing a patch.

I started by making KRuler draw scale numbers less often to avoid ugly overlapping texts. I then made it draw ticks on both sides, to go from 4 orientations (North, South, West, East) to 2: vertical or horizontal.

The optional rotation and buttons were getting in the way though: the symmetric ticks required the scale numbers to be vertically centered so buttons were overlapping it. I decided to remove them (they were already off by default). With only two orientations it is less useful to have rotation buttons anyway: it is simple enough to use either the context menu, middle-click the ruler, or the R shortcut to change the orientation. Closing is accessible through the context menu as well.

One of the reasons (I think) for the 4 orientations was the color picker feature. It makes little sense to me to have a color picker in a ruler: it is more natural to use KColorChooser to pick colors. I removed the color picker, allowing me to remove the oddly shaped mouse cursor and refresh the appearance of the length indicator to something a bit nicer.

I then made it easier to adjust the length of the ruler by dragging its edges instead of having to pick the appropriate length from a sub-menu of the context menu. This made it possible to remove this sub-menu.

This is what KRuler looks like now:

KRuler after

That is only part 1 though. I originally had 2 smaller patches to add, but Jonathan Riddell, who kindly reviewed the monster patch, requested another small fix, so that makes 3 patches to go. I need to set up and figure out how to use Arcanist to submit them to Phabricator, as I have been told Review Board is old school these days :)

Or: Tying loose ends where some are slightly too short yet.


If

  • you favour offline documentation (not only due to nice integration with IDEs like KDevelop),
  • you develop code using KDE Frameworks or other Qt-based libraries,
  • you know all the KF5 libraries have seen many people taking care of the API documentation in the code over all the years,
  • you have read about doxygen's capability to create API dox in QCH format,
  • and you want your Linux distribution's package management to automatically deliver the latest version of the documentation (resp. QCH files) together with the KDE Frameworks libraries and headers (and ideally the same for other Qt-based libraries),

then the idea easily follows to just extend the libraries' build system to also spit out QCH files during the package builds.

It’s all prepared, can ship next week, latest!!1

Which would just be a simple additional target and command, invoking doxygen with a proper configuration file. Right? So simple, you wonder why no-one had done it yet :)

One initial challenge seemed quickly handled, which was even more encouraging:
for proper documentation one also wants cross-linking to the documentation of things used in the API which come from other libraries, e.g. base classes and types. This requires passing doxygen the list of those other documentations together with a set of parameters, to generate proper qthelp:// URLs or to copy over documentation for things like inherited methods.
Such a listing gets very long, especially for KDE Frameworks libraries in tier 3. And with indirect dependencies pulled into the API, the list might become incomplete on changes; the same goes for any other changes to the parameters of those other documentations.
So it is basically a similar situation to linking code libraries, which suggests a similar handling: place the needed information with the CMake config files of the targeted library, so whoever cross-links to the QCH file of that library can fetch the up-to-date information from there.

Things seemed to work okay on first tests, so last September a pull request was made to add some respective macro module to Extra-CMake-Modules to get things going and a blog post “Adding API dox generation to the build by CMake macros” was written.

This… works. You just need to prepare this. And ignore that.

But looking closer, lots of glitches popped up. Worse, show stoppers appeared at both ends of the pipeline:
At the generation side, doxygen turned out to have bitrotted for QCH creation, possibly due to lack of use. Time to sacrifice to the Powers of FLOSS: git clone the sources and poke around to see what is broken and how to fix it. Some time and an accepted pull request later, the biggest issue (some content missed being added to the QCH file) was initially handled; it just also needed to get out as a released version (which it now has been for some months).
At the consumption side, Qt Assistant and Qt Creator turned out to be no longer able to properly show QCH files with JavaScript and other HTML5 content, due to QtWebKit having been deprecated/dropped; in many distributions both apps now only use QTextBrowser for rendering the documentation pages. And not everyone is using KDevelop and its documentation browser, which uses QtWebKit or, in the master branch, favours QtWebEngine if present.
Which means an investment into QCH files from doxygen would only be interesting to a small audience. Myself currently without the resources and interest to mess with the Qt help engine sources, I look with hope at the resurrection of QtWebKit as well as the patch for a QtWebEngine-based help engine (if you are Qt-involved, please help push that patch some more!).

Finally kicking off the production cycle

Not properly working tools, nothing trying to use the tools on a bigger scale… a classical self-blocking state. So, time to break this up and get some momentum into it, by tying first things together where possible and enabling the generation of QCH files during builds of the KDE Frameworks libraries.

And thus the current master branches (which will become v5.36 in July) have gained, for one, the new module ECMAddQch in Extra-CMake-Modules, and for another, in all KDE Frameworks libraries with public C++ API, the option to generate QCH files with the API documentation by passing -DBUILD_QCH=ON to cmake. If you have also passed -DKDE_INSTALL_USE_QT_SYS_PATHS=ON (or install to the same prefix as Qt), the generated QCH files will be installed to places where Qt Assistant and Qt Creator even automatically pick them up and include them as expected:

Qt Assistant with lots of KF5 API dox

KDevelop picks them up as well, but needs some manual reconfiguration to do so.

(And of course ECMAddQch is designed to be useful for non-KF5 libraries as well, give it a try once you got hold of it!)
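To make that concrete, here is a sketch of such a build; the out-of-source build directory is my own assumption, while the two cmake options are the ones named above:

```shell
# From the root of a KDE Frameworks library checkout (illustrative):
mkdir -p build && cd build
cmake -DBUILD_QCH=ON \
      -DKDE_INSTALL_USE_QT_SYS_PATHS=ON \
      ..
make
make install
```

After installing, the generated QCH file lands in the location the Qt documentation tools scan, as described above.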

You and getting rid of the remaining obstacles

So while for some setups the generated QCH files of the KDE Frameworks are already useful (I have been using them for some weeks for e.g. KDevelop development, in KDevelop), for many they still have to become so. This will take some more time and ideally contributions from others too, including the Doxygen and Qt help engine maintainers.

Here a list of related reported Doxygen bugs:

  • 773693 – Generated QCH files are missing dynsections.js & jquery.js, result in broken display (fixed for v1.8.13 by patch)
  • 773715 – Enabling QCH files without any JavaScript, for viewers without such support
  • 783759 – PERL_PATH config option: when is this needed? Still used?
  • 783762 – QCH files: “Namespaces” or “Files” in the navigation tree get “The page could not be found” (proposed patch)
  • 783768 – QCH files: classes & their constructors get conflicting keyword handling (proposed patch)
  • YETTOFILE – doxygen tag files contain origin paths for “file”, leaking info and perhaps is an issue with reproducible builds

And a related reported Qt issue:

There is also one related reported CMake issue:

  • 16990 – Wanted: Import support for custom targets (extra bonus: also export support)

And again, it would be also good to see the patch for a QtWebEngine based help engine getting more feedback and pushing by qualified people. And have distributions doing efforts to provide Qt Assistant and Qt Creator with *Web*-based documentation engines (see e.g. bug filed with openSUSE).

May the future be bright^Wdocumented

I am happy to see that Gentoo & FreeBSD packagers have already started to look into extending their KDE Frameworks packaging with generated API dox QCH files for the upcoming 5.36.0 release in July, with other packagers planning to do so soon as well.

So perhaps one not too distant day it will be just normal business to have QCH files with API documentation provided by your distribution not just for the Qt libraries themselves, but also for every library based on them. After all, documentation has been one of the things making Qt so attractive. As a developer of Qt-based software, I very much look forward to that day :)

Next stop then: QML API documentation :/

I'm glad to announce that a first stable version of Brooklyn is released!
What's new? Well:

  • Telegram and IRC APIs are fully supported;
  • it manages attachments (even Telegram's video notes), also on text-only protocols, through a web server;
  • it has an anti-flood feature on IRC (e.g. it doesn't notify the other channels if a user logs out without having written any message). For this I have to say "thank you" to Cristian Baldi, a W2L developer who had this fabulous idea;
  • it provides support for edited messages;
  • the SASL login mechanism is implemented;
  • map locations are supported through OpenStreetMap;
  • you can see a list of the other channels' members by typing "botName users" on IRC or using the "/users" command on Telegram;
  • if someone writes a private message to the bot instead of in a public channel, it sends them the license message "This software is released under the GNU AGPL license. https://phabricator.kde.org/source/brooklyn/".

As you may have already noticed, after talking with my mentor I decided to modify the GSoC timeline. We decided to wait until the Rocket.Chat REST APIs become more stable, and in the meantime to provide a fully working IRC/Telegram bridge.
This helped me deliver a more stable and useful piece of software for the first evaluation.
We are also considering writing a custom wrapper for the REST APIs because the current solutions don't fit our needs.

The last post reached over 600 people and that's awesome!
As always I will appreciate every single suggestion.
Have you tried the application? Do you have any plans to do so? Tell me everything in the comments section down below!

My project for Blue Systems is maintaining Calamares, the distro-independent installer framework. Not surprisingly, working on it means installing lots of Linux distros. Here's my physical-hardware testing setup: two identical older HP desktop machines and a stack of physical DVDs. Very old-school. Often I use VirtualBox, but sometimes the hum of a DVD is just what I need to calm down. There's a KDE Neon, a Manjaro and a Netrunner DVD there, but the machine labeled Ubuntu is running Kannolo and sporting an openSUSE Geeko.

I’m all for eclecticism.

So far, I've found one new bug in Calamares and fixed a handful of them. I'm thankful to Teo, the previous Calamares maintainer, for providing helpful historical information, and to the downstream users (e.g. the distros) for being cheerful in explaining their needs.

Installing a bunch of different modern Linuxen is kind of neat; the variations in KDE Plasma Desktop configuration and branding are wild. Nearly all of them have trouble being usable on small screen sizes (e.g. the 800×600 that VirtualBox starts with; this has since been fixed). They all seem to install VirtualBox guest additions and can handle resizes immediately, so it's not a huge issue, just annoying. I've only broken one of my Linux installs so far (running an update, which then crashed kscreenlocker, and now it just comes up as a black screen). I've got a KDE Neon dev/unstable as my main development VM set up, with KDevelop and the whole shizzle… it's very nice inside my KDE 4 desktop on FreeBSD.

I’ve got two favorite features, so far, in Linux live CDs and in KDE Plasma installations: ejecting the live CD on shutdown (Neon does this) and skipping the confirmation screen + 30 second timeout when clicking logout or shutdown (Netrunner does this).

So, time to hunker down with the list of issues, and in the meantime: keep on installin’.

This is my first blog post. It’s a great opportunity to start documenting my journey as a software engineer with my GSoC project with digiKam as a part of KDE this summer.

June 18, 2017

Robert Kaye, creator of MusicBrainz

Robert Kaye is definitely a brainz-over-brawn kinda guy. As the creator of MusicBrainz, ListenBrainz and AcousticBrainz, all created and maintained under the MetaBrainz Foundation, he has pushed Free Software music cataloguing-tagging-classifying to the point it has more or less obliterated all the proprietary options.

In July he will be in Almería, delivering a keynote at the 2017 Akademy -- the yearly event of the KDE community. He kindly took some time out of packing for a quick trip to Thailand to talk with us about his *Brainz projects, how to combine altruism with filthy lucre, and a cake he once sent to Amazon.

Robert Kaye: Hola, ¿qué tal?

Paul Brown: Hey! I got you!

Robert: Indeed. :)

Paul: Are you busy?

Robert: I'm good enough, packing can wait. :)

Paul: I'll try and be quick.

Robert: No worries.

* Robert has vino in hand.

Paul: So you're going to be delivering the keynote at Akademy...

* Robert is honored.

Paul: Are you excited too? Have you ever done an Akademy keynote?

Robert: Somewhat. I've got... three? Four trips before going to Almería. :)

Paul: God!

MetaBrainz is the umbrella project under which all other *Brainz are built.

Robert: I've never done a keynote before. But I've done tons and tons of presentations and speeches, including to the EU, so this isn't something I'm going to get worked up about thankfully.

Paul: I'm assuming you will be talking about MetaBrainz. Can you give us a quick summary of what MetaBrainz is and what you do there?

Robert: Yes, OK. In 1997/8 in response to the CDDB database being taken private, I started the CD Index. You can see a copy of it in the Wayback Machine. It was a service to look up CDs and I had zero clues about how to do open source. Alan Cox showed up and told me that databases would never scale and that I should use DNS to do a CD lookup service. LOL. It was a mess of my own making and I kinda walked away from it until the .com crash.

Then in 2000, I sold my Honda roadster and decided to create MusicBrainz. MusicBrainz is effectively a music encyclopedia. We know what artists exist, what they've released, when, where their Twitter profiles are, etc. We know track listings, acoustic fingerprints, CD IDs and tons more. In 2004 I finally figured out a business model for this and created the MetaBrainz Foundation, a California tax-exempt non-profit. It cannot be sold, to prevent another CDDB. For many years MusicBrainz was the only project. Then we added the Cover Art Archive to collect music cover art. This is a joint project with the Internet Archive.

Then we added CritiqueBrainz, a place for people to write CC licensed music reviews. Unlike Wikipedia, ours are non-neutral POV reviews. It is okay for you to diss an album or a band, or to praise it.

Paul: An opinionated musical Wikipedia. I already like it.

Robert: Then we created AcousticBrainz, which is a machine learning/analysis system for figuring out what music sounds like. Then the community started BookBrainz. And two years ago we started ListenBrainz, which is an open source version of last.fm's audioscrobbler.

MusicBrainz is a repository of music metadata widely used by commercial and non-commercial projects alike.

Paul: Wait, let's backtrack a second. Can you explain AcousticBrainz a bit more? What do you mean when you say "figure out what music sounds like"?

Robert: AcousticBrainz allows users to download a client to run on their local music collection. For each track it does a very detailed low-level analysis of the acoustics of the file. This result is uploaded to the server and the server then does machine learning on it to guess: does it have vocals? Male or female? Beats per minute? Genre? All sorts of things, and a lot of them still need a lot of improvement.

Paul: Fascinating.

Robert: Researchers provided all of the algorithms, being very proud and all: "I've done X papers on this and it is the state of the art". State of the art if you have 1,000 audio tracks, which is f**king useless to an open source person. We have three million tracks and we're not anywhere near critical mass. So, we're having to fix the work the researchers have done and then recalculate everything. We knew this would happen, so we engineered for it. We'll get it right before too long.

All of our projects are long-games. Start a project now and in five years it might be useful to someone. Emphasis on "might".

Then we have ListenBrainz. It collects the listening history of users. User X listened to track Y at time Z. This expresses the musical taste of one user. And with that we have all three elements that we've been seeking for over a decade: metadata (MusicBrainz), acoustic info (AcousticBrainz) and user profiles (ListenBrainz). The holy trinity as it were. You need all three in order to build a music recommendation engine.

The algorithms are not that hard. Having the underlying data is freakishly hard, unless you have piles of cash. Those piles of cash and therefore the engines exist at Google, Last.fm, Pandora, Spotify, et al. But not in open source.

Paul: Don't you have piles of cash?

Robert: Nope, no piles of cash. Piles of eager people, however! So, once we have these databases at maturity we'll create some recommendation engine. It will be bad. But then people will improve it and eventually a pile of engines will come from it. This has a significant chance of impacting the music world.

Paul: You say that many of the things may be useful one day, but you also said MetaBrainz has a business model. What is it?

Robert: The MetaBrainz business model started out with licensing data using the non-commercial licenses. Based on "people pay for frequent and easy updates to the data". That worked to get us to 250k/year.

Paul: Licensing the data to...?

Robert: The MusicBrainz core data. But there were a lot of people who didn't need the data on an hourly basis.

Paul: Sorry. I mean *who* were you licensing to?

Robert: It started with the BBC and Google. Today we have all these supporters. Nearly all the large players in the field use our data nowadays. Or lie about using our data. :)

Paul: Lie?

Robert: I've spoken to loads of IT people at the major labels. They all use our data. If you speak to the execs, they will swear that they have never used our data.

Paul: Ah. Hah hah. Sounds about right.

Robert: Anyways, two years ago we moved to a supporter model. You may legally use our data for free, but morally you should financially support us. This works.

Paul: Really?

Robert: We've always used what I call a "drug dealer business model". The data is free. Engineers download it and start using it. When they find it works and want to push it into a product they may do that without talking to us. Eventually we find them and knock on their door and ask for money.

Paul: They pay you? And I thought the music industry was evil.

Robert: This is the music *tech* companies. They know better.


Their bizdev types will ask: where else can we get this data for cheaper? The engineers look around for other options. Prices can range from 3x to 100x, depending on use, and the data is not nearly as good. So they sign up with us. This is not out of the kindness of their hearts.

Paul: Makes more sense now.

Robert: Have you heard the Amazon cake story?

Paul: The what now?

Robert: Amazon was 3 years behind in paying us. I harangued them for months. Then I said: "If you don't pay in 2 weeks, I am going to send you a cake."

Amazon got cake to celebrate the third anniversary of an unpaid invoice.

"A cake?"

"Yes, a cake. One that says 'Congratulations on the 3rd anniversary'..."

They panicked, but couldn't make it happen.

So I sent the cake, then silence for 3 days.

Then I got a call. Head of legal, head of music, head of AP, head of custodial, head of your momma. All in one room to talk to me. They rattled off what they owed us. It was correct. They sent a check.

Cake was sent on Tuesday, check in hand on Friday.

This was pivotal for me: recognizing that we can shame companies to do the right thing... Such as paying us because to switch off our data (drugs) is far worse than paying.

Last year we made $323k, and this year should be much better. We have open finances and everything. People can track where money goes. We get very few questions about us being evil and such.

Paul: How many people work with you at MetaBrainz, as in, are on the payroll?

Robert: This is my team. We have about 6 full-time equivalent positions. To add to that, we have a core of contributors: coders, docs, bugs, devops... Then a medium ring of hard-core editors. Nicolás Tamargo and one other guy have made over 1,000,000 edits to the database!

Paul: How many regular volunteers then?

Robert: 20k editors per year. Más o menos. And we have zero idea how many users. We literally cannot estimate it. 40M requests to our API per day. 400 replicated copies of our DB. VLC uses us and has the largest installation of MusicBrainz outside of MetaBrainz.

And we ship a virtual machine with all of MusicBrainz in it. People download that and hammer it with their personal queries. Google Assistant uses it, Alexa might as well, not sure. So, if you ask Google Assistant a music-related question, it is answered in part by our data. We've quietly become the music data backbone of the Internet and yet few people know about us.

Paul: Don't you get lawyers calling you up saying you are infringing on someone's IP?

Robert: Kinda. There are two types: 1) the spammers have found us and are hammering us with links to pirated content. We're working on fixing that. 2) Other lawyers will tell us to take content down, when we have ZERO content. They start being all arrogant. Some won't buzz off until I tell them to provide me with an actual link to illegal content on our site. And when they can't do it, they quietly go away.

The basic fact is this: we have the library card catalog, but not the library. We mostly only collect facts and facts are not copyrightable.

Paul: What about the covers?

Robert: That is where it gets tricky. We engineered it so that the covers never hit our servers and only go to the Internet Archive. The Archive is a library and therefore has certain protections. If someone objects to us having something, the archive takes it down.

Paul: Have you had many objections?

Robert: Not that many. Mostly for liner notes, not so much for covers. The rights for covers were never aggregated. If someone says they have the rights for a collection, they are lying to you. It's a legal mess, plain and simple. All of our data is available under clear licenses, except for the CAA, which is "as is".

Paul: What do you mean by "rights for a collection"?

Robert: Rights for a collection of cover art. The rights reside with the band, or the friend of the band who designed the cover. Lawyers never saw any value in covers pre-Internet, so the recording deals never included the rights to the covers. Everyone uses them without permission.

Paul: I find that really surprising. So many iconic covers.

Robert: It is obvious in the Internet age, less so before the Internet. The music industry is still quite uncomfortable with the net.

Paul: Record labels always so foresightful.

Robert: Exactly. Let's move away from labels and the industry.

Though, one tangential thing: I envisioned X, Y, Z uses for our data, but we made the data rigorous, well-connected and concise. Good database practices. And that is paying off in spades. The people who did not do that are finding that their data is no longer up to snuff for things like Google Assistant.

Paul: FWIW, I had never heard of Gracenote until today. I had heard of MusicBrainz, though. A lot.

Robert: Woo! I guess we're succeeding. :)

Paul: Well, it is everywhere, right?

Robert: For a while it was even in Antarctica! A sysadmin down there was wondering where the precious bandwidth went during the winter. Everyone was tagging their music collection when bored. So he set up a replica for the winter to save on bandwidth.

Paul: Of course they were and of course he did.

Robert: Follows, right? :)

Paul: Apart from music, which you clearly care for A LOT, I heard you are an avid maker too.

Robert: Yes. Party Robotics was a company I founded when I was still in California and we made the first affordable cocktail robots. But I also make blinky LED light installations. Right now I am working on a sleep debugger to try and improve my crapstastic sleep.

I have a home maker space with an X-Carve, 3D printer, hardware soldering station and piles of parts and tools.

Paul: Uh... How do flashing lights help with sleep?

Robert: Pretty lights and sleep-debugging are separate projects.

Paul: What's your platform of choice, Arduino?

Robert: Arduino and increasingly Raspberry Pi. The Zero W is the holy grail, as far as I am concerned.

Oh! And another project I want: ElectronicsBrainz.

Paul: This sounds fun already. Please tell.

Robert: Info, schematics and footprints for electronic parts. The core libraries with KiCad are never enough; you need to hunt for them. Screw that. Upload to ElectronicsBrainz, then, if you use a part, rate it, improve it. The good parts float to the top, the bad ones drop out. Integrate with KiCad and, bam! Makers can be much more useful. In fact, this open data paradigm and the associated business model is ripe for the world. There are data silos *everywhere*.

Paul: I guess that once you have set up something like MusicBrainz, you start seeing all sorts of applications in other fields.

Robert: Yes. Still, we can't do everything. The world will need more MetaBrainzies.

Paul: Meanwhile, how can non-techies help with all these projects?

Robert: Editing data/adding data, writing docs or managing bug reports as well. Clearly our base of editors is huge. It is a very transient community, except for the core.

Also, one thing that I want to mention in my keynote is blending volunteers and paid staff. We've been really lucky with that. The main reason for that is that we're open. We have nothing to hide. We're all working towards the same goals: making the projects better. And when you make a site that has 40M requests in a day, there are tasks that no one wants to do. They are not fun. Our paid staff work on all of those.

Volunteers do the things that are fun and can transition into paid staff -- that is how all of our paid staff became staff.

Paul: This is really an incredible project.

Robert: Thanks! Dogged determination for 17 years. It’s worth something.

Paul: I look forward to your keynote. Thank you for your time.

Robert: No problem.

Paul: I'll let you get back to your packing.

Robert: See you in Almería.

Robert Kaye will deliver the opening keynote at Akademy 2017 on the 22nd of July. If you would like to see him and talk to him live, register here.

About Akademy

For most of the year, KDE—one of the largest free and open software communities in the world—works on-line by email, IRC, forums and mailing lists. Akademy provides all KDE contributors the opportunity to meet in person to foster social bonds, work on concrete technology issues, consider new ideas, and reinforce the innovative, dynamic culture of KDE. Akademy brings together artists, designers, developers, translators, users, writers, sponsors and many other types of KDE contributors to celebrate the achievements of the past year and help determine the direction for the next year. Hands-on sessions offer the opportunity for intense work bringing those plans to reality. The KDE Community welcomes companies building on KDE technology, and those that are looking for opportunities. For more information, please contact The Akademy Team.


I did a lot in the last two weeks, and since I did not update the blog last week, this post covers both weeks’ progress.

Before I begin with what I did, here’s a quick review of what I was working on and what had been done.

I started porting Cantor’s Qalculate backend to QProcess. During the first week I worked on establishing a connection with Qalculate, for which we use qalc, and some time was spent parsing the output returned by qalc.


The Qalculate backend as of now uses the libqalculate API for computing results. To successfully eliminate the direct use of the API, all commands should make use of qalc, but since qalc does not support all the functions of Qalculate, I had to segregate the parts depending on the API from those using qalc. For instance, qalc does not support plotting graphs.

The version of qalc that we are using supports almost all the major functionality of Qalculate, but there are a few things for which we still depend on the API directly.

I will quickly describe what depends on what.

Still depends on the libqalculate API directly:

* help command
* plotting
* syntax highlighter
* tab completion

Handled through qalc:

* basic calculations: addition, subtraction etc.
* all the math functions provided by Qalculate: sqrt(), binomial(), integrate() etc.
* saving variables

The segregating part was easy. The other important thing I did was to build a queue-based system for the commands that need to be processed by qalc.


Queue based system

The two important components of this system are:

1. Expression queue: contains the expressions to be processed.
2. Command queue: contains the commands of the expression currently being processed.

* The basic idea behind this system is that we compute only one expression at a time; meanwhile, if we get more expressions from the user, we store them in the queue and process them once the expression currently being processed is complete.

* Another important point: since an expression can contain multiple commands, we store all the commands of an expression in the command queue and, just as we process one expression at a time, we process one command at a time. That is, we give QProcess only one command at a time, which makes the output returned by QProcess less messy and hence easier to parse.

* Example: Expression1 = (10+12, sqrt(12)). This expression has multiple commands, so the command queue for it will have two entries.

expression queue          command queue

[ expression 1 ]  --->  [ 10+12 ], [ sqrt(12) ]

[ expression 2 ]  --->  [ help plot ]


We process all the commands of expression 1, parse the output, and then move on to expression 2; this goes on until the expression queue is empty.
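The two-queue scheme described above can be sketched in a few lines of C++. This is a simplified illustration using std::queue; the class and method names are hypothetical, not Cantor’s actual code:

```cpp
#include <cassert>
#include <queue>
#include <string>
#include <vector>

// An expression is split into single commands; only one command at a time
// is handed to the qalc process.
struct Expression {
    std::queue<std::string> commands; // command queue for this expression
};

class Scheduler {
public:
    // New user input: queue an expression together with its commands.
    void enqueue(const std::vector<std::string>& commands) {
        Expression e;
        for (const auto& c : commands)
            e.commands.push(c);
        expressions.push(std::move(e));
    }

    // Returns the next command to feed to qalc, or an empty string when
    // everything has been processed. Called again after each result is parsed.
    std::string nextCommand() {
        while (!expressions.empty()) {
            Expression& current = expressions.front();
            if (current.commands.empty()) {
                expressions.pop();        // expression finished, move on
                continue;
            }
            std::string cmd = current.commands.front();
            current.commands.pop();
            return cmd;
        }
        return {};
    }

private:
    std::queue<Expression> expressions;   // expression queue
};
```

With the example above, `enqueue({"10+12", "sqrt(12)"})` followed by `enqueue({"help plot"})` would yield the commands one by one, in order, across both expressions.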


Apart from this, I worked on the variable model of Qalculate. Qalc provides a lot of variations of the save command. The different commands available are:

Not every command mentioned below has been implemented, but the important ones have been.

1. save(value, variable, category, title): Implemented

This function is available through the qalc interface and allows the user to define new variables or override the existing variables with the given value.

2. save/store variable: Implemented

This command allows the user to save the current result in a variable with the specified name.

The current result is the last computed result. Using qalc we can access it through ‘ans’, ‘answer’ and a few other variables.

3. save definitions: Not implemented

Definitions include user-defined variables, functions and units.

4. save mode: Not implemented

Mode is the configuration of the user, which includes things like ‘angle unit’, ‘multiplication sign’ etc.
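For illustration, the two implemented variants map to plain qalc command strings. A hypothetical sketch follows; the helper names are made up, and the real backend of course does more than string concatenation:

```cpp
#include <cassert>
#include <string>

// save(value, variable) defines or overrides a variable with a value.
std::string saveVariableCommand(const std::string& value,
                                const std::string& variable) {
    return "save(" + value + ", " + variable + ")";
}

// "store name" saves the last result (also reachable as 'ans') under name.
std::string storeCurrentResultCommand(const std::string& variable) {
    return "store " + variable;
}
```

Each string built this way would then be queued like any other command and fed to the qalc process one at a time.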


With this, most of the important functionality has been ported to qalc, but there are still a few things for which we depend on the API directly. Hopefully, with a newer version of qalc, we will be able to remove the direct use of the API from Cantor.

Thanks, and happy hacking!

Hello! I am Mikhail Ivchenko, a 19-year-old student from Russia, and I am participating in GSoC this summer, working on improving Go language support in KDevelop.

I did not have much time recently due to exams. Fortunately, I have successfully passed them all and am ready to focus all my energy on the project. So far, I have already had some nice conversations with my mentors Sven Brauch and Aleix Pol and got some helpful advice from them.

During this week, I mostly worked on improving test coverage of the Go parser. I added tests for numbers, comments, loops, multi-line strings and short variable declarations. This allowed me to find issues in multi-line string parsing: 1) in the Go language, a string wrapped in " quotes cannot be multi-line, but our parser allowed it, and 2) a string wrapped in ` quotes can be multi-line, but our parser did not parse it correctly. Fixing that required me to work with kdevelop-pg-qt, the KDevelop parser generator. Its wiki page was useful because it contains a lot of information about how kdevelop-pg-qt works in general and how to write grammar files in particular. With the multi-line string behavior fixed, we are able to show a fuller description of multi-line string constants in the “Code Browser” tool view (see screenshots).


That is why test coverage is important: it lets you find bugs more easily. However, I think there is one more idea behind it. When I started to work with the Go parser, I was not able to judge its completeness at all. It looked good in manual testing, but I simply could not test all features by hand and check whether they are parsed correctly. Having tests for a language parser therefore provides some kind of "guarantee" that it covers most cases. This differs from regular applications, where you can test some scenarios by hand (hey! I am not saying you do not need to write tests; test coverage is good anyway), but while working on language parsing you often cannot come up with all syntax variations, and even if you can, the number of test cases can be too big for testing by hand. So that is why I want to spend some more time on this at some point.

Looking forward to next week!
Mikhail Ivchenko

I really wanted to say that Krita was the first KDE Frameworks 5-based application available in the official FreeBSD ports tree (so that pkg install krita just works), but it turns out that labplot has been KF5-based for a month or more and no-one noticed.

Nonetheless, three cheers for Krita and Calligra, which have been updated to the latest, modernest versions.

This is the start of a whole flood of updates, although Plasma is still going to take a while. Several things have come together to make this update possible:

  • FreeBSD KDE CI is running, so we can keep a close(r) eye on upstream. (This is what runs in KDE’s CI system, for FreeBSD)
  • KDE FreeBSD CI is running, so we can keep a close eye on ourselves. (This is what runs in FreeBSD’s CI systems, for KDE)
  • Both KDE and FreeBSD have Phabricator review systems, one for changes to downstream packaging, one for changes to upstream code.
  • KDE FreeBSD ports development is happening in a GitHub fork of the official FreeBSD ports tree.

With good CI and a good review process, we’ve been much happier getting packaging-fixes upstream than ever before. The CI catches unpleasant changes (hey, k3b has turned red, what’s up? Patches forthcoming ..) before they are released. The packaging CI is good for keeping track of where we are in packaging things ourselves. Since there’s a fair amount of package-shuffling going on, that’s important to have in hand. Finally the move to a git clone of the official ports tree makes it much easier to do small topic branches (e.g. updating Frameworks), test, and merge than we ever could with the SVN-based tree.

There’s a handful of smaller updates in-flight, alongside the Great Big Plasma5 branch, which is now shrinking as parts of it start to show up in the official ports already.

KIO Slaves are out-of-process worker applications that perform protocol-specific communication. To provide local file handling, file managers use the file ioslave (file:/). Until now, file management with root access was forbidden in the file ioslave. But with this week’s work, the situation might just change for good.

During the past week and a half, I worked on adding polkit support to the delete, copy, rename, symlink and mkdir functions. My attempts at getting these file operations working correctly inside a read-only folder were almost successful: I was able to add polkit support to all but the copy operation. To get copy working, I needed to open a file with elevated privileges in the KAuth helper and send the file descriptor back to the file ioslave for further processing. It was the latter part that proved to be the major obstacle. Since file descriptors are non-negative integers, it may seem natural to embed them in a QVariant and send them back to the ioslave over D-Bus. However, there is more to them than just being an integer; the integer value is pretty much meaningless outside its host process. Therefore, even though I was able to retrieve the file descriptor in the ioslave, any kind of file processing with it ended in failure. Although a Unix domain socket is apt for this task, due to my unfamiliarity with network programming and the unavailability of any equivalent in Qt, I had to postpone my work on the copy function for a while.
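For reference, this is roughly how a descriptor is handed over with SCM_RIGHTS ancillary data: the kernel duplicates the descriptor into the receiving process, so the received integer is valid there, unlike a raw int sent over D-Bus. A minimal sketch in plain POSIX (no Qt or KAuth involved), not the actual helper code:

```cpp
#include <cassert>
#include <cstring>
#include <string>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

// Send an open file descriptor over a connected Unix domain socket.
static bool sendFd(int sock, int fd) {
    struct msghdr msg {};
    char dummy = 'x';                       // must send at least one byte
    struct iovec io { &dummy, sizeof dummy };
    msg.msg_iov = &io;
    msg.msg_iovlen = 1;

    alignas(struct cmsghdr) char buf[CMSG_SPACE(sizeof fd)] {};
    msg.msg_control = buf;
    msg.msg_controllen = sizeof buf;

    struct cmsghdr* cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;           // ancillary data carries the fd
    cmsg->cmsg_len = CMSG_LEN(sizeof fd);
    std::memcpy(CMSG_DATA(cmsg), &fd, sizeof fd);

    return sendmsg(sock, &msg, 0) == 1;
}

// Receive a file descriptor; returns -1 on failure.
static int recvFd(int sock) {
    struct msghdr msg {};
    char dummy;
    struct iovec io { &dummy, sizeof dummy };
    msg.msg_iov = &io;
    msg.msg_iovlen = 1;

    alignas(struct cmsghdr) char buf[CMSG_SPACE(sizeof(int))] {};
    msg.msg_control = buf;
    msg.msg_controllen = sizeof buf;

    if (recvmsg(sock, &msg, 0) <= 0)
        return -1;
    struct cmsghdr* cmsg = CMSG_FIRSTHDR(&msg);
    if (!cmsg || cmsg->cmsg_type != SCM_RIGHTS)
        return -1;
    int fd = -1;
    std::memcpy(&fd, CMSG_DATA(cmsg), sizeof fd);
    return fd;                              // valid in the receiving process
}
```

With a socketpair() between the helper and the ioslave, the helper would open the file as root, call sendFd(), and the ioslave’s recvFd() would yield a descriptor it can actually read from and write to.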

Coming back to the changes. I had added polkit support to the delete function prior to sending my GSOC proposal, as a proof of concept. So during this time I mostly made it a bit more modular and separated out what seemed to be boilerplate code. Getting polkit support into the rest of the lot was simple, as the code is similar to that of delete. After these successive changes, a KIO client can now perform the following tasks as a privileged user without having to start itself as root.

  1. Delete files and folders.
  2. Rename files and folders.
  3. Create folders.
  4. Create symbolic links.

You can view all of my changes on cgit.kde.org. To avoid any convolution in the future, I created a separate branch for every file management function instead of a single, difficult-to-manage branch. Testing the upgraded ioslave needs some changes in Dolphin as well. Since the changes are trivial and cloning Dolphin seemed futile, I have hosted my patches for Dolphin on GitHub. Apart from the clone, I have some patches on Phabricator as well, precisely D6197, D6198 and D6199. Apply them in order to test polkit support in the delete file operation. So, if you are getting bored and looking forward to doing some unpaid labor, you can review my patches; alternatively, you can also fork my kio clone and see the changes yourself. (:

Here’s a demo for the curious minds.

That’s all for this week folks.

June 17, 2017

Kubuntu 17.04 – Zesty Zapus

The latest 5.10.2 bugfix update for the Plasma 5.10 desktop is now available in our backports PPA for Zesty Zapus 17.04.

Included with the update is KDE Frameworks 5.35.

KDevelop has also been updated to the latest version, 5.1.1.

Our backports for Xenial Xerus 16.04 also receive updated Plasma and Frameworks, plus some requested KDE applications.

Kubuntu 16.04 – Xenial Xerus

  • Plasma Desktop 5.8.7 LTS bugfix update
  • KDE Frameworks 5.35
  • Digikam 5.5.0
  • Kdevelop 5.1.1
  • Krita 3.1.4
  • Konversation 1.7.2
  • Krusader 2.6

To update, use the Software Repository Guide to add the following repository to your software sources list:
ppa:kubuntu-ppa/backports


or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt update
sudo apt full-upgrade


Upgrade notes:

~ The Kubuntu backports PPA already contains significant version upgrades of Plasma, applications, Frameworks (and Qt for 16.04), so please be aware that enabling the backports PPA for the first time and doing a full upgrade will result in a substantial number of upgraded packages in addition to the versions in this announcement. The PPA will also continue to receive bugfix and other stable updates when they become available.

~ While we believe that these packages represent a beneficial and stable update, please bear in mind that they have not been tested as comprehensively as those in the main Ubuntu archive, and are supported only on a limited and informal basis. Should any issues occur, please provide feedback on our mailing list [1], IRC [2], file a bug against our PPA packages [3], or optionally contact us via social media.

1. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net
3. Kubuntu ppa bugs: https://bugs.launchpad.net/kubuntu-ppa

Some time ago, we got in touch with a team from Microsoft that was reaching out to projects like Krita and Inkscape. They were offering to help our projects to publish in the Windows Store, doing the initial conversion and helping us get published.

We decided to take them up on their offer. We have had the intention to offer Krita on the Windows Store for quite some time already, only we never had the time to get it done.


Putting Krita in the Windows Store makes Krita visible to a whole new group of people. Plus…


And we wanted to do the same as on Steam, and put a price tag on Krita in the store. Publishing Krita in the Store takes time, and the Krita project really needs funding at the moment. (Note, though, that buying Krita in the Windows Store means part of your money goes to Microsoft: it’s still more effective to donate.)

In return, if you get Krita from the Windows Store, you get automatic updates, and it becomes really easy to install Krita on all your Windows systems. Krita will also run in a sandbox, like other Windows apps.

Basically, you’re paying for convenience, and to help the project continue.

And there’s another reason to put Krita in the Windows Store: to make sure we’re doing it, and not someone else, unconnected to the project.

For Free Software

Krita is free software under the GNU General Public License. Having Krita in the Windows Store doesn’t change that. The Store page has links to the source code (though they might be hardish to find; we don’t control the store layout), and that contains instructions on how to build Krita. If you want to turn your own build into an appx bundle, that’s easy enough.

You can use the Desktop App Converter directly on your build, or you can use it on the builds we make available.

There are no functional differences between Krita as downloaded from this website, and Krita as downloaded from the Windows store. It’s the same binaries, only differently packaged.


We currently still have Krita on Steam, too. We intend to keep it on Steam, and are working on adding the training videos to Steam as well. People who have purchased the lifetime package of Krita Gemini will get all the videos as they are uploaded.

We’re also working on getting Krita 3 into Steam, as a new product, at the same price as Krita in the Windows store — and the same story. Easy updates and installs on all your systems, plus, a purchase supports Krita development.

Additionally, it looks like we might find some funding for updating Krita Gemini to a new version. It’ll be different, because the Gemini approach turns out to be impossible with Qt 5 and Qt Quick 2: we have already spent several thousands of euros on trying to get that to work.

Still, we have to admit that Krita on Steam is slow going. It’s not the easiest app store to work with (that is Ubuntu’s Snap), and uploading all the videos takes a lot of time!

Hi folks, it's been a while since my last post here. I had a hard time starting on GSoC, but now that I have successfully graduated from university (Computer Science BSc) I have all my time for my project. While studying for the exams and preparing for my final exam, I often talked to my mentors to share our ideas about the project. Thanks to the fact that I have already worked on LabPlot previously (and continuously after GSoC), I can implement the needed features much more easily and quickly now than last year :) Now that I have my own branch and a few new classes added, I can continue the adventure with LabPlot! See you soon! :)

“Kube is a modern communication and collaboration client built with QtQuick on top of a high performance, low resource usage core. It provides online and offline access to all your mail, contacts, calendars, notes, todo’s and more. With a strong focus on usability, the team works with designers and UX experts from the ground up, to build a product that is not only visually appealing but also a joy to use.”

For more info, head over to: kube.kde.org

Ever since we started working on Kube we faced the conundrum of how to allow Kube to innovate and to provide actual additional value to whatever is already existing out there, while not ending up being a completely theoretical exercise of what could be done, but doesn’t really work in practice. In other words, we want to solve actual problems, but do so in “better” ways than what’s already out there, because otherwise, why bother?

I put “better” into quotes because this is of course subjective, but let me elaborate a bit on what I mean by that.

Traditionally, communication and organization have been dealt with using fairly disjoint tools that we as users then combine in whatever fashion is useful to us.

For instance:

  • EMail
  • Chat
  • Voice/Video Chat
  • Calendaring
  • Task management
  • Note-taking

However, these are just tools that may help us work towards a goal, but often don’t support us directly in what we’re actually trying to accomplish.

My go-to example: Jane wants to have a meeting with Jim and Bob:

  • Jane tries to find a timeslot that works for all of them (by mail, phone, personally…)
  • She then creates an event in her calendar and invites Jim and Bob, who can in turn accept or decline (scheduling)
  • The meeting will probably have an agenda, perhaps distributed over email or within the event description.
  • A meeting room might need to be booked, or an online service might need to be decided on.
  • Once the meeting takes place the agenda needs to be followed and notes need to be taken.
  • Meeting minutes and some actionable items come out of the meeting, which then, depending on the type of meeting, may need to be approved by all participants.
  • So finally the approved meeting minutes are distributed and the actionable items are assigned, and perhaps the whole thing is archived somewhere for posterity.

As you can see, a seemingly simple task can actually become a fairly complex workflow, and while we do have a toolbox that helps with some of those steps, nothing really ties the whole thing together.

And that’s precisely where I think we can improve.

Instead of trying to do yet another IMAP client or yet another calendaring application, a far more interesting question is how we can improve our workflows, whatever tools that might involve.
Will that involve some email, some calendaring and some note-taking? Probably, but it’s just a means to an end and not an end in itself.

So if we think about the meeting scheduling workflow there is a variety of ways how we can support Jane:

  • The scheduling can be supported by:
    • Traditional iCal based scheduling
    • An email message to invite someone by text.
    • Some external service like doodle.com
  • The agenda can be structured as a todo list that you can check off during the meeting (perhaps with time limits assigned to each agenda item)
  • An online meeting space can be integrated, directly offering the agenda and collaborative note-taking.
  • The distribution and approval of meeting minutes can be automated, resulting in a timeline of past meetings, including meeting minutes and actionable items (tasks) that fell out of it.

That means, however, that rather than building disjoint views for email, calendar and chat, perhaps we would help Jane more if we built an actual UI for that specific purpose, and other UIs for other purposes.


So in an ideal world we’d have an ideal tool for every task the user ever has to execute, which would mean we fully understand each individual user and all their responsibilities and favorite workflows…
Probably not going to happen anytime soon.

While there are absolutely reachable goals, like Jane’s meeting workflow above, they all come at significant implementation cost, and we can’t hope to implement enough of them right off the bat in adequate quality.
What we can do, however, is keep the mindset of building workflows rather than IMAP/iCal/… clients, set a target somewhere far off on the horizon, and try to build useful stuff along the way.

For us, that means we’ve now set up the basic structure of Kube as a set of “Views” that are used as containers for those workflows.


This is a purposefully loose concept that will allow us to transition gradually from fairly traditional and generic views to more purposeful and specific views because we can introduce new views and phase out old ones, once their purpose is helped better by a more specific view.

Some of our initial view ideas are drafted up here: https://phabricator.kde.org/T6029

What we’re starting out with is this:

  • A Conversations view:
    While this is initially a fairly standard email view, it will eventually become a conversation-centric view where you can follow and pick up on ongoing conversations no matter the medium (email, chat, …).


  • A People view:
    While also serving the purpose of an addressbook, it is first and foremost a people-centric way to interact with Kube.
    Perhaps you just want to start a conversation that way, or perhaps you want to look up past interactions you have had with a person.


  • A composer view:
    Initially a fairly standard email composer, but eventually this will be much more about content creation and less about email specifically.
    The idea is that what you actually want to do when you open the composer is to write some content. Perhaps this content will end up as an email,
    or perhaps it will just end up in a note that will eventually be turned into a blog post; chances are you don’t even know before you’re done writing.
    This is why the composer implements a workflow that starts at the starting point (your drafts), then goes to the actual composer to create the content, and finally allows you to do something with the content, i.e. publish or store it somewhere (for now that only supports sending it by email or saving it as a draft, but a note/blog/… could be equally viable targets).

The idea behind all this is that we can initially build fairly standard and battle-tested layouts and, over time, work our way towards more specialized, better solutions. It also allows us to offload some perhaps necessary but not central features to a secondary view, keeping us from having to stuff all available features into a single “email” view, and allowing us to specialize the views for the use cases they’re built for.

KTechLab, the IDE for microcontrollers and electronics has joined KDE. Below I’m summarizing its current status and plans.

June 16, 2017

New FAQ added to KDE neon FAQ.

KDE neon does continuous deployment of the latest KDE software, which means there are nearly always new versions of our software to update to. We recommend using Plasma Discover’s updater, which appears in your panel:

If you prefer to use the command line you can use the pkcon command:

  • pkcon refresh
  • pkcon update

This will install all new packages and uses the same PackageKit code as Plasma Discover. Some uses of apt do not install new packages, which makes it less suitable for KDE neon.


Back in the day I started making a Plasma Wayland ISO to help people try out Plasma on Wayland. Then we started Neon, and the obvious way to create the ISOs became through the Neon infrastructure. With Wayland getting closer to ready every day, I’ve decided it’s time to scrap the dedicated Wayland ISOs and just install the Wayland session by default on the Dev Unstable ISOs. It’s not yet the default, so to give it a try you need to log out, select the Wayland session and log in again. Or install the ISO and select it at login (you’ll need to switch back to X to install; Calamares doesn’t run on Wayland because it wants to run as root, which is verboten).

Wayland is pretty much ready to use, but the reason we can’t switch to it by default is mostly that some obscure graphics cards may not work with it, and it’s hard to implement detection and fallback for this. The fonts may be a different size due to differences in screen dots-per-inch detection, and middle-mouse-button selection paste doesn’t yet work.

Grab the KDE neon Dev Unstable ISO now to try it out.


Older blog entries

Planet KDE is made from the blogs of KDE's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet KDE with RSS, FOAF or OPML.