June 26, 2017

Heading towards the first evaluation:

Even with exams and graduation project work this month, I am heading towards the first evaluation on schedule: the planned UI is ready for further development, I have started my research by examining a similar tool in GIMP, and I have gained deeper insight into how the “Image Editor” works internally and what I need to do to reach my next milestone, static-parts cloning with variable radius support, within the next 11 days.


June 25, 2017

During this week, I decided to spend more time on language support: code completion, highlighting and so on. This part is provided by DU-Chain. DU-Chain stands for Definition-Use chain, which consists of various contexts, declarations in these contexts, and usages of these declarations.

The first change improved the declaration of variables in parameters of anonymous functions. In Go, it is possible to define an anonymous function and assign it to a variable, pass it as a parameter (commonly used for callbacks), or simply call it. Before my change, parameters of anonymous functions were treated as declarations only when the function was assigned to a variable. Thus, if, for example, you typed this example of Gin web framework usage:
package main

import "gopkg.in/gin-gonic/gin.v1"

func main() {
    r := gin.Default()
    r.GET("/ping", func(c *gin.Context) {
        c.JSON(200, gin.H{
            "message": "pong",
        })
    })
    r.Run() // listen and serve on
}

you would end up with “c” not being highlighted or treated as a variable. After my change, parameters of anonymous functions are treated as variable declarations in all three cases: assigning, passing, and calling (see screenshots).
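The three cases can be illustrated with a small self-contained sketch (hypothetical example code, not from the KDevelop test suite); in each case the parameter `n` of the anonymous function should now be recognized as a variable declaration:

```go
package main

import "fmt"

// apply receives an anonymous function as a parameter (case 2).
func apply(f func(n int) int, v int) int {
	return f(v)
}

func main() {
	// Case 1: the anonymous function is assigned to a variable.
	double := func(n int) int { return n * 2 }

	// Case 2: the anonymous function is passed as a parameter.
	tripled := apply(func(n int) int { return n * 3 }, 5)

	// Case 3: the anonymous function is called immediately.
	squared := func(n int) int { return n * n }(4)

	fmt.Println(double(5), tripled, squared) // 10 15 16
}
```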


The second change is under review and is aimed at adding code completion from embedded structs. In Go there is no such thing as inheritance; composition is preferred over it. Composition often has a drawback: we need to forward all calls to “base” methods, so there would be a lot of boilerplate code. In Go this problem is solved by “embedding” structs, so that their fields and methods are promoted to the top-level struct. For example, if struct A has a method Work and struct B embeds struct A, then both B.Work() and B.A.Work() are correct. Because of that, we need to traverse the whole embedding tree to retrieve all possible completions; this is what my second change is aimed at.
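The struct A / struct B example from this paragraph can be written out as a runnable sketch (the method body and printed string are made up for illustration):

```go
package main

import "fmt"

// A is the "base" struct providing a method.
type A struct{}

func (A) Work() string { return "working" }

// B embeds A, so A's fields and methods are promoted to B.
type B struct {
	A
}

func main() {
	b := B{}
	// Both forms are valid and call the same method,
	// which is why completion has to walk the embedding tree.
	fmt.Println(b.Work())   // promoted via embedding
	fmt.Println(b.A.Work()) // explicit path through the embedded struct
}
```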

The third change added parsing errors as problems in the “Problems” view.

The fourth change fixed usage information being placed in the wrong context. Before that change, variable usage information was placed in the top context, which broke semantic highlighting: variable declarations had different colors, but usages all looked the same. So, while improving the overall correctness of the generated DU-Chain, I also fixed that issue, and now variable usages are colored too (see screenshots)!

Apart from the DU-Chain improvements, I got a basic project manager plugin merged, which offers a template for a simple console Go application and makes building Go projects easier.

Looking forward to next week!

For the last month most of my time went into exams, so I did not get much done on my project. Nevertheless, I implemented the basic primitives and tested them.

Let me tell you about them.

Wet map.

Water is the main ingredient of watercolors. That’s why I started with it.

The wet map contains two types of information: a water value and a speed vector. The first parameter is self-explanatory, but the second one needs explanation: the speed vector is needed for rewetting our splats (keep it in mind, I will explain what this means later).

The wet map stores all these values in a KisPaintDevice:

KisPaintDeviceSP m_wetMap;

RGB16 was chosen as the color space:

m_wetMap = new KisPaintDevice(KoColorSpaceRegistry::instance()->rgb16());

The water value and the speed vector are stored in the pixel data.
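Krita’s wet map itself lives in a C++ KisPaintDevice, but the channel-layout idea can be sketched in a few lines of Go (a hypothetical packing, not Krita’s actual one): the water value goes in one 16-bit channel, and the two signed speed components, offset so they fit an unsigned channel, go in two others.

```go
package main

import "fmt"

// WetPixel packs wet-map data into three 16-bit channels,
// mirroring the idea of storing it in an RGB16 paint device.
// This layout is illustrative only, not Krita's actual one.
type WetPixel struct {
	Water  uint16 // R channel: how wet this pixel is
	SpeedX uint16 // G channel: x speed, offset by 32768 so it can be negative
	SpeedY uint16 // B channel: y speed, offset likewise
}

const speedOffset = 32768

// Pack converts signed speed components into unsigned channel values.
func Pack(water uint16, sx, sy int16) WetPixel {
	return WetPixel{
		Water:  water,
		SpeedX: uint16(int32(sx) + speedOffset),
		SpeedY: uint16(int32(sy) + speedOffset),
	}
}

// Speed recovers the signed speed vector from the channels.
func (p WetPixel) Speed() (int16, int16) {
	return int16(int32(p.SpeedX) - speedOffset), int16(int32(p.SpeedY) - speedOffset)
}

func main() {
	p := Pack(60000, -12, 7)
	sx, sy := p.Speed()
	fmt.Println(p.Water, sx, sy) // 60000 -12 7
}
```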

But in this form the paint device can’t visualize the wet map correctly:

So I transform the paint device to visualize the wet map correctly (because it will be important for artists, I think). Now it looks like this:


My implementation is based on a procedural brush. Every brush stamp is a union of dynamic splats. Here you can see the behavior of splats:

I also tested rewetting (when a splat goes from the fixed state back to the flowing state):


And as a final test, I made a splat generator for simulating strokes:


What next?

It’s high time to get splats working in Krita. So I’m going to finish my plugin and test the splats’ behavior. For now it will be primitive:

  1. Clear canvas for updating splats
  2. No undo/redo
  3. Stupid singleton for splat storage

I was able to make major improvements to the build system of KStars. I think more and more open-source projects should pick up these low-hanging fruits with CMake and Clang:

- CCache: Speeds up development and working with git branches by caching the compiled object files.
- Unity Build: This simple "hack" can reduce the build time dramatically by compiling temporary C++ meta files into which the normal C++ files are included. The build time can be sped up 2x-3x for bigger projects.
- Clang Sanitizers: Use the undefined behavior and address sanitizers to hunt down memory handling errors. Although the executable must be recompiled with Clang and special compiler flags, the resulting binary will run with minimal slowdown. They are not a complete replacement, but these sanitizers can catch most of the problems found by Valgrind during normal runtime.
- Clang Format: Format the source code with a real compiler engine.

More details are on our wiki page:

Sometimes topics that would fill a whole article are scarce. It’s not that they disappear; rather, they seem to diversify without ever gaining enough substance to deserve an article of their own. So, in order not to miss the daily appointment, it occurred to me to put together a cocktail of free software news for June 2017, offering a brief look at what has happened over the last few days in the Free Software world.

Free software news cocktail, June 2017

Welcome to a kind of article I hadn’t written in a long time: a roundup of news related to Free Software. Obviously, not every story made it in, but every story here deserves its place.

  • We start with the announcement of ISOImageWriter, a new application from the KDE Community that aims to make it easy to create bootable USB sticks from .iso images. Something basic for installing operating systems. Via: Jonathan Riddell’s Diary


  • Latte Dock, the dock in fashion, has started a donation campaign. So now you know: if you want new features for this magnificent taskbar, don’t hesitate to take part. Via: Psifidotos

  • A new hacklab in Madrid, and its name is IngoberLab 301. Via: El binario
  • LibreOffice 6 will update itself on GNU/Linux only if you install it from the package on the official website. Via: Linux Adictos
  • And we continue with LibreOffice, which is asking for help in deciding which features to keep. Via: Muy Linux

  • I’ll take advantage of this news roundup to invite you to another one, the monthly compilation Victorhck puts together on his magnificent blog: “Free Software Foundation @fsf – Recopilación de noticias de junio de 2017”
  • Steam is also moving to Flatpak. Good news for this project, which is closely tied to the KDE Community and will surely get a big boost from it. Via: Linux Adictos
  • This one is about the gradual spread of common sense through the world: another large organization (by size) joins Free Software, as the British Army migrates its cloud infrastructure to Red Hat. Via: La mirada del replicante
  • And we finish with a new product made with Krita: Bird Brains. Via: Krita

June 24, 2017

Before writing about my actual Summer of Code experiences, I wanted to briefly share what I worked on before the official coding start.

Plasma Toolicons

Plasma toolicons

Can you see these ridiculously small toolicons next to the Media frame plasmoid? I’m talking about the icons which allow you to Resize, Rotate and Remove the Plasmoid. Compared with other icons, these are clearly too small.

So, as preparation for GSoC, I wanted to know why this happens, and what would be required to make them bigger. Thus my journey into the (Plasma) rabbithole began…

Tracking down where the containment is defined was relatively easy, and shortly after I found the code responsible for the ActionButton:

PlasmaCore.ToolTipArea {
    id: button

    location: PlasmaCore.Types.LeftEdge
    mainText: action !== undefined ? action.text : ""
    mainItem: toolTipDelegate

    property QtObject svg
    property alias elementId: icon.elementId
    property QtObject action
    property bool backgroundVisible: false
    property int iconSize: 32

    // ...
}
Huh, the iconSize is 32px? Well, that was easy to fix, surely this should be set to units.iconSizes.small and this problem is gone…

… or so I thought. No, this didn’t improve the situation, back to square one.

Is it overwritten by the look and feel theme? plasma-workspace/lookandfeel/contents/components/ActionButton.qml at least doesn’t - and it also happens with the style set to Oxygen.

While looking at this, I also noticed that units.iconSizes.small returned 16px on my system. This seemed odd, because the scale factor was set to 1.8x, so I would have expected bigger icons.

Where is this icon size calculated? Ah yes, in the file units.cpp, method Units::devicePixelIconSize.

int Units::devicePixelIconSize(const int size) const
{
    /* in kiconloader.h
    enum StdSizes {
        ...
    };
    */
    // Scale the icon sizes up using the devicePixelRatio
    // This function returns the next stepping icon size
    // and multiplies the global settings with the dpi ratio.
    const qreal ratio = devicePixelRatio();

    if (ratio < 1.5) {
        return size;
    } else if (ratio < 2.0) {
        return size * 1.5;
    } else if (ratio < 2.5) {
        // ...
    }
    // ...
}
Ok, my devicePixelRatio is 1.8, therefore the icon size gets multiplied by 1.5, and a request for a small (16px) pixmap should return a 24px pixmap.

But it doesn’t…

Debugging suggested that my devicePixelRatio is NOT 1.8, but rather around 1.4. How did that happen, isn’t the scale factor from the KDE settings used?

Oh, the comment in updateDevicePixelRatio() mentions that QGuiApplication::devicePixelRatio() is really not used:

void Units::updateDevicePixelRatio()
{
    // Using QGuiApplication::devicePixelRatio() gives too coarse values,
    // i.e. it directly jumps from 1.0 to 2.0. We want tighter control on
    // sizing, so we compute the exact ratio and use that.
    // TODO: make it possible to adapt to the dpi for the current screen dpi
    //  instead of assuming that all of them use the same dpi which applies for
    //  X11 but not for other systems.
    QScreen *primary = QGuiApplication::primaryScreen();
    if (!primary) {
        return;
    }
    const qreal dpi = primary->logicalDotsPerInchX();
    // Usual "default" is 96 dpi
    // that magic ratio follows the definition of "device independent pixel" by Microsoft
    m_devicePixelRatio = (qreal)dpi / (qreal)96;
}

Hmm, yes, that was the case in earlier Qt versions when devicePixelRatio still returned an integer - but nowadays the value is a real number.

So, instead of calculating dpi / 96 I just changed it to return primary->devicePixelRatio().

Which now, finally, should return a devicePixelRatio of 1.8 and therefore result in bigger pixmaps.

Compiled it, and, confident of victory, restarted plasmashell… only to notice that it still didn’t work.

What could there still go wrong?

So I went back to debugging… only to notice that primary->devicePixelRatio() returns a scale factor of 1.0. Huh? Isn’t this supposed to just use the QT_SCREEN_SCALE_FACTORS environment variable, which gets set to the value of the “Scale Display” dialog in the Systemsettings? If you want to know, the code for setting the environment variable is located in plasma-workspace/startkde/startplasmacompositor.cmake.

But why isn’t the problem gone, is there something in Plasma that overwrites this value?

Yes, of course there is!

The only way this value can be overwritten is due to the Qt attribute Qt::AA_DisableHighDpiScaling.

Grep’ing for that one pointed me to plasma-workspace/shell/main.cpp - the base file for plasmashell:

int main(int argc, char *argv[])
{
    //  Device pixel ratio has some problems in plasmashell currently.
    //   - dialog continually expands (347951)
    //   - Text element text is screwed (QTBUG-42606)
    //   - Panel struts (350614)
    //  This variable should possibly be removed when all are fixed
    // ...
}

I looked into the mentioned bugs. What should I do now? Re-enable the HighDpiScaling flag so that Qt returns the real devicePixelRatio, letting me use that value to calculate the sizes icons should have and finally get bigger Plasma toolicons? At least QTBUG-42606 seems to be fixed…

Oh boy, what have I gotten into now…

It was time to talk to my mentor!

David Edmundson quickly noticed that there should be no mismatch with the dpi / 96 calculation. Something fishy seems to be going on here…

What is this dpi value anyway? This is the one reported by xrdb -query |grep Xft.dpi and managed in the file $HOME/.config/kcmfonts.

And that value - as you probably can guess by now - did not make any sense. It didn’t match the expectation of being scaleFactor * 96, the value it should have been set to.

On we go to the location where the scaling configuration is set - scalingconfig.cpp in the KScreen KConfig Module.

void ScalingConfig::accept()
{
    // ...
    fontConfigGroup.writeEntry("forceFontDPI", scaleDPI());
    // ...
}

qreal ScalingConfig::scaleDPI() const
{
    return scaleFactor() * 96.0;
}

This scaleDPI is then applied with xrdb -quiet -merge -nocpp on startup.

So Xft.dpi is set to 1.8 * 96.0, or 172.8.

Have you spotted what is going wrong?

I did not, but David noticed…

The X server can only handle integer values, and therefore 172.8 is simply discarded!
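The arithmetic of the failure can be shown in a couple of lines (illustrative only, not the actual X server or KScreen code): a fractional scale factor times 96 gives a non-integer DPI, and forcing it through an integer type silently drops the fraction.

```go
package main

import "fmt"

func main() {
	scaleFactor := 1.8
	dpi := scaleFactor * 96.0 // 172.8, not a whole number
	// An integer-only consumer truncates the value, losing the .8:
	fmt.Println(dpi, int(dpi))
}
```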

A few moments later this patch was ready…

Plasma Toolicons

… and I was finally able to enjoy my icons in all their scaled glory! And you can too, because this patch is already in Plasma 5.10.

Hello again...

Because some users requested a way to support Latte even further and donate to the project, we created a donation page at pledgie: https://pledgie.com/campaigns/34116

and we also added a donate button on our main page at: https://github.com/psifidotos/Latte-Dock/

To cheer you up a bit for the upcoming 0.7 version, which is scheduled for the end of August or maybe earlier ;) depending on the effort...

Some of the features already implemented, and some that are going to land:

  • support layouts and switch between them with one click (we enhanced our export/import configuration mechanism into layouts; we already provide 5 predefined layouts, and the user can add as many as they want)
  • provide two layouts for the configuration window (Basic/Advanced)
  • set the dock transparency per dock and enable/disable its panel shadows
  • global shortcuts: Super+number (the Unity way), Super+Ctrl+number (new instance), Super+` (raise the hidden dock)
  • support the libunity interface through libtaskmanager in order to show progress indicators and item counters such as unread e-mails etc... this needs libunity9 installed. We also independently expose our own simpler DBus interface to show counters for tasks, for programmers out there that don’t want to use the libunity way
  • tasks audio streams indicator (increase/decrease the volume with the scroll wheel and mute by clicking it)
  • the new Places exposer for your tasks/launchers that Plasma 5.10 offers
  • dynamic dock background (show the background only for maximized windows, and also choose whether you want the dock shadow to be shown in that case)
  • copy your docks easily with one click (extremely useful in multi-screen environments where you want the same dock on different screens)
  • sync your launchers between all your docks (global launchers; they do not erase your per-dock launchers if you disable them)
  • wayland tech preview (there is already code implemented for this, landed in our master)
  • support fillWidth/Height applets like the plasma panel
  • substitute the Latte plasmoid with your favourite plasma taskmanager (having implemented proper fillWidth/Height for applets, we can now provide this)
  • support separators everywhere and make them work correctly with the parabolic effect
  • various improvements for launcher start-up, animations etc....

Latte 0.7 will be compatible with plasma>=5.9 and qt>=5.7

thanks everyone once more for your support...

Almost two months have passed since León’s best-known Free Software event was held on May 6. It’s time to look back on it by watching the videos of the Linux & Tapas 2017 talks, which were promoted on this humble blog and were an excellent excuse for supporters of open software projects to meet in that magnificent city.

Videos of the Linux & Tapas 2017 talks

As at every free software event, the talks are really just an excuse to get together, give the tongue a workout, and keep our digestive systems busy with rich drinks and tasty delicacies.

These experiences are unique and are lost “like tears in rain”, leaving only a trace in the memories of the attendees. However, something has remained as a collective memento, both for the attendees and for those of us interested in the event: the videos of the Linux & Tapas 2017 talks, which have been posted on the web.

So I am pleased to present the talks, held from 5 pm to 8 pm (after sampling the delicacies of the area) at the Fundación Sierra Pambley, which have been uploaded for the first time to the Linux & Tapas YouTube channel, but which I include below for your convenience.

  • “Radio Universitaria, ¡en el aire Libre!” Eloy

  • “Is it a game or is it real?” Fran

  • “Autodefensa Digital”. Jorge SoydelBierzo

By the way, in the description of each video you can find both the talk’s slides for download and extra information about the speaker. Also, if you are interested in this event and want up-to-the-minute news or to meet the organizers, I recommend visiting both their website and their Telegram group: https://t.me/linuxytapas


I got an opportunity to represent KDE in FOSSASIA 2017 held in mid-March at Science Center, Singapore. There were many communities showcasing their hardware, designs, graphics, and software.

 Science Center, Singapore
I talked about KDE, what it aims at, the various programs that KDE organizes to help budding developers, and how these are mentored. I walked through everything needed to start contributing to KDE by introducing the audience to KDE Bugzilla, the IRC channels, various application domains, and the SoK (Season of KDE) proposal format.


Then I shared my journey in KDE and briefly described my projects under Season of KDE and Google Summer of Code. The audience was really enthusiastic and curious to start contributing to KDE. I sincerely thank FOSSASIA for giving me this wonderful opportunity.

Overall, working in KDE has been a very enriching experience. I wish to continue contributing to KDE and also to share my experiences to help budding developers get started.

June 23, 2017

Week 2 of GSoC’s coding period was pretty dope :D. After all the hard work of the last week, I got my downloader to pull data from the website share.krita.org. In Week #1’s work status update, we discussed which classes and functions were required to get this running. I was able to get it done, and the downloader started downloading the data from the website.

PS: To get my project up and running, we need KNewStuff framework version 5.29+. The KNS team has done a lot of work in this area to make things move along nicely. (They have separated KNSCore and KNS3 since then.)

Before I proceed, I would love to mention the immense support and help given to me by Leinir in understanding how KNS and KNSCore work. Had he not noticed the blog post I published at the start of my project and of the official coding period, I would be lost at every single point of my project :P. The same goes for my Krita community people.

Besides the different classes I created for the project, we used certain core KDE frameworks/APIs in order to complete the GUI and get things working as planned.

Some of them, I have listed below.

  • KConfig

The KConfig framework offers functionality around reading and writing configuration. KConfig consists of two parts, KConfigCore and KConfigGui. KConfigCore offers an abstract API to configuration files. It allows grouping and transparent cascading and (de-)serialization. KConfigGui offers, on top of KConfigCore, shortcuts, and configuration for some common cases, such as session and window restoration.

  • KWidgetsAddons

KWidgetsAddons contains higher-level user interface elements for common tasks, with widgets for the following areas:

  • Keyboard accelerators
  • Action menus and selections
  • Capacity indicator
  • Character selection
  • Color selection
  • Drag decorators
  • Fonts
  • Message boxes
  • Passwords
  • Paging of e.g. wizards
  • Popups and other dialogs
  • Rating
  • Ruler
  • Separators
  • Squeezed labels
  • Titles
  • URL line edits with drag and drop
  • View state serialization
  • KRatingWidget

This class is part of KWidgetsAddons. It displays a rating value as a row of pixmaps. The KRatingWidget displays a range of stars or other arbitrary pixmaps and allows the user to select a certain number of them with the mouse.

So far I have implemented order-by functionality to sort the data items, including sorting by category, with the categories populated from the knsrc file. Each item can be rated with stars according to the user’s liking, and the expanded details of each item can be viewed. Items can be shown in different modes, such as icon mode and list mode, and searching between the items also works just fine.

To see all the changes and test them visually, I created a test UI that shows how things work and pulls in data from the site. I will attach it here:

Content downloader with the basic test UI, which looks like the existing KNewStuff. The next step is to change it into our own customizable UI.

Plans for Week #3

Start creating the UI for the resource downloader, which will be customizable from here on. We just need to tweak the existing UI to our needs.

Here is what we actually need.

This week is followed by the first evaluation of our work; I have mostly done my part well and completed the required tasks in time, as has my Krita community. So, after the evaluation for the first phase is over, I will be doing the following:

  1. Give the work done so far a test run with the new and revised GUI for the content downloader.
  2. Fix any bugs that exist or are noticed during the testing phase in the content downloader, and fix some of the bugs that might exist in the Resource Manager after discussing them with the Krita community.
  3. Meanwhile, I will be documenting the functions and classes created and changed.

Here is my branch where all the work I have done is going.


Will be back with more updates later next week.

Cheers.

In this week’s article for my ongoing Google Summer of Code (GSoC) project I planned on writing about the basic idea behind the project, but I reconsidered and decided to first give an overview of how Xwayland functions on a high level, and next week take a look at its inner workings in detail. The reason is that there is not much Xwayland documentation available right now, so these two articles are meant to fill this void and give interested beginners a helping hand. In two weeks I’ll catch up on explaining the project’s idea.

As we go high level this week, the first question is: what is Xwayland supposed to achieve at all? You may know this already. It’s something in a Wayland session that ensures that applications which don’t support Wayland, but only the old Xserver, still function normally; i.e. it ensures backwards compatibility. But how does it do this? Before we go into that, there is one more thing to talk about, since I called Xwayland only “something” before. What is Xwayland exactly? How does it look on your Linux system? We’ll see next week that the answer is not as easy as the following simple explanation makes it appear, but for now this is enough: it’s a single binary containing an Xserver with a special backend written to communicate with the Wayland compositor active on your system, for example with KWin in a Plasma Wayland session.

To make it more tangible let’s take a look at Debian: There is a package called Xwayland and it consists of basically only the aforementioned binary file. This binary gets copied to /usr/bin/Xwayland. Compare this to the normal Xserver provided by X.org, which in Debian you can find in the package xserver-xorg-core. The respective binary gets put into /usr/bin/Xorg together with a symlink /usr/bin/X pointing to it.

While the latter is the central building block in an X session and therefore gets launched before anything else with graphical output, the Xserver in the Xwayland binary works differently: it is embedded in a Wayland session, and in a Wayland session the Wayland compositor is the central building block. This means in particular that the Wayland compositor also takes up the role of the server, which talks to Wayland-native applications with graphical output as its clients. They send requests to it in order to present their painted content on the screen. The Xserver in the Xwayland binary is only a necessary link between applications that can only speak to an Xserver and the Wayland compositor/server. Therefore the Xwayland binary gets launched later on by the compositor or some other process in the workspace. In Plasma it’s launched by KWin after the compositor has initialized the rendering pipeline. You can find the relevant code here.

Although in this case KWin also establishes some communication channels with the newly created Xwayland process, in general the communication between Xwayland and a Wayland server is done through the normal Wayland protocol, in the same way other native Wayland applications talk to the compositor/server. This means that the windows requested by possibly several X-based applications and provided by Xwayland acting as an Xserver are translated by Xwayland into Wayland-compatible objects and, with Xwayland acting as a native Wayland client, sent to the Wayland compositor via the Wayland protocol. These windows look to the Wayland compositor just like the windows (in Wayland terminology, surfaces) of every other Wayland-native application. Keep in mind that an application in Wayland is not limited to using only one window/surface but can create multiple at the same time, so Xwayland as a native Wayland client can do the same for all the windows created for all of its X clients.

In the second part next week we’ll have a close look at the Xwayland code to see how Xwayland fills its role as an Xserver in regards to its X based clients and at the same time acts as a Wayland client when facing the Wayland compositor.

Look what we got today by snail mail:

It’s a children’s nonfiction book, nice for adults too, by Jeremy Hyman (text) and Haude Levesque (art). All the art was made with Krita!


One of my favorite illustrations is the singing White-throated sparrow (page 24-25). The details of the wing feathers, the boldness of the black and white stripes, and the shine in the eye all make the bird leap off the page.

I love the picture of the long tailed manakins (page 32-33). I think this illustration captures the velvety black of the body plumage, and the soft texture of the blue cape, and the shining red of the cap. I also like the way the unfocused background makes the birds in the foreground seem so crisp. It reminds me of seeing these birds in Costa Rica – in dark and misty tropical forests, the world often seems a bit out of focus until a bright bird, flower, or butterfly focuses your attention.

I also love the picture of the red-knobbed hornbill (page 68-69). You can see the texture and detail of the feathers, even in the dark black feathers of the wings and back. The illustration combines the crispness and texture of the branches, leaves and fruits in the foreground, with the softer focus on leaves in the background and a clear blue sky. Something about this illustration reminds me of the bird dioramas at the American Museum of Natural History – a place I visited many times with my grandfather (to whom the book is dedicated). The realism of those dioramas made me fantasize about seeing those birds and those landscapes someday. Hopefully, good illustrations will similarly inspire some children to see the birds of the world.


My name is Haude Levesque and I am a scientific illustrator, writer and fish biologist. I have always been interested in both animal sciences and art, and it was hard to choose between the two careers until I started illustrating books as a side job, about ten years ago, while doing my postdoc. My first illustration job was a book about insect behavior (Bug Butts), which I did digitally after taking an illustration class at the University of Minnesota. Since then, I have been teaching biology, illustrating and writing books, while raising my two kids. The book “Bird Brains” belongs to a series with two other books that I illustrated, and I wanted the illustrations to look similar, which means full double-page illustrations of a main animal in its natural habitat. I started using Krita only a year ago, when illustrating “Bird Brains”, upon a suggestion from my husband, who is a software engineer and into open source software. I was getting frustrated with the software I had used previously, because it did not allow me to render life-like drawings and required too many steps and too much time to do what I wanted. I also wanted my drawings to look like real paintings and to get the feeling that I am painting, and Krita’s brushes do just that. It is hard for me to choose a favorite illustration in “Bird Brains”; I like them all and I know how many hours I spent on each. But if I had to, I would say the superb lyrebird, pages 28 and 29. I like how this bird is walking and singing at the same time, and how I could render its plumage while giving it a real-life posture.

I also like the striated heron, pages 60 and 61. Herons are my favorite birds, and I like the contrast between the pink and the green of the lily pads. Overall I am very happy with the illustrations in this book, and I am planning to do more scientific books for kids and possibly try fiction as well.

You can get it here from Amazon or here from Book Depository.

One of the improvements Plasma 5.8 brought us was Look-and-Feel Packs, that is, the ability to change the entire appearance of Plasma with a single click: colors, window decorations, icons, taskbars, wallpapers, etc. Some time has passed, and I am pleased to share with you the first 10 Look-and-Feel Packs available in the KDE Store.

The first 10 Look-and-Feel Packs

Six months have passed since I wrote on the blog about Look-and-Feel Packs, and only now have I decided to tell you about a collection of complete themes available in the store. The reason: we have reached the wonderful number of 10 available packs.

1- Oxygen Crystal Diamond

We start with the first of them all, Oxygen Crystal Diamond by gericom, a theme that gives our desktop a KDE 4 look.

More information: Oxygen Crystal Diamond


2- Tweaked Look-and-feel

El segundo por antigüedad es obra de obnosim y se llama Tweaked Look-and-fell no es más que una modificación del tema por defecto de Plasma 5: Breeze.

Más infomación: Tweaked Look-and-feel

3- Arc KDE

El siguiente de la lista es Arc KDE, un port para Plasma del popular tema GTK, con algunos añadidos extras. Es una creación de x-varlesh-x

Más infomación: Arc KDE

4- DBreeze

De la mano de llucas nos llegó DBreeze, el primer tema que se atrevió a colocar la barra de tareas vertical. Una disposición habitual en el escritorio Unity y que poco a poco se va imponiendo en las pantallas de nuestros dispositivos por la geometría de los mismos.

Más infomación: DBreeze

5- KShell

De nuevo tenemos una obra de llucas que en esta ocasión nos ofrece KShell, un tema inspirado en Gnome Shell con tres paneles: superior, derecha e izquierda. Requiere Plasma 5.9.


Más infomación: KShell

6- ELplas

Y seguimos con los temas de llucasAhora nos presenta ELPlas, un tema oscuro con paneles superior e inferior. Requiere Plasma 5.9 y está inspirado en Elementary.

Más infomación: ELplas

7- K10

Cambiamos de creador. Ahora es fapasv quien nos presenta un tema que nos recuerda a Windows 10 tanto en sus colores como en sus iconos. También requiere Plasma 5.9.

Más infomación: K10

8- United

Volvemos con llucas y su pretensión de emular cualquier escritorio. En esta ocasión nos presenta United, la versión Plasma de Unity. Barras superior e izquierda, colores anaranjados y paneles oscuros. También requiere Plasma 5.9.

Los 10 primeros Look-and-Feel Packs

Más infomación: United

9- Modern

Llegamos casi al final de la lista con un tema de tenten8401 que se inspira en Gnome y que utiliza el cada vez más famoso Latte Dock.

Más infomación: Modern

10- Numix

El último de la lista es Numix, que fue añadido ayer, y que presenta una interfaz con un nivel de contraste elevado gracias a la combinación de rojo y tonos oscuros. Es una nueva creación de  x-varlesh-x.

Más infomación: Numix

June 22, 2017

ISO Image Writer is a tool I’m working on which writes .iso files onto a USB disk, ready for installing your lovely new operating system. Surprisingly, many distros don’t have very slick recommendations for how to do this, but they’re all welcome to try it.

It’s based on ROSA Image Writer, which has served KDE neon and other projects well for some time. This adds ISO verification to automatically check digital signatures or checksums; currently supported are KDE neon, Kubuntu and Netrunner. It also uses KAuth so it doesn’t run the UI as root, only a simple helper binary to do the writing. And it uses KDE Frameworks goodness so the UI feels nice.

First alpha 0.1 is out now.

Download from https://download.kde.org/unstable/isoimagewriter/

Signed by release manager Jonathan Riddell with 0xEC94D18F7F05997E. Git tags are also signed by the same key.

It’s in KDE Git at kde:isoimagewriter and in bugs.kde.org, so please do try it out and report any issues. If you’d like a distro added to the verification, please let me know and/or submit a patch. (The code for this is a bit verbose currently; it needs tidying up.)

I’d like to work out how to make AppImages, Windows and Mac installs for this but for now it’s in KDE neon developer editions and available as source.



One of my preferred developer tools is a web application called Compiler Explorer. The tool itself is excellent and useful when trying to optimize your code.
The author of the tool describes it in the Github repository as:

Compiler Explorer is an interactive compiler. The left-hand pane shows editable C/C++/Rust/Go/D/Haskell code. The right, the assembly output of having compiled the code with a given compiler and settings. Multiple compilers are supported, and the UI layout is configurable (the Golden Layout library is used for this). There is also an ispc compiler for a C variant with extensions for SPMD.

The main problem I found with the tool is that it does not let you write Qt code: you need to remove all the Qt includes and modify or delete a lot of code…

So I decided to modify the tool to be able to find the Qt headers. To do that first of all, we need to clone the source code:

git clone git@github.com:mattgodbolt/compiler-explorer.git

The application is written using node.js, so make sure you have it installed before starting.

The next step is to modify the options line in etc/config/c++.defaults.properties:

-fPIC -std=c++14 -isystem /opt/qtbase_dev/include -isystem /opt/qtbase_dev/include/QtCore

You need to replace /opt/qtbase_dev with your own Qt build path.

Then simply call make in the root folder, and the application starts running on port 10240 (by default).

And the mandatory screenshots:



The post Using Compiler Explorer with Qt appeared first on Qt Blog.

June 21, 2017

In this post, I am going to discuss how a submarine works and my thought process for implementing the three basic features of a submarine in the “Pilot a Submarine” activity for the Qt version of GCompris, which are:

  • The Engine
  • The Ballast tanks and
  • The Diving Planes

The Engine

The engines of most submarines are either nuclear-powered or diesel-electric, and are used to drive an electric motor which, in turn, powers the submarine’s propellers. In this implementation, we will have two buttons: one for increasing and another for decreasing the power generated by the submarine.

Ballast Tanks

The ballast tanks are spaces in the submarine that can be filled with either water or air. They help the submarine dive and resurface, using the concept of buoyancy. If the tanks are filled with water, the submarine dives underwater, and if they are filled with air, it resurfaces to the surface of the water.
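The dive/resurface behaviour follows Archimedes' principle: the submarine sinks when its total weight exceeds the buoyant force of the displaced water. A small Python illustration with hypothetical numbers (not part of the activity's code):

```python
# Archimedes' principle with made-up numbers: the submarine dives when
# its total weight exceeds the buoyant force of the water it displaces.
WATER_DENSITY = 1000.0   # kg/m^3
G = 9.81                 # m/s^2

def net_downward_force(hull_mass, ballast_water_mass, displaced_volume):
    weight = (hull_mass + ballast_water_mass) * G
    buoyancy = WATER_DENSITY * displaced_volume * G
    return weight - buoyancy   # > 0 means the submarine sinks

# Tanks full of air: light, so it floats; tanks full of water: heavy, so it dives.
print(net_downward_force(800.0, 0.0, 1.0) < 0)    # True (floats)
print(net_downward_force(800.0, 400.0, 1.0) > 0)  # True (dives)
```

All masses and the displaced volume here are arbitrary example values; only the sign of the result matters.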

Diving Planes

Once underwater, the diving planes of a submarine help to accurately control its depth. They are very similar to the fins on the bodies of sharks, which help them swim and dive. When the planes are pointed downwards, the water flowing over the planes generates more pressure on the top surface than on the bottom surface, forcing the submarine to dive deeper. This allows the driver to control the depth and the angle of the submarine.


In this section I will be going through how I implemented the submarine using QML. For handling physics, I used Box2D.

The Submarine

The submarine is a QML Item element, designed as follows:

Item {
    id: submarine

    z: 1

    property point initialPosition: Qt.point(0,0)
    property bool isHit: false
    property int terminalVelocityIndex: 100
    property int resetVerticalSpeed: 500

    /* Maximum depth the submarine can dive when ballast tank is full */
    property real maximumDepthOnFullTanks: (background.height * 0.6) / 2

    /* Engine properties */
    property point velocity
    property int maximumXVelocity: 5

    /* Wings property */
    property int wingsAngle
    property int initialWingsAngle: 0
    property int maxWingsAngle: 2
    property int minWingsAngle: -2

    function destroySubmarine() {
        isHit = true
    }

    function resetSubmarine() {
        isHit = false

        x = initialPosition.x
        y = initialPosition.y

        velocity = Qt.point(0,0)
        wingsAngle = initialWingsAngle
    }

    function increaseHorizontalVelocity(amt) {
        if (submarine.velocity.x + amt <= submarine.maximumXVelocity) {
            submarine.velocity.x += amt
        }
    }

    function decreaseHorizontalVelocity(amt) {
        if (submarine.velocity.x - amt >= 0) {
            submarine.velocity.x -= amt
        }
    }

    function increaseWingsAngle(amt) {
        if (wingsAngle + amt <= maxWingsAngle) {
            wingsAngle += amt
        } else {
            wingsAngle = maxWingsAngle
        }
    }

    function decreaseWingsAngle(amt) {
        if (wingsAngle - amt >= minWingsAngle) {
            wingsAngle -= amt
        } else {
            wingsAngle = minWingsAngle
        }
    }

    function changeVerticalVelocity() {
        /*
         * Movement due to planes
         * Movement is affected only when the submarine is moving forward
         * When the submarine is on the surface, the planes cannot be used
         */
        if (submarineImage.y > 0) {
            submarine.velocity.y = (submarine.velocity.x) > 0 ? wingsAngle : 0
        } else {
            submarine.velocity.y = 0
        }

        /* Movement due to Ballast tanks */
        if (wingsAngle == 0 || submarine.velocity.x == 0) {
            var yPosition = submarineImage.currentWaterLevel / submarineImage.totalWaterLevel * submarine.maximumDepthOnFullTanks

            speed.duration = submarine.terminalVelocityIndex * Math.abs(submarineImage.y - yPosition) // terminal velocity
            submarineImage.y = yPosition
        }
    }

    BallastTank {
        id: leftBallastTank

        initialWaterLevel: 0
        maxWaterLevel: 500
    }

    BallastTank {
        id: rightBallastTank

        initialWaterLevel: 0
        maxWaterLevel: 500
    }

    BallastTank {
        id: centralBallastTank

        initialWaterLevel: 0
        maxWaterLevel: 500
    }

    Image {
        id: submarineImage
        source: url + "submarine.png"

        property int currentWaterLevel: bar.level < 7 ? centralBallastTank.waterLevel : leftBallastTank.waterLevel + centralBallastTank.waterLevel + rightBallastTank.waterLevel
        property int totalWaterLevel: bar.level < 7 ? centralBallastTank.maxWaterLevel : leftBallastTank.maxWaterLevel + centralBallastTank.maxWaterLevel + rightBallastTank.maxWaterLevel

        width: background.width / 9
        height: background.height / 9

        function broken() {
            source = url + "submarine-broken.png"
        }

        function reset() {
            source = url + "submarine.png"
            speed.duration = submarine.resetVerticalSpeed
            x = submarine.initialPosition.x
            y = submarine.initialPosition.y
        }

        Behavior on y {
            NumberAnimation {
                id: speed
                duration: 500
            }
        }

        onXChanged: {
            if (submarineImage.x >= background.width) {
                /* ... */
            }
        }
    }

    Body {
        id: submarineBody
        target: submarineImage
        bodyType: Body.Dynamic
        fixedRotation: true
        linearDamping: 0
        linearVelocity: submarine.isHit ? Qt.point(0,0) : submarine.velocity

        fixtures: Box {
            id: submarineFixer
            width: submarineImage.width
            height: submarineImage.height
            categories: items.submarineCategory
            collidesWith: Fixture.All
            density: 1
            friction: 0
            restitution: 0
            onBeginContact: {
                var collidedObject = other.getBody().target

                if (collidedObject == whale) {
                    /* ... */
                }
                if (collidedObject == crown) {
                    /* ... */
                } else {
                    /* ... */
                }
            }
        }
    }

    Timer {
        id: updateVerticalVelocity
        interval: 50
        running: true
        repeat: true

        onTriggered: submarine.changeVerticalVelocity()
    }
}

The Item is a parent object that holds all the different components of the submarine (the Image, the BallastTank instances and the Box2D Body). It also contains the functions and variables that are global to the submarine.

The Engine

The engine is implemented very straightforwardly via the linearVelocity property of the Box2D Body element. We have two variables global to the submarine for handling the engine, defined as follows:

property point velocity
property int maximumXVelocity: 5

which are pretty much self-explanatory: velocity holds the current velocity of the submarine, both horizontal and vertical, and maximumXVelocity holds the maximum horizontal speed the submarine can achieve.

For increasing or decreasing the velocity of the submarine, we have two functions global to the submarine, as follows:

function increaseHorizontalVelocity(amt) {
    if (submarine.velocity.x + amt <= submarine.maximumXVelocity) {
        submarine.velocity.x += amt
    }
}

function decreaseHorizontalVelocity(amt) {
    if (submarine.velocity.x - amt >= 0) {
        submarine.velocity.x -= amt
    }
}

which essentially take the amount by which the velocity.x component needs to be increased or decreased, check whether the result stays within range, and apply the change accordingly.

The actual applying of the velocity is very straightforward, which takes place in the Body component of the submarine as follows:

Body {
    linearVelocity: submarine.isHit ? Qt.point(0,0) : submarine.velocity
}

The submarine.isHit property, as the name suggests, holds whether the submarine has been hit by an object (other than the pickups). If so, the velocity is reset to (0,0).

Thus, to increase or decrease the engine power, we just have to call one of the two functions anywhere in the code:

submarine.increaseHorizontalVelocity(1); /* For increasing H velocity */
submarine.decreaseHorizontalVelocity(1); /* For decreasing H velocity */

The Ballast Tanks

The ballast tanks are implemented separately in BallastTank.qml, since they are instantiated more than once. It looks like the following:

Item {
    property int initialWaterLevel
    property int waterLevel: 0
    property int maxWaterLevel
    property int waterRate: 10
    property bool waterFilling: false
    property bool waterFlushing: false

    function fillBallastTanks() {
        waterFilling = !waterFilling

        if (waterFilling) {
            fillBallastTanks.start()
        } else {
            fillBallastTanks.stop()
        }
    }

    function flushBallastTanks() {
        waterFlushing = !waterFlushing

        if (waterFlushing) {
            flushBallastTanks.start()
        } else {
            flushBallastTanks.stop()
        }
    }

    function updateWaterLevel(isInflow) {
        if (isInflow) {
            if (waterLevel < maxWaterLevel) {
                waterLevel += waterRate
            }
        } else {
            if (waterLevel > 0) {
                waterLevel -= waterRate
            }
        }

        if (waterLevel > maxWaterLevel) {
            waterLevel = maxWaterLevel
        }

        if (waterLevel < 0) {
            waterLevel = 0
        }
    }

    function resetBallastTanks() {
        waterFilling = false
        waterFlushing = false

        waterLevel = initialWaterLevel
    }

    Timer {
        id: fillBallastTanks
        interval: 500
        running: false
        repeat: true

        onTriggered: updateWaterLevel(true)
    }

    Timer {
        id: flushBallastTanks
        interval: 500
        running: false
        repeat: true

        onTriggered: updateWaterLevel(false)
    }
}

What these functions essentially do is:

  • fillBallastTanks: Fills the ballast tank up to maxWaterLevel. It toggles the flag waterFilling; if the tank is to be filled with water, the timer fillBallastTanks is started, which increases the water level in the tank every 500 milliseconds.
  • flushBallastTanks: Flushes the ballast tank down to 0. It toggles the flag waterFlushing; if the tank is to be flushed, the timer flushBallastTanks is started, which decreases the water level in the tank every 500 milliseconds.
  • resetBallastTanks: Resets the water level in the ballast tank to its initial value.
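The water-level bookkeeping above can be sketched in plain Python (property names mirror the QML; this is an illustration, not the actual GCompris code):

```python
# Sketch of the BallastTank water-level logic, mirroring the QML above.
# Names (water_level, max_water_level, water_rate) follow the QML properties;
# this is an illustration, not the actual GCompris implementation.

class BallastTank:
    def __init__(self, initial_water_level=0, max_water_level=500, water_rate=10):
        self.initial_water_level = initial_water_level
        self.max_water_level = max_water_level
        self.water_rate = water_rate
        self.water_level = initial_water_level

    def update_water_level(self, is_inflow):
        # One timer tick: add or remove water, then clamp to [0, max].
        if is_inflow:
            self.water_level += self.water_rate
        else:
            self.water_level -= self.water_rate
        self.water_level = max(0, min(self.water_level, self.max_water_level))

    def reset(self):
        self.water_level = self.initial_water_level

tank = BallastTank()
for _ in range(3):          # three 500 ms ticks of the fill timer
    tank.update_water_level(True)
print(tank.water_level)     # 30
```

Each tick changes the level by water_rate, and the clamp guarantees the level never leaves the [0, maxWaterLevel] range, just like the QML bounds checks.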

In the Submarine Item, we just use three instances of the BallastTank object, for the left, right and central ballast tanks, setting up their initial and maximum water levels.

BallastTank {
    id: leftBallastTank

    initialWaterLevel: 0
    maxWaterLevel: 500
}

BallastTank {
    id: rightBallastTank

    initialWaterLevel: 0
    maxWaterLevel: 500
}

BallastTank {
    id: centralBallastTank

    initialWaterLevel: 0
    maxWaterLevel: 500
}

To fill up or flush a ballast tank (centralBallastTank in this case), we just have to call either of the following two functions:

centralBallastTank.fillBallastTanks() /* For filling */
centralBallastTank.flushBallastTanks() /* For flushing */

I will discuss how the depth is maintained using the ballast tanks in the next section.

The Diving Planes

The diving planes are used to control the depth of the submarine once it is moving underwater, keeping in mind that they need to be effectively integrated with the ballast tanks. This is implemented in the changeVerticalVelocity() function, discussed below:

/*
 * Movement due to planes
 * Movement is affected only when the submarine is moving forward
 * When the submarine is on the surface, the planes cannot be used
 */
if (submarineImage.y > 0) {
    submarine.velocity.y = (submarine.velocity.x) > 0 ? wingsAngle : 0
} else {
    submarine.velocity.y = 0
}

However, under either of the following conditions:

  • the angle of the planes is reduced to 0, or
  • the horizontal velocity of the submarine is 0,

the ballast tanks take over, which is implemented as:

/* Movement due to Ballast tanks */
if (wingsAngle == 0 || submarine.velocity.x == 0) {
    var yPosition = submarineImage.currentWaterLevel / submarineImage.totalWaterLevel * submarine.maximumDepthOnFullTanks

    speed.duration = submarine.terminalVelocityIndex * Math.abs(submarineImage.y - yPosition) // terminal velocity
    submarineImage.y = yPosition
}

yPosition calculates what fraction of the tanks is filled with water and, from that, the depth to which the submarine will dive. speed.duration is the duration of the transition animation; it depends directly on how far the submarine has to travel along the Y axis, to avoid a steep rise or fall of the submarine.
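As a quick numeric sketch of that calculation, with hypothetical values (the real figures come from the scene geometry), half-full tanks target half the maximum depth:

```python
# Sketch of the depth calculation from changeVerticalVelocity(),
# using hypothetical values for illustration.
current_water_level = 250          # water currently in the tanks
total_water_level = 500            # combined tank capacity
maximum_depth_on_full_tanks = 300  # pixels, assumed for this example
terminal_velocity_index = 100
submarine_y = 0                    # currently at the surface

# Target depth is proportional to how full the tanks are.
y_position = current_water_level / total_water_level * maximum_depth_on_full_tanks
# Animation duration grows with the distance to travel, avoiding a steep jump.
duration = terminal_velocity_index * abs(submarine_y - y_position)

print(y_position)  # 150.0
print(duration)    # 15000.0
```

With the tanks half full, the target depth is half of maximumDepthOnFullTanks, and the animation takes proportionally longer the further away the submarine currently is.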

For increasing or decreasing the angle of the diving planes, we just need to call either of the following two functions:

submarine.increaseWingsAngle(1) /* For increasing */
submarine.decreaseWingsAngle(1) /* For decreasing */


That’s it for now! The two major goals to be completed next are the rotation of the submarine (in case more than one tank is used and they are unequally filled) and the UI for controlling the submarine. I will provide an update once they are completed.

It has been a long time since I posted on my blog and, frankly, I missed it. I’ve been busy with school: courses, tons of homework, projects and presentations.

Since I had a great experience last year with GCompris and KDE in general, I decided to apply to this year’s GSoC as well; only this time, I chose another project from KDE: Minuet.


Minuet is part of KDE-edu, and its goal is to help teachers and students, both novice and experienced, teach and, respectively, learn and exercise their music skills. It is primarily focused on ear-training exercises; other areas will soon be available.

Minuet includes a virtual piano keyboard, displayed at the bottom of the screen, on which users can visualize the exercises. A piano keyboard is a good starting point for anyone who wants to learn the basics of music theory: intervals, chords, scales, etc. Minuet is currently based on the piano keyboard for all its ear-training exercises. While this is a great feature, some may find it not quite suited to their own musical instrument.



My project aims to deliver to the user a framework which will support the implementation of multiple instrument views as Minuet plugins. Furthermore, apart from the piano keyboard, I will implement another instrument for playing the exercise questions and user’s answers.

This mechanism should allow new instruments to be integrated as Minuet plugins. After downloading the preferred instrument plugin, users will be able to switch between instruments, allowing them to enhance their musical knowledge by training with that particular instrument.

By the end of summer, I intend to have changed the current architecture into a multiple-instrument visualization framework and to have refactored the piano keyboard view as a separate plugin. I also intend to have implemented a plugin for at least one new instrument: a guitar.

A mock-up of the new guitar view is shown below:

I hope it will be a great summer for me, my mentor and the users of Minuet, to whom I want to offer a better experience through my work.

Sounds like déjà vu? You are right! We used to have Facebook event sync in KOrganizer back in the KDE 4 days, thanks to Martin Klapetek. The Facebook Akonadi resource, unfortunately, did not survive the Facebook API changes and our switch to KF5/Qt5.

I’m using a Facebook event sync app on my Android phone, which is very convenient, as I get to see all the events I am attending, interested in or just invited to directly in my phone’s calendar, and I can schedule my other events with those in mind. Now I finally grew tired of having to check my phone or Facebook whenever I wanted to schedule an event through KOrganizer, and I spent a few evenings writing a brand new Facebook Event resource.

Inspired by the Android app the new resource creates several calendars – for events you are attending, events you are interested in, events you have declined and invitations you have not responded to yet. You can configure if you want to receive reminders for each of those.

Additionally, the resource fetches a list of all your friends’ birthdays (at least of those who have their birthday visible to their friends) and puts them into a Birthday calendar. You can configure reminders for those separately as well.

The Facebook Sync resource will be available in the next KDE Applications feature release in August.

Hello readers

I’m glad to share that I have been selected for a Google Summer of Code project under KDE for the second time. It’s my second consecutive year working with the digiKam team.

digiKam is an advanced digital photo management application which enables users to view, manage, edit, organise, tag and share photographs on Linux systems. digiKam has a feature to search for items by similarity, which requires computing image fingerprints that are stored in the main database. These data take up disk space, especially with huge collections, bloat the main database a lot, and make it harder to back up the main database, which holds all the main information for each registered item: tags, labels, comments, etc.

The goal of this proposal is to store the similarity fingerprints in a dedicated database. This would be a big relief for end users, as image fingerprints are a few KB of raw data per image: storing all of them takes huge disk space and increases latency for huge collections.

Thus, to overcome all the above issues, a new DB interface will be created. (This work has already been done for thumbnails and face fingerprints.) Also, from a backup point of view, it is easier to optimise separate files.

I’ll keep you updated on my work in upcoming posts.

Till then, I encourage you to use the software. It’s easy to install and use. (You can find a cheat sheet for building digiKam in my previous post!)

Happy digiKaming!






Following the fifth release, 5.5.0, published in March 2017, the digiKam team is proud to announce the new release 5.6.0 of the digiKam Software Collection. With this version, the HTML gallery and video slideshow tools are back, database shrinking (e.g. purging stale thumbnails) is also supported on MySQL, the grouping-items feature has been improved, support for custom sidecar MIME types has been added, the geolocation bookmarks received fixes to be fully functional with bundles, and of course lots of bugs have been fixed.

HTML Gallery Tool

The HTML gallery is accessible through the Tools menu in the main bar of both digiKam and showFoto. It allows you to create a web gallery from a selection of photos or a set of albums, which you can open in any web browser. There are many themes to choose from, and you can create your own as well. JavaScript support is also available.

Video Slideshow Tool

The Video Slideshow tool is also accessible through the Tools menu in the main bar of both digiKam and showFoto. It allows you to create a video slideshow from a selection of photos or albums. The generated video file can be viewed on any media player, such as phones, tablets, Blu-ray players, etc. There are many settings to customize the format, codec, resolution, and transitions (such as the famous Ken Burns effect).

Database Integrity Tool

Already in the 5.5.0 release, the tool dedicated to testing database integrity and removing obsolete information was improved. Besides obvious data-safety improvements, this can free up quite a lot of space in the digiKam databases. For technical reasons, only SQLite databases could be shrunk in the 5.5.0 release. With 5.6.0, this is now also possible for MySQL databases.

Items Grouping Features

Earlier changes to the grouping behaviour showed that digiKam users have quite diverse workflows, so with the current change we try to accommodate that diversity.

Originally grouped items were basically hidden away. Due to requests to include grouped items in certain operations, this was changed entirely to include grouped items in (almost) all operations. Needless to say, this wasn’t such a good idea either. So now you can choose which operations should be performed on all images in a group or just the first one.

The corresponding settings live in the configuration wizard under Miscellaneous in the Grouping tab. By default all operations are set to Ask, which will open a dialog whenever you perform this operation and grouped items are involved.

Extra Sidecars Support

Another new capability is recognising additional sidecars. Under the new Sidecars tab in the Metadata part of the configuration wizard, you can specify any additional extension that you want digiKam to recognise as a sidecar. These files will be neither read from nor written to, but they will be moved/renamed/deleted/… together with the item that they belong to.

Geolocation Bookmarks

Another important change in this new version restores the geolocation bookmarks feature, which did not work with the bundle versions of digiKam (AppImage, macOS, and Windows). The new bookmark manager has been fully rewritten and is still compatible with previous geolocation bookmark settings. It can now display the bookmark GPS information on a map, for better usability while editing your collection.

Google Summer of Code 2017 Students

This summer the team is proud to be assisting 4 students working on separate projects:

Swati Lodha is back in the team. As in 2016, she will work on improving the database interface. After having fixed and improved MySQL support in digiKam, her task this year is to isolate all the database contents dedicated to managing the similarity fingerprints. As with thumbnails and face recognition, these elements will be stored in a new dedicated database. The goal is to reduce the core database size, simplify maintenance and decrease core database latencies.

Yingjie Liu is a Chinese student, mainly specialized in math and algorithms, who will add a new, efficient face-recognition algorithm and will try to introduce an AI solution to simplify the face-tagging workflow.

Ahmed Fathi is an Egyptian student who will work on restoring and improving DLNA support in digiKam, to be able to stream collection contents over the network to compatible UPnP devices such as smart TVs, tablets or phones.

Shaza Ismail is another Egyptian student, who will work on an ambitious project: a tool for the image editor that heals image stains by cloning one part of the image over another. It will mainly be tested on dust spots, but it can be used to hide other artifacts as well.

Final Words

The next main digiKam version 6.0.0 is planned for the end of this year, when all Google Summer of Code projects will be ready to be backported for a beta release. In September, we will release a maintenance version 5.7.0 with a set of bugfixes as usual.

For further information about 5.6.0, take a look at the list of more than 81 issues closed in Bugzilla.

digiKam 5.6.0 Software collection source code tarball, Linux 32/64 bits AppImage bundles, MacOS package, and Windows 32/64 bits installers can be downloaded from this repository.

Happy digiKaming this summer!

June 20, 2017

As my first subject for this animation blog series, we will be taking a look at Animation curves.

Curves, or better, easing curves, are one of the first concepts we are exposed to when dealing with the subject of animation in the QML space.

What are they?

Well, in simplistic terms, they are a description of an X position over a Time axis that starts at (0, 0) and ends at (1, 1). These curves are …
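To make that definition concrete, here is a small Python sketch (an example of such a curve, not code from the post) of a classic cubic ease-in-out: a function over normalized time that indeed starts at (0, 0) and ends at (1, 1):

```python
def ease_in_out_cubic(t):
    """Cubic ease-in-out: maps normalized time t in [0, 1] to progress in [0, 1].

    Slow at the start, fast in the middle, slow again at the end,
    with fixed endpoints (0, 0) and (1, 1).
    """
    if t < 0.5:
        return 4 * t ** 3
    return 1 - ((-2 * t + 2) ** 3) / 2

print(ease_in_out_cubic(0.0))  # 0.0
print(ease_in_out_cubic(0.5))  # 0.5
print(ease_in_out_cubic(1.0))  # 1.0
```

Qt ships many such curves ready-made (e.g. the QEasingCurve types exposed to QML animations), so in practice you pick one rather than write your own.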

The post A tale of 2 curves appeared first on KDAB.

We are very happy to announce the first AppImage of the next generation Kdenlive. We have been working since the first days of 2017 to cleanup and improve the architecture of Kdenlive’s code to make it more robust and clean. This also marked a move to QML for the display of the timeline.

This first AppImage is only provided for testing purposes. It crashes a lot because many features have yet to be ported to the new code, but you can already get a glimpse of the new timeline, move clips and compositions, group items and add some effects. This first AppImage can be downloaded from the KDE servers. Just download the AppImage, make the file executable and run it. This version is not appropriate for production use and, due to file format changes, will not properly open previous Kdenlive project files. We hope to provide reliable nightly build AppImages so that our users can follow the development and provide feedback before the final release.

Today is also our 18th Kdenlive Café, so you can meet us tonight, the 20th of June, at 9pm (CEST) in the #kdenlive channel to discuss the evolution of and issues around Kdenlive.

I will also be presenting the progress of this Kdenlive version this summer (22nd of July) at Akademy in Almería, Spain, so feel free to come and visit the KDE Community at this great event.

Setting up arcanist for Koko

  • It was pretty easy to install. On my Arch Linux, the command below did the work for me.

    yaourt -S arcanist-git

  • Then I had to add a .arcconfig to the Koko repository so that arc knows where it should publish the changes:

    { "phabricator.uri": "https://phabricator.kde.org/" }

  • The only problem is with SSL certificates, as the university campus wireless network uses its own self-signed certificate. This creates problems accessing SSL-encrypted web content, which is pretty much everything related to development :P
  • Also, the university campus network does not allow SSH over its network. This prevents me from pushing changes to the git repository.
  • Hence, to use arcanist, every time I will have to check curl.cainfo in /etc/php/php.ini and set/unset the environment variable GIT_SSL_CAINFO, depending on the network I am using.

June 19, 2017

KRuler, in case you don't know it, is a simple software ruler to measure lengths on your desktop. It is one of the oldest KDE tools; its first commit dates from November 4th, 2000. Yes, it's almost old enough to vote.

I am a long time KRuler user. It gets the job done, but I have often found myself saying "one day I'll fix this or that". And never doing it.

HiDPI screens really hurt the poor app, so I finally decided to do something about it and spent some time on it during my daily commute.

This is what it looked like on my screen when I started working on it:

KRuler Before

As any developer would, I expected it to be no more than a week of work... Of course it took way longer than that, because there was always something odd here and there preventing me from pushing a patch.

I started by making KRuler draw scale numbers less often to avoid ugly overlapping texts. I then made it draw ticks on both sides, to go from 4 orientations (North, South, West, East) to 2: vertical or horizontal.

The optional rotation and buttons were getting in the way though: the symmetric ticks required the scale numbers to be vertically centered so buttons were overlapping it. I decided to remove them (they were already off by default). With only two orientations it is less useful to have rotation buttons anyway: it is simple enough to use either the context menu, middle-click the ruler, or the R shortcut to change the orientation. Closing is accessible through the context menu as well.

One of the reasons (I think) for the 4 orientations was the color picker feature. It makes little sense to me to have a color picker in a ruler: it is more natural to use KColorChooser to pick colors. I removed the color picker, allowing me to remove the oddly shaped mouse cursor and refresh the appearance of the length indicator to something a bit nicer.

I then made it easier to adjust the length of the ruler by dragging its edges instead of having to pick the appropriate length from a sub-menu of the context menu. This made it possible to remove this sub-menu.

This is what KRuler looks like now:

KRuler after

That is only part 1 though. I originally had 2 smaller patches to add, but Jonathan Riddell, who kindly reviewed the monster patch, requested another small fix, so that makes 3 patches to go. I need to set up Arcanist and figure out how to use it to submit them to Phabricator, as I have been told Review Board is old school these days :)

Or: Tying up loose ends, where some are still slightly too short.


Assuming that:

  • you favour offline documentation (not only due to the nice integration with IDEs like KDevelop),
  • you develop code using KDE Frameworks or other Qt-based libraries,
  • you know that all the KF5 libraries have seen many people taking care of API documentation in the code over the years,
  • and you have read about doxygen’s capability to create API dox in QCH format,
  • and you want your Linux distribution package management to automatically deliver the latest version of the documentation (resp. QCH files) together with the KDE Frameworks libraries and headers (and ideally same for other Qt-based libraries),

the idea is easily derived: just extend the libraries’ buildsystem to also spit out QCH files during the package builds.

It’s all prepared, can ship next week, latest!!1

Which would just be a simple additional target and command, invoking doxygen with a proper configuration file. Right? So simple, you wonder why no one had done it yet :)

Some initial challenge seems quickly handled, which is even more encouraging:
for proper documentation one also wants cross-linking to the documentation of things used in the API which come from other libraries, e.g. base classes and types. This requires passing doxygen the list of those other documentations together with a set of parameters, to generate proper qthelp:// URLs or to copy over documentation for things like inherited methods.
Such a list gets very long, especially for KDE Frameworks libraries in tier 3. And with indirect dependencies pulled into the API, the list might become incomplete on changes. The same goes for any other changes to the parameters for those other documentations.
So it is basically a similar situation to linking code libraries, which suggests a similar handling: placing the needed information with the CMake config files of the targeted library, so whoever cross-links to the QCH file of that library can fetch the up-to-date information from there.

Things seemed to work okay on first tests, so last September a pull request was made to add some respective macro module to Extra-CMake-Modules to get things going and a blog post “Adding API dox generation to the build by CMake macros” was written.

This… works. You just need to prepare this. And ignore that.

Just, looking closer, lots of glitches popped up on the scene. Worse, even show stoppers made their introduction, at both ends of the process pipe:
At the generation side, doxygen turned out to have bitrotted for QCH creation, possibly due to lack of use? Time to sacrifice to the Powers of FLOSS, git clone the sources and poke around to see what is broken and how to fix it. Some time and an accepted pull request later, the biggest issue (some content failing to be added to the QCH file) was handled, though the fix still needed to get out in a released version (which it now has been for some months).
At the consumption side, Qt Assistant and Qt Creator turned out to no longer be able to properly show QCH files with JavaScript and other HTML5 content, due to QWebKit having been deprecated/dropped; in many distributions both apps now use only QTextBrowser for rendering the documentation pages. And not everyone is using KDevelop and its documentation browser, which uses QWebKit or, in the master branch, favours QWebEngine if present.
Which means an investment into QCH files from doxygen would only be interesting to a small audience. Myself currently without the resources and interest to mess around with the Qt help engine sources, I look with hope at the resurrection of QWebKit as well as the patch for a QtWebEngine-based help engine (if you are Qt-involved, please help and push that patch some more!)

Finally kicking off the production cycle

Not properly working tools, nothing trying to use the tools on a bigger scale… a classical self-blocking state. So it was time to break this up and get some momentum in, by tying the first things together where possible and enabling the generation of QCH files during builds of the KDE Frameworks libraries.

And thus, in the current master branches (which will become v5.36 in July), there has now been added to Extra-CMake-Modules the new module ECMAddQch, and to all of the KDE Frameworks libraries with public C++ API the option to generate QCH files with the API documentation, enabled by passing -DBUILD_QCH=ON to cmake. If you have also passed -DKDE_INSTALL_USE_QT_SYS_PATHS=ON (or are installing to the same prefix as Qt), the generated QCH files will be installed to places where Qt Assistant and Qt Creator automatically pick them up and include them as expected:

Qt Assistant with lots of KF5 API dox

KDevelop picks them up as well, but needs some manual reconfiguration to do so.

(And of course ECMAddQch is designed to be useful for non-KF5 libraries as well; give it a try once you get hold of it!)

You and getting rid of the remaining obstacles

So while for some setups the generated QCH files of the KDE Frameworks are already useful (I have been using them for some weeks now, e.g. for KDevelop development, in KDevelop), for many they still have to become that. Which will take some more time and ideally contributions from others as well, including the Doxygen and Qt Help engine maintainers.

Here a list of related reported Doxygen bugs:

  • 773693 – Generated QCH files are missing dynsections.js & jquery.js, result in broken display (fixed for v1.8.13 by patch)
  • 773715 – Enabling QCH files without any JavaScript, for viewers without such support
  • 783759 – PERL_PATH config option: when is this needed? Still used?
  • 783762 – QCH files: “Namespaces” or “Files” in the navigation tree get “The page could not be found” (proposed patch)
  • 783768 – QCH files: classes & their constructors get conflicting keyword handling (proposed patch)
  • YETTOFILE – doxygen tag files contain origin paths for “file”, leaking info and perhaps also being an issue for reproducible builds

And a related reported Qt issue:

There is also one related reported CMake issue:

  • 16990 – Wanted: Import support for custom targets (extra bonus: also export support)

And again, it would be good to see the patch for a QtWebEngine-based help engine getting more feedback and pushing from qualified people. And to have distributions make the effort to provide Qt Assistant and Qt Creator with *Web*-based documentation engines (see e.g. the bug filed with openSUSE).

May the future be bright^Wdocumented

I am happy to see that Gentoo & FreeBSD packagers have already started to look into extending their KDE Frameworks packaging with generated API dox QCH files for the upcoming 5.36.0 release in July, with other packagers planning to do so soon as well.

So perhaps one not too distant day it will be just normal business to have QCH files with API documentation provided by your distribution not just for the Qt libraries themselves, but also for every library based on them. After all, documentation has been one of the things making Qt so attractive. As a developer of Qt-based software, I very much look forward to that day :)

Next stop then: QML API documentation :/

I'm glad to announce that a first stable version of Brooklyn is released!
What's new? Well:

  • Telegram and IRC APIs are fully supported;
  • it manages attachments (even Telegram's video notes), also on text-only protocols, through a web server;
  • it has an anti-flood feature on IRC (e.g. it doesn't notify other channels if a user logs out without writing any message). For this I have to say "thank you" to Cristian Baldi, a W2L developer who had this fabulous idea;
  • it provides support for edited messages;
  • SASL login mechanism is implemented;
  • map locations are supported through OpenStreetMap;
  • you can see a list of other channels' members by typing "botName users" on IRC or using the "/users" command on Telegram;
  • if someone writes a private message to the bot instead of in a public channel, it sends them the license message "This software is released under the GNU AGPL license. https://phabricator.kde.org/source/brooklyn/".

As you may have already noticed, after talking with my mentor I decided to modify the GSoC timeline. We decided to wait until the Rocket.Chat REST APIs are more stable, and in the meantime to provide a fully working IRC/Telegram bridge.
This helped me provide more stable and useful software for the first evaluation.
We are also considering writing a custom wrapper for the REST APIs because current solutions don't fit our needs.

The last post reached over 600 people and that's awesome!
As always I will appreciate every single suggestion.
Have you tried the application? Do you have any plans to do so? Tell me everything in the comments section down below!

My project for Blue Systems is maintaining Calamares, the distro-independent installer framework. Not surprisingly, working on it means installing lots of Linux distros. Here’s my physical-hardware testing setup: two identical older HP desktop machines and a stack of physical DVDs. Very old-school. Often I use VirtualBox, but sometimes the hum of a DVD is just what I need to calm down. There’s a KDE Neon, a Manjaro and a Netrunner DVD there, but the machine labeled Ubuntu is running Kannolo and sporting an openSUSE Geeko.

I’m all for eclecticism.

So far, I’ve found one new bug in Calamares, and fixed a handful of them. I’m thankful to Teo, the previous Calamares maintainer, for providing helpful historical information, and to the downstream users (e.g. the distros) for being cheerful in explaining their needs.

Installing a bunch of different modern Linuxen is kind of neat; the variations in KDE Plasma Desktop configuration and branding are wild. Nearly all of them have trouble being usable on small screen sizes (e.g. the 800×600 that VirtualBox starts with — this has since been fixed). They all seem to install VirtualBox guest additions and can handle resizes immediately, so it’s not a huge issue, just annoying. I’ve only broken one of my Linux installs so far (running an update, which then crashed kscreenlocker, and now it just comes up with a black screen). I’ve got a KDE Neon dev/unstable as my main development VM set up, with KDevelop and the whole shizzle… it’s very nice inside my KDE 4 desktop on FreeBSD.

I’ve got two favorite features, so far, in Linux live CDs and in KDE Plasma installations: ejecting the live CD on shutdown (Neon does this) and skipping the confirmation screen + 30 second timeout when clicking logout or shutdown (Netrunner does this).

So, time to hunker down with the list of issues, and in the meantime: keep on installin’.

This is my first blog post. It’s a great opportunity to start documenting my journey as a software engineer with my GSoC project with digiKam as a part of KDE this summer.

June 18, 2017

Robert Kaye, creator of MusicBrainz

Robert Kaye is definitely a brainz-over-brawn kinda guy. As the creator of MusicBrainz, ListenBrainz and AcousticBrainz, all created and maintained under the MetaBrainz Foundation, he has pushed Free Software music cataloguing-tagging-classifying to the point it has more or less obliterated all the proprietary options.

In July he will be in Almería, delivering a keynote at the 2017 Akademy -- the yearly event of the KDE community. He kindly took some time out of packing for a quick trip to Thailand to talk with us about his *Brainz projects, how to combine altruism with filthy lucre, and a cake he once sent to Amazon.

Robert Kaye: Hola, ¿qué tal?

Paul Brown: Hey! I got you!

Robert: Indeed. :)

Paul: Are you busy?

Robert: I'm good enough, packing can wait. :)

Paul: I'll try and be quick.

Robert: No worries.

* Robert has vino in hand.

Paul: So you're going to be delivering the keynote at Akademy...

* Robert is honored.

Paul: Are you excited too? Have you ever done an Akademy keynote?

Robert: Somewhat. I've got... three? Four trips before going to Almería. :)

Paul: God!

MetaBrainz is the umbrella project under which all other *Brainz are built.

Robert: I've never done a keynote before. But I've done tons and tons of presentations and speeches, including to the EU, so this isn't something I'm going to get worked up about thankfully.

Paul: I'm assuming you will be talking about MetaBrainz. Can you give us a quick summary of what MetaBrainz is and what you do there?

Robert: Yes, OK. In 1997/8 in response to the CDDB database being taken private, I started the CD Index. You can see a copy of it in the Wayback Machine. It was a service to look up CDs and I had zero clues about how to do open source. Alan Cox showed up and told me that databases would never scale and that I should use DNS to do a CD lookup service. LOL. It was a mess of my own making and I kinda walked away from it until the .com crash.

Then in 2000, I sold my Honda roadster and decided to create MusicBrainz. MusicBrainz is effectively a music encyclopedia. We know what artists exist, what they've released, when, where their Twitter profiles are, etc. We know track listings, acoustic fingerprints, CD IDs and tons more. In 2004 I finally figured out a business model for this and created the MetaBrainz Foundation, a California tax-exempt non-profit. It cannot be sold, to prevent another CDDB. For many years MusicBrainz was the only project. Then we added the Cover Art Archive to collect music cover art. This is a joint project with the Internet Archive.

Then we added CritiqueBrainz, a place for people to write CC licensed music reviews. Unlike Wikipedia, ours are non-neutral POV reviews. It is okay for you to diss an album or a band, or to praise it.

Paul: An opinionated musical Wikipedia. I already like it.

Robert: Then we created AcousticBrainz, which is a machine learning/analysis system for figuring out what music sounds like. Then the community started BookBrainz. And two years ago we started ListenBrainz, which is an open source version of last.fm's audioscrobbler.

MusicBrainz is a repository of music metadata widely used by commercial and non-commercial projects alike.

Paul: Wait, let's backtrack a second. Can you explain AcousticBrainz a bit more? What do you mean when you say "figure out what music sounds like"?

Robert: AcousticBrainz allows users to download a client to run on their local music collection. For each track it does a very detailed low-level analysis of the acoustics of the file. This result is uploaded to the server and the server then does machine learning on it to guess: Does it have vocals? Male or female? Beats per minute? Genre? All sorts of things, and a lot of them still need a lot of improvement.

Paul: Fascinating.

Robert: Researchers provided all of the algorithms, being very proud and all: "I've done X papers on this and it is the state of the art". State of the art if you have 1,000 audio tracks, which is f**king useless to an open source person. We have three million tracks and we're not anywhere near critical mass. So, we're having to fix the work the researchers have done and then recalculate everything. We knew this would happen, so we engineered for it. We'll get it right before too long.

All of our projects are long-games. Start a project now and in five years it might be useful to someone. Emphasis on "might".

Then we have ListenBrainz. It collects the listening history of users. User X listened to track Y at time Z. This expresses the musical taste of one user. And with that we have all three elements that we've been seeking for over a decade: metadata (MusicBrainz), acoustic info (AcousticBrainz) and user profiles (ListenBrainz). The holy trinity as it were. You need all three in order to build a music recommendation engine.

The algorithms are not that hard. Having the underlying data is freakishly hard, unless you have piles of cash. Those piles of cash and therefore the engines exist at Google, Last.fm, Pandora, Spotify, et al. But not in open source.

Paul: Don't you have piles of cash?

Robert: Nope, no piles of cash. Piles of eager people, however! So, once we have these databases at maturity we'll create some recommendation engine. It will be bad. But then people will improve it and eventually a pile of engines will come from it. This has a significant chance of impacting the music world.

Paul: You say that many of the things may be useful one day, but you also said MetaBrainz has a business model. What is it?

Robert: The MetaBrainz business model started out with licensing data using the non-commercial licenses. Based on "people pay for frequent and easy updates to the data". That worked to get us to 250k/year.

Paul: Licensing the data to...?

Robert: The MusicBrainz core data. But there were a lot of people who didn't need the data on an hourly basis.

Paul: Sorry. I mean *who* were you licensing to?

Robert: It started with the BBC and Google. Today we have all these supporters. Nearly all the large players in the field use our data nowadays. Or lie about using our data. :)

Paul: Lie?

Robert: I've spoken to loads of IT people at the major labels. They all use our data. If you speak to the execs, they will swear that they have never used our data.

Paul: Ah. Hah hah. Sounds about right.

Robert: Anyways, two years ago we moved to a supporter model. You may legally use our data for free, but morally you should financially support us. This works.

Paul: Really?

Robert: We've always used what I call a "drug dealer business model". The data is free. Engineers download it and start using it. When they find it works and want to push it into a product they may do that without talking to us. Eventually we find them and knock on their door and ask for money.

Paul: They pay you? And I thought the music industry was evil.

Robert: This is the music *tech* companies. They know better.


Their bizdev types will ask: where else can we get this data for cheaper? The engineers look around for other options. Prices can range from 3x to 100x, depending on use, and the data is not nearly as good. So they sign up with us. This is not out of the kindness of their hearts.

Paul: Makes more sense now.

Robert: Have you heard the Amazon cake story?

Paul: The what now?

Robert: Amazon was 3 years behind in paying us. I harangued them for months. Then I said: "If you don't pay in 2 weeks, I am going to send you a cake."

Amazon got cake to celebrate the third anniversary of an unpaid invoice.

"A cake?"

"Yes, a cake. One that says 'Congratulations on the 3rd anniversary'..."

They panicked, but couldn't make it happen.

So I sent the cake, then silence for 3 days.

Then I got a call. Head of legal, head of music, head of AP, head of custodial, head of your momma. All in one room to talk to me. They rattled off what they owed us. It was correct. They sent a check.

Cake was sent on Tuesday, check in hand on Friday.

This was pivotal for me: recognizing that we can shame companies to do the right thing... Such as paying us because to switch off our data (drugs) is far worse than paying.

Last year we made $323k, and this year should be much better. We have open finances and everything. People can track where money goes. We get very few questions about us being evil and such.

Paul: How many people work with you at MetaBrainz, as in, are on the payroll?

Robert: This is my team. We have about 6 full-time equivalent positions. To add to that, we have a core of contributors: coders, docs, bugs, devops... Then a medium ring of hard-core editors. Nicolás Tamargo and one other guy have made over 1,000,000 edits to the database!

Paul: How many regular volunteers then?

Robert: 20k editors per year. Más o menos. And we have zero idea how many users. We literally cannot estimate it. 40M requests to our API per day. 400 replicated copies of our DB. VLC uses us and has the largest installation of MusicBrainz outside of MetaBrainz.

And we ship a virtual machine with all of MusicBrainz in it. People download that and hammer it with their personal queries. Google Assistant uses it, Alexa might as well, not sure. So, if you ask Google Assistant a music-related question, it is answered in part by our data. We've quietly become the music data backbone of the Internet and yet few people know about us.

Paul: Don't you get lawyers calling you up saying you are infringing on someone's IP?

Robert: Kinda. There are two types: 1) the spammers have found us and are hammering us with links to pirated content. We're working on fixing that. 2) Other lawyers will tell us to take content down, when we have ZERO content. They start being all arrogant. Some won't buzz off until I tell them to provide me with an actual link to illegal content on our site. And when they can't do it, they quietly go away.

The basic fact is this: we have the library card catalog, but not the library. We mostly only collect facts and facts are not copyrightable.

Paul: What about the covers?

Robert: That is where it gets tricky. We engineered it so that the covers never hit our servers and only go to the Internet Archive. The Archive is a library and therefore has certain protections. If someone objects to us having something, the archive takes it down.

Paul: Have you had many objections?

Robert: Not that many. Mostly for liner notes, not so much for covers. The rights for covers were never aggregated. If someone says they have the rights for a collection, they are lying to you. It's a legal mess, plain and simple. All of our data is available under clear licenses, except for the CAA -- "as is".

Paul: What do you mean by "rights for a collection"?

Robert: Rights for a collection of cover art. The rights reside with the band. Or the friend of the band who designed the cover. Lawyers never saw any value in covers pre-Internet. So the recording deals never included the rights to the covers. Everyone uses them without permission.

Paul: I find that really surprising. So many iconic covers.

Robert: It is obvious in the Internet age, less so before the Internet. The music industry is still quite uncomfortable with the net.

Paul: Record labels always so foresightful.

Robert: Exactly. Let's move away from labels and the industry.

Though, one thing tangentially, I envisioned X, Y, Z, uses for our data, but we made the data rigorous, well-connected and concise. Good database practices. And that is paying off in spades. The people who did not do that are finding that their data is no longer up to snuff for things like Google Assistant.

Paul: FWIW, I had never heard of Gracenote until today. I had heard of MusicBrainz, though. A lot.

Robert: Woo! I guess we're succeeding. :)

Paul: Well, it is everywhere, right?

Robert: For a while it was even in Antarctica! A sysadmin down there was wondering where the precious bandwidth went during the winter. Everyone was tagging their music collection when bored. So he set up a replica for the winter to save on bandwidth.

Paul: Of course they were and of course he did.

Robert: Follows, right? :)

Paul: Apart from music, which you clearly care for A LOT, I heard you are an avid maker too.

Robert: Yes. Party Robotics was a company I founded when I was still in California and we made the first affordable cocktail robots. But I also make blinky LED light installations. Right now I am working on a sleep debugger to try and improve my crapstastic sleep.

I have a home maker space with an X-Carve, 3D printer, hardware soldering station and piles of parts and tools.

Paul: Uh... How do flashing lights help with sleep?

Robert: Pretty lights and sleep-debugging are separate projects.

Paul: What's your platform of choice, Arduino?

Robert: Arduino and increasingly Raspberry Pi. The Zero W is the holy grail, as far as I am concerned.

Oh! And another project I want: ElectronicsBrainz.

Paul: This sounds fun already. Please tell.

Robert: Info, schematics and footprints for electronic parts. The core libraries with KiCad are never enough; you need to hunt for them. Screw that. Upload to ElectronicsBrainz, then, if you use a part, rate it, improve it. The good parts float to the top, the bad ones drop out. Integrate with KiCad and, bam! Makers can be much more useful. In fact, this open data paradigm and the associated business model is ripe for the world. There are data silos *everywhere*.

Paul: I guess that once you have set up something like MusicBrainz, you start seeing all sorts of applications in other fields.

Robert: Yes. Still, we can't do everything. The world will need more MetaBrainzies.

Paul: Meanwhile, how can non-techies help with all these projects?

Robert: Editing data/adding data, writing docs or managing bug reports as well. Clearly our base of editors is huge. It is a very transient community, except for the core.

Also, one thing that I want to mention in my keynote is blending volunteers and paid staff. We've been really lucky with that. The main reason for that is that we're open. We have nothing to hide. We're all working towards the same goals: making the projects better. And when you make a site that has 40M requests in a day, there are tasks that no one wants to do. They are not fun. Our paid staff work on all of those.

Volunteers do the things that are fun and can transition into paid staff -- that is how all of our paid staff became staff.

Paul: This is really an incredible project.

Robert: Thanks! Dogged determination for 17 years. It’s worth something.

Paul: I look forward to your keynote. Thank you for your time.

Robert: No problem.

Paul: I'll let you get back to your packing.

Robert: See you in Almería.

Robert Kaye will deliver the opening keynote at Akademy 2017 on the 22nd of July. If you would like to see him and talk to him live, register here.

About Akademy

For most of the year, KDE—one of the largest free and open software communities in the world—works on-line by email, IRC, forums and mailing lists. Akademy provides all KDE contributors the opportunity to meet in person to foster social bonds, work on concrete technology issues, consider new ideas, and reinforce the innovative, dynamic culture of KDE. Akademy brings together artists, designers, developers, translators, users, writers, sponsors and many other types of KDE contributors to celebrate the achievements of the past year and help determine the direction for the next year. Hands-on sessions offer the opportunity for intense work bringing those plans to reality. The KDE Community welcomes companies building on KDE technology, and those that are looking for opportunities. For more information, please contact The Akademy Team.

Dot Categories:


There’s a lot I did in the last 2 weeks, and since I did not update the blog last week, this post covers the last 2 weeks’ progress.

Before I begin with what I did, here’s a quick review of what I was working on and what had been done.

I started porting Cantor’s Qalculate backend to QProcess. During the first week I worked on establishing the connection with Qalculate, for which we use qalc, and some amount of time was spent parsing the output returned by qalc.


The Qalculate backend as of now uses the libqalculate API for computing results. To successfully eliminate the direct use of the API, all commands should make use of qalc, but since qalc does not support all the functions of Qalculate, I had to segregate the parts depending on the API from those using qalc. For instance, qalc does not support plotting graphs.

The version of qalc that we are using supports almost all the major functionality of Qalculate, but there are a few things for which we still depend on the API directly.

I will quickly describe what depends on what.

Still depends directly on the libqalculate API:

* help command
* plotting
* syntax highlighter
* tab completion

Handled through qalc:

* basic calculations: addition, subtraction etc.
* all the math functions provided by Qalculate: sqrt(), binomial(), integrate() etc.
* saving variables

The segregation part was easy. The other important thing I did was to build a queue-based system for the commands that need to be processed by qalc.


Queue based system

The two important components of this system are:

1. Expression queue: contains the expressions to be processed.
2. Command queue: contains the commands of the expression currently being processed.

* The basic idea behind this system is that we compute only one expression at a time; meanwhile, if we get more expressions from the user, we store them in the queue and process them once the expression currently being processed is complete.

* Another important point: since an expression can contain multiple commands, we store all the commands of an expression in the command queue and, just as we process one expression at a time, we process one command at a time. That is, we give QProcess only one command at a time; this makes the output returned by QProcess less messy and hence easier to parse.

* Example: expression1 = (10+12, sqrt(12)): this expression contains multiple commands. The command queue for it will hold two commands.

expression queue          command queue

[ expression 1 ]  ->  [ 10+12 ], [ sqrt(12) ]

[ expression 2 ]  ->  [ help plot ]


We solve all the commands of expression1, parse the output, and then move on to expression2; this goes on until the expression queue is empty.
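The queue handling described above can be sketched roughly as follows (in Python for brevity; the real backend is C++/Qt, and run_in_qalc below is a hypothetical stand-in for writing one command to the qalc QProcess and reading its output):

```python
from collections import deque

def run_in_qalc(command):
    # Hypothetical stand-in for sending one command to the qalc
    # process and reading back its output.
    return "output of " + command

# Each queued expression carries its own list of commands.
expression_queue = deque([
    ["10+12", "sqrt(12)"],   # expression 1: two commands
    ["help plot"],           # expression 2: one command
])

results = []
while expression_queue:
    # Process one expression at a time; new expressions arriving
    # meanwhile would simply be appended to expression_queue.
    command_queue = deque(expression_queue.popleft())
    outputs = []
    while command_queue:
        # Hand only one command at a time to the process, so the
        # returned output stays unambiguous and easy to parse.
        outputs.append(run_in_qalc(command_queue.popleft()))
    results.append(outputs)
```

Feeding one command at a time is the design choice that keeps parsing simple: the backend always knows which command the next chunk of output belongs to.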


Apart from this, I worked on the variable model of Qalculate. Qalc provides a lot of variations of the save command. The different commands available are listed below.

Not every command mentioned below has been implemented, but the important ones have been.

1. save(value, variable, category, title): Implemented

This function is available through the qalc interface and allows the user to define new variables or override existing variables with the given value.

2. save/store variable: Implemented

This command allows the user to save the current result in a variable with the specified name.

The current result is the last computed result. Using qalc, we can access the last result through ‘ans’, ‘answer’ and a few more variables.

3. save definitions: Not implemented

Definitions include the user-defined variables, functions and units.

4. save mode: Not implemented

The mode is the user’s configuration, which includes things like ‘angle unit’ and ‘multiplication sign’.


With this, most of the important functionality has been ported to qalc, but there are still a few things for which we depend on the API directly. Hopefully, with a newer version of qalc, we will in the future be able to remove the direct use of the API from Cantor.

Thanks and Happy hacking

Older blog entries

Planet KDE is made from the blogs of KDE's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet KDE with RSS, FOAF or OPML.