May 18, 2018

Plasma 5.13 Beta

KDE Plasma 5.13 Beta

Thursday, 17 May 2018. Today KDE unveils a beta release of Plasma 5.13.0.

Members of the Plasma team have been working hard to continue making Plasma a lightweight and responsive desktop which loads and runs quickly, but remains full-featured with a polished look and feel. We have spent the last four months optimising startup and minimising memory usage, yielding faster time-to-desktop, better runtime performance and less memory consumption. Basic features like panel popups were optimised to make sure they run smoothly even on the lowest-end hardware. Our design teams have not rested either, producing beautiful new integrated lock and login screen graphics.

New in Plasma 5.13

Plasma Browser Integration

Plasma Browser Integration is a suite of new features which make Firefox, Chrome, and Chromium-based browsers work with your desktop. Downloads are now displayed in the Plasma notification popup just as when transferring files with Dolphin. The Media Controls Plasmoid can mute and skip videos and music playing from within the browser. You can send a link to your phone with KDE Connect. Browser tabs can be opened directly using KRunner via the Alt-Space keyboard shortcut. To enable Plasma Browser Integration, add the relevant plugin from the addon store of your favourite browser.

Plasma Browser Integration for Downloads

Plasma Browser Integration for Media Controls


System Settings Redesigns

Our settings pages are being redesigned. The KDE Visual Design Group has reviewed many of the tools in System Settings and we are now implementing those redesigns. KDE's Kirigami framework gives the pages a slick new look. We started off with the theming tools, comprising the icons, desktop themes, and cursor themes pages. The splash screen page can now download new splashscreens from the KDE Store. The fonts page can now display previews for the sub-pixel anti-aliasing settings.

Desktop Theme

Font Settings

Icon Themes

Redesigned System Settings Pages

New Look for Lock and Login Screens

Our login and lock screens have a fresh new design, displaying the wallpaper of the current Plasma release by default. The lock screen now incorporates a slick fade-to-blur transition to show the controls, allowing it to be easily used like a screensaver.

Lock Screen

Login Screen


Improved Blur Effect in the Dash Menu


Graphics Compositor

Our compositor KWin gained much-improved effects for blur and desktop switching. Wayland work continued, with the return of window rules, the use of high priority EGL Contexts, and initial support for screencasts and desktop sharing.

Discover's Lists with Ratings, Themed Icons, and Sorting Options



Discover, our software and addon installer, has more features and sports improvements to the look and feel.

Using our Kirigami UI framework we improved the appearance of lists and category pages, which now use toolbars instead of big banner images. Lists can now be sorted, and use the new Kirigami Cards widget. Star ratings are shown on lists and app pages. App icons use your local icon theme to better match your desktop settings. All AppStream metadata is now shown on the application page, including all URL types. And for users of Arch Linux, the Pacman log is now displayed after software updates.

Work has continued on bundled app formats. Snap support now allows user control of app permissions, and it's possible to install Snaps that use classic mode. And the 'snap://' URL format is now supported. Flatpak support gains the ability to choose the preferred repository to install from when more than one is set up.

Much More

Other changes include:

  • A tech preview of GTK global menu integration.
  • Redesigned Media Player Widget.
  • Plasma Calendar plugin for astronomical events, currently showing lunar phases and astronomical seasons (equinoxes, solstices).
  • xdg-desktop-portal-kde, used to give desktop integration for Flatpak and Snap applications, gained support for screenshot and screencast portals.
  • The Digital Clock widget allows copying the current date and time to the clipboard.
  • The notification popup has a button to clear the history.
  • More KRunner plugins to provide easy access to Konsole profiles and the character picker.
  • The Mouse System Settings page has been rewritten for libinput support on X and Wayland.
  • Plasma Vault has a new CryFS backend, commands to remotely close open vaults with KDE Connect, offline vaults, a more polished interface and better error reporting.
  • A new dialog pops up when you first plug in an external monitor so you can easily configure how it should be positioned.
  • Plasma gained the ability to fall back to software rendering if OpenGL drivers unexpectedly fail.

GEdit with Title Bar Menu

Redesigned Media Player Widget

Connect an External Monitor


Live Images

The easiest way to try it out is with a live image booted off a USB disk. Docker images also provide a quick and easy way to test Plasma.

Download live images with Plasma 5
Download Docker images with Plasma 5

Package Downloads

Distributions have created, or are in the process of creating, packages listed on our wiki page.

Package download wiki page

Source Downloads

You can install Plasma 5 directly from source.

Community instructions to compile it
Source Info Page


You can give us feedback and get updates on Facebook, Twitter, or Google+.

Discuss Plasma 5 on the KDE Forums Plasma 5 board.

You can provide feedback direct to the developers via the #Plasma IRC channel, Plasma-devel mailing list or report issues via bugzilla. If you like what the team is doing, please let them know!

Your feedback is greatly appreciated.

Now that the beta 1 release of Qt 3D Studio 2.0 is out, let’s go through the steps involved in trying it out for real.

Pre-built runtime binaries

As outlined in last year’s post, the runtime component of Qt 3D Studio has undergone significant changes. This means that the Viewer application launched whenever the green “Play” button is pressed in Qt 3D Studio and, more importantly, the C++ and QML APIs with the engine underneath, have all been rewritten in familiar Qt-style C++ on top of Qt 3D. In practice the runtime is an ordinary Qt module, providing both C++ libraries with classes like Q3DSWidget, Q3DSPresentation, etc. and a QML plugin with Studio3D and friends.

The availability of pre-built binaries for the runtime has improved a lot for version 2.0: there is now a dedicated entry in the Qt installer which will install the necessary files alongside the normal Qt installation, so pulling in the runtime via QT += 3dstudioruntime2 or by import QtStudio3D 2.0 in QML will work out of the box in a fresh Qt installation. No more manual building from sources is needed (except when targeting certain platforms).

Let’s Install

At the moment Qt 3D Studio binaries are provided for Windows and macOS; Linux may be added in future releases. It is worth noting that this does not mean the runtime is not suitable for running on Linux (or Android or QNX or INTEGRITY, for that matter): like most of Qt, it will build (even cross-compile) and run just fine, assuming a well-working OpenGL implementation is available for your platform. (However, glitches can naturally be expected with less stable and complete graphics stacks.) So while we expect the design work in the Qt 3D Studio application to be done on Windows or macOS for now, Qt applications using the created 3D presentations can be developed on and deployed to all the typical Qt target platforms.

When launching the online installer, take note of the following entries. Note that the layout and the location of these items may change in the installer in beta 2 and beyond. What is shown here is how things look as of 2.0 beta 1.

The runtime depends on Qt 5.11, meaning it must be installed together with 5.11.0 (or a future release, here we will use the release candidate).

In the example run shown on the screenshots we opted for Visual Studio 2015, but choosing something else is an option too – the installer takes care of downloading and installing the right set of pre-built binaries for the Qt 3D Studio runtime.



Once installation is finished, you will have the Qt 5.11 RC, a recent version of Qt Creator, Qt 3D Studio 2.0 beta 1, and the necessary runtime libraries all in place.

Let’s Design

Launching Qt 3D Studio and opening the example project from <installation root>\Tools\Qt3DStudio-2.0.0\examples\studio3d\SampleProject should result in something like the following.


For details on what can be done in the designer application, check the documentation. Speaking of which, the documentation for Qt 3D Studio has been split into two parts in version 2.0: the runtime has its own documentation set with the API references, introduction, system requirements, and other topics. (The links currently point to doc-snapshots; the contents will move later on.)

Let’s Code

Designing the presentation is only half of the story. Let’s get it into a Qt application.

Here we will code an example from scratch. Nevertheless, you may want to look at the runtime’s examples first. These ship in <installation root>\examples\Qt-5.11.0\3dstudioruntime2.

The qmldatainput example in action

For now, let’s start with an empty project. Launch Qt Creator and create a new, empty Qt Quick project targeting Qt 5.11.


The application template gives us a main QML file like this:


When the runtime is installed correctly alongside Qt, we can add the following to make our Qt Quick application load up a Qt 3D Studio presentation and compose it with the rest of the Qt Quick scene.

    import QtStudio3D 2.0

    Window {
        Studio3D {
            anchors.fill: parent
            anchors.margins: 10
            Presentation {
                source: "file:///c:/QtTest/Tools/Qt3DStudio-2.0.0/examples/studio3d/SampleProject/SampleProject.uip"
            }
        }
    }

(replace c:\QtTest as appropriate)

In many real world applications it is quite likely that you will want to place the presentation’s files and assets into the Qt resource system instead, but the raw file references will do as a first step.
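For illustration, a hypothetical .qrc file bundling the presentation and its assets could look like the following. (The file names and the prefix are made up for this sketch; use whatever your project actually contains.)

```xml
<RCC>
    <qresource prefix="/presentation">
        <!-- the .uip presentation plus any assets it references -->
        <file>SampleProject.uip</file>
        <file>maps/texture.png</file>
    </qresource>
</RCC>
```

The Presentation source would then point at something like "qrc:/presentation/SampleProject.uip" instead of a raw file path.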

Launching this gives us quite impressive results already:


What we see there is the live Qt 3D Studio presentation rendered by the new Qt3D-based engine into an OpenGL texture which is then used by the Qt Quick scenegraph when drawing a textured quad for the Studio3D element. The keyframe-based animations should all run as defined in the Qt 3D Studio application during the design phase.

Now, a common misconception is that Qt 3D Studio is merely a designer tool and the scenes created by it are static, with no runtime modifications possible when loaded up in Qt applications. This is pretty incorrect, since applications have full control over the objects in the scene and their attributes at runtime.

Future versions of Qt 3D Studio are expected to extend the capabilities further, for example with adding APIs to spawn and destroy objects in the scene dynamically.

Let’s change that somewhat annoying text in the middle of the scene, using a button from Qt Quick Controls 2. Whenever the button is clicked, the attribute corresponding to the text rendered by the Text node in the Qt 3D Studio scene will be changed.

Option 1: Direct Attribute Access


When digging down the scene structure at the left of the timeline pane, take note of the fact that the Text node already has a convenient, unique name (DateAndTime). This is good since it means we can refer to it without further ado in the application code:

import QtQuick.Controls 2.2

Window {

    Studio3D {
        anchors.fill: parent
        anchors.margins: 10
        Presentation {
            id: pres
            source: "file:///c:/QtTest/Tools/Qt3DStudio-2.0.0/examples/studio3d/SampleProject/SampleProject.uip"
        }
    }

    Button {
        text: "Change text"
        anchors.bottom: parent.bottom
        anchors.horizontalCenter: parent.horizontalCenter
        onClicked: pres.setAttribute("DateAndTime", "textstring", "Hello World")
    }
}

For details on this approach, check out the documentation for Presentation and the attribute name table.

Clicking the button changes the ‘textstring’ property of the Text node in the Qt 3D Studio presentation. The change is then picked up by the Qt 3D Studio engine, which updates the rendered text.

Option 2: Data Input

While changing properties via setAttribute & co. is simple and effective, Qt 3D Studio also supports another approach: so-called data inputs. With data inputs, the designers of the 3D scene decide which attributes are controllable by the applications and assign custom names to these. This way a well-known, fixed interface is provided from the designers to the application developers. The example presentation we are using here exposes the following data inputs (note the orange-ish highlight for Text String in the inspector control in the bottom right corner).


The name dateAndTime is associated with the textstring property of the DateAndTime Text node. Let’s control the value via this approach:

import QtQuick.Controls 2.2

Window {

    Studio3D {
        anchors.fill: parent
        anchors.margins: 10
        Presentation {
            id: pres
            source: "file:///c:/QtTest/Tools/Qt3DStudio-2.0.0/examples/studio3d/SampleProject/SampleProject.uip"

            property string text: ""
            DataInput {
                name: "dateAndTime"
                value: pres.text
            }
        }
    }

    Button {
        text: "Change text"
        anchors.bottom: parent.bottom
        anchors.horizontalCenter: parent.horizontalCenter
        onClicked: pres.text = "Hello World"
    }
}

Here the Text node starts up with a textstring value of “” (but note there is no actual reference to “textstring” anywhere in the application code), while clicking the button results in changing it to “Hello World”. What’s more, we can now use the usual QML property binding methods to control the value.

That is all for now. Expect more Qt 3D Studio related posts in the near future.

The post Get Started with Qt 3D Studio 2.0 beta 1 appeared first on Qt Blog.

KDE Student Programs is happy to present our 2018 Google Summer of Code students to the KDE Community.

Welcome Abhijeet Sharma, Aman Kumar Gupta, Amit Sagtani, Andrey Cygankov, Andrey Kamakin, Anmol Gautam, Caio Jordão de Lima Carvalho, Chinmoy Ranjan Pradhan, Csaba Kertesz, Demetrio Carrara, Dileep Sankhla, Ferencz Kovács, Furkan Tokac, Gun Park, Iván Yossi Santa María González, Kavinda Pitiduwa Gamage, Mahesh S Nair, Tarek Talaat, Thanh Trung Dinh, Yihang Zhou, and Yingjie Liu!

KDE Google Summer of Code mentors at Akademy 2017. Photo by Bhushan Shah.

Students will work on improving KStars for Android.

This year digiKam, KDE's professional photo management application, has three students: Tarek Talaat will be working on supporting Twitter and One Drive services in digiKam export, Thanh Trung Dinh on Web Services tools authentication with OAuth2, and Yingjie Liu on adding the possibility to manually sort the digiKam icon view.

Plasma, KDE's graphical desktop environment, will also be mentoring three students. Abhijeet Sharma will be working on fwupd integration with Discover (KDE's graphical software manager), Furkan Tokac will improve handling for touchpads and mice with Libinput, and Gun Park will port keyboard input modules to Qt Quick and expand scope to cover input method configuration for System Settings.

Another project with three students is Krita, KDE's popular graphic editor and painting application. Andrey Kamakin will improve multithreading in Krita's Tile Manager; Iván Yossi Santa María González (ivanyossi) will optimize Krita Soft, Gaussian and Stamp brushes mask generation to use AVX with Vc Library; and Yihang Zhou (Michael Zhou) is creating a Swatches Docker for Krita.

GCompris, the suite of educational programs and games for young learners, takes two students: Aman Kumar Gupta will port the GTK+ piano activities, getting GCompris one step closer to version 1.0, and Amit Sagtani will work on creating bitmap drawing and animation activities while preparing GCompris for version 1.0.

LabPlot, KDE's application for scientific data plotting and analysis, also mentors two students. Andrey Cygankov will add support for importing data from web services in LabPlot, and Ferencz Kovács will be working on plotting of live MQTT data.

Falkon, a new member of the KDE family, will also get some GSoC love.

Okular, KDE's PDF and document viewer, gets another two students: Chinmoy Ranjan Pradhan will be working on verifying signatures of PDF files, while Dileep Sankhla will implement the FreeText annotation with FreeTextTypeWriter behavior.

For Falkon, a community-developed web browser and a new member of the KDE family, Anmol Gautam will be working on JavaScript/QML extension support. Meanwhile, Caio Jordão de Lima Carvalho will finish LVM support and implement RAID support in KDE Partition Manager and Calamares (an advanced system installer).

Csaba Kertesz (kecsap) will aim to improve the desktop and the Android version of KStars, KDE's planetarium program, while Kavinda Pitiduwa Gamage will work on KGpg, KDE's graphical key management application, to make it better.

Mahesh S. Nair will expand Peruse Creator, adding more features to KDE's easy-to-use comic book reader. Finally, Demetrio Carrara will be working on the WikitoLearn production-ready Progressive Webapp (PWA).

Traditionally, Google Summer of Code starts with an introduction period where students get to know their mentors, after which they start coding. The coding period for 2018 began on May 14 and will last until August 6. We wish all our students a productive, successful, and fun summer!

A simple greeting post for guests to get to know my GSoC topic.

Hello all! This is my first time writing about my work progress in a blog, so some things are still awkward for me. It is also my first time participating in GSoC, and many things are new to me. I’m working with the KDE organisation, or rather with one of its projects, Krita.

I hope that I will gain great experience from working with and learning from the community, and manage to accomplish all my tasks for this summer. The coding has already started, so wish me luck :).

May 17, 2018

This mini tutorial aims to show you the fundamentals of creating a RESTful application with Qt, as a client and as a server with the help of Cutelyst.

Services with REST APIs have become very popular in recent years, and interacting with them may be necessary to integrate with other services and keep your application relevant. It may also be interesting to replace your own custom protocol with a REST implementation.

REST is strongly associated with JSON; however, JSON is not required for a service to be RESTful. The way data is exchanged is chosen by whoever defines the API, i.e. it is possible to have REST exchanging messages in XML or another format. We will use JSON for its popularity and simplicity, and because the QJsonDocument class is present in the Qt Core module.

A REST service is mainly characterized by making use of otherwise little-used HTTP headers and methods. Browsers basically use GET to get data and POST to send form and file data; REST clients, however, will also use other methods like DELETE, PUT and HEAD. Concerning headers, many APIs define custom headers for authentication; for example, X-Application-Token can contain a key generated only for the application of a user X, so that if this header does not contain the correct data the client will not have access to the data.

Let’s start by defining the server API:

  • /api/v1/users
    • GET – Gets the list of users
      • Answer: [“uuid1”, “uuid2”]
    • POST – Register new user
      • Send: {“name”: “someone”, “age”: 32}
      • Answer: {“status”: “ok / error”, “uuid”: “new user uuid”, “error”: “msg in case of error”}
  • /api/v1/users/UUID – where UUID should be replaced by the user’s UUID
    • GET – Gets user information
      • Answer: {“name”: “someone”, “age”: 32}
    • PUT – Update user information
      • Send: {“name”: “someone”, “age”: 57}
      • Answer: {“status”: “ok / error”, “error”: “msg in case of error”}
    • DELETE – Delete user
      • Answer: {“status”: “ok / error”, “error”: “msg in case of error”}

For the sake of simplicity we will store the data using QSettings; we do not recommend it for real applications, but SQL or something like that is beyond the scope of this tutorial. We also assume that you already have Qt and Cutelyst installed. The code is available on GitHub.
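Before writing any Qt code, it can help to see the whole contract in miniature. The following Python sketch is purely illustrative (the tutorial's server uses C++, Cutelyst and QSettings); it mimics the storage semantics behind the API defined above with an in-memory dict:

```python
import json
import uuid

# In-memory stand-in for the server's storage; keys are user UUIDs,
# values are {"name": ..., "age": ...} dicts.
users = {}

def users_get():                      # GET /api/v1/users
    return json.dumps(list(users))

def users_post(body):                 # POST /api/v1/users
    data = json.loads(body)
    new_id = str(uuid.uuid4())
    users[new_id] = {"name": data["name"], "age": data["age"]}
    return json.dumps({"status": "ok", "uuid": new_id, "error": ""})

def user_get(uid):                    # GET /api/v1/users/UUID
    return json.dumps(users[uid])

def user_put(uid, body):              # PUT /api/v1/users/UUID
    if uid not in users:
        return json.dumps({"status": "error", "error": "no such user"})
    users[uid].update(json.loads(body))
    return json.dumps({"status": "ok", "error": ""})

def user_delete(uid):                 # DELETE /api/v1/users/UUID
    if users.pop(uid, None) is None:
        return json.dumps({"status": "error", "error": "no such user"})
    return json.dumps({"status": "ok", "error": ""})
```

Each function corresponds to one endpoint/method pair from the list above; the real server maps them to C++ methods via Cutelyst's REST action class, as shown next.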

Part 1 – RESTful Server with C++, Cutelyst and Qt

First we create the server application:

$ cutelyst2 --create-app ServerREST

And then we will create the Controller that will have the API methods:

$ cutelyst2 --controller ApiV1

Once the new class has been created, instantiate it in serverrest.cpp, in the init() method:

#include "apiv1.h"

bool ServerREST::init()
{
    new ApiV1(this);

    return true;
}

Add the following methods to the file “apiv1.h”

C_ATTR(users, :Local :AutoArgs :ActionClass(REST))
void users(Context *c);

C_ATTR(users_GET, :Private)
void users_GET(Context *c);

C_ATTR(users_POST, :Private)
void users_POST(Context *c);

C_ATTR(users_uuid, :Path('users') :AutoArgs :ActionClass(REST))
void users_uuid(Context *c, const QString &uuid);

C_ATTR(users_uuid_GET, :Private)
void users_uuid_GET(Context *c, const QString &uuid);

C_ATTR(users_uuid_PUT, :Private)
void users_uuid_PUT(Context *c, const QString &uuid);

C_ATTR(users_uuid_DELETE, :Private)
void users_uuid_DELETE(Context *c, const QString &uuid);

The C_ATTR macro is used to add metadata about the class that the MOC will generate, so Cutelyst knows how to map the URLs to those functions.

  • :Local – Map method name to URL by generating /api/v1/users
  • :AutoArgs – Automatically checks the number of arguments after the Context *; in users_uuid we have only one, so the method will be called if the URL is /api/v1/users/any-thing
  • :ActionClass(REST) – Will load the REST plugin, which creates an Action class to take care of this method; ActionREST will call the other methods depending on the HTTP method used
  • :Private – Registers the action as private in Cutelyst, so that it is not directly accessible via URL

This is enough to have automatic mapping depending on the HTTP method for each function. It is important to note that the first function (the one without _METHOD) is always executed; for more information see the API documentation of ActionREST.
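The dispatch behavior described above can be sketched generically (hypothetical Python, not Cutelyst code): the base action always runs first, then the method-specific variant is invoked if it exists:

```python
# handlers maps action names to callables, e.g. "users" and "users_GET".
def dispatch(handlers, action, http_method, *args):
    # The base action (the one without _METHOD) is always executed first.
    handlers[action](*args)
    # Then the HTTP-method-specific variant, if present, is called.
    specific = handlers.get(f"{action}_{http_method}")
    if specific:
        specific(*args)
```

So a GET on /api/v1/users would run users() and then users_GET(), while an unsupported method would only run the base action.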

For brevity I will show only the GET code for users; the rest can be seen on GitHub:

void ApiV1::users_GET(Context *c)
{
    QSettings s;
    const QStringList uuids = s.childGroups();

    // Respond with the list of known UUIDs as a JSON array
    c->response()->setJsonArrayBody(QJsonArray::fromStringList(uuids));
}


After all the methods are implemented, start the server:

cutelyst2 -r --server --app-file path_to_it

To test the API you can try a POST with curl:

curl -H "Content-Type: application/json" -X POST -d '{"name": "someone", "age": 32}' http://localhost:3000/api/v1/users

Okay, now you have a REST server application, made with Qt, with one of the fastest answers in the old west.

No, it’s serious, check out the benchmarks.

Now let’s go to part 2, which is to create the client application that will consume this API.

Part 2 – REST Client Application

First create a QWidgets project with a QMainWindow. The goal here is just to see how to create REST requests from Qt code, so we assume that you are already familiar with creating graphical interfaces with it.

Our interface will be composed of:

  • 1 – QComboBox where we will list users’ UUIDs
  • 1 – QLineEdit to enter and display the user name
  • 1 – QSpinBox to enter and view user age
  • 2 – QPushButton
    • To create or update a user’s registry
    • To delete the user record

Once the interface is designed, our QMainWindow subclass needs a pointer to a QNetworkAccessManager, the class responsible for handling communication with network services such as HTTP and FTP. This class works asynchronously; like a browser, it will create up to 6 simultaneous connections to the same server, and if you make more requests at the same time it will put them in a queue (or pipeline them if set).

Then create a QNetworkAccessManager *m_nam; as a member of your class so we can reuse it. Our request to obtain the list of users will be quite simple:

QNetworkRequest request(QUrl("http://localhost:3000/api/v1/users"));

QNetworkReply *reply = m_nam->get(request);
connect(reply, &QNetworkReply::finished, this, [this, reply] {
    const QJsonDocument doc = QJsonDocument::fromJson(reply->readAll());
    const QJsonArray array = doc.array();

    for (const QJsonValue &value : array) {
        // uuidCB is the QComboBox from our UI form (name is illustrative)
        ui->uuidCB->addItem(value.toString());
    }
    reply->deleteLater();
});

This fills our QComboBox with the data received via GET from the server. Now let's look at the registration code, which is a little more complex:

QNetworkRequest request(QUrl("http://localhost:3000/api/v1/users"));
request.setHeader(QNetworkRequest::ContentTypeHeader, "application/json");

QJsonObject obj {
    {"name", ui->nameLE->text()},
    {"age", ui->ageSP->value()}
};

QNetworkReply *reply = m_nam->post(request, QJsonDocument(obj).toJson());
connect(reply, &QNetworkReply::finished, this, [this, reply] {
    const QJsonDocument doc = QJsonDocument::fromJson(reply->readAll());
    const QJsonObject obj = doc.object();

    if (obj.value("status").toString() == "ok") {
        // add the freshly created UUID to the combo box
        ui->uuidCB->addItem(obj.value("uuid").toString());
    } else {
        qWarning() << "ERROR" << obj.value("error").toString();
    }
    reply->deleteLater();
});

With the above code we send an HTTP request using the POST method, which, like PUT, accepts sending data to the server. It is important to inform the server what kind of data it will be dealing with, so the “Content-Type” header is set to “application/json”; Qt issues a warning on the terminal if the content type has not been defined. As soon as the server responds, we add the new UUID to the combo box so that it stays up to date without having to fetch all UUIDs again.

As demonstrated, QNetworkAccessManager already has methods ready for the most common REST actions; however, if you want to send a request of a type it has no dedicated method for, OPTIONS for example, you will have to create a custom request:

m_nam->sendCustomRequest(request, "OPTIONS");

Did you like the article? Help by giving a star to Cutelyst and/or supporting me on Patreon

May 16, 2018

Elisa is a music player developed by the KDE community that strives to be simple and nice to use. We also recognize that we need a flexible product to account for the different workflows and use-cases of our users.
We focus on a very good integration with the Plasma desktop of the KDE community without compromising the support for other platforms (other Linux desktop environments, Windows and Android).
We are creating a reliable product that is a joy to use and respects our users' privacy. As such, we prefer to support online services where users are in control of their data.

Improvements to the Elisa Interface

We have got a new application icon from Paul Lesur. Thanks for the help.

New Elisa Application Icon

Diego Gangl has been working on improving the design of the interface. Among the changes, some were suggested quite some time ago by the KDE VDG.

The main controls of playback are now all on the same horizontal bar. This way, it should be much easier to locate them.

The view selector list also has a new, simpler and cleaner look. I especially appreciate the removal of the not-quite-working color animations of the current item indicator.

Alexander Stippich added a small button that allows sorting the views in ascending or descending order. They are now sorted by default: the data comes from the tracks database in sorted order (done by Kevin Ottens), with case ignored.

Current Interface with Many Improvements

Many Improvements to Track Properties

A lot of work has recently gone into the Baloo and KFileMetaData frameworks. While this is not strictly related to the Elisa project, it is a huge improvement.

Elisa got hugely improved support for discovering existing image files with a cover. This work has been done by Kevin Ottens and Alexander Stippich. There are still cases where the cover might not be found, for example when the cover file name contains spaces. We may also want to be able to select the best one among many.

Elisa got fixes to better support track properties with multiple values. The current state still does not fully support lists of authors, for example, but we will eventually get there.

Performance Improvements when Checking Existing Music at Start

Elisa stores your discovered music in a small database to allow a quick start. Due to that, it needs to check if known tracks have changed or have been moved or removed. It also has to check if new music has been added.

This had been quite fast when using the Baloo support but was really slow for big music collections indexed by the Elisa specific indexer.

It now stores the latest modification time of the files (made possible by requiring Qt 5.10) and will only parse the properties of a track if needed. I did tests with a collection of around 4000 tracks and the improvement is really noticeable.
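Conceptually, the optimization amounts to comparing stored modification times before re-parsing anything. Here is a hypothetical sketch (Python for brevity; Elisa itself is C++/Qt, and the function and parameter names are made up):

```python
import os

def rescan(paths, known, parse):
    """Re-parse only files that are new or whose mtime changed.

    known maps path -> (mtime, metadata) as remembered from the last run;
    parse is the expensive per-file metadata extraction.
    """
    current = {}
    for path in paths:
        mtime = os.path.getmtime(path)
        if path in known and known[path][0] == mtime:
            current[path] = known[path]           # unchanged: reuse metadata
        else:
            current[path] = (mtime, parse(path))  # new or modified: re-parse
    return current  # files absent from `paths` are implicitly dropped
```

With a mostly unchanged collection, almost every file hits the cheap reuse branch, which is why startup gets noticeably faster for large collections.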

If you still notice problems, please send us your feedback.

Ongoing work

Alexander Stippich is working on file system browsing from inside Elisa.

This work will be especially useful for playing tracks with missing or bad properties.

It also benefits everybody, since this work has resulted in fixes to shared code, such as falling back to the file name when no title property is present.

I am working on allowing users to browse music by genre. This has led to many improvements to the automatic tests and the tracks database.

Work In Progress: Browsing by Genre

One of the interesting side improvements is that only the currently visible view is now loaded in the interface. This should improve startup time and resource consumption.

Next Steps

We have decided to modify our release schedule a little bit: two months before the feature freeze, and only one more month before the release of the next stable version (i.e. the 0.2 version). The next milestone is the 2nd of June.

We Need your Help

There are quite a few small tasks open on the Elisa workboard. They are a really good opportunity to join the project; everybody will be happy to help and guide you. We will identify some junior jobs (an idea I borrowed from a recent blog post by the KDE Connect team).

We also need help with all the other tasks: triaging bugs, working on code, improving the design, and helping with promotion.

Coming close to the next release of LabPlot, the last new feature in this release that we want to introduce is the support for live data. This feature was developed by Fábián Kristóf during the “Google Summer of Code 2017” program. In this context, support for live data refers to data that changes frequently and the ability of the application to visualize this changing data.

Prior to the upcoming release, the only supported workflow in LabPlot was to import the data from an external file into LabPlot’s data containers and do the visualization there. On data changes, the user needed to re-import. With LabPlot 2.5 we introduce the “Live Data Source” object, which is “connected” to the actual data source and takes care of re-reading the changed data according to the specified options.

Add Live Data Source

The supported data sources are files, named pipes, Unix/UDP/TCP sockets, and serial ports. The data can be read periodically, with the user specifying the time interval, or alternatively on data changes. At the moment, the only supported formats are ASCII and binary data.

The consumption of the data in plots is pretty much the same as when using spreadsheets – the user simply selects the columns of such a live data source object as the sources of the data to be plotted.

The following video shows a combination of Gnucap and LabPlot and demonstrates LabPlot’s ability to monitor a file, re-read it on changes, and automatically update the visualization of the data. A small application written in Python and Qt controls the definition of Gnucap’s circuit file for a biquad filter. The user can change the values of the resistances by moving the sliders. After each change, the circuit simulation is re-calculated in Gnucap to obtain the frequency response of the filter. The results of the simulation are written out to a file which is consumed by LabPlot in a “Live Data Source” object. The Bode plot in LabPlot is updated automatically on every change of one of the resistance values.

This example was contributed by Orestes Mas from the Technical University of Catalonia. Fábián’s blog contains more examples and details for this feature.

Another feature that is not directly related to the actual reading of live data, but that comes in handy when dealing with such data, is the ability to quickly specify the number of data points to be plotted. In addition to the already available “free ranges”, where the user can freely navigate through the data, the options “Show first N points” and “Show last N points” were added to the plot. A couple of examples for these new data range options are given in another blog post.

Here is GCompris 0.91, a new bugfix release to correct some issues in the previous version and improve a few things.

Every GNU/Linux distribution shipping 0.90 should update to 0.91.


With 68 commits since the last release, the full changelog is too long for this post. But here is a list summarizing the changes.


  • fix English text in several activities
  • fix score position in several activities
  • block some buttons and interactions when needed in several places
  • lots of fixes for audio in several activities
  • number_sequence (and others based on it), fix base layout
  • update dataset for clickanddraw, drawnumbers and drawletters
  • crane, add localized dataset
  • lightsoff, add keyboard support and other fixes
  • algorithm, add keyboard support and other fixes
  • money, fixes for locale currency used
  • ballcatch, improve audio feedback
  • calendar, several little fixes
  • memory-case-association, fix icon sizes

Other changes:

  • re-enable sound effects on Linux
  • improved playback of sound effects, no more delay
  • add captions to images and OARS tags in the appdata
  • add Scottish Gaelic to core, and update some datasets for it
  • main bar, fix the size of some items
  • remove unused images

You can find this new version on the download page, and soon in the Play store and Windows store.

On the translation side, we have 16 languages fully supported: British English, Catalan, Catalan (Valencian), Chinese Traditional, Dutch, French, Greek, Indonesian, Irish Gaelic, Italian, Polish, Portuguese, Romanian, Spanish, Swedish, Ukrainian.
We also have 15 languages partially supported: Norwegian Nynorsk (97%), Hindi (96%), Turkish (90%), Scottish Gaelic (86%), Galician (86%), Brazilian Portuguese (84%), Belarusian (84%), German (81%), Chinese Simplified (79%), Russian (78%), Estonian (77%), Slovak (76%), Finnish (76%), Slovenian (69%), Breton (65%).

If you want to help completing one of these translations or adding a new one, please contact us.

Otherwise, you can still help by making some posts about GCompris in your community, and don’t hesitate to give feedback.

Thank you all,
Timothée & Johnny

It has been some days now since the Ceph and CloudStack Day in London last month. It was a great event, with great presentations and a lot of networking with the local community.

You can find my presentation on "Email Storage with Ceph" online, as well as the break slides with some impressions from the last few Ceph events.

Some of the other presentations can be found here and some pictures from the day in this album.

We are getting close to releasing Qt 3D Studio 2.0, and the first beta is released today. The first beta packages are now available through the Qt online installer and Qt downloads. Let’s have a quick summary of the changes and new features we are introducing. For detailed information about Qt 3D Studio, please visit our web pages.

New Runtime & Viewer Application

One of the biggest changes in the 2.0 release is the new Qt 3D Studio runtime, which is built on top of the Qt 3D module. From the user's perspective this change is not directly visible, but it is a very important item under the hood. Using Qt 3D means deeper and easier integration with the Qt framework. It also makes it easier for us to introduce new features from both the tooling and 3D engine perspectives. A first example of the new features is the Qt 3D Studio 2.0 Viewer's debugging and profiling UI, which shows basic information about the rendering performance and the structure of the 3D scene.

Qt 3D Studio Viewer Profile and Debug view

Deploying your UI project to the Viewer application on the target device with debugging and profiling enabled allows you to immediately see the potential performance bottlenecks in your design. You can also spot cases where you are using large textures, or where 3D models have a lot of parts which are not visible.

Improved Data Input

Data Inputs are a mechanism for integrating the user interface with the application logic. The Data Input functionality in the 1.1 release was somewhat limited, essentially offering support only for controlling animation timelines and changing slides. With the 2.0 release we have introduced several new data types which can now easily be tied to different object properties in the 3D scene.

Improved Data Input

For more details on using Data Inputs, please refer to the documentation.

Editor Improvements

The biggest change on the editor side is the new timeline view, which has been totally rewritten. This rewrite makes it easier for us to introduce new features in the future, and already in the 2.0 release we have made several small usability improvements.

New timeline

We have also made changes to the shortcut keys and mouse usage so that switching between different camera views (Perspective, Top, Scene Camera, etc.) and panning and orbiting are easier, and the behavior is closer to common practice in 3D design tools. For more details, please refer to the keyboard shortcuts documentation.


Qt 3D Studio 2.0 beta releases are available through the Qt online installer under the Preview section. The Qt online installer can be obtained from the Qt Download page, and commercial license holders can find the packages in their Qt Account.

Some example projects can be found under the examples folder in the installation directory. Additional examples and demo applications can be found in the repository. If you encounter issues with using Qt 3D Studio, or would like to suggest a new feature, please use the Qt 3D Studio project in the bug tracker.

The post Qt 3D Studio 2.0 Beta Available appeared first on Qt Blog.


I am Gun Park, and I’m excited to finally join the wonderful KDE community through this amazing opportunity called Google Summer of Code 2018. Thanks for all the people that have supported and led me to this journey!

My project is porting the Keyboard configuration module in System Settings to the modern Qt Quick framework, with Wayland in mind. Right now, it’s only capable of configuring Xkb. But Xkb alone is useless without an IME when trying to input languages like Chinese, Japanese, and Korean. Until now, if you spoke one of those languages, you had to go to the command line and manually install fcitx and the corresponding module for your language in order to use it at all. So I’m planning to add a seamless interface for configuring the IME, to do all that automatically.

Currently, the Keyboard module codebase is ancient, depends on a lot of legacy stuff, and is a bit messy after standing the Test of Time. It needs a new backend, and a fancy new UI built with modern Qt Quick too.

So far I have been experimenting with QML, and have replicated the existing interface in QML. But it seems like it will have to change a lot. Let’s look it over.

This is the first screen you see when you go into System Settings -> Inputs -> Keyboard (actually, my replica of it in QML):

Hardware Panel

Also the second panel:

Layouts Panel

The third panel, “Advanced”, is a bunch of miscellaneous settings from the xkb configuration file; not all of them might be necessary, and they could certainly be placed in better spots. So this needs to be completely redesigned in collaboration with the KDE Visual Design Group.

I have also been learning how QAbstractItemModel works, and my mentor told me how we can use these models and proxy models to make the program much more flexible and easier to work with. I could definitely see why, and I thought this was very cool. So I have been experimenting with the layout and keyboard models.

The current backend relies on legacy libraries like Xlib and xkblib. We need to transition to a libinput-based solution. (This seems like the biggest piece of work we will need to do, but I still don’t know a lot about it.)

That is what I have figured out so far, thanks to the immense help from my mentor Eike Hein, as the coding period starts.

Qt for Automation has been launched in conjunction with Qt 5.10 as an AddOn to Qt for Application Development or Qt for Device Creation. One module in that offering is our client-side solution for MQTT called Qt MQTT.

Qt MQTT focuses on the client side, helping developers create sensor devices and/or gateways that manage data to be sent via MQTT. MQTT describes itself as lightweight, open and simple. The principal idea is to reduce the protocol overhead as much as possible. The most used version of this protocol is MQTT 3.1.1, usually referred to as MQTT 4. The MQTT 5 standard has been agreed on recently, and we are working on adding support for it in Qt MQTT. But that will be part of another announcement later this year.

To verify the functionality of a module, and also to provide guidelines for developers on its usage, The Qt Company created a demo called SensorTag. We will use this demo as the basis for this blog series.

A brief introduction can be viewed in this video:

The source code of this demo is located in our Boot 2 Qt demo repository.

Basically, this demo describes a scenario in which multiple sensors report data to a gateway (in this case a Raspberry Pi 3) via Bluetooth, which in turn sends the data to an MQTT broker in the cloud. Multiple gateways mesh up to form a sensor network.


Each sensor propagates updates to what it measures, more specifically:

  • Ambient Temperature
  • Object Temperature
  • Acceleration
  • Orientation / angular velocity
  • Magnetism
  • Altitude
  • Light
  • Humidity

As the focus of this series is about data transmission, the Sensor / Gateway combination will be summed up as “device”.

The demo serves two purposes: it has a pleasant user interface, which appeals to many, and it showcases how easy it is to integrate MQTT (and Qt MQTT).

MQTT is a publish/subscribe protocol, which implies that data is published on a specific topic, and recipients register themselves with a broker (server) to receive a notification for each message published.
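The publish/subscribe flow can be sketched with a toy in-memory dispatcher (an illustration of the concept only; `ToyBroker` is a made-up name, not part of Qt MQTT, and a real broker also handles wildcards, QoS levels and retained messages):

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Toy in-memory publish/subscribe dispatcher: subscribers register a
// callback per topic; publishing on a topic notifies every callback
// registered for exactly that topic.
using Callback = std::function<void(const std::string&)>;

class ToyBroker
{
public:
    void subscribe(const std::string& topic, Callback callback)
    {
        m_subscribers[topic].push_back(std::move(callback));
    }

    void publish(const std::string& topic, const std::string& payload)
    {
        for (const auto& callback : m_subscribers[topic])
            callback(payload);
    }

private:
    std::map<std::string, std::vector<Callback>> m_subscribers;
};
```

The important property is that publishers and subscribers never know about each other; they only share topic names.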

For the devices above, available messages are:

  • [Topic: Sensors/active, Data: ID]: Every 5 seconds a sensor publishes an “Online” message, notifying subscribers that the device is still active and sending data. On the initial connect it also registers a will message, “Offline”. A will message is sent by the broker whenever a device disconnects, notifying all subscribers with this “Last Will”. Hence, as soon as the network is disconnected, the broker broadcasts the last will to all subscribed parties.
  • [Topic: Sensor/<sensorID>/<datatype>/, Data: <value>]: This is sent by each sensor whenever a value for a specific datatype from the list above changes. The user interface from the video subscribes to these messages and updates the graphics accordingly.

Side note: one additional use case could be to check the temperature of all devices. In that case, “wildcard” subscriptions can be very useful. Subscribing to “Sensor/+/temperature” will lead to receiving temperature updates from all sensors.
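The level-by-level matching behind the `+` wildcard can be sketched as follows (a simplified reading of the MQTT 3.1.1 matching rules; the `#` multi-level wildcard is left out, and the function names are mine):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Split a topic on '/' into its levels.
std::vector<std::string> topicLevels(const std::string& topic)
{
    std::vector<std::string> levels;
    std::stringstream stream(topic);
    std::string level;
    while (std::getline(stream, level, '/'))
        levels.push_back(level);
    return levels;
}

// Match a concrete topic against a filter where '+' matches exactly
// one topic level; all other levels must match literally.
bool topicMatches(const std::string& filter, const std::string& topic)
{
    const auto f = topicLevels(filter);
    const auto t = topicLevels(topic);
    if (f.size() != t.size())
        return false;
    for (std::size_t i = 0; i < f.size(); ++i)
        if (f[i] != "+" && f[i] != t[i])
            return false;
    return true;
}
```

Note that because `+` matches exactly one level, “Sensor/+/temperature” does not match a deeper topic such as “Sensor/42/a/temperature”.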

Currently, around 10 sensors are registered at the same time at peak, mostly when the demo is presented at trade shows. As mentioned above, the demo code exists to showcase an integration, not necessarily the most performant or energy-saving solution. When moving towards production, a couple of questions need to be asked:

  • What would happen in a real-world scenario? Would the demo scale up to thousands of devices reporting data?
  • Does the demo fit the requirements on hardware and/or battery usage?
  • Is it within the boundaries of communication limitations of Low Power WAN solutions?

In case you are not familiar with LPWAN solutions like LoRa or Sigfox, I highly recommend Massimo’s talk at the Qt World Summit 2017.


To simplify the source to look at, and to reduce it to a minimal non-UI example, a minimal representation is available here. In it, the SensorInformation class has a couple of properties which are updated frequently:

class SensorInformation : public QObject
{
    Q_OBJECT

    Q_PROPERTY(double ambientTemperature READ ambientTemperature WRITE setAmbientTemperature NOTIFY ambientTemperatureChanged)
    Q_PROPERTY(double objectTemperature READ objectTemperature WRITE setObjectTemperature NOTIFY objectTemperatureChanged)
    Q_PROPERTY(double accelerometerX READ accelerometerX WRITE setAccelerometerX NOTIFY accelerometerXChanged)
    Q_PROPERTY(double accelerometerY READ accelerometerY WRITE setAccelerometerY NOTIFY accelerometerYChanged)
    Q_PROPERTY(double accelerometerZ READ accelerometerZ WRITE setAccelerometerZ NOTIFY accelerometerZChanged)
    Q_PROPERTY(double altitude READ altitude WRITE setAltitude NOTIFY altitudeChanged)
    Q_PROPERTY(double light READ light WRITE setLight NOTIFY lightChanged)
    Q_PROPERTY(double humidity READ humidity WRITE setHumidity NOTIFY humidityChanged)

    // ...
};


Whenever a property is updated, the client publishes a message:



The device update rate depends on the type of sensors, items like temperature and light intensity are updated less frequently than, for instance, acceleration.

To get an idea of how much data is sent, example Part 1B hooks into the client’s transport capabilities. The MQTT standard has only a limited number of requirements for the transport: whatever transmits the data must deliver it ordered, lossless, and bi-directionally. Theoretically, anything derived from QIODevice is capable of this. QMqttClient allows you to specify a custom transport via QMqttClient::setTransport(). Here, we use the following:


class LoggingTransport : public QTcpSocket
{
public:
    LoggingTransport(QObject *parent = nullptr);

protected:
    qint64 writeData(const char *data, qint64 len) override;

private:
    void printStatistics();

    QTimer *m_timer;
    QMutex *m_mutex;
    int m_dataSize{0};
};

Inside the constructor a timer is created to invoke printStatistics() at a regular interval. writeData() simply records the length of the data to be sent and then passes it on to QTcpSocket::writeData().

To add the LoggingTransport to a client, all one needs to do is:

    m_transport = new LoggingTransport(this);
    m_client = new QMqttClient(this);
    m_client->setTransport(m_transport, QMqttClient::AbstractSocket);

The output shows that after 11 seconds one sensor sends about 100 KB of data. And this is just one sensor. Considering the “real-world” scenario above, this is clearly unacceptable for deployment. Hence, the effort is now to reduce the number of bytes to be published without losing information.

The demo itself uses one connection at all times, meaning it does not need to disconnect or reconnect. The connect statement in MQTT 3.1.1 is around 10 bytes plus the ID of the client. If a device were to reconnect each time it sends data, this could add up to a significant amount. But in this case, we have “optimized” it already.

Consequently, we move on to the publishing step. There are two options: reduce the size of each message, and reduce the number of messages.

For the former, the design of a publish message for ambient temperature is the following:

  Bytes  Content
  1      Publish statement & configuration
  1      Remaining length of the message
  2      Length of the topic
  71     Topic: qtdemosensors/{8f8fde60-933d-44cf-b3a7-8dac62425a63}/ambientTemperature
  2      ID of the message
  1..8   Value (string for double)

It is obvious that the topic itself is responsible for most of the size of the message, especially as the payload storing the value is just a fraction of it.

Concepts which can be applied here are:

  • Shorten the “root” topic (qtdemosensors -> qtds)
  • Shrink the size of the ID (UUID -> 8 digit number)
  • Replace the clear text of sensortype with an enum (ambientTemperature -> 1)

All those approaches come at a price. Using enums instead of clear-text properties reduces the readability of a message; the subscriber always has to know what type of data corresponds to an ID. Reducing the size of the ID potentially limits the maximum number of connected devices, etc. But applying these changes makes it take more than three times as long until the 100 KB barrier is reached (see example Part 1C).
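Based on the table above, a back-of-the-envelope estimator for the size of such a publish packet might look like this (a sketch assuming QoS 1, so the 2-byte message ID is present; not an exact wire encoder):

```cpp
#include <cstddef>
#include <string>

// Number of bytes needed for the MQTT "remaining length" field,
// which encodes 7 bits of the length per byte.
std::size_t remainingLengthBytes(std::size_t remaining)
{
    std::size_t bytes = 0;
    do {
        remaining /= 128;
        ++bytes;
    } while (remaining > 0);
    return bytes;
}

// Rough PUBLISH packet size, mirroring the table above: 1 byte fixed
// header, the remaining-length field, 2 bytes topic length, the topic
// itself, 2 bytes message ID, and the payload.
std::size_t publishPacketSize(const std::string& topic, const std::string& payload)
{
    const std::size_t remaining = 2 + topic.size() + 2 + payload.size();
    return 1 + remainingLengthBytes(remaining) + remaining;
}
```

Plugging in the 71-byte topic from the table versus a shortened “qtds/12345678/1” topic makes the saving per message immediately visible.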

At this stage, it becomes impossible to reduce the message overhead further without losing information. In the showcase, I mentioned that the simulated sensors are to be used in the field, potentially sending data via LPWAN. Typically, sensors in this area should not send data at such a high frequency. If that is the case, there are two additional options.

First, the messages can be combined into one single message containing all sensor properties. The table above showed that the value part of a message is just a fraction of the message overhead.

One approach is to use JSON to propagate the properties; this can be seen in example Part 1D. QJsonObject and QJsonDocument are very handy to use, and QJsonDocument::toJson() exports the content to a QByteArray, which fits an MQTT message perfectly.


void SensorInformation::publish()
{
    QJsonObject jobject;
    jobject["AmbientTemperature"] = QString::number(m_ambientTemperature);
    jobject["ObjectTemperature"] = QString::number(m_objectTemperature);
    jobject["AccelerometerX"] = QString::number(m_accelerometerX);
    jobject["AccelerometerY"] = QString::number(m_accelerometerY);
    jobject["AccelerometerZ"] = QString::number(m_accelerometerZ);
    jobject["Altitude"] = QString::number(m_altitude);
    jobject["Light"] = QString::number(m_light);
    jobject["Humidity"] = QString::number(m_humidity);
    QJsonDocument doc(jobject);
    m_client->publish(QString::fromLatin1("qtds/%1").arg(m_id), doc.toJson());
}


The size of an MQTT publish message is now around 272 bytes, including all information. As mentioned before, this comes at the cost of losing some information, but it also significantly reduces the required bandwidth.

To summarize this first part, we have looked into various solutions for reducing the amount of data sent from a device to an MQTT broker with minimal effort. Some approaches trade off readability or extensibility; others come with a loss of information before data transmission. It really depends on the use case and the scenario in which an IoT solution is put into production. But all of these can easily be implemented with Qt. So far, we have only covered Qt’s feature set for JSON, MQTT, networking, and connectivity via Bluetooth.

In the meantime, you can also find more information on our website and write your comments below.

The post Optimizing Device Communication with Qt MQTT appeared first on Qt Blog.


I am Andrey Cygankov, and I am participating in Google Summer of Code 2018.

Currently LabPlot supports importing data from a number of data sources, and I am working on expanding this list by adding web services to it. I have written a proposal explaining what I am going to do in this project.

During the community bonding period I got acquainted with the code and the overall architecture of LabPlot. I am also going to start developing a feature for importing data in the JSON format. I have already had some nice conversations with my mentor (Alexander Semke) about various questions.

The coding period has started and I am ready to focus all my energy on the project.

Contribution of source code is now allowed via Qt systems such as bug reports and forums. Traditionally, all source code contributions to the Qt Project have been governed by the Contribution License Agreement (CLA), with the exception of the possibility given to commercial license holders to provide bug fixes and similar small modifications that The Qt Company has pushed into Qt. We have now updated the Qt Account service terms to state more clearly that source code can be contributed via the Qt systems.

The preferred way to contribute source code to the Qt Project is still via the CLA, according to the contribution guidelines. But sometimes a user who has not accepted the CLA has a patch that would, for example, fix a bug in Qt. Providing such a patch is now also possible via the Qt systems, for example via the bug reports or forum posts.

When such a “casual contribution” is done, it will be the responsibility of The Qt Company to pick up the contributed source code and complete the needed steps of pushing the fix into Qt. So the contributed code will not automatically go in as-is, but via the regular contribution process that ensures good quality of the source code in the Qt repositories.

The earlier service terms of the Qt Account and other Qt systems already provided adequate rights for The Qt Company to publish the content provided by registered users. However, the earlier version did not explicitly mention that this includes source code, so we wanted to make that clarification. As we take these matters seriously, we will only use source code contributed from now on. In case you have earlier submitted a patch via one of the Qt systems, please submit it again.

In addition to providing source code, there are many other ways to contribute to Qt, for example by reporting bugs, supporting others in Qt mailing lists or forums, translating to other languages, and writing documentation.

If you have any questions regarding contributions to Qt, please do not hesitate to ask our legal team.

The post Code contributions via bug reports and forum posts appeared first on Qt Blog.

OPC UA is a central element of the Industry 4.0 story, providing seamless communication between IT and industrial production systems. basysKom initiated Qt OPC UA in 2015 with the goal of providing an out-of-the-box Qt API for OPC UA. In 2017, basysKom, together with The Qt Company, finished a Technology Preview of that API. It will be available with the upcoming Qt 5.11 release at the end of May.

The focus of Qt OPC UA is on HMI/application development and ease of use for client-side development. The Tech Preview implements a subset of the OPC UA standard. It allows you to connect to servers, read and write attributes, call methods on the server, monitor values for data changes, and browse nodes. All this functionality is provided by asynchronous APIs which integrate nicely into Qt applications.

Qt OPC UA is primarily an API and not a whole OPC UA stack. A plugin interface lets you integrate existing OPC UA stacks as API backends. Currently, plugins for the following stacks are available:

  • FreeOpcUa
  • open62541
  • Unified Automation C++ SDK 1.5

Qt OPC UA will be available directly from the Qt installer for those holding a Qt for Automation license. The source code itself is triple-licensed (GPL, LGPL and commercial), with the exception of the Unified Automation backend, which is only available under a commercial license. Users of one of the open source licenses will need to compile Qt OPC UA themselves. See here for a list of build recipes.

The backends for FreeOpcUa and open62541 are available under an open source license. When going for an open source solution, we recommend the open62541 plugin, as it is the more complete implementation, with an active community and good momentum. It also has fewer dependencies, making usage on platforms such as Android or iOS much easier.

On the platform side, the technology preview will be available for

  • Windows with Visual Studio 2015 & 2017 as well as MinGW
  • Linux (Desktop and Embedded)
  • Android and iOS

Please note that not every backend is available on every platform.

On top of what is part of the Tech Preview, there are already a number of additions planned. Among them are support for transport security, the discovery service, event support, and filters.

If you would like to find out more, join my webinar on OPC UA on 19 June. I will be happy to answer your questions.

The post OPC UA support in Qt 5.11 appeared first on Qt Blog.

I have no idea which category this post belongs to – C++ or KDE development – because it is about one component in Plasma Blade. But the post is mostly about the approach I took to writing a distributed system while implementing it, and a C++ framework that will come out of it.

Blade is my long-term R&D project to implement a multi-process KRunner alternative.


One of the most powerful parts of the C++ standard library is the <algorithm> header. It provides quite a few useful algorithms for processing collections. The main problem is that algorithms take iterator pairs to denote the start and end of a collection which should be processed instead of taking the collection directly.

While this has some useful applications, most of the time it just makes algorithms more difficult to use and difficult to compose.

There were multiple attempts at fixing this by introducing an abstraction over sequence collections called ranges. There are several third-party libraries that implement ranges, and one of them (Eric Niebler’s range-v3 library) is on its way to becoming a part of the C++ standard library.

There are numerous articles written about ranges available online (and I even dedicated a whole chapter to them in my book), so I’m not going to cover them in more detail here. I’m just going to say that ranges allow us to easily create sequences of transformations that should be applied to a collection.

Imagine the following scenario – we have the output of the ping command, and we want to convert the whole output to uppercase, then extract the number of milliseconds each response took, and then filter out all the responses which took longer than some predefined time limit.

64 bytes from localhost (::1): icmp_seq=1 ttl=64 time=0.015 ms
64 bytes from localhost (::1): icmp_seq=2 ttl=64 time=0.041 ms
64 bytes from localhost (::1): icmp_seq=3 ttl=64 time=0.041 ms

In order to demonstrate the range transformation chaining, we are going to make this a bit more complicated than it needs to be:

  • we will convert each line to uppercase;
  • find the last = character in each line;
  • chop off everything before the = sign (including the sign);
  • keep only results that are less than 0.045.
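Before chaining them, the four steps can be checked in isolation with two plain C++ helper functions (a sketch; the function names are mine):

```cpp
#include <algorithm>
#include <cctype>
#include <string>

// Steps 1-3: uppercase the line, find the last '=', and chop off
// everything up to and including it.
std::string extractTime(std::string value)
{
    std::transform(value.begin(), value.end(), value.begin(),
                   [] (unsigned char c) { return std::toupper(c); });
    const auto pos = value.find_last_of('=');
    return pos == std::string::npos ? value : value.substr(pos + 1);
}

// Step 4: the "less than 0.045" filter is a plain lexicographic
// string comparison on the extracted value.
bool belowLimit(const std::string& value)
{
    return value < "0.045";
}
```

Note that for the sample lines above, the extracted value still carries the uppercased " MS" suffix; the string comparison only looks at the leading digits, so it works anyway.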

Written in the pipe notation supported by the range-v3 library, it would look something like this:

auto results = ping_output
    | transform([] (std::string&& value) {
          std::transform(value.begin(), value.end(), value.begin(), toupper);
          return value;
      })
    | transform([] (std::string&& value) {
          const auto pos = value.find_last_of('=');
          return std::make_pair(std::move(value), pos);
      })
    | transform([] (std::pair<std::string, size_t>&& pair) {
          auto [ value, pos ] = pair;
          return pos == std::string::npos
                      ? std::move(value)
                      : std::string(value.cbegin() + pos + 1, value.cend());
      })
    | filter([] (const std::string& value) {
          return value < "0.045"s;
      });


In one of my previous posts, I wrote that using the for_each algorithm instead of the range-based for loop provides us with a level of abstraction high enough to process asynchronous data streams, instead of working only with regular data collections.

Some people argued that for_each was not meant to be a customization point in C++, which might be true, but it works very well as one. Now, it suffers from the same problems as other STL algorithms, and if we want to reach new heights of abstraction, it truly is not the best customization point out there. So, we might find something else to customize instead.

If you look at the previous code snippet, you’ll see several transformations performed on ping_output. But it does not say what ping_output is. It can be a vector of strings, it can be an input stream tokenized on newlines, it can be a QFuture, etc. It can be anything that transform and filter can exist for.

Reactive streams

We can create an abstraction similar to ranges, but instead of trying to create an abstraction over collections, we will create abstractions over series of events.

Imagine if ping_output were a range that reads data from an input stream like std::cin. Whenever we request a result from the results range, it would block the execution of our whole program until a whole line is read from the input stream.

This is a huge problem. The ping command will send our program one line each second, which means that our program will be suspended for most of its lifetime, waiting for those seconds to pass.

Instead, it would be better if it could continue working on other tasks until the ping command sends it new data to process.

This is where event processing comes into play. Instead of requesting a value from the results, and then blocking the execution until that value appears, we want to react to new values (events) that are received from the ping command, and process them when they arrive – without ever blocking our program.

This is what reactive streams are meant to model – an asynchronous stream of values (events). If we continue with the ping example, the transformations it defines should also be able to work on reactive streams. When a new value arrives, it goes through the first transformation which converts it to uppercase, and sends it to the second transformation. The second transformation processes the value and sends it to the third. And so on.

The code would look like this:

auto pipeline =
    system_cmd("ping"s, "localhost"s)
    | transform([] (std::string&& value) {
          std::transform(value.begin(), value.end(), value.begin(), toupper);
          return value;
      })
    | transform([] (std::string&& value) {
          const auto pos = value.find_last_of('=');
          return std::make_pair(std::move(value), pos);
      })
    | transform([] (std::pair<std::string, size_t>&& pair) {
          auto [ value, pos ] = pair;
          return pos == std::string::npos
                      ? std::move(value)
                      : std::string(value.cbegin() + pos + 1, value.cend());
      })
    | filter([] (const std::string& value) {
          return value < "0.045"s;
      });

The only thing that changed is the source of the values. Instead of using a range (a vector, or another sequence collection), this uses a reactive stream which emits a line every time the ping command outputs it.

This is the power of abstraction – using the code we have written for one thing, for something completely different. In this case, using the code that was written to process a collection synchronously, to process asynchronous event streams.
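The push-based idea can be sketched in a few lines of plain C++ (just the concept, not the range-v3 or Voy API; `Sink`, `transformStage` and `filterStage` are names I made up): each stage stores the next stage as a callback, and values are pushed through as they arrive instead of being pulled by the consumer.

```cpp
#include <functional>
#include <string>

// A stage is just "something that accepts a value".
using Sink = std::function<void(std::string)>;

// transformStage: wrap the next stage so every incoming value is
// mapped by f before being pushed onward.
Sink transformStage(std::function<std::string(std::string)> f, Sink next)
{
    return [f, next] (std::string value) { next(f(std::move(value))); };
}

// filterStage: only push values onward that satisfy the predicate.
Sink filterStage(std::function<bool(const std::string&)> predicate, Sink next)
{
    return [predicate, next] (std::string value) {
        if (predicate(value))
            next(std::move(value));
    };
}
```

Calling the outermost `Sink` simulates an event arriving from the source; nothing blocks, and the value flows through every stage immediately.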

Distributed stream processing

While it is nice to be able to write asynchronous software systems in the same way we write synchronous systems, that is not enough for this post.

One of the great things about defining programs as series of pure transformations to be performed on the data is that those transformations are independent from one another – they can be isolated from each other and even moved into different processes (even processes on separate computers in a network).

For this case, I’ve implemented in the Voy library a special type of transformation called a bridge which is used to transparently send the data from one process to the other.

Let’s split the data processing in the ping example into three parts (similar to what the Plasma Blade will need): the first and the last part execute in the main program, and the middle part executes in the backend. It will look like this (with the voy namespace added for completeness):

auto pipeline =
      voy::system_cmd("ping"s, "localhost"s)
    | voy::transform([] (std::string&& value) {
          std::transform(value.begin(), value.end(), value.begin(), toupper);
          return value;
      })

    | voy_bridge(to_backend)

    | voy::transform([] (std::string&& value) {
          const auto pos = value.find_last_of('=');
          return std::make_pair(std::move(value), pos);
      })
    | voy::transform([] (auto&& pair) {
          auto [ value, pos ] = pair;
          return pos == std::string::npos
                      ? std::move(value)
                      : std::string(value.cbegin() + pos + 1, value.cend());
      })

    | voy_bridge(from_backend)

    | voy::filter([] (const std::string& value) {
          return value < "0.045"s;
      });

This data pipeline is defined once for both the main program and the backend. For the main program, the middle part of the pipeline will be disabled (no code generated for it), while for the backend only the middle part will be compiled.

A note on the implementation

At the last Akademy, I talked to Tomaz Canabrava about making KDE software more appealing to students to join our development efforts.

Now, this project is probably going to be overly complex for an average student to join in, but I’m trying to make it as readable and as clean as possible for more adventurous students. It uses all the new and cool features of C++17, along with void_t and the detection idiom to simulate concepts, etc.

If you would like to work on a real-world C++17 project, just send me an e-mail, and I’ll try to get you up to speed. The only prerequisite is that you do some investigation and find out which repository the code resides in. ;)

You can support my work, or you can get my book Functional Programming in C++ if you're into that sort of thing.

May 15, 2018

The Qt model/view APIs are used throughout Qt — in Qt Widgets, in Qt Quick, as well as in other non-GUI code. As I tell my students when I deliver Qt trainings: mastering the usage of model/view classes and functions is mandatory knowledge, as any non-trivial Qt application is going to be data-driven, with the data coming from a model class.

In this blog series I will show some of the improvements to the model/view API that KDAB developed for Qt 5.11. A small word of advice: these posts are not meant to be a general introduction to Qt’s model/view (the book’s margin is too narrow… but if you’re looking for that, I suggest you start here), and they assume a certain knowledge of the APIs used.

Implementing a model class

Data models in Qt are implemented by QAbstractItemModel subclasses. Application developers can either choose one of the ready-to-use item-based models coming with Qt (like QStringListModel or QStandardItemModel), or can develop custom model classes. Typically the choice falls on the latter, as custom models provide the maximum flexibility (e.g. custom storage, custom update policies, etc.) and the best performance. In my experience with Qt, I have implemented probably hundreds of custom models.

For simplicity, let’s assume we are implementing a table-based model. For this use case, Qt offers the convenience QAbstractTableModel class, which is much simpler to use than the fully-fledged QAbstractItemModel. A typical table model may look like this:

class TableModel : public QAbstractTableModel
{
public:
    explicit TableModel(QObject *parent = nullptr)
        : QAbstractTableModel(parent)
    {
    }

    // Basic QAbstractTableModel API
    int rowCount(const QModelIndex &parent) const override
    {
        return m_data.rowCount();
    }

    int columnCount(const QModelIndex &parent) const override
    {
        return m_data.columnCount();
    }

    QVariant data(const QModelIndex &index, int role) const override
    {
        if (role != Qt::DisplayRole)
            return {};

        return m_data.getData(index.row(), index.column());
    }

private:
    Storage m_data;
};

First and foremost, note that this model is not storing the data; it’s acting as an adaptor between the real data storage (represented by the Storage class) and the views.
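The Storage class itself is never shown in the post. Assuming only the interface the model uses above (rowCount, columnCount, getData), one plausible — purely hypothetical — shape is a row-major table of strings:

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Hypothetical sketch of a storage backend with the interface the model
// assumes: a fixed-size, row-major table of cells. Not Qt code.
class Storage
{
public:
    Storage(int rows, int columns)
        : m_rows(rows), m_columns(columns), m_cells(rows * columns)
    {
    }

    int rowCount() const { return m_rows; }
    int columnCount() const { return m_columns; }

    std::string getData(int row, int column) const
    {
        return m_cells[row * m_columns + column];
    }

    void setData(int row, int column, std::string value)
    {
        m_cells[row * m_columns + column] = std::move(value);
    }

private:
    int m_rows;
    int m_columns;
    std::vector<std::string> m_cells;
};
```

In a real model, getData would return something implicitly convertible to QVariant (for instance a QString); the point is only that the model adapts whatever storage exists, rather than owning the data itself.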

When used in a Qt view (for instance a QTreeView), this code works perfectly and shows us a nice table full of data.

Making the code more robust

The code of the class above has a few issues.

The first issue is that the implementation of rowCount() and columnCount() is, generally speaking, wrong. Those functions are supposed to be callable for every model index belonging to this model, plus the root (invalid) model index; the parameter of the functions is indeed the parent index for which we’re asking the row count / column count respectively.

When called with the root index, the functions return the right amount of rows and columns. However, there are no rows and no columns below any of the elements in the table (because it is a table). The existing implementation does not make this distinction, and happily returns a wrong amount of rows/columns below the elements themselves, instead of 0. The lesson here is that we must not ignore the parent argument, and must handle it in our rowCount and columnCount overrides.

Therefore, a more correct implementation would look like this:

    int rowCount(const QModelIndex &parent) const override
    {
        if (parent.isValid())
            return 0;

        return m_data.rowCount();
    }

    int columnCount(const QModelIndex &parent) const override
    {
        if (parent.isValid())
            return 0;

        return m_data.columnCount();
    }

The second issue is not strictly a bug, but still a possible cause of concern: we don’t validate any of the indices passed to the model’s functions. For instance, we do not check that data() receives an index which is valid (i.e. isValid() returns true), belonging to this very model (i.e. model() returns this), and pointing to an existing item (i.e. its row and column are in a valid range).

    QVariant data(const QModelIndex &index, int role) const override
    {
        if (role != Qt::DisplayRole)
            return {};

        // what happens here if index is not valid, or not belonging to this model, etc.?
        return m_data.getData(index.row(), index.column());
    }

I personally maintain quite a strong point of view about this issue: passing such indices is a violation of the API contract. A model should never be assumed to be able to handle illegal indices. In other words, in my (not so humble) opinion, the QAbstractItemModel API has a narrow contract.

Luckily, Qt’s own views and proxy models honour this practice. (However, be aware that some other bits of code, such as the old model tester from Qt Labs, do not honour it, and will pass invalid indices. I will elaborate more on this in the next blog post.)

Since Qt will never pass illegal indices to a model, it’s generally pointless to make QAbstractItemModel APIs have wide contracts by handling all the possible inputs to its functions; this will just add unnecessary overhead to functions which are easily hotspots in our GUI.

On the other hand, there are cases in which it is desirable to have a few extra safety checks in place, in the eventuality that an illegal index gets passed to our model. This can happen in a number of ways, for instance:

  • in case we are developing a custom view or some other component that uses our model via the model/view API, accidentally using wrong indices;
  • a QModelIndex is accidentally stored across model modifications and then used to access the model (a QPersistentModelIndex should have been used instead);
  • the model is used in combination with one or more proxy models, which may have bugs in the mapping of the indices (from source indices to proxy indices and vice versa), resulting in the accidental passing of a proxy index to our model’s functions.

In the above scenarios, a bug somewhere in the stack may cause our model’s methods to be called with illegal indices. Rather than crashing or producing invalid data, it would be very useful to catch the mistakes, in order to gracefully fail and especially in order to be able to debug them.

In practice all of this means that our implementation of the QAbstractItemModel functions needs some more thorough checks. For instance, we can rewrite data() like this:

    QVariant data(const QModelIndex &index, int role) const override
    {
        // index is valid
        Q_ASSERT(index.isValid());

        // index is right below the root
        Q_ASSERT(!index.parent().isValid());

        // index is for this model
        Q_ASSERT(index.model() == this);

        // the row is legal
        Q_ASSERT(index.row() >= 0);
        Q_ASSERT(index.row() < rowCount(index.parent()));

        // the column is legal
        Q_ASSERT(index.column() >= 0);
        Q_ASSERT(index.column() < columnCount(index.parent()));

        if (role != Qt::DisplayRole)
            return {};

        return m_data.getData(index.row(), index.column());
    }

Instead of hard assertions, we could use soft assertions, logging, etc., and return an empty QVariant. Also, do note that some of the checks could (and should) also be added to the rowCount() and columnCount() functions, for instance checking that if the index is valid then it indeed belongs to this model.
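For illustration, such a soft assertion could be shaped like this. This is a hypothetical sketch, not a Qt API — it just logs the failed check and makes the caller return a fallback value instead of aborting the way a hard assert does:

```cpp
#include <cassert>
#include <iostream>
#include <string>

// Hypothetical "soft assertion": report the failed check and bail out of
// the enclosing function with a fallback value, instead of aborting.
#define SOFT_ASSERT(cond, retval)                                      \
    do {                                                               \
        if (!(cond)) {                                                 \
            std::cerr << "Soft assert failed: " #cond << " ("          \
                      << __FILE__ << ":" << __LINE__ << ")\n";         \
            return retval;                                             \
        }                                                              \
    } while (false)

// Example use, mirroring the shape of data(): on an out-of-range row we
// log and return an empty result instead of crashing.
inline std::string cellData(int row, int rowCount)
{
    SOFT_ASSERT(row >= 0 && row < rowCount, std::string());
    return "row " + std::to_string(row);
}
```

In a Qt model the fallback value would typically be an empty QVariant, and the logging would go through qWarning or a logging category rather than std::cerr.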

Introducing checkIndex

After years of developing models I’ve realized that I must have written some variation of the above checks countless times, in each and every function of the QAbstractItemModel API. Recently I gave the question some more thought, and I came up with a solution: centralize the above checks, so that I don’t have to re-write them every time.

In Qt 5.11 I have added a new function to QAbstractItemModel: QAbstractItemModel::checkIndex(). This function takes a model index to check, and an option to determine the kind of checks that should be done on the index (see the function documentation for all the details).

In case of failure, the function returns false and prints some information in the qt.core.qabstractitemmodel.checkindex logging category. This gives us the flexibility of deciding what can be done on failure, and also to extract interesting data to debug an issue.

Using the brand new checkIndex() our data() reimplementation can now be simplified to this:

    QVariant data(const QModelIndex &index, int role) const override
    {
        // data wants a valid index; moreover, this is a table, so the index must not have a parent
        Q_ASSERT(checkIndex(index, QAbstractItemModel::CheckIndexOption::IndexIsValid | QAbstractItemModel::CheckIndexOption::ParentIsInvalid));

        if (role != Qt::DisplayRole)
            return {};

        return m_data.getData(index.row(), index.column());
    }

Again, the example has a hard assert, which means that the program will crash in case of an illegal index (forcing the developer to do something about it). On the other hand, the check will disappear in a release build, so that we don’t pay the price of the check at each invocation of data(). One could instead use a soft assert or just a plain if statement (as many models — unfortunately — do, including the ones coming with Qt) to customize the outcome of the check.

This is an example of the logging output we automatically get in case we pass an invalid model index, which is not accepted by data():

qt.core.qabstractitemmodel.checkindex: Index QModelIndex(-1,-1,0x0,QObject(0x0)) is not valid (expected valid)

And this is an example of the output in case we accidentally pass an index belonging to another model (which happens all the time when developing custom proxy models):

qt.core.qabstractitemmodel.checkindex: Index QModelIndex(0,0,0x0,ProxyModel(0x7ffee145b640)) is for model ProxyModel(0x7ffee145b640) which is different from this model TableModel(0x7ffee145b660)


I hope that this addition to QAbstractItemModel will help developers build better data models, and to quickly and effectively debug situations where the model API is being misused.

In the next instalment I will talk about other improvements to the model/view framework in Qt 5.11.

About KDAB

KDAB is a consulting company offering a wide variety of expert services in Qt, C++ and 3D/OpenGL and providing training courses in:

KDAB believes that it is critical for our business to contribute to the Qt framework and C++ thinking, to keep pushing these technologies forward to ensure they remain competitive.

The post New in Qt 5.11: improvements to the model/view APIs (part 1) appeared first on KDAB.

On Monday, a security vulnerability called EFAIL was published; it affects the OpenPGP and S/MIME email encryption standards and the email clients using them.

What is this about and how is KMail affected? (Spoiler: KMail users are safe by default.)

Encrypted Email

The discovered vulnerability affects the OpenPGP and S/MIME standards used for end-to-end encryption of emails, which encrypt emails specifically for the intended receivers. This is not to be confused with transport encryption (typically TLS), which is used universally when communicating with an email server. Users not using OpenPGP or S/MIME are not affected by this vulnerability.

End-to-end encryption is usually employed to prevent anyone other than the intended receiver from accessing message content, even if they somehow manage to intercept or accidentally receive an email. The EFAIL attack does not attempt to break that encryption itself. Instead, it applies some clever techniques to trick the intended receiver into decrypting the message and then sending the clear text content back to the attacker.

KMail relies on GnuPG for the OpenPGP and S/MIME handling, so you might also be interested in the GnuPG team's statement on EFAIL.

Exfiltration Channels

The EFAIL research paper proposes several exfiltration channels for returning the clear text content. The easiest one to understand is by exploiting the HTML capabilities of email clients. If not properly controlled, HTML email messages can download external resources, such as images, while displaying an email - a feature often used in corporate environments.

Considerably simplified, the idea is to add additional encrypted content around an intercepted encrypted message. The whole procedure for doing this is quite elaborate and explained in depth in the paper. Let's assume an attacker manages to prefix an intercepted encrypted email with the (encrypted) string "<img src='" and append an extra "'/>". The result would look something like this, after decryption by the receiver:

Attacker inserted | Original content  | Attacker inserted
<img src="        | SomeTopSecretText | "/>

An email client that unconditionally retrieves content from the Internet while displaying HTML emails would now leak the email content as part of an HTTP GET request to an attacker controlled web server - game over.


The OpenPGP standard has a built-in detection mechanism for manipulations of the encrypted content. This provides effective protection against this attack. KMail, or rather the GnuPG stack KMail uses for email cryptography, does make use of this correctly. Not all email clients tested by the EFAIL authors seem to do this correctly, though. Notwithstanding, your OpenPGP encrypted emails are safe from this attack if you use KMail.


The situation with S/MIME is more difficult, as S/MIME itself does not have any integrity protection for the encrypted content, leaving email clients with no way to detect the EFAIL attack. That's a conceptual weakness of S/MIME that can only really be fixed by moving to an improved standard.

Fortunately, this does not mean that your S/MIME encrypted emails cannot be protected in KMail. By default, KMail does not retrieve external content for HTML emails. It only does that if you either explicitly trigger this for an individual email by clicking the red warning box at the top of emails which informs of external content, or if you enable this unconditionally via Settings > Configure KMail > Security > Reading > Allow messages to load external references from the Internet. Starting with version 18.04.01, the latter setting will be ignored for S/MIME encrypted content as an additional precaution. For older versions, we recommend you make sure this setting is disabled.

Furthermore, distribution maintainers can get patches to solve this problem from here:


In order to revoke compromised signing keys, S/MIME relies on certificate revocation lists (CRLs) or the Online Certificate Status Protocol (OCSP). These two mechanisms consult an online server defined by the authority managing the respective keys. The EFAIL paper suggests that, in addition to HTML, this might be another possible exfiltration channel. However, this hasn't been demonstrated yet, and the GnuPG team thinks it is unlikely to work. It is also a relevant piece of the S/MIME security model, so simply disabling it as a precaution has security implications, too.

Therefore, we have not changed the default settings for this in KMail at this point. The reason is that compromised and thus revoked keys seem to be a more common concern than an elaborate targeted attack that would employ CRL or OCSP as an exfiltration channel (if possible at all). You'll find the corresponding settings for the CRL and OCSP usage under Settings > Configure KMail > Security > S/MIME Validation, should you want to review or change them.


Research in email client and email cryptography security is very much appreciated and badly needed, considering how prevalent email is in our daily communication. As the results show, S/MIME is showing its age and is in need of conceptual improvements. Also, EFAIL again highlights the dangers to privacy caused by HTML emails with external references. Most importantly, this shows that your emails are well-protected by KMail and GnuPG, and there is certainly no reason to panic and stop using email encryption.

The organization of this year's Akademy is in full swing: the official conference program is out, we have had an insightful interview with one of the keynote speakers, another is coming soon, and attendees are already booking flights and accommodation. The #akademy IRC channel on Freenode and the Telegram group are buzzing with messages, advice and recommendations.

That said, it's not too early to start planning for Akademy 2019!

In fact, we are now opening the Akademy 2019 Call for Hosts, and looking for a vibrant spot and an enthusiastic crew that will host us.

Would you like to bring Akademy, the biggest KDE event, to your country? Read on to find out how to apply!

In 2005, Akademy took place in beautiful Málaga, Spain. Photo by Paolo Trabbatoni.

A Bit About Akademy

The venue of Akademy 2014 in Brno, Czech Republic.
Photo by Kevin Funk.

Akademy is KDE's annual get-together where our creativity, productivity and community-bonding reach their peak. Developers, users, translators, students, artists, writers - pretty much anyone who has been involved with KDE - will join Akademy to participate and learn. Contents will range from keynote speeches and two days of dual track talks by the FOSS community, to workshops and Birds of a Feather (BoF) sessions where we plot the future of the project.

The first day serves as a welcoming event. The next two days cover the keynote speakers and other talks. The remaining days are used for BoF sessions, intensive coding and workshops for smaller groups of 10 to 30 people. One of the workshop days is reserved for a day trip, so the attendees can see the local tourist attractions.

What You Get as a Host

Hosting Akademy is a great way to contribute to a movement of global collaboration. You get a chance to host one of the world's largest FOSS communities with contributors from across the globe, and witness a wonderful week of intercultural collaboration in your home town.

You'll get significant exposure to the Free Software community, and develop an understanding of how large projects operate. It is a great opportunity for the local university students, professors, technology enthusiasts and professionals to try their hand at something new.

What We Need from a Host

Ten years ago we gathered in Sint-Katelijne-Waver,
Belgium for this cool group photo.

Akademy requires a location close to an international airport, with an appropriate conference venue that is easy to reach. Organizing Akademy is a demanding task, but you’ll be guided along the entire process by people who’ve been doing it for years. Nevertheless, the local team should be prepared to invest a considerable amount of time into organizing Akademy.

For detailed information, please see the Call for Hosts. Questions and applications should be addressed to the Board of KDE e.V. or the Akademy Team.

Please indicate your interest in hosting Akademy to the Board of KDE e.V. by June 15th.
Full applications will be accepted until 15th July.

We look forward to your ideas, and can't wait to have fun at Akademy 2019 in your city!

Dot Categories:

May 14, 2018

Calamares is a Linux system installer (and some day, a FreeBSD system installer, but that is a long way off) which is distro- and desktop-independent. OpenSUSE Krypton is a live CD and installer for the latest-and-greatest .. but it already has an installer, so why try Calamares on it?

Well, sometimes it’s just to show that a derivative could be made (there is one, called GeckoLinux), or to experiment with tools and configurations.

Calamares has a deploy script which, like every gaping huge security hole, is expected to be downloaded from the Calamares site and then run. It is recommended to only use this in a VM, with a live CD / ISO image running. What the script does is install a basic dev environment for Calamares, install up-to-date dependencies, and then build and install Calamares. That then gives you a way to experiment, installing with Calamares from an already-set-up live CD.

The deploy script supports many different package managers and host systems, so it’s just a matter of running python3 -n to get started (and then wait for a while as packages are installed, Calamares is cloned, and then built). Calamares builds with no issues on Krypton (at least today, when I tried it).

Screenshot of Calamares in Krypton

Calamares in Krypton (Qt 5.11)

Screenshot of Calamares in Manjaro

Calamares in Manjaro (Qt 5.10)

Having built Calamares, there are a few things I noticed:

  • Esperanto isn’t supported in Qt applications (neither in Krypton, nor in Manjaro, nor in anything else I tested); QLocale has a constructor that takes an enum value specifying the language, but for a bunch of languages in that enum, it then creates a “C” locale. This is documented with the wriggly description “… if found in the database …”, but is rather unsatisfying.
  • Manjaro (Qt 5.10) does a better job of displaying Indic scripts than OpenSUSE Krypton (Qt 5.11), although this might be an artifact of installing updated packages into the live system.
  • The keyboard-layout picker displays no keycaps. It also doesn’t provide any useful debugging output. This is probably a combination of missing packages in the live system, and Calamares not providing enough useful feedback when the live image isn’t quite right. The latter, I can fix.

So, by briefly switching distros today, I’ve found one bug of my own, and one configuration thing for myself to document. And then from this not-using-Calamares distro, I can move on to another one.

Could you tell us something about yourself?

Well, I think I am a human shaped thing also known as Aedouard A. and also as El Gato Cangrejo, who loves making drawings and listening to music.

Do you paint professionally, as a hobby artist, or both?

I’m really trying to make it professionally, “very hard thing” but also I try to keep the fun in it so I would have to say both.

What genre(s) do you work in?

I like to let my hand and my pen go wherever they want to go, and then I begin to think about those traces and it leads me to different shapes, themes and genres. I can build a script for a comic or for a short film, an illustration or even sounds, based on a web of random traces on a digital canvas or on a piece of paper.

Whose work inspires you most — who are your role models as an artist?

I love the paintings, illustrations, designs and movies from these people: William Boguereau, Alphonse Mucha, Albrecht Durer, Jules Lefebvre, William Waterhouse, Masamune Shirow, Haruhiko Mikimoto, Shoji Kawamori, Mamoru Oshii, Quentin Tarantino, Hideaki Anno, Hayao Miyasaki, Ralph Bakshi, Guillermo del Toro… (not mentioning musicians, they are such an endless source of inspiration, I only can work while listening to music)

How and when did you get to try digital painting for the first time?

I tried digital painting for the first time like 12 years ago, I bought my first PC and I tried with a software called Image Ready from Photoshop, I did a couple of landscapes with the mouse and then I tried scanning my drawings and retrace them in Corel Draw, also with the mouse.

What makes you choose digital over traditional painting?

The production time, everything is like 10 times faster, expensive materials and the super powerful Ctrl-Z.

How did you find out about Krita?

I like to search for new tools and I try to use libre software. I can’t remember when I tried Krita for first time but I think it was like 7 years ago and it ran very very badly on my old PC.

What was your first impression?

I hated Krita at the time, now I love it!

What do you love about Krita?

The shortcuts are essential, the brushes, the animation tools, “insert meme here” it’s free!

What do you think needs improvement in Krita? Is there anything that really annoys you?

The performance on Linux. I recently changed my OS from Windows 7 to Linux Mint and I have noticed a significant difference in performance between the systems. I noticed a difference in performance between working in grayscale and working in color too, and also I’m waiting for some layer FX’s like the ones in Photoshop, specifically the trace effect, which I used a lot when I worked with Photoshop.

What sets Krita apart from the other tools that you use?

As I said earlier, the shortcuts are essential, the animation tools combined with those awesome brushes makes a powerful tool for animation, and I love the fact that Krita has been made for professional use but you can also have tons of fun with it.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I would choose Distant.

What techniques and brushes did you use in it?

I like the “Airbrush_Linear” a lot. I set it to a big size and the opacity to 10 percent, then I use the “Eraser_Circle” the hard shaped one, to define shapes, also I use a lot the “Smudge_Soft” I like to play with it taking the paint from one side to another. When I grabbed Krita again it reminded me of my old times drawing with pencil and paper I just loved.

Where can people see more of your work?

Anything else you’d like to share?

If you are the pretty invisible friend, thanks and I’ll see you in a parallel universe.
If you are the Sorceress, I’m really sorry about the silence, I had a couple of good reasons…
If I owe you money, I’m trying to pay it.
If you are the extraterrestrial, stop it man.
If you are the C.I.A. stop sending stuff to my invisible friends and to the extraterrestrial.
If you like my drawings, keep your eyes peeled, I’m going to start a patreon/kickstarter campaign that involves comic, animation, Krita, Blender and other libre software.
If you are from Krita staff, thanks for Krita and thanks for the interview.
If you don’t know Krita, just give it a try, it is awesome. You don’t need to be an artist, you just need to have fun.

In this post, I’ll show you how to use the qtattributionsscanner tool and Python to generate an attribution document for third-party code in Qt.

Qt and Third-Party Code

Qt is available under both open source and commercial licenses. While commercial users are free to hide their use of Qt, the GPL and LGPL mandate that the use of Qt is properly attributed. In addition, several Qt modules contain third-party code that is bundled under its own licenses. We make sure that these licenses are liberal, so that they do not restrict the use of Qt too much. However, several of them do require attribution, too – typically in the documentation of the final product.

We actually go to great lengths to make sure that third-party code is correctly attributed in our documentation. We have a process in place that requires any change to third-party code to be carefully reviewed. Additions of new third-party code to a Qt library, or significant updates, need to be approved by Lars Knoll, the chief maintainer of the Qt Project. We use tools like Fossology to double-check correct attributions. All third-party code is listed in the documentation of the respective Qt module, but also in the Licenses used in Qt overview section of the documentation. Significant changes to third-party modules are listed in the respective module's change log. Starting with Qt 5.11, we will also list changes in a separate documentation page.

If you bundle Qt in your application or device, you also need to provide these attributions – usually in the form of an attribution section in your documentation. The most straightforward way to create such a document is to copy the sections from the Qt documentation. However, this is a lot of work and can lead to errors. Luckily, there is now a cleaner way. Since Qt 5.9, the attributions are no longer written in plain qdoc markup. Instead, they are saved as qt_attribution.json files that are located close to the actual source code. The format of these JSON files is simple and documented (see QUIP-7). Now we just need an application that finds the files and generates attribution documents in a format of your choice.
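As an illustration, a made-up qt_attribution.json entry might look like this (the field names follow QUIP-7; the values are invented for the example):

```json
[
    {
        "Id": "somelib",
        "Name": "SomeLib",
        "QtUsage": "Used in Qt Gui to implement feature X.",
        "QtParts": [ "libs" ],
        "License": "BSD 3-Clause \"New\" or \"Revised\" License",
        "LicenseId": "BSD-3-Clause",
        "LicenseFile": "LICENSE",
        "Version": "1.2.3",
        "Copyright": "Copyright (C) 2018 Some Author"
    }
]
```

The QtParts and LicenseFile fields are exactly the ones consumed by the template and script shown below.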


It turns out that we already have an application for the collection of qt_attribution.json files: The qtattributionsscanner command line tool. You can find it in the bin directory of Qt.

When you specify a source directory for the tool, it recursively searches for qt_attribution.json files in there and parses them. In addition, it takes an --output-format parameter, which currently supports qdoc and json as arguments. qdoc output is the default, because the tool was developed for generating Qt documentation. json generates a document in the same format as the input files. As a result, the attributions from the directory tree are placed into one document.

You might ask yourself, how can you generate other formats besides json and qdoc? My first hack, when I had the need to generate a PDF, was to add a markdown argument to qtattributionsscanner. Markdown is easy to read by itself, and there are tools available that convert it to HTML, which in turn can be used to generate a PDF. Anyhow, this is just one format, and the exact layout and content were hard-coded to my needs, so I surely didn’t want to merge this. Wouldn’t it be more flexible to use a template engine, so that you can generate attribution files with the exact format and layout you want?


What we want is an application that generates documents out of a JSON document and a template file. In the spirit of “not reinventing the wheel”, let’s use a language and library that makes this easy. I’ve been going for Python, using the Jinja2 templating engine. This is the Python file for setting up the engine:

import argparse
import io
import json
import os
import sys
from jinja2 import Environment, FileSystemLoader, Template

parser = argparse.ArgumentParser()
parser.add_argument("-f", "--file", nargs="?", help="qt_attribution.json file",
                    type=argparse.FileType('r'), default=sys.stdin)
parser.add_argument("-o", "--out", nargs="?", help="file to write to",
                    type=argparse.FileType('w'), default=sys.stdout)
parser.add_argument("-t", "--template", help="template file to use", default="")
parser.add_argument("-v", "--verbose", action="store_true")
args = parser.parse_args()

# Set up jinja2
env = Environment(
    loader=FileSystemLoader(os.path.dirname(os.path.realpath(__file__)) + "/templates")
)
template = env.get_template(args.template)

# Read in qt_attribution.json file
attributions = json.load(args.file)

# Load content of license files directly into the structure
for entry in attributions:
    if entry['LicenseFile']:
        if args.verbose:
            sys.stderr.write("Loading " + entry['LicenseFile'] + "...\n")
        with io.open(entry['LicenseFile'], mode='r', encoding='utf-8', errors='replace') as f:
            entry['LicenseText'] = f.read()

# Render and write out the result
result = template.render(attributions=attributions)
args.out.write(result)

And this is the file template:

# Attributions for Qt Libraries

{% for entry in attributions -%}
{%- if "libs" in entry.QtParts -%}
## {{entry.Name}}



License: {{entry.License}}

{{entry.LicenseText}}
{%- endif %}
{%- endfor %}

With {%- if "libs" in entry.QtParts -%} ... {%- endif %}, we filter out attributions that are not part of the Qt libraries, but of examples, tools, or tests.

You can also find the sources at

Putting It Together

To generate a list of attributions in Qt in Markdown, we can now combine qtattributionsscanner for the collection part and the new Python script for the formatting part:

qtattributionsscanner --output-format json qt-everywhere-src-5.11.0-rc1/ | python >


This was easy enough. So you might ask yourself: why isn't this part of Qt yet, in this or a similar form? Because there are still some shortcomings:

  • Chromium uses its own file format, which qtattributionsscanner does not understand yet. Therefore, the output does not include third-party code inside Qt WebEngine yet.
  • You need a Qt source code checkout. I think we should package the JSON files as part of the build artifacts so that you can also generate documentation for a binary installation of Qt.
  • The attribution text is unnecessarily long, because a lot of license texts are duplicated.
  • When running on the default Qt checkout, the tool will pick up attributions for all Qt libraries, even if your application does not use a particular library. For now, you can limit this by either tweaking the source tree or adding some basic filtering to the Python script. In the long run, this should be done automatically for you. The deployqt tools (windeployqt, macdeployqt …) can determine which libraries to package, and similar logic could be used for generating code attribution documentation.
  • Even if you just take the libraries into account, there might be code that your Qt build doesn’t use. A lot of the third party code is in fact optional and only used on some platforms or in some setups.
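The duplication issue mentioned above could be mitigated with a small post-processing step before rendering: group entries that share an identical license text, so each text only has to appear once. A sketch, assuming the entries carry the LicenseText field loaded earlier (the entries below are made up for illustration):

```python
from collections import OrderedDict

def group_by_license_text(attributions):
    """Group attribution entries that share an identical license text,
    so the text itself only needs to be rendered once per group."""
    groups = OrderedDict()
    for entry in attributions:
        groups.setdefault(entry.get("LicenseText", ""), []).append(entry)
    return groups

# Hypothetical entries for illustration:
attributions = [
    {"Name": "libfoo", "LicenseText": "BSD-3-Clause text..."},
    {"Name": "libbar", "LicenseText": "BSD-3-Clause text..."},
    {"Name": "libbaz", "LicenseText": "MIT text..."},
]

for text, entries in group_by_license_text(attributions).items():
    print(", ".join(e["Name"] for e in entries))
# libfoo, libbar
# libbaz
```

A template could then iterate over the groups instead of the raw entry list, printing the shared license text once per group.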

The last issue raises some interesting questions. Should we use the Qt configuration system to select the necessary attributions? Or should we use Quartermaster, which uses a build graph to determine which attributions are needed? How big a problem are 'false positives'?

Even though more research is needed before making the generation of attribution files an official feature, I hope anybody struggling with this issue will find this proof-of-concept helpful.

The post Generating Third-Party Attribution Documents appeared first on Qt Blog.

This past weekend I travelled to Valencia, the third biggest city in Spain, located by the Mediterranean sea, to attend Akademy-es, the annual meeting of the KDE community in Spain. At this event we also hold the KDE Spain annual assembly.

KDE España is the legal entity behind the KDE community in Spain and legally represents KDE in my country. It was founded in 2009, although Akademy-es started a few years earlier, and currently has about 30 members.


Event highlights

These are the points that caught my attention the most at this edition:

  • Many new faces: although I do not have the official numbers yet, my guess is that we had around 75-80 participants across the three days, mostly locals, which means a median of 35-40 people in most talks. Most of them were new faces. KDE Spain designs this event not so much for existing contributors but for newcomers and potential future community members, so having many new faces is a very good sign.
  • Slimbook: this company from Valencia sponsored the event and participated in its organization. At their booth, they showed some of their new products. I really liked the new Katana II and the new KDE Slimbook II. They already sell outside Spain (within the EU), and they offer a short turnaround when customers have issues with their laptops or require an upgrade, even faster than most multinational brands.
    • My Slimbook had a little issue with the fan: it was a little noisy and did not work perfectly. I agreed with the support service to bring my laptop to Akademy-es so the fan could be replaced there under warranty. Isn't that cool? I got my laptop back in 30 minutes, and meanwhile they explained to me the components used, some design and technical decisions they took for my Pro2 laptop, and the evolution of the new version of the model, which they were showing at the booth.
  • KDE Vaults: what a nice surprise! This is a fairly recent KDE feature that will ship in openSUSE Leap 15, and I believe I will use it on a daily basis. It basically allows you to encrypt a folder with standard encryption technology, and it is integrated with Plasma.
  • Mycroft integration in KDE: I was glad to see that a power user like me will be able to easily install and configure Mycroft on openSUSE Leap 15 and interact with it using the KDE Plasma applet.
  • Catching up with friends: every member of any community would claim this is a highlight of every community event. It is absolutely true. It always amazes me how diverse this group is in some aspects, and yet how our passion for changing the world with KDE holds us together.
  • Valencia: this is a city I haven't visited often enough, with enough time to enjoy it. I should come during Fallas, the local (and crazy) festival week. Paella, partying and mascletás: what more can a guy like me ask for?
    • Slimbook paella: what a nice paella we had at the event.
  • Support from my KDE colleagues: as I mentioned, I am a power user and my technical skills are limited. I have a few minor issues with my openSUSE Leap 42.3 that I am unable to fix myself. Akademy-es is always an opportunity for me to get support from the experts and fix some of them, or at least get an explanation of why I have an issue, whether it is already fixed in newer versions, or whether I have to use a workaround.

Call for action

These are some points where I would like to call for action:

  • High-resolution screens are an issue when installing or booting most Linux distros, including openSUSE Leap. It is also a pain to configure multi-screen setups when the difference in resolution between screens is large. The new openSUSE Leap version, Leap 15, is a step forward in solving some of these problems, but from what I've heard there is still a way to go. There are already several laptop models under $1000 with this type of screen, so I assume the priority of solving these issues for distro and desktop hackers will increase significantly. I have hope.
  • OEM installer: years ago I came to the conclusion that the reason Linux desktops are not mainstream is that upstream mostly targets users who do not and will never install any operating system on their machines, while Linux distros mostly target those who can install their own OS. Both would greatly benefit from targeting mainly the prescribers, that is, those who install the operating system for the users, whether in corporate or domestic environments. Let me give an example: most Linux distros still do not have an OEM installer. I heard this demand again at Akademy-es, this time from Alejandro López, Slimbook's CEO, who named it as a limiting factor in shipping their laptops with some Linux distros pre-installed. I would like to see an OEM installer for openSUSE Leap soon.
  • Distro upgrade application: openSUSE Leap is a distro for users. Leap 15 is coming, and it seems I will have to use YaST to change the repos to point to the new ones in order to upgrade my distro. Asking around, the situation is not better in most distros (they do not even have YaST). Upgrading the distro over the network is an awesome feature; let's make it accessible to everybody. I would like to see an application in openSUSE to manage this complex feature, making it suitable for any user, not just power users. It could also be a great opportunity to inform users about the benefits of the new version, including the apps that are available for the very first time, together with a simple path to install them.
  • Applications for Plasma Mobile: Plasma developers are achieving the long-awaited goal of getting Plasma ready for mobile. Now we need applications. Aleix Pol made a call for action in this regard and I fully support his cause. Without applications, it will be way harder to make this effort shine.
  • Not enough women (diversity): although expected, we cannot be complacent about the result at this event. Women need role models to feel that KDE is an even more inclusive and attractive place to learn and develop their skills. Maribel García, Directora de la Oficina de Software Libre de la Universidad de Granada (Director of the Free Software Office at the University of Granada), spoke about this, describing the activities this office is carrying out to increase interest in Free Software among women, and pointing to evidence that KDE can and should do more to help. She also agreed, based on the ratio of women to men studying Software Engineering at her university, that the root cause lies at home and in high school. She mentioned that she has published a study about this.
    • It is not the first time I have heard this diagnosis. I know first-hand that the KDE España board made efforts to mitigate the lack of women speakers at this edition. The board needs more help from the membership and the wider KDE community. It is in everybody's interest.

Overall, Akademy-es has been a good one. See you all at Akademy in summer or next year again at Akademy-es. Where? Who knows…

Last month the developers of Plasma, KDE's featureful, flexible and Free desktop environment, held a sprint in Berlin, Germany. The developers gathered to discuss the forthcoming 5.13 release and future development of Plasma. Of course, they didn't just sit and chat - a lot of coding got done, too.

During the sprint, the Plasma team was joined by guests from Qt and Sway WM. Discussion topics included sharing Wayland protocols, input methods, Plasma Browser Integration, tablet mode for Plasma's shell, porting KControl modules to QtQuick, and last but not least, the best beer in Berlin.

Plasma Team Sprinting

Constructive Discussions with SwayWM - Check!

The effort to port Plasma to work on Wayland rather than X continues at a fast pace. Wayland protocols define how applications interact with the display, including tasks essential to Plasma such as declaring which "window" is really a panel. These protocols have to be defined by the Plasma team and preferably standardized with other users of the Linux desktop.

One newcomer to the field is SwayWM - a Wayland version of the i3 window manager. Drew DeVault, the lead developer of the project, joined our Plasma sprint to discuss where Wayland protocols could be shared. The team looked at their Layer Protocol, which covers much of the work of the current plasmashell protocol. We found that this protocol contains some nice ideas and suggested some improvements to the SwayWM developers.

The Plasma Output Management Protocol was also discussed. This protocol defines how external monitors are used; Sway currently just reloads configuration files as needed, and the team will consider this solution if the need for such a protocol arises. Protocols for remote access were compared and reviewed, along with PipeWire as a system for managing audio and video. Drew wrote a blog post with more information on this topic.

Plasma Team Sprinting

Exciting Collaboration with Qt - Check!

Shawn Rutledge, the lead developer of Qt's new input stack, also joined us for a few days of the sprint. Together, we reviewed the new API and looked at how some of the unique use-cases of Plasma would work with it. The conclusion was that "some parts, including complex drag-and-drop actions, went surprisingly smoothly".

A bunch of design changes were suggested and improvements submitted. Working with Qt developers at this early stage is a great win for both projects, as it saves KDE developers a lot of time when they come to use the new features, while the Qt world gets a nicer result.

Thanks to Endocode for hosting us in central Berlin.

Improved Plasma Browser Integration - Check!

Plasma Browser Integration is a fun new feature that will be shipped with Plasma 5.13 next month.

It means Firefox and Chrome/Chromium will use Plasma's file transfer widget for downloads and native Plasma notifications for browser notifications. Moreover, media controls will work with the task manager.

The browser extensions were tidied up, translations fixed, and accounts on the relevant browser store websites set up. Another decision made at the sprint was that we have a collective duty to make sure KDE's new web browser Falkon is at feature-parity in terms of Plasma integration.

Plasma running on a Pinebook

Plasma on Pinebook and Tablet Mode - Check!

The team continued to work on convergence with other form factors - in other words, on making Plasma run seamlessly on a variety of devices, both desktop and mobile. Bhushan worked on Plasma Mobile images for devices that support an upstream kernel, which is essential for security and a more up-to-date system on mobile devices.

Rohan worked on making Plasma run smoothly, with all-Free drivers, on the low-end Pinebook laptop. This goes to show that Plasma can function as a lightweight desktop environment without losing features.

Lastly, Marco managed to get Plasma working on a convertible laptop with support for switching into tablet mode, illustrating how we can actively shift between form factors.

Presenting to FSFE members

Talks, Burritos, and Beer - Check!

Throughout the week, we also gave talks to our host company Endocode who kindly lent us their central Berlin offices, complete with a fridge full of alcohol-free beer.

We also hosted an evening meetup for the local group of Free Software Foundation Europe members and gave some talks over burritos.

Special thanks to long-term KDE contributor Mirko of Endocode, who impressed us with his multi-monitor multi-activity high-definition display Plasma setup.

Having checked off all the items on our to-do list, we concluded another successful Plasma sprint. Look forward to seeing the results of our work in the upcoming Plasma 5.13 release!

Plasma Team Closing the Sprint with Fine Dining

May 13, 2018

The community bonding period ends today and the coding period begins. Community bonding period had been quite hectic for me with respect to learning new things and thinking of good ways to implement them. I didn't know much about piano or other musical instruments (as I had never played them before) and was unaware of …

Get ready for some more Usability and Productivity! First off, here’s this week’s progress on the Open/save dialog project:

Open/Save dialog project

These improvements will land in KDE Frameworks 5.47.

But that’s not all! Here’s the usual assortment of miscellaneous goodies:


UI Polish & Improvement

If my efforts to perform, guide, and document this work seem useful and you’d like to see more of them, then consider becoming a patron on Patreon, LiberaPay, or PayPal.

Become a patron Donate using Liberapay donate with PayPal

One of KDE’s community goals for the next years is streamlined onboarding of new contributors. It’s very important that new people regularly join the community, for various reasons. First of all, there will always be something to do, and the more contributors the merrier! But there are also people becoming very inactive or leaving the community, and these people need to be replaced. Furthermore, new people bring in fresh ideas. It’s important to have people from diverse backgrounds in the community. Lack of community diversity manifests in several issues:

  • Lack of language diversity results in poor translations for less widely spoken languages and other internationalization issues
  • Having only English/German/Spanish/French speaking contributors leads to incomplete support for things like right-to-left layouts and CJK characters/input methods
  • Contributors with powerful hardware and good internet connection tend to ignore the need for performance- and bandwidth-efficient solutions
  • An example from KDE Connect: most of us are from Europe, where it’s common to use messengers like WhatsApp, whereas in the USA it’s far more common to send SMS. Therefore we didn’t really have in mind that good SMS integration is important in KDE Connect until we actually had some people from the US at our sprint
  • People tend to not be aware of different setups other people might have, e.g. having an Android phone without Google Play Services is rare in Europe but common in China

To address these issues I am aiming to ease contribution for everyone. I believe that KDE Connect is a great project to get started with in KDE development. After all, it’s the project I got started with myself. The codebase is small and, IMHO, quite sane. The modular architecture allows you to quickly get cool results without risking breaking things.

To help you pick a task for your first contribution, I have started marking tasks as junior jobs on the KDE Connect workboard. I want to provide structured information for each task that helps you dive into it. I hope that other KDE projects will follow our lead.

Now I need your help!

Which information would you like to see in the task description?

What can we, the established developers do to make your first contribution easier?

What were the problems you encountered in your first contribution?

What are the lessons you learned during your first contributions that you want to share with others?

If you already have a KDE Identity account please leave your feedback on the Streamlined onboarding of new contributors or the Junior Jobs task.

If you don’t, please leave your feedback here in the comments and I will forward it to the appropriate places.

Your help will improve the overall community and thus the quality of our software in the long term!


So I’m writing my first ever online blog post for KDE.

Hello KDE!!

I’m Michael, a student taking part in this year’s GSoC. I am working on improving the “palette” docker for Krita.

Finally I’ve got something that’s almost a blog set up. Hopefully it’s set up. This post is a test.

Krita is the first open source project I have contributed to, so I don’t really know the ecosystem of the open source world. When I heard that I should be posting blogs on Planet KDE, I thought it would be like posting on some online forum: sign up, log in, and post. I later realized it’s not that simple…

All right. I was going to do some web stuff some day, and I can make today that day. As a result, I now have a super ugly GitHub Pages blog. But it will become prettier one day.

Back to my GSoC project. You can find the descriptions here.

The palette docker is used by painters to manage their colors. As a user of Krita myself, I think several functionalities can be added to it to make color management easier for painters.

Currently, I can add a color to a palette, but only at a position I cannot control. If I come back after working on some other parts of a picture and adding some more colors, it’s kind of hard to find where the original color is. Therefore, I want to enable users to decide where to place a color.

Another improvement I want to make is to associate a .kra file with one or several palettes, so that an artist can easily keep colors consistent even if she needs to close and reopen her work several times.

I also want to bring a better UI design to this docker. Currently, it doesn’t feel very welcoming to use. Though I already use the docker pretty often, sometimes I still forget what to expect after pressing a button, especially when creating a new palette.

That’s everything I want to talk about in my first post. The coding starts today. Wish me luck!

May 12, 2018


I talked in my last post about some of my LVM studies for the first goal of GSoC. This post is an addition to the last one, focused more in explaining how I want to implement it and talking a little bit about some application concepts from Calamares that I’ve studied.


LVM VGs in Calamares

Operation Management with JobQueue

KDE Partition Manager uses a stack of pending operations to manage and execute processes on volumes. As I said in my previous post, Calamares doesn’t work this way; it uses a global JobQueue instance that collects pending Jobs and executes all of them when the GUI reaches the last view step (i.e. ExecutionViewStep).
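The queue-of-jobs pattern can be sketched like this (in Python for brevity; Calamares itself is written in C++, and the class and method names below are simplified illustrations, not the real API):

```python
class Job:
    """A pending action collected from a module (e.g. create a partition)."""
    def __init__(self, description, action):
        self.description = description
        self.action = action  # callable executed at the final step

class JobQueue:
    """Global queue: modules enqueue Jobs; they all run at the execution step."""
    def __init__(self):
        self._jobs = []

    def enqueue(self, jobs):
        self._jobs.extend(jobs)

    def run(self):
        # Called once the GUI reaches the execution view step.
        for job in self._jobs:
            print("Running:", job.description)
            job.action()

queue = JobQueue()
queue.enqueue([
    Job("create volume group", lambda: None),
    Job("create logical volume", lambda: None),
])
queue.run()
```

The key property is that nothing touches the disk while the user is still clicking through the views; all changes are deferred until the queue runs.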

These Jobs are collected according to their origin modules. Calamares is an application that can use multiple plugin modules (you can identify them in the GUI as the option chooser interfaces, e.g. keyboard, partition, users). You can see a brief description of the Calamares structure here.

Jobs in Partition module

In the Partition module, Jobs are managed by the PartitionCoreModule instance, which represents the module. Each available device is associated with a collection of Jobs (e.g. create partition, delete partition, resize partition), and the module collects these Jobs accordingly.

Volume Groups Jobs and GUIs

So it will be necessary to create a dialog that is reached via a button on the Partition page (as I said in my previous post as well). After this dialog is opened and the options for the new VG are confirmed, the information will be passed to a method I’ll implement, PartitionCoreModule::createVolumeGroup(), which will create an instance of CreateVolumeGroupJob with the necessary data for the new VG and will load the new LVM device into the storage device combobox, allowing the user to choose it and manipulate its logical volumes. When executed, CreateVolumeGroupJob will use a CreateVolumeGroupOperation (from kpmcore) to create the new VG.

As for the other VG manipulations (i.e. resize and delete), I’m planning to create a widget hierarchy to reuse in the resize view, which will include some concepts from the VG creation interface. It will be similar to how it’s implemented in KPM. Before deleting an LVM device, it will need to be deactivated; I think just including another button or action for this operation is fine, removing the device from the device list combobox afterwards.


There are more details to this implementation, but I decided to give a brief description of the most important points in this post, just to organize my ideas. I’ll start implementing it this Monday. This first goal should be straightforward to complete.

Until the next post!

Today the Krita team releases Krita 4.0.3, a bug fix release of Krita 4.0.0. This release fixes an important regression in Krita 4.0.2: sometimes copy and paste between images opened in Krita would cause crashes (BUG:394068).

Other Improvements

  • Krita now tries to load the system color profiles on Windows
  • Krita can open .rw2 RAW files
  • The splash screen is updated to work better on HiDPI or Retina displays (BUG:392282)
  • The OpenEXR export filter will convert images with an integer channel depth before saving, instead of giving an error.
  • The OpenEXR export filter no longer gives export warnings calling itself the TIFF filter
  • The empty error message dialog that would erroneously be shown after running some export filters is no longer shown (BUG:393850).
  • The setBackGroundColor method in the Python API has been renamed to setBackgroundColor for consistency
  • Fix a crash in KisColorizeMask (BUG:393753)



Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.


(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.0.3 on Ubuntu and derivatives. We are working on an updated snap.


Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code


For all downloads:


The Linux appimage and the source tarball are signed. You can retrieve the public key over https here:
. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

Older blog entries



Planet KDE is made from the blogs of KDE's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet KDE with RSS, FOAF or OPML.