
Wednesday, 23 December 2020

Matrix is an instant messaging system similar to WhatsApp or Telegram, but built on an open and decentralized network for secure and privacy-protected communications. NeoChat is a visually attractive Matrix client that works on desktop computers and mobile phones.


NeoChat provides an elegant and convergent user interface, allowing it to adapt to any screen size automatically and gracefully.


This means you can use it both on desktop computers, where you might want to have a small bar on the side of your screen, but you can also enjoy NeoChat on Plasma Mobile and Android phones.

In fact, NeoChat will be installed by default on the PinePhone KDE edition and we offer a nightly Android version too. The Android version is for the moment experimental and some features, like file uploading, don’t work yet.


NeoChat provides a timeline with support for simple messages and also allows you to upload images and video and audio files. You can reply to messages and add reactions.

NeoChat provides all the basic features a chat application needs: apart from sending and responding to messages, you can invite users to a room, start private chats, create new rooms and explore public rooms.

Start a chat dialog
Explore rooms

Some room management features are also available: You can ban or kick users, upload a room avatar and edit a room’s metadata.

Room Setting dialog
Message detail dialog

The room view contains a sidebar that is automatically displayed on wide screens, but also appears as a drawer on smaller screens. This sidebar contains all the information about the room.

Invitation UI
Multiaccount Support

A lot of care has been put into making NeoChat intuitive to use. For example, copying with Ctrl+C and dragging and dropping images just work, and the text field gets autofocused so that you are never writing into the void. NeoChat also integrates an emoji picker, letting you use the greatest invention of the 21st century. (Note: someone on Mastodon pointed out that emojis are from the 20th century and appeared in 1997 in Japan.)

Image Editor

NeoChat also includes a basic image editor that lets you crop and rotate images before sending them. The image editor is provided by a small library called KQuickImageEditor.

For the moment, this library doesn't have a stable API and is released together with NeoChat.

Why Matrix and NeoChat

Matrix is an open network for secure and decentralized communication. This is an initiative that is very much aligned with KDE’s goals of creating an open operating system for everybody. This is why we need a Matrix client that integrates into Plasma and thus NeoChat was born.

NeoChat is a fork of Spectral, another QML client, and uses the libQuotient library to interact with the Matrix protocol. We would like to send out a huge thank you to these two projects and their contributors. Without them, NeoChat wouldn’t have been possible.

NeoChat uses the Kirigami framework and QML to provide an elegant and convergent user interface.


NeoChat is fully translated into English, Ukrainian, Swedish, Spanish, Portuguese, Hungarian, French, Dutch, Catalan (Valencian), Catalan, British English, Italian, Norwegian Nynorsk and Slovenian. Thanks a lot to all the translators! If NeoChat is not available in your native language, consider joining the KDE localization team.

What is Missing

For the moment, encryption support is missing and NeoChat doesn’t support video calls and editing messages yet either. Both things are in the works.

We are also missing some integration with the rest of the KDE applications: with Purpose, which would allow NeoChat to be used to share content from other KDE applications, and with Sonnet, which would provide spellchecking features.

The fastest way to fix these missing features is to get involved! The NeoChat team is a friendly group of developers and Matrix enthusiasts. Join us and help us make NeoChat a great Matrix client! We also participate in Season of KDE, so if you want to get mentored on a project and get a cool KDE T-shirt at the end, feel free to say hi.


Version 1.0 of NeoChat is available here, kquickimageeditor 0.1.2 is available here. Both packages are signed with my GPG key 14B0ED91B5783415D0AA1E0A06B35D38387B67BE.

A Flathub release will hopefully be released in the next few days. We will update this post when it is available.

Update: the Flathub version is now available.

Monday, 21 December 2020

While trying to implement a long-planned feature, an ad blocker in Angelfish, the Plasma Mobile web browser, I was looking for a mostly complete and performant library that provides this functionality.

First I found libadblockplus, which is a C++ library providing the AdblockPlus core functionality. Sounds great, right? Well, not quite. It includes its own V8 JavaScript engine, and since we are talking about a web browser with a QML interface here, including a third JavaScript engine and a second copy of V8 was absolutely not an option. Even if this wasn't a web browser, running a JavaScript engine as an implementation detail of a library is at least … problematic.

The other option I found is adblock-rust, which is the built-in ad blocker of the Brave browser. As the name suggests, it is written in Rust, and I was originally looking for a C++ library. But it turned out this was not much of a problem, since Rust features excellent C interoperability, just like C++. Based on this common ground, bindings can be created to use Rust code from C++ (and the other way around if needed).

Approach 1

My first approach was to use raw FFI. That means essentially building a C API featuring the typical C primitive types in Rust, and telling the Rust compiler to represent structs in memory the same way C would. Thanks to cbindgen, which automatically generates a header file with the information the C compiler needs to know which fields a struct has and where they are, we directly get something we can include in our C++ project.

The Rust build system cargo is capable of running custom code at build time, and we can use that to run cbindgen on our Rust code by adding a file named build.rs:

extern crate cbindgen;

use std::env;

fn main() {
    let crate_dir = env::var("CARGO_MANIFEST_DIR").unwrap();

    cbindgen::Builder::new()
        .with_crate(crate_dir)
        .generate()
        .expect("Unable to generate bindings")
        .write_to_file("adblock.h"); // header name is project-specific
}

Our core data structure for the ad block looks like this:

#[repr(C)]
pub struct Adblock {
    blocker: *mut Engine,
}

It stores a pointer to the Rust Engine type in a C-compatible struct. The struct cannot be created directly from C / C++, since we don't know anything about the Engine type there.
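To see what this buys us on the other side of the boundary, here is the same opaque-pointer pattern expressed purely in C++. The stub implementations stand in for the Rust side, and all names are illustrative, not Angelfish's actual API:

```cpp
#include <cassert>

extern "C" {

// The C++ side only ever sees a forward declaration of Engine,
// so it can hold a pointer to one but never create or inspect it.
struct Engine;

struct Adblock {
    Engine* blocker;
};

// In the real project these functions are implemented in Rust and
// exported through the C ABI; they are stubbed here so the example
// is self-contained.
Adblock* new_adblock() {
    return new Adblock{nullptr};
}

void free_adblock(Adblock* adblock) {
    delete adblock;
}

} // extern "C"
```

C++ code can pass the `Adblock*` around freely, but any operation touching the inner `Engine` has to go through functions exported from Rust.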

So we need a function on the Rust side that creates and initializes the Engine for us and packs it into an Adblock struct. Since the code in Angelfish is doing a bit more than just that, the function takes two C string arguments, and returns a pointer to a mutable (non-const) Adblock object.

#[no_mangle]
pub extern "C" fn new_adblock(
    list_dir: *const c_char,
    public_domain_suffix_file: *const c_char,
) -> *mut Adblock

A few more things in this function signature are unusual, but they are all related to the FFI / C compatibility we need here:

  • #[no_mangle] tells the rust compiler not to apply its rust specific function name mangling
  • extern "C" tells that this function should use the C calling conventions.

Every time we interact with data from C, the Rust compiler is unable to run its usual safety checks. For that reason we need unsafe blocks around those lines of code. If anything unexpectedly segfaults, it's likely to be in one of our unsafe blocks. To get a string we can feed into a usual Rust API, we can use unsafe { CStr::from_ptr(public_domain_suffix_file).to_str() }.

For more examples of how to interact with the C / C++ side, feel free to have a look at some real code in Angelfish. I’m by no means an expert on this, but it should help you get started.

Using this approach, the ad block could successfully be implemented in about 140 lines of rust code, of which only half is FFI code, and the rest actual logic.

Approach 2

The second approach is to use the cxx crate (library), which can generate most of the boilerplate FFI code automatically and provides a modern API on the C++ side. To do that, it implements its own wrapper types, each wrapping a type from one of the two languages. These wrapper types are implemented in both languages and allow easily passing more advanced types than pointers and numbers across the FFI boundary. On the Rust side, the wrapper types are not really visible, because a macro generates everything for us.

The only unusual thing on the rust side will be a small ffi module, declaring which types and functions we want to expose to C++:

#[cxx::bridge]
mod ffi {
    extern "Rust" {
        type Adblock;
        type AdblockResult;

        fn new_adblock(list_dir: &str, suffix_file: &str) -> Box<Adblock>;
        fn should_block(
            self: &Adblock,
            url: &str,
            source_url: &str,
            request_type: &str,
        ) -> Box<AdblockResult>;
    }
}

All objects are returned as smart pointers, like Box. On the C++ side, this will result in a rust::Box<Adblock>, which is a type generated by the cxx_build crate, which is doing something slightly similar to cbindgen.

With the cxx crate, our build.rs will look like this:

extern crate cxx_build;

fn main() {
    // Path to the file containing the ffi module; the library
    // name passed to compile() is project-specific.
    cxx_build::bridge("src/lib.rs").compile("angelfish-adblock");
}
You may wonder: if the cxx crate makes everything so easy, why did I start with approach 1 at all? I had had a look at the cxx crate a few months ago, when it was still too minimal to do what I needed. Luckily I had another look, since it has become really useful in the meantime. However, learning the raw FFI way was important to understand what actually happens in the background, and I'd almost recommend everyone have a look at that first before using the cxx crate. Using cargo expand, you can then understand what cxx generated for you.

Given the cxx crate makes this so much easier, I initially feared it might add tons of new dependencies and increase the build time, but to my surprise it actually has a lot less dependencies than cbindgen. Even though cbindgen only uses those at build time (they don’t end up in the binary), they take some time to build.

Angelfish has recently switched to using the cxx crate, so you find usage examples in the current version of the ad block code.

Build system

After we have written the FFI, we need to build the Rust code as part of our project, most likely using CMake. This could be very annoying and complicated, but luckily Corrosion exists to make it easy for us. It can build our Rust code using the cargo build system and create CMake targets for the resulting library, so it's easy to link against it.
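A minimal sketch of what this looks like on the CMake side, assuming the Rust crate lives in a `rust/` subdirectory and is named `adblock` (both names are illustrative, as is the `angelfish` target):

```cmake
# Corrosion drives cargo and turns the crate into a regular CMake target.
find_package(Corrosion REQUIRED)

corrosion_import_crate(MANIFEST_PATH rust/Cargo.toml)

# The imported target is named after the crate, so linking is ordinary CMake.
target_link_libraries(angelfish PRIVATE adblock)
```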

Usage in KDE

Now that the implementation part is explained, it makes sense to look into where this can be useful and where not. Unfortunately, the truth is that some distros are still not fully happy about having to package Rust code, because the Rust community has a different approach to sharing code than the one known from the C / C++ world. While Qt re-implements some functionality also found in other C++ libraries, so that only Qt needs to be packaged rather than one library for JSON, one for XML, one for HTTP and so on, the Rust community likes to split everything into small packages, so no unnecessary code is included.

In Angelfish, all the rust code is optional, and Angelfish can of course still be built without Rust.

Possible areas in KDE that could profit from using Rust are icon and SVG theme rendering code, which could use rsvg or resvg. I can imagine it could also be useful for document thumbnailers, when a Rust implementation for the file type already exists. A similar case could be made for KIO workers, and pretty much any other project that can profit from optional plugins.


This approach to using Rust in KDE makes it possible to use the many libraries and language features the ecosystem provides, without running into the infamous “rewrite it in Rust” reflex. It avoids having to create Rust bindings for all of KDE Frameworks and Qt just to make use of Rust, and still produces readable code.

Sunday, 20 December 2020

Just in time for the upcoming Christmas Holidays, the KMyMoney development team today announces the immediate availability of version 5.1.1 of its open source Personal Finance Manager.

This is a maintenance release as part of the ongoing effort to support our users and fix bugs and annoyances. If you think you can support the project with some code changes or your artistic or writing talent, please take a look at some of the low-hanging fruit on the KMyMoney junior job list. Any contribution is welcome.

Despite the ongoing permanent testing we understand that some bugs may have slipped past our best efforts. If you find one of them, please forgive us, and be sure to report it, either to the mailing list or on

The details


  • 426406 Minus sign is missing for negative numbers
  • 427519 Merge of Payees Deletes all Payees when an error occurs


  • 405204 New account details window is not sized correctly
  • 419275 QIF Im-/Exporter do not have default profile in dropdown after initial installation
  • 420016 Spelling mistakes in German “Ausgabe” categories
  • 421915 Can’t reposition the currency symbol
  • 423259 Faulty icons display
  • 423509 kmymoney : decimal separator not supported anymore
  • 423870 Report ‘Transactions by Category’ converts all records to base currency even if this option is not selected
  • 424098 Closed Accounts not hidden on Home view
  • 424188 Account names only display partially on Home View
  • 424302 Dates in “All dates” reports title not displayed
  • 424305 Reload home page failed
  • 424321 “Find transaction” does not respect filter
  • 424344 Accounts combobox in export wizard
  • 424424 Budget plugin and date format
  • 424511 Loan will not calculate and does
  • 424674 Transaction report is missing splits in certain circumstances
  • 424745 The word “Principal” is mispelled as “Principle” in KMyMoney’s categories
  • 425280 KMM adds an extra “/” to the institution url
  • 425530 KMyMoney Missing currency – Zambia Kwacha (ZMW)
  • 425533 Price information to old currencies not added when adding successor
  • 425934 Cannot rename new currency
  • 425935 No way of changing smallest cash/money unit size
  • 426151 Remove not implemented settings option “Insert transaction type into No. field for new transactions”
  • 426217 “New Institution” edits instead of creating institution
  • 426243 Chip-TAN widget is not completely visible
  • 428164 Adding new category from menu bar
  • 428776 Do not expose KCMs in application launchers
  • 429237 xea2kmt fails to parse gnucash templates
  • 429436 Calculator button not working for individual budget fields
  • 429489 Make Searching Categories case insensitive
  • 429491 Newly created institution is not displayed in institutions view
  • 430178 App freezes when toggling currency in investment performance report
  • 430409 Can’t cancel Schedule Details dialog presented on startup


  • 424378 Values of date picker runs out of control for scheduled payments and transactions
  • 428434 Column not deleted on update of a forecast with different cycle duration


  • 429229 Import opening balance flag from gnucash templates

A complete description of all changes can be found in the changelog.

Wednesday, 16 December 2020

It was not easy… we're so used to celebrating the GNU Health Conference (GHCon) and the International Workshop on eHealth in Emerging Economies (IWEEE) in a physical location that changing to a virtual conference was challenging. At the end of the day, we are about Social Medicine, and social interaction is a key part of it.

The pandemic has changed many things, including the way we interact. So we decided to set up a Big Blue Button instance and switch to virtual hugs for this year. Surprisingly, it worked out very well. We had colleagues from Gabon, Brazil, Japan, Austria, the United States, Argentina, Spain, Germany, Chile, Belgium, Jamaica, England, Greece and Switzerland. We didn't have any serious issues with connectivity, and all the live presentations went fine. The time zone difference among countries was a bit challenging, especially for our friends from Asia, but they made it!

Social Medicine, health literacy and patient activation

The non-technical talks covered key aspects of Social Medicine: citizens, health literacy, patient activation and global digital health records, given by Dr. Richard Fitton and Steve and Oliver, two of his patients. Armand Mpassy talked about the challenges of GNU Health for IT-illiterate users in his case study: Patient workflow at the outpatient service.

Individual privacy and crypto

On privacy and cryptography subjects, we had talks from Isabela Bagueros from Tor, Surviving the surveillance pandemic, Ricardo Morte Ferrer on A proposal for the implementation of a Whistleblower Channel in GNU Health, and Florian Dold on GNU Taler: A payment system by the GNU Project.

openSUSE Leap, packaging GNU Health and Orthanc

On operating systems, Doug Demaio from openSUSE talked about The big Change for openSUSE Leap 15.3, covering the new features of the upcoming release of this great distribution, which brings the sources of SUSE Linux Enterprise (SLE) closer to the community distribution. I cannot stress enough the importance of getting professional support for your GNU Health installations. Health informatics should not be taken lightly, and it is key to have a solid implementation of the underlying operating system components to get the highest levels of security, availability and performance in GNU Health.

Axel Braun, openSUSE board member and GNU Health core team member, focused on packaging GNU Health with the openSUSE Open Build Service (OBS). His presentation, Hidden Gems – the easy way to GNU Health, emphasized making GNU Health installation even easier, in a self-contained environment.

Sébastien Jodogne, leader of the Orthanc project, presented Using WebAssembly to render medical images; WebAssembly is an open standard that allows running C/C++ code in a web browser. This is great news, since GNU Health integrates with the Orthanc PACS server.

Qt and KDE projects in the spotlight

If we think about innovation in computing, we think about Qt and KDE. GNU Health integrates this bleeding edge technology in MyGNUHealth, the GNU Health Personal Health Record for desktop and mobile devices that uses Qt and Kirigami frameworks.

Aleix Pol, president of KDE e.V., presented Delivering Software like KDE, putting emphasis on delivering code that is valid for many different platforms, especially mobile devices.

Cristián Maureira-Fredes, leader of the Qt for Python project, in his presentation Qt for Python: past, present, and future!, talked about the history and the upcoming developments in the project, such as PySide6, the latest Python package and development environment for Qt6. MyGNUHealth is a PySide application, so we’re very happy to have Cristián and Aleix on the team!

Dimitris Kardarakos presented a key concept in modern applications: convergence, the ability of an application to adapt to different platforms and geometries. His talk Desktop/mobile convergent applications with Kirigami explained how this KDE framework, which implements the KDE Human Interface Guidelines, helps developers create convergent, consistent applications from the same codebase. MyGNUHealth is an example of a convergent application, usable both on the desktop and on a mobile device.

I went into the details of MyGNUHealth's design in my talk, MyGNUHealth: The GNU Health Personal Health Record (PHR).

Argentina leading Public Health implementations with Libre Software

Many years of methodical and intense hard work in the areas of health informatics and public health have paid off. The team led by Dr. Fernando Sassetti, head of the Public Health department of the National University of Entre Rios, has become a reference in the world of public health, Libre Software in the public administration, and implementations of GNU Health in many primary care centers and public hospitals in Argentina.

The National Scientific and Technological Promotion Bureau (Agencia Nacional de Promoción Científica y Tecnológica) chose Dr. Sassetti's GNU Health-based project as the system for managing epidemics in municipalities. Health professionals were trained in the GNU Health epidemiological surveillance system, as well as the contact-tracing functionality.

Ingrid Spessotti and Fiorella de la Lama, in their talk Outpatient follow-up and home care of patients with suspected or confirmed COVID-19, explained some of the functionality and benefits of these GNU Health packages, for instance:

  • Real-time observatory and epidemiological surveillance
  • Automatic notification of notifiable disease to the National Ministry of Health
  • Reporting on cases and contacts
  • Calls registry, monitoring of signs and symptoms.
  • Risk factors for each individual (e.g., chronic diseases, socioeconomic status…)
  • Geolocalization of suspected or confirmed cases
  • Clinical management and followup for both inpatient and outpatient cases.

Carli Scotta and Mario Puntin presented Design, development and implementation of a Dentistry module for GNU Health: experience at the Humberto D’Angelo Primary Care Center in Argentina, a package that will be the base for the upcoming dentistry functionality in the GNU Health 3.8 series.

The GNU Health Social Medicine Awards 2020

GNU Health is a social project. In every GHCon, we recognize the people and organizations that work to deliver dignity to those who need it most around the world.

Our biggest congratulations to Prof. Angela Davis, Proactiva Open Arms and Diamante Municipality!

As you can see, we still can do great conferences in the context of the pandemic. I hope to see you and hug you in person at GHCon2021.

In the meantime, stay safe!

For this and past editions of GNUHealthCon, you can visit

Sunday, 13 December 2020

Our computers are getting faster and faster, but compilation and startup times are still something we want to avoid.

One situation where waiting for compilation and startup to finish feels like a waste is when you are fine-tuning an aspect of your application. For example when you are adjusting spacing or colors in a user interface. Having to wait between each iteration not only costs us time, it also makes us less likely to do more experiments.

In this article I am going to show a few tricks to reduce these pains.

Setting up the scene

For the sake of the article, I am creating a "Hello World" application: a plain window with a "Hello World" QLabel.

Our code looks like this:

#include <QApplication>
#include <QLabel>

int main(int argc, char* argv[]) {
    QApplication app(argc, argv);

    QLabel label("Hello World");
    label.resize(320, 120);
    label.show();

    return app.exec();
}

Nothing too fancy for now.

Let's say we want to configure the color and size of the label. In a real app this code would be in a method of some object, but here I am just going to declare a static configureLabel() function:

static void configureLabel(QLabel* label) {
    QFont font;
    font.setPointSize(24);
    label->setFont(font);

    QPalette palette;
    palette.setColor(QPalette::WindowText, Qt::red);
    label->setPalette(palette);
}

and call it in my main() function:

    QLabel label("Hello World");
    configureLabel(&label);
    label.resize(320, 120);

So far so good, we have a big red "Hello World".

A red "Hello World"

The color and the font size are hard-coded, though, so we need to rebuild the app and restart it every time we want to change them. That's annoying.

Level 1: environment variables

A first improvement would be to read the values from environment variables. This requires very few modifications to our code. I am going to change configureLabel() to read the color from the $COLOR environment variable and the font size from the $FONT_SIZE variable:

static void configureLabel(QLabel* label) {
    int fontSize = qEnvironmentVariableIntValue("FONT_SIZE");
    QColor color = qEnvironmentVariable("COLOR");

    QFont font;
    font.setPointSize(fontSize);
    label->setFont(font);

    QPalette palette;
    palette.setColor(QPalette::WindowText, color);
    label->setPalette(palette);
}

With these changes, we recompile one final time and then we can start our app from the terminal like this:

COLOR=blue FONT_SIZE=36 ./hello-world

And get a bigger, bluer, "Hello World":

Bigger, bluer, Hello World

Level 2: configuration file

Environment variables are handy, but they are tedious to edit if you have more than two or three, and you have to be careful with escaping. An alternative which requires a bit more code is to use a configuration file.

Using the QSettings class, it is very simple to parse a key/value file. You can also use QJsonDocument if you prefer JSON, but I like key/value files better because it's easier to experiment with variants by commenting out lines (JSON does not support comments :/).
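For example, a key/value file makes it cheap to keep variants around, since QSettings' INI format treats lines starting with `;` as comments (the values here are illustrative):

```ini
fontSize=36
color=magenta
; color=steelblue  commented-out variant to try later
```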

Reading the configuration file can be done by changing again our configureLabel() function to this:

static void configureLabel(QLabel* label) {
    QSettings settings("config.ini", QSettings::IniFormat);
    int fontSize = settings.value("fontSize").toInt();
    QColor color = settings.value("color").toString();

    QFont font;
    font.setPointSize(fontSize);
    label->setFont(font);

    QPalette palette;
    palette.setColor(QPalette::WindowText, color);
    label->setPalette(palette);
}

Then we create a config.ini configuration file with the following content:

fontSize=36
color=magenta
Note that we don't need a traditional [section] header; QSettings is happy to read values without one.

Now we can start the app from the folder containing our configuration file and enjoy our magenta Hello World.

Level 3: live reload

Using a configuration file brings another possibility: live reload! Wouldn't it be great if our app could reload its configuration file as soon as we saved the file? This is actually quite easy to do, but there is a minor caveat.

Qt comes with the QFileSystemWatcher class to monitor file changes. We can use it to make our app call configureLabel() every time the configuration file changes.

Before we do this, I am going to refactor our code a bit to introduce a TempConfig class to hold our variables and our QFileSystemWatcher instance (I named it TempConfig and not Config to reduce the chances of naming conflicts when using it in a real-world code base).

Here is TempConfig.h:

class TempConfig : public QObject {
    Q_OBJECT
public:
    explicit TempConfig(const QString& path, QObject* parent = nullptr);

    void load();

    QString color;
    int fontSize;

private:
    QString mPath;
};
To minimize the boilerplate required to add a new value, the configuration values are public member variables.

Here is TempConfig.cpp:

TempConfig::TempConfig(const QString& path, QObject* parent)
    : QObject(parent), mPath(path) {
    load();
}

void TempConfig::load() {
    QSettings settings(mPath, QSettings::IniFormat);
    color = settings.value("color").toString();
    fontSize = settings.value("fontSize").toInt();
}

TempConfig::load() is basically the code we added to configureLabel() in the previous section.

At this point, main.cpp looks like this:

static void configureLabel(QLabel* label, TempConfig* config) {
    QFont font;
    font.setPointSize(config->fontSize);
    label->setFont(font);

    QPalette palette;
    palette.setColor(QPalette::WindowText, config->color);
    label->setPalette(palette);
}

int main(int argc, char* argv[]) {
    QApplication app(argc, argv);

    TempConfig config("config.ini");

    QLabel label("Hello World");
    configureLabel(&label, &config);
    label.resize(320, 120);
    label.show();

    return app.exec();
}

main() creates an instance of our new TempConfig class and passes it as an argument to the modified configureLabel() function. No behavior change for now.

Time to introduce our file watcher. I am going to create a QFileSystemWatcher instance in the TempConfig constructor to call TempConfig::load() when our configuration file changes:

TempConfig::TempConfig(const QString& path, QObject* parent)
    : QObject(parent), mPath(path) {
    auto watcher = new QFileSystemWatcher(this);
    watcher->addPath(mPath);
    connect(watcher, &QFileSystemWatcher::fileChanged, this, &TempConfig::load);

    load();
}

We also need to tell the rest of the app when the configuration has changed, so let's add a changed() signal to TempConfig and emit it in load():

void TempConfig::load() {
    QSettings settings(mPath, QSettings::IniFormat);
    color = settings.value("color").toString();
    fontSize = settings.value("fontSize").toInt();

    emit changed();
}

Now in our main.cpp, we need to call configureLabel() when TempConfig::changed() is emitted:

configureLabel(&label, &config);

QObject::connect(&config, &TempConfig::changed, &label, [&label, &config] {
    configureLabel(&label, &config);
});

label.resize(320, 120);
label.show();

And... it works!

Oh wait... it does not, changes are not automatically applied...

What's going on?

The answer is: it depends on your text editor. Many text editors do not just save the file in place: they save it to a temporary hidden file in the same folder, then atomically rename the hidden file over the original file when they are done saving it.
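The save-to-temp-then-rename dance can be sketched without Qt. This minimal C++17 illustration (function and file names are arbitrary) shows why the original file is replaced rather than rewritten in place:

```cpp
#include <cassert>
#include <filesystem>
#include <fstream>
#include <string>

namespace fs = std::filesystem;

// Writes `contents` the way many editors save: into a temporary file
// next to the destination, which is then atomically renamed over it.
// A watcher that tracks the original file loses it at the rename step,
// which is why watching the directory as well is needed.
void atomicSave(const fs::path& path, const std::string& contents) {
    fs::path tmp = path;
    tmp += ".tmp";

    std::ofstream out(tmp);
    out << contents;
    out.close();

    // The destination is replaced in one step and is never half-written.
    fs::rename(tmp, path);
}
```

From the watcher's point of view, the file it registered for is gone once the rename happens; what exists afterwards is a new file with the same name.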

QFileSystemWatcher is not smart enough to follow this. Since the code in TempConfig is not supposed to be final production code, a quick workaround is to watch the folder of the configuration file in addition to the file itself. This can be done by modifying TempConfig constructor like this:

TempConfig::TempConfig(const QString& path, QObject* parent)
    : QObject(parent), mPath(path) {
    auto watcher = new QFileSystemWatcher(this);
    watcher->addPath(mPath);
    watcher->addPath(QFileInfo(mPath).absolutePath());
    connect(watcher, &QFileSystemWatcher::fileChanged, this, &TempConfig::load);
    connect(watcher, &QFileSystemWatcher::directoryChanged, this, &TempConfig::load);

    load();
}

And... it works this time!

Now fine tune your app, and enjoy the instant feedback :). I find it super fun to work this way!

When you are happy with your settings, replace the code reading from the TempConfig class with hard-coded constants. Or, if you plan to revisit this later, leave the code hidden behind a build option.

That's a lot of code for some one-time fine tuning!

This is true, but you don't have to start from scratch every time. I recommend keeping TempConfig.h and TempConfig.cpp in your snippet library. Next time you need to fine-tune a set of parameters in your application, you just have to copy TempConfig files to your project, add the fields you need, then instantiate the class at the right place.

What about QML apps?

It is fairly easy to adapt this code to work with a QML app. First we need to declare a Q_PROPERTY for each field. Let's do this for our TempConfig class:

class TempConfig : public QObject {
    Q_OBJECT
    Q_PROPERTY(QColor color MEMBER color NOTIFY changed)
    Q_PROPERTY(int fontSize MEMBER fontSize NOTIFY changed)
public:
    explicit TempConfig(const QString& path, QObject* parent = nullptr);
    // ...
};

Since we use public members and the same notify signal for all properties, writing the Q_PROPERTY lines does not take too long.

Now we have to give our QML code access to our class. We can either go the long way and make our class instantiable from QML, or take a shortcut (which in my opinion is appropriate in this situation) and add our TempConfig instance to the root context. Here is a QML version of this example code, where the TempConfig instance is made available via the root context.

Here is main.qml:

import QtQuick 2.12
import QtQuick.Controls 2.12
import QtQuick.Window 2.12

Window {
    width: 320
    height: 120
    visible: true
    title: "Reload (QML)"

    Label {
        text: "Hello World"
        anchors.centerIn: parent
        color: config.color
        font.pixelSize: config.fontSize
    }
}

And here is main-qml.cpp:

int main(int argc, char* argv[]) {
    QGuiApplication app(argc, argv);

    TempConfig config("config.ini");

    QQmlApplicationEngine engine;
    engine.rootContext()->setContextProperty("config", &config);
    // Load the QML file; the path is assumed, adjust it to your project setup.
    engine.load(QUrl::fromLocalFile("main.qml"));

    return app.exec();
}

Final words

When fine-tuning an application, consider adding temporary code to reduce the time spent waiting between each adjustment.

Environment variables are a super quick way to do this, but a configuration file scales better, is more comfortable, and makes it possible to set up auto-reload.

Keep a skeleton of the TempConfig class around for quick integration in your code.

The code for this article is available here.

Thursday, 10 December 2020

From time to time, we receive complaints about frame scheduling: compositing not being synchronized to vblanks, missed frames, repainting monitors with different refresh rates, and so on. This blog post will (hopefully) explain why these issues are present and how we plan to fix them.

Past & Present

With the current scheduling algorithm, compositing /should/ start right after a vblank. A vblank is the time between the vertical front porch and the vertical back porch; simply put, it’s the moment when the display starts scanning out the contents of the next frame.

One thing that’s worth pointing out is that buffers are not swapped after finishing a compositing cycle; they are swapped at the start of the next compositing cycle, in other words, at the next vblank.

KWin assumes that glXSwapBuffers() and eglSwapBuffers() will always block until the next vblank. By delaying the buffer swap, we have more time to process input events, do some window manager things, etc. But this assumption is outdated: nowadays, it’s rare to see a GLX or EGL implementation where a buffer swap operation blocks when rendering double buffered.

In case the buffer swap operation doesn’t block, which is typically the case with Mesa drivers, glXSwapBuffers() or eglSwapBuffers() will be called at the end of a compositing cycle. There is a catch, though: compositing won’t be synchronized to vblanks.

Since compositing is no longer synchronized with vblanks, you may notice that animations in some applications don’t look as butter smooth as they should. This issue can be easily verified using the black frame insertion test [1].

Another problem with our compositing scheduling algorithm is latency. Ideally, if you press a key, the corresponding symbol should show up on the screen as soon as possible. In practice, things are slightly different.

With the current compositing timing, if you press a key on the keyboard, it may take up to two frames before the corresponding symbol shows up on the screen. The same goes for videos: the audio might be playing two frames ahead of what is on the screen.

Monitors With Different Refresh Rates

Things get trickier if you have several monitors with different refresh rates. On X11, compositing is throttled to the lowest common refresh rate; in other words, if you have two monitors with a refresh rate of 60Hz and one with a refresh rate of 120Hz, compositing will be performed at a rate of 60Hz. There is probably nothing that we can do about it.

On Wayland, it’s a completely different situation. From a technical point of view, there is nothing that prevents compositing from being performed separately per screen at different refresh rates. But due to historical reasons, compositing on Wayland is throttled similarly to the X11 case.


Our main goals are to unlock true per screen rendering on Wayland and reduce latency caused by compositing (both on X11 and Wayland). Some work [2] has already been started to fix compositing timing and if things go smoothly, you should be able to enjoy improved frame timings in KDE Plasma 5.21.

If we start compositing as close as possible to the next vblank, then applications, such as video players, will be able to get their contents on the screen in the shortest amount of time without inducing any screen tearing.

The main drawback of this approach is that the compositor has to know exactly how much time it will take to render the next frame. In other words, we need a reliable way to predict the future. Easy, no problem!

The main idea behind the compositing timing rework is to introduce a new class, called RenderLoop, that notifies the compositor when it’s a good time to start painting the next frame. On X11, there is going to be only one RenderLoop. On Wayland, every output is going to have its own RenderLoop.

As mentioned previously, the compositor needs to predict how long it will take to render the next frame. We solve this inconvenient problem by making two guesses:

  • The first guess is based on a desired latency level that comes from a config. If the desired latency level is high, the predicted render time will be longer; on the other hand, if the desired latency level is low, the predicted render time will be shorter;
  • The second guess is based on the duration of previous compositing cycles.

The RenderLoop makes both guesses and uses the one with the longer render time to schedule compositing for the next frame. By making two estimates rather than one, animations will hopefully be more or less stable.
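To make the two-guess scheme concrete, here is a minimal, hypothetical C++ sketch of such a predictor. The class name, the history size, and the latency budget are illustrative assumptions, not KWin’s actual RenderLoop code:

```cpp
#include <algorithm>
#include <chrono>
#include <deque>

using namespace std::chrono;

// Hypothetical sketch of the two-guess render time prediction described
// above. Guess 1 is a fixed budget derived from the configured latency
// level; guess 2 is based on recently measured compositing durations.
// The longer of the two guesses wins, trading latency for stability.
class RenderTimePredictor {
public:
    explicit RenderTimePredictor(nanoseconds latencyBudget)
        : m_latencyBudget(latencyBudget) {}

    // Record how long the last compositing cycle actually took.
    void addMeasurement(nanoseconds renderTime) {
        m_history.push_back(renderTime);
        if (m_history.size() > 10)   // illustrative window size
            m_history.pop_front();
    }

    // The predicted render time used to schedule the next frame.
    nanoseconds predictedRenderTime() const {
        nanoseconds measured{0};
        for (auto t : m_history)
            measured = std::max(measured, t);
        return std::max(m_latencyBudget, measured);
    }

private:
    nanoseconds m_latencyBudget;
    std::deque<nanoseconds> m_history;
};
```

With a predictor like this, the scheduler would start compositing at `vblank - predictedRenderTime()`, which is the “as close as possible to the next vblank” idea described earlier.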

There is no “silver bullet” solution for the render time prediction problem, unfortunately. In the end, it all comes down to making a trade-off between latency and stability. The config option lets the user decide what matters most. It’s worth noting that with the default latency level, the compositor will make a compromise between frame latency and animation stability that should be good enough for most users.

The introduction of the RenderLoop helper is only half of the battle. At the moment, all compositing is done on the main thread and it can get crowded. For example, if you have several outputs with different refresh rates, some of them will have to wait until it’s their turn to get repainted. This may result in missed vblanks, and thus laggy frames. In order to address this issue, we need to put compositing on different threads. That way, monitors will be repainted independently of each other. There is no concrete milestone for compositing on different threads, but most likely, it’s going to be KDE Plasma 5.22.


Currently, the compositing infrastructure in KWin is heavily influenced by X11 requirements, e.g. there is only one compositing clock, compositing is throttled to the lowest refresh rate, etc. Besides that, incorrect assumptions were made about the behavior of glXSwapBuffers() and eglSwapBuffers(), which unfortunately results in frame drops and other related issues. With the ongoing Wayland improvements, we hope to fix the aforementioned issues.




Two days ago, my MacBook Pro M1 arrived. I mainly got this device to test Krita on and make ARM builds of Krita, but it’s also the first macbook owned by anyone in the Krita community that allows playing with Sidecar and has a touch bar.

So, Sidecar works, as expected. There is one problem, though: the pressure curve of the Apple Pencil seems to be seriously weird, so at first I thought I was painting with a sketch engine brush. But apart from that, it’s nice and smooth.

KDAB has published a library to integrate support for the touchbar: kdmactouchbar — so on that front we might see some support coming.

Krita itself, the x86 build, runs fine: the performance is much better than on my 2015 15″ MacBook Pro, and Rosetta even seems to translate the AVX2 vectorization instructions we use a lot. Weirdly enough, x86 Firefox doesn’t seem to be able to load any website, and Safari is very annoying. It looks like the macOS build of Kate isn’t notarized yet, or maybe I need to use the binary factory build for that. Xcode took about two hours to install and managed to crash the System Settings applet in the process.

We haven’t succeeded in actually making an ARM build yet. We first need to build the libraries that Krita uses, and some of those seem to build as x86 and some as ARM, and we haven’t figured out how to fix that yet.

The laptop itself is, well, a laptop. It’s not bad, but it would never be my favorite. Yes, it’s very fast, that’s what everyone says, and it’s true: Qt builds in a mere 20 minutes.

The keyboard is nice, much better than the one on the 2015 macbook pro, so Apple was able to make some progress. But the edges of the palm rest — well, all of the edges are really sharp, which is quite painful when typing.

Really cute was the way the language chooser on installation tells you to press Enter in all the languages, including four dialects of English.

macOS 11 is also really annoying, with an endless stream of notifications and please-use-your-finger-to-unlock prompts for the most innocuous things. The visuals are appallingly ugly, too, with really ugly titlebars, a cramped System Settings applet and weird little pauses now and then. And if the performance monitor can still be put in the menu bar, I haven’t found a way to do that.

Anyway, that’s it. We’ll be making ARM builds of Krita one of these days. If you value your freedom, if you like the idea of actually owning the hardware and being able to do whatever you want with it, don’t buy one.

Saturday, 5 December 2020

I took the month of November off, which had been very necessary, as I hadn’t had a proper vacation in a year. In this month, I slowly started being able to draw again, and I did a lot of drawing. I also had a conundrum: I wanted to learn a bit more about Godot Engine, but I also wanted to learn a bit more about internet technologies. Solution: Write an XMPP client in Godot.

Connection and sending data:

I started last Thursday by setting up the TCP connection. I first set up prosody on my computer, accepting traffic only from localhost, so I had something to test against. I also installed Gajim and Kaidan so I had an idea of what proper XMPP clients look like (and so I could chat with myself). Unfortunately, because of QML shenanigans, Kaidan doesn’t start.

For the stream itself, I am very indebted to this little project, as it showed me that you can use the process function in Godot to periodically listen to the TCP stream, a connection I hadn’t been able to make from the documentation. After I got that done, I still had some trouble getting the server to accept my data… it turns out Godot’s convenience functions for sending strings helpfully prefix the data with a few bytes indicating its size. Using put_data(string.to_utf8()) instead solved everything. Now the server was accepting my stream!

I also came across one annoying limitation in Godot: I can do host resolving, but I cannot seem to do a DNS SRV lookup, which is used in the XMPP spec to point at the precise IP address and port an XMPP client should use. This means that right now the add-account button requires you to type in the IP address and port of the server manually, which is really annoying.

Stream Negotiation:

The next day it was time to negotiate with the server about the features I wanted to use. Or, mostly, to get through the required TLS and authentication steps.

XML Parsing

Godot doesn’t have an XML writer, but thankfully it does have an XML parser, similar to the QXmlStreamReader you see in Qt. This is useful, because it can be used directly for parsing incomplete XML, like the XML that XMPP sends over the stream.

The way this kind of parser should be used is that you read the next element, and whenever you come across an opening tag, you pass the parser to a specialised function for that element, which parses until it sees the closing tag and then returns.

I am not doing this everywhere in the code, but it is the simplest way to use this, and I honestly should be doing it like this everywhere.
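To illustrate the pattern, here is a toy sketch over a simplified token stream. It is written in plain C++ purely for illustration (the project itself uses Godot’s XMLParser and GDScript), and all types and names here are made up: each opening tag dispatches to a function specialised for that element, which consumes input until it sees its own closing tag and returns.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// A made-up token type standing in for a streaming XML parser's events.
struct Token {
    enum Kind { Open, Close, Text } kind;
    std::string value;  // tag name for Open/Close, content for Text
};

// A made-up cursor over the token stream, playing the role of the parser.
struct Parser {
    const std::vector<Token>& tokens;
    std::size_t pos = 0;

    bool atEnd() const { return pos >= tokens.size(); }
    const Token& next() { return tokens[pos++]; }
};

// Specialised handler for a <message> element: it owns the parser until
// it encounters its own closing tag, then hands control back.
std::string parseMessage(Parser& p) {
    std::string body;
    while (!p.atEnd()) {
        const Token& t = p.next();
        if (t.kind == Token::Close && t.value == "message")
            return body;  // saw our closing tag: return to the caller
        if (t.kind == Token::Text)
            body += t.value;
    }
    return body;  // stream ended mid-element (incomplete XML)
}

// Top-level loop: dispatch each opening tag to its element handler.
std::vector<std::string> parseStream(Parser& p) {
    std::vector<std::string> messages;
    while (!p.atEnd()) {
        const Token& t = p.next();
        if (t.kind == Token::Open && t.value == "message")
            messages.push_back(parseMessage(p));
    }
    return messages;
}
```

The nice property of this shape is that each handler’s call stack mirrors the element nesting, which is exactly what you want when the XML arrives in incomplete chunks over a stream.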


So, setting up an SSL connection was either suspiciously easy, or I messed it up. Basically, Godot just has you make a StreamPeerSSL, which can wrap around any other stream peer (such as the one handling our TCP connection). The thing that still confuses me is how to verify the SSL connection. Does the client need the certificates? If so, how do you get them? Or is this all handled by the StreamPeerSSL? The documentation for this is very sparse, so I am completely confused about what exactly has been established.

Prosody seemed to be okay with it, but the whole connection was over localhost, so I am unsure how much trust I can put in that…


Next up was SASL. I had hoped to be able to do something better than PLAIN, but even though Godot has support for SHA-1 and SHA-256 hashing, there’s no exposed HMAC functionality, which means SCRAM is not possible. HMAC functionality seems to be in the works, though.

Anyway, I was pretty pleased once I got authentication going, though I still couldn’t do much: Resource Binding was still required by the server.

Info/Query Handling

I think info/query handling gave me the most trouble. The bulk of XMPP traffic consists of stanzas, which can only be sent after stream negotiation is done. Stanzas come in the subtypes message, presence and info/query, or ‘iq’. Resource binding, which is basically telling the server which device is using the account right now, needs to be done to complete stream negotiation. Resource binding is an iq stanza… After recovering from that paradox, there was the next problem.

You can send message and presence stanzas into the void and not worry about them afterwards, but iq stanzas always need to be replied to. Those replies can come in at any time, and once a reply has arrived, stuff needs to be done with it, like error handling, or sending the information to whatever object requested it. A reply is recognized by having the same ID as the initial iq stanza. So I needed to keep a list of queries that also connected each query to whatever class required it in the first place; I needed to make sure each query object gets freed once I am done with it, because otherwise it could clog up memory; oh, and I needed to ensure it could handle error cases.

I spent two days trying to figure out how to handle this properly, and it made my head spin. I didn’t want to use signals for handling when a query was done and could be removed, because I have a hard time tracking which signals are connected to a given object at any time. It felt like a multi-threading problem, where you’re not sure whether a thread is done processing. So I decided to solve it like one, treating the queries as jobs and creating a query processor that checks each query periodically (through the _process() function), sends stanzas if necessary, ensures incoming replies get forwarded to the appropriate query, and removes queries when they’re done.

To ensure the queries get freed, I went ahead and made the query class inherit Resource, a class that frees itself when there are no references to it anymore. Then, whenever I make a query, I connect its ‘finished’ signal to the object that needs it with a one-shot connection, so the connection is disconnected once fired, and the query processor can just remove the query from its list when done.
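The bookkeeping described above can be sketched as follows. This is a hypothetical plain-C++ illustration of the idea (the actual project is GDScript, and none of these names come from it): every pending iq is remembered under its stanza ID, and the entry is dropped as soon as its one-shot handler has fired.

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <unordered_map>

// Illustrative sketch of the iq bookkeeping: an outgoing iq stanza is
// remembered under its ID together with a callback for whoever asked
// for it; an incoming reply with a matching ID fires that callback
// once and removes the entry, so nothing is leaked.
class QueryProcessor {
public:
    using Handler = std::function<void(const std::string& replyXml)>;

    // Register an outgoing iq stanza: remember who wants the reply.
    // (In the real client, the stanza would also be written to the
    // stream here.)
    void sendQuery(const std::string& id, Handler onReply) {
        m_pending[id] = std::move(onReply);
    }

    // Dispatch an incoming reply; returns false for unknown IDs.
    bool handleReply(const std::string& id, const std::string& replyXml) {
        auto it = m_pending.find(id);
        if (it == m_pending.end())
            return false;
        it->second(replyXml);  // one-shot: fire...
        m_pending.erase(it);   // ...then forget
        return true;
    }

    std::size_t pendingCount() const { return m_pending.size(); }

private:
    std::unordered_map<std::string, Handler> m_pending;
};
```

Error replies would go through the same path, with the handler inspecting the stanza’s type before deciding what to do.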

Message and Presence

The rest of Sunday I ended up spending most of my time writing classes for presence and message. I eventually made a stanza resource type, made query inherit from it, and had presence and message inherit from it too. Mostly just a lot of spec-reading and typing work.

I was also baking speculaas. I burned 18 out of 63 cookies and 0 out of 41 spice nuts. This was largely because of an error in the spec of the cookies: they should’ve gone into the oven for only 10 minutes instead of the suggested 20.

Said speculaas was because the fifth of December is Saint Nicholas day, and I wanted to send my family something nice, given we can’t travel at the moment. It was well received in the days afterwards.

GUI Time

Most of the days after that were spent on decorating and sending the cookies, and on getting back into the swing of things regarding Krita, as my vacation was over. After all that I ended up being really tired, so I mostly did a bit of GUI work.

One of the more annoying differences between working with GUIs in Godot and in Qt is that with the latter I can assign an object name to a widget in Designer, and then move it around without worrying about my code breaking. In Godot, however, nodes need to be manually pathed to inside the code. You can set a reference to one, but every time you change the layout, for example by putting a button inside an HBox, the path changes and you have to go into the script to fix it. There’s a ‘find_node’ function, which makes Godot search for the named node, but the documentation explicitly says it is not recommended because it might be slow.

Other than that, when I was handling the roster items, I kind of missed the model system you can find in Qt, which might sound incredible. It’s just that it took care of so much update handling that now needs to be done manually. I did have some fun writing a custom drawing function for the Tree node that contains all the roster items, which was as easy as writing the function and telling the roster item to use it as its custom drawing function.

I put avatars and date-time stamps into the GUI even though there are no XEPs implemented to handle either; I just felt I should have good placeholders to dress it up a bit. When it came to handling rosters, I went ahead and read some XEPs to see whether the architecture I wanted to set up was coherent with what the spec writers seem to think it should be.

I also came across a weird issue where, if I connected multiple accounts to my localhost prosody server and stuff came in, Godot would freeze. This is still happening, and I am not sure why. Given that Gajim can handle the same situation, I am inclined to say it’s a bug in Godot, but I wrote my first TCP connection a week ago, so I might be doing something obviously wrong…

Finally, I cleaned up all the code and tried to make everything more coherent and documented.

Feedback on Godot:

A list of the things I noticed about Godot, summarized for convenience.

  • I wish the documentation had more code examples of how a given node is to be used. The Tree node has this, and it clarifies a lot how the feature is to be used. Similarly, I ended up searching around a lot for which GUI element was the one I needed, so something like the page for layouts, but covering all the standard GUI elements, would be nice.
  • The TCP stream example above would make a really good sample for the StreamPeer pages, giving users the idea that they can use the process function to listen for incoming messages.
  • StreamPeerSSL could use a bit more explanation of what guarantees it gives, and overall a bit more explanation of how SSL works and which parts Godot takes care of.
  • As mentioned, some network features are missing. DNS SRV, I can imagine, might not be the most important feature out there, but HMAC and other crypto options are really important, even with TLS. HMAC seems to be in the works, though.
  • GDScript isn’t hard to learn; most of the learning I had to do seemed to be API-specific, which I was going to have to learn anyway.
  • The Resource type is really nice, and it gave me a lot of reassurance that I could just juggle around a lot of data and have minimal leakage.
  • The constant need to change the node path for scripts when a change has occurred in the scene tree is really annoying, especially in the early parts when you’re still trying to figure out what should go where.

I put the code up online. I haven’t actually tried to use it with a non-testing account, but I think that despite its issues and limitations it should still prove a useful example of how to write a relatively low-level networked program in Godot. If I can find a way to get rid of the limitations, I think it’ll be fun to keep working on it.

Anyway, I hope you enjoyed my rambling report of my excursion into the world of XMPP in Godot.

Wednesday, 2 December 2020

I am happy to announce that Linux builds are online on the openSUSE Build Service.

Currently there are scripts building binaries for:

  • Arch Linux (x86_64)
  • Debian 10, Unstable and Testing
  • openSUSE Tumbleweed (i586/x86_64) and Leap 15.1/15.2 (x86_64, PowerPC)
  • Ubuntu Bionic (18.04 amd64), Focal (20.04 amd64) and Groovy (20.10 amd64)

Linux AppImage and mingw64 Windows builds are planned next. They are, of course, all available on the download page.

— Mladen

Sunday, 29 November 2020

I’m happy to announce the release of Doxyqml 0.5.1. Doxyqml is a Python program that lets you document QML APIs with the help of Doxygen. This version includes a single commit, contributed by Olaf Mandel, adding support for recent versions of Doxygen (> 1.8.20).