November 20, 2019

Current situation

Some of our users don’t really seem to know how Krita (as a project) functions, or don’t understand how user support is done (user support: helping users; solving their problems with Krita, hardware, or use cases; explaining the limitations of tools; clearing up misunderstandings about how some features work). That’s natural: nobody knows the details before they get inside and can see for themselves.

I did get inside, and I’m tempted to write that the answer to the question in the title is “Not at all”, but that’s not actually true: often you’ll get the first answer within a few hours. On the other hand, the person answering you will most probably be me.

When I first arrived, a lot of questions were left unanswered, and most of the others had an answer from Boudewijn Rempt, the lead developer. I thought that I’d prefer Krita’s main developer to develop, not answer user questions. I knew that helping Krita on the code side would be difficult because of the complexity of the code and because I had little time between uni and writing my thesis, so I decided to help with user support, which I could easily do in small chunks of free time. A year later, when I was hired to actually hack on Krita, I realized that now there is, again, a full-time Krita developer doing user support…

Now, of all the places I’m on (the KDE forum, reddit, IRC), reddit gets the highest number of questions. It’s hard to estimate how many help posts we get there, but let’s say it’s 5 per day. Some of them are quite easy and require only a link from my links set, and the user’s answer is “Thank you, it’s solved!”, but some require a long investigation into what exactly the user is doing and why a feature that is supposed to work isn’t working on their system.

On reddit, most posts are answered by me (a full-time Krita developer), Snudl (a volunteer) and le_becc (a volunteer). Recently we also have Yotzi, another full-time Krita developer, and Tusooa, an ex-GSoC student and volunteer.

On the KDE forum, the people helping are mostly Wolthera (full-time Krita developer), Boud (full-time developer and Krita project lead/maintainer) and Ahab (volunteer). Ahab is also great at bug triaging and does a huge amount of work in our bug reporting system. Wolthera helps on tumblr as well.

On IRC, the helping person is nearly always a full-time developer – either me, Wolthera or Boud, sometimes someone else. On the Krita-Artists forum we don’t get that many help requests yet, so it’s hard to judge. But I did see Wolthera, Snudl and Ahab helping there already. We get quite a few questions on Steam app forum as well, and they are dealt with by Emmet or redirected to other support channels.

So to sum it up (I hope I didn’t miss anyone, if I did, I’m sorry):

  • full-time developers: me, Wolthera, Boud, Yotzi (Ivan Yossi)
  • volunteers: Snudl, Ahab, le_becc, Tusooa, Emmet

The thing is, our full-time developers are hired to develop Krita. I can’t speak for the others, but I do user support outside of my working hours. And it takes quite a lot of mental energy to be constantly available. Volunteers have a lot on their plate too. It’s just… Krita is used on 1.5 million computers with Windows 10 alone; there is no way 8 people can manage all the user support that’s needed.

I know there are also Facebook groups and Discord and other places, but there is a bit of disconnect between the core Krita developers team and people there. I think Snudl is from Discord though.

So, what about it?

I believe some new people are needed in the system. I would really like to see someone who would be willing to take the burden of checking every post on reddit, making sure they get the support they need – that every post is either solved, unsolvable or waiting for user response (sometimes indefinitely, which happens much more often than you’d imagine). If there was more than one person, let’s say three of them, then every one of them would have much less to do. If there were a few other users who just check in from time to time and answer some of the questions, even just the easiest ones (basically how to use Krita, or questions solved by one of my links), that would create a situation where no one’s job is too heavy or too time-consuming.

I do believe that the existence of someone who feels responsible for the whole platform is, at least now on reddit, forum etc., quite important – because otherwise some questions might get missed, some difficult ones might not get the attention they need since there is a question just next to it that seems much easier. But “helpers-visitors” are important as well, to keep the user support maintainer sane 🙂

If any of you, dear readers, want to help in any way, or maybe have even wanted to engage in an open source project, even specifically Krita, for a long time but never had the courage to step in, please contact me on IRC (on freenode, as tiar or tiar-[any two letters]), on reddit as /u/-tiar-, or as tiar.

Learn more

Some tips for user supporters:

My powerful list of links:

  • Mac crash log instructions
  • resetting the configuration
  • onion skin
  • render issues
  • printing
  • XP-Pen tablets
  • Krita vs GIMP
  • why tablets don’t work
  • how to use the Krita Plus flatpak
  • Krita vs Corel Painter
  • painting in CMYK
  • tips & tricks

We had to skip the October release because we were working on a bunch of issues that took longer to resolve than planned, but that means that this release has more fixes than ever. Please test this beta, and take the survey afterwards!

There has been a lot of work on vector shapes, the transform tool and, especially, saving on Windows. Windows usually only writes out saved files to the actual disk when it feels like it.

So if you cut the power to your computer before Windows did that, you might get corrupted files. With 1,500,000 distinct Windows 10 users of Krita in the past month, chances are good of that happening (just as there are people who work exclusively with unnamed autosave files; don’t do that!), so we now try to force Windows to write files to disk after saving.
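Krita’s actual fix is inside its C++ saving code; as a rough, cross-platform illustration of the general idea (explicitly flushing OS buffers after a save, here sketched in Python with a made-up file name):

```python
import os
import tempfile

def save_and_sync(path, data):
    # Write the document, flush the userspace buffer, then ask the
    # kernel to commit the data to the physical disk before returning.
    with open(path, "wb") as f:
        f.write(data)
        f.flush()              # flush Python's own buffer
        os.fsync(f.fileno())   # force the OS to write the file out
                               # (on Windows this maps to _commit())

path = os.path.join(tempfile.mkdtemp(), "image.kra")  # hypothetical file
save_and_sync(path, b"fake document bytes")
```

Without the `fsync` step, a power cut can leave a file that the application believes was saved but that never reached the disk.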

  • Fix the sliders in the performance settings page. BUG:414092
  • Fix the color space of the onion skin cache. BUG:407251
  • Fix transforming layers that have onion skins enabled. BUG:408152
  • Also save the preferences when closing the preferences dialog with the titlebar close button
  • Fix a bug in the polygon tool that adds an extra point. BUG:411059
  • Save the last used export settings. BUG:409044
  • Prevent a crash on macOS when closing a dialog that opened the system color dialog. BUG:413922
  • Fix an issue on macOS where the native file dialogs would not return a filename. BUG:413241
  • Make it possible to save the “All” tag as the current tag. BUG:409748
  • Show the correct blending mode in the brush preset editor. BUG:410136
  • Fix saving color profiles that are not sRGB to PNG files
  • Make the transform tool work correctly with the selection mask’s overlay
  • Fix a crash when editing the global selection mask. BUG:412747
  • Remove the “Show Decorations” option from the transform tool. BUG:413573
  • Remove the CSV export filter (it hasn’t worked for ages)
  • Fix slowdown in the Warp transform tool. BUG:413157
  • Fix possible data loss when pressing the escape key multiple times. BUG:412561
  • Fix a crash when opening an image with a clone layer when instant preview is active. BUG:412278
  • Fix a crash when editing vector shapes. BUG:413932
  • Fix visibility of Reference Layer and Global Selection Mask in Timeline. BUG:412905
  • Fix random crashes when converting image color space. BUG:410776
  • Rewrite the “auto precision” option in the brush preset editor. BUG:413085
  • Fix legacy convolution filters on images with non-transparent background. BUG:411159
  • Fix an assert when force-autosaving the image right during the stroke. BUG:412835
  • Fix crash when using the Contiguous Selection tool with the Feather option. BUG:412622
  • Fix an issue where temporary files were created in the folder above the current folder.
  • Improve the rendering of the transform tool handles while actually making a transformation
  • Use the actual mimetype instead of the extension to identify files when creating thumbnails for the recent files display
  • Improve the logging to detect whether Krita has closed improperly
  • Fix exporting compositions from the compositions docker. BUG:412953, BUG:412470
  • Fix Color Adjustment Curves not processing. BUG:412491
  • Fix artifacts on layers with colorize mask *and* disabled layer styles
  • Make Separate Channels work. BUG:336694, BUG:412624
  • Make it possible to create vector shapes on images that are bigger than QImage’s limits. BUG:408936
  • Disable adjustment layer support on the raindrop filter. BUG:412522
  • Make it possible to use .kra files as file layers. BUG:412549
  • Fix Crop tool losing aspect ratio on move. BUG:343586
  • Fix Rec2020 display format. BUG:410918
  • Improve error messages when loading and saving fails.
  • Fix jumping of vector shapes when resizing them
  • Add hi-res input events for pan, zoom and rotate. BUG:409460
  • Fix crash when using Pencil Tool with a tablet. BUG:412530
  • Always ask Windows to synchronize the file systems after saving a file from Krita.
  • Fix wrong aspect ratio on loading SVG files. BUG:407425



For the beta, you should only use the portable zip files. Just open the zip file in Explorer and drag the folder somewhere convenient, then double-click on the krita icon in the folder. This will not impact an installed version of Krita, though it will share your settings and custom resources with your regular installed version. For reporting crashes, also get the debug symbols folder.


(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)


Note: the gmic-qt is not available on OSX.

Source code


For all downloads:

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos! With your support, we can keep the core team working on Krita full-time.

Today I want to share with all of you the video by the great Yoyo Fernández, «Conociendo Dolphin de KDE Plasma» (“Getting to know KDE Plasma’s Dolphin”), in which, in almost half an hour, he covers nearly all the options of the most powerful and flexible file manager on the market. Even knowing Dolphin as well as I do, I discovered things I didn’t know.

Conociendo Dolphin de KDE Plasma, a video by Yoyo

I’ve said it many times: when Dolphin first appeared, it seemed like an aberration to me. At the time I was a fluent Konqueror user, and Dolphin meant discontinuing its development. I thought it was a huge mistake and that one of KDE’s best applications was disappearing.

Fortunately, the developers’ decision was more than justified: Dolphin has not only matched Konqueror as a file manager but has far surpassed it, both in features and in interface possibilities, not to mention its incredible integration with Baloo and with the KDE ecosystem in general.

In other words, with Dolphin the developers managed to repeat Konqueror’s success, but with the ability to extend its features in a dynamic and simple way, adapting Dolphin to future needs.

9 reasons to use Dolphin

In Dolphin almost everything is configurable: the view and position of the information panels, the display type and icon size, file previews and display options, integrated filters and searches, the Places panel, and so on.

In addition, Service Menus give it almost infinite potential for adapting Dolphin to our needs, adding features such as converting video files or performing actions on images.

Well, it seems that Yoyo Fernández (creator and editor of Killall Radio and Salmorejo Geek, master tinkerer, hyperactive on Twitter, and certified expert-level podcaster) is starting to truly fall in love with KDE’s Plasma desktop, and he has dedicated an extensive video to Dolphin that I recommend you not only watch, but also share and spread.



November 19, 2019

We continue presenting games from KDE’s most playful division, KDE Games. A good number of KDE Games titles have already passed through this humble blog: KBounce, KMahjongg, KMines, KBreakout, KTuberling, Granatier, KSudoku, KGoldrunner, Kolor Lines (or KLines), KBlocks, Bovo and Kubrick. Today it is the turn of Kapman, the KDE Community’s clone of the classic Pac-Man, one of the most famous pastimes of the 80s.


Kapman, the KDE Community’s clone of the classic Pac-Man

This week I’m presenting another game: it had been a long time since one appeared on the blog, and I felt like offering you some playful distraction.

This time it’s Kapman, a clone of the well-known Pac-Man, one of the most famous arcade video games of all time.

Kapman, the KDE Community’s clone of the classic Pac-Man

The mechanics are very simple: collect the pellets without being caught by the ghosts. If you pick up an energizer, Kapman gains the ability to eat the ghosts for a few seconds. When a stage is cleared of pellets and energizers, the player moves on to the next stage, with a slight increase in game speed.
Its configuration options are simple: you can hardly do more than select the difficulty level and choose the graphical theme from several possibilities, including one almost identical to the original, as seen in the image above.

More information: KDE Games

How to install Kapman

As a game from the KDE branch, its installation is extremely simple. Basically, open a console session and type:

On KDE Neon, Kubuntu and Linux Mint: sudo apt install kapman

On openSUSE: sudo zypper install kapman

[… leave a comment and we’ll add how to install it on your favorite distribution]

November 18, 2019

I had the pleasure of going to the Linux Applications Summit last week in Barcelona.  A week of talks and discussion about getting Linux apps onto people’s computers.  It’s the third one of these summits, but the first ones started out with a smaller scope (and were located in the US), being more focused on Gnome tech, while this renamed summit was true cross-project collaboration.


Oor Aleix here opening the conference (Gnome had a rep there too of course).


It was great to meet with Heather here from Canonical’s desktop team who does Gnome Snaps, catching up with Alan and Igor from Canonical too was good to do.


Here is oor Paul giving his talk about the language used.  I had been minded to use “apps” for the stuff we make but he made the point that most people don’t associate that word with the desktop and maybe good old “programs” is better.


Oor Frank gave a keynote asking why can’t we work better together?  Why can’t we merge the Gnome and KDE foundations for example?  Well there’s lots of reasons why not but I can’t help think that if we could overcome those reasons we’d all be more than the sum of our parts.

I got to chat with Ti Lim from Pine64 who had just shipped some developer models of his Pine Phone (meaning he didn’t have any with him).

Purism were also there talking about the work they’ve done using Gnomey tech for their Librem 5 phone.  No word on why they couldn’t just use Plasma Mobile where the work was already largely done.

This conference does confirm to me that we were right to make a goal of KDE to be All About the Apps, the new technologies and stores we have to distribute our programs we have mean we can finally get our stuff out to the users directly and quickly.

Barcelona was of course beautiful too, here’s the cathedral in moonlight.


Going back over the blog’s entries, and in the wake of the release of Hedgewars 1.0, I realized that I never posted at the time that SuperTuxKart had reached its first stable version, which is always a sign of maturity and a cause for joy. So, even if it’s six months late, let’s take a look at what this game offers.

SuperTuxKart has reached its first stable version

It’s always fabulous to learn that a Free Software project has reached an integer version number, indicating that a definitive release has arrived. This is the case with SuperTuxKart, a kart game in the style of the legendary Mario Kart, where fun takes priority over simulation.

However, before continuing, I think it’s worth giving a brief historical overview based on the information available on Wikipedia.

In 2000, developer Steve Baker began work on TuxKart, a kart-racing game. The project suffered a series of internal disagreements and was discontinued in March 2004.

The project was forked under the name SuperTuxKart, but that version was unplayable. A couple of years passed until, in 2006, Joerg «Hiker» Henrichs definitively rescued the project with the help of Eduardo «Coz» Hernandez Muñoz, releasing a stable version.

SuperTuxKart has reached its first stable version

SuperTuxKart was then shaping up as a project with a future, and this was confirmed when, in 2008, Marianne Gagnon (aka «Auria») joined the project and eventually replaced Coz as one of its leaders. This is very important, because leadership hand-over is often the critical point for many projects.

The rest, as they say, is history. SuperTuxKart strung together releases, good reviews and recognition year after year, culminating in April 2019 with its first stable version, whose trailer we can watch below.

SuperTuxKart has several game modes, the most complete being Story mode, where you must face the evil Nolok and defeat him to make the Mascot Kingdom safe again. The game’s characters are the mascots of a good number of free projects, such as Tux (protagonist, and the Linux mascot), Geeko (openSUSE), Konqi (KDE) and Gnu (GNU), among others.

There are also other game modes, such as racing alone against the computer, competing in several Grand Prix cups, or trying to beat your best time in time-trial mode.

It has a competitive mode too: you can battle with up to eight friends on a single computer, play on a local network, or play online against players from all over the world.

Supertuxkart 0.9

All that remains is to congratulate all the developers who made this release possible and to hope that this stable version is the first of many. At the same time, I encourage you to try the game and give the positive reinforcement and constructive criticism needed for it to keep improving.

More information: SuperTuxKart

How to install SuperTuxKart

Being such a popular game, it is available in the main repositories of most distributions, so its installation is extremely simple. Basically, open a console session and type:

On Kubuntu: sudo apt-get install supertuxkart

On openSUSE: sudo zypper install supertuxkart

On Gentoo: sudo emerge supertuxkart

On Fedora: sudo dnf install supertuxkart

We are pleased to announce the release of Calligra Plan 3.2.0. The tarball can be found here: Summary of changes: General: * Add drag/drop and copy/paste using text/html and text/plain mimetypes. This can be done from most table based views … Continue reading

November 17, 2019

Barcelona from above

The change-over from 17 degrees in Barcelona to 6 and gloomy in Amsterdam is considerable. This past week I was in Catalunya to participate in the Linux App Summit, a new gathering of applications developers looking to deliver applications on Linux to end-users.

Of course I handed out Run BSD stickers.

To a large extent the conference was filled with people from the KDE community and GNOME – but people don’t have to be put in one single category, so we had FreeBSD people, Linux people, Elementary people, openSUSE people, coders, translators, designers and communicators.

I’d like to give a special shout-out to Nuritzi and Kristi for organizational things and Regina and Shola for communications and Katarina and Emel Elvin for coding. To Heather for schooling me, Muriel for hearing me out and Yuliya for making me eat flan. To Hannah and Hannah for reminding me to update some packaging stuff.

Barcelona from the ground

I guess there were some guys at the conference too.

Kenny taught me more things about audio. Sriram explained some kinds of audience strategy. I talked with Aleix and Albert about everyday-language. I learned a teensy bit of rust from someone with a German accent and a red beard.

Egged on by some over-coffee discussion I gave a talk titled Hey guys, this conference is for everyone, which talked about cultural differences and making people feel welcome and comfortable. After all, we want as many app contributors as possible. It turns out that people from Ohio and people from Alberta are compatible, in terms of natural conversational distance. We found this out live on stage, and I expect to give this talk more often in places where it can be a positive, looking-forward contribution to an event.

In many ways I was an unusual attendee at this conference. I do most of my work in FreeBSD, and run mostly applications on FreeBSD. My work is on Linux system installers, which don’t behave like apps in the consumer sense of the word, and on security services, which run somewhere at the bottom of the stack. Still there’s plenty to learn about containerization and making applications run anywhere, be it Plasma Mobile or some desktop or elsewhere.

As happens so often, I didn’t see all that many talks at the conference. One of my favorites was the quick talk by Saloni and Amit about student retention after “introductory” programmes. It was really well presented; the message it brought, which is “retention is really low” is an unhappy one, though.

Saloni and Amit on stage

Overall it was good to (re)connect with a bunch of people in the Linux ecosystem, to see people working together to deliver what we care about – the best tool for the job – to end users. I think any conclusions presented from this conference are way premature, but there’s food for thought going forward.

A short announcement, probably meaningless to most people, but who knows: I've created an Ubuntu PPA with the latest QBS. The reason why this might make some sense is that this PPA targets the older Ubuntu distributions. It's currently built for 14.04 (Trusty, which is no longer supported by Canonical), and I'll eventually upload the QBS package for 16.04, too.

This package can be useful to people distributing applications in the AppImage format, where one usually builds the application in one older distribution in order to increase the chances of it being runnable in as many Linux distributions as possible. A simpler way to obtain QBS on Trusty is to install QtCreator, but that's not trivial to install in an automated way and might not come with the latest QBS. Especially when building on Ubuntu, a package from a PPA is much easier to install.

This QBS is built statically, and won't install any Qt libraries on your system; this is good, because it allows you to use whatever Qt version you like without any risk of conflicts.

English content after the next paragraph.

Ce week-end, j’ai assisté au Capitole du Libre à Toulouse. Pour une fois, je n’ai pas suivi beaucoup de présentations afin de discuter dans les couloirs. Toutefois, j’ai fait quelques sketchnotes des présentations auxquelles j’ai assisté.

And now for English readers. ;-)

During this week-end, I attended the Capitole du Libre in Toulouse. I didn’t attend many talk for once since I wanted to benefit a lot from the “hallway track”. Still, I did a few sketchnotes of the talks I attended.

For once it’s all in french though, since it was the language of the conference. ;-)

GitLab CI


Panorama des menaces sur les libertés numériques

AR et VR sur le web, état des lieux

Le Libre et l’ESS peuvent-ils faire bon ménage ?

There are some neat things to report and I think you will enjoy them! In particular, I think folks are really going to like the improvements to GNOME/GTK app integration and two sets of touch- and scrolling-related improvements to Okular and the Kickoff Application Launcher, detailed below:

New Features

Bugfixes & Performance Improvements

User Interface Improvements

How You Can Help

Do you like icons? Of course you do! But more importantly, are you interested in helping to make them? I bet you are! And you should be, because there’s lots to do! Luckily, it’s actually really easy, and we have a page in the Human Interface Guidelines that describes the rules to help guide you along. Icons in the Breeze icon theme are vector SVG files, and we use Inkscape to make and edit them. I started doing it recently with no prior experience whatsoever in either icon design or the Inkscape program. My work was crude, but VDG members very gently and patiently helped me along, and you can do it too! Here’s how:

More generally, have a look at and find out more ways to help be part of a project that really matters. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don’t have to already be a programmer, either. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

Finally, consider making a tax-deductible donation to the KDE e.V. foundation.

I’m now writing this post in the last hours of Lakademy 2019 (my first one). It was really good to be “formally” introduced to the community and its people, and to be in this environment of people wanting to contribute to something as incredible as KDE. Although I wanted to contribute more to other projects, I made some changes and fixes in Rocs, wrote my Season of KDE project and picked up some tasks that can help with the future of Rocs.


This event showed me the passion that even the most veteran members have for the software, and how, even after years of contributing, they are still teaching new members and putting in everything they’ve got to create better software for everyone. On the other side, seeing the new members contributing for the first time, with such a desire to share and learn, gave me the energy to help more.


I just have to thank KDE for everything they provided me during these months of Google Summer of Code, for helping me come to Salvador to be a part of this community, and for the good laughs. It was incredible! To the next Lakademy and (I hope) Akademy. :)

Special thanks to Tomaz, who introduced me to KDE!

November 16, 2019

At Akademy earlier this year I presented the current state of KPublicTransport, and mentioned a remaining privacy-relevant issue: giving its users full control over which backend service to query. This has now been addressed, with a way to list and choose backends globally or per request.


KPublicTransport needs to retrieve its information online, and for that it supports a number of different backend services. So far those have been selected as follows:

  • For any request that contained geographic coordinates, only backends whose geographic bounding area includes any of the coordinates are considered. Even in densely covered areas that’s no more than three or four candidates.
  • For requests containing only location names, all backends were considered. As we are approaching 30 backends, that’s getting to be too much, even leaving privacy concerns aside.
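The coordinate-based filtering described above can be sketched roughly like this (a simplified illustration in Python with a hypothetical `Backend` type and made-up bounding boxes, not KPublicTransport’s actual C++ API):

```python
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    bbox: tuple  # bounding area as (min_lat, min_lon, max_lat, max_lon)

def covers(bbox, lat, lon):
    min_lat, min_lon, max_lat, max_lon = bbox
    return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon

def candidates(backends, coords):
    # Keep only backends whose bounding area contains any of the
    # request's coordinates (e.g. journey start and destination).
    return [b for b in backends
            if any(covers(b.bbox, la, lo) for la, lo in coords)]

backends = [
    Backend("de-provider", (47.0, 5.0, 55.0, 15.0)),   # roughly Germany
    Backend("se-provider", (55.0, 10.0, 69.0, 24.0)),  # roughly Sweden
]
print([b.name for b in candidates(backends, [(52.5, 13.4)])])  # Berlin -> ['de-provider']
```

For name-only requests there are no coordinates to filter on, which is exactly why all backends had to be considered before the new selection API existed.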

In a best-case scenario we would be able to pick one appropriate backend automatically, and on top of that allow users to exclude certain backends explicitly, e.g. because they don’t trust the operator, or the privacy laws applying to the operator. The latter is straightforward, the former not so much. Making good choices for appropriate backends requires context information beyond what the KPublicTransport framework has access to, so to some extent we need to offload this to the application. Therefore we need an API giving applications full control over backend selection.


The newly added API can be found in the following places:

  • KPublicTransport::Manager::backends() provides a list of all available backends, to give applications the necessary information to show a selection UI to the user, or to make choices itself.

  • KPublicTransport::Manager::set[Enabled|Disabled]Backends() allows you to set the globally selected backends. The reason this tracks both enabled and disabled backends explicitly is to provide a third state (“default”), for newly added backends that haven’t been explicitly enabled or disabled yet. These methods work just on backend identifiers, so you can directly use them with QSettings or KConfig.

  • KPublicTransport::BackendModel provides a convenience model to offer a backend selection UI to the user, usable from C++ or QML.

Manual public transport information service selection in KDE Itinerary.
  • KPublicTransport::xxxRequest::setBackends() allows overriding the global backend selection per query.
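The three-state behavior behind tracking enabled and disabled backends separately can be illustrated with a small sketch (my own simplification in Python, not the actual C++ implementation; the backend identifiers are made up):

```python
def backend_state(backend_id, enabled, disabled):
    # A backend the user never touched is neither enabled nor disabled:
    # it stays in a third, "default" state, so newly added backends can
    # be handled by a policy instead of silently activating.
    if backend_id in enabled:
        return "enabled"
    if backend_id in disabled:
        return "disabled"
    return "default"

enabled = {"some-provider"}       # explicitly turned on by the user
disabled = {"other-provider"}     # explicitly turned off by the user

print(backend_state("some-provider", enabled, disabled))   # enabled
print(backend_state("other-provider", enabled, disabled))  # disabled
print(backend_state("brand-new", enabled, disabled))       # default
```

Storing both sets as plain identifier strings is what makes them easy to persist with QSettings or KConfig, as the post notes.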

Backend Metadata

For selecting backends applications might need additional information about a backend. Right now we only have a human readable description and the geographic bounding area, but we can easily extend that as needed.

Backend metadata is stored in simple JSON files, together with the parameters needed to contact the corresponding service. ISO 3166-1 country codes and ISO 3166-2 region codes seem like a useful addition, but I’d like to wait for actual application developer feedback first before extending this (hello KTrip :) ).
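A metadata file along those lines could look roughly like this (a purely hypothetical sketch; the field names are illustrative and not the actual KPublicTransport schema):

```json
{
  "description": "Example Transit Authority journey planner",
  "boundingArea": { "minLat": 47.0, "minLon": 5.0, "maxLat": 55.0, "maxLon": 15.0 },
  "countries": ["DE"],
  "regions": ["DE-BE"]
}
```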

November 15, 2019

As part of the historic move of the Janayugom newspaper migrating to a completely libre-software-based workflow, Kerala Media Academy organized a summit on self-reliant publishing on 31-Oct-2019. I was invited to speak about Malayalam Unicode fonts.

The summit was inaugurated by Fahad Al-Saidi of Scribus fame, who was instrumental in implementing complex text layout (CTL). Prior to the talks, I got to meet the team who made it possible to switch Janayugom’s entire publishing process to a free software platform: Kubuntu-based ThengOS, Scribus for page layout, Inkscape for vector graphics, GIMP for raster graphics, CMYK color profiling for print, new Malayalam Unicode fonts with traditional orthography, etc. It was impressive to see that the entire production fleet was transformed, the team was trained, and the newspaper is printed every day without delay.

I also met Fahad later and was pleasantly surprised to realize that he already knew me from my open source contributions. We had a productive discussion about Scribus.

My talk was on data encoding and text shaping in Unicode Malayalam. The publishing industry in Malayalam is largely still trapped in ASCII, which causes numerous issues now, and many are still not aware of Unicode and its advantages. I tried to address that in my presentation with examples, so the preface of my talk filled half the session, while the second half focused on font shaping. Many in the industry seem to be aware that Unicode and traditional Malayalam orthography can be used in computers now; but many in academia still have not realized it, as was evident from the talk of the moderator of the discussion, who is the director of the School of Indian Languages. There was a lively discussion with the audience in the Q&A session. After the talk, a number of people gave me feedback and requested that the slides be made available.

Slides on data encoding and complex text shaping are available under CC-BY-NC license here.

And Lakademy is finally here! This is not my first direct interaction with a KDE member, but I was sort of nervous to meet many members at once, since it’s been less than a year since I began contributing to KDE. As I got off the plane I met a member of the translation team wearing a KDE t-shirt and talked to him (he came on the same plane as me), and he introduced me to other members. We got to the hostel and, since we arrived one day early, we went out to drink, talk and eat acarajé (which was incredible). It was a nice evening and I got to know the veteran and new members better.


The next day, we got up early to move to the Universidade Federal da Bahia and begin Lakademy. Some members went to buy groceries and some went directly and prepared the room. After a round of presentations, Lakademy was declared open! I spent most of the time reviewing Rocs code and wrote some fixes for redundant code and for a problem with the interface that was introduced in one of the last commits. After that, I listed some tasks that could be done this week. We finished the first day of Lakademy sharing what we did and went back to the hostel to prepare for dinner and some fun in Salvador. :)


Looking forward to the next days!

November 14, 2019

Most web apps need login credentials of some kind, and it is a bad idea to put them in your source code, where they end up in a git repository that everyone can see. Usually they are handled through environment variables, but Docker has come up with what it calls Docker secrets. The idea is deceptively simple in retrospect, but while you are figuring it out it is arcane and hard to parse what is going on.

Essentially, the secrets feature creates in-memory files inside the container that hold the secret data. The data can come from local files or from a Docker swarm.

The first thing to know is that the application running in the Docker container needs to be written to take advantage of the Docker secrets feature. Instead of getting the password from an environment variable, it gets the password from the file system at /run/secrets/secretname. Not all available images use this functionality. If they don't describe how to use Docker secrets, they won't work: the files will be created in the container, but the application won't read them.
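The pattern an application is expected to follow can be sketched like this. Everything here is illustrative (the `db_password` name and the env-var fallback are my assumptions, not from the images discussed below):

```shell
# Sketch: prefer a Docker secret file, fall back to an environment variable.
# SECRETS_DIR defaults to the standard mount point inside a container.
SECRETS_DIR="${SECRETS_DIR:-/run/secrets}"

read_secret() {
  name="$1"
  if [ -f "$SECRETS_DIR/$name" ]; then
    # secret is mounted as an in-memory file
    cat "$SECRETS_DIR/$name"
  else
    # fall back to an environment variable of the same name, uppercased
    eval echo "\${$(echo "$name" | tr 'a-z' 'A-Z'):-}"
  fi
}

DB_PASSWORD="$(read_secret db_password)"
```

An app wired this way works both inside Docker (reading the mounted secret) and in a plain shell during development (reading the env var).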

For a development setup, keeping the secret files outside of the git source tree works well. I created a folder called serverdata, with dev/ and prod/ subfolders. In the dev/ folder, run a command like this for each piece of secret data you will need:

echo "shh, this is a secret" > mysecret.txt
The file names simply need to tell you what each secret is for; what the secret is called inside the container is set in the Docker configuration. This is what my dev/ folder looks like:
-rw-r--r-- 1 derek derek 66 Nov  5 14:49 mongodb_docker_path
-rw-r--r-- 1 derek derek  6 Oct 22 14:09 mongodb_rootusername
-rw-r--r-- 1 derek derek 13 Oct 22 14:08 mongodb_rootuserpwd
-rw-r--r-- 1 derek derek 18 Oct 22 14:10 mongodb_username
-rw-r--r-- 1 derek derek 14 Oct 22 14:10 mongodb_userpwd
-rw-r--r-- 1 derek derek 73 Oct 22 14:02 oauth2_clientid
-rw-r--r-- 1 derek derek 25 Oct 22 14:02 oauth2_clientsecret
-rw-r--r-- 1 derek derek 14 Oct 22 14:03 oauth2_cookiename
-rw-r--r-- 1 derek derek 25 Oct 22 14:04 oauth2_cookiesecret
-rw-r--r-- 1 derek derek 33 Oct 26 08:27 oauth2_redirecturl

Each name describes its function. I keep a few configuration details in there as well, not just passwords.

Using Secrets with docker-compose

This is the docker-compose.yml that builds the MongoDB replicator image with all of its configuration:
version: '3.6'
services:
  mongo-replicator:
    build: ./mongo-replicator
    container_name: mongo-replicator
    secrets:
      - mongodb_rootusername
      - mongodb_rootuserpwd
      - mongodb_username
      - mongodb_userpwd
    environment:
      MONGO_INITDB_ROOT_USERNAME_FILE: /run/secrets/mongodb_rootusername
      MONGO_INITDB_ROOT_PASSWORD_FILE: /run/secrets/mongodb_rootuserpwd
    networks:
      - mongo-cluster
    depends_on:
      - mongo-primary
      - mongo-secondary
And the secrets are defined as follows:
secrets:
  mongodb_rootusername:
    file: ../../serverdata/dev/mongodb_rootusername
  mongodb_rootuserpwd:
    file: ../../serverdata/dev/mongodb_rootuserpwd
  mongodb_username:
    file: ../../serverdata/dev/mongodb_username
  mongodb_userpwd:
    file: ../../serverdata/dev/mongodb_userpwd
  mongodb_docker_path:
    file: ../../serverdata/dev/mongodb_docker_path
The secrets: section reads the contents of each file and exposes it inside the container under /run/secrets/&lt;name&gt;. The Mongo Docker image looks for environment variables with the _FILE suffix and reads the secret from the named file in the container's file system. Those are the only two such variables the Mongo image supports.
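Under the hood, the image's entrypoint implements the _FILE convention with a small helper. A simplified sketch of the idea (the real entrypoint does more error checking; this is my paraphrase, not its actual code):

```shell
# Simplified sketch of the *_FILE convention: if e.g.
# MONGO_INITDB_ROOT_USERNAME_FILE is set, read that file and export its
# contents as MONGO_INITDB_ROOT_USERNAME.
file_env() {
  var="$1"
  file_var="${var}_FILE"
  # indirect expansion: get the path stored in the *_FILE variable, if any
  file_path="$(eval echo "\${$file_var:-}")"
  if [ -n "$file_path" ] && [ -f "$file_path" ]; then
    eval "$var=\"\$(cat \"\$file_path\")\""
    export "$var"
  fi
}
```

This is why only variables the image explicitly passes through this helper pick up secrets; any other secret file is mounted but ignored.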

Of course it gets more complicated. I wanted to watch changes in the database from my node application for various purposes, and change streams are only supported on a replica set in Mongo. Fully automating the configuration and initialization of a replicated Mongo setup inside Docker requires a second image that waits for the Mongo containers to initialize and then runs a script. So here is the complete docker-compose.yml for setting up the containers:
version: '3.6'
services:
  mongo-replicator:
    build: ./mongo-replicator
    container_name: mongo-replicator
    secrets:
      - mongodb_rootusername
      - mongodb_rootuserpwd
      - mongodb_username
      - mongodb_userpwd
    environment:
      MONGO_INITDB_ROOT_USERNAME_FILE: /run/secrets/mongodb_rootusername
      MONGO_INITDB_ROOT_PASSWORD_FILE: /run/secrets/mongodb_rootuserpwd
    networks:
      - mongo-cluster
    depends_on:
      - mongo-primary
      - mongo-secondary

  mongo-primary:
    container_name: mongo-primary
    image: mongo:latest
    command: --replSet rs0 --bind_ip_all
    ports:
      - "27019:27017"
    networks:
      - mongo-cluster

  mongo-secondary:
    container_name: mongo-secondary
    image: mongo:latest
    command: --replSet rs0 --bind_ip_all
    ports:
      - "27018:27017"
    networks:
      - mongo-cluster
    depends_on:
      - mongo-primary

networks:
  mongo-cluster:
The Dockerfile for the mongo-replicator looks like this (setup.sh is a stand-in name for the startup script shown further down):
FROM mongo:latest
ADD ./replicate.js /replicate.js
ADD ./seed.js /seed.js
ADD ./setup.sh /setup.sh
CMD ["/setup.sh"]
It is just Mongo with a few scripts added to it. Here they are. First, replicate.js, which initiates the replica set:

rs.initiate( {
  _id : "rs0",
  members: [
    { _id: 0, host: "mongo-primary:27017" },
    { _id: 1, host: "mongo-secondary:27017" },
  ]
} )

Then seed.js, which upserts the initial data (&lt;collection&gt; is a placeholder for the collection name):

db.<collection>.update(
  { email: "" },
  { $set: { email: "", name: "My Name" } },
  { upsert: true }
);
and finally the shell script that does all the work:
#!/usr/bin/env sh
if [ -f /replicated.txt ]; then
  echo "Mongo is already set up"
else
  echo "Setting up mongo replication and seeding initial data..."
  # Wait a few seconds until the mongo server is up
  sleep 10
  mongo mongo-primary:27017 replicate.js
  echo "Replication done..."
  # Wait a few seconds until replication takes effect
  sleep 40

  MONGO_USERNAME=`cat /run/secrets/mongodb_username | tr -d '\n'`
  MONGO_USERPWD=`cat /run/secrets/mongodb_userpwd | tr -d '\n'`

  mongo mongo-primary:27017/triggers <<EOF
rs.slaveOk()
use triggers
db.createUser({
  user: "$MONGO_USERNAME",
  pwd: "$MONGO_USERPWD",
  roles: [ { role: "dbOwner", db: "admin" },
           { role: "readAnyDatabase", db: "admin" },
           { role: "readWrite", db: "admin" } ]
})
EOF

  mongo mongo-primary:27017/triggers seed.js
  echo "Seeding done..."
  touch /replicated.txt
fi
In the docker-compose.yml, depends_on: orders the startup of the containers, so this one waits until the others are running. The script runs replicate.js to initialize the replica set, then waits for a while. The username and password are read from the files under /run/secrets/, the trailing linefeed is removed, and the user is created in the Mongo database. Then seed.js is called to add more initial data.

This sets up MongoDB with an admin username and password, as well as a user that the node.js apps use for reading and writing data.

No passwords in my git repository, and an initialized database. This is working for my development setup: a Mongo database, replicated so that I can get change streams, with read and write access from the node.js application.

More to come.

  1. Using secrets in node.js applications and oauth2_proxy
  2. The oauth2_proxy configuration
  3. Nginx configuration to tie the whole mess together

This publication is a Science for Policy report by the Joint Research Centre (JRC), the European Commission’s science and knowledge service. It aims to provide evidence-based scientific support to the European policymaking process. The scientific output expressed does not imply a policy position of the European Commission. Neither the European Commission nor any person acting on behalf of the Commission is responsible for the use that might be made of this publication.

The report has been developed in the framework of the 2017 Communication of the European Commission ‘Setting out the EU approach to Standard Essential Patents’ (COM(2017) 712 final). In this Communication there is a direct commitment that ‘The Commission will work with stakeholders, open source communities and SDOs for successful interaction between open source and standardisation, by means of studies and analyses’. Standards and open source development are both processes widely adopted in the ICT industry to develop innovative technologies and drive their adoption in the market. Innovators and policy makers assume that a closer collaboration between standards and open source software development would be mutually beneficial. The interaction between the two is however not yet fully understood, especially with regard to how the intellectual property regimes applied by these organisations influence their ability and motivation to cooperate. This study provides a comprehensive analysis of the interaction between standard development organisations (SDOs) and open source software (OSS) communities. The analysis is based on 20 case studies, a survey of stakeholders involved in SDOs and OSS communities, an expert workshop, and a comprehensive review of the literature.

November 13, 2019

I am happy to announce that we have released Qt 5.12.6 today.

November 12, 2019

The last minor release of the 19.08 series is out with a fair amount of usability fixes while preparations are underway for the next major version. The highlights include an audio mixer, improved effects UI and some performance optimizations. Grab the nightly AppImage builds, give them a spin and report any issues.


  • Try to make it compile with gcc 9. Commit. Fixes bug #413416
  • Fix missing param name in avfilters. Commit.
  • Fix compositions disappear after reopening project with locked track. Commit. Fixes bug #412369
  • Fix broken favorite compositions. Fixes #361. Commit.
  • Fix razor tool cutting wrong clip. Fixes #380. Commit.
  • Fix red track background on add track. Commit.
  • Fix deprecated method. Commit.
  • Fix docked widgets losing title and decoration when undocked. Commit.
  • Close favorite effect popup on activation. Commit.
  • Fix fades handles sometimes not appearing. Commit.
  • Fix seeking with wheel on ruler. Commit.
  • Update appdata for 19.08.3. Commit.
  • Fix fade in control sometimes not visible. Commit.

Last week Nicolas Fella, Roman Gilg, and I represented KDE at the Qt World Summit 2019 in Berlin.

KDE booth at Qt World Summit: shiny black vinyl floor, a KDE-branded table, three bistro tables with devices on them. The booth just before the venue opened – half an hour later it was crowded in here.

This year we set up a lounge area upstairs for people to chat, see our hardware and software offerings, as well as charge their phones between talks. At the center of our booth we had a large KDE-branded table with various bits of swag and a KDE Slimbook on display which we used as our main device for demoing our extensive KDE Applications and Frameworks offerings.

The letters "#QtWS19" 30cm tall of styrofoam with suggested Berlin skyline of cardboard behind#QtWS19
Plasma Mobile table with a Nexus 5X an Librem Dev kit (smartphone on a PCB)Plasma Mobile table

We also had a dedicated table for our mobile effort where we showcased our KDE apps for Android and, of course, Plasma Mobile. The latter we had running on a good ol’ Nexus 5X and, more importantly, the Librem 5 Dev Kit by Purism. Unfortunately, the PinePhone developer kits we were hoping to show as well weren’t shipped in time for the event. Anyway, if you’re interested in learning more about what’s going on with Plasma Mobile, go check out our new weekly blog series!

A small convertible laptop running Plasma with its screen folded 270° into an inverse V shape. Plasma yoga action

Another unusual device we presented was the One Mix 2S – a 7″ convertible laptop featuring an Intel Atom processor rather than something ARM-based. It also has a selection of ports you would rather expect from a full-sized laptop, such as USB-C/A and HDMI, making it an interesting choice for use in a docking station setup. It ran an experimental version of Plasma Wayland with automatic screen rotation and tablet mode toggle which increases the hit area for controls and the spacing between them. Finally, our booth also featured a Pinebook which impressively shows how well our software performs on slow but cheap hardware.

Black cloth covering pallets used as seating accommodation with "SUPER!" written in a comic-like spiky speech bubble, and a KDE-branded USB multi charger in the back. Supercharge your phone at the KDE booth.

On the evening of the first conference day KDAB celebrated their 20th anniversary by throwing a party with table football, VR games, and most importantly: free beer. A perfect opportunity to spread a stack of KDE Konqi beer coasters throughout the venue! However, the next morning I had to get up early as I got asked to participate in a panel discussion at 8 am on contributing to Qt. Surprisingly, despite the party the evening before we ran out of chairs in that room. Lars Knoll did a live demonstration on how to actually submit a Qt patch using Gerrit and then we all discussed the pros and cons of contributing upstream. Sadly, there was no recording.

A wooden bar counter with a "KDAB 20 years ahead" cup on it, as well as a bunch of KDE-branded beer coasters with Konqi on them. Konqi trying to protect KDAB’s precious wooden counter from beer stains.

Since the three of us were pretty busy running this large stall, there wasn’t much free time to sit in on any of the presentations. I did get a chance on the second day of the event to attend Ulf Hermann’s talk on the future of QML, which is a very important topic for the advancement of Plasma. Overall, the changes planned for QML 3 in Qt 6 sound reasonable, but given the sheer number of to-do items mentioned during the presentation, it is somewhat early to assess the impact on our software stack.

Just a few days from now I’ll be visiting Berlin once again, this time for the KDE Frameworks 6 sprint at the end of next week, kindly hosted by MBition. Are you using KDE Frameworks in your application, or are you even a contributor already? There’s lots of work to be done, so come join us and check out the wiki page for more information!

November 11, 2019

In my last post I talked about the new async simplemail-qt API that I wanted to add; yesterday I finished the work required for it.

SMTP (Simple Mail Transfer Protocol), as the name says, is a very simple but strict protocol: you send a command and MUST wait for the reply, which is rather inefficient, but it's the way it is. Having it async means I don't need an extra thread (plus some locking) just to send an email: no more GUI freezes, no more HTTP server that is stalled.

The new Server class has a state machine that knows which reply we are waiting for and which status code counts as success. Modern SMTP servers support PIPELINING, but it is rather different from HTTP pipelining, because you still have to wait for several replies before you can send further commands. In fact it only allows you to send the MAIL FROM command, the RCPT TO list and the DATA command at once; you then parse each reply before sending the mail data. If you send the email data before the DATA reply allows you to, the server will just close the connection.
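To illustrate the batching, here is a schematic (hedged) exchange; the exact reply texts and codes vary by server, but with PIPELINING the client sends exactly these three commands before reading any of the three replies:

```
C: MAIL FROM:<sender@example.com>
C: RCPT TO:<recipient@example.com>
C: DATA
S: 250 OK               (reply to MAIL FROM)
S: 250 OK               (reply to RCPT TO)
S: 354 Start mail input (reply to DATA)
C: ...message body...
C: .
S: 250 Message queued
```

Without pipelining, each of those commands would cost its own round trip.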

So in short we only save a few (but important) round trips. This new version also includes support for the RSET (reset) command. This is useful, for example, when you have a long list of emails and some of them have a malformed recipient address: you will get an error for that command, but instead of closing the connection and restarting the long SMTP handshake, you just RSET and try to send the next email.

Cutelyst master already uses this new class if you set ViewEmail::setAsync(true);. With that set you won’t get the SMTP "mail queued" reply, but you will be able to process more HTTP requests while waiting for the SMTP reply.

I plan to do a 2.0.0 stable release in the following weeks so if you use this library please test and suggest API changes that you may like.

Have fun!


Well, that was three interesting articles on the same topic on the same day: billionaires. Read in turn, they explain exactly why the Linux Desktop still has such a marginal market share, and why that’s not because we, who work hard on it, are failures who have been doing the wrong thing all the time. It is in the first place policies, bought with money, that allowed people to build monopolies, taxing individuals and so becoming ever more rich and powerful.

(Similarly, it’s not individuals who, through their choices, are destroying the planet; it is policies bought by the very rich, who somehow believe that their Florida resorts won’t sink, that they won’t be affected by burning up the planet so they can get richer. But that’s a digression.)

So, the first article, by Arwa Mahdawi, discusses the first part of this problem: with enough money, all policies are yours. It’s just a squib, not the strongest article.

Then Robert Reich, a former US secretary of labor, enumerates the ways people can become so exceedingly rich, and none of them involve being hard-working and successful:

  • Exploit a monopoly: this is illegal under the laws of the United States.
  • Exploit insider information. This is also illegal.
  • Buy a tax cut. This seemed uniquely USA’ian until the Dutch prime minister Rutte promised abolition of the dividend tax to Unilever. This would seem to be illegal as well, but IANAL.
  • Extort people who already have a lot of money. Extortion is illegal.
  • Inherit the money. This is the only legal way to become a billionaire.

Now the article entitled What Is a Billionaire, by Matt Stoller was posted to the Linux reddit today. Not surprisingly, many people completely didn’t get the point, and thought it was irrelevant for a Linux discussion forum, or was about capitalism vs socialism, or outdated Microsoft bashing.

However, what it is about is the question: why is Bill Gates not in jail for life, with all his wealth stripped away? He’s a criminal, and his crime has directly harmed us: the people working on free software, on the Linux Desktop.

So, to make things painfully clear: Bill Gates made it so that his company taxed every computer sold, no matter whether it ran Windows or not. If a manufacturer wanted to sell any computers running Windows, all the computers it sold were taxed by Microsoft. He got paid for the work a Linux distribution was doing, and the Linux distribution would not get that money.

That means there’s a gap twice the amount of this illegal tax between Microsoft and the Linux distribution. If a Linux distribution would want to earn what Microsoft earned on a PC sale, it would have to pay the Microsoft tax, and ask for its own fee.

This cannot be done.

And I know, this has been said often before, and discussed often before, and yeah, I guess, poor Bill Gates, if he hadn’t been bothered so badly with the hugely unfair antitrust investigation, he would also have been able to monopolize mobile phones, and the world would have been so much sweeter. For him, for certain.

I guess we didn’t do all that badly with the Linux Desktop market share being what it is. This is a fight that cannot be won.

Monopolies must be broken up. It’s the law, after all.

Interested in getting started with Kirigami development in a few minutes? Since version 5.63 of the Kirigami framework, there is an easy way to do so: an application template. The template facilitates the creation of a new application, using CMake as the build system, and linking to Kirigami dynamically as a plugin at runtime.

Let’s see how it works. As a starting point, we will use a virtual machine running the latest KDE Neon User edition. This is by no means a requirement; any Linux distribution with Kirigami 5.63 or later perfectly fits our needs. KDE Neon has been chosen just because it provides, by design, the latest Qt, KDE Frameworks and applications. Moreover, a virtual machine will let us play fearlessly with the system directories.

At first, we install the packages required:

sudo apt install qtbase5-dev build-essential git gcc g++ qtdeclarative5-dev qml-module-qtquick-controls libqt5svg5-dev qtmultimedia5-dev automake cmake qtquickcontrols2-5-dev libkf5config-dev libkf5service-dev libkf5notifications-dev libkf5kiocore5 libkf5kio-dev qml-module-qtwebengine gettext extra-cmake-modules libkf5wallet-dev qtbase5-private-dev qtwebengine5-dev qt5-default kirigami2-dev kdevelop kapptemplate

We are ready to convert the template to a working application. We will do it in two different ways:

  • Using the KAppTemplate tool
  • With KDevelop

The KAppTemplate way

KAppTemplate, the Qt and KDE template generator, consumes templates and generates source code. After launching KAppTemplate, on the second page of the wizard, we select Kirigami Application. The Kirigami template can be found under Qt > Graphical on the leftmost panel. We also provide a project name, e.g. hellokirigami.

Choose your project Choose your project template

On the last page of the wizard, we set the application directory and, optionally, some details about our application.

Project properties Set the project properties

After clicking Generate, the source code of our application is created; we are ready to start building it.

cd /home/user/src/hellokirigami
mkdir build
cd build
cmake ..
make -j4
sudo make install

Since the development environment is a virtual one, we are free to install our application into the system directories. Otherwise, when working on the host machine, a custom install prefix would be recommended.

Hello Kirigami Hello Kirigami

Using KDevelop

KDevelop is a full-featured IDE that integrates perfectly with application templates. On the Project menu, after clicking the New From Template menu item, the application template wizard is shown.

New from template Create a KDevelop project from template

We just select the Qt category on the leftmost panel and Kirigami Application on the right one, and set a project name, e.g. hellokirigami2.

Kirigami template Kirigami template on KDevelop

On the last screen of the template wizard, we may also select a version control system, if needed. Not this time.

Back in KDevelop, we may set a custom installation prefix. Since we are not going to install anything, we just select Debug as the Build type and don’t bother with installation prefixes.

KDevelop build dir Set the build directory

The Projects pane enables us to examine the source code file hierarchy, while on the right we can see the code of each file. To test our application, we Build and, finally, Execute it. Both actions can be found on the main toolbar.

KDevelop build exec Build and execute

Having clicked Execute, on the Launch Configurations dialog we Add a new configuration, selecting the src/hellokirigami2 project target.

Configure launch Set application target on KDevelop

To check how the application will run on Plasma Mobile, we have to tweak our environment a little. In particular, on the Run menu, we click the Configure Launches item. The configuration dialog will be displayed.

Configure environment Launch configurations on KDevelop

Then, we add the following to Environment:
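These are the variables commonly used to preview Kirigami applications in their mobile mode. The values below are taken from Kirigami's documented behavior, so treat them as my assumption here rather than a verbatim copy of the post's list:

```shell
# Make Qt Quick Controls (and thus Kirigami) render in mobile mode
export QT_QUICK_CONTROLS_MOBILE=1
export QT_QUICK_CONTROLS_STYLE=Plasma
```

With these set in the launch configuration's Environment, the same binary renders with mobile spacing and controls.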


Set environment variables Set mobile environment variables

We are prompted with the mobile version of our application.

Hello Kirigami Hello Kirigami

That’s all. It’s time to add your own code bits. Happy hacking!

Note: The pictures in this post are large and intricate. Click on any picture to see it full size!

Could you tell us something about yourself?

My name is Bryan Wong. I am a physics undergraduate who is trying to become a game developer.

Do you paint professionally, as a hobby artist, or both?

I am working on my own game, so I mainly use Krita to work on game assets. Since paintings are not really necessary for that, I am only a hobby artist when it comes to painting. I am more familiar with making game assets, and I cannot call myself a professional since I am just starting out.

What genre(s) do you work in?

I mainly use Krita for character spritesheets, tilemaps, UI design. When I make a painting, I don’t really choose the style. I just try to make something look good with a lot of experiments on the brushes.

Whose work inspires you most — who are your role models as an artist?

I followed many YouTubers to start learning digital art. I think most of the YouTube digital artists inspired me and taught me things. It is hard to choose one that inspired me the most.

How and when did you get to try digital painting for the first time?

The first time I tried digital painting was when I needed a menu screen for my game. And the painting was very ugly, no surprise. I was not using Krita at that time. I decided to improve my art. I checked a lot of discussions from different communities and a lot of them pointed me to Krita. So I downloaded it and gave it a try. I would say it was the real first time I tried making a digital painting. Eventually I made what I needed with Krita.

What makes you choose digital over traditional painting?

I had been doing traditional art for about 10 years before I switched to digital painting. To be honest, if resources allow, I prefer playing with traditional art, because a computer will not simulate the flexibility of a real brush, at least for now. However, digital art has its own features that traditional art can never have. First, digital art is digital: the paintings can be used by the computer for whatever you need. Second, some tools in digital painting, such as transform tools, perspective tools, layers, opacity and effects, are so powerful that traditional art will never have them.

How did you find out about Krita?

I found Krita by reading a lot of discussions. A lot of them claimed that Krita is the best program for drawing. So I gave it a try.

What was your first impression?

My first impression was a bit of confusion when I first ran Krita. I guess this is normal when one tries something new with a lot of functions inside. I just read the Krita manual and watched YouTube to learn how to use the program. It was easy and quite intuitive.

What do you love about Krita?

There are a lot of features that make me love Krita.

First, a lot of those features are very useful for game art, such as the clone array and grids and guides; these make creating tiles extremely smooth. I can also make a bunch of clone layers with a transform mask to generate spritesheets easily.

Second, the brush engine is powerful. It has masked brushes and textures. The soft round brush also lets you draw your own intensity curve to get interesting results.

Third, the developer support is excellent. Whenever I report a bug, the developer will respond quickly and will solve the problem. The team really cares about the program and user experience.

And many more…

What do you think needs improvement in Krita? Is there anything that really annoys you?

It used to be the performance issue, which was heavily improved in 4.2. Nothing really annoys me.

What sets Krita apart from the other tools that you use?

Krita has everything I need; it was the first program I tried where that was true. I gained success with Krita.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

This was the first time I managed to paint a distant view. I finally understood how to create a giant sense of space in the picture. It was my first cityscape that actually works, and my first painting that most of my friends loved.

What techniques and brushes did you use in it?

For a complicated painting, I usually start with a thumbnail first, then try to deal with lighting and details. The brush is mainly the round brush, with help from a set of palette knife brushes I downloaded from somewhere. The main technique is to use the lasso selector and transform tools to make hard edges for the buildings.

Where can people see more of your work?

Right now, I only post my work on the Krita forum as SymmetryWeapon. If I make a collection of my paintings in the future, I will put links in the signature.

November 10, 2019

Hello one more time! Last week I analyzed GNOME keyboard shortcuts because I’m in an endeavor to analyze keyboard shortcuts in most major DEs just so that I can provide insight into what the best defaults would be for KDE Plasma.

I must say, this one brings back memories, since XFCE was my second desktop environment, and it also yielded some nice insights, since a significant basis for comparison has already been established—my blog should target primarily Plasma users, after all, and GNOME, a keyboard-enabling environment, was already examined.

With that said, let’s start our analysis of XFCE keyboard shortcuts.


For testing XFCE, I installed the full xubuntu-desktop on my work machine, which runs Kubuntu 19.04 (with backports).

I also used a Xubuntu 19.10 live USB on both my home and work machines. Oh, how I missed that squeaky mascot placeholding the interactive area of whisker menu!

I checked MX Linux, Sparky Linux, Arco Linux, EndeavourOS and Void musl XFCE in VMs for comparison. Generally speaking, Xubuntu, MX Linux and Arco Linux had very different keyboard shortcuts, and I must say beforehand: the distro which comes closest to stock XFCE seems to be Void, which does not even include a panel by default (similarly to Openbox) and does not seem to change anything about XFCE keyboard shortcuts, whereas the most complete experience was Xubuntu. As it has the most polish and is arguably the most popular one, I’ll talk first about Xubuntu and comment on the others using it as the parameter.

As for sources: since the XFCE project itself does not provide lists of keyboard shortcuts, I initially used a random but minimally comprehensive page to acquire the typical modifier combos so that I could experiment. Afterwards I learned that XFCE, unlike Plasma and GNOME, stores its keyboard shortcuts in a single file, which made my life easier. The defaults are stored in /etc/xdg/xfce4/xfconf/xfce-perchannel-xml/xfce4-keyboard-shortcuts.xml; distros instead prepare a user config file stored in ~/.config/xfce4/xfconf/xfce-perchannel-xml/xfce4-keyboard-shortcuts.xml, even if no significant changes were made.
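Since it is a plain xfconf XML file, a quick way to eyeball the bindings is to grep it. The snippet below builds a tiny illustrative sample in that format (the two entries are my assumptions, not copied from a real install) and lists the keycombo-to-command bindings:

```shell
# Build a small sample in the xfce4-keyboard-shortcuts.xml format.
sample="$(mktemp)"
cat > "$sample" <<'EOF'
<channel name="xfce4-keyboard-shortcuts" version="1.0">
  <property name="commands" type="empty">
    <property name="default" type="empty">
      <property name="&lt;Primary&gt;Escape" type="string" value="xfce4-popup-whiskermenu"/>
      <property name="&lt;Super&gt;t" type="string" value="exo-open --launch TerminalEmulator"/>
    </property>
  </property>
</channel>
EOF

# Each binding is a string property: name holds the keycombo, value the command.
grep -o 'name="[^"]*" type="string" value="[^"]*"' "$sample"
```

Point the same grep at the real file under ~/.config/xfce4/xfconf/xfce-perchannel-xml/ to dump your actual shortcuts.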

I must mention two things. First, readers should pay particular attention to when I mention “XFCE” and when I mention “Xubuntu”, as those are not used interchangeably. Second, reading my previous post on workspace dimensions should make things clearer for those who have never read my blog or this series before.

Xubuntu’s immediate affairs

The first contact the user might have with XFCE after logging in is the menu, and in the case of people with keyboard-driven workflows, its associated key, Meta.

In XFCE, that is not the case. XFCE’s defaults include no keybinding for the menu, so Xubuntu adds its own. The whisker menu can be opened with Ctrl+Esc, which is an interesting choice because it was clearly made to compensate for an issue related to binding the Meta key to the menu.

What happens is that XFCE is unable to distinguish modifier behavior from typical keyboard-shortcut behavior; that is, any keyboard shortcut you set, no matter the keys used, behaves the same. If you bind Meta to open the menu, then with any other keyboard shortcut involving Meta, the menu will open before the keycombo is finished. Period.

Since binding Meta to the menu opens the menu every time someone wants to, for instance, open the terminal with Meta+T, this can quickly become annoying. By default, the Whisker menu steals focus when it opens, which means the terminal you tried to open will not accept input until you close the Whisker menu.

The terminal is also the first contact for several tech-savvy or experienced Linux users out there. Its keyboard shortcuts are interesting as well: both Ctrl+Alt+T and Meta+T are set by default. This was clearly in the devs' minds as they designed the desktop: as I'll show soon, Xubuntu uses an alphabetical logic to set its application shortcuts, and sets them with Meta; we can safely assume that Meta+T is the preferred keycombo, whereas Ctrl+Alt+T is present for two reasons: it's the default for XFCE, and it's standardized among desktop environments.

By default, Xubuntu only includes one workspace, which determines that its preference is a 0D model, as expected, since it’s the most basic of desktop usage; however, it is easy to create more, as long as you don’t exceed 32 workspaces. I created my usual 4 workspaces—Work, Folders, Browser and Media—and added a workspace switcher to my tray. By default, workspaces are displayed in a horizontal manner, very typical of 1D models, but a 2D model can be achieved by configuring the workspace switcher to have more than one row.

As I use a two-monitor setup both at home and at work, I first set up my layout the way I knew best: xfce4-display-settings, XFCE's own display configuration tool, which looks a lot better than arandr. A few days later, when searching for keyboard shortcuts, I found out that the XFCE default Meta+P activates a popup for screen configuration very similar to Plasma's kscreen! It was a very nice surprise. Even nicer is that both share the same keycombo.

Xubuntu – Respect!

As stated before, Xubuntu by default has only one workspace. This is likely a deliberate preference, given that the remaining distributions show it is perfectly possible to ship the panel's workspace switcher with multiple workspaces preset. In addition, XFCE includes absolutely no GUI-discoverable way to determine that navigation shortcuts even exist.

There is also currently no means in the window manager to move a window to another monitor, and consequently no keyboard shortcut for it. This means that the only way to move a window to another screen, at least currently, is pulling it with the mouse by the titlebar or the menubar; executing this with a keycombo requires the help of third-party tools like wmctrl and xdotool.
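Absent a native shortcut, one can emulate "send to other monitor" by repositioning the active window with wmctrl. The following is only a sketch under assumed conditions (two side-by-side monitors, each 1920 pixels wide); the function builds and prints the command rather than running it, so the geometry math stays visible:

```shell
#!/bin/sh
# Sketch only: assumes two side-by-side monitors, each 1920 px wide.
# wmctrl's -e argument is gravity,x,y,width,height; -1 keeps a field unchanged.
move_active_to_monitor() {
    x=$(( $1 * 1920 ))    # monitor 0 starts at x=0, monitor 1 at x=1920
    printf 'wmctrl -r :ACTIVE: -e 0,%s,0,-1,-1\n' "$x"
}

# Print the command that would push the focused window to the second screen;
# pipe the output to sh (or bind that to a keycombo) to actually run it:
move_active_to_monitor 1
```

Binding such a script to a keycombo through the shortcuts file gives you, in effect, the missing window-manager feature.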

Aside from that, controlling windows within one workspace in Xubuntu is rather impressive. By default, Alt+F5 will maximize a window horizontally, Alt+F6 vertically, Alt+F7 fully, while all three can be toggled. Alt+F8 shows a window in all workspaces, Alt+F9 minimizes the window, Alt+F11 turns the window fullscreen, including the absence of the titlebar for any application regardless of being CSD or SSD, functionality I had never even thought about before. I found this awesome for laptop usage. Alt+F12 pins a window to always stay above.

Xubuntu makes some sense in its use of the F# keys, though it has issues. Window sizing is neatly set on very close keys, first horizontal (as we read), then vertical (the parallel equivalent), then maximized (both at once), which is nice. F11 is traditional; many applications have used it before. What I dislike is that minimize, maximize and fullscreen are not paired together when they could be, as they make sense semantically, going from minimal to maximal screen space usage. I also don't think it is practical to have (Alt+)F5 bound, unlike XFCE's defaults (which don't have it), simply because it is easy to accidentally press F4 instead, and Alt+F4 closes the window, a potentially destructive behavior.

It is worth mentioning that XFCE’s defaults on that are very different. In XFCE, F6 sticks a window in all workspaces, F7 moves a window with the mouse, F8 resizes the window with the mouse, F9 minimizes, F10 maximizes, F11 is fullscreen and F12 pins a window to always stay above. Notice how F9 to F11 are neatly stacked, similarly to Xubuntu’s F5 to F7.

Alt+Shift+PgDown is also a unique treasure for this 0D environment: it sends the current window to the bottom layer. For instance, say you have three Thunar windows, 1, 2 and 3, where 2 is stacked over 3 and 1 is stacked over 2; that is, 1 is on the upper-most layer (selected) and 3 on the bottom-most. Pressing this keybinding puts 1 below 3, and 2 assumes the upper-most position. Pressing it again sends 2 below 1, which is below 3; pressing it once more sends 3 below 2 below 1, restoring the original layout and making it a perfect cycle. The behavior can also be read as “switch focus to the window one layer below”, which is easier to interpret, though not entirely accurate: if that were literally true, the command would never reach window 3, similarly to the dynamic/smart application switching seen in both Plasma and Windows.

Alt+Tab and Alt+Shift+Tab are nevertheless the preferred way to switch between applications within the same workspace with the keyboard, given that they are a two-key keycombo well known in any environment. There is no such thing as Meta+number to switch between applications on the panel's task manager; generally speaking, everything can be done with the mouse, but not everything with the keyboard, which is fine. It is not a keyboard-oriented environment, after all.

One thing I found clunky, and very similar to Plasma, was the presence of up and down snapping: Meta+Up/Down/Left/Right snaps the window to the respective part of the screen. The majority of screens still being shipped is 1366×768, where snapping to the upper- or bottom-most half is essentially useless. Even on my 1920×1080 screen I cannot find any use for it, though I presume some use is possible and that is why it exists. I see, however, no particular solution to that other than what GNOME did.

Anyway, all of this makes XFCE very suitable as a 0D environment. It also shows signs of a good 1D model preference, given the following 1-axis-oriented default XFCE keycombos:

Ctrl+Alt+Home is set to “Move window to previous workspace”, and Ctrl+Alt+End to “Move window to next workspace”. Ctrl+Alt+Left/Right move between workspaces horizontally. If PgUp and PgDown semantically imply Up/Down, Home/End can imply Left/Right by analogy. If previous is associated with Left and next is associated with Right, then it forms one horizontal axis.

Ctrl+Alt+Shift+Left/Right is a mysterious thing. Supposedly it should allow moving a window from one workspace to another, but I could not get it to work in any of the distributions tested. I can only think this is a bug in XFCE. Ctrl+Alt+Home/End work, so in practice their absence is not an issue, but as a parallel to Ctrl+Alt+Left/Right they would work very well, despite being 4-key keycombos.

Meta+Insert and Meta+Delete add/remove new workspaces, respectively. This is an XFCE default.

In addition, Ctrl+F# switches to a given # workspace, namely Ctrl+F4 would switch to the 4th workspace. This is typical of 1D environments in that the F# keys serve as a row much akin to the standard layout.

If we switch to a 2D model, however, things still work out just fine. In addition to previous/next, Ctrl+Alt+Up/Down allows for vertical movement between workspaces. The F# keys do lose their row semantics, but aside from that, the model shows no major issues.

As can be seen, all dimensions shown are respected by XFCE, thus being perfectly compliant with any workspace workflow (with the exception of 3D, which is a Plasma-exclusive feature), despite some shortcut issues.

Changing the subject from navigation to applications, default application keyboard shortcuts in Xubuntu generally use the first-letter rule assigned to functionality semantics, regardless of application name. That is: Meta+T = terminal, Meta+F = file manager, Meta+M = mail, Meta+E = editor, Meta+W = web browser, Meta+L = lock. Lock also makes sense since each bit of XFCE is an independent application, and named like so, xflock4.

This is possible thanks to exo-utils and libexo, a small set of utilities that works much like xdg-open, kde-open5 and gnome-open: it associates default applications with default functionality, so keybindings point at roles rather than at specific applications, providing a certain flexibility with keyboard shortcuts.
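The category launchers are what make this indirection work. As a minimal sketch of the first-letter scheme expressed as a lookup (the mapping simply mirrors the bindings listed above; TerminalEmulator, FileManager, MailReader and WebBrowser are exo-open's actual category arguments):

```shell
#!/bin/sh
# Map Xubuntu's Meta+<letter> scheme to exo-open category launchers.
# The function prints the command rather than running it, to keep the
# sketch inspectable on systems without XFCE.
launcher_for() {
    case "$1" in
        t) echo "exo-open --launch TerminalEmulator" ;;
        f) echo "exo-open --launch FileManager" ;;
        m) echo "exo-open --launch MailReader" ;;
        w) echo "exo-open --launch WebBrowser" ;;
        *) echo "no default-category launcher bound to '$1'" ;;
    esac
}

launcher_for t   # the command behind Meta+T
```

Because the binding targets a category rather than an application, changing your preferred terminal in the settings changes what Meta+T opens without touching the shortcut itself.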

The exceptions to this rule seem to be a sensible afterthought. I found no discernible explanation for why Meta+P = display and Meta+F1 = show mouse position; presumably Meta+P could refer to popup, but it seems unlikely. Meta+1 opens Parole, the music player, 2 opens Pidgin, the IM client, 3 opens LibreOffice Writer and 4 opens LibreOffice Calc: additional common applications set to easy-to-type keycombos. They make sense, and the reason for them is clearly that the first-letter rule is very limiting: many default programs share the same initial letter, and variants of the same application break the rule outright. L cannot be set to both Writer and Calc, for instance, so the first letter of the second word would be the expected fallback. Using Meta+1–4 is a simple solution for what's left while avoiding Meta+5–0, which are simply awful to type with one hand. Technically speaking, however, Meta+4 is already bad to type with one hand, so there's that.

Aside from Meta+first letter, XF86 keys are also available. For those unfamiliar with them: the function keys, typically reached by pressing a function modifier plus an F# key, or present as extra keyboard keys (such as Home, Volume+, Brightness-, Email, etc., not available on every keyboard), are canonically recognized in X11 as XF86 keysyms. XF86WWW, for instance, is the equivalent of the key with a browser icon. XF86AudioRaiseVolume, XF86AudioLowerVolume and XF86AudioMute should be the most common and familiar: if your keyboard has keys of this sort at all, it very likely includes these.
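As a sketch of wiring such keys up by hand (assuming xfconf-query, a PulseAudio setup with pactl, and the same commands/custom property layout shown earlier; distros normally preconfigure this for you):

```
xfconf-query -c xfce4-keyboard-shortcuts \
    -p '/commands/custom/XF86AudioRaiseVolume' \
    -n -t string -s 'pactl set-sink-volume @DEFAULT_SINK@ +5%'

xfconf-query -c xfce4-keyboard-shortcuts \
    -p '/commands/custom/XF86AudioMute' \
    -n -t string -s 'pactl set-sink-mute @DEFAULT_SINK@ toggle'
```

The XF86 keysym simply takes the place of a modifier+letter combo in the property name; anything bindable to Meta+T is equally bindable to a media key.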

For the sake of completeness: Alt+F1 opens the desktop's context menu, Alt+F2 opens the collapsed application finder (very similar to KRunner), Alt+F3 opens the application finder fully (very similar to the menu), and Alt+Space opens the popup menu for the current window. Those are XFCE defaults.

Ctrl+Alt+Esc, similarly to Plasma, invokes xkill, which kills any window instantly with a mouse click.

Unlike Plasma and GNOME, XFCE and Xubuntu are not striving to standardize Meta as a system/shell/desktop shortcut; as a matter of fact, this would go against the way Xubuntu has used Meta at least since 2014, from what I could find.

All in all, I very much liked the amount of thought put into keyboard shortcuts in Xubuntu 19.04.

Arco Linux – Too many options, too many heads

Like Xubuntu, Arco makes use of Ctrl+Esc in order to free Meta for applications, but the use of the first-letter rule is much more extensive than Xubuntu’s, especially since it uses Meta, Alt, Ctrl+Alt and Meta+Shift as modifiers.

Meta+M = music player (pragha), Meta+H = htop task manager in urxvt (which is inconsistent with Ctrl+Shift+Escape), Meta+V = volume control with pavucontrol, Meta+Esc = xkill (instead of Ctrl+Alt+Esc), Meta+Enter = terminal (akin to i3), Meta+X = eXit, a.k.a. logout, Meta+L = lock screen with slimlock instead of xflock, Meta+E = editor, namely Geany, Meta+R = rofi theme selector, Meta+C = toggling conky on and off, Meta+Q = quit application, again akin to i3.

Then there are the Meta+F# keys: Meta+F1 = browser, Meta+F2 = Atom, Meta+F3 = Inkscape, Meta+F4 = Gimp, Meta+F5 = Meld, Meta+F6 = VLC, Meta+F7 = VirtualBox, Meta+F8 = Thunar, Meta+F9 = Evolution, Meta+F10 = Spotify, Meta+F11 = fullscreen rofi, Meta+F12 = rofi. These seem to be arbitrarily defined, third-party tools bundled onto the F# keys (with the exception of Thunar), which, once again, feels like an afterthought.

There’s also the Ctrl+Alt+letter keys: Ctrl+Alt+U = pavucontrol, which is inconsistent with Meta+V, Ctrl+Alt+M = settings Manager, Ctrl+Alt+A = app finder, Ctrl+Alt+V = Vivaldi, Ctrl+Alt+B = Thunar (I see no letter association here), Ctrl+Alt+W = Atom, Ctrl+Alt+S = Spotify, Ctrl+Alt+C = Catfish, Ctrl+Alt+F = Firefox, Ctrl+Alt+G = Chromium, Ctrl+Alt+Enter = Terminal (like Meta+Enter).

Other keyboard shortcuts include Alt+F2 to execute gmrun, an application launcher similar to the application finder, Alt+N/P/T/F/Up/Down for Variety, the wallpaper changer, Ctrl+Shift+Escape to run the XFCE task manager, Ctrl+Shift+Print = gnome-screenshot, Meta+Shift+D to run dmenu, Meta+Shift+Enter to run Thunar, Ctrl+Alt+PgUp/PgDown to transition between conky layouts, and Ctrl+Alt+D to show the desktop. F4 (yes, by itself, not including Alt) opens the XFCE terminal as a dropdown, quite similarly to Yakuake.

All in all, I found Arco to be a mess. No single logic runs through the keyboard-driven UX (it can be the first letter of the application name or of its functionality, the first letter of a second word, the letter matching a CLI option, or a bundle based on how native the software is…), the modifiers have no discernible logic either (what distinguishes Ctrl+Alt, Meta+Shift and plain Meta? Only Alt is more or less consistent), the applications bound to F# keys are not memorizable at all and seem completely arbitrary, and, the main issue with all of that: the huge majority of Arco's custom changes CANNOT be seen in XFCE's keyboard settings tool, and only partially through the default conky file, which makes most of them effectively undiscoverable unless the user checks the preset user config file, which is also messy. But then again, aside from XFCE's default config file, all distros had messy user config files.

Additionally, Arco Linux includes 4 workspaces by default.

MX Linux – Not-so-custom keybindings for a very custom duck

One thing MX Linux does differently from Xubuntu is its usage of the menu, which is bound to Meta. Because of this, it uses no other Meta+key shortcut aside from Meta+P, in order to avoid annoying the user with the aforementioned issue. Meta+P behaves differently here, however: instead of a popup, it opens the display settings application directly.

I particularly found two keyboard shortcuts interesting in MX Linux.

Alt+F1, instead of opening the generally-not-so-useful desktop context menu, opens the mxmanual, basically the Help section for MX Linux, which is pretty friendly to new users, though the manual also lies on the desktop by default as an icon. It is also nice in that it uses F1, which is standardized as Help in many applications.

F4 (yes, by itself, not including Alt) opens the XFCE terminal as a dropdown, quite similarly to Yakuake. This is a particularity of MX Linux, which is nice in that it gives this heavily-customized distro a sense of identity. I have no idea why it would be F4, however.

Ctrl+Alt+M to find the mouse is nice in that it follows the first-letter rule, but aside from that, nothing else does this.

MX Linux patito feo did not have much in terms of keyboard shortcuts in comparison with its amount of user-friendly utilities.

Additionally, it includes 2 workspaces by default, and in a vertical manner, curiously enough. This is likely because the panel in MX Linux is set vertically on the left side of the screen.

Sparky Linux, Void Linux, EndeavourOS

All three did not do much in terms of keyboard shortcuts. If anything, their differences are mainly aesthetic or low-level. EndeavourOS provides a much more glossy and modern design, with transparency in the panel and whatnot; Sparky seemed to strive for a more traditionalist aspect, similar to Linux Mint, but more bland; Void ships XFCE as pure as one can get, with its clearly dated interface.

Sparky included 2 workspaces, Void included 4 workspaces, and EndeavourOS included 1 workspace by default. XFCE is so fitting with all three dimensions that these three distributions, shipping essentially default XFCE, managed to ship with three different dimensions.

Final Considerations

I am more and more inclined to believe that Ctrl+Alt+arrows are very suitable to navigate between workspaces. I also feel like the ability to switch between screens is becoming more relevant with the increasingly common multi-monitor setup.

The first-letter rule works with small environments, but can be overwhelming if the DE or distribution makes excessive use of preconfigured keybindings.

I wonder what semantic association led both XFCE and Plasma to use Meta+P for display settings.

Xubuntu worked around the challenges of 2D environments, and their compatibility with 0D and 1D environments, very well.

Verifying keyboard shortcuts in different distros was tiring as hell too.

Some weeks ago at the Open Source Summit & Embedded Linux Conference there was also a talk by David about using heaptrack and hotspot. Since these tools are extremely valuable, I thought I’d blog to make these tools a bit more visible in the KDE community. Have fun watching & happy debugging, and join the discussion on KDE’s subreddit :-)



I was going to attend the Linux App Summit, and even going to speak, about Krita and what happens to an open source project when it starts growing a lot, and what that would mean for the Linux desktop ecosystem and so on. But that's not going to happen.

There was really bad flooding in the south of France, which damaged the TGV track between Montpellier and Barcelona. When we went home after the 2018 Libre Graphics Meeting, we took the train from Barcelona to Paris, and noticed how close the water was.

Well, I guess it was too close. And this is relevant, because I had planned to come to Barcelona by train. It’s a relatively easy one-day trip if everything is well, and gives ten hours of undisturbed coding time, too. Besides, I have kind of promised myself that I’m not going to force myself to take planes anymore. Flying terrifies me. So I didn’t consider flying from Amsterdam for a moment — I was only afraid that other people would try to force me to fly.

Then I learned that there is a connection: I would take the train to Montpellier, then a bus to Béziers and then a train again. It would make the whole trip a two-day journey with, as recommended by the travel agency I bought the tickets from, a stop in Paris. That advice was wrong.

I should have taken my connecting train to Montpellier, got a hotel there, and continued the next day. At this point I was like, okay… I’m just going home. Which I am doing right now. I bet the trip back would be just as difficult, and my Spanish isn’t as good as my French, so I would have an even harder time getting people to tell me what to do exactly, when to travel and where to go.

Sorry, everybody.

We’re mid-cycle in Plasma 5.17 and still working hard to fix bugs and regressions, while planning for Plasma 5.18, our next LTS release! There’s also been continued work on our apps. Check it out:

New Features

Bugfixes & Performance Improvements

User Interface Improvements

How You Can Help

Qt 6 is around the corner! …And everything will need to be ported to use it. But don't worry, the 5 -> 6 transition promises to be a relatively smooth one, thanks to the Qt folks working hard on compatibility and us having already started porting our software away from deprecated APIs. On that subject, it's a great way to help out if you're into backend work and appreciate a clean codebase. Check out the Phabricator workboard for the KF6 transition to learn more!

More generally, have a look at and find out more ways to help be part of a project that really matters. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don’t have to already be a programmer, either. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

Finally, consider making a tax-deductible donation to the KDE e.V. foundation.

The branch naming has changed to try to accommodate for the stopping of the "KDE Applications" brand, it's now called

Make sure you commit anything you want to end up in the 19.12 releases to them

We're already past the dependency freeze.

The Freeze and Beta is this Thursday, November 14.

More interesting dates
November 28, 2019: KDE Applications 19.12 RC (19.11.90) Tagging and Release
December 5, 2019: KDE Applications 19.12 Tagging
December 12, 2019: KDE Applications 19.12 Release


P.S.: Yes, this release unfortunately falls in the middle of the debranding of "KDE Applications", and there are still a few things called "KDE Applications" here and there.

[*] There's a small issue with kwave; we're working on figuring it out.

November 09, 2019

It's been almost half a year since I mentioned how precompiled headers do (not) improve C++ compile times. Quite a long time, filled with doing other things, life, occasionally working on getting my patch production-ready and, last but definitely not least, abandoning that patch and starting from scratch again.
It turns out, the problems I mentioned last time had already been more or less solved in Clang. But only for C++ modules, not for precompiled headers. *sigh* I had really mixed feelings when I finally realized that. First of all, not knowing Clang internals that well, it took me quite a long time to get to this point figuring it all out, probably longer than it could have. Second, I've been using C++ modules when building Clang itself and while it's usable, I don't consider it ready (for example, sometimes it actually makes the build slower), not to mention that it's non-trivial to setup, not standardized yet and other compilers (AFAIK) do not yet support C++ modules. And finally, WTH has nobody else yet noticed and done the work for precompiled headers too? After all the trouble with finding out how the relevant Clang parts work, the necessary patches mostly border on being trivial. Which, on the other hand, is at least the good news.
And so I'm switching for LibreOffice building to my patched build of Clang. For the motivation, maybe let's start with an updated picture from the last time:

This is again column2.cxx, a larger C++ file from Calc. The first row is again compilation without any PCH involved. The second row is unpatched Clang with --enable-pch=full, showing again that way too large PCHs do not really pay off (here it does, because the source file is large, but for small ones such as bcaslots.cxx shown last time it makes things slower). In case you notice the orange 'DebugType' in the third row that looks like it should be in the second row too, it should be there, but that's one of these patches of mine that the openSUSE package does not have.
The third row is with one patch that does the PerformPendingInstantiations phase also already while building the PCH. The patch is pretty much a one-liner when not counting handling fallout from some Clang tests failing because of stuff getting slightly reordered because of this. Even by now I still don't understand why PCH generation had to delay this until every single compilation using the PCH. The commit introducing this had a commit message that didn't make much sense to me, the test it added works perfectly fine now. Presumably it's been fixed by the C++ modules work. Well, who cares, it apparently works.
The last row adds also Clang options -fmodules-codegen -fmodules-debuginfo. They do pretty much what I was trying to achieve with my original patch, they just approach the problem from a different side (and they also do not have the technical problems that made me abandon my approach ...). They normally work only for C++ modules, so that needed another patch, plus a patch fixing some problems. Since this makes Clang emit all kinds of stuff from the PCH into one specific object file in the hopes that all the compilations using the PCH will need that too but will be able to reuse the shared code instead, LibreOffice now also needs to link with --gc-sections, which throws away all the possibly problematic parts where Clang guessed wrong. But hey, it works. Even with ccache and Icecream (if you have the latest Icecream, that is, and don't mind that it "implements" PCHs for remote compilations by simply throwing the PCH away ... it still pays off).
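Roughly, the build flow described above would look something like the following. This is only a sketch: the file names are made up, -fmodules-codegen and -fmodules-debuginfo are cc1 options (hence -Xclang), and with stock Clang they only apply to C++ modules, so the whole thing presupposes the patches discussed here:

```
# Build the PCH once, asking Clang to also emit code for it:
clang++ -x c++-header pch.hxx -o pch.pch \
    -Xclang -fmodules-codegen -Xclang -fmodules-debuginfo

# Every translation unit then reuses the PCH:
clang++ -include-pch pch.pch -c column2.cxx -o column2.o

# One designated object file carries the code emitted from the PCH, and
# linking with --gc-sections discards whatever the compiler guessed wrong:
clang++ -shared -Wl,--gc-sections column2.o pch-code.o -o libexample.so
```

Here pch-code.o stands in for the object file built from the PCH itself; the point is that code shared by all users of the PCH is compiled once there instead of in every translation unit.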
So, that's it for a single compilation. How much does it help with building in practice? Time for more pretty colorful pictures:

This is a debug LO build on my 4-core (8 HT) Ryzen laptop, Library_sm is relatively small (36 source files), Library_scfilt is larger (154 source files). Plain 'Clang' means unpatched Clang(v9), 'Clang+' is with the PerformPendingInstantiations patch (i.e. the third row above), 'Clang++' is both patches (i.e. the fourth row above). The setting is either --enable-pch=base for including only system and base LO headers in the PCH, or --enable-pch=full for including everything that makes sense. It clearly shows that using large PCHs with GCC or unpatched Clang just doesn't make sense.
Note that GCC(v9) and MSVC(v2017) are there more as a reference than a fair comparison. MSVC runs on a different OS and the build may be slightly handicapped by some things taking longer in Cygwin/Windows. GCC comes from its openSUSE package, which AFAICT is built without LTO (unlike the Clang package, where it makes a noticeable difference).
And in case the graphs don't seem impressive enough, here's one for Library_sc, which with its 598 source files is too big for me to bother measuring it in all cases. This is the difference PCHs can make. That's 11:37 to 4:34, almost down to one third:
As for building entire LO from scratch, it can be like in the picture below (or even better). The effect there is smaller, because the build consists of other things than just building libraries, and some of the code built doesn't use PCHs. And it's even smaller than it could be, because I used --enable-pch=base, as that's what I've been using up to now (although now I'll switch to a higher level). That's about 1h42m without PCHs to 1h14m with unpatched Clang (27% saved), and 1h06m with patches (and the 8-minute difference is still 11% of the unpatched time). Not bad, given that this is the entire LO build. Those 6 minutes for ccache are there to show the maximum possible improvement (or rather nowhere near possible, since the compiler normally still has to do the work of actually compiling the code somehow).
In case you'd want to use this too, that's not up to me now. The patches are now sitting and waiting in the LLVM Phabricator. Hopefully somebody there still cares about PCHs too.
