August 19, 2019

The KMyMoney development team today announces the immediate availability of version 5.0.6 of its open source Personal Finance Manager.

Another maintenance release is ready: KMyMoney 5.0.6 comes with some important bugfixes. As usual, problems have been reported by our users and the development team fixed some of them in the meantime. The result of this effort is the brand new KMyMoney 5.0.6 release.

Despite even more testing we understand that some bugs may have slipped past our best efforts. If you find one of them, please forgive us, and be sure to report it, either to the mailing list or on our bug tracker.

From here, we will continue to fix reported bugs and work on adding many requested additions and enhancements, as well as further improving performance.

Please feel free to visit the overview page of our CI builds and maybe try out the latest and greatest by using a daily AppImage built from the stable branch.

The details

Here is the list of the bugs which have been fixed. A list of all changes between v5.0.5 and v5.0.6 can be found in the ChangeLog.

  • 408361 Hardly distinguishable line colors in reports
  • 410091 Open sqlite database under kmymoney versions >= 5.0
  • 410865 Access to German online banking requires product key
  • 411030 Attempt to move one split to new category moves all splits with same category

Here is the list of the enhancements which have been added:

  • The default price precision for the Indonesian Rupiah in new files has been raised to 10 decimals.

Lately there have been some issues with the Google integration in Kontact: it is no longer possible to add a new Google Calendar or Gmail account in Kontact because the login process fails. This is due to an oversight on our side which led to Google blocking Kontact for not complying with Google’s policies. We are working on resolving the situation, but it will take a little while.

Existing users should not be affected by this - if you already had Google Calendar or Gmail set up in Kontact, the sync should continue to work. It is only new accounts that cannot be created.

In the case of Gmail the problem can mostly be worked around when setting up the IMAP account in KMail: select the PLAIN authentication[1] method in the Advanced tab and use your email address and password. You may need to enable Less Secure Applications in your Google account settings in order to be able to log in with a regular email address and password.

If you are interested in the technical background of this issue, the problem comes from Google’s OAuth App Verification process. When developers want to connect their app to a Google service, they have to select which particular services their app needs access to, and sometimes even which data within each service they want to access. Google will then verify that the app is not trying to access any other data and that it is not misusing the data it has access to. This serves to protect Google users, who might otherwise approve apps that access their calendars or emails with malicious intent without realizing it.

When I registered Kontact I forgot to list some of the data points that Kontact needs access to. Google noticed this after a while and asked us to clarify the missing bits. Unfortunately I wasn’t able to react within the given time limit, and so Google preemptively blocked login for all new users.

I’m working on clarifying the missing bits and having Google review the new information, so hopefully the Google login will start working again soon.

  1. Despite its name, the PLAIN authentication method does not weaken security. Your email and password are still sent safely encrypted over the internet.

We have released Qt Visual Studio Tools 2.4 RC (version 2.4.0); the installation package is available on the Qt download page. This version features an improved integration of Qt tools with the Visual Studio project system, addressing some limitations of the current integration methods, most notably the inability to have different Qt settings for each project configuration and the lack of support for importing those settings from shared property sheets.

Using Qt with the Visual Studio Project System

The Visual Studio Project System is widely used as the build system of choice for C++ projects in VS. Under the hood, MSBuild provides the project file format and build framework. The Qt VS Tools make use of the extensibility of MSBuild to provide design-time and build-time integration of Qt in VS projects — toward the end of the post we have a closer look at how that integration works and what changed in the new release.

Up to this point, the Qt VS Tools extension managed its own project settings in an isolated manner. This approach prevented the integration of Qt in Visual Studio from fully benefiting from the features of VS projects and MSBuild. Significantly, it was not possible to have Qt settings vary according to the build configuration (e.g. having a different list of selected Qt modules for different configurations). This included the choice of Qt itself: only one version/build of Qt could be selected and would apply to all configurations, a significant drawback in the case of multi-platform projects.

Another important limitation that users of the Qt VS Tools have reported is the lack of support for importing Qt-related settings from shared property sheet files. This feature allows settings in VS projects to be shared within a team or organization, thus providing a single source for that information. Up to now, this was not possible to do with settings managed by the Qt VS Tools.

To overcome these and other related limitations, all Qt settings — such as the version of Qt, which modules are to be used or the path to the generated sources — will now be stored as fully fledged project properties. The current Qt Settings dialog will be removed and replaced by a Qt Settings property page. It will thus be possible to set the values of all Qt settings according to configuration, as well as import those values from property sheet files.


A closer look

An oversimplified primer might describe MSBuild as follows:

  • An MSBuild project consists of references to source files and descriptions of actions to take in order to process those source files — these descriptions are called targets.
  • The build process runs in the context of a project configuration (e.g. Debug, Release, etc.). A project may contain any number of configurations.
  • Data associated with source files and the project itself is accessible through properties. MSBuild properties are name-value definitions, specified per configuration (i.e. each configuration has its own set of property definitions).

Properties may apply to the project itself or to a specific file in the project, and can be defined globally or locally:

  • Project scope properties are always global (e.g. the project’s output directory or target file name).
  • Properties applying to source files can be defined globally, in which case the same value will apply to all files (e.g. default compiler warning level is defined globally at level 3).
  • Such a global, file-scope definition may be overridden for a specific file by a locally defined property with the same name (e.g. one of the source files needs to be compiled with warning level 4).
  • Global definitions are stored in the project file or imported from property sheet files.
  • Local property definitions are stored in the project file, within the associated source file references.
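As a sketch of how these definitions might look in an MSBuild project file (file and property values here are illustrative, not taken from an actual project), a global file-scope property can be overridden locally for a single source file:

```xml
<!-- Simplified sketch of a VS C++ project file. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- Global, file-scope definition: warning level 3 for all C++ sources -->
  <ItemDefinitionGroup>
    <ClCompile>
      <WarningLevel>Level3</WarningLevel>
    </ClCompile>
  </ItemDefinitionGroup>
  <ItemGroup>
    <!-- Local override: this one file is compiled with warning level 4 -->
    <ClCompile Include="strict.cpp">
      <WarningLevel>Level4</WarningLevel>
    </ClCompile>
    <ClCompile Include="main.cpp" />
  </ItemGroup>
</Project>
```

Property sheets imported into the project behave like the global definitions above, which is what makes sharing them across projects and teams possible.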

The Qt Visual Studio Tools extension integrates with the MSBuild project system by providing a set of Qt-specific targets that describe how to process files (e.g. a moc header) with the appropriate Qt tools.

The current integration has some limitations, with respect to the capabilities of the MSBuild project system:

  • User-managed Qt build settings are copied to project properties on change. Given this one-way synchronization, project properties may become out-of-sync with the corresponding Qt settings.
  • The value of the Qt build settings is the same for all configurations, e.g. the same Qt build and modules will be used, regardless of the selected configuration.
  • It is not possible to override properties in generated files like the meta-object source code output of moc.
  • Qt settings can only be stored in the project file. As such, it is not possible to import Qt definitions from shared property sheets, e.g. a common Qt build shared across several projects.

As discussed above, the solution for these limitations has been to make Qt settings fully fledged project properties. In this way, Qt settings will be guaranteed in-sync with all other properties in the project, and have the possibility of being defined differently for each build configuration. It will be possible to import Qt settings from property sheets, and the property page of Qt tools that generate C++ code, like moc, will now allow overriding compiler properties in generated files.
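Under this approach, a shared property sheet carrying Qt settings could look roughly like the following sketch. The property names QtInstall and QtModules are meant to illustrate the idea of per-configuration Qt properties, not as the exact names the extension uses:

```xml
<!-- Hypothetical .props property sheet; property names are illustrative. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- A different Qt build per platform, now expressible as ordinary properties -->
  <PropertyGroup Condition="'$(Platform)'=='x64'">
    <QtInstall>msvc2017_64</QtInstall>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Platform)'=='Win32'">
    <QtInstall>msvc2017</QtInstall>
  </PropertyGroup>
  <!-- Shared across all configurations -->
  <PropertyGroup>
    <QtModules>core;gui;widgets</QtModules>
  </PropertyGroup>
</Project>
```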

The post Qt Visual Studio Tools 2.4 RC Released appeared first on Qt Blog.

Could you tell us something about yourself?

Hi, my name is Chayse Goodall. I am 14 years old. I just draw for fun!

Do you paint professionally, as a hobby artist, or both?

I do animate and draw for a hobby. My artwork improves little by little, but I don’t think it’ll be good enough for a profession!

What genre(s) do you work in?

I don’t know what my style is, but people tell me that I have an anime style.

Whose work inspires you most — who are your role models as an artist?

I have many inspirations, two of whom were my art teachers, and one of whom was my math tutor. They all motivate me to keep going with my progress.

How and when did you get to try digital painting for the first time?

I tried digital art in 2015. I only had a phone at the time, so it was hard at first. It’s a million times better now, and I have a touch screen computer, so I’ve been trying different art programs!

What makes you choose digital over traditional painting?

There are more tools, and there is an undo button!

How did you find out about Krita?

I was looking up free art programs, and stumbled across this program.

What was your first impression?

My experience was great! It was easy to follow once you get the hang of it!

What do you love about Krita?

All the brushes and how easy it is to save.

What do you think needs improvement in Krita? Is there anything that really annoys you?

It is a bit confusing and it looks complex to newcomers.

What sets Krita apart from the other tools that you use?

It has amazing features and it doesn’t require me to “sign up” or pay for anything!

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I only made one picture. She doesn’t have a name, she was just a test character to see how good the program was. (It came out great!)

What techniques and brushes did you use in it?

I normally draw the sketch first in a dark red color. Then I draw the plain body in a light green. I sketch the clothes, hair, and accessories on in a neon color.

I just use the pen for coloring and shading.

Where can people see more of your work?

My instagram- panstables
My youtube- Nubbins

It is difficult to write the chronicle of an event you did not actively take part in. This year I could not attend for personal reasons, but that does not mean I am not aware of a good part of the event, since, fortunately and thanks to technology, effective means of communication exist. So here is the chronicle of Saturday afternoon at Akademy-es 2019 in Vigo which, while we wait for the videos (this year it seems they will finally be good), is the closest I can get to my friends.

Saturday afternoon at Akademy-es 2019 in Vigo

Yes, I know, I wasn’t there, but my «little birds» in Vigo kept me well informed about the event. Here is the summary of Saturday afternoon at Akademy-es 2019 in Vigo, following the program.


16:30 – 17:30  Lightning talks

The afternoon starts with quick talks of less than 5 minutes, giving brief sketches of projects and new technologies.

  • Albert Astals opens the lightning talks by preparing us for the future version 6 of Qt, expected at the end of 2020.
  • He is followed by Aleix Pol, who introduces us to AppStream and the various technologies that use its metadata: Discover, Snap, Flatpak, F-Droid, etc.
  • Antonio Larrosa takes over with two lightning talks. In the first he tells us about the two main openSUSE distributions: Leap and Tumbleweed. In the second he talks about MusicBrainz, the free and universal music catalogue, explains how to use Picard to add metadata to our music, and presents his personal project, Bard, based on these technologies.
  • Next, Iraisy Figueroa tells us about Bricolabs, a ‘maker’ space for development with free technologies located in the Domus in A Coruña, and about some of the projects they have worked on and are working on.
  • Then Ramón Villaverde talks about FreeWear, from which he and Xulia Barros have, since 2006, been making textile products for free software projects such as KDE, Python or Arch Linux, feeding revenue back into those projects.
  • Daniel Sánchez closes, talking about Vigo Tech Alliance, an alliance of technology-themed groups from Vigo, and in particular about Python Vigo, responsible for the PyDay Galicia events.

17:35 – 18:10  Introduction to Linux hardening – Paula de la Hoz, cybersecurity analyst

Paula de la Hoz, cybersecurity analyst and co-founder of the Interferencias association for the defense of digital rights and privacy, explains what «hardening» is, gives us an introduction to applying it on Linux, and shows us a practical demonstration.


Break and group photo



18:30 – 19:05  Plasma Mobile, next steps – Aleix Pol, Vice President of KDE e.V.

In a new edition of his talk on Plasma Mobile, Aleix Pol brought us up to date on the progress of this project, whose goal is to bring KDE Plasma and its applications to mobile devices.

Aleix walked us through the last months of work on the project, which have brought numerous advances, and explained how they are building on different base projects (Halium, Postmarket OS…) in order to reach new devices.

He finished by listing the technical challenges the project faces, the devices that will arrive on the market with Plasma Mobile as an OS option, and the options we have for trying out Plasma Mobile without owning a specific device.

19:10 – 19:45  Mobile application development with Expo and React Native – Anggella Fortunato, developer at Codeko

Anggella Fortunato, a developer at Codeko, shows us a way to build mobile applications quickly with JavaScript using React Native, a software framework created by Facebook, and Expo, which makes it even easier to create and distribute mobile applications with React Native.


To end the afternoon, the attendees moved to the restaurant in Vigo where the official dinner would be held and where the members of KDE España would hold their Annual Assembly.

August 18, 2019

At the end of June I attended the annual Plasma sprint, held this year in Valencia in conjunction with the Usability sprint. And in July we organised, on short notice, a KWin sprint in Nuremberg, directly following the KDE Connect sprint. Let me talk you through some of the highlights and what I concentrated on at these sprints.

Plasma sprint in Valencia

It was great to see many new faces at the Plasma sprint. Most of these new contributors were working on the UI and UX of Plasma and the KDE Apps, and we definitely need some new blood in these areas. KDE's Visual Design Group, the VDG, thinned out over the last two years because some leading figures left. But now, seeing new talented and motivated people joining as designers and UX experts, I am optimistic that there will be a revival of the golden time of the VDG that brought us Breeze and Plasma 5.

Regarding technical topics, there is always a wide field of different challenges and technologies to combine at a Plasma sprint. From my side I wanted to discuss current topics in KWin, but of course not everyone at the sprint works directly on KWin, and some topics require deeper technical knowledge of it. Still, there were some fruitful discussions, in particular with David, who was the second KWin core contributor present besides me.

My work on dma-buf support in KWin and KWayland can be counted as a direct product of the sprint. I started working on it at the sprint mostly because it was a feature that Plasma Mobile developers had been requesting for quite a long time; they need it on some of their devices to get them to work. But in general it should also improve performance and energy consumption in our Wayland session on many devices. As always, such larger features need time, so I was not able to finish them at the sprint. But last week I landed them.

Megasprint in Nuremberg

At the Plasma sprint we talked about the current state of KWin and what our future goals should be. I wanted to talk about this some more, but the KWin core team was sadly not complete at the Plasma sprint. It was Eike's idea to organize a separate sprint just for KWin, and I took the next best opportunity to do so: as part of the KDE Connect and Onboarding sprints in the SUSE offices in Nuremberg just a few weeks later. Because of the size of three combined sprints, we jokingly called the whole endeavor the Megasprint.

KDE Connect sprint

I was there one or two days earlier to also attend the KDE Connect sprint. This was a good idea because the KDE Connect team needs us to provide some additional functionality in our Wayland session.

The first feature they rely on is a clipboard management protocol to read out and manipulate the clipboard via connected devices. This is something we want to have in our Wayland session in general anyway, because without it we cannot provide a clipboard history in the Plasma applet, and a clipboard selection would be lost as soon as the client providing it is closed. That can be intentional, but in most cases you expect at least simple text fragments to still be available after the source client quits.

The second feature is fake input of keyboard and mouse events from other KDE Connect linked devices. Fake input via keyboard in particular is tricky. My approach would be to implement the protocols developed by Purism for virtual keyboards and input methods. Implementing these looks straightforward at first; the tricky part appears when we look at the current internal keyboard input code in KWayland and KWin: there is no support yet for multiple seats, or for one seat with multiple keyboards at the same time. But this is a prerequisite for virtual keyboards if we want to do it right, including support for different layouts on different keyboards.

KWin sprint

After the official beginning of the KWin sprint we went through a long list of topics. As this was the first KWin sprint in years, or maybe ever, there was a lot to talk about, from small code style issues we needed to agree on up to large long-term goals for where our development efforts should concentrate in the future. We also discussed current topics, and one of the bigger ones is for sure my compositing rework.

But in the overall picture this again is only one of several areas we need to put work in. In general it can be said that KWin is a great piece of software with many great features and a good performance but its foundations have become old and in some cases rotten over time. Fixes over fixes have been put in one after the other increasing the complexity and decreasing the overall cohesion. This is normal for actively used software and nothing to criticize but I think we are now at a point in the product life cycle of KWin to either phase it out or put in the hours to rework many areas from the ground up.

I want to put in the hours, but on the other hand, in light of possible regressions with such large changes, the question arises whether this should be done dissociated from normal KWin releases. No decision has been taken on that yet.

Upcoming conferences

While the season of sprints for this year is over now there are some important conferences I will attend and if you can manage I invite you to join these as well. No entry fee! In the second week of September the KDE Akademy is held in Milan, Italy. And in the first week of October the X.Org Developer's Conference (XDC) is held in Montreal, Canada. At XDC I have two talks lined up myself: a full length talk about KWin and a lightning talk about a work-in-progress solution by me for multi DPI scaling in XWayland. And if there is time I would like to hold a third one about my ongoing work on auto-list compositing.

In the beginning I planned only to travel to Canada for XDC but just one week later the WineConf 2019 is held close to Montreal, in Toronto, so I might prolong the stay a bit to see how or if at all I as a compositor developer could help the Wine community in achieving their goals. To my knowledge this would be the first time a KWin developer attends WineConf.

It is difficult to write the chronicle of an event you did not actively take part in. This year I could not attend for personal reasons, but that does not mean I am not aware of a good part of the event, since, fortunately and thanks to technology, effective means of communication exist. So here is the chronicle of Saturday morning at Akademy-es 2019 in Vigo which, while we wait for the videos (this year it seems they will finally be good), is the closest I can get to my friends.

Saturday morning at Akademy-es 2019 in Vigo

Yes, I know, I wasn’t there, but my «little birds» in Vigo kept me well informed about the event. Here is the summary of Saturday morning at Akademy-es 2019 in Vigo, following the program.

10:00 – 10:35  The Mordor of Cybersecurity – Belén Pérez, network and cybersecurity engineer at Balidea and Galicia coordinator of the Centro de Ciberseguridad Industrial

Belén Pérez sheds light on the precarious state of computer security in industrial environments: environments in which a lack of information, training or awareness tends to foster bad practices, with disastrous effects that can even sink companies.

10:40 – 10:55  KDE España, what for? – Adrián Chaves, Galician translator for KDE

Adrián Chaves summarises the current activity of KDE España, centred on communication and promotion.

11:00 – 11:15  Challenges for KDE and for KDE España – Antonio Larrosa, President of KDE España

He is followed by Antonio Larrosa, who lists some of the challenges the KDE community faces: important technological challenges, but also challenges in the business sector and “human” challenges that we must not neglect. He also talks about additional challenges that concern KDE España specifically.

11:35 – 12:10  Creating a simple application with Kirigami – Aleix Pol, Vice President of KDE e.V.

Aleix Pol summarises the history, the success stories and the workings of Kirigami, a software framework that lets us create applications that adapt to all kinds of devices, operating systems and environments.


12:15 – 12:50  The importance of promotion – Rubén Gómez, member of Hacklab Almería

Rubén Gómez closes his trilogy of talks about promotion in free software. In this talk he reveals the inner workings of KDE’s promotion group and its successes, but also wonders aloud why our goal of conquering the world, or at least the neighbour on the fifth floor, still seems so far away, and what we should do to bring it closer.

12:55 – 13:30  Python and Qt – José Millán, Treasurer of KDE España

José Millán presents the options we have for combining Python, the popular high-level programming language, with Qt, the software framework for developing cross-platform applications, and weighs the similarities and differences between these options to help us decide.

To end the morning, the attendees enjoyed a guided tour of MARCO, the Museo de Arte Contemporáneo de Vigo.

Get ready for week 84 in KDE’s Usability & Productivity initiative! 84 weeks is a lot of weeks, and in fact the end is in sight for the U&P initiative. I’d say it’s been a huge success, but all good things must come to an end to make room for new growth! In fact, KDE community members have submitted many new goals, which the community will be able to vote on soon, with the three winners being unveiled at Akademy next month.

But fear not, for the spirit of the Usability & Productivity initiative has suffused the KDE community, and I expect a lot of really cool U&P related stuff to happen even after the initiative has formally ended–including the long-awaited projects of PolicyKit support and mounted Samba and NFS shares in KIO and Dolphin! These projects are making steady progress and I hope to have them done in the next few months, plugging some longstanding holes in our software.

And finally, I’ll be continuing these weekly blog posts for the foreseeable future! It’s just too much fun to stop. Anyway, here’s what we’ve got this week:

New Features

Bugfixes & Performance Improvements

User Interface Improvements

If you’re getting the sense that KDE’s momentum is accelerating, you’re right. More and more new people are appearing all the time, and I am constantly blown away by their passion and technical abilities. We are truly blessed by… you! This couldn’t happen without the KDE community–both our contributors for making this warp factor 9 level of progress possible, and our users for providing feedback, encouragement, and being the very reason for the project to exist. And of course, the overlap between the two allows for good channels of communication to make sure we’re on the right track.

Many of those users will go on to become contributors, just like I did once. In fact, next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a tax-deductible donation to the KDE e.V. foundation.

The previous month’s been a ride!

At the beginning of this month, I started testing out the new producer, as I had a good rough structure for the producer code and was only facing a few minor problems. Initially, I was unclear about how exactly the producer is going to be used by the titler, so I took a small step back and spent some time figuring out how kdenlivetitle, the producer currently in use, works.

Initially, I faced integration problems (the ones you’d normally expect) when I tried to make use of the QmlRenderer library for rendering and loading QML templates, and most of them were resolved by a simple refactoring of the QmlRenderer source code. To give an example, the producer traditionally stores the QML template in global variables, taken as a character pointer argument (again, traditional C). The QmlRenderer library takes a QUrl as its parameter for loading the QML file, so to solve this all I had to do was overload the loadQml() method with one that could accommodate the producer’s needs, which worked perfectly fine. As a consequence, I also had to further compartmentalise the rendering process, so now we have three methods which run sequentially when we want to render something using the library: initialiseRenderParams() -> prepareRenderer() -> renderQml().


At around the beginning of the 2nd week, I resumed testing the producer code. I used melt for this purpose:

melt qml:~/path/to/test.qml

and now I was faced with a blocker that would keep me busy for a little more than a week – a SEGFAULT.

The SIGSEGV signal came from QOpenGLContext::create() – the method tries to create an OpenGL context for rendering (done while constructing the QmlRenderer class) – and the bug was quite weird in itself. Initially, I thought it might be due to something related to QObject ownership and tried putting a wrapper class around my producer wrapper (both a wrapper for the renderer class, and a Qt wrapper for the producer wrapper itself – which might sound stupid), and the code still produced a SIGSEGV. The next thing I could think of was that maybe it was due to OpenGL, and I felt more confident of that after I found we have had issues with OpenGL context creation and thread management before, since an OpenGL context must be created and its functions called on the same thread.


The problem was finally resolved (thank you JB), and it was not due to OpenGL: it was simply because I hadn’t created a QApplication for the producer (which is necessary for Qt producers). The whole month’s been a steep learning curve, definitely not easy, but I enjoyed it!

Right now I have a producer which is almost complete and which, with a little more tweaking, will hopefully be put to use. I’m still facing a few minor issues which I hope to resolve soon to get a working producer. Once we get that, I can start work on the Kdenlive side. Let’s hope for the best!

August 17, 2019

The KDE community’s large software product releases – Frameworks, Plasma, and Applications – come down the turnpike with distressing regularity. Like clockwork, some might say. We all know Plasma is all about clocks.

Screenshot of About dialogs

Recent releases were KDE Frameworks 5.61 and KDE Applications 19.08. These have both landed in the official FreeBSD ports tree, after Tobias did most of the work and I pushed the big red button.

Your FreeBSD machine will need to be following current ports – not the quarterly release branches, since we don’t backport to those.

All the modern bits have arrived, maintaining the KDE-FreeBSD team’s commitment to up-to-date software for the FreeBSD desktop. The one thing we’re currently lagging on is Qt 5.13. There’s a FreeBSD problem report tracking that update.

Updating from previous

As usual, pkg upgrade should do all the work and get you a fully updated FreeBSD desktop system with KDE Plasma and the whole bit.

I ran into one issue though, related to the simultaneous upgrade of Akonadi and MySQL. I ended up in a situation where Akonadi wouldn’t start because it needed to update the database, and MySQL wouldn’t start because it couldn’t load the database tables.

Since Akonadi functions as a cache to my IMAP server and local maildirs, and I don’t keep any Akonadi-style metadata anywhere, my solution was to move the “old” akonadi dirs aside, and just let it get on with it: that means re-downloading and indexing a lot of mail. For people with truly gigantic collections, this may not be acceptable, so I’m going to suggest, without actually having tried it:

  • ZFS snapshot your system
  • Stop Akonadi
  • Upgrade MySQL
  • Start and stop Akonadi
  • Upgrade Akonadi
  • Start Akonadi again
  • Drop the snapshots
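Sketched as shell commands, under the assumption of a ZFS dataset named zroot/usr/home and the MySQL 5.7 server package (both names are examples; adjust to your setup), the untried procedure above might look like:

```shell
# Hypothetical dataset and package names; adapt before running.
zfs snapshot -r zroot/usr/home@pre-upgrade   # safety net: ZFS snapshot first
akonadictl stop                              # stop Akonadi
pkg upgrade -y mysql57-server                # upgrade MySQL
akonadictl start && akonadictl stop          # let Akonadi migrate its database once
pkg upgrade -y akonadi                       # upgrade Akonadi itself
akonadictl start                             # start Akonadi again
zfs destroy -r zroot/usr/home@pre-upgrade    # drop the snapshot once all is well
```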

I haven’t seen anyone else in the KDE-FreeBSD support channels mentioning this issue, so it may be an unusual situation.

Missing functionality

The kio-gdrive port doesn’t build. This is an upstream issue, where the API in 19.08 changed but there’s no corresponding kio-gdrive release to chase that API. I can see it’s fixed in git master, just not released.

All the other bits I use daily (konsole, kate, kmail, falkon) work fine.

It is a great idea to encrypt files on the client side before uploading them to an ownCloud server if that server is not running in a controlled environment, or if one just wants to act defensively and minimize risk.

Some people think it is a great idea to include the functionality in the sync client.

I don’t agree, because it combines two very complex topics into one code base and makes the code difficult to maintain. The risk of ending up with a code base that nobody is able to maintain properly any more is high. So let’s avoid that for ownCloud and look for alternatives.

A good way is to use a so-called encrypted overlay filesystem and let ownCloud sync the encrypted files. The downside is that you cannot use the encrypted files in the web interface, because it cannot easily decrypt them. To me, that is not overly important, because I want to sync files between different clients, which is probably the most common use case.

Encrypted overlay filesystems put the encrypted data in one directory called the cipher directory. A decrypted representation of the data is mounted to a different directory, in which the user works.

That is easy to set up and use, and in principle it also works well with file sync software like ownCloud, because it does not store the files in one huge container file that needs to be synced whenever a single bit changes, as other solutions do.

To use it, the cipher directory must be configured as the local sync dir of the client. If a file is changed in the mounted dir, the overlay filesystem changes the encrypted files in the cipher dir. These are then synced by the ownCloud client.

One of the solutions I tried is CryFS. It works nicely in general, but is unfortunately very slow together with ownCloud sync.
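A minimal sketch of that setup with CryFS might look like this. The directory names and the server URL are hypothetical, and CryFS prompts for a password interactively on first use:

```shell
# The cipher dir holds the encrypted blocks; the mount dir is where you work.
mkdir -p ~/owncloud/cipher ~/private

# Mount the encrypted overlay filesystem (CryFS creates it on first run).
cryfs ~/owncloud/cipher ~/private

# Point the sync client at the *cipher* directory, not the mounted one,
# e.g. with the command-line client:
owncloudcmd ~/owncloud/cipher https://cloud.example.com/remote.php/webdav/
```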

The reason is that CryFS chunks all files in the cipher dir into 16 kB blocks, which are spread over a set of directories. That is very beneficial for privacy, because file names and sizes cannot be reconstructed from the cipher dir, but it hits one of the weak spots of ownCloud sync: ownCloud is traditionally a bit slow with many small files spread over many directories. This shows dramatically in a test with CryFS: adding eleven new files with an overall size of around 45 MB to a CryFS filesystem directory makes the ownCloud client upload for 6:30 minutes.
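A back-of-the-envelope calculation makes the effect plausible: at 16 kB per block, 45 MB of new data turns into thousands of small files for the sync client to propagate (ignoring CryFS metadata blocks):

```shell
# 45 MB of new data at 16 kB per block:
echo $((45 * 1024 / 16))   # roughly 2880 blocks to upload
```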

Adding another four files with a total size of a bit over 1MB results in an upload of 130 files and directories, with an overall size of 1.1 MB.

A typical change use case, like editing an existing office text document locally, is not that bad. Here, CryFS splits an 8.2 kB LibreOffice text document into three 16 kB files in three directories. When one word is inserted, CryFS needs to create three new dirs in the cipher dir and upload four new 16 kB blocks.

My personal conclusion: CryFS is an interesting project. It has nice integration into the KDE desktop with Plasma Vault. Splitting files into equal-sized blocks is good because it does not allow guessing data based on names and sizes. However, for syncing with ownCloud, it is not the best partner.

If there is a way to improve the situation, I would be eager to learn about it. Maybe the size of the blocks can be increased, or the number of directories limited?
Also, the upcoming ownCloud sync client version 2.6.0 again brings optimizations in the discovery and propagation of changes; I am sure that will improve the situation.

Let’s see what other alternatives can be found.

I won’t say that I am done with the Magnetic Lasso yet, but the results are a lot better now, to be honest. Take a look at one of the tests that I did:

A couple of days ago the official release announcement went out; yesterday I presented the improvements in Dolphin, Gwenview, Okular and Kate, and today it is time to talk about more new features in KDE Applications 19.08: Konsole, Spectacle, Kontact and Kdenlive.

More new features in KDE Applications 19.08

August 15, 2019 was the date chosen by the KDE developers to release KDE Applications 19.08 and to prepare a long post explaining its new features, which became its official announcement.

It is time to go through them in depth on the blog, in this second part of the article dedicated to them. Yesterday we covered Dolphin, Gwenview, Okular and Kate; today it is the turn of Konsole, Spectacle, Kontact and Kdenlive.

Konsole

KDE's terminal emulator, Konsole, brings few but very eye-catching new features:

  • The main console view can now be split both vertically and horizontally, and these subdivisions can themselves be split again as many times as you like.
  • In addition, the panes can be dragged from one side to another to rearrange them so that they better fit your workflow.
  • Finally, the Settings window has received an overhaul to make it simpler and easier to use.



Spectacle is KDE's screenshot application, which little by little is surpassing (if it has not already) the classic KSnapshot, KDE's previous screen capture tool. Its main new features are:

  • When taking a screenshot with a delay, a countdown of the remaining time is shown both in the window title and in its Task Manager icon.
  • Also for delayed screenshots, a progress bar is shown in the task manager, and the "Take a new Screenshot" button turns into a "Cancel" button to stop the capture.
  • When the screenshot is saved, a message appears that lets us open the image or open the folder containing it.



Kontact, the personal information manager that includes email, calendar and contacts, comes loaded with new functionality:

  • New colorful Unicode emoji.
  • Markdown support in the email editor.
  • Integration with grammar checkers such as LanguageTool and Grammalecte, which help us review and correct our texts.
  • When scheduling events from an email invitation in KMail, the email is no longer deleted after replying.
  • It is now possible to move an event from one calendar to another from the KOrganizer event editor.
  • KAddressBook now lets us send SMS messages to contacts through KDE Connect, improving the integration between your desktop and your mobile devices.


To finish the round-up of the eight applications that received the most new features, it is time to talk about Kdenlive, KDE's video editing application, which offers few visible novelties, since the work has been under the hood.

  • A new set of keyboard and mouse shortcut combinations has been added to help us be more productive.
  • Usability has been improved so that 3-point editing operations are consistent with other video editors, which will ease the transition for those coming from another editor.

Not bad for one of the three development branches of the KDE project. Moreover, these are only the new features highlighted by the developers; under the hood there are many more. I invite you to read Nathan's posts, or their translation by Victorchk, where you can find all the weekly news about the KDE applications, the Plasma desktop and KDE Frameworks.

More information: KDE


August 15, 2019

After a well-deserved summer break, the Kdenlive community is happy to announce the first major release after the code refactoring. This version comes with a large number of fixes and nifty new features which lay the groundwork for the 3-point editing system planned for this cycle. The Project Bin received improvements to the icon view mode, and new features were added, like the ability to seek through clips by hovering over them with the mouse cursor; it is now also possible to add a whole folder hierarchy. On the usability front, a menu option was added to reset the Kdenlive config file, and you can now search for effects from all tabs instead of only the selected tab. Head to our download page for AppImage and Windows packages.



3 point editing with keyboard shortcuts

With 19.08.0 we added the groundwork for full editing with keyboard shortcuts. This will speed up the editing work, and you can perform editing steps which are not possible, or not as quick and easy, with the mouse. Working with keyboard shortcuts in 19.08 is different from former Kdenlive versions. Mouse operations have not changed and work as before.

3 important points to understand the new concept:

Source (left image):
On the left of the track head are the green vertical lines (V1 or A2). The green line is connected to the source clip in the project bin. Only when a clip is selected in the project bin does the green line show up, depending on the type of the clip (A/V clip, picture/title/color clip, audio clip).

Target (right image):
In the track head, the target V1 or A1 is active when it is yellow. An active target track reacts to edit operations like inserting a clip, even if the source is not active (see “Example of advanced edit” here).

The concept is like thinking of connectors:
Connect the source (the clip in the project bin) to a target (a track in the timeline). Only when both connectors on the same track are switched on does the clip “flow” from the project bin to the timeline. Be aware: active target tracks without a connected source still react to edit operations.

You can find a more detailed introduction in our Toolbox section here.

Adjust AV clips independently: Shift + resize resizes only the audio or the video part of a clip. Meta/Windows key + move in the timeline allows moving the audio or video part to another track independently.


Press shift while hovering over clips in the Project Bin to seek through them.


Adjust the speed of a clip by pressing Ctrl and dragging the clip in the timeline.


Now you can choose the number of channels and sample rates in the audio capture settings.


Other features

  • Added a parameter for steps that allows users to control the separation between keyframes generated by the motion tracker.
  • Re-enable transcode clip functionality.
  • Added a screen selection in the screen grab widget.
  • Add option to sort audio tracks in reverse order.
  • Default fade duration is now configurable from Kdenlive Settings > Misc.
  • Render dialog: add context menu to rendered jobs allowing to add rendered file as a project clip.
  • Renderwidget: Use max number of threads in render.
  • More UI components are translatable.


Full list of commits

  • Do not setToolTip() for the same tooltip twice. Commit.
  • Use translations for asset names in the Undo History. Commit.
  • Fix dropping clip in insert/overwrite mode. Commit.
  • Fix timeline drag in overwrite/edit mode. Commit.
  • Fix freeze deleting a group with clips on locked tracks. Commit.
  • Use the translated effect names for effect stack on the timeline. Commit.
  • Fix crash dragging clip in insert mode. Commit.
  • Use the translated transition names in the ‘Properties’ header. Commit.
  • Fix freeze and fade ins allowed to go past last frame. Commit.
  • Fix revert clip speed failing. Commit.
  • Fix revert speed clip reloading incorrectly. Commit.
  • Fix copy/paste of clip with negative speed. Commit.
  • Fix issues on clip reload: slideshow clips broken and title duration reset. Commit.
  • Fix slideshow effects disappearing. Commit.
  • Fix track effect keyframes. Commit.
  • Fix track effects don’t invalidate timeline preview. Commit.
  • Fix effect presets broken on comma locales, clear preset after resetting effect. Commit.
  • Fix crash in extract zone when no track is active. Commit.
  • Fix reverting clip speed modifies in/out. Commit.
  • Fix audio overlay showing up randomly. Commit.
  • Fix Find clip in bin not always scrolling to correct position. Commit.
  • Fix possible crash changing profile when cache job was running. Commit.
  • Fix editing bin clip does not invalidate timeline preview. Commit.
  • Fix audiobalance (MLT doesn’t handle start param as stated). Commit.
  • Fix target track inconsistencies:. Commit.
  • Make the strings in the settings dialog translatable. Commit.
  • Make effect names translatable in menus and in settings panel. Commit.
  • Remember last target track and restore when another clip is selected. Commit.
  • Dont’ process insert when no track active, don’t move cursor if no clip inserted. Commit.
  • Correctly place timeline toolbar after editing toolbars. Commit.
  • Lift/gamma/gain: make it possible to have finer adjustments with Shift modifier. Commit.
  • Fix MLT effects with float param and no xml description. Commit.
  • Cleanup timeline selection: rubber select works again when starting over a clip. Commit.
  • Attempt to fix Windows build. Commit.
  • Various fixes for icon view: Fix long name breaking layout, fix seeking and subclip zone marker. Commit.
  • Fix some bugs in handling of NVidia HWaccel for proxies and timeline preview. Commit.
  • Add 19.08 screenshot to appdata. Commit.
  • Fix bug preventing sequential names when making serveral script renderings from same project. Commit.
  • Fix compilation with cmake < 3.5. Commit.
  • Fix extract frame retrieving wrong frame when clip fps != project fps. Commit. Fixes bug #409927
  • Don’t attempt rendering an empty project. Commit.
  • Fix incorrect source frame size for transform effects. Commit.
  • Improve subclips visual info (display zone over thumbnail), minor cleanup. Commit.
  • Small cleanup of bin preview thumbnails job, automatically fetch 10 thumbs at insert to allow quick preview. Commit.
  • Fix project clips have incorrect length after changing project fps. Commit.
  • Fix inconsistent behavior of advanced timeline operations. Commit.
  • Fix “Find in timeline” option in bin context menu. Commit.
  • Support the new logging category directory with KF 5.59+. Commit.
  • Update active track description. Commit.
  • Use extracted translations to translate asset descriptions. Commit.
  • Fix minor typo. Commit.
  • Make the file filters to be translatable. Commit.
  • Extract messages from transformation XMLs as well. Commit.
  • Don’t attempt to create hover preview for non AV clips. Commit.
  • Add Cache job for bin clip preview. Commit.
  • Preliminary implementation of Bin clip hover seeking (using shift+hover). Commit.
  • Translate assets names. Commit.
  • Some improvments to timeline tooltips. Commit.
  • Reintroduce extract clip zone to cut a clip whithout re-encoding. Commit. See bug #408402
  • Fix typo. Commit.
  • Add basic collision check to speed resize. Commit.
  • Bump MLT dependency to 6.16 for 19.08. Commit.
  • Exit grab mode with Escape key. Commit.
  • Improve main item when grabbing. Commit.
  • Minor improvement to clip grabbing. Commit.
  • Fix incorrect development version. Commit.
  • Make all clips in selection show grab status. Commit.
  • Fix “QFSFileEngine::open: No file name specified” warning. Commit.
  • Don’t initialize a separate Factory on first start. Commit.
  • Set name for track menu button in timeline toolbar. Commit.
  • Pressing Shift while moving an AV clip allows to move video part track independently of audio part. Commit.
  • Ensure audio encoding do not export video. Commit.
  • Add option to sort audio tracks in reverse order. Commit.
  • Warn and try fixing clips that are in timeline but not in bin. Commit.
  • Try to recover a clip if it’s parent id cannot be found in the project bin (use url). Commit. See bug #403867
  • Fix tests. Commit.
  • Default fade duration is now configurable from Kdenlive Settings > Misc. Commit.
  • Minor update for AppImage dependencies. Commit.
  • Change speed clip job: fix overwrite and UI. Commit.
  • Readd proper renaming for change speed clip jobs. Commit.
  • Add whole hierarchy when adding folder. Commit.
  • Fix subclip cannot be renamed. Store them in json and bump document version. Commit.
  • Added audio capture channel & sample rate configuration. Commit.
  • Add screen selection in screen grab widget. Commit.
  • Initial implementation of clip speed change on Ctrl + resize. Commit.
  • Fix FreeBSD compilation. Commit.
  • Render dialog: add context menu to rendered jobs allowing to add rendered file as a project clip. Commit.
  • Ensure automatic compositions are compositing with correct track on project opening. Commit.
  • Fix minor typo. Commit.
  • Add menu option to reset the Kdenlive config file. Commit.
  • Motion tracker: add steps parameter. Patch by Balazs Durakovacs. Commit.
  • Try to make binary-factory mingw happy. Commit.
  • Remove dead code. Commit.
  • Add some missing bits in Appimage build (breeze) and fix some plugins paths. Commit.
  • AppImage: disable OpenCV freetype module. Commit.
  • Docs: Unbreak menus. Commit.
  • Sync Quick Start manual with UserBase. Commit.
  • Fix transcoding crashes caused by old code. Commit.
  • Reenable trancode clip functionality. Commit.
  • Fix broken fadeout. Commit.
  • Small collection of minor improvements. Commit.
  • Search effects from all tabs instead of only the selected tab. Commit.
  • Check whether first project clip matches selected profile by default. Commit.
  • Improve marker tests, add abort testing feature. Commit.
  • Revert “Trying to submit changes through HTTPS”. Commit.
  • AppImafe: define EXT_BUILD_DIR for Opencv contrib. Commit.
  • Fix OpenCV build. Commit.
  • AppImage update: do not build MLT inside dependencies so we can have more frequent updates. Commit.
  • If a timeline operation touches a group and a clip in this group is on a track that should not be affected, break the group. Commit.
  • Add tests for unlimited clips resize. Commit.
  • Small fix in tests. Commit.
  • Renderwidget: Use max number of threads in render. Commit.
  • Don’t allow resizing while dragging. Fixes #134. Commit.
  • Revert “Revert “Merge branch ‘1904’””. Commit.
  • Revert “Merge branch ‘1904’”. Commit.
  • Update master appdata version. Commit.

Since last year, development on Cantor has kept up quite a good momentum. After the many new features and the stabilization work done in the 18.12 release (see this blog post for an overview), we continued to improve the application in 19.04. Today the release of KDE Applications 19.08, and with it of Cantor 19.08, was announced. In this release we again concentrated mostly on improving the usability of Cantor and stabilizing the application. See the ChangeLog file for the full list of changes.

Among the new features targeting usability, we want to mention the improved handling of the “backends”. As you know, Cantor serves as the front end to different open-source computer algebra systems and programming languages and requires these backends for the actual computation. The communication with the backends is handled via different plugins that are installed and loaded on demand. In the past, if a plugin for a specific backend failed to initialize (e.g. because the backend executable was not found), we didn’t show it in the “Choose a Backend” dialog, and the user was completely lost. Now we still don’t allow creating a worksheet for this backend, but we show the entry in the dialog together with a message about why the plugin is disabled, as in the example below, which asks the user to check the executable path in the settings:

Select Backend Dialog

The same is done for cases where the plugin, as compiled and provided by the Linux distribution, doesn’t match the version of the backend that the user installed manually on the system and asked Cantor to use. Here we clearly inform the user about the version mismatch and also advise what to do:

Select Backend Dialog

Speaking of custom installations of backends, in 19.08 we allow setting a custom path to the Julia interpreter, similarly to what is already possible for some other backends.

The handling of Markdown and LaTeX entries became more comfortable. It is now possible to quickly switch from the rendered result to the original code via a mouse double click, and back, as usual, via evaluation of the entry. Furthermore, the results of rendered Markdown and LaTeX entries are now saved as part of the project. This makes it possible to view projects with such entries on systems without support for the Markdown and LaTeX rendering process, and it also decreases project loading times, since the ready-to-use results can be used directly.

In 19.08 we added the “Recent Files” menu, allowing quick access to recently opened projects:

Recent Files Menu

Among the important bug fixes, we want to mention those that improved the communication with the external processes “hosting” embedded interpreters like Python and Julia. Cantor now reacts much better to errors and crashes in those external processes. For Python, the interruption of running commands was improved.

While working on making 19.08 more usable and stable, we also worked on some bigger new features in parallel. This development is being done as part of a Google Summer of Code project, whose goal is to add support for Jupyter notebooks to Cantor. The idea behind this project and its progress are covered in a series of blog posts (here, here and here). The code is in quite good shape and has already been merged to master. This is how such a Jupyter notebook looks in Cantor:

We plan to release this in the upcoming release of KDE Applications 19.12. Stay tuned!

I'm writing this while sitting in the Moscow airport, in a state which is a mix of tiredness, anger and astonishment. At this time I should be already at home, in Saint Petersburg, but something went very wrong and I feel the need to vent my frustration here. Please bear with me :-)

The story goes like this: we booked a flight from Venice to Saint Petersburg, with a change in Moscow. The time between flights was 1 hour and 20 minutes — that's not plenty of time, but it's still much longer than other changes I've had in the past in other airports. And I should stress that this flight combination was suggested to me by the Aeroflot company's website, so this time should be enough for the transfer. At least, this was my assumption.

And it was wrong: in spite of the flight from Venice to Moscow landing in time, in spite of us not losing a minute and performing all the passport and security checks without delays (we didn't even let the kids visit the toilet!), in spite of us running along the corridors as soon as we realized that we might be late, we arrived at the gate just in time to see it closing in front of us. And no, rules are rules, so they wouldn't let us board.

The guy at the gate comforted us by saying that planes from Moscow to Saint Petersburg fly every hour, so we could just take the next one, for free. We tried to protest, but to no avail; we went to the Aeroflot ticket desk, explained the situation, and they told us that we could take the flight 6 hours later. And no, it didn't matter that we had kids; the employee at the desk told us that we should be grateful to them that we could fly for free, as they were making an exception for us.

Yes, you've read it correctly: they asked us to fly 6 hours later, and instead of giving us any compensation (as I once got from Finnair, for example, when their flight was late), we had to be grateful for not having to buy the tickets again!

And I want to make one thing clear: there was no announcement about the transfer time being tight, neither on the plane nor at the airport; no one came forward asking for passengers transferring to St. Petersburg and inviting us to skip the lines for the various controls, nothing. Our names were never called. All of that is the norm in all other similar situations I've experienced. If that does not happen, it means — but maybe I'm being too naive — that the second plane is waiting (or that there's still plenty of time to board it).

Luckily they found that an earlier flight, which was only three hours later than our initially expected departure, had three available seats, so my wife and kids are boarding that plane right now while I'm writing this. I'll take the other flight in three hours, hoping that there won't be any more surprises.

Lesson of the day: always fly from Finland (if you live in Saint Petersburg, like me, that's close), and in any case avoid Aeroflot.

KTouch, an application to learn and practice touch typing, has received a considerable update with today's release of KDE Applications 19.08.0. It includes a complete redesign by me of the home screen, which is responsible for selecting the lesson to train on.

The new home screen of KTouch

There is now a new sidebar offering all the courses KTouch has, for a total of 34 different keyboard layouts. In previous versions, KTouch presented only the courses matching the current keyboard layout. Now it is much more obvious how to train on keyboard layouts other than the current one.

Other improvements in this release include:

  • Tab focus now works as expected throughout the application and allows training without ever touching the mouse.
  • Access to training statistics for individual lessons from the home screen has been added.
  • KTouch now supports rendering on HiDPI screens.

KTouch 19.08.0 is available on the Snap Store and is coming to your Linux distribution.
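On distributions with snap support, installing from the Snap Store should be a one-liner (assuming the package is published under the name `ktouch`):

```shell
# Install KTouch from the Snap Store.
sudo snap install ktouch
```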

Can you believe we've already passed the half-year mark? That means it's just the right time for a new release of KDE Applications! Our developers have worked hard on resolving bugs and introducing features that will help you be more productive as you get back to school, or return to work after your summer vacation.

The KDE Applications 19.08 release brings several improvements that truly elevate your favorite KDE apps to the next level. Take Konsole, our powerful terminal emulator, which has seen major improvements to its tiling abilities. We've made tiling a bit more advanced, so now you can split your tabs as many times as you want, both horizontally and vertically. The layout is completely customizable, so feel free to drag and drop the panes inside Konsole to achieve the perfect workspace for your needs.

Dolphin, KDE's file explorer, introduces features that will help you step up your file management game. Let's start with bookmarks, a feature that allows you to create a quick-access link to a folder, or save a group of specific tabs for future reference. We've also made tab management smarter to help you declutter your desktop. Dolphin will now automatically open folders from other apps in new tabs of an existing window, instead of in their own separate windows. Other improvements include a more usable information panel and a new global shortcut for launching Dolphin - press Meta + E to try it out!

Okular, our document viewer, continues with a steady stream of usability improvements for your document-viewing pleasure. In Applications 19.08, we have made annotations easier to configure, customize, and manage. Okular's ePub support has also greatly improved in this release, so Okular is now more stable and works better when previewing large files.

All this sounds exciting for those who read and sort through documents, but what about those who write a lot of text or emails? They will be glad to hear we've made Kate, our advanced text editor, better at sorting recent files. Similar to Dolphin, Kate will now focus on an existing window when opening files from other apps. Your email-writing experience also receives a boost with the new version of Kontact, or more specifically, KMail. After updating to Applications 19.08, you'll be able to write your emails in Markdown, and insert - wait for it - emoji into them! The new integration with grammar-checking tools like LanguageTool and Grammalecte will help you prevent embarrassing mistakes and typos that always seem to creep into the most important business emails.

Photographers and other creatives will appreciate changes to Gwenview, KDE's image viewer. Gwenview can now display extended EXIF metadata for RAW images, share photos and access remote files more easily, and generate better thumbnails. If you are pressed for system resources, Gwenview has your back with the "Low usage resource mode" that you can enable at will. In the video-editing department, Kdenlive shines with a new set of keyboard+mouse combinations and improved 3-point editing operations.

We should also mention Spectacle, KDE's screenshot application. The new version lets you open the screenshot (or its containing folder) right after you've saved it. Our developers introduced a number of nice and useful touches to the Delay functionality. For example, you may notice a progress bar in the panel, indicating the remaining time until the screenshot is done. Sometimes, it's the small details that make using KDE Applications and Plasma so enjoyable.

Speaking of details, to find out more about other changes in KDE Applications 19.08, make sure to read the official announcement.

Happy updating!

If you happen to be in or close to Milan, Italy this September, come and join us at Akademy, our annual conference. It's a great opportunity to meet the creators of your favorite KDE apps in person, and get an early sneak peek at all the things we have in store for the future of KDE.

August 14, 2019

As Lars mentioned in his Technical Vision for Qt 6 blog post, we have been researching how we could have a deeper integration between 3D and Qt Quick. As a result we have created a new project, called Qt Quick 3D, which provides a high-level API for creating 3D content for user interfaces from Qt Quick. Rather than using an external engine which can lead to animation synchronization issues and several layers of abstraction, we are providing extensions to the Qt Quick Scenegraph for 3D content, and a renderer for those extended scene graph nodes.

Does that mean we wrote yet another 3D Solution for Qt?  Not exactly, because the core spatial renderer is derived from the Qt 3D Studio renderer. This renderer was ported to use Qt for its platform abstraction and refactored to meet Qt project coding style.

Complex 3D scene visualized with Qt Quick 3D

“San Miguel” test scene running in Qt Quick 3D

What are our Goals?  Why another 3D Solution?

Unified Graphics Story

The single most important goal is that we want to unify our graphics story. Currently we are offering two comprehensive solutions for creating fluid user interfaces, each having its own corresponding tooling.  One of these solutions is Qt Quick, for 2D, the other is Qt 3D Studio, for 3D.  If you limit yourself to using either one or the other, things usually work out quite fine.  However, what we found is that users typically ended up needing to mix and match the two, which leads to many pitfalls both in run-time performance and in developer/designer experience.

Therefore, and for simplicity’s sake, we aim to have one runtime (Qt Quick), one common scene graph (Qt Quick Scenegraph), and one design tool (Qt Design Studio).  This should involve no compromises in features, performance or the developer/designer experience. This way we do not need to further split our development focus between more products, and we can deliver more features and fixes faster.

Intuitive and Easy to Use API

The next goal for Qt Quick 3D is to provide an API for defining 3D content, an API that is approachable and usable by developers without the need to understand the finer details of the modern graphics pipeline.  After all, the majority of users do not need to create specialized 3D graphics renderers for each of their applications, but rather just want to show some 3D content, often alongside 2D.  So we have been developing Qt Quick 3D with this perspective in mind.

That being said, we will be exposing more and more of the rendering API over time which will make more advanced use cases, needed by power-users, possible.

At the time of writing this post we only provide a QML API, but the goal is to provide a public C++ API as well in the future.

Unified Tooling for Qt Quick

Qt Quick 3D is intended to be the successor to Qt 3D Studio.  For the time being Qt 3D Studio will still continue to be developed, but long-term will be replaced by Qt Quick and Qt Design Studio.

Here we intend to take the best parts of Qt 3D Studio and roll them into Qt Quick and Qt Design Studio.  So rather than needing a separate tool for Qt Quick or 3D, it will be possible to just do both from within Qt Design Studio.  We are working on the details of this now and hope to have a preview of this available soon.

For existing users of Qt 3D Studio, we have been working on a porting tool to convert projects to Qt Quick 3D. More on that later.

First Class Asset Conditioning Pipeline

When dealing with 3D scenes, asset conditioning becomes more important because now there are more types of assets being used, and they tend to be much bigger overall.  So as part of the Qt Quick 3D development effort we have been looking at how we can make it as easy as possible to import your content and bake it into efficient runtime formats for Qt Quick.

For example, at design time you will want to specify the assets you are using based on what your asset creation tools generate (like FBX files from Maya for 3D models, or PSD files from Photoshop for textures), but at runtime you would not want the engine to use those formats.  Instead, you will want to convert the assets into some efficient runtime format, and have them updated each time the source assets change.  We want this to be an automated process as much as possible, and so want to build this into the build system and tooling of Qt.

Cross-platform Performance and Compatibility

Another of our goals is to support multiple native graphics APIs, using the new Rendering Hardware Interface being added to Qt. Currently, Qt Quick 3D only supports rendering using OpenGL, like many other components in Qt. However, in Qt 6 we will be using the QtRHI as our graphics abstraction and there we will be able to support rendering via Vulkan, Metal and Direct3D as well, in addition to OpenGL.

What is Qt Quick 3D? (and what it is not)

Qt Quick 3D is not a replacement for Qt 3D, but rather an extension of Qt Quick’s functionality to render 3D content using a high-level API.

Here is what a very simple project with some helpful comments looks like:

import QtQuick 2.12
import QtQuick.Window 2.12
import QtQuick3D 1.0

Window {
  id: window
  visible: true
  width: 1280
  height: 720

  // Viewport for 3D content
  View3D {
    id: view
    anchors.fill: parent

    // Scene to view
    Node {
      id: scene

      Light {
        id: directionalLight
      }

      Camera {
        id: camera
        // It's important that your camera is not inside
        // your model, so move it back along the z axis.
        // The Camera is implicitly facing up the z axis,
        // so we should be looking towards (0, 0, 0)
        z: -600
      }

      Model {
        id: cubeModel
        // #Cube is one of the "built-in" primitive meshes.
        // Other options are:
        // #Cone, #Sphere, #Cylinder, #Rectangle
        source: "#Cube"

        // When using a Model, it is not enough to have a
        // mesh source (i.e. "#Cube").
        // You also need to define what material to shade
        // the mesh with. A Model can be built up of
        // multiple sub-meshes, so each mesh needs its own
        // material. Materials are defined in an array,
        // and their order reflects which sub-mesh they shade.
        // All of the default primitive meshes contain one
        // sub-mesh, so you only need one material.
        materials: [
          DefaultMaterial {
            // We are using the DefaultMaterial, which
            // dynamically generates a shader based on what
            // properties are set. This means you don't
            // need to write any shader code yourself.
            // In this case we just want the cube to have
            // a red diffuse color.
            id: cubeMaterial
            diffuseColor: "red"
          }
        ]
      }
    }
  }
}

The idea is that defining 3D content should be as easy as 2D.  There are a few extra things you need, like the concepts of Lights, Cameras, and Materials, but all of these are high-level scene concepts, rather than implementation details of the graphics pipeline.

This simple API comes at the cost of less power, of course.  While it may be possible to customize materials and the content of the scene, it is not possible to completely customize how the scene is rendered, unlike in Qt 3D with its customizable framegraph.  Instead, for now there is a fixed forward renderer, and properties in the scene define how things are rendered.  This is like other existing engines, which typically offer a few possible rendering pipelines to choose from, which then render the logical scene.

Camera Orbiting Car

A Camera orbiting around a Car Model in a Skybox with axis and grid lines (note: the stutter is from the 12 FPS GIF)

What Can You Do with Qt Quick 3D?

Well, it can do many things, but these are built up using the following scene primitives:


Node is the base component for any node in the 3D scene.  It represents a transformation in 3D space, but is non-visual.  It works similarly to how the Item type works in Qt Quick.


Camera represents how a scene is projected to a 2D surface. A camera has a position in 3D space (as it is a Node subclass) and a projection.  To render a scene, you need to have at least one Camera.


The Light component defines a source of lighting in the scene, at least for materials that consider lighting.  Right now, there are 3 types of lights: Directional (default), Point and Area.


The Model component is the one visual component in the scene.  It represents a combination of geometry (from a mesh) and one or more materials.

The source property of the Model component expects a .mesh file, which is the runtime format used by Qt Quick 3D.  To get mesh files, you need to convert 3D models using the asset import tool.  There are also a few built-in primitives. These can be used by setting the following values on the source property: #Cube, #Cylinder, #Sphere, #Cone, or #Rectangle.

We will also be adding a programmatic way to define your own geometry at runtime, but that is not yet available in the preview.

Before a Model can be rendered, it must also have a Material. This defines how the mesh is shaded.

DefaultMaterial and Custom Materials

The DefaultMaterial component is an easy-to-use, built-in material.  All you need to do is create this material and set the properties you want to define, and under the hood all necessary shader code will be generated for you automatically.  All the other properties you set on the scene are taken into consideration as well. There is no need to write any graphics shader code (such as vertex or fragment shaders) yourself.

It is also possible to define so-called CustomMaterials, where you do provide your own shader code.  We also provide a library of pre-defined CustomMaterials you can try out by just adding the following to your QML imports:

import QtQuick3D.MaterialLibrary 1.0


The Texture component represents a texture in the 3D scene, as well as how it is mapped to a mesh.  The source for a texture can either be an image file, or a QML Component.

A Sample of the Features Available

3D Views inside of Qt Quick

To view 3D content inside of Qt Quick, it is necessary to flatten it to a 2D surface.  To do this, you use the View3D component.  View3D is the only QQuickItem-based component in the whole API.  You can either define the scene as a child of the View3D, or reference an existing scene by setting the scene property to the root Node of the scene you want to render.

If you have more than one camera, you can also set which camera you want to use to render the scene.  By default, it will just use the first active camera defined in the scene.

It is also worth noting that a View3D does not necessarily need to be rendered to an off-screen texture first.  It is possible to set one of the following four render modes to define how the 3D content is rendered:

  1. Texture: View3D is a Qt Quick texture provider and renders its content to a texture via an FBO
  2. Underlay: View3D is rendered before Qt Quick’s 2D content is rendered, directly to the window (3D is always under 2D)
  3. Overlay: View3D is rendered after Qt Quick’s 2D content is rendered, directly to the window (3D is always over 2D)
  4. RenderNode: View3D is rendered in-line with the Qt Quick 2D content.  This can however lead to some quirks due to how Qt Quick 2D uses the depth buffer in Qt 5.
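As a sketch of what choosing a mode looks like in QML (the renderMode property and enum names here follow the preview API as described above and may change before release):

```qml
View3D {
    anchors.fill: parent
    // Draw this view's 3D content directly to the window,
    // before the 2D content (3D always under 2D)
    renderMode: View3D.Underlay

    Node {
        Light { }
        Camera { z: -600 }
        Model { source: "#Sphere"; materials: [ DefaultMaterial { } ] }
    }
}
```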


2D Views inside of 3D

It could be that you also want to render Qt Quick content inside of a 3D scene.  To do so, anywhere a Texture is accepted as a property value (for example, in the diffuseMap property of DefaultMaterial), you can use a Texture with its sourceItem property set, instead of just specifying a file in the source property. This way the referenced Qt Quick item is automatically rendered and used as a texture.
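A minimal sketch of this (sourceItem as described above; the item's size, color, and text are arbitrary placeholders):

```qml
Model {
    source: "#Cube"
    materials: [
        DefaultMaterial {
            diffuseMap: Texture {
                // A live Qt Quick item used as the diffuse texture;
                // it is rendered and kept up to date automatically
                sourceItem: Rectangle {
                    width: 256; height: 256
                    color: "orange"
                    Text { anchors.centerIn: parent; text: "2D in 3D" }
                }
            }
        }
    ]
}
```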


The diffuse color textures being mapped to the cubes are animated Qt Quick 2D items.

3D QML Components

Because Qt Quick 3D is built on QML, it is possible to create reusable components for 3D as well.  For example, if you create a Car model consisting of several Models, just save it to Car.qml. You can then instantiate multiple instances of Car by reusing it, like any other QML type. This is very important, because this way 2D and 3D scenes can be created using the same component model, instead of having to deal with different approaches for the 2D and 3D scenes.
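For example, a minimal sketch of such a component (built from the built-in primitives so it stays self-contained; a real Car.qml would reference converted .mesh files):

```qml
// Car.qml -- a reusable 3D component made of several Models
Node {
    Model {
        source: "#Cube"
        materials: [ DefaultMaterial { diffuseColor: "blue" } ]
    }
    Model {
        source: "#Cylinder"
        y: -60
        materials: [ DefaultMaterial { diffuseColor: "black" } ]
    }
}
```

Elsewhere in the scene it can then be instantiated like any other QML type, e.g. `Car { x: -200 }` and `Car { x: 200 }`.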

Multiple Views of the Same Scene

Because scene definitions can exist anywhere in a Qt Quick project, it’s possible to reference them from multiple View3Ds.  If you had multiple cameras in a scene, you could even render from each one to a different View3D.
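A sketch of the idea (ids are illustrative; the camera property of View3D selects which camera renders the scene, and rotation is given as Euler angles in a vector3d, following the preview API):

```qml
Node {
    id: sharedScene
    Light { }
    Camera { id: frontCamera; z: -600 }
    Camera { id: topCamera; y: 600; rotation: Qt.vector3d(90, 0, 0) }
    Model { source: "#Cube"; materials: [ DefaultMaterial { } ] }
}

// Two views of the same scene, each through its own camera;
// lay them out with ordinary Qt Quick geometry/anchors
View3D { scene: sharedScene; camera: frontCamera }
View3D { scene: sharedScene; camera: topCamera }
```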


4 views of the same Teapot scene. Also changing between 3 Cameras in the Perspective view.


Shadows

Any Light component can specify that it casts shadows.  When this is enabled, shadows are automatically rendered in the scene.  Depending on what you are doing, though, rendering shadows can be quite expensive, so you can fine-tune which Model components cast and receive shadows by setting additional properties on the Model.

Image Based Lighting

In addition to the standard Light components, it’s possible to light your scene by defining a HDRI map. This Texture can be set either for the whole View3D in its SceneEnvironment property, or on individual Materials.
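As a sketch (the environment and lightProbe property names reflect our reading of the preview API; the .hdr path is a placeholder):

```qml
View3D {
    anchors.fill: parent
    environment: SceneEnvironment {
        // Light the whole scene with a HDRI map, instead of
        // (or in addition to) explicit Light components
        lightProbe: Texture { source: "maps/studio.hdr" }
    }
    // ... scene contents ...
}
```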


Animations

Animations in Qt Quick 3D use the same animation system as Qt Quick.  You can bind any property to an animator and it will be animated and updated as expected. Using the QtQuickTimeline module it is also possible to use keyframe-based animations.

Like the component model, this is another important step in reducing the gap between 2D and 3D scenes, as no separate, potentially conflicting animation systems are used here.

Currently there is no support for rigged animations, but that is planned in the future.

How Can You Try it Out?

The intention is to release Qt Quick 3D as a technical preview along with the release of Qt 5.14.  In the meantime it is already possible to use it against Qt 5.12 and higher.

To get the code, you just need to build the QtQuick3D module which is located here:

What About Tooling?

The goal is that it should be possible via Qt Design Studio to do everything you need to set up a 3D scene. That means being able to visually lay out the scene, import 3D assets like meshes, materials, and textures, and convert those assets into efficient runtime formats used by the engine.


A demonstration of early Qt Design Studio integration for Qt Quick 3D

Importing 3D Scenes to QML Components

Qt Quick 3D can also be used by writing QML code manually. Therefore, we also have some stand-alone utilities for converting assets.  One such tool is the balsam asset conditioning tool.  Right now it is possible to feed this utility an asset from a 3D asset creation tool like Blender, Maya, or 3DS Max, and it will generate a QML component representing the scene, as well as any textures, meshes, and materials it uses.  Currently this tool supports generating scenes from the following formats:

  • FBX
  • Collada (dae)
  • OBJ
  • Blender (blend)
  • GLTF2

To convert the file myTestScene.fbx you would run:

./balsam -o ~/exportDirectory myTestScene.fbx

This would generate a file called MyTestScene.qml together with any assets needed. Then you can just use it like any other Component in your scene:

import QtQuick 2.12
import QtQuick.Window 2.12
import QtQuick3D 1.0

Window {
  width: 1920
  height: 1080
  visible: true
  color: "black"

  Node {
    id: sceneRoot
    Light {
    }
    Camera {
      z: -100
    }
    MyTestScene {
    }
  }

  View3D {
    anchors.fill: parent
    scene: sceneRoot
  }
}

We are working to improve the assets generated by this tool, so expect improvements in the coming months.

Converting Qt 3D Studio Projects

In addition to being able to generate 3D QML components from 3D asset creation tools, we have also created a plugin for our asset import tool to convert existing Qt 3D Studio projects.  If you have used Qt 3D Studio before, you will know it generates projects in XML format to define the scene.  If you give the balsam tool a UIP or UIA project generated by Qt 3D Studio, it will also generate a Qt Quick 3D project based on that.  Note however that since the runtime used by Qt 3D Studio is different from Qt Quick 3D, not everything will be converted. It should nonetheless give a good approximation or starting point for converting an existing project.  We hope to continue improving support for this path to smooth the transition for existing Qt 3D Studio users.


Qt 3D Studio example application ported using Qt Quick 3D’s import tool. (it’s not perfect yet)

What About Qt 3D?

The first question I expect to get is: why not just use Qt 3D?  This is the same question we have been exploring for the last couple of years.

One natural assumption is that we could just build all of Qt Quick on top of Qt 3D if we want to mix 2D and 3D.  We intended to, and started to, do this with the 2.3 release of Qt 3D Studio.  Qt 3D’s powerful API provided a good abstraction for implementing a rendering engine to re-create the behavior expected by Qt Quick and Qt 3D Studio. However, Qt 3D’s architecture makes it difficult to get the performance we need on entry-level embedded hardware. Qt 3D also comes with a certain overhead, from its own limited runtime as well as from being yet another level of abstraction between Qt Quick and the graphics hardware.  In its current form, Qt 3D is not ideal to build on if we want to reach a fully unified graphics story while ensuring continued good support for a wide variety of platforms and devices, ranging from low end to high end.

At the same time, we already had a rendering engine in Qt 3D Studio that did exactly what we needed, and that was a good basis for building additional functionality.  This comes with the downside that we no longer have the powerful APIs that come with Qt 3D, but in practice, once you start building a runtime on top of Qt 3D, you end up making decisions about how things should work anyway, leading to a limited ability to customize the framegraph. In the end the most practical decision was to use the existing Qt 3D Studio rendering engine as our base, and build off of that.

What is the Plan Moving Forward?

This release is just a preview of what is to come.  The plan is to provide Qt Quick 3D as a fully supported module along with the Qt 5.15 LTS.  In the meantime we are working on further developing Qt Quick 3D for release as a Tech Preview with Qt 5.14.

For the Qt 5 series we are limited in how deeply we can combine 2D and 3D because of binary compatibility promises.  With the release of Qt 6 we are planning an even deeper integration of Qt Quick 3D into Qt Quick to provide an even smoother experience.

The goal here is to be as efficient as possible when mixing 2D and 3D content, without introducing any additional overhead for users who do not use any 3D content at all.  We will not be doing anything drastic like forcing all Qt Quick applications to go through the new renderer, only those that mix 2D and 3D.

In Qt 6 we will also be using the Qt Rendering Hardware Interface to render Qt Quick (including 3D) scenes which should eliminate many of the current issues we have today with deployment of OpenGL applications (by using DirectX on Windows, Metal on macOS, etc.).

We also want to make it possible for end users to use the C++ Rendering API we have created more generically, without Qt Quick.  The code is there now as private API, but we are waiting until the Qt 6 time-frame (and the RHI porting) before we make the compatibility promises that come with public APIs.

Feedback is Very Welcome!

This is a tech preview, so much of what you see now is subject to change.  For example, the API is a bit rough around the edges now, so we would like to know what we are missing, what doesn’t make sense, what works, and what doesn’t. The best way to provide this feedback is through the Qt Bug Tracker.  Just remember to use the Qt Quick: 3D component when filing your bugs/suggestions.

The post Introducing Qt Quick 3D: A high-level 3D API for Qt Quick appeared first on Qt Blog.

I am in the Netherlands. I came for the Krita Sprint, and I have made a lot of progress with my Animated Brush for the Google Summer of Code. Read More...

August 13, 2019

In Calamares there is a debug window; it shows some panes of information, and one of them is a tree view of some internal data in the application. The data itself isn’t stored as a model, though; it is stored in one big QVariantMap. So to display that map as a tree, the code needs to provide a Qt model so that the regular Qt views can do their thing.

Screenshot of treeview

Each key in the map is a node in the tree to be shown; if the value is a map or a list, then sub-nodes are created for the items in the map or the list, and otherwise it’s a leaf that displays the string associated with the key. In the screenshot you can see the branding key which is a map, and that map contains a bunch of string values.

Historically, the way this map was presented as a model was as follows:

  • A JSON document representing the contents of the map is made,
  • The JSON document is rendered to text,
  • A model is created from the JSON text using dridk’s QJsonModel,
  • That model is displayed.

This struck me as a long way around. Even if there are only a few dozen items overall in the tree, it looks like a lot of copying and buffer management going on. The code where all this happens, though, is only a few lines – it looks harmless enough.

I decided that I wanted to re-do this bit of code – dropping the third-party code in the process, and so simplifying Calamares a little – by using the data from the QVariant directly, with only a “light weight” amount of extra data. If I was smart, I would consult more closely with Marek Krajewski’s Hands-On High Performance Programming with Qt 5, but .. this was a case of “I feel this is more efficient” more than doing the smart thing.

I give you VariantModel.

This is strongly oriented towards the key-value display of a QVariantMap as a tree, but it could possibly be massaged into another form. It also is pushy in smashing everything into string form. It could probably use data from the map more directly (e.g. pixmaps) and be even more fancy that way.

Most of my software development is pretty “plain”. It is straightforward code. This was one of the rare occasions that I took out pencil and paper and sketched a data structure before coding (or more accurately: I did a bunch of hacking, got nowhere, and realised I’d have to do some thinking before I’d get anywhere – cue tea and chocolate).

What I ended up with was a QVector of quintptrs (since a QModelIndex can use that quintptr as internal data). The length of the vector is equal to the number of nodes in the tree, and each node is assigned an index in the tree (I used depth-first traversal along whatever arbitrary yet consistent order Qt gives me the keys, enumerating each node as it is encountered). In the vector, I store the parent index of each node, at the index of the node itself. The root is index 0, and has a special parent.

Tree Representation

The image shows how a tree with nine nodes can be enumerated into a vector, and then how the vector is populated with parents. The root gets index 0, with a special parent. The first child of the root gets index 1, parent 0. The first child of that node gets index 2, parent 1; since it is a leaf node, its sibling gets index 3, parent 1 .. the whole list of nine parents looks like this:

-1, 0, 1, 1, 0, 0, 5, 5, 5

For QModelIndex purposes, this vector of numbers lets us do two things:

  • the number of children of node n is the number of entries in this vector with n as parent (e.g. a simple QVector::count()).
  • given a node n, we can find out its parent node (it’s at index n in the vector) but also which row it occupies (in QModelIndex terms), by counting how many other nodes have the same parent that occur before it.
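To make those two lookups concrete, here is a small self-contained sketch of the bookkeeping, using std::vector and int in place of the QVector and quintptr from the post (illustrative only, not the actual VariantModel code):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Parent vector for the nine-node example tree from the post.
// Index = node id in depth-first order; value = the parent's id.
// The root (id 0) gets the special parent -1.
static const std::vector<int> parents = {-1, 0, 1, 1, 0, 0, 5, 5, 5};

// Number of children of node n: how many entries name n as their parent.
int childCount(int n)
{
    return static_cast<int>(std::count(parents.begin(), parents.end(), n));
}

// Row of node n under its parent (in QModelIndex terms): how many
// nodes with the same parent were enumerated before n.
int rowOf(int n)
{
    const int parent = parents[n];
    return static_cast<int>(
        std::count(parents.begin(), parents.begin() + n, parent));
}
```

On the example vector, the root has three children (nodes 1, 4, and 5), and node 4 sits at row 1 because only node 1 shares its parent and comes before it.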

In order to get the data from the QVariant, we have to walk the tree, which requires a bunch of parent lookups and recursively descending through the tree once the parents are all found.

Changing the underlying map isn’t immediately fatal, but changing the number of nodes (especially in intermediate levels) will do very strange things. There is a reload() method to re-build the list and parent indexes if the underlying data changes – in that sense it’s not a very robust model. It might make sense to memoize the data as well while walking the tree – again, I need to read more of Marek’s work.

I’m kinda pleased with the way it turned out; the consumers of the model have become slightly simpler, even if the actual model code (from QJsonModel to VariantModel) isn’t much smaller. There’s a couple of places in Calamares that might benefit from this model besides the debug window, so it is likely to get some revisions as I use it more.

So, we had a Krita sprint last week, a gathering of contributors of Krita. I’ve been at all sprints since 2015, which was roughly the year I became a Krita contributor. This is in part because I don’t have to go abroad, but also because I tend to do a lot of administrative side things.

This sprint was interesting in that it was an attempt to have as many artists as developers there, if not more. The idea being that where the previous sprint was very much focused on bugfixing and getting new contributors familiar with the code base (we fixed 40 bugs back then), this sprint would be more about investigating workflow issues, figuring out future goals, and general non-technical things like how to help people, how to engage people, and how to make people feel part of the community.

Unfortunately, it seems I am not really built for sprints. I was already somewhat tired when I arrived, and was eventually only able to do half days most of the time because there were just too many people …

So, what did I do this sprint?

Investigate LibSai (Tuesday)

So, PaintTool Sai is a 2D painting program with a simple interface that was the hottest thing around 2006 or so, because at the time you had PaintShop Pro (a good image editing program, but otherwise…), Photoshop CS2 (you could paint with this, but it was rather clunky), GIMP, OpenCanvas (very weird interface) and a bunch of natural-media-simulating programs (Corel, very buggy, and some others I don’t remember). Paint Tool Sai was special in that it had a stabilizer, mirroring/rotating of the viewport, a color-mixing brush, variable-width vector curves, and a couple of cool dockers. Mind you, it only had like three filters, but all those other things were HUGE back then. So everyone and their grandmother pirated it until the author actually made an English version, at which point like 90% of people still pirated it. Then the author proceeded to not update it for like… 8 years?, with Paint Tool Sai 2 being in beta seemingly forever.

The lowdown is that nowadays many people (mostly teens, so I don’t really want to judge them too much) are still using Paint Tool Sai 1, pirated, and it’s so old it won’t work on Windows 10 computers anymore. One of the things that had always bothered me is that there was no program outside of Sai that could handle opening the Sai file format. It was such a popular program, yet no one seemed to have tried?

So, it seems someone has tried, and even made a library out of it. If you look at the readme, the reason no one besides Wunkolo has tried to support it is that sai files aren’t just encoded, no, they’re encrypted. This would be fine if it were video game saves, but a painting program is not a video game, and I was slightly horrified, as this is a technological mechanism put in place to keep people from getting at the artwork they made, rather than just the most efficient way to store it on the computer. So I now feel more compelled to have Krita be able to open these files, so people can actually access them. I sat down with Boudewijn to figure out how much needs to be done to add it to Krita, and we got some build errors, so we’re delaying this for a bit. Made a phabricator task instead:

T11330 – Implement LibSai for importing PaintTool Sai files

Pressure Calibration Widget (Tuesday)

This was a slightly selfish thing. I had been having trouble with my pen deciding it had a different pressure curve every so often, and adjusting the global tablet curve was, while perfectly possible, getting a bit annoying. I had seen pressure calibration widgets in some Android programs, and I figured that the way I tried to figure out my pressure curve (messing with the tablet tester and checking the values) is a little too technical for most people, so I decided to gather up all my focus and program a little widget to help with that.

Right now, it asks for a soft stroke, a medium one and a heavy one, and then calculates the desired pressure curve from that. It’s a bit fiddly though, and I want to make it less fiddly but still friendly in how it guides you to provide the values it needs.

MR 104 – Initial Prototype Pressure Callibration Widget

HDR master class (Tuesday)

(Not really)

So, the sprint was also the first time to test Krita on exciting new hardware. One of these was Krita on an Android device (more on that later), the other was the big HDR setup provided by Intel. I had already played with it back in January, and have, since the beginning of Krita’s LUT docker support, played with making HDR/scene-linear images in Krita. Thus when Raghu started painting, I ended up pointing at things to use and explaining some peculiarities (HDR is not just bright, but also wide gamut and linear, so you are really painting with white).

Then Stefan joined in and started asking questions, and I had to start my talk again. Then later that day Dmitry bothered David till he tried it out, and I explained everything again!

Generally, you don’t need an HDR setup to paint HDR images, but it does require a good idea of how to use the color management systems and wrapping your head around it. It seems that the latter was a really big hurdle, because artists who had come across as scared of it over IRC were, now that they could see the wide gamut colors, a lot more positive about it.

Animation cycles were shown off as well, but I had run out of juice too much to really appreciate it.

Later that evening we went to the Italian restaurant on the corner, which miraculously made ordering à la carte work for a group of 25 people. I ended up translating the whole menu for Tusooaa and Tiar, who repaid me by making fun of my habit of pulling apart the nougat candy I got with the coffee. *shakes fist in a not terribly serious way* Later, during the walk, there were also discussions about burnout and mental health, a rather relevant topic for freelancers.

Open Lucht Museum Arnhem (Wednesday)

I did make a windmill, but… I really should’ve stayed in Deventer and caught up on sleep. Being Dutch, this was the 5th or 7th time I had seen this particular open air museum (there’s another one (Bokrijk) in Belgium where I have been just as many times), but I had wanted to see how other people would react to it. In part because it shows all this old Dutch architecture, and in part because these are actual houses from those periods: they get carefully disassembled and then carefully rebuilt, brick by brick and beam by beam, on the museum grounds, which in itself is quite special.

But all I could think while there was ‘oh man, I want to sleep’. Next time I just need to stay in Deventer, I guess.

I did get to take a peek at other people’s sketchbooks and see, to my satisfaction, I am not the only person to just write notes and todo lists in the sketchbook as well.

At dinner, we mostly talked about the weather, and confusion over the lack of spiciness in the otherwise spicy cuisine of Indonesia (which was later revealed to be caused by the waiters not understanding we had wanted a mix of spicy and non-spicy things). And also that box-beds are really weird.

Taking Notes (Thursday Afternoon)

Given the bad decision I had made the day before by going to the museum, I decided to be less dumb and told everyone I’d sleep during the morning.

And then when I joined everyone, it turned out there had been half a meeting during the morning. And Hellozee was kind of hinting that I should definitely take the notes for the afternoon (looking at his notes and his own description of that day, taking notes had been a bit too intense for him). And later he ranted at me about the text tool, and I told him: ‘Don’t worry, we know it is clunky as hell. Identifying that isn’t what is necessary to fix it’. (We have several problems, the first being the actual font stack itself, so we can have shaping for scripts like those for Arabic and Hindi; then there’s the layout, so we can have word wrap and vertical layout for CJK scripts; and only after those are solved can we even start thinking of improving the UI.)

Anyhow, the second part of the meeting was about Instagram, marketing, and just having a bit of fun with getting people to show off the art they made with Krita. The thing is, of course, that if you want it to be a little bit of fun, you need to be very thorough in how you handle it. Competition could lead to it feeling like a dog-eat-dog affair, and we also need to make it really clear how we deal with the usual ethics around artists and ‘working for exposure’. Sara Tepes is the one who wants to start up the Krita account for Instagram, which I am really thankful for. She also began the discussion on this, and I feel a little bad because I pointed out the ethical aspect by making fun of ‘working for exposure’, and I later realized that she was new to the sprint, and maybe that had been a bit too forward.

And then I didn’t get the chance to talk to her afterwards, so I couldn’t apologize. I did get some comments from others that they were glad I brought it up, but still it could’ve been nicer. orz

In the end we came to a compromise that people seemed comfortable with: A cycle of several images, selected from things people explicitly tag to be included on social media, for a short period, and a general page that explains how the whole process works so that there’s no confusion on what and why and how, and that it can just be a fun thing for people to do.

Android Testing (Friday)

I had poked at the Android version before, but last time it had not yet had graphics acceleration support, so it was really slow. This time I could really sit down and test it. It’s definitely gotten a lot better, and I can see myself or other artists using this as an alternative to a sketchbook, to sit down and doodle on while the computer is for the bigger, intensive work.

It was also a little funny: when I showed someone it was pressure sensitive, all the other artists present one by one walked over to me to try poking the screen with a stylus. I guess we generally have so much trouble getting pressure to work on desktop devices that it’s a little unbelievable it would just work on a mobile device.

T11355 – General feedback Android version.

That evening discussions were mostly about language, and photoshop’s magnetic lasso tool crashing, and that Europeans talk about language a lot.


On Saturday I read an academic book of about 300 pages, something I had really needed after all this. I felt a lot more clear-headed afterwards. I had attempted to help Boudewijn with bug triaging, which is something we usually do at a sprint, but I just couldn’t concentrate.

We were all too tired to talk much on Saturday. I can only remember eating.


On Sunday I spent some time with Boudewijn going through the meeting notes and turning them into the sprint report. Boudewijn then spent five attempts trying to explain the current release schedule to me, and now I have my automated mails set up so people get warned about backporting their fixes and about the upcoming monthly release schedule.

In the evening I read through the Animator’s Survival Kit. We have a bit of an issue where it seems Krita’s animation tools are so intuitive that when it comes to the unintuitive things that are inherent to big projects themselves (ram usage, planning, pipeline), people get utterly confused.

We’ve already been doing a lot of things in that area: making it more obvious when you are running out of RAM, making the render dialog a bit better, making onion skins a bit more guiding. But now I am also rewriting the animation page and trying to convey to aspiring animators that they cannot do a one-hour 60 fps film in a single Krita file, and that they will need to do things like planning. The Animator’s Survival Kit is a book that’s largely about planning, a subject that is otherwise talked about very little, which is why it is suggested to aspiring animators so often, and I was reading it through to make sure I wasn’t about to suggest nonsense.

We had, after all the Indian attendees had left, gone to an Indian restaurant. Discussions were about spicy food, language and Europe.


On Monday I stuck around for the irc meeting and afterwards went home.

It was lovely to meet everyone individually, and every single conversation I had was lovely, but this is really one of those situations where I need to learn to take more breaks and not be too angry at myself for that. I hope to meet everyone again in the future in a less crowded setting, so I can have all the fun of meeting fellow contributors and none of the exhausting parts. The todo list we’ve accumulated is a bit daunting, but hopefully we’ll get through it together.

Last week we had a huge Krita Sprint in Deventer. A detailed report was written by Boudewijn, so I will concentrate on the Animation and Workflow discussion we had on Tuesday, when Boudewijn was away, meeting and managing arriving people. The discussion was centered around Steven and his workflow, but other people joined during the discussion: Noemie, Scott, Raghavendra and Jouni.

(Eternal) Eraser problem

Steven brought up the point that the current brush options "Eraser Switch Size" and "Eraser Switch Opacity" are buggy, which wound up reviving an old topic. These options were always considered a workaround for people who need a distinct eraser tool/brush tip, and they have always been difficult to maintain.

After a long discussion with a broader circle of people, we concluded that the "Ten Brushes" plugin can be used as an alternative to a separate eraser tool: one just assigns some eraser-behaving preset to the 'E' key using this plugin. So we decided that we need the following steps:

Proposed solution:

  1. Ten Brushes Plugin should have some eraser preset configured by default
  2. This eraser preset should be assigned to "Shift+E" by default. So when people ask about "Eraser Tool" we could just tell them "please use Shift+E".
  3. [BUG] Ten brushes plugin doesn't reset back to a normal brush when the user picks/changes painting color, like normal eraser mode does.
  4. [BUG] Brush slot numbering is done in 1,2,3,...,0 order, which is not obvious. It should be 0,1,2,...,9 instead.
  5. [BUG] It is not possible to set up a shortcut for a brush preset right in the Ten Brushes Plugin itself. The user has to go to the settings dialog.
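The slot-numbering fix in point 4 boils down to a key-to-slot mapping change; a minimal Python sketch of the two behaviours (helper names are hypothetical, not the plugin’s actual code):

```python
def slot_for_key(key_digit):
    """Proposed mapping: digit key N selects brush slot N (0..9)."""
    return key_digit

def legacy_slot_for_key(key_digit):
    """Current mapping: keys are ordered 1,2,...,9,0, so pressing '1'
    selects the first slot and '0' selects the last, which is not
    obvious to users."""
    return 9 if key_digit == 0 else key_digit - 1
```

With the proposed mapping, the key labels and slot numbers simply agree, which is the whole point of the fix.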

Stabilizer workflow issues

In Krita, stabilizer settings are global; that is, they are applied to whatever brush preset you are using at the moment. That is very inconvenient, e.g. when you do precise line art: if you switch to a big eraser to fix up the line, you don't need the same stabilization as for the liner.

Proposed solution:

  1. The stabilizer settings stay in the Tool Options docker; we don't move them into the Brush Settings (because sometimes you need them to be global).
  2. A brush preset gets a checkbox "Save Stabilizer Settings" that will load/save the stabilizer settings when the preset is selected/deselected.
  3. The editing of these (basically brush-based) settings will still happen in the Tool Options docker.


  • I'm not sure if the last point is sane. Technically, we could move the stabilizer settings into the brush preset, and if the user wants to use the same stabilizer settings in different presets, they can just lock the corresponding brush settings (we have a special lock icon for that). So should we move the stabilizer settings into the brush preset editor, or keep them in the tool options?
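Point 2 of the proposed solution can be sketched as follows (a minimal Python illustration with hypothetical names; the real logic would live in Krita’s C++ preset handling):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StabilizerSettings:
    sample_count: int = 20
    delay: int = 0

@dataclass
class BrushPreset:
    name: str
    # Optional per-preset stabilizer settings, saved only when the
    # "Save Stabilizer Settings" checkbox is ticked for this preset.
    saved_stabilizer: Optional[StabilizerSettings] = None

def on_preset_selected(preset, global_settings):
    """Return the stabilizer settings to activate for this preset.

    If the preset carries its own saved settings, they override the
    global ones; otherwise the global settings stay in effect."""
    if preset.saved_stabilizer is not None:
        return preset.saved_stabilizer
    return global_settings
```

So a liner preset could carry heavy stabilization while a big eraser without saved settings falls back to whatever is set globally, which is exactly the workflow problem described above.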

Cut Brush feature

Sometimes painters need a lot of stamps for often-used objects, e.g. a head or a leg of an animation character. A lot of painters use the brush preset selector as storage for that: if you need a copy of a head on another frame, just select the preset and click at the proper position. We already have stamp brushes and they work quite well; we just need to streamline the workflow a bit.

Proposed solution:

  1. Add a shortcut for converting the current selection into a brush. In particular, it should:
    • create a brush from the current selection, give it a default name and create an icon from the selection itself
    • deselect the current selection, to ensure that the user can paint right after pressing this shortcut
  2. There should be shortcuts to rotate and scale the current brush
  3. There should be a shortcut for switching to the previous/next dab of the animated brush
  4. The brush needs a special outline mode, where it shows not an outline but a full-color preview. It should be activated by some modifier (that is, press and hold).
  5. Ideally, if multiple frames are selected, the created brush should become animated. That would allow people to create a "walking brush" or a "raining brush".

Multiframe editing mode

One of the major things Krita's animation tools still lack is a multiframe editing mode, that is, the ability to transform/edit multiple frames at once. We discussed it and ended up with a list of requirements.

Proposed solution:

  1. By default all the editing tools transform the current frame only
  2. The only exception is "Image" operations, which operate on the entire image, e.g. scale, rotate, change color space. These operations work on all existing frames.
  3. If there is more than one frame selected in the timeline, then the operation/tool should be applied to these frames only.
  4. We need a shortcut/action in the frame's (or timeline layer's) context menu: "Select all frames"
  5. Tools/Actions that should support multiframe operations:
    • Brush Tool (low-priority)
    • Move Tool
    • Transform Tool
    • Fill Tool (may be efficiently used on multiple frames with erase-mode-trick)
    • Filters
    • Copy-Paste selection (now we can only copy-paste frames, not selections)
    • Fill with Color/Pattern/Clear
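Rules 1 and 3 above amount to a simple dispatch on the timeline selection; a minimal sketch (plain Python with hypothetical names, not Krita code):

```python
def frames_to_edit(current_frame, selected_frames):
    """Decide which frames an editing tool should operate on.

    Rule 1: by default, only the current frame is edited.
    Rule 3: if more than one frame is selected in the timeline,
    the operation applies to the selected frames instead."""
    if len(selected_frames) > 1:
        return sorted(selected_frames)
    return [current_frame]
```

Image-wide operations (rule 2, e.g. scale or color space conversion) would bypass this dispatch entirely and always touch every existing frame.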


There is also a set of unsorted bugs that we found out during the discussion:
  1. On Windows, multiple main windows don't have unique identifiers, so they are not distinguishable from OBS.
  2. The animated brush spits out a lot of dabs at the beginning of the stroke
  3. "Show in Timeline" should be the default for all new layers
  4. Fill Tool is broken with Onion Skins (BUG:405753)
  5. Transform Tool is broken with Onion Skins (BUG:408152)
  6. Move Tool is broken with Onion Skins (BUG:392557)
  7. When copy-pasting frames on the timeline, in-betweens should override the destination (and technically remove everything that was at the destination position). Right now source and destination keyframes are merged. That is not what animators expect.
  8. Changing the "End" of the animation in the "Animation" docker doesn't update the timeline's scroll area. You need to create a new layer to update it.
  9. The Delayed Save dialog doesn't show the name of the stroke that delays it (and sometimes the progress bar is missing as well). It used to work, but is now broken.
  10. [WISH] We need an "Insert pasted frames" action, which will not override the destination, but just offset it to the right.
  11. [WISH] Filters need better progress reporting
  12. [WISH] Auto-change the background of the Text Edit dialog when the text color looks too similar to it.

As a conclusion, it was very nice to be at the sprint and to be able to talk to real painters! Face to face meetings are really important for getting such detailed lists of new features we need to implement. If we did this discussion through Phabricator we would spend weeks on it :)

I’ve updated the site, so KDE now has web pages listing the applications we produce.

In the update this week it’s gained Console apps and Addons.

Some exciting console apps we have include Clazy, kdesrc-build, KDebug Settings (a GUI app, but one with no menu entry) and KDialog (another GUI app, but called from the command line).

This KDialog example takes on a whole new meaning after watching the Chernobyl telly drama.

And for addon projects we have stuff like File Stash, Latte Dock and KDevelop’s addons for PHP and Python.

At KDE we want to be a great home for your project, and this is an important part of that.


August 12, 2019

KDevelop 5.4.1 released

We today provide a stabilization and bugfix release with version 5.4.1. This is a bugfix-only release, which introduces no new features and as such is a safe and recommended update for everyone currently using KDevelop 5.4.0.

You can find the updated Linux AppImage as well as the source code archives on our download page.



  • Fix crash: add missing Q_INTERFACES to OktetaDocument for IDocument. (commit. fixes bug #410820)
  • Shell: do not show bogus error about repo urls on DnD of normal files. (commit)
  • [Grepview] Use the correct icons. (commit)
  • Fix calculation of commit age in annotation side bar for < 1 year. (commit)
  • Appdata: add entry. (commit)
  • Fix registry path inside kdevelop-msvc.bat. (commit)


No user-relevant changes.


  • Update phpfunctions.php to phpdoc revision 347831. (commit)
  • Switch few http URLs to https. (commit)
kossebau Mon, 2019/08/12 - 22:00

Officially, on Friday the 2019 Krita Sprint was over. However, most people stayed until Saturday… It’s been a huge sprint! Almost a complete convention, a meeting of developers and artists.

All the sprinters together

The sprinters, artists, contributors and artists/contributors, all together. Photo taken by Krzyś.


On Monday, people started arriving. It was pretty great to meet again so many people who hadn’t seen each other for a long time, and to see so many people who hadn’t been to any Krita sprint before. We had rigged up an HDR test system in the sprint area: the 12th-century cellar underneath the Krita maintainer’s house, in the town centre of Deventer (probably for the last time, since it’s getting too small). Wolthera was kept busy all week giving introductions to painting in HDR; she is, after all, the first person in the world to have actually done creative painting in HDR.

There was other hardware to test as well, like the Android tablet with Sharaf Zaman’s Android port of Krita. Sharaf couldn’t come to the sprint; his visa was denied, probably because the Dutch authorities were informed beforehand of the intention of the Indian government to cancel Kashmir’s special status. With a blanket shutdown of internet, mobile telephony and landline telephony, it was impossible to be in touch with Sharaf. We did some thorough testing, and we hope contact with Sharaf will be restored soon.


Since we still weren’t complete, we postponed the meeting until Thursday, so this day was a day for hacking and discussions. We had many more artists joining us than previously, so the discussions were lively and the meetings were good — but there were more bugs reported than bugs fixed.


On Wednesday, we went on an expedition, to the Openlucht Museum. With twenty-three people attending, it was more efficient to actually rent a bus for the lot of us. The idea about the outing was to make sure people who had never been at a sprint and people who had been at sprints before would mingle and get to know each other. That worked fine!

We had a somewhat disappointing guided tour. I had asked for a solid introduction in the social history of the Netherlands, but the guide still felt he needed to make endless inane and borderline sexist jokes that all fell very flat. Oh well, the buildings were great and the people inside the buildings were giving quite interesting information, with the rosmolen being a high point:

From David Revoys sketchbook

From David Revoy’s sketchbook

And as you can see, it gave the artists and developer/artists amongst us a chance to do some analog painting (although at least one sprint attendee tried to paint with Krita on a Surface tablet, unfortunately cut short by battery problems):

We followed up with dinner at an Indonesian restaurant, and went home tired but satisfied. There was still hacking and painting going on, though, until midnight.


Today we really had the core of our sprint. Some sprints are for coding, but this sprint was for bringing together people and gathering ideas. On Thursday, we discussed the future of Krita in quite a bit of detail.

In 2018/2019 the focus was fully on fixing bugs. Compared to this time last year, there are now two more full-time developers working on fixing bugs and improving stability, and both Boudewijn and Dmitry have dedicated all their coding time to fixing bugs as well. Weirdly enough, that doesn’t seem to make much of a dent in our number of open bugs:

Bugs, open, new and closed, since April.

One reason is that we manage to introduce too many regressions. That’s partly explained by our new hackers needing to learn the codebase, partly by an enormous increase in our user base (we’re on track to break 2,500,000 downloads a year in 2019), but mostly by our changes not getting enough testing before releasing. So, taking things out of the order we discussed them at the meeting, let’s report on our Bugs and Stability discussion first.

Bugs and Stability

As David Revoy reports, the 4.2 releases don’t feel as stable as the 4.1 releases. As noted above, this is not unexpected since we have two new full-time developers working on Krita, who aren’t that deep in the codebase yet. Another reason we have so much trouble with the 4.2 releases is that we updated to Qt 5.12, which seems to come with many regressions we either have to fix in Qt (and we do submit patches upstream, and those are getting accepted), or work around in Krita itself. On the other hand, we are merging bug fixes into our release branch until the last minute before the release, so those fixes get barely any testing: so the lack of testing isn’t something we can blame on our users, it’s to a large extent our own fault.

Yet Raghukamath and Deevad noted that they both don’t actually test master or the stable branch from day to day anymore because they are too busy actually using Krita, and the same goes for the other artists present. It’s clear that the developers cannot do regression testing, that our extensive set of unittests (although most are more like integration tests, technically) doesn’t catch the regressions — we have to find a better way.

Coincidentally (or not…), Anna (who was not present) had started a discussion about this on Phabricator some time ago: T11021: Enhancements to quality assurance. There are many parts to that discussion, but one thing we concluded, based on discussions during the sprint, is that we will try the following:

  • We will release once a month, at the end of the month (we already try to do that…)
  • We will merge bug fixes to the stable branch until the middle of the month: the merge window thus is two weeks, while master is always open to bug fixes.
  • We will publish an article telling our users what landed in the next stable version. That article will show up in all our users’ welcome screen news widget, right inside Krita. There will be links to download a portable version of Krita for every OS.
  • We will add a link to a survey (not bugzilla) in the welcome screen of those builds, and in the survey ask people for the results of testing the changes noted in the release article.
  • And then, two weeks later, we will release the next stable version of Krita, with only fixes merged during the test period for noted regressions.

Slightly related: we also want to do a monthly update article on changes for master, but without the survey mechanism. However, that would make two development updates a month, which might be a bit much to digest, so we’re starting slowly, with the stable release system.

Development focus for 2019/2020

In October we will release a hopefully super-stable Krita 4.3, with a bunch of new features as well, but still focused on stability. Boudewijn is still working on fixing the resource handling system, but that is going really slowly, and turning out to be really hard. It’s also hard for the maintainer of the whole project to find time to work on big coding projects, and it’s getting harder the more management-like tasks there are.

Everyone in the meeting agreed that the text tool still needs much more work, maybe even another rewrite to get rid of the dependency on Qt’s internal rich text editor: the conversion between QTextDocument and SVG is lossy, and gives problems. We were all aware of the missing bits and the problems and bugs, so we didn’t need to discuss this in detail. So one focus is:

  • Text Tool: it is still not usable for the primary goal, namely comic book text balloons. We need to make it suitable. We know what to do, we just don’t have the time while fighting incoming bugs all the time.

So it’s clear that we still need to work on…

  • Stability and performance. During the discussion some particular issues were noted:
    • Raghukamath reported that the move tool has become very slow. This definitely is a regression.
    • One year ago, at the 2018 Krita Sprint, we made a list of Unfinished Stuff, ranging from missing features after the vector rewrite, unimplemented layer style options and the half-implemented animated transform mask, to missing undo/redo support in the scripting layer. Most of that is still relevant. See the original sprint report.
    • Dmitry noted he had made some experiments that show that we could make our brushes much faster by using newer versions of AVX, but this would only help people with newer laptops. Boudewijn wondered whether the brush engines are the true bottleneck: if the improvement only shows in benchmarks, users won’t notice much difference. We might want to do another round of measuring using Intel’s VTune, if we can get another license for a year.
    • Resource handling is still being rewritten. That means that when Steven noted that he cannot update a workspace, Boudewijn replied that this is part of the bigger problem with the current resource system. Boud is rewriting that, and has been working on it for two years now, leading to a huge merge request, big enough to almost bring GitLab to its knees. The rewrite might be too big to actually finish, and it’s hard to distribute the work over multiple developers.

Since we had so many artists around whose views we had never before been able to canvass, we decided that digging into workflow issues might be the best thing to do; it would make a good theme for the next fundraiser, too. So:

    • Workflow issues
      • Stefan brought in multiple issues with animation, like editing on multiple frames at the same time or finally getting the clones feature done. Most issues are already in bugzilla as wishes, but unfortunately, we don’t have someone to work on animation full-time at the moment. Some progress was made and demonstrated during the sprint, though!
      • In the nineties and oughties, all desktop applications looked the same and followed more or less the same guidelines. That made sure that users knew they could investigate the contents of the application menus; it would be the first place to look for something relevant for their task at hand. That barely seems to happen anymore. So, discoverability of features in Krita is a problem that is getting worse. We made a list of things that were hard to find:
      • Our dockers are overcrowded and the contents hard to find; a docker hidden behind another docker isn’t going to be discovered by many users.
      • If there are tabs in a single docker, like with the transform tool, and some of those tabs aren’t visible because there are too many of them, like the liquify transform functionality, that functionality might as well not exist, it won’t be found.
      • Deevad suggested making each transform tool option into a separate tool; however, as Raghukamath noted, the objection to that is that we don’t want six new tools crowding the toolbox, nor having the weird pop-out tool selection buttons Photoshop has. This is a perennial discussion, and it’s next to impossible to come to a conclusion here.
    • Related to that is the question of the tool options docker. Right now, users can choose between putting the tool options in a docker, or in a popup button on the toolbar. Another option would be to make a toolbar out of the tool options, with some pop-ups, like Corel Painter and Photoshop did. But none of these options is a real solution. We might want to do a survey, but from experience, users want to have at least the overview, tool options, color selector, layer docker and brush preset docker visible, as well as some others, and on most displays there just isn’t the space for that…
    • Actually figuring out which dockers we have is pretty hard, too. Most people don’t seem to find the dockers submenu in the settings menu, or in the right-click menu on the docker titlebars, and if they find it, the list is too intimidating.
    • So, one thing we decided to do is create a tool to search through Krita’s functionality. Other applications are apparently facing the same problem, and this is an easy and cheap solution. Of course, people started asking for this tool to also search e.g. layer names. That led to the thought that this is starting to look a bit like Qt Creator’s locator widget…
    • A workflow improvement Steven suggested was to autosave the current session (that is, open windows, views, images) and restore it on restarting Krita.
    • Mariya said that the combination of clone layers and transform masks doesn’t work as well for her as Photoshop’s smart objects. After some discussion, it seems that we might want to rethink showing masks in the hierarchy if there’s only one mask of a certain type: it’s easier to show them as a toggle in the layer’s row in the layerbox.
    • Sara asked whether Krita had a screen recorder. This should only record the canvas, stroke by stroke, and export to PNG or JPG. The old screen recorder could do this, but it was very broken, got removed, and was never intentionally released. This would be a biggish project, GSoC-sized, and would need to build on first finishing the port to the generic strokes system, and then extending that system with recording. Another option would be to add a timer to save incremental versions, plus an option to export incrementally in addition to saving incrementally.
    • Emmett and Eoin discussed working with painting assistants: it would be interesting to make a hierarchy with grouped assistants. The conclusion was that we might want to show the assistants in the layer docker on top of the layers; other suggestions were a separate docker or putting the treeview in the tool option widget. If the first solution is chosen, it would be useful to also show reference images and maybe even guides in the layer^Wimage structure docker.

Some of the workflow issues mentioned already sound like new features, and then there were a number of discussions about what really would be new features:

    • It would help a lot with user support if a statusbar indicator would show whether the stabilizer, canvas acceleration, instant preview (and others) are on or off. A screenshot would then immediately answer the questions we most often ask users who need support.
    • After so many months of bug fixes, Dmitry really wants to work on one or two new brush engines: the first has thin bristles that could each have their own color. Sara mentions she really misses a brush like this. The other engine would be more like a calligraphy tool. Dmitry estimates needing two months per engine, which is quite a bit of time.
    • Eoin wants to have a tool, or a brush engine, or at least, something that would make it very easy to paste images or designs and transform them before pasting the next one. The images should come from an ordered or random collection. Currently, people use pipe brushes for that, but that is not convenient. A new tool is suggested.
    • Steven notes that it would be much more convenient to create a pipe brush from an animation on the timeline than the current system based on layers. This is true – we just never thought of it.
    • Mariya really wants a font selector where she can mark a number of fonts as her favourites, and she created a wish bug for it. It turns out that Calligra doesn’t have a widget like that, and the standard Qt font combobox doesn’t support it either, nor does Scribus: it might be that such a thing has never been written in Qt.

Another thing that was discussed briefly was telemetry (we tried that; the project failed).

Marketing and Outreach

Our presence on Twitter, Mastodon, Tumblr, DeviantArt, Reddit is fine. On Facebook, non-official groups are more used than Krita’s own account (which is because Boud is the last maintainer standing, and he cannot stand facebook). Youtube needs improvement, we are absent on Instagram.


Sara Tepes volunteered to handle Krita’s Instagram account (which we don’t have yet). Sara also wants to run “competitions” on Instagram, with the prize being having a number of selected images shown on either Krita’s splash screen or Krita’s welcome screen.

The splash screen is our main branding location, so we shouldn’t put other images in there other than the holiday jokes.

We could redesign the welcome screen to include an image location; it needs a redesign in any case, because it’s too drab right now. We wanted it to not be in-your-face, but now it errs a bit too far in the other direction.

Once we have the image location, getting and selecting images can be a problem, as it was for the art book or main release announcements. Sara notes that instagram gives easy tools to select images from a larger set; other platforms are not so good.

In any case, selecting images will be quite a bit of work, and we do need to make sure we’re not playing favorites or forgetting where we come from: free software, open culture.

Conclusion: we are going to try to run the competition on all social networks for which we have a maintainer (instagram, twitter, mastodon, reddit). We can always extend this later to other places. Each maintainer can propose two images + attribution info + links, which will be shown in rotation for a month.

The system for doing this should be ready for the 4.3 release in October.

Note: we have to make a page with a very clear text explaining the rules: we don’t take ownership of the images, the images will be shown in Krita, there will be no licensing requirements for the images, certain kinds of images cannot be used.

Note 2: Scott should ask Ben Cooksley how we can get the welcome screen news widget traffic information on a regular basis.


We have already started improving our presence on YouTube. We feature existing Krita-related channels, and we are working with Ramon to provide interesting videos. We could do more, but let’s give Ramon a chance to build up some momentum first.

Development Fund and Fundraiser

Financially, Krita is doing okay. We do get between 2000 and 2700 euros a month in donations: that translates to one full time developer (yes, we’re not getting rich from working on Krita, these are not commercial fees). Windows Store + Steam bring in enough for three to four extra full-time developers. It would be good to become less dependent on the Windows Store, though, since Microsoft is getting more and more aggressive in promoting getting applications from the Windows Store.

Enter the Krita Development Fund. Like in most things, we try to look at what Blender is doing, and then try to find out whether that works for us. Often it does. We already have a notional Development Fund, but it’s basically a monthly paypal subscription or recurring bank transfer. We don’t have any feedback or extras for the subscribers, and the subscribers have no way to manage their subscription, or reach us other than in the usual way. We tried to implement a CiviCRM system for this, but that was way too complex for us to manage.

We need to reboot our Development Fund and migrate existing subscribers to the new fund. A basic list of requirements is:

      • A place on the website where people can subscribe and unsubscribe
      • A place where the names of people who want that are shown
      • A way to tell people what we are doing with the money and what we will be doing
      • Make sure companies and sponsors will also be able to join

And no doubt there will be other considerations and requirements. We should check Blender’s dev fund website, of course. We created a Phabricator task to track this, and it’s something we really want help with!

What wasn’t discussed

Interestingly, the new GitLab workflow seems to work for everyone. GitLab’s UI is even less predictable and discoverable than Krita’s, but we didn’t need to discuss anything; people can work with it without much trouble.

Steam wasn’t much of a discussion item either: Windows is doing fine on Steam, our macOS version of Krita still has too many niggles to make it worthwhile to put on Steam (or the Apple Store, even if that were possible license-wise), and the Linux market share is still too small to make it worth the time investment. Still, Emmet promised to contact Valve to see how we can get the AppImage onto Steam. At first glance, the problem seems to be the version of libc required, which might mean we’ll have to figure out a way to build Qt 5.12 on older versions of Ubuntu or CentOS. But let’s wait and see, first.

Tangentially, we did discuss how to get more people involved in user support; Agata already has plans to give the people who are already trying to help others in places like Reddit and the forum more recognition. It was late, and the discussion degenerated into hilarity pretty soon. Still, this is something to work on, since the core development team just doesn’t have the capacity anymore to help with every new Krita user’s teething problems.


To create: a task for rethinking what goes into dockers, and what goes somewhere else.


Friday was the real hacking day. Some people already started leaving, but many people were staying around and started hacking on the issues identified during the meeting, like the action search widget. Bugs were being fixed, regressions identified and blogs posted. And even later on, on Saturday and Sunday, there was still hacking, like on the detached canvas feature.

Last week I finished writing all the new examples for Rocs, together with a little description of each, added as a comment at the beginning of the code. The following examples were implemented:

  • Breadth First Search;
  • Depth First Search;
  • Topological Sorting Algorithm;
  • Kruskal Algorithm;
  • Prim Algorithm;
  • Dijkstra Algorithm;
  • Bellman-Ford Algorithm;
  • Floyd-Warshall Algorithm;
  • Hopcroft-Karp Bipartite Matching Algorithm.
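To give a flavour of what these example scripts do, here is a minimal breadth-first search sketch in plain JavaScript. This is not the Rocs scripting API (its node/edge objects are not reproduced here); the graph is just an adjacency list, which is enough to show the algorithm itself.

```javascript
// Breadth-first search over an adjacency list.
// Returns the nodes in the order they are visited.
function bfs(graph, start) {
  const visited = new Set([start]);
  const order = [];
  const queue = [start];
  while (queue.length > 0) {
    const node = queue.shift();        // dequeue the oldest discovered node
    order.push(node);
    for (const neighbor of graph[node] || []) {
      if (!visited.has(neighbor)) {    // enqueue each neighbor exactly once
        visited.add(neighbor);
        queue.push(neighbor);
      }
    }
  }
  return order;
}

const graph = { a: ["b", "c"], b: ["d"], c: ["d"], d: [] };
console.log(bfs(graph, "a")); // → [ 'a', 'b', 'c', 'd' ]
```

The same queue-based skeleton underlies several of the other examples; Dijkstra’s algorithm, for instance, replaces the plain queue with a priority queue keyed on distance.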

It is good to note that while Prim’s algorithm and BFS were already in Rocs, they were broken and could not be run. The following image is an example of a simple description of an algorithm:


Regarding step-by-step execution, I am considering our options. My first idea was to take a look at the debugger for the QScriptEngine class, the QScriptEngineDebugger class. An instance of this class can be attached to our script engine, and it provides the programmer with an interface with all the necessary tools.

Although useful, I personally think Rocs doesn’t need all these tools (but they can be provided separately). There are three ways to stop code execution using this debugger:

  • By an uncaught exception inside the JavaScript code;
  • By a call to the debugger instruction, which automatically invokes the debugger interface;
  • By a breakpoint, which can be placed on any line of the JavaScript code.

The first one is not really useful for us, as it halts the code execution. The second and the third can be really useful coupled with a Continue command. But the second invokes the full debugger interface, which we don’t really want.
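For reference, the second mechanism is simply JavaScript’s built-in `debugger` statement. A tiny sketch (when no debugger is attached — for example when run in plain Node.js — the statement is a no-op; with a QScriptEngineDebugger attached, execution suspends at that point):

```javascript
// Sketch of the "debugger instruction" mechanism. The `debugger;` statement
// below does nothing when no debugger is attached, but suspends the script
// and opens the debugger interface when one is listening.
function sumTo(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    debugger; // execution would pause here on every iteration
    total += i;
  }
  return total;
}

console.log(sumTo(4)); // → 10
```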


So, by using the third one, we can stop the execution on any line of the JavaScript code and create a step button that uses the Continue command to resume execution. The only problem is how to add the breakpoints, as there is no direct function to add them; usually the programmer has to use the ConsoleWidget or the BreakpointsWidget interface to do this. The following image shows the Continue button, which is already working:


But the challenge of adding the breakpoints still remains. One of my ideas is to modify the code editor to accept a click on the line-number bar, which triggers a signal to add or remove a breakpoint on that line. This seems a clean alternative to me. But for that I have to check whether KTextEditor has this type of signal, and create a way to add breakpoints in the code programmatically.

August 11, 2019

Some considerable time ago I wrote up instructions on how to set up a FreeBSD machine with the latest KDE Plasma Desktop. Those instructions, while fairly short (set up X, install the KDE meta-port … and that’s it), are a bit fiddly.

So – prompted slightly by a Twitter exchange recently – I’ve started a mini-sub-project to script the installation of a desktop environment and the bits needed to support it. To give it at least a modicum of UI, dialog(1) is used to ask for an environment to install and a display manager.

The tricky bit – pointed out to me after I started – is hardware support, although a best effort is better than having nothing, I think.

In any case, in a VBox host it’s now down to running a single script and picking Plasma and SDDM to get a usable system for me. Other combinations have not been tested, nor has system-hardware-setup. I’ll probably maintain it for a while and if I have time and energy it’ll be tried with nVidia (those work quite well on FreeBSD) and AMD (not so much, in my experience) graphics cards when I shuffle some machines around.

Here is the script in my GitHub repository with notes-for-myself.

Installing FreeBSD is not like installing a Linux distribution. A Linux distro hands you something that will provide an operating system and more, generally with a selection of pre-installed packages and configurations. Take a look at ArcoLinux, which offers 14(?) different distribution ISOs depending on your preference for installed software.

FreeBSD doesn’t give you that – you end up with a text-mode prompt (there are FreeBSD distributions, though, that do some extra bits, but those are outside my normal field-of-view). So it’s not really expected that you have a complete desktop experience post-installation (nor, for that matter, a complete GitLab-server, or postfix-mail-relay, or any of the other specialised purposes for which you can use FreeBSD).

I could vaguely imagine bunging this into bsdinstall as a post-installation option, but that’s certainly not my call. Not to mention, I think there’s an effort ongoing to update the FreeBSD installer anyway, led by Devin Teske.

So to sum up: to install a FreeBSD machine with KDE Plasma, download the script and run it; other desktop environments might work as well.

There is one week left of the call for papers for the foss-north IoT and Security Day. The conference takes place on October 21 at WTC in Stockholm.

We’ve already confirmed three awesome speakers and will fill the day with more content in the weeks following the closing of the CfP, so make sure to get your submission in.

Patricia Aas

The first confirmed speaker is Patricia Aas, who will speak about election security – how to ensure transparency and reliability in the election system so that it can be trusted by all, including a less technologically versed public.

Also, this is the first stage in our test of the new foss-north conference administration infrastructure, and it seems to have worked this far :-). Big thanks goes to Magnus for helping out.

This week in KDE’s Usability & Productivity initiative is massive, and I want to start by announcing a big feature: GTK3 apps with client-side decorations and headerbars using the Breeze GTK theme now respect the active KDE color scheme!

Pretty cool, huh!? This feature was written by Carson Black, our new Breeze GTK theme maintainer, and will be available in Plasma 5.17. Thanks Carson!

As you can see, the Gedit window still doesn’t display shadows – at least not on X11. Shadows are displayed on Wayland, but on X11 it’s a tricky problem to solve. However, I will say that anything’s possible!

Beyond that, it’s also been a humongously enormous week for a plethora of other things too:

New Features

Bugfixes & Performance Improvements

User Interface Improvements

If you’re getting the sense that KDE’s momentum is accelerating, you’re right. More and more new people are appearing all the time, and I am constantly blown away by their passion and technical abilities. We are truly blessed by… you! This couldn’t happen without the KDE community–both our contributors for making this warp factor 9 level of progress possible, and our users for providing feedback, encouragement, and being the very reason for the project to exist. And of course, the overlap between the two allows for good channels of communication to make sure we’re on the right track.

Many of those users will go on to become contributors, just like I did once. In fact, next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a tax-deductible donation to the KDE e.V. foundation.

Planet KDE is made from the blogs of KDE's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet KDE with RSS, FOAF or OPML.