First disclaimer: I, of course, don’t speak for the whole project, I just write what I personally see and feel. And the blog post contains some technicalities; if you’re not interested in that, just skim over it, you’ll get the feel of it anyway. Or just skip straight to Future.
And btw, I wrote most of this post at least two weeks ago, so the beta release that will probably happen this Wednesday is, I believe, just a coincidence.

In October last year we were focused and the direction was clear. Now, half a year later, the situation is much different.
Last summer our plan was simple: get the resource rewrite done, fix as many bugs as possible, release 4.3.0 with the resource rewrite and hold a fundraiser for the next year of development.
In October we already knew that a fundraiser in 2019 was not going to happen and that the resource rewrite needed quite a bit of work as well. We assigned more developers to the resource rewrite task and we had two sprints: one in October, focused on getting those developers (me, Wolthera and Dmitry) engaged in the task, going to BlenderCon and meeting some of Krita's business partners in real life, and a second one in February, this time focused entirely on the resource rewrite and on explaining its design decisions to the one developer (Ivan) who wasn't there in October.
However, as much as we wanted to focus on the resource rewrite, external factors ruled it out again and again. We had quite a lot of issues with building Krita on Windows and Mac, especially Python scripting and the notarization that Apple now requires for a program to run on a standard user's Mac. Both took several months to sort out (we've been dealing with them since January) and notarization still has some issues. It's a boring, tedious, frustrating job, which I got a taste of at the very beginning (with just updating Krita's dependencies on Windows) around January, but later it was mostly dealt with by Ivan, Dmitry and Boudewijn. Python is particularly tricky: on Windows there are two different Pythons – one (which must be installed on the system) is used for building Qt, while the other needs to be built and provides Python scripting for Krita. Mixing those two up results in the wildest errors.
Another thing we've been busy with is regressions, mostly crashes with the Colorize Mask, onion skins, animation files and the Transform Tool. Additionally, some of the developers took time off, be it for family, health or other personal reasons.
In the end, I believe I was the one who worked the most on the resource rewrite over the last several months, but even I wasn't working on it all the time. At the last sprint I started working on the bundle manager and the resource manager, but after I implemented all the functionality, it turned out that I needed a precise plan for the resource manager – I created a mockup, put it up on krita-artists.org and started working on something else, my pet project of sorts: the Wishes Manager.
So, yeah… the resource rewrite isn’t finished yet. Let me write down what every developer is working on.
Dmitry is helping voronwe13 (a very new, but very promising volunteer) with new types of brush tips and textures, which allow for real-time, quick and easy impasto effects, but he says he'll come back to bugs soon. I got assigned a task to improve the selection tools a bit – those are explained here if anyone is interested: https://forum.kde.org/viewtopic.php?f=288&t=165355#p430735 – I have already implemented the second one, but there is an issue that needs to be solved; then I will implement the first one and come back to the resource rewrite (I have two branches that are not on the official repo – one with a brand new Resource Manager, and one with fixes to the Bundle Manager's bundles view to make it more user-friendly – but both are in a very messy state). Boudewijn and Ivan were busy doing builds; Wolthera works on the manual, but says she'll come back to coding soon as well.
On a positive note, we've got two new developers: Emmet and Eoin, both known to the Krita project before as volunteers. Emmet has maintained the Steam version of Krita for, hmm, not sure how many months already – for sure over a year, but was it one year or two? – and he has always done a great job there. Now Emmet and Eoin are working on animation. They made a list of tasks and they are running through them at a speed that leaves us senior, long-time Krita developers (says Tiar, working for a full year already!) completely embarrassed. See for yourself: https://phabricator.kde.org/T12769.
Very recently Boudewijn, seeing that the resource rewrite branch was getting quite stable and we can actually paint on it now (thanks to Dmitry), merged it into master. A few issues later, and after the release of 4.2.9, it was decided that the krita/4.2 branch will be wrapped up and we'll release 4.3.0 with everything we had on master before the resource rewrite, while the resource rewrite release will become the grand Krita 5.0. Hellozee, our volunteer, a GSoC student from last year and an applicant this year, is now removing ancient code that Krita kept as a compatibility feature (so that text and shapes created in older Krita versions can be opened in Krita 4 as well). Right now the master branch (you can download a nightly build on Krita's website under the name Krita Next) contains the resource rewrite, while the krita/4.3 branch (on the website as Krita Plus) contains the next release, including features like magnetic selections, RGBA ("impasto") brushes and more.
We’ve got a bit of a problem. Previously, our plan contained the following:
None of it was achieved, and some of it is already impossible. The resource rewrite still needs a lot of work. The number of bugs increases every day. Our extremely optimism-inducing bug count graph that we include in our weekly meetings looks like this:

In spring and summer last year most of us were working solely on bugs, so the number decreased. But then we released 4.2.0, the bug count skyrocketed, and it hasn't stopped since. It's not all regressions, although to be honest, 4.2.0 wasn't the most stable release ever (since I do user support and I knew all the issues preventing people from using one release or another – crashing on startup on Windows, saving issues on Windows, Colorize Mask crashes, animation file crashes… – I feel like version 4.2.9 is the first one I can just tell everyone to use). Considering our promise in the last fundraiser was that we would decrease the bug count to zero, it looks terrible. It's not for lack of trying… but all the easy bugs are fixed; now nearly every one of them requires a few days, a week or even more of work. And we have 555 left. And even bug fixes can cause regressions and more bug reports… it's Sisyphean work.
On a side note, I was reading the Joel on Software blog (joelonsoftware.com), and in the article Top Five Wrong Reasons You Don't Have Testers (https://www.joelonsoftware.com/2000/04/30/top-five-wrong-reasons-you-dont-have-testers/) he wrote: Unfortunately, neither Netscape, nor any other company on earth, has the manpower to sift through bug reports from 2,000,000 customers and decide what's really important. I think I'd like to write a polemic against some small parts of this article, but for now – considering Krita has 3 million users and counting, I guess it's not unexpected that we cannot fix all of the bugs, if even managing them seems impossible to a bystander…
Last autumn we also got a Coverity scan, which means automatic checking for mistakes in the code, and we still have around 900 issues to fix.
Krita 4.3.0 won't contain the resource rewrite, because while we were working on it, the old master branch got so many new features (a lot of them coded up by volunteers, GSoC students etc.) that it's better to release them earlier, so that users get those features sooner and the new wave of bug reports is divided into two smaller ones.
And, of course, the fundraiser is out of the question. (It's a lot of work to prepare one, too.) And we still haven't fulfilled the last fundraiser's promise, have we. (That's of course not entirely true. We did work hard. Our work is visible on the graph. I hope it is visible in Krita, too. But still, I'm a bit disappointed.)
There is also an issue with testing – both unit tests and beta testing. Every time we touch some part of the code, we need to make sure that it's both documented and covered by unit tests. There are things I'm sometimes not sure how to write unit tests for – especially since Krita is really signal-heavy. For beta testing, there is one strategy already in place: releasing a beta version two weeks or, in the case of a major version like 4.3.0, a month before the release, sometimes with a survey. We also have plans for a new platform for beta testers so that we can coordinate test cases, testing of specific features, regression testing etc., to make sure that all components get attention proportional to their importance or frequency of use. Participating in beta testing is, and probably will remain, voluntary – we don't have a single paid tester.
I feel like there is a huge amount of work left that is invisible to our users (the resource rewrite is mostly in the very guts of Krita, and bugs often happen only in circumstances an average user might not encounter…). And there is still constant demand to improve our other unfinished/unpolished areas: animation audio, the text tool, the shortcuts system – all of it requires a lot of work or thinking. Now that Eoin and Emmet fixed the animation cache, finishing up the Animated Transform Mask (which would allow for tweening, which for me is essential to get done; the task description is here: https://phabricator.kde.org/T11476) can be done quite quickly. But that's just one of at least fifteen things I can recite at any given time from what I want to get done in Krita.
I feel like we don’t have enough manpower to handle all of this. (It might be because I’m still considerably new in the project – I wasn’t there, so I have no way of knowing if this is the constant state of the project. And I have a bit of a suspicion it is).
Thing is, bug fixing, stabilizing, writing documentation and unit tests, the resource rewrite or any other architectural rewrite – it's not something we do once and are done with. Ok, maybe the resource rewrite will be that way, but there will be a new thing to rewrite or rethink, or another huge architectural change, or another optimization that just needs to be done – and it's not something we can put on colorful gifs titled See how amazing this new thing is!
Moving forward, Krita has a tough decision to make. What should we focus on? The resource rewrite? New features? Trying to polish up what we already have? We need to make sure those technicalities are sorted out – but we also need to make sure that our users get their new shiny features. Simultaneously. I believe we need to divide our attention between shiny features and groundwork. The ratio, now, is something to decide upon.
There is a fitting phrase in Polish: "trying to catch two magpies by their tails", or maybe "trying to catch two magpies' tails at once". I tried to look up an English phrase, but "spread oneself too thin", while having some inherent truth to it, isn't that funny, and anyway, I ain't butter. The magpies, though – that is exactly how I feel about what we're doing at Krita now.
It is, I believe, the best strategy for now, it’s just a tiny bit messy, and I bet I’m not the only person who’s lost. I’m sure we’ll figure it out in the end though.
Krita 4.3.0 will be the next full feature release of Krita. We've worked for a year on this new version, focusing especially on stability and performance. Many tools, like freehand painting and selections, are faster than ever. And there is a bunch of fun new features as well, many contributed by volunteers from all over the world.
The full release notes bring you all the details!
Please help improve Krita by testing this beta!
If you’re using the portable zip files, just open the zip file in Explorer and drag the folder somewhere convenient, then double-click on the krita icon in the folder. This will not impact an installed version of Krita, though it will share your settings and custom resources with your regular installed version of Krita. For reporting crashes, also get the debug symbols folder.
(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)
Note: gmic-qt is not available on macOS.
For all downloads:
Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos! With your support, we can keep the core team working on Krita full-time.

Akademy 2020 is getting closer and the KDE Community is warming up for its biggest yearly event. If you are working on topics relevant to KDE, this is your chance to present your work and ideas to the community at large.
Akademy 2020 will take place online from Friday the 4th to Friday the 11th of September 2020. Training sessions will be held on Friday the 4th of September and the talks will be held on Saturday the 5th and Sunday the 6th of September. The rest of the week (Monday - Friday) will be Birds-of-a-Feather meetings (BoFs), unconference sessions and workshops.
If you think you have something interesting to present, tell us about it. If you know of someone else who should present, encourage them to do so too.
Talk proposals on any topics relevant to the KDE Community and its technology are welcome, for example:
Don't let this list restrict your ideas though! You can submit a proposal even if it doesn't fit in this list of topics as long as it is relevant to KDE. To get an idea of talks that were accepted previously, check out the program from previous years: 2019, 2018, and 2017.
Full details can be found in the Call for Proposals.
The deadline for submissions is Sunday 14th June 2020 23:59 UTC.
Heya! This post will be ridiculously brief and simple, albeit filled with screenshots.
As usual: This is a series of blog posts explaining different ways to contribute to KDE in an easy-to-digest manner.
The purpose of this series originated from how I feel about asking users to contribute back to KDE. I firmly believe that showing users how contributing is easier than they think is more effective than simply calling them out and directing them to the correct resources; especially if, like me, said user suffers from anxiety or does not believe they are up to the task, in spite of their desire to help back.
Last time I explained how translators with a developer account have a really straightforward workflow and how the entire localization process for KDE works. I’ve also posted a little article I made some time ago on how to create a live-test environment to translate Scribus more easily, given that Scribus might become a KDE application in the future.
This post explains the process of sending your first patch to KDE. This tutorial, of course, is only useful for small patches, likely those which alter only one file, as the web interface is convenient for such cases but not when there are tons of files from the same project.
I recently learned how to do this, so I got really excited to write a tutorial about it. However, it's not worth adding to the wiki right now since the KDE infrastructure is being migrated to GitLab. So instead I'm writing this blog post. Seriously, it's so ridiculously easy that I honestly believe that if this had been promoted before, there would be many more patches coming from people who are still in the process of getting used to the terminal. Or those who simply prefer using a GUI, like me.
For more information about sending patches, please refer to https://community.kde.org/Get_Involved/development and https://community.kde.org/Infrastructure/Phabricator.
First of all, I decided what I wanted to change. If you check https://phabricator.kde.org/T12810, you’ll see the only change I’ll be doing to the selected projects is switching the contents of the GenericName field. That is, I’ll be changing a single line of code in one file of each project. I’ll start with GCompris.

GCompris is not yet on KDE Invent. So, for downloading the project, I’m going to git clone it from cgit.kde.org. While the interface is not the best, it serves its purpose and it’s lightweight.
On cgit.kde.org, I press Ctrl+F to find the project’s repository. It’s available here.
We'll need to copy the link shown in the screenshot; it's in the bottom left corner of the page. You can either select it or right-click and copy its address.

Next we'll be using a terminal command. Don't worry, it's easy to follow and minimal too. I normally like using Yakuake, and sometimes Konsole, but for this tutorial I'll be using Dolphin to keep it more GUI-friendly.
Open a Dolphin window and press F4 to open its terminal. It should open in your home directory, so you can download the repository there, but with Dolphin you can simply navigate to the folder you want and clone the repository there instead, for instance inside ~/Documents/Patches.
Please install git if you don’t yet have it on your system. This is the only thing you’ll need to install.
After navigating to the desired folder and pressing F4, download the repository with git using the previously copied link. It will look similar to the command git clone git://anongit.kde.org/gcompris.git:

The above screenshot is only for illustration purposes; I cloned mine inside my home (~/) folder.
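To make the clone step concrete, here is a minimal sketch of the same commands. Since this is just an illustration, a throwaway local repository stands in for git://anongit.kde.org/gcompris.git so it can be run anywhere, offline:

```shell
# Hypothetical stand-in for the remote: a local bare repository.
# In the real tutorial you clone git://anongit.kde.org/gcompris.git instead.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init --bare remote-gcompris.git

# The clone command itself works the same way for any URL:
git clone "$workdir/remote-gcompris.git" gcompris
ls -d "$workdir/gcompris/.git"   # the .git folder confirms the clone worked
```

The only difference in practice is the URL you pass to git clone.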
After that, you should have something like the following screenshot.

Open the desired file with e.g. Kate and make the desired change. In my case, I need to change org.kde.gcompris.desktop:

Don't forget to save your changes. Now, back in Dolphin, go to its terminal and type git diff. Yes, the same tool we used for downloading the repository is the one we use to create the patch! Git is really the only requirement for this tutorial. You should see something like this:


The original text that was removed is shown in red and prefixed with a minus; the text added in its stead is shown in green and prefixed with a plus. Just make sure the changes you made are correctly depicted in green.
For the patch, as we will see later, you can simply copy what shows up in your terminal. Another way is to create a file and send its contents instead: use git diff > filename.diff. I used the extension .diff, but it doesn't really matter; you could name it .txt or .patch, for instance. To check that everything is alright, you can open the file by right-clicking it and opening it with Kate, or run kate gcompris.diff as shown below.
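As a sketch of the whole diff step (the repository, file contents and GenericName values below are invented for illustration; the real change is the one-line edit to org.kde.gcompris.desktop described above):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "you@example.com"   # local identity just for this sandbox
git config user.name  "You"

# Invented starting content for the file we want to patch.
printf 'GenericName=Educational Suite\n' > org.kde.gcompris.desktop
git add org.kde.gcompris.desktop
git commit -qm "initial state"

# The one-line change, then the patch captured into a file:
sed -i.bak 's/Educational Suite/Educational software/' org.kde.gcompris.desktop
rm org.kde.gcompris.desktop.bak
git diff -- org.kde.gcompris.desktop > gcompris.diff

grep -- '-GenericName=Educational Suite' gcompris.diff     # removed line, shown in red
grep -- '+GenericName=Educational software' gcompris.diff  # added line, shown in green
```

The resulting gcompris.diff is exactly what you would upload or paste into Phabricator in the next steps.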

Let's now go to Phabricator. Before submitting a patch, you'll need a KDE Identity account. You don't need to have a developer account. With a KDE Identity, you can log in to Phabricator, which is located at phabricator.kde.org.
After logging into Phabricator, click on Code Review on the upper left corner of the page.

On the next window, click on Create Diff on the upper right corner.

On the next window, if you want to send the diff file, click on the Browse... button next to Raw Diff From File and select the file.

If you don’t want to send a file but instead you just want to paste the diff itself, you can do so on the Raw Diff section. Just remember: choose ONE of those two methods. Don’t send the diff through both methods.

I’ll repeat: choose ONE of those two methods. Don’t send the diff through both methods.
Also important: Don’t forget to add the correct repository name. In my case, it was this one:

On the next window, simply accept the default for creating a new revision, confirm and go to the next window. It will look similar to the following:

For the users following this tutorial from their mobile, here’s a better framed screenshot:

Fill in each field accordingly. Add a descriptive title and a short summary explaining what the patch does, and in the reviewers, repository, tags and subscribers sections, add the correct project. Just by typing the program name, Phabricator should suggest the correct project for you; click on the one you think is right. Again, in my case, it was GCompris.
After filling the fields correctly, click on Create New Revision.
And done! Your patch will look similar to this:

That’s it for sending your first patch through the web interface of Phabricator. If your patch has any issues, those should be pointed out during the review stage by the more experienced contributors responsible for the project.
The most important thing that should be done is getting in touch with the community. If you want to know more about contributing, please read this very clear wiki page on Getting Involved. There’s Matrix, Telegram and IRC for instant messaging.
Fast on the heels of the 20.04.0 release comes 20.04.0b. This bugfix release corrects:
Logbook, diary, journal, bitácora… there are many names for the same practice: writing down what you do. This practice has been used by sailors and scientists for centuries, for good reasons. Nowadays there are plenty of tools to keep track of what you do, but I haven't seen anything as powerful as a team, project or product logbook.

Five years ago I wrote an article about my first interactions with this practice, back at the beginning of my professional life, in the Canary Islands, Spain. The article describes the basics of any team or project logbook. You would benefit from reading it before continuing with this article.
I would like to provide some additional insights about the diary, together with a few tips and practices I have used throughout the years.
Why is the logbook useful
Keeping a logbook is useful for:
Who is the logbook for
Everybody would benefit from contributing to a logbook, but in some situations or environments this practice provides significant benefits:
Git based vs wiki
I always recommend using a git-based tool for the logbook. It is not just that collaboration is easier, especially for developers; it also lets them integrate the habit of writing into their workflow easily. It will also be easier to structure and visualize the information through tags. Git is especially convenient for distributed teams too, which are the ones who benefit the most from this practice in terms of alignment.
Often the diary is used by people who do not know how to use git, or for whom it is not part of their day-to-day workflow. I have had jobs in which I did not use git on a regular basis. In such cases, a wiki can be the best option. Make sure you use a wiki with conflict resolution capabilities; otherwise, the logbook will not scale. If you use a wiki that structures pages in editable sections, that might work too.
There are tools that combine the best of both worlds, like GitLab, GitHub or zim-wiki. These are my favorites.
Structure and archive
I recommend structuring the logbook per day and per user. It doesn't matter whether we are talking about a product/project diary or a team one. Other options are possible, but it is simpler to write down your entries one after another and use tags to keep open the possibility of structuring and visualizing the information later on in different ways.
To use the logbook as a reporting tool, at some point the write permissions should be removed. From that moment on, past entries should be amended in today's section, not modified under the corresponding date.
When should the write permission be removed?
I suggest doing it weekly by default. I usually lock it down the following Monday at lunch time, giving those who forgot to add their entries the Friday before, or were absent for whatever reason, the chance to add them.
Project or product logbooks might use the sprint as the iteration, especially when those sprints are two weeks long. But in general, if you have a good number of people writing in the logbook, I prefer a week-long iteration.
To archive the diary, I usually move it away from the current one. I personally prefer to avoid scrolling past many days' worth of information in the place where I write. So I use different files for past entries and current ones (that week's). If you are using a wiki, move the content to a different page in read-only mode.
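As a sketch of that weekly archiving step in a git-based logbook (the file names and layout here are my assumption, not a prescribed convention):

```shell
set -e
book=$(mktemp -d)
cd "$book"
git init -q .
git config user.email "you@example.com"   # local identity for this sandbox
git config user.name  "You"

# Current week's entries live in logbook.md.
echo "02/05/2020 entries for the current week…" > logbook.md
git add logbook.md && git commit -qm "week in progress"

# Monday lunch time: move the finished week aside and start fresh.
week=$(date +%G-W%V)                       # ISO week label, e.g. 2020-W19
mkdir -p archive
git mv logbook.md "archive/logbook-$week.md"
touch logbook.md                           # empty file for the new week
git add logbook.md
git commit -qm "Archive week $week"
```

In git the "read-only" property comes from convention and review: once archived, past entries are only amended in the current file, never edited in the archive.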
When and what to write?
As mentioned in the original article, you should include in the logbook what is…
Write down all the relevant events or any relevant information as soon as you have it. Write down what has been completed, solved, blocked… State facts and ask questions instead of writing opinions. Leave judgement for the later analysis of the logbook; keep it out of the logbook entries unless you create a specific tag for it (#self-note could be an example).
I add an entry at least every couple of hours, before taking a break, except when I invest a longer period of time in an intensive task. So I usually have four or more entries daily in my logbook. I tend to write a lot more, though. Other people write less than I do, and that is OK as long as each of us writes down the relevant information for ourselves first, then for our colleagues, and finally for others.
Include decisions and agreements, conclusions, achievements, pitfalls, mistakes, external events that have influenced your work, external references and data sources that were relevant, etc. But remember, the journal is about what you did, not about what you need or should do, what is coming or what you think. Later on, with practice, you can expand the nature of the content. Make sure you agree with the other writers on this topic; it helps to keep the logbook clean.
One special type of entry is a comment on somebody else's entry. I recommend using it at the beginning only to add information or additional context to somebody else's statement, not to provide an opinion or question. This is a journal, not a forum.
Ask yourself this question: what would I like to read about what any of my colleagues or I are doing, in 3 weeks' or 3 months' time?
How should I write?
Be concise. One or two lines maximum per entry is the best approach. Add links to the information sources – the tickets, messages, bug reports, patches, logs, web pages… where the information is really generated and stored – and provide context for them.
Use ways to shorten the links to the common tools you use at work on a daily basis. Add tags like dates, names and any other metadata that can help to contextualize and structure the information later on.
Remember, whatever you do, make it simple so adding information to the logbook is easy and fast.
Tags
One of the key elements of the journal is the capacity to structure and visualize the information later on through tags. As mentioned, the logbook is structured by date and user (two tags), but there are others you can and should use. The more experienced the team or project is with the logbook, the more tags can be used.
Warning: agree on the tags with your colleagues up front; otherwise you lose part of the capability to filter and visualize the information later on. If there is more than one logbook at your organization, agree on the tags with the other teams and projects. Define common tags.
I recommend starting with very few tags and increasing their number over time. The tags to start with might depend a little on the environment and the type of team or project. These are the ones I commonly start with:
Use these tags in entries, but especially next to the user tag, to let others know how you feel and to be able to evaluate your mood later on over different time windows. The tags are usually located at the end of the entry. There is one exception: when you are commenting on somebody else's entry, start with the user tag and some indentation. Check the example below for this case.
Different tools might have different restrictions on how to define a tag. Some have restricted keywords. I added the "#" symbol because it is a common one. For users, many tools use the "@" symbol, for instance.
Use standard emojis instead of tool-specific ones. Again, make it simple. Consider an emoji a tag.
Example
This is a simple example of what the journal could look like:
May
02/05/2020
@peter 
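Expanding that skeleton with invented users, links and tags (purely for illustration, following the conventions described earlier: concise lines, links to sources, tags at the end, comments indented and starting with the user tag):

```
May

02/05/2020

@peter Released module X 1.2; announcement drafted (link). #project-x #done
@peter Blocked: still no access to the staging server; asked @mary. #project-x #blocked
    @mary Access granted this morning, see ticket 4321.
@mary Conclusions of the weekly sync are on the wiki (link). #meeting
```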
Conclusion
Keeping team, project or product logbooks is a very useful practice. In the short term it increases engagement and alignment, in the mid term it reduces the effort spent on reporting, and in the long term it improves analysis and retrospectives. On-boarding is another area that benefits heavily from the existence of a diary.
I would like to know about your prior experience with logbooks or similar practices. If you want to try it out, feel free to contact me for questions or advice.
Following the post about what happened in KDE PIM in January and February let’s look into what the KDE PIM community has been up to in March and April. In total 38 contributors have made almost 1700 changes. Big thanks to everyone who helped us make Kontact better!
A new bundle of KDE applications has been released in April, including Kontact with its many bugfixes and improvements.
Every year in April the PIM team meets in Toulouse, France, for a weekend of discussions and hacking. This year, due to the coronavirus, it wasn't possible for us to meet, so instead we held a virtual KDE PIM Sprint. You can read the sprint agenda as well as Volker's report from the sprint.
To highlight some of the topics discussed:
KMail has received its usual dose of bugfixes, mostly those:
There were some exciting improvements to KMail as well: Sandro Knauß has implemented support for Protected Headers for Cryptographic E-Mails. This means that we also send a signed/encrypted copy of the headers, display the signed/encrypted headers if available, and ignore the insecure ones. Currently we don't obfuscate the subject, so as not to break existing workflows; those things will be improved later on. Sandro, together with Thomas Pfeiffer, got funding from NLnet to improve mail encryption, which means there will be more improvements happening in the next months. The next topic they will look at is adding Autocrypt support to KMail.
Volker has improved the look and feel of the “HTML Content” and “External References” warnings in emails.

The Libre Avatar service came back from the dead a while ago, and now the support for it in KMail has come back as well. The 'Export to PDF' feature, which we introduced in the previous report, has been polished (Daniel Vrátil, D27793).
The ‘Move To Trash’ code has been optimized so that deleting large amounts of emails should now be faster.
For developers it is now possible to open Chromium DevTools inside the message viewer pane to make it easier to debug message templates.
The Google Calendar and Google Contacts backends have been merged into a single Google Groupware resource (Igor Poboiko, D28560). The change should be mostly transparent to users, the old backends will be migrated to the new unified backend automatically after update. During this Igor also fixed various bugs and issues in the backends and the LibKGAPI library, big kudos to him!
The DAV resource is now able to synchronize the calendar color from KOrganizer to the DAV server (David Faure, D28938). Related to that, the menu to configure calendar color in KOrganizer has been simplified by removing the “Disable Color” action.
It is now easier to recognize and set the default calendar and the event editor now respects the settings correctly.

KJots, the note-taking application, which has been on life support for 5 years, has received some love recently thanks to Igor Poboiko. Most of the changes happened under the hood: some ancient dusty code has been dropped, some refactoring happened, etc. However, if you still use KJots, you might notice quite a number of changes too. And if you don't, it's a good time to consider using it :)
Igor has quite big plans for the future of KJots. First of all, more bug squashing. Secondly, the ability to store notes in Markdown format and synchronization with online services (the thoughts are about OwnCloud/NextCloud, or the proprietary Evernote). On a lesser scale, a port to the same text editing component as used by the KMail email composer is being considered, which would give KJots more text-editing features. There are also plans to add support for the inline checkboxes introduced in Qt 5.14, which would allow making checklists and TODO lists in KJots, and the ability to sort books and pages by their modification date (so the more relevant ones pop up first).
Other parts of PIM have also received bugfixes and improvements. Kleopatra, the certificate management software, now always displays GPG configuration tabs and option groups in the same order (Andrey Legayev, T6446). A bug in Akregator has been fixed that could have caused some feeds to have a missing icon (David Faure, D28581). KAlarm has received a bunch of UI improvements as well as some smaller features - for instance, it is now possible to import alarms from multiple calendars at once, and the calendar list is now sorted by name (all by David Jarvie).
Lots of work went into modernizing Akonadi, the “backend” service for Kontact. One major change was the switch to C++17, with some initial use of C++17 features internally (the public API remains C++11-compatible). Widgets for managing Tags have been improved and polished, and the deprecated ItemModel and CollectionModel have been removed.
The KIMAP library has been optimized to better handle large message sets (Daniel Vrátil, D28944). The KLDAP library can now connect to LDAP servers using SSL encryption (Tobias Junghans, D28915), alongside the existing TLS support.
Volker Krause has been working on preparing the KDAV library (which implements the DAV protocol) for inclusion in KDE Frameworks.
Laurent Montel has been working throughout the entire PIM codebase, preparing it for the port to Qt 6 once it’s available.
Take a look at some of the junior jobs that we have! They are simple, mostly programming tasks that don’t require any deep knowledge or understanding of Kontact, so anyone can work on them. Feel free to pick any task from the list and reach out to us! We’ll be happy to guide you and answer all your questions. Read more here…
This week we have some big stuff for you, including a rewritten global shortcuts settings page, an option to remember Dolphin’s window state across launches, a fix for longstanding kerning issues with centered text in QML-based software, and much more!
Have a look at https://community.kde.org/Get_Involved to discover ways to help be part of a project that really matters. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don’t have to already be a programmer, either. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!
Finally, consider making a tax-deductible donation to the KDE e.V. foundation.
In order to navigate you to and from an airport, KDE Itinerary needs to know where that airport actually is. That is a seemingly easy question, but surprisingly hard to answer with the level of precision we need. Since the recent work on public transportation line metadata left me with a local OpenStreetMap database, I tried to see if we can improve our airport coordinates a bit.
So far we use a simple mapping from airport IATA codes to a single geographic coordinate that we obtain from the airport’s Wikidata entry. That is, we have a single point somewhere in an area that is often multiple kilometers wide. Typically that’s somewhere around the center of the overall bounding box.
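To make the shape of this data concrete, here is a minimal sketch of such a lookup. The structure and coordinates are illustrative assumptions, not Itinerary’s actual data or API:

```python
# Hypothetical sketch of an IATA-code-to-coordinate lookup.
# The coordinates below are rough approximations for illustration only.
AIRPORT_COORDS = {
    "MUC": (48.3538, 11.7861),  # near Munich's terminal/entrance area
    "FRA": (50.0379, 8.5622),   # near the center of Frankfurt's bounding box
}

def airport_location(iata_code):
    """Return a (latitude, longitude) tuple, or None if the code is unknown."""
    return AIRPORT_COORDS.get(iata_code)
```

A single point per airport is exactly what makes the problems described below possible: the lookup has no notion of where within that area you can actually enter.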
In some cases, such as Munich (MUC), this happens to be exactly where we want it to be: the “entrance” of the airport, i.e. the place you have to go to enter the airport (which is again usually a much larger area than the term “entrance” would suggest).
In many cases, however, we end up with a location somewhere in the middle of the runway, and thus either with navigation instructions that end with “and now walk 1.5 km into an area you are not allowed to enter”, or worse, with the routing engine snapping to the “other side”, leading you to the opposite side of the airport.
Let’s look at Frankfurt (FRA), the following image marks a few relevant parts:
While it’s usually fairly clear for a human looking at a map where to go to enter an airport, it’s not that easy to determine this from OSM data in code. The following heuristics have proven useful:
Entrance tags on terminal building nodes. This seems like the obvious thing to look for, but unfortunately data availability varies a lot: on a large airport you can end up with hundreds of those, or none at all (or worse, just one, and that one far away from where one would expect it). When available on small airports, though, this provides a very precise position.
Terminal buildings, since after all you have to pass through those at the airport. This turns out to be rather robust on small to mid-sized airports that don’t have air-side concourses or additional inaccessible terminal buildings (e.g. for government, VIP, military or freight use). To deal with the latter, detailed tags on the terminal buildings help a lot.
Railway stations inside the airport boundaries, or at least in very close proximity to a terminal building. Those are more common on medium to large airports. In-airport inter-terminal transport systems can interfere with properly detecting railway stations though; here we again rely on detailed tagging in the OSM data.
Another aspect that helps humans to spot the entrance area is the structure of the road network leading there. That might offer additional hints, but is unfortunately much harder to deal with in code.
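The heuristics above can be combined by ranking candidates and averaging the best-ranked ones into a single point. The following sketch is my own illustration of that idea, with a made-up feature representation; the real implementation works on raw OSM tags and geometry:

```python
# Illustrative sketch of combining the entrance heuristics described above.
# The 'features' structure is hypothetical; real data comes from OSM tags.

def pick_entrance(features):
    """features: list of dicts with 'type' and 'coord' (lat, lon) keys.
    Prefer explicit entrance nodes, then railway stations near terminals,
    then terminal building centroids."""
    priority = {"entrance": 0, "station": 1, "terminal": 2}
    candidates = [f for f in features if f["type"] in priority]
    if not candidates:
        return None
    best = min(priority[f["type"]] for f in candidates)
    coords = [f["coord"] for f in candidates if priority[f["type"]] == best]
    # Average the best-ranked candidates into a single representative point.
    lat = sum(c[0] for c in coords) / len(coords)
    lon = sum(c[1] for c in coords) / len(coords)
    return (lat, lon)
```

A strict priority order like this is of course a simplification; as noted above, each signal’s reliability depends on airport size and tagging quality, so a real scoring scheme would need to weigh them against each other.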
The above approach covers a wide range of airports, and for many of them it provides a significant improvement in navigation precision. It fails, however, on some of the very large airports, namely those with multiple entrance areas, typically due to having multiple sets of terminals that are largely disconnected from each other, to the extent of even looking like two or more airports close to each other sharing the same runways.
London Heathrow (LHR) is one of the more extreme examples for this, with three distinct sets of terminals (marked with blue circles below), all with their own railway stations and access roads, and no internal connection between them.
These cases can no longer be modelled by a single coordinate per airport; here we’d need to take the respective terminal into account as well. That is possible, as we have that information in the itinerary data model. What I’m still unsure about, however, is whether we should attempt to cluster terminals automatically using the OSM data, or whether these cases are so few that a manually created table would be more efficient to build and maintain.
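For the automatic route, a simple distance-based grouping of terminal coordinates might already be enough. The following is a hypothetical sketch of that idea (greedy threshold clustering, not whatever Itinerary would actually ship), just to show how little machinery it needs:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def cluster_terminals(coords, max_km=2.0):
    """Greedy clustering sketch: a terminal joins the first existing cluster
    that has a member closer than max_km, otherwise it starts a new one."""
    clusters = []
    for lat, lon in coords:
        for cluster in clusters:
            if any(haversine_km(lat, lon, c[0], c[1]) < max_km for c in cluster):
                cluster.append((lat, lon))
                break
        else:
            clusters.append([(lat, lon)])
    return clusters
```

The threshold would need tuning per airport size, which is part of why a small hand-maintained table for the handful of Heathrow-like cases might end up being the more pragmatic option.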
Airports are particularly affected by this navigation problem: they cover very large areas, much of which is off-limits to passengers.
For train stations this applies to a much lesser extent. Besides stations being smaller, the public transport routing systems usually produce sensible results no matter which coordinate you pick within the station boundaries (as train stations are a very central concept in those systems).
That doesn’t mean everything is perfect there, the possible error scenarios are just different.
In the previous post I wrote about how you can help improve OSM and Wikidata data for public transport lines, and the same applies to airports. Verifying airport codes, tagging terminals, entrances and railway stations with all available details, and cross-linking Wikidata and OSM elements are all easy ways to contribute.