
Thursday, 24 July 2025

Intro

This week builds on the work from last week, where I started adding selection action buttons to the floating toolbar in Krita. The focus this time was on integrating more actions and improving the user interface by adding icons to those buttons.

Adding Buttons and Icons

After learning how Selection Tools are triggered through existing UI buttons, the next step was figuring out where those actions originate in the code and how to reuse them in new buttons. I also explored how to visually represent each button using Krita's icon system.

Here’s a simple example of how I added an icon to a button:

d->buttonCopyToNewLayer = new QPushButton();
d->buttonCopyToNewLayer->setIcon(KisIconUtils::loadIcon("duplicateitem"));
d->buttonCopyToNewLayer->setIconSize(QSize(25, 25));

This pattern forms the basis for a reusable template I can follow as I implement additional action buttons across the toolbar.
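
For instance, the repeated setup could be factored into a small helper along these lines (a sketch with a hypothetical function name, not actual Krita code):

// Hypothetical helper: builds a toolbar action button with a Krita icon.
QPushButton *createActionButton(const QString &iconName)
{
    QPushButton *button = new QPushButton();
    button->setIcon(KisIconUtils::loadIcon(iconName));
    button->setIconSize(QSize(25, 25));
    return button;
}

// Usage: d->buttonCopyToNewLayer = createActionButton("duplicateitem");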

Finding Icons

Icons play a huge role in usability. Much like how we can recognize cartoon characters by their silhouettes, users often identify tools in a UI by their icons. Good icons make interfaces faster to use and easier to understand.

To find appropriate icons for my buttons, I’ve been referencing these sources:

Krita’s official icon library:
scripting.krita.org/icon-library

Krita source file:
$KRITASOURCE/krita/krita.action

If I couldn’t find an icon there, I searched the codebase for related keywords or looked at how similar tools were implemented with icons. When those options are exhausted, I can also reach out to @Animtim, who helps create Krita's custom icons.

Conclusion

Buttons are most powerful when they’re not only functional but also accessible and visually intuitive. This week extended last week’s work by adding more selection actions and giving their buttons recognizable icons.

Next on my list, while I continue adding selection buttons and icons, is to make the floating selection bar movable on the canvas!

Contact

To anyone reading this, please feel free to reach out to me. I’m always open to suggestions and thoughts on how to improve as a developer and as a person. Email: ross.erosales@gmail.com Matrix: @rossr:matrix.org

Wednesday, 23 July 2025

Hello everyone! Midterm evaluations are here, and I wanted to share an update on my GSoC project. Here’s what I’ve accomplished so far:

Progress So Far

Migration of Existing Fuzz Targets

The first step was migrating the existing build scripts and fuzz targets from the OSS-Fuzz repository into the respective KDE repositories. Maintaining them within the OSS-Fuzz repo added a bit of friction when making changes. Having them in KDE repos makes it easier to maintain and update them.

KArchive Fuzzer

Then I worked on the KArchive fuzzer, making two main changes: first, I split the fuzzer into separate targets for each archive format (like zip, tar, 7z, etc.) to improve coverage; second, I added libFuzzer dictionary files to better guide the fuzzing process. Here is an image showing the coverage after these changes:

KArchive Fuzzer

This coverage was measured using a local corpus, and it is pretty solid for fuzzing just the “reading” part. The coverage will increase on OSS-Fuzz over time as the corpus keeps growing. Splitting the fuzzer into multiple targets allows each fuzzer to focus on a specific archive format, which keeps the corpus size smaller and more efficient.
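
To illustrate what such a per-format target looks like, here is a minimal read-only sketch in the spirit of the real targets (which live in the KArchive repository), not a verbatim copy of them:

#include <cstddef>
#include <cstdint>
#include <QBuffer>
#include <QByteArray>
#include <QIODevice>
#include <KZip>

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    QByteArray input(reinterpret_cast<const char *>(data), int(size));
    QBuffer buffer(&input);

    // Exercise only the "reading" path: open the archive and walk its entries.
    KZip zip(&buffer);
    if (zip.open(QIODevice::ReadOnly)) {
        zip.directory()->entries();
        zip.close();
    }
    return 0;
}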

KMime Fuzzer

After that, I focused on KMime. I created a fuzz target for it that focuses on just the MIME parsing functionality. The parsing part of KMime is critical because it handles untrusted input, such as incoming email (in KMail).
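
The core of such a target is small. Here is a minimal sketch of feeding fuzz input into KMime’s message parsing; the actual target in the KMime repository may differ in detail:

#include <cstddef>
#include <cstdint>
#include <QByteArray>
#include <KMime/Message>

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    KMime::Message message;
    // Hand the untrusted bytes to the MIME parser, the code path under test.
    message.setContent(QByteArray(reinterpret_cast<const char *>(data), int(size)));
    message.parse();
    return 0;
}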

KMime Fuzzer

For KMime, I also added a libFuzzer-style dictionary file to help guide the fuzzing process. This helps the fuzzer generate more meaningful inputs, improving coverage and reaching deeper code paths.
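
For reference, libFuzzer dictionaries are plain text files of quoted tokens. A few illustrative MIME-flavored entries (not the actual shipped dictionary) could look like this:

"Content-Type:"
"multipart/mixed"
"boundary="
"Content-Transfer-Encoding:"
"quoted-printable"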

KDE Thumbnailers Fuzzer

After KMime, I moved on to KDE thumbnailers, which are used in KDE applications to generate previews of files. This is important because they handle untrusted input from various file formats, such as images, documents, etc. KDE has a lot of thumbnailers; I started with the ones in the KIO-Extras repository, which covers various formats like images, videos, and documents.

KDE Thumbnailers were tricky to fuzz because they aren’t standalone. They depend on KIO and KIOGui, which are pretty heavy and pull in a bunch of dependencies not required for thumbnailing. Building the full KIO stack inside OSS-Fuzz would have made the build process slow and complicated.

To avoid that, I wrote a custom build script that compiles just the thumbnailer source files and their direct dependencies. That keeps the fuzzers lightweight and focused only on the thumbnailing functionality.
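
Since thumbnailers operate on file paths rather than in-memory buffers, the targets first persist the fuzz input. Here is a rough sketch against the classic ThumbCreator interface; the creator class name is hypothetical, and newer thumbnailers use the KIO::ThumbnailCreator API instead:

#include <cstddef>
#include <cstdint>
#include <QImage>
#include <QTemporaryFile>
// #include "somethumbcreator.h" // hypothetical: the one thumbnailer built into this target

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    // Thumbnailers operate on files, so write the fuzz input to a temp file.
    QTemporaryFile file;
    if (!file.open())
        return 0;
    file.write(reinterpret_cast<const char *>(data), qint64(size));
    file.flush();

    SomeThumbCreator creator; // hypothetical creator class
    QImage image;
    creator.create(file.fileName(), 256, 256, image);
    return 0;
}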

KDE Thumbnailers Fuzzer

For these thumbnailers, I also created a dictionary file for each thumbnailer separately for the same reason as KMime.

KFileMetaData Fuzzer

Lastly, I worked on KFileMetaData. This library is used to extract metadata from files such as images, videos, and documents. Like the KDE thumbnailers, it handles untrusted input from various file formats, so fuzzing it is important to ensure it handles malformed or unexpected data gracefully.

Initially, I made a single fuzzer that used the Qt plugin system to load metadata extractors and ran them based on the content’s MIME type. However, this required using dynamic libraries, which is not great for OSS-Fuzz integration. So I split the fuzzer into multiple targets, one for each extractor, and compiled them statically. This way, each fuzzer is focused on a specific extractor and doesn’t depend on dynamic linking.
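
A per-extractor target can then instantiate one statically linked extractor directly. In this sketch the extractor class and MIME type are placeholders, since the real extractor headers are internal to KFileMetaData:

#include <cstddef>
#include <cstdint>
#include <QTemporaryFile>
#include <KFileMetaData/SimpleExtractionResult>
// #include "someextractor.h" // hypothetical: the statically linked extractor

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    QTemporaryFile file;
    if (!file.open())
        return 0;
    file.write(reinterpret_cast<const char *>(data), qint64(size));
    file.flush();

    KFileMetaData::SomeExtractor extractor; // hypothetical extractor class
    KFileMetaData::SimpleExtractionResult result(
        file.fileName(), QStringLiteral("application/x-example"), // placeholder MIME type
        KFileMetaData::ExtractionResult::ExtractMetaData);
    extractor.extract(&result);
    return 0;
}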

KFileMetaData Fuzzer

The thumbnailers and KFileMetaData currently have the highest coverage among all the fuzzers I’ve created so far, which is great! Their coverage will improve and get closer to 100% as the corpus grows on OSS-Fuzz.

What’s Next

There are still many more libraries that could benefit from OSS-Fuzz integration. Here are some that I plan to work on next:

More Thumbnailers

KDE maintains a large number of thumbnailer plugins, and I intend to integrate as many of them as possible, starting with a list of candidates provided by Albert Astals Cid.

Okular Generators & QMobipocket

QMobipocket is a library used by Okular for reading .mobi files. It parses Mobipocket documents and could benefit from fuzzing to identify edge cases and potential vulnerabilities.

Okular also includes several generators responsible for rendering various document formats. While most rely on third-party libraries, a few include custom code that has not yet been fuzzed. These components may be susceptible to bugs triggered by malformed files.

Fuzzing these generators is a bit tricky, since building the full Okular application and all its dependencies would slow down the build process and make maintenance harder. To address this, I plan to build only the relevant generator source files and their minimal dependencies, similar to the approach I used for the KDE thumbnailers.

KContacts (VCard Parser)

KContacts is a KDE framework for handling contact data. It includes a VCard parser that reads .vcf files. Although the format is relatively simple, it supports multiple character encodings and codecs, making it an interesting candidate for fuzz testing.

That’s it for now. If you’re working on (or know of) a KDE library that touches untrusted input and could benefit from fuzzing, please let me know! You can reach me on Matrix or email.

Tuesday, 22 July 2025

One of the biggest things you can do for KDE (that does not involve coding) is helping us organize Akademy.

In 2026, we are organizing a special edition of Akademy to celebrate KDE's 30th birthday, and we want to make this important milestone a memorable occasion. The birthday edition of Akademy will not only bring together contributors, users, and partners, but will also reflect on three decades of community, collaboration, innovation, and Free Software.

Now is your chance to become KDE champions and help make Akademy 2026 happen! We are looking to host Akademy 2026 during June, July, August, September, or October. Download the Call for Hosts guide and submit a proposal to host Akademy in your city to akademy-proposals@kde.org by October 1, 2025.

Do not hesitate to send us your questions and concerns! We are here to help you organize a successful event, and you can reach out at any time for advice, guidance, or any assistance you may need. We will support you and help you make Akademy 2026 an event to remember.

This year there was another “Display Next Hackfest”, this time thanks to AMD organizing and hosting the event at their office in Markham, near Toronto. Just like at the previous hackfests, other compositor developers and driver developers were present, but in addition we had the color experts Charles Poynton and Keith Lee to pester with questions, which was very useful. In general, the event was very productive.

Picture of the hackfest room

We discussed a lot of things, so this is just a summary of what I personally consider most important, not exhaustive notes on every topic. You can read the full notes on Harry’s blog.

Commit Failure Feedback

Currently, when the compositor tries to commit changes to KMS and that fails, it almost always just gets -EINVAL as the response, in other words “something doesn’t work”. That’s not just annoying to debug, but it can also lead to the compositor spending a lot of time on useless atomic tests in some situations, for example when KWin tries to turn displays on: I’ve seen cases where we spend multiple seconds testing every possible configuration of outputs (while the screen is frozen!), just for the actual problem to be unfixable without turning one of the displays or some feature off.

So we discussed how to improve on that, and the conclusion was basically that we just want something: really anything is better than the current situation. We found that there’s a reserved field in the atomic ioctl, so we don’t even need a completely new ioctl; we can make that reserved field an optional pointer to another struct into which the kernel writes feedback on what exactly failed. The most basic things we agreed to start with (a rough sketch follows the list) are to return

  • some enum for common issues, like limited scanout/memory bandwidth, limited connector bandwidth, invalid API usage, things like that
  • some string with possibly driver-specific information that the compositor can log, most important for debugging problems on user systems
  • an optional array of KMS object IDs for what objects are related to the failure, for example for which connectors are hitting bandwidth limits
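
That struct could look roughly like the following; this is purely illustrative, with invented names, since the actual UAPI is still to be designed:

#include <cstdint>

// Illustrative only: a feedback struct the reserved ioctl field could point to.
struct drm_atomic_failure_feedback {
    uint32_t reason;       // enum value: scanout/connector bandwidth limit, invalid API usage, ...
    uint32_t num_objects;  // number of entries behind object_ids
    uint64_t object_ids;   // userspace pointer to related KMS object IDs
    char message[256];     // driver-specific string for the compositor to log
};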

New Backlight API

The current backlight API on Linux is that the kernel exposes one or more backlight-related files in sysfs, and userspace writes to one of them with root permissions. This requires heuristics to figure out which of the files is the correct one to use, the API can only control a single backlight, it can’t be synchronized to the image a compositor presents, the firmware may or may not do animations, the brightness curve may be linear or not, and the minimum brightness is completely undefined (sometimes it’s zero!). In summary, it’s a total mess.

Three years ago there was a proposal to add a backlight API to KMS instead, which however stalled as the author had to work on other tasks. We discussed what exactly we want from that API:

  • a backlight property per connector
  • optional additional information about the mapping of backlight values to actual luminance (min/max values, linear/nonlinear curve)
  • (where possible) no animations in the firmware
  • (where possible) atomically update the backlight with the rest of the atomic commit, so we can properly synchronize content and light level for HDR on SDR displays

Adaptive Backlight Management

ABM is a feature of some AMD hardware which reduces backlight intensity to save power and at the same time increases the contrast of colors on the screen to compensate for the reduced backlight level. This is a bit of a controversial feature: on one hand it improves battery life, but on the other it messes with the colors on your screen, which to some people just doesn’t look good, and which is really bad if you’re trying to do color-critical work.

Currently this feature is controlled through sysfs, which means power management daemons can mess up your colors without you being aware. Additionally, it would be nice to automatically turn the feature off when you’re profiling your screen with a colorimeter, and, as the feature reduces the backlight, also in very bright environments to get that extra brightness… To improve on that situation, we’ll get a KMS property to control the feature from the compositor side. I intend to make it off by default in Plasma, but users who want additional battery life will be able to easily opt into it in the display settings.

Autotests

We had three discussions about automatic tests - one about measuring power usage, one about testing KMS drivers, and one about testing compositors.

For power usage tests, we didn’t really agree on the best way to get a standardized testing framework, but we found some possible approaches that can work now - tests can be automated to some degree by using compositor-specific APIs, the remote desktop portal or OpenQA.

For testing KMS drivers, we talked a bit about how the IGT test suite is useful for testing specific cases, but it doesn’t really cover all the same bits that real-world compositors use. The tests mostly use features in a somewhat self-contained manner and are mostly written by kernel developers, so they test how the developers expect their APIs to be used, which is sometimes different from how compositors actually use them. A possible solution for that is for compositor developers to write automatic tests using their compositor, which run directly on KMS and execute some pre-programmed sequence of events, possibly requiring KMS features like color pipelines or overlay planes to be used successfully for the test to pass.

We still have to figure out the details, but if we can get compositors in DRM CI, that could help improve stability of both compositors and drivers.

Last but not least, for testing compositors’ KMS usage, we can’t really rely on our manual testing with actual hardware, and DRM CI tests will still be somewhat limited. Some compositors already use VKMS, a virtual KMS driver, for their automatic tests, and there are some pending kernel changes for configuring it to do a lot more than the rather simple and fixed setup we’ve had so far. With the new API, we’ll be able to configure it to have nearly arbitrary numbers of planes, and to add and remove connectors and even entire GPUs! I still have to wire up a KWin autotest for this, but it will be very useful both in development and for preventing regressions in releases.

Color and HDR

We of course also spent a lot of time talking about color management and HDR, and started that off with the current state of implementations. Things are looking really good!

In terms of Wayland protocols, all the important bits are in, namely the color management and color representation protocols. There may still be some smaller changes to the protocols, maybe some additions here and there, but the most important parts are done and used in practice. If you’ve read my previous blog posts, you might know that KWin’s color management story is in a really good state, but other compositors are getting there as well. While Mutter is still lacking non-HDR color management bits, it now has basic HDR support, the Cosmic compositor has some preparations for it going on under the hood, and wlroots has basic HDR support as well.

On the application side, lots of applications are working on supporting it, like Qt, GTK, Godot and Firefox, or already support it, like Mesa, mpv and gamescope. Notably, Blender currently even has Wayland-exclusive HDR support!

We had a Q&A and some discussions with Charles Poynton and Keith Lee. For details you can look at the notes, but the most important thing I took from it was that we should adapt visuals to the user’s viewing environment based on their absolute luminance too, not just the relative light levels. How to actually do that adjustment in practice isn’t entirely figured out yet though, so that will be an interesting problem to solve.

We also talked a bit about the drm color pipeline API. I won’t go into details about this one either, but I’ll talk more about it in my next blog post. The TL;DR though is that this API allows us to use many more of the color operations GPUs are capable of, to avoid compositing with shaders in more situations. I have a KWin implementation that proves the API works, and by now the API and driver implementations basically just have to be merged into the kernel.

My “Favorite”: Pageflip Timeouts

Judging by how often I come across this issue in bug triage, if you’re reading this, chances aren’t too terrible that you’ve heard of this one already, possibly even seen it yourself in the form of

kwin_wayland_drm: Pageflip timed out! This is a bug in the amdgpu kernel driver
kwin_wayland_drm: Please report this at https://gitlab.freedesktop.org/drm/amd/-/issues
kwin_wayland_drm: With the output of 'sudo dmesg' and 'journalctl --user-unit plasma-kwin_wayland --boot 0'

in your own system logs at some point. To be clear, this is just an example and it does not only affect amdgpu. I’ve seen the same with NVidia and Intel too, but as amdgpu’s GPU resets have been a lot less reliable in the past, it’s been a bigger issue for them.

Basically, pageflip timeouts are when the compositor does an atomic commit through KMS, and then waits for that to complete… forever. When this happens, the kernel literally doesn’t allow the compositor to present to the screen anymore, so the screen is completely frozen forever, which is very bad, to state the obvious.

Fixing all the individual causes of the problem hasn’t really worked out so well, and this is a bad enough situation that there should be a way out when it does happen. We discussed how to do this, and I’m happy to report that we figured out a way forward:

  • we need a new callback in KMS that tells compositors when a pageflip failed and will never arrive
  • drivers need to support resetting the display-driver bits of the GPU to recover it
  • if the driver entirely fails to recover in the absolute worst case, it should send a device wedged event, which tells the compositor it should try to reload the entire driver / device

Scheduling Atomic Commits

When presenting images to a display, compositors try to get the absolute lowest possible latency achievable, without dropping frames of course. This is tricky enough with full information about everything, but it’s even worse if information is missing.

Currently, KWin just tries to commit 1.5ms before the start of vblank. This number was figured out experimentally: on a lot of hardware, we could easily reduce latency by 500µs-1ms without any issues, and on some other hardware we’d even need more latency to never drop frames! The latter problem I’ve kind of already fixed by measuring how long each commit takes, but that only measures the CPU time, not when the hardware is actually done programming the next frame. We also don’t know the deadline; on lots of hardware it is the start of vblank, but with some drivers it may be later or earlier.

We discussed how this could be solved, and concluded that we want (a toy sketch follows the list):

  • a callback that gives us a timestamp for when the hardware finished programming the last commit
  • information on when the deadline is, relative to the start of vblank
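
With those two pieces of information, picking the commit time becomes simple arithmetic; here is that toy sketch, with invented names:

#include <chrono>
using namespace std::chrono;

// Illustrative only: schedule the commit so that hardware programming finishes
// right at the deadline, instead of using a fixed, experimentally chosen 1.5ms.
nanoseconds commitTime(nanoseconds nextVblankStart,
                       nanoseconds deadlineOffset,     // deadline relative to vblank start
                       nanoseconds lastHwProgramming,  // reported duration of the last commit
                       nanoseconds safetyMargin)
{
    return nextVblankStart + deadlineOffset - lastHwProgramming - safetyMargin;
}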

Variable Refresh Rate for the desktop

We want to use VRR to save power, not just to reduce stutter in games. However, for that we really want some additional APIs as well:

  • the compositor should be able to set a min and max refresh rate, for example for low framerate compensation (LFC) and for flicker reduction
  • the compositor should be able to do LFC instead of the driver having heuristics for that - so we need to be able to turn driver-side LFC off
  • on the Wayland side, it would be nice for video players to be able to report their preferred refresh rate, so we can set the display to a multiple of the video’s frame rate. This should be a really simple Wayland protocol, in case anyone wants to do it before I get around to it ;)
  • also on the Wayland side, a maximum desired refresh rate for games could be useful

Slow atomic commits

Amdgpu has some issues with atomic test commits being very slow, more specifically when it comes to significantly changing overlay plane state: on my desktop PC, enabling and resizing overlay planes regularly makes the test take tens of milliseconds. On my laptop it’s a lot faster for some reason, and usually not noticeable, but even there a lot of tests take 1ms or more. We may need to do multiple atomic tests per frame, especially when you move the cursor… so that’s still quite bad!

We can work around the problem in the compositor to some degree, by avoiding frequent changes to the overlay plane state - we’ll certainly try to make it good enough to leave the feature on by default. Either way though, someone ‘just’ has to optimize this on the driver side, otherwise you might still see some stutter when enabling or disabling overlay planes.

Actual Hacking

It wouldn’t be a proper hackfest without any hacking of course! Before the event, I had been working on overlay plane support for a while, and I added color pipeline support on top of my WIP code for that. In terms of KMS offloading, the only big feature still missing was underlay support… so I added that at the hackfest. It turns out that if you already have complete overlay plane support, adding underlays to the mix isn’t actually all that difficult.

The code still needs some cleaning up and not all of it is merged yet, but on my laptop it’s now basically a challenge to get KWin to not put videos and games on hardware planes, and the result is amazing efficiency for video playback and improved latency and performance for gaming in windowed mode. This is a larger topic though and deserves its own blog post soon™, so I won’t explain how it works here.

Tourist-y things

AMD invited us to go up the CN tower. It was a little bit foggy, but we still had a good view of the city:

Group photo and photos taken on the CN Tower

I also visited Niagara Falls. It was a sight to behold!

Photos of Niagara Falls

Conclusion

Thanks again to AMD for hosting the event, it was really fun. There might not be another display hackfest next year, as most of the big topics are finally nearing conclusions, but I hope to see many of the other hackfest participants again at other events :)

Using clang-format in Qt Creator

In this blog you will learn how to set up clang-format with Qt Creator for consistent, automatic code formatting, including custom style files and exclusion rules for subdirectories.

Continue reading Using clang-format in Qt Creator at basysKom GmbH.

Sunday, 20 July 2025

This is an update of my Plasma & Kate on Wayland end of 2021 post from close to 4 years ago.

Wayland, what?

For years now (or let’s say a decade), Wayland-based compositors have been promoted as the successors to the venerable X.org X11 display server.

If you want to have some more high level overview about what is different to good old X11 see the Wayland Architecture overview.

But my Xeyes…

And yes, X11 will not just vanish in the next few years, and Xwayland will allow running legacy applications for even longer without relying on the full low-level X11 stack.

But poor Xeyes will not be able to watch Wayland windows. Evil security, I want my global keylogger back!

My experience in 2025

Since my last post in 2021 I have used Wayland more or less exclusively on all my private machines.

The latest newcomer to that is an M2 MacBook Air, and even there it works just fine for me in general. For more details, here is my NixOS configuration for that machine; many thanks to the Asahi Linux team for making that feasible!

At work we have been using Arch Linux for some time, and there Wayland works just fine for my needs, too.

I must confess I stay away from NVIDIA hardware, and the AMD, Intel & Apple M2 GPUs I use work nicely with the open source drivers. Therefore, if you are an NVIDIA user, your mileage may vary.

The same applies if you are stuck on some older distribution, but as said, X11 is not gone. Before my 2021 switch I kept using X11 due to driver issues, too, and I think that will still be feasible for years.

For day-to-day tasks I face no bugs that block me, on either NixOS or Arch Linux.

For Kate itself, we had some persistent issues with bad parents of popups; these should now be fixed with the latest Frameworks 6 and an upcoming Qt 6.9.2 or later patch release. That should remove the last logging spam on the terminal about Wayland issues like the one seen below.

qt.qpa.wayland: Creating a popup with a parent, QWidgetWindow(0x559ddf4cf3c0, name="MainWindow#1Window") which does not match the current topmost grabbing popup, QWidgetWindow(0x559de00644b0, name="goWindow") With some shell surface protocols, this is not allowed. The wayland QPA plugin is currently handling it by setting the parent to the topmost grabbing popup. Note, however, that this may cause positioning errors and popups closing unxpectedly. Please fix the transient parent of the popup.

This did in principle work before, but the output was still unsettling, and in rare cases it could lead to totally misplaced popups or other issues with them.

One thing that I myself underestimated a lot in the past is that Wayland is simply a different thing than X11. It is a totally different platform in many aspects, and just because something works as wanted on X11, it might not work the same way on Wayland if it is very low-level. Thankfully a lot of people helped to fix the issues we had!

All done?

No, naturally not. One could ask: but a lot of people have worked on that for a decade, how can that be?

Wayland is, as said above, really different from X11 in design (for good reasons, we no longer live in the '80s and requirements have changed), and it is expected that this will take time given the amount of code we have.

The list of currently known significant issues looks reasonably small to me. Naturally that might look different for others; one must be transparent that some things work differently now, that some things still just do not work, and that some never will work, by design.

My personal biggest issue with Kate is still the missing way to raise my existing windows when I open a new file via the terminal shell. Everything works fine if you do that via Dolphin and Co., as Kate gets an activation token; that is at the moment just not feasible from a terminal shell.

With virtual desktops we are missing some way to pick the right window, or to decide if we need to open a new one (even if we get an activation token from Dolphin), see bug 503519.

My current KDE Plasma Wayland session

Like in the last post, let’s take a look at how my current session looks.

My current KDE Plasma on Wayland session with Kate ;=)

Try it out!

I encourage everyone to give Wayland a try if you are on an up-to-date distribution. To hunt down the last bugs, we need more adopters. The fact that more and more distributions default to Wayland naturally helps in that respect. But one must manage expectations: the currently known issues will not just disappear overnight.

Help out!

If you encounter issues, please report them as bug reports to the respective upstream projects. And naturally, any help in fixing them would be welcome, too. For Kate/KTextEditor, besides the above-mentioned window activation issue, everything that bothered me has been fixed. Naturally there will still be things that annoy others; patches welcome, scratch your own itch!

Feedback

You can provide feedback on the matching Reddit post.

Saturday, 19 July 2025

KDE’s GitLab setup has a branch naming rule that I always forget about – branch names should start with work/ if you want the server to allow you to rebase and push rebased commits (that is, only work branches can be --force pushed to).

I had to abandon and open new PRs a few times now because of this.

Something like this is easy to check on the client side with a pre-commit hook. (A pre-push hook could also be used, but I like the check to happen as early as possible.)

A hook script that checks that your branch name starts with work/YOUR_USER_NAME (I like to have the username in the branch name) is rather simple to write:

#!/bin/bash

# Only enforce the naming scheme for repositories hosted on KDE's GitLab
REPO_URL=$(git remote get-url origin)
KDE_REPO_HOST="invent.kde.org"

if [[ "${REPO_URL}" == *"${KDE_REPO_HOST}"* ]]; then

    # The current branch must match work/<username>/...
    BRANCH=$(git rev-parse --abbrev-ref HEAD)
    BRANCH_REGEX="^work/$USER/.*$"

    if ! [[ $BRANCH =~ $BRANCH_REGEX ]]; then
      echo "Your commit was rejected due to the branch name '$BRANCH', it should start with 'work/$USER'"
      exit 1
    fi

fi

It checks that the Git repository is on invent.kde.org, and if it is, it checks if the current branch follows the desired naming scheme.

KDEGitCommitHooks

But the question is where to put this script?

Saving it as .git/hooks/pre-commit in the cloned source directory would work in general, but there are two problems:

  • Manually putting it into every single cloned KDE source directory on your system would be a pain;
  • KDEGitCommitHooks, which is used by many KDE projects, will overwrite the custom pre-commit hook script you define.

The second issue stopped being a problem a few hours ago. KDEGitCommitHooks (a part of the extra-cmake-modules framework) now generates a pre-commit hook that, in addition to what it did before, executes all the custom scripts you place in the .git/hooks/pre-commit.d/ directory.

So, if a project uses KDEGitCommitHooks, you can save the aforementioned script as .git/hooks/pre-commit.d/kde-branches-should-start-with-work.sh (and make it executable) and it should be automatically executed any time you create a new commit (after KDEGitCommitHooks updates the main pre-commit hook in your project).

For projects that do not use KDEGitCommitHooks, you will need to add a pre-commit hook that executes scripts in pre-commit.d, but more on that in a moment.

Git templates

The first problem remains – putting this into a few hundred local source directories is a pain and error-prone.

Fortunately, Git allows creating a template directory structure which will be reproduced for any repository you init or clone.

I placed my template files into ~/.git_templates_global and added these two lines to ~/.gitconfig:

[init]
    templatedir = ~/.git_templates_global

I have two KDE-related hook scripts there.

The above one is saved as ~/.git_templates_global/hooks/pre-commit.d/kde-branches-should-start-with-work.

And the second file is the default main pre-commit (~/.git_templates_global/hooks/pre-commit) script:

#!/usr/bin/env bash

# If the user has custom commit hooks defined in the pre-commit.d directory,
# execute them
PRE_COMMIT_D_DIR="$(dirname "$0")/pre-commit.d"

if [ -d "$PRE_COMMIT_D_DIR" ]; then
    for PRE_COMMIT_D_HOOK in "$PRE_COMMIT_D_DIR"/*; do
        # Skip the unexpanded glob when the directory is empty
        [ -e "$PRE_COMMIT_D_HOOK" ] || continue
        "$PRE_COMMIT_D_HOOK"
        RESULT=$?
        if [ $RESULT != 0 ]; then
            echo "$PRE_COMMIT_D_HOOK returned non-zero: $RESULT, commit aborted"
            exit $RESULT
        fi
    done
fi

exit 0

It tries to run all the scripts in pre-commit.d and reports if any of them fail.

This default main pre-commit script will be used in projects that do not use KDEGitCommitHooks. In the projects that do, KDEGitCommitHooks will replace it with a script that executes everything in pre-commit.d the same way this one does, but with a few extra steps.

I’m currently backpacking in the Balkans and, considering that it’s been such a long time since I last wrote a blog post, I figured it was a good idea to write about the trip.

As I am traveling, I am also field testing KDE Itinerary and sending patches as I buy my tickets and reserve my hostels.

Ljubljana, Slovenia

My first stop was in the capital of Slovenia, Ljubljana. I went there by train from Berlin. I first took the night train to Graz and then an intercity train to Ljubljana. And while the connection was perfect and there was no delay, the whole journey took almost 19 hours.

Night Jet
Sleeping car in the night jet

I stayed only one night in Ljubljana in the party hostel Zzz and I enjoyed my time there quite a bit. Thanks to the Hostelworld app, it was super easy to find a group of fellow solo travelers to enjoy the evening with. We went to a food market and I had some delicious local pasta.

Street in Ljubljana
Castle
Food market
Another view of the food market
The food at the food market

Rijeka, Croatia

My second stop was Rijeka, the third biggest city in Croatia. I also took the train to get there. The city itself is very beautiful, and so are the beaches. But I didn’t really like that a massive sea port splits the city center from the beaches. The hostel experience was the most disappointing of the whole trip so far.

Small port of Rijeka
Small stairs going to a swimming spot in Rijeka
Sunset from the swimming spot
The sea

Split, Croatia

My next stop was Split, the second largest city in Croatia. While there is a train connection from Rijeka to Split, I decided to take the bus, as the train is significantly slower. I stayed at the Hostel Old Town.

I really had a blast in Split. The city is very old, with Greek, Roman, Byzantine, and Venetian influence. I met a cool group of American/Norwegian travelers in a hostel bar and got dragged to the Ultra festival, which was an amazing experience.

Small alleyway in the old town of Split
Old fortification
Small town square
Another old building
Picture of the festival
Group picture of the people I met at the festival
Chilling out the day after the festival in the shade of a tree

The day after, I went to a boat party, which was equally a blast.

Boat party
Boat party with the sunset
Drinking a reasonable amount of Aperol
Dancing with a stick
More dancing
Group photo at the end

Last weekend I attended the Transitous Hack Weekend in Berlin. This was the first time we ran such an event for Transitous, and with quite a few more people attending than expected, it most probably won’t be the last one either.

Transitous logo

DELFI Family & Friends Day

Immediately prior to the Transitous Hack Weekend there was the 2nd DELFI “Family & Friends Day”, for which a number of participants had been in Berlin anyway. DELFI is the entity providing the aggregated national public transport static and realtime schedule data for Germany, which is important input for Transitous.

The extent and quality of that data have room for improvement, so having many members of the community there to lobby for changes helps. And while there’s certainly awareness and willingness among the people doing the work, the complex processes and structures with many different public and private stakeholders don’t exactly yield great agility.

Transitous Hack Weekend

For the Transitous Hack Weekend Wikimedia Deutschland had kindly allowed us to use their WikiBär venue. Special thanks also to Jannis, Theo and Felix for cooking for the entire group during the weekend, which not only kept us all well fed but also made the event particularly efficient and allowed us to cover a wide range of topics in the short time, as you can see below.

A bunch of people sitting around desks with laptops, several small groups in active discussions.
Transitous Hack Weekend in progress (6 more participants not pictured). (CC0-1.0)

Topics

Legal entity

Transitous isn’t attached to any legal entity so far, which is a challenge when it comes to handling money, signing contracts or providing official data protection contacts. Eventually this needs to change.

Setting up our own foundation is of course an option, but that implies (duplicated) work and ongoing costs that similar organizations have already covered. Therefore our preferred approach would be to attach Transitous to an existing like-minded foundation.

Usage policy

Transitous so far had no official rules on who can use it, and how. As long as it was primarily used by FOSS applications, that wasn’t much of a problem, as that’s exactly what it is intended for. Even better, most of those were actively contributing to Transitous in some form.

However, we recently also got requests from non-FOSS and/or commercial users, which is not what Transitous is intended for. This is now documented here.

For everyone else nothing should really change; just please make sure you send proper client identification, e.g. in a User-Agent header.
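
For illustration, an identifying User-Agent could look like the following; the format is just a common convention, not a Transitous requirement:

User-Agent: MyTransitApp/1.2 (https://example.org/mytransitapp; contact@example.org)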

Data normalization and augmentation

There are various reasons why we might want to modify the schedule data that goes into Transitous: it can be incomplete, different data sources can name/label things inconsistently, or things can just be outright wrong. Ideally all of that gets resolved upstream, but that’s slow at best in most cases, and sometimes just not possible.

We therefore discussed ideas for a declarative pattern matching/transformation rule system to define such modifications in a way that remains maintainable when facing thousands of continuously changing datasets.

That still needs a bit more design work I think, but it would allow us to solve a number of current issues:

  • Unify route/trip names. How bad this can get can currently be observed e.g. with long distance trains in Germany and Eurostar services.
  • Normalize the mode of transport types across different sources. This would, for example, make it possible to fix Flixtrains being classified as regional train services.
  • Augment agency/operator contact information, for use in integrated upstream issue reporting.
  • Add or normalize line colors, and eventually add line, mode and operator logos.

Computing missing route shapes also fits into this, although that’s a bit special as it requires a much more elaborate computation than just modifying textual CSV cells.

Collaboration with other data aggregators

When Transitous started, it was entirely unclear whether it would ever be able to scale beyond pure station-to-station public transport routing. We are way past that point meanwhile, with more and more things being added for full intermodal door-to-door routing. Many of those imply building similar dataset catalogs as we have for the public transport schedule data.

While we could do that ourselves, there are often overlapping projects in those areas already, and our preferred solution would be to join forces instead: collect and aggregate all input data there and consume it from that single source.

This includes:

  • Availability of sharing vehicles (from GBFS feeds or converted from other sources). Looking at Citybikes for that.
  • Elevator status data, which can be crucial for wheelchair routing. Looking at accessibility.cloud for that, the same source KDE Itinerary already uses.
  • Availability of parking spaces, which then can be considered when routing with a bike or car. Looking at ParkAPI for that.
  • Realtime road traffic data and dynamic roadsign data, which is useful for road routing. There seem to be some recent developments on this from CoMaps.

We also discovered a few data sets I had no idea were even available anywhere, like live positions of (free) taxis, which could allow new routing options. (Also, lots of grey seal (Kegelrobben) data, the use of which in a transportation context eludes me so far.)

External communication

Transitous now has a Mastodon account! We’ll use that for service alerts, project updates and to share posts about related applications or events.

If you want to stay up to date on Transitous’ growing coverage and are up for a small Python coding task, a script to generate a list of coverage additions within the last week would help a lot with providing regular updates there.

We’d also like to have a blog feed on the website; that would need to be set up, but more importantly it requires a few people committing to actually producing regular long-form content.

Documentation and onboarding

With a few people attending who weren’t neck-deep in Transitous or MOTIS code already, we had valuable fresh perspectives on the documentation and onboarding experience.

Both the documentation and the usability and error handling of the tools in the import pipeline have already benefited from this, and there are more improvements yet to be integrated.

Realtime data from vehicle positions

As realtime delay/disruption data still isn’t as widely available as basic schedule data, there’s high interest in computing it from vehicle positions. That’s essentially what the “official” sources do as well, just that those can also take higher-level information about network operations, track closures, etc., as well as human intervention/planning, into account.

Vehicle position data tends to be more available, and can be obtained in various more or less creative ways:

  • Some operators publish those as GTFS-RT feeds or at least in some proprietary form for displaying on their own website.
  • Some systems send openly readable radio messages containing vehicle positions, such as AIS on ferries, ADS-B on aircraft, as well as various radio protocols for trams and buses.
  • Crowd-sourcing from within traveler apps such as Träwelling or Itinerary, which know which train you are on already anyway and have access to GPS.
  • Dedicated driver apps for e.g. community-operated services.

Lots of opportunity for fun projects here.

And more…

Other topics people worked on included:

  • Automating the collection of the hundreds of French GTFS feeds.
  • Bringing new servers into operation.
  • Improving the quality of the German national aggregated GTFS(-RT) feeds by generating them from NeTEx and Siri data ourselves.
  • MOTIS and GTFS-RT diagnostic tooling.
  • Better QA and monitoring.

And there are also the meeting notes in the wiki.

More plans and ideas

With the foundation that Transitous meanwhile provides, there are also ideas and wishes that are now coming into reach:

  • Integrate elevation data for walk/bike/wheelchair routing.
  • Support for ride sharing as an additional mode of transportation.
  • Support “temporary POIs” such as events in geocoding.
  • Showing interactive network line maps, as e.g. done by LOOM.
  • Support for localized stop names and service alerts.
  • Easily embeddable JS components for departure boards or fixed-destination routing for our events.

This is probably the most exciting part for me personally; I’m very happy to see people pushing beyond just a replacement for proprietary routing APIs :)

If you are interested in any of this, join the Transitous Matrix channel and consider joining the Open Transport Community Conference in October!