
Saturday, 7 February 2026

Transitous is a project that runs a public transport routing service aspiring to work world-wide. The biggest leaps forward in coverage happened in the beginning, when it was just a matter of finding the right URLs to download the schedules from. Most operators provide them in the GTFS format, which is also used by Google Transit and a few other apps.

However, the number of readily available GTFS schedules (so-called feeds) that we are not using yet is starting to become quite small. As is evident when comparing with Google Transit, there are still a number of feeds that are only privately shared with Google. This is not great from the standpoint of preventing monopolies, and it is also a major problem for free and open-source projects, which don’t have the resources to negotiate with each operator individually or to buy access to the data from them. Beyond that case, there is still a surprisingly large number of places in the world that do not publish any schedules in a standardized format, and that is something we can fix.

Source data comes in many shapes and forms, but the ingredients we’ll definitely need are:

  • the lines
  • stop times
  • stop locations
  • and service dates

Sometimes you can find a provider-specific API that returns the needed information, or there is open data in a non-standard format. In the worst case, it might be necessary to scrape the data out of the HTML of the website.

Some examples:

Example 1: a stop time from the API of ŽPCG (the railway in Montenegro):

{
    "ArrivalTime" : "15:49:16",
    "DepartureTime" : "15:51:16",
    "stop" : {
        "Latitude" : 42.511829,
        "Longitude" : 19.203468,
        "Name_en" : "Spuž",
        "external_country_id" : 62,
        "external_stop_id" : 31111,
        "local" : 1
    }
}

It is immediately visible that we get the stop times, for some reason with down-to-the-second precision. We also get coordinates for the location, which makes conversion to GTFS much easier. Unfortunately the coordinates from this dataset are not exactly great and can easily be off by multiple kilometers, but they nevertheless provide a rough estimate that we can improve on by matching them to OpenStreetMap.

The railway-data enthusiasts will also notice that we get a UIC country code and a stop code, which we can concatenate to get a full UIC stop identifier. We can make use of that for OSM matching later on.
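As a sketch, the concatenation could look like the following. The function name and the zero-padding rule are my own assumptions, based on the sample above (a 2-digit UIC country code followed by a 5-digit stop code):

```rust
/// Build a full UIC station identifier from the API's country and stop
/// codes. The padding widths are assumptions inferred from the sample
/// data: a 2-digit UIC country code plus a 5-digit stop code.
fn uic_identifier(external_country_id: u32, external_stop_id: u32) -> String {
    format!("{:02}{:05}", external_country_id, external_stop_id)
}

fn main() {
    // 62 is the UIC country code for Montenegro, as in the sample above.
    println!("{}", uic_identifier(62, 31111)); // prints "6231111"
}
```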

Example 2:

<ElementTrasa Ajustari="0" CodStaDest="27888" CodStaOrigine="23428" DenStaDestinatie="Ram. Budieni"
    DenStaOrigine="Târgu Jiu" Km="1207" Lungime="250" OraP="45180" OraS="45300" Rci="R" Rco="R" Restrictie="0"
    Secventa="1" StationareSecunde="0" TipOprire="N" Tonaj="500" VitezaLivret="80"/>

This example is from open data for railways in Romania. Unfortunately this one does not give us coordinates, and the fact that the fields are in abbreviated Romanian doesn’t make it easy to understand for someone like me who does not speak any vaguely related language. However, looking at the numbers, we can figure out that OraP and OraS are seconds since midnight and provide the departure and arrival times. Note that the data does not model the times at stops, but the transitions between the stops, so some more reshuffling is necessary.
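To illustrate that reshuffling (the struct and field names here are hypothetical, not the actual converter code), one way to turn the segment-based times into per-stop GTFS times:

```rust
/// Convert a seconds-since-midnight value (like OraP/OraS above) into a
/// GTFS "HH:MM:SS" time string.
fn gtfs_time(seconds: u32) -> String {
    format!("{:02}:{:02}:{:02}", seconds / 3600, (seconds % 3600) / 60, seconds % 60)
}

/// One leg between two stations, as in the <ElementTrasa> records.
struct Segment {
    origin: &'static str,
    destination: &'static str,
    departure_s: u32, // OraP: departure from origin, seconds since midnight
    arrival_s: u32,   // OraS: arrival at destination, seconds since midnight
}

/// Reshuffle segment (edge) times into per-stop (stop_name, arrival,
/// departure) tuples. The first stop has no arrival and the last has no
/// departure, so the known time is duplicated there.
fn to_stop_times(segments: &[Segment]) -> Vec<(String, String, String)> {
    let mut out = Vec::new();
    for (i, seg) in segments.iter().enumerate() {
        let arrival = if i == 0 { seg.departure_s } else { segments[i - 1].arrival_s };
        out.push((seg.origin.to_string(), gtfs_time(arrival), gtfs_time(seg.departure_s)));
    }
    if let Some(last) = segments.last() {
        out.push((last.destination.to_string(), gtfs_time(last.arrival_s), gtfs_time(last.arrival_s)));
    }
    out
}

fn main() {
    let segments = [Segment {
        origin: "Târgu Jiu",
        destination: "Ram. Budieni",
        departure_s: 45180, // OraP from the sample above
        arrival_s: 45300,   // OraS from the sample above
    }];
    for (stop, arr, dep) in to_stop_times(&segments) {
        println!("{stop}: {arr} / {dep}");
    }
}
```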

Example 3:

{
    "ArrivalTimes" : "12:35, 13:37, 14:02, 14:21, 14:52, 14:27, 15:04, 15:50, 16:14, 16:39, 17:08, 17:36, 18:55, 19:03, 19:12, 20:05, 20:33, 21:12, 21:47",
    "Classes" : "2",
    "DepartureTimes" : "12:35, 13:38, 14:03, 14:22, 15:12, 14:42, 15:29, 15:51, 16:15, 16:40, 17:33, 17:37, 18:57, 19:08, 19:13, 20:06, 20:34, 21:13, 21:47",
    "Route" : "Vilnius-Krokuva",
    "RouteStops" : "Vilnius, Kaunas, Kazlų Rūda, Marijampolė, Mockava, Trakiškė/Trakiszki, Suvalkai/Suwałki, Augustavas/Augustów, Dambrava/Dąbrowa Białostocka, Sokulka/Sokółka, Balstogė/Białystok, Balstogė (Žaliakalnio stotis) / Białystok Zielone Wzgórza, Varšuva (Rytinė stotis)/ Warszawa Wschodnia, Varšuva (Centrinė stotis)/ Warszawa Centralna, Varšuva (Vakarinė stotis)/ Warszawa Zachodnia, Opočnas (Pietinė stotis)/Opoczno Południe, Vloščova (Šiaurinė stotis)/Włoszczowa Północ, Mechuvas/Miechów, Krokuva (Pagrindinė stotis)/ Kraków Główny",
    "RunWeekdays" : "1,2,3,4,5,6,7",
    "Spaces" : "WHEELCHAIR, BICYCLE",
    "TrainNumber" : "33/141"
}

This example is from the, to my knowledge, only source of semi-machine-readable information on railway timetables in Lithuania. While JSON is straightforward to parse, this one for some reason does not use JSON arrays, but comma-separated lists. We can still work with this, of course, until a stop appears whose name contains a comma. Oh well. Once again, no coordinates are provided. While it is tempting to think this should be easier to convert than XML in Romanian, this format has some more hidden fun. If you have been to the area the line from the example operates in, you might already have noticed that the time zone changes between Lithuania and Poland. Unfortunately, there is no notice of that in the data, just arrival and departure times that suddenly jump backwards.

To work with this, we first need to match the stop names to coordinates, then figure out the time zones from that, and then convert the stop times.
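A minimal sketch of that last step follows. The function name is my own, and the offset handling is simplified: a real converter would derive each stop's UTC offset for the service date from a time-zone database after the OSM matching.

```rust
/// Shift a local "HH:MM" stop time into the feed's agency time zone.
/// `offset_min` is how far the agency zone is ahead of the stop's local
/// zone, e.g. +60 for a Polish stop in a feed using Lithuanian time.
fn normalize_time(local_hhmm: &str, offset_min: i32) -> String {
    let (h, m) = local_hhmm.trim().split_once(':').expect("expected HH:MM");
    let total = h.parse::<i32>().unwrap() * 60 + m.parse::<i32>().unwrap() + offset_min;
    // GTFS explicitly allows hour values >= 24 for trips running past
    // midnight, so there is no need to wrap around.
    format!("{:02}:{:02}:00", total / 60, total % 60)
}

fn main() {
    // The comma-separated lists split easily, as long as no stop name
    // contains a comma:
    let arrivals: Vec<&str> = "14:52, 14:27, 15:04".split(',').map(str::trim).collect();
    // "14:27" is local Polish time; shifted into Lithuanian time it no
    // longer jumps backwards relative to the 14:52 arrival before it.
    println!("{}", normalize_time(arrivals[1], 60)); // prints "15:27:00"
}
```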

Matching stations to locations

OpenStreetMap is the obvious choice for this. It is fairly easy to query the station locations, but matching the strings to the right node in OSM is not trivial. There are often variations in spelling, particularly if the data covers neighbouring countries with different languages. The data may also use latinized names while the country usually uses a different script, and so on.

Since this problem comes up repeatedly, I am slowly improving my Rust library (gtfs-generator) for this, so it can hopefully handle most of these cases automatically at some point. It aims to be very customizable, so the matching criteria need to be supplied by the library user. The following example matches a stop if it has a matching uic_ref tag, which is a strong identifier. If no node has such a tag, all nodes within a radius of 20 km are considered if their name is either a direct match, an abbreviation of the other spelling, similar enough, or shares words. The matching radius can be overridden in each query, so if nothing is known yet, the first guess can be the middle of the country with a large enough radius. As soon as one station is known, the ones appearing on the same route must be fairly close. The matching quality strongly depends on making a good guess of the distance from the previous stop, as that greatly reduces the risk of similarly named stations being mismatched.

Since OpenStreetMap provides different multi-lingual name tags, the order that these should be considered in needs to be set as well.

The code for matching will look something like this:

let mut matcher = osm::StationMatcherBuilder::new()
    .match_on(osm::MatchRule::FirstMatch(vec![
        osm::MatchRule::CustomTag {
            name: "uic_ref".to_string(),
        },
        osm::MatchRule::Both(
            Box::from(osm::MatchRule::Position {
                max_distance_m_default: 20000,
            }),
            Box::from(osm::MatchRule::FirstMatch(vec![
                osm::MatchRule::Name,
                osm::MatchRule::NameAbbreviation,
                osm::MatchRule::NameSimilar {
                    min_similarity: 0.8,
                },
                osm::MatchRule::NameSubstring,
            ])),
        ),
    ]))
    .name_tag_precedence(
        [
            "name",
            "name:ro",
            "short_name",
            "name:en",
            "alt_name",
            "alt_name:ro",
            "int_name",
        ]
        .into_iter()
        .map(ToString::to_string)
        .collect(),
    )
    .transliteration_lang(TransliterationLanguage::Bg)
    .download_stations(&["RO", "HU", "BG", "MD"])
    .unwrap();
    
// Parse input

let station = matcher.find_matching_station(&osm::StationFacts {
    name: Some(name),
    pos: Some(previous_coordinates.unwrap_or((46.13, 24.81))), // If we know nothing yet, bias to the middle of Romania, so we at least don't end up in the wrong country
    max_distance_m: match (previous_coordinates, previous_time, atime) {
        // We know the previous stop and both times: limit the search
        // to the distance reachable at a reasonable speed
        (Some(_prev_coords), Some(departure), Some(arrival)) => {
            let travel_seconds = arrival - departure;
            Some(travel_seconds * 70) // m/s
        }
        // No previous location known
        _ => Some(800000),
    },
    values: HashMap::from_iter([("uic_ref".to_string(), uic_ref.clone())]),
});

For now, until I’m somewhat certain about the API, you’ll need to depend on the git repository directly to use the OSM matching feature.

Finally writing the GTFS file

After all the ingredients are collected, the actual GTFS conversion should be fairly easy. We now need to sort the data we collected into the main categories of objects represented by GTFS: routes, trips, stops, and stop_times.

Every trip needs to have a corresponding route. The exact distinction between different routes depends on the specific transit system. In the simplest case, if multiple buses operate with the same line number on the same day, they would belong to the same route. Each time the bus operates per day, a new GTFS trip starts.

If the system does not have the concept of routes, every trip is simply its own route. This is, for example, what Deutsche Bahn does for ICE trains, where each journey has its own train number.
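As a toy sketch of this grouping (the data and the route-id scheme are made up; the real rule depends on the system being converted):

```rust
use std::collections::HashMap;

/// Group (line_number, trip_id) pairs into GTFS routes. Every trip of
/// the same line belongs to one route; a system without line numbers
/// would instead emit one route per trip.
fn group_into_routes(trips: &[(&str, &str)]) -> HashMap<String, Vec<String>> {
    let mut routes: HashMap<String, Vec<String>> = HashMap::new();
    for (line, trip_id) in trips {
        routes
            .entry(format!("route-{line}"))
            .or_default()
            .push(trip_id.to_string());
    }
    routes
}

fn main() {
    // Two departures of the same bus line become two trips on one route.
    let routes = group_into_routes(&[("10", "10-0800"), ("10", "10-0930")]);
    println!("{}", routes["route-10"].len()); // prints "2"
}
```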

The code for building the GTFS data looks somewhat like this:

    let mut gtfs = GtfsGenerator::new();

    // parse input

    gtfs.add_stop(gtfs_structures::Stop {
        id: stop_time.station_code.to_string(),
        name: Some(stop_time.station_name.to_string()),
        latitude: coordinates.as_ref().map(|(lat, _)| *lat),
        longitude: coordinates.as_ref().map(|(_, lon)| *lon),
        ..Default::default()
    })
    .unwrap();

    gtfs.add_stop_time(gtfs_structures::RawStopTime {
        trip_id: trip_id.clone(),
        arrival_time: Some(atime.unwrap_or(dtime)),
        departure_time: Some(dtime),
        stop_id: stop_time.station_code.to_string(),
        stop_sequence: stop_time.sequence,
        ..Default::default()
    })
    .unwrap();
    
    gtfs.write_to("out.gtfs.zip").unwrap();

Validating the result

A new GTFS feed is rarely perfect on the first try. I recommend running it through gtfsclean first. After all obvious issues are fixed (missing fields, broken references), you can use the validator of the French government and the canonical GTFS validator. It is worth using both, as they check for slightly different issues.

Once all critical errors reported by the validators are fixed, you can finally test the result in MOTIS. You can get a precompiled static binary from GitHub Releases. Afterwards, create a minimal config file using ./motis config out.gtfs.zip. The API on its own is not too useful for testing, so add

server:
  web_folder: /path/to/ui/directory

to the top of the file to make the web interface available.

Make sure to also enable the geocoding option, so you can search for stops.

Now all that’s needed is loading the data and starting the server:

./motis import
./motis server

You should now be able to search for your stops and find routes:

MOTIS showing a connection between Vilnius and Riga

Publishing

Once it is ready, the GTFS feed needs to be uploaded to a location that provides a stable URL even when the feed is updated. The webserver should also support the Last-Modified header, so the feed can be downloaded only when needed. A simple webserver serving a directory, like nginx or Apache, works well here, but something like Nextcloud works equally well if you already have access to an instance.

Since the converted dataset needs to be regularly updated, I recommend setting up a CI pipeline for that purpose. The free CI offered by gitlab.com and GitHub is usually good enough for that. I recommend setting up pfaedle in the pipeline, to automatically add the exact path the vehicles take based on routing on OpenStreetMap data.

Once you have a URL, you can add it to Transitous and to places where other developers can find it, like the Mobility Database.

If you are interested in some examples of datasets generated this way, check out the Mobility Database entries for LTG Link and ŽPCG. You can find a list with some examples of feeds I generate here, including the generator source code based on the gtfs-generator.

You can always ask in the Transitous Matrix channel in case you hit any roadblocks with your own GTFS-converter projects.

This is a sequel to my first blog post about my work on fixing a part of docs.kde.org: PDF generation. The documentation website generation pipeline currently depends on docs-kde-org, and this week I focused on decoupling it.

Moving away from docs-kde-org

I began by copying the script that builds the PDFs in docs-kde-org to ci-utilities as a temporary step towards independence.

I then replaced the bundled dblatex with the one present in the upstream repos, wired up the new paths, and hit build. To no one’s surprise, it failed immediately. The culprit was the custom-made kdestyle LaTeX style. As a temporary measure, I excluded it to get PDFs generating again; however, several (new) locales broke.

So the pipeline now worked, albeit only partially.

(Commit 17bbe090) (Commit 3e8f8dc9)

Trying XeTeX

Most non-ASCII languages (Chinese, Japanese, Korean, Arabic, etc.) are not supported for PDF generation because of wonky Unicode support in pdfTeX (the existing backend that converts LaTeX code to a PDF).

So I switched the backend to XeTeX to improve the font and language handling. Now I’m greeted with a new failure :D

xltxtra.sty not found

xltxtra is a package used by dblatex to process LaTeX code with XeTeX. Installing texlive-xltxtra fixed this, and almost everything built (except Turkish).

(Commit 7ede0f94)

The deprecated package rabbit hole

While installing xltxtra, I noticed that it was deprecated and mostly kept for compatibility. So I tried replacing it with modern alternatives (fontspec, realscripts, metalogo).

Back to xltxtra.sty not found

At this point I looked at the upstream dblatex code and found references to xltxtra in a LaTeX style file and in the XSL templates responsible for generating the LaTeX.

I experimented with local overrides, but the dependency was introduced during the XSLT stage so it isn’t trivial to swap out.

Clearer mental model

A very good outcome of the experiments is that I now understand the pipeline much better:

DocBook XML -> XSLT Engine -> LaTeX Source -> XeLaTeX -> PDF

The XSL files are templates for the XSLT engine that converts the DocBook sources to LaTeX, and the problem is probably in one of these templates.

Decision with mentor

After discussion, Johnny advised me to:

  • Use texlive-xltxtra for now
  • Open an upstream issue
  • Accept that fixing dblatex itself is outside our scope

So I went ahead and filed the issue https://sourceforge.net/p/dblatex/bugs/134/. Now we keep moving forward while also documenting the technical debt.

Locales experiment

I temporarily re-enabled previously excluded locales (CJK etc.) to see what happens. Unfortunately, most of them silently failed to generate. I’m still unsure about this part, and the behavior might differ in other projects, so I’ll revisit it later.

(Commit 04b698a5)

kdestyle strikes back

Now I fed the absolute path of kdestyle to dblatex (to maintain PDF output parity), which introduced a new set of errors:

  • Option clash for graphicx
  • conflicting driver options in hyperref (dvips, pdftex)

These seem to originate from the style file and are the next major thing to resolve.

(Commit 7ede0f94)

Improved Git skills

I also had a self-inflicted problem this week. My feature branch diverged from master and I merged instead of rebasing, so the history got messed up. I fixed it by:

  • dropping the merge commit
  • rebasing properly
  • force pushing the cleaned branch

Lessons learnt. Working with open source projects has really made git less intimidating.

Where things stand

Right now

  • we can build with XeTeX
  • we rely on texlive-xltxtra
  • most locales work (except Turkish of course)
  • kdestyle introduces some problems

Also, here’s a before vs. after of the generated PDFs (the end goal is to make both of them look identical).

Honestly I don’t feel like I got much accomplished but the understanding gained should make future changes much faster (and safer :D)

Dev logs log 1, log 2, log 3, log 4

Edit (09-02-2026): Added links to relevant commits based on feedback received

Last weekend I attended this year’s edition of FOSDEM in Brussels again, mostly focusing on KDE and Open Transport topics.

FOSDEM logo

KDE

As usual, KDE had a stand, this time back in the K building and right next to our friends from GNOME. Besides various stickers, t-shirts, and hoodies, a few of the rare amigurumi Konqis were also available. And of course demos of our software on laptops, mobile phones, gaming consoles, and graphics tablets.

KDE stand at FOSDEM showing several members of the KDE crew and the KDE table with phones, laptops and a drawing tablet as well as stickers and t-shirts.
KDE stand (photo by Bhushan Shah)

Several KDE contributors also appeared in the conference program, such as Aleix’s retrospective on 30 years of KDE and Albert’s talk on Okular.

Itinerary

Meeting many people who have just traveled to FOSDEM is also a great opportunity to gather feedback on Itinerary, especially with many disruptions allowing us to test various special cases.

  • Eurostar’s ticket scanners apparently can’t deal with binary ticket barcodes correctly; however, the standard UIC SSB barcode they use on their Thalys routes is exactly that. Their off-spec workaround of base64-encoding the content is now preserved by Itinerary.
  • With DB’s API being blocked increasingly often from other countries, we now regularly end up with data mixed from different sources. That exposed some issues with merging data with a different number of intermediate stops, e.g. due to ÖBB’s API listing border points as well.

Open Transport Community

For the fourth time, FOSDEM had a dedicated Railways and Open Transport track. It’s great to see this continue to evolve, with policymakers and regulators now not just attending but actively involved. And not just at FOSDEM: I’m very happy to see community members being consulted and involved in legislation and standardization processes that were largely inaccessible to us until not too long ago.

Photo of the opening of the Railway and Open Transport track, with the announcement of the Open Transport Community Conference 2026 on the projector.
Railway and Open Transport track opening.

Just in time for FOSDEM, we also got a commitment for a venue for the next iteration of the Open Transport Community Conference, in October in Bern, Switzerland, at the SBB headquarters. More on that in a future post.

Transitous

At FOSDEM 2024, Transitous got started. What we have there today goes far beyond what seemed achievable back then, just two years ago. And it’s being put to use: the Transitous website lists a dozen applications built on top of it, and quite a few of the talks in the Railways and Open Transport track at FOSDEM referenced it.

And more

The above doesn’t capture all of FOSDEM of course, as every couple of meters you run into somebody else to talk to, so I also got to discuss new developments on standardization around semantic annotations in emails, OSM indoor mapping or new map vector tile formats, among other things.

It’s been a few months since I last blogged about KDE Linux, KDE’s operating system of the future, so I thought it was time to fill people in on recent goings-on. It hasn’t been quiet, that’s for sure:

Project health is looking good

KDE Linux hit its alpha release milestone last September, encompassing basic usability for developers, internal QA people, and technical contributors. Our marketing-speak goal was “The best alpha release you’ve ever used”.

I’d say it’s been a success, with folks in the KDE ecosystem starting to use and contribute to the project. A few months ago, most commits in KDE Linux were made by just 2 or 3 of us; more recently, there’s a healthy diversity of contributors. Check out the last few days of commits:

The next step is working towards a beta release. This is something we can consider the equal of other traditional Linux OSs focused on traditional Linux users: the people who are slightly to fairly technical and computer-literate, but not necessarily developers. Solidly “two-dots-in-computers” users. We’re 62% of the way there as of the time of writing.

First public Q&A and development call

KDE Linux developers held their first public meeting today! The notes can be found here. This is the first of many, and these meetings will of course be open to all.

In this first meeting, devs fielded questions from technical users and discussed a number of open topics, coming to actionable conclusions on several of them. The vibe was really good.

If you want to know when the next meeting will be held, watch this space for a poll!

Delta updates enabled by default

After months of testing by many contributors, we turned on delta updates.

Delta updates increase update speed substantially by calculating the difference between the OS build you have and the one you’re updating to, only downloading that difference, and then applying it like a patch to build the new OS image.

As a result, each OS update should consume closer to 1-2 GB of network bandwidth, down from the current 7 GB (this is if you’re updating daily; longer intervals between updates will result in larger deltas). Still a lot, but now we have a mechanism for reducing the delta between builds even more.

This wonderful system was built by Harald Sitter. Thanks, Harald!

Integrating plasma-setup and plasma-login-manager

KDE Linux now delegates most first-user setup tasks to plasma-setup:

plasma-setup supports the use case of buying a device with KDE Plasma pre-installed where the user is expected to create a user account as part of the initial setup.

Thanks very much to Kristen McWilliam, not only for taking the lead on developing plasma-setup, but also for integrating it into KDE Linux!

In addition, KDE Linux now uses plasma-login-manager instead of SDDM. This is a modern login manager intended to integrate more deeply with Plasma, for operating systems that want that and use systemd (like KDE Linux does). Development was done primarily by David Edmundson and Oliver Beard, with assistance from Nicolas Fella, Harald Sitter, and Neal Gompa. KDE Linux integration work was done by Thomas Duckworth and Harald Sitter.

KDE Linux has been a superb test-bed for developing and integrating these new Plasma components, and now other operating systems get to benefit from them, too!

Better hardware support

As an operating system built for users bringing their own hardware, KDE Linux is fairly liberal about the drivers and hardware support packages that it includes.

Compared to the initial alpha release last September, the latest builds of KDE Linux include better support for scanners, fancy drawing tablets, Bluetooth file sharing, Android devices, Razer keyboards and mice, Logitech keyboards and mice, fancy many-button mice of all kinds, LVM-formatted disks, exFAT and XFS-formatted disks, audio CDs, Yubikeys, smart cards, virtual cameras (e.g. using your phone as one), USB Wi-Fi dongles with built-in flash storage, certain fancy professional audio devices, and Vulkan support on certain GPUs. Phew, that’s a lot!

Thanks to everyone who reported these issues, and to Hadi Chokr, Akseli Lahtinen, Thomas Duckworth, Fabio Bas, Federico Damián Schonborn, Giuseppe Calà, Andrew Gigena, and others who fixed them!

There’s still more to do. KDE Linux regularly receives bug reports from people saying their devices aren’t supported as well as they could be, or at all — especially older printers, and newer laptops from Apple and Microsoft. No huge surprises here, I guess! But still, it’s a big topic.

Better performance

Thomas Duckworth, Hadi Chokr, and I dug into performance and efficiency, improving the configuration of the kernel and various middleware layers like PulseAudio and PipeWire. These changes include using the Zen kernel, tuning kernel performance, increasing various internal limits, and optimizing for low-latency audio.

Thanks very much to the CachyOS folks who blazed many of these trails, and whose config files we learned from.

Quieter boot process

Previously, the OS image chooser was shown on every boot. This is good for safety, but a waste of time and an unnecessary exposure of technical details in other cases.

Thomas Duckworth hid the boot menu by default, but made it show up if you mash the spacebar, or if the computer was force-restarted, or restarted normally very quickly after login. These are symptoms of instability; in those cases we show the OS image chooser on the next boot so you can roll back to an older OS version if needed.

Appropriately-set wireless regulatory domain

Different countries have different regulations regarding wireless hardware’s maximum transmit power. If you don’t tell the kernel what country your computer is located in, it will default to the lowest transmit power allowed anywhere in the world! This can reduce your Wi-Fi performance.

Thanks to Thomas Duckworth, KDE Linux now sets the wireless regulatory domain appropriately, looking it up from your time zone, and letting your hardware use all the power it legally can. It updates the value if you change the time zone, too! And also thanks to Neal Gompa for building the tool we integrated into KDE Linux for this.

The idea for this one came from reading CachyOS docs asking users to do it manually. Maybe we have something worth copying now!

RAR support

Hadi Chokr added RAR support to our builds of Ark, KDE’s un-archiver. Now you can keep on modding your old games!

“Command not found” handler

I built a simple “command not found” handler that tries its best to steer people in the right direction when they run a command that isn’t available on KDE Linux:

Better Zsh config

KDE Linux now includes a default Zsh config, and it’s been refined over time by multiple people who clearly love their Zsh!

Thank you to Thomas Duckworth, Clément Villemur, and Daniele for this work.

Documentation moved to a more official location

KDE Linux documentation was wiki-based for the past year and a half, and benefited from the sort of organic growth easily possible there. However, it’s now found a more permanent and professional-looking home: https://kde.org/linux/docs.

This will be kept up to date and expanded over time just like the old wiki docs — which now point at the new locations. This work was done by me.

Easy setup for KDE development

KDE developers are a major target audience of KDE Linux. To that end, I wrote some setup tools that make it really easy for people to get started with KDE development. It’s all documented here; basically just run set-up-system-development in a terminal window and you’re ready! The tool will even tell you what to do next.

Saying hello to KCalc, Qrca, Kup, and new CLI tools

KDE Linux includes an intentionally minimal set of GUI apps, leaning on users to discover apps themselves — and if that sucks, we need to fix it. But we decided that a calculator app made sense to include by default. After much hemming and hawing between KCalc and Kalk (it was a tough call!), we eventually settled on KCalc, and now it’s pre-installed.

We’re also now including Qrca, a QR code scanner app. This supports the Network widget’s “scan QR code to connect to network” feature:

Next up is KDE’s Kup backup program for off-device backups! Kup is not nearly as popular as it should be, and I hope more exposure helps to get it additional development attention, too.

Finally, we pre-installed some useful command-line debugging and administration tools, including kdialog, lshw, drm_info, cpupower, turbostat, plocate, fzf, and various Btrfs maintenance tools.

This work was done by me, Ryan Brue, Kristen McWilliam, and Akseli Lahtinen.

Waving goodbye to Snap, Homebrew, Kate, Icon Explorer, Elisa, and iwd

Since the beginning, KDE Linux included Snap as part of an “all of the above” approach to getting software.

Snap works fine (in fact, better than Flatpak in some ways), but came with a big problem for us: It’s only available in the Arch User Repository (AUR). Getting software from AUR isn’t great, and we’ve been moving away from it, with an explicit goal of not using AUR at all by the time we complete our beta release.

Conversations with Arch folks revealed that there was no practical path to moving Snap out of AUR and into Arch Linux’s main repos, and we didn’t fancy building such a large and complex part of the system ourselves. So unfortunately that meant it had to go. We’re now all-in on Flatpak.

Homebrew was another solution for getting software not available in Discover, especially technical software libraries needed for software development. We never pre-installed Homebrew, but we did officially document and recommend it. However the problem of Homebrew-installed packages overriding system libraries was worse than we originally thought; there were reports of crashing and “doesn’t boot” issues not resolvable by rolling back the OS image, because Homebrew installs stuff in your home folder rather than a systemwide location. Accordingly, we’ve removed our recommendation, replacing it with a warning against using Homebrew in our documentation. Use Distrobox until we come up with something more suitable.

Another removal was Kate. Kate is amazing, but we already pre-install KWrite, and the two apps overlap significantly in functionality. Eventually we reasoned that it made sense to only pre-install KWrite as a general text editor and keep Kate as an optional thing for experts who need it.

We also removed Icon Explorer from the base image because developers who need it can now get a Flatpak build of it from Flathub.

Next up was Elisa. Local music library manager apps are not very popular these days, and the pre-installed Haruna app can already play audio files. So out it went, I’m afraid. Anyone who uses it (like I do!) can of course manually install it, no problem.

And finally, the iwd wireless daemon leaves KDE Linux. It was never enabled by default; it was just an option for those who needed it. And the one user who did need it eventually found a better solution to their wireless card issues. With news of Intel disinvesting in iwd, we decided it didn’t have a sunny future in KDE Linux anymore and removed it.

This work was done by me.

And lots more

These are just the larger user-facing changes. Tons of smaller and more technical changes were merged as well. It’s a fairly busy project.

You can use it!

It’s also not a theoretical project; KDE Linux is released and I typed this blog post on it! I’ve developed Plasma on it and run a business on it, too. It’s been my daily driver since last August.

You can probably install KDE Linux on your computer too, and become a part of the future. Even if you’re worried about using alpha software because you’re not a software developer or a mega nerd, it’s perfect for a secondary computer. KDE Linux is quite stable, and the OS rollback functionality reduces risk even more.

You can help build it!

If any of this is exciting, come help us build it! Working on KDE Linux is pretty easy, and there’s lots of support.

Welcome to a new issue of This Week in Plasma!

This week the Plasma team continued polishing up Plasma 6.6 for release in a week and a half. With that being taken care of, a lot of fantastic contributions rolled in on other diverse subjects, adding cool features and improving user interfaces. Check ’em out here:

Notable New Features

Plasma 6.7.0

The Window List widget now supports sorting and shows section headers in its full view, making it easier to navigate windows by virtual desktops, activities, or alphabetically. (Shubham Arora, plasma-desktop MR #3434)

The Window List widget showing off its sorting and grouping capabilities

Notable UI Improvements

Plasma 6.6.0

System Settings’ Touchscreen Gestures page now hides itself when there are no touchscreens. This completes the project to hide all inapplicable hardware pages! (Alexander Wilms and Kai Uwe Broulik, KDE Bugzilla #492718 and systemsettings MR #391)

The “Enable Bluetooth” switch in the Bluetooth widget no longer randomly displays a blue outline on its handle even when not clearly focused. (Christoph Wolk, KDE Bugzilla #515243)

Plasma 6.7.0

Plasma’s window manager now remembers tiling padding per screen. (Tobias Fella, KDE Bugzilla #488138)

The wallpaper selection dialog now starts in the location you navigated to the last time you used it. (Sangam Pratap Singh, KDE Bugzilla #389554)

Theme previews on System Settings’ cursor settings page now scale better when using massive cursor sizes. (Kai Uwe Broulik, plasma-workspace MR #6244)

Update items on Discover’s updates page now have better layout and alignment. (Nate Graham, discover MR #1252)

Completed the project to make the delete buttons on System Settings’ theme chooser pages consistent. (Sam Crawford, plasma-desktop MR #3506, sddm-kcm MR #101, and plymouth-kcm MR #47)

The System Tray icon that Discover uses to represent an in-progress automatic update now looks a lot more like the other update icons, and a lot less like Nextcloud’s icon. (Kai Uwe Broulik, discover MR #1258 and breeze-icons MR #526)

Notable Bug Fixes

Plasma 6.5.6

It’s no longer possible to accidentally close the “Keep display configuration?” confirmation dialog by panic-clicking, unintentionally keeping bad settings instead of reverting them. (Nate Graham, kscreen MR #460)

Fixed a regression in sRGB ICC profile parsing that reduced color accuracy. (Xaver Hugl, KDE Bugzilla #513691)

3rd-party wallpaper plugins that include translations now show that translated text as expected. (Luis Bocanegra, KDE Bugzilla #501400)

Plasma 6.6.0

Fixed multiple significant issues on the lock screen that could be encountered with fingerprint authentication enabled: one that could break fingerprint unlocking, and another that could leave you with an “Unlock” button that did nothing when clicked. (David Edmundson, KDE Bugzilla #506567 and KDE Bugzilla #484363)

Fixed a Plasma crash caused by applying a global theme that includes a malformed layout script. (Marco Martin, KDE Bugzilla #515385)

Panel tooltips no longer inappropriately respect certain window placement policies on Wayland. (Tobias Fella, KDE Bugzilla #514820)

User-created global shortcuts are now always categorized as “Applications”, resolving an issue whereby apps added by choosing an executable using the file picker dialog would be inappropriately categorized as system services and couldn’t be edited or deleted. (Tobias Fella, KDE Bugzilla #513565)

Fixed two issues with recent files and folders in the Kickoff application launcher: now it shows the correct file type icons for items, and no longer sometimes shows a weird duplicate “files” section. (Christoph Wolk, KDE Bugzilla #496179 and KDE Bugzilla #501903)

Spectacle now shows the correct resolution in its tooltip for rectangular region screenshots when using a fractional scale factor on a single screen. (Noah Davis, KDE Bugzilla #488034)

The “Open With” dialog now filters its view properly when opened from a Flatpak app. (David Redondo, KDE Bugzilla #506513)

Keyboard focus no longer gets stuck in the Search widget after its search results appear. (Christoph Wolk, KDE Bugzilla #506505)

Frameworks 6.23

Fixed a complicated issue that could sometimes break automatic KWallet wallet unlocking on login. (Bosco Robinson, KDE Bugzilla #509680)

Fixed a visual regression with certain item lists that made the last one touch the bottom of its view or popup. (Marco Martin, KDE Bugzilla #513459)

Notable in Performance & Technical

Frameworks 6.23

Reduced KRunner’s maximum memory usage while file searching is enabled. (Stefan Brüns, KDE Bugzilla #505838)

How You Can Help

KDE has become important in the world, and your time and contributions have helped us get there. As we grow, we need your support to keep KDE sustainable.

Would you like to help put together this weekly report? Introduce yourself in the Matrix room and join the team!

Beyond that, you can help KDE by directly getting involved in any other projects. Donating time is actually more impactful than donating money. Each contributor makes a huge difference in KDE — you are not a number or a cog in a machine! You don’t have to be a programmer, either; many other opportunities exist.

You can also help out by making a donation! This helps cover operational costs, salaries, and travel expenses for contributors, and in general just keeps KDE bringing Free Software to the world.

To get a new Plasma feature or a bugfix mentioned here

Push a commit to the relevant merge request on invent.kde.org.

Friday, 6 February 2026

KDE Eco is an ongoing initiative taken by the KDE community to support the development and adoption of sustainable Free & Open Source Software worldwide, and I am happy to be able to contribute to this mission as part of Season of KDE 2026.

As part of this initiative, KDE aims to measure the energy consumption of its software and eco-certify applications with the Blue Angel ecolabel. To make this possible, KEcoLab, a dedicated lab based in Berlin, provides remote access to energy-measurement hardware via a GitLab CI/CD pipeline, following the guide documented in the KDE Eco Handbook.

To measure the energy usage of a piece of software, we need to prepare three scripts, baseline.sh, idle.sh, and sus.sh, and push them via an MR to this repository. The SUS (Standard User Scenario) script emulates a standard user in an automated manner, without human intervention; more on how to prepare a SUS can be found here.

Currently, these scripts rely on xdotool to simulate user interactions. However, xdotool does not work on Wayland. Since the KEcoLab computer has recently migrated to Fedora 43, which uses Wayland by default, the existing scripts no longer work. To solve this problem, I am working with my mentors Joseph, Aakarsh, and Karanjot to port the existing test scripts from xdotool to ydotool and kdotool, which are compatible with Wayland. As part of this effort, I’ve written a guide explaining how to use ydotool and kdotool more easily.

Set up ydotool and kdotool #

To test the scripts you have written, it’s recommended to set up both tools locally.

Note: Some features change and some are removed with updates, so it’s recommended to build both tools from source as described here

ydotool #

  1. Clone the repository and navigate into it
 git clone -b feat/setup-script https://invent.kde.org/neogg/ydotool.git
 cd ydotool 
  2. Run the setup script
chmod +x setup.sh
./setup.sh install
  3. Give ydotoold permission to uinput so it can run without root
echo 'KERNEL=="uinput", MODE="0660", GROUP="input"' | sudo tee /etc/udev/rules.d/99-uinput.rules > /dev/null
sudo usermod -aG input "$USER"
sudo usermod -aG input "$USER"

sudo udevadm control --reload-rules
sudo udevadm trigger

Note: A reboot (or full logout/login) is required for the input group change to take effect.

  4. Reboot your computer and run
systemctl --user start ydotoold

Verify that you can use ydotool by running ydotool type "Hello"; you will see Hello typed if the installation was successful.

kdotool #

  1. Clone the repository and navigate into it
 git clone -b feat/setup-script https://invent.kde.org/neogg/kdotool.git
 cd kdotool 
  2. Run the setup script
chmod +x setup.sh
./setup.sh install

Verify that you can use kdotool by running kdotool -h

How to use ydotool and kdotool to perform various actions #

Press keys #

For pressing keys we use ydotool. The syntax for pressing keys looks a little confusing at first:

ydotool key KEY_CODE:ACTION

KEY_CODE

Linux uses a header file named input-event-codes.h that defines numeric codes for everything an input device can generate. Simply put, for every type of keypress there is a numerical representation. For example, the LEFTCTRL key is represented by 29, and the letter T is represented by 20. To find the code for the key you want to press, you can read the original .h file with nano /usr/include/linux/input-event-codes.h if you are on a Linux system (I hope you are using one), or inspect the file here

Since the header file is quite long, I use grep to find the code for the key I want: grep KEY_LEFTCTRL /usr/include/linux/input-event-codes.h and so on.
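This lookup can be wrapped in a small convenience function. The name keycode and the sample file below are my own invention, not part of ydotool; by default the function reads the usual kernel header location, and an alternate file can be passed for testing.

```shell
# Hypothetical helper (my own naming, not part of ydotool): look up a
# key code by name. Reads the kernel header by default; an alternate
# file can be passed as the second argument.
keycode() {
    file="${2:-/usr/include/linux/input-event-codes.h}"
    # -w avoids matching longer names, e.g. KEY_T vs. KEY_TAB
    grep -w "KEY_$1" "$file" | awk '{print $3}'
}

# A short excerpt of input-event-codes.h, so the logic can be tried
# without the real header installed:
cat > /tmp/iec-sample.h <<'EOF'
#define KEY_TAB 15
#define KEY_T 20
#define KEY_LEFTCTRL 29
EOF

keycode LEFTCTRL /tmp/iec-sample.h   # prints 29
keycode T /tmp/iec-sample.h          # prints 20
```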

ACTION

  • 1 : key is pressed
  • 0 : key is released

So to press and release Ctrl you can use the following command

ydotool key 29:1 29:0

Press key combinations #

Key combinations work the same as above; you just need to keep all the keys pressed. So to press Ctrl+Alt+T, which opens up a terminal, you can use the following command

ydotool key 29:1 56:1 20:1 29:0 56:0 20:0

This translates to Ctrl(pressed) Alt(pressed) T(pressed) Ctrl(released) Alt(released) T(released)

This will hopefully open up a terminal.
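To avoid writing the press/release sequence by hand every time, the argument list can be generated. combo_args is a hypothetical helper of my own; it only builds the string, so you can inspect it before passing it to ydotool key.

```shell
# Hypothetical helper (my own naming): build the press/release argument
# list for a key combination from raw key codes.
combo_args() {
    press=""; release=""
    for code in "$@"; do
        press="$press $code:1"      # press each key in order
        release="$release $code:0"  # then release them in the same order
    done
    # Trim the leading space on the press list and join both halves
    printf '%s%s\n' "${press# }" "$release"
}

combo_args 29 56 20   # prints: 29:1 56:1 20:1 29:0 56:0 20:0
# To actually send the combination: ydotool key $(combo_args 29 56 20)
```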

Click ( left, right, double) #

For performing mouse clicks we can use the click command in ydotool. If, unlike me, you do not like codes, I am sorry, but again we have codes…

ydotool click OPTIONS BUTTON_CODE

OPTIONS

--repeat=N : can be used to press a button N times
--P : can be used to press and release a button, as that’s what we mostly do

BUTTON_CODE

This is a hexadecimal value that represents different mouse buttons (left, right, middle, etc.).
Yes, it looks ugly at first, but you can always open up my blog.

ydotool click 0xC0 # left click
ydotool click 0xC1 # right click
ydotool click 0xC2 # middle button click (scroll button)

ydotool click 0x40 # left button down (press)
ydotool click 0x80 # left button up (release)

ydotool click 0x41 # right button down (press)
ydotool click 0x81 # right button up (release)

And some combinations. These can be useful to select text, drag and drop, etc.

ydotool click 0x40 # left down 
ydotool mousemove 300 0 # drag right 
ydotool mousemove 0 200 # drag down 
ydotool click 0x80 # left up

The same as above, but more reliable: the delays simulate a real user

ydotool click 0x40
ydotool mousemove -D 20 300 0
ydotool mousemove -D 20 0 200
ydotool click 0x80

Type text #

Typing text is pretty simple using ydotool

ydotool type "string-you-want-to-type"

Focus a specific app #

kdotool search --name "Window Title" windowactivate
kdotool search --class appname windowactivate
kdotool search --classname org.kde.app windowactivate   # for KDE apps

Send mouse and keyboard commands to a specific window #

Focus the window using kdotool, then execute commands using ydotool, since commands are always executed in the active window.

Get current mouse location #

We can use kdotool to get the current mouse location; it also returns the windowID along with the x and y coordinates of the cursor

kdotool getmouselocation

The output:

x:42 y:96 screen:0 window:{e8aeec46-5c45-42cc-9839-8ad2edcb7f4f}

If you only want to extract the x and y values, since those are what matter most, you can do that using a regex

# Read mouse location
loc="$(kdotool getmouselocation)"

# Extract x and y
x=$(echo "$loc" | sed -n 's/.*x:\([0-9-]*\).*/\1/p')
y=$(echo "$loc" | sed -n 's/.*y:\([0-9-]*\).*/\1/p')
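The extraction can be checked without a live session by running the same sed commands against a sample line (copied from the output shown above):

```shell
# Sample kdotool getmouselocation output, copied from above, so the
# extraction can be tested without a running Plasma session.
loc='x:42 y:96 screen:0 window:{e8aeec46-5c45-42cc-9839-8ad2edcb7f4f}'

# Same sed patterns as in the text above
x=$(echo "$loc" | sed -n 's/.*x:\([0-9-]*\).*/\1/p')
y=$(echo "$loc" | sed -n 's/.*y:\([0-9-]*\).*/\1/p')

echo "x=$x y=$y"   # prints: x=42 y=96
```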

Move mouse to a specific coordinate #

Note: We can use ydotool to move the mouse, but that may not give you any visual feedback if you are using it in a VM; that is, you cannot see the actual mouse cursor move when the commands are executed. You can, however, use kdotool getmouselocation to see how the mouse location gets updated.

There is a command in ydotool

ydotool mousemove --absolute -x x-value -y y-value

but that does not work properly on Wayland.

There is another command that moves the mouse relative to the current mouse location, so we can use that instead.

ydotool mousemove -x x-value -y y-value
  1. mousemove -9999 -9999 (moves the mouse to the top left, 0,0)
  2. mousemove x y (now we move relative to 0,0, which behaves like an absolute move)
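The two-step trick can be wrapped in a function. move_abs here is my own sketch, not a ydotool feature; it prints the commands instead of running them (a dry run), so the sequence can be inspected. Pipe the output to sh, or drop the echos, to execute for real.

```shell
# Hypothetical wrapper (my own naming): emulate an absolute mouse move
# on Wayland. Printed as a dry run so the sequence is visible.
move_abs() {
    echo "ydotool mousemove -x -9999 -y -9999"   # clamp the cursor to (0,0)
    echo "ydotool mousemove -x $1 -y $2"         # relative move now == absolute
}

move_abs 500 300
# prints:
# ydotool mousemove -x -9999 -y -9999
# ydotool mousemove -x 500 -y 300
```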

Move mouse inside a specific window #

Since we cannot directly send mouse events to a window on Wayland, we rely on window geometry that we can get using kdotool and do relative mouse movement using ydotool.

The steps are:

  1. Focus the window
  2. Get the window geometry
  3. Reset the mouse to (0,0)
  4. Move the mouse relatively using the window coordinates
# First, focus the window:
kdotool search --class firefox windowactivate

# Get the window geometry:
kdotool search --class firefox getwindowgeometry
# Example output:
# Window {a24d3d34-85f6-4915-821b-54a71a959f6a}
# Position: 340.23748404968967,68.49159882293117
# Geometry: 768x864


#Reset the mouse position to the top-left corner of the screen:
ydotool mousemove -x -9999 -y -9999

# Move the mouse to the Position you got from the command above (the top-left corner of the window)

# Now move the mouse inside the window (top-left corner of the window):
ydotool mousemove -x 120 -y 80

# At this point, the mouse cursor is inside the target window, so you can move within it, but only once; afterwards, repeat the steps above (I will think of a script to automate this)
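Parsing the geometry output is the fiddly part. Here is one way to do it, run against the sample output shown above; the sed patterns are my own sketch, and the fractional part of the position is simply truncated since shell arithmetic needs integers.

```shell
# Sample getwindowgeometry output, copied from the example above:
geom='Window {a24d3d34-85f6-4915-821b-54a71a959f6a}
Position: 340.23748404968967,68.49159882293117
Geometry: 768x864'

# Extract the integer parts (truncating the fractional position):
win_x=$(printf '%s\n' "$geom" | sed -n 's/Position: \([0-9]*\)\..*/\1/p')
win_y=$(printf '%s\n' "$geom" | sed -n 's/Position: .*,\([0-9]*\)\..*/\1/p')
win_w=$(printf '%s\n' "$geom" | sed -n 's/Geometry: \([0-9]*\)x.*/\1/p')
win_h=$(printf '%s\n' "$geom" | sed -n 's/Geometry: [0-9]*x\([0-9]*\)$/\1/p')

echo "$win_x $win_y ${win_w}x${win_h}"   # prints: 340 68 768x864

# With these values, the reset-then-relative-move becomes:
# ydotool mousemove -x -9999 -y -9999
# ydotool mousemove -x "$win_x" -y "$win_y"
```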

Move mouse to the center of a window #

Moving to the center of a window uses the same idea, but instead of moving to x and y, we add half the width and height.

From the geometry we extract: x=120 y=80 width=1280 height=800

Calculate the center:

center_x = x + width / 2

center_y = y + height / 2

Reset the mouse again:

ydotool mousemove -x -9999 -y -9999

Move the mouse to the center of the window:

ydotool mousemove -x center_x -y center_y
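Putting the arithmetic into shell, with the example values from above (plain integer arithmetic is fine for pixel coordinates):

```shell
# Example window geometry from the text above:
x=120; y=80; width=1280; height=800

# Integer arithmetic is sufficient for pixel coordinates
center_x=$((x + width / 2))
center_y=$((y + height / 2))

echo "$center_x $center_y"   # prints: 760 480

# Then reset and move, as described above:
# ydotool mousemove -x -9999 -y -9999
# ydotool mousemove -x "$center_x" -y "$center_y"
```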

This completes all the basic actions required to automate user actions. We may need to combine many of the above to perform tasks that simulate a real user.

Let’s go for my web review for the week 2026-06.


The Retro Web

Tags: tech, hardware, history

This is a nice resource trying to document the history of computer hardware. Really cool stuff.

https://theretroweb.com/


IndieWebify.Me? Yes please!

Tags: tech, web, blog, self-hosting, indie

Looks like an interesting tool to check you’re doing “everything right” on your blog. That said, it looks like quite a few hoops to jump through. I wish there were a way to make all this a bit easier.

https://blog.rickardlindberg.me/2026/02/04/indie-webify-me-yes-please.html


“IG is a drug”: Internal messages may doom Meta at social media addiction trial

Tags: tech, social-media, attention-economy, law

Clearly a trial to keep an eye on. Some of those internal memos might prove decisive.

https://arstechnica.com/tech-policy/2026/01/tiktok-settles-hours-before-landmark-social-media-addiction-trial-starts/


Backseat Software

Tags: tech, product-management, metrics, ux, attention-economy, surveillance, history

Excellent historical perspective on how we ended up with applications filled with annoying interruptions and notifications. It happened one step at a time indeed, and it has led to poor UX.

https://blog.mikeswanson.com/backseat-software/


AdNauseam

Tags: tech, web, browser, advertisement, attention-economy, privacy

I’m not sure I’m quite ready to use this… Still, I like the idea: make some noise and have the companies behind those invasive ads pay for nothing. The more users the better, I guess.

https://adnauseam.io/


Europe’s tech sovereignty watch

Tags: tech, europe, business, politics, vendor-lockin

Despite clearly being an advertisement for Proton’s offering, this shows how reliant European companies are on vendors with strategic problems. We can cheer at the EU policies when they go in the right direction. They are probably not enough yet, and European companies are clearly asleep at the wheel.

https://proton.me/business/europe-tech-watch


GDPR is a failure

Tags: tech, law, gdpr

The ideas behind GDPR are sound. The enforcement is severely lacking though. Thus its effects are too limited.

https://nikolak.com/gdpr-failure/


Mobile carriers can get your GPS location

Tags: tech, mobile, gps, privacy, surveillance, protocols

Yep, it’s worse than the usual triangulation everyone thinks about. It’s right there in the protocol, which is why you’d better not leave GPS on all the time.

https://an.dywa.ng/carrier-gnss.html


Meet Rayhunter: A New Open Source Tool from EFF to Detect Cellular Spying

Tags: tech, spy, surveillance, mobile, hardware

Time to spy on the spies. Or at least know when they’re around.

https://www.eff.org/deeplinks/2025/03/meet-rayhunter-new-open-source-tool-eff-detect-cellular-spying


What If? AI in 2026 and Beyond

Tags: tech, ai, machine-learning, gpt, copilot, business, economics

Interesting analysis. It gives a balanced view on the possible scenarios around the AI hype.

https://www.oreilly.com/radar/what-if-ai-in-2026-and-beyond/


Selfish AI

Tags: tech, ai, machine-learning, gpt, copilot, copyright, ecology, economics, ethics

Let’s not forget the ethical implications of those tools indeed. Too often people put them aside simply on the “oooh shiny toys” or the “I don’t want to be left behind” reactions. Both lead to a very unethical situation.

https://www.garfieldtech.com/blog/selfish-ai


The API Tooling Crisis: Why developers are abandoning Postman and its clones?

Tags: tech, web, api, tests

Another space with rampant enshittification… No wonder users are jumping between alternatives.

https://efp.asia/blog/2025/12/24/api-tooling-crisis/


What’s up with all those equals signs anyway?

Tags: tech, email, encodings

If you didn’t know about quoted-printable encoding, this is a good way to understand it.

https://lars.ingebrigtsen.no/2026/02/02/whats-up-with-all-those-equals-signs-anyway/


The Disconnected Git Workflow

Tags: tech, git, email

A good reminder that Git doesn’t force you to use a web application to collaborate on code.

https://ploum.net/2026-01-31-offline-git-send-email.html


4x faster network file sync with rclone (vs rsync)

Tags: tech, networking, syncing

Need to move many files around? Rsync might not be the best option anymore.

https://www.jeffgeerling.com/blog/2025/4x-faster-network-file-sync-rclone-vs-rsync/


From Python 3.3 to today: ending 15 years of subprocess polling

Tags: tech, python, processes, system

Nice improvement in Python for waiting for a subprocess to finish. It nicely explains the underlying options and the available syscalls if you need to do the same in your own code.

https://gmpy.dev/blog/2026/event-driven-process-waiting


Django: profile memory usage with Memray

Tags: tech, python, memory, profiling, django

Looks surprisingly easy to profile the Django startup. It probably makes sense to profile other parts of your application too, but that is likely a bit more involved.

https://adamj.eu/tech/2026/01/29/django-profile-memray/


Flavours of Reflection

Tags: tech, reflection, type-systems, c++, java, python, dotnet, rust

Looking at several languages and their reflection features. What’s coming with C++26 is really in another class than anything else. I just have concerns about its readability, though.

https://semantics.bernardteo.me/2026/01/30/flavours-of-reflection.html


In Praise of –dry-run

Tags: tech, tools, tests, command-line

This is indeed a very good option to have when you make a command line tool.

https://henrikwarne.com/2026/01/31/in-praise-of-dry-run/


Some Data Should Be Code

Tags: tech, data, programming, buildsystems, infrastructure, automation

There is some truth to this. Moving some things to data brings interesting properties, but it’s a double-edged sword. Things are simpler to use when kept as code. Maybe the answer is code emitting structured data.

https://borretti.me/article/some-data-should-be-code


Plasma Effect

Tags: tech, graphics, shader

Neat little shader for a retro demo effect.

https://www.4rknova.com/blog/2016/11/01/plasma


Forget technical debt

Tags: tech, technical-debt, engineering, organisation

Interesting insight. Gives a lot to ponder indeed. Focusing on technical debt alone probably won’t improve a project much. It’s thus important to take a broader view for long lasting improvements.

https://www.ufried.com/blog/forget_technical_debt/



Bye for now!

In the last month, I have mostly been working on refactoring the Kdenlive keyframes system to make it more powerful. This is part of an NGI Zero Commons grant via NLnet.

Improving existing parameters handling

A first step in preparation for this work was to improve the usability of some effects. In Kdenlive, we support several effect libraries, like MLT, Frei0r and FFmpeg's avfilters. Not all effects expose their parameters in the same way. For example, MLT has a rectangle parameter that allows defining a zone in a video frame. Other effects expose several independent parameters for a rectangle, namely x, y, width and height.

Currently, these effects are displayed with a list of sliders for each value:

With my latest changes, these will be handled like a rectangle, which means they can be manipulated directly in the monitor overlay.

Another improvement is the addition of Point parameters which will allow selecting a point in the monitor.

These changes are planned to be in the 26.04 release.

Boosting keyframes

The next improvement being worked on is support for keyframes per parameter. Currently, once you add a keyframe to an effect, it applies to all parameters, which is sometimes not wanted. Per-parameter keyframes will allow animating just one parameter while leaving the other parameters fixed. Below is a screenshot of a basic UI created to test the feature.

Also, currently you cannot see keyframes for different effects at the same time. Regrouping keyframes for all effects in one place will enable more powerful editing.

Kdenlive needs your support

Our small team has been working for years to build an intuitive open source video editor that does not track you, does not use your data, and respects your privacy. However, proper development requires resources, so please consider a donation if you enjoy using Kdenlive - even small amounts can make a big difference.

After taking a 13-year hiatus from KDE development, I found Harald Sitter's talk on KDE Linux at Akademy 2024 to be the perfect storm of nostalgia and inspiration that sucked me back in. I've been contributing on and off since then.

This blog post outlines some gaping holes I see in its extensibility model, and how I plan to address them (assuming no objections from other developers).


The Problem

KDE Linux is being built as an immutable OS without a traditional package manager. The strategy leans heavily on Flatpak for GUI applications, which (though, not without its problems) generally works well for its stated goal. But here's the thing: the Linux community has a relatively large population of CLI fanatics—developers who live in the terminal, who need $OBSCURE_TOOL for their workflow, who won't be satisfied with just what comes in a Flatpak.

The OS ships with a curated set of developer tools that we KDE developers decided to include. Want something else? There's a wiki page with suggestions for installation mechanisms we don't officially support—mechanisms that, let's be real, most of us don't even use ourselves.

This sets us up for the same reputation trap that caught KDE Neon:

Just like KDE Neon got pigeonholed with the reputation of being "for testing KDE software," KDE Linux risks getting branded as "for developing KDE software only."

There's also a deeper inconsistency here. One of the stated goals is making the end user's system exactly the same as our development systems. But if the tools we actually use day-to-day are already baked into the base image—and thus not part of the extensibility model we're asking users to adopt—then we're not eating our own dog food. We're shipping an experience we don't fully use ourselves.

The Solution


Look at the wild success of Docker and Kubernetes. Their container-based approach proved that immutable infrastructure actually works at scale. That success paved the way for Flatpak and Snap to become the de facto solution for GUI apps, and now we're seeing immutable base systems everywhere. The lesson is clear: containers aren't just one solution among many—they're the foundation that makes immutable systems viable.

Containers for CLI Tools???

As crazy as it sounds, that's the logical next step. Let's look at the candidates to base our solution on top of:

distrobox/toolbox are trying to solve the right problem—building long-term, persistent development environments—but they're doing it on top of docker/podman, which were designed for ephemeral containers. They're essentially fighting against the grain of their underlying systems. Every time you want to do something that assumes persistence and state, you're working around design decisions made for a different use case. It works, but you can feel the friction.

systemd-nspawn is built for persistence from the ground up, which is exactly what we want. It has a proper init system, it's designed to be long-lived. The challenge here is that we need fine-grained control over the permissions model—specifically, we need to enable things like nested containers (running docker/podman inside the container) and exposing arbitrary hardware devices without a lot of manual configuration. systemd-nspawn makes these scenarios difficult by design, which is great for security but limiting for a flexible development environment.

devcontainers nail the developer experience—they're polished, well-integrated, and they just work. The limitation is that they're designed to be used from an IDE like VS Code, not as a system-wide solution. We need something that integrates with the OS itself, not just with your editor. That said, there are definitely lessons to learn from how well they've executed on the developer workflow.

Our knight in shining armor:


Enter Incus. It checks all the boxes:

  • Proper API for building tooling on top of it
  • Nested containers work out of the box—want to run docker inside your Incus container? Go for it
  • Privileged container mode for when you need full system access and hardware devices
  • Built on LXC, which means it's designed for long-lived, system-level containers from day one, not retrofitted from ephemeral infrastructure

Bonus: it supports VMs too, for running less trusted workloads. People on Matrix said they want this option. I don't fully get the use case for a development environment, but the flexibility is there if we need it.

Architecture

Incus exposes a REST API with full OpenAPI specs—great for interoperability, but dealing with REST/OpenAPI in C++ is not something I'm eager to take on.

My first choice would be C#—it's a language I actually enjoy, and it handles this kind of API work beautifully. But I suspect the likelihood of KDE developers accepting C# code into the project is... low.

Languages that already build in kde-builder and CMake will probably have the least friction for acceptance, and of those, Python is the best fit for this job. The type system isn't as mature as I'd like (though at least it exists now with type hints), and the libraries for OpenAPI and D-Bus are... okay-ish. Not amazing, but workable.

Here's the plan:

  • Daemon in Python to handle all the Incus API interaction
  • CLI, KCM, and Konsole plugin in C++ for the user-facing pieces that integrate with the rest of KDE

This way we keep the REST/OpenAPI complexity in Python where it's manageable, and the KDE integration in C++ where it belongs.

Current Status

Look, the KDE "K" naming thing is awesome. I'm not going to pretend otherwise. My first instinct was "Kontainer"—obvious, descriptive, checks the K box. Unfortunately, it was already taken.

So I went with Kapsule. It's a container. It encapsulates things. The K is there. Works for me.

The implementation is currently in a repo under my user namespace at fernando/kapsule. But I have a ticket open with the sysadmin team to move this into kde-linux/kapsule. Once that's done I'll be able to add it to kde-builder and start integrating it into the KDE Linux packages pipeline.

The daemon and CLI are functional. Since a picture is worth a thousand words, here's a screenshot of the CLI in action:


Here's docker and podman running inside ubuntu and fedora containers, respectively:


And here's chromium running inside the container, screencasting the host's desktop:


Next Steps

Deeper integration

Right now, Kapsule is a CLI that you have to manually invoke, and it lives kind of separately from the rest of the system. That's fine for a proof of concept, but the real value comes from making it invisible to users who just want things to work.

Konsole

Konsole gained container integration in !1171, so I just need to create and add an IContainerDetector for Kapsule. Once that's wired up, I'll add a per-distro configurable option to open terminals in the designated container by default.

When Kapsule is stable enough, that becomes the default behavior. Users won't have to know or care about Kapsule—they just open a terminal and their tools are there. Unless they break their container, which leads nicely to the next point...

KCM

A System Settings module for container management:

  • Create, delete, start, stop containers
  • Easy reset if you ran something that broke things
  • For advanced users: configuration options like which distro to use, resource limits, etc.

Discover

These containers need to be kept up to date. Most will have PackageKit inside them, so we can create a Discover plugin that connects to the container's D-Bus session and shows updates for the container's packages alongside the host's updates. Seamless.

Moving dev tools out of the base image

This is the long-term goal: get Kapsule stable and good enough that we can remove ZSH, git, clang, gcc, docker, podman, distrobox, toolbox, and the rest of the dev tools from the base image entirely. All of those already work in Kapsule.

Once that happens, we're eating our own dog food. The extensibility model we're asking users to adopt is the same one we're using ourselves.

Pimped out container images

We'll maintain our own image repository. There's no real limit to the number of images we can offer, and everyone in the #kde-linux channel can show off their style. Want a minimal Arch-based dev container? A fully-loaded Fedora workstation? A niche distro for embedded development? A Nix-based image (I'm looking at you, Hadi Chokr)? All possible.

Trying it out

Honestly, the best way to try it out is to wait for me to get it integrated into the KDE Linux packages pipeline and into the base image itself. Hopefully that'll be in the next few days.

Thursday, 5 February 2026

This particular guide is for myself, documenting mistakes I made so that I won’t repeat them again.