
Welcome to Planet KDE

This is a feed aggregator that collects what the contributors to the KDE community are writing on their respective blogs, in different languages.

Monday, 9 February 2026

In my last post, I laid out the vision for Kapsule—a container-based extensibility layer for KDE Linux built on top of Incus. The pitch was simple: give users real, persistent development environments without compromising the immutable base system. At the time, it was a functional proof of concept living in my personal namespace.

Well, things have moved fast.

rocket launching

It's in KDE Linux now

Kapsule is integrated into KDE Linux. It shipped. People are using it. It hasn't caught fire.

The reception in #kde-linux:kde.org has been generally positive—which, if you've ever shipped anything to a community of Linux enthusiasts, you know is about the best outcome you can hope for. No pitchforks, some genuine excitement, and a few good bug reports. I'll take it.

Special shoutout to @aks:mx.scalie.zone, who has been daily-driving Kapsule and—crucially—hasn't had it blow up in their face. Having a real user exercising the system outside of my own testing has been invaluable.

Bug fixes

Before shipping, I spent a full day rigorously testing every main scenario I could think of—and writing integration tests to back them up. (Those tests run on my own machine for now, since the CI/CD pipelines don't exist yet. More on that below.) The result: the first version that landed in KDE Linux was quite stable. Only one minor issue has turned up, and it isn't blocking anything.

CI/CD

I'm in the process of building out proper CI/CD pipelines. This is the unglamorous-but-essential work that turns a "project some guy hacked together" into something that can be maintained and contributed to by others. Automated builds, automated tests, the whole deal. Not exciting to talk about, but it's what separates hobby projects from real infrastructure.

Konsole integration

This is the one I'm most excited about. In the last post, I mentioned that Konsole gained container integration via !1171, and that I'd need to wire up an IContainerDetector for Kapsule. That work is underway, with two merge requests up:

!1178 (Phase 2) adds containers directly to the New Tab menu. You can see your available containers right there alongside your regular profiles—pick one, and you get a terminal session inside that container. Simple.

containers in new tab menu

A big chunk of this work was refactoring the container listing to be fully async. There are a lot of cases where listing containers can take a while—distrobox list calls docker ps, which pokes docker.socket, which might be waiting on systemd-networkd-wait-online.target—and we absolutely cannot block the UI thread during all of that.

!1179 (Phase 3) takes it a step further: you can associate a container with a Konsole profile. This is what gets us to the "it just works" experience—configure your default profile to open in your container, and every new terminal is automatically inside it.

container profile configuration

These lay the groundwork for Konsole to be aware of Kapsule containers. The end goal hasn't changed: open a terminal, you're in your container, you don't have to think about it. We're not at the "it just works" stage yet, but the foundation is being poured.

Future work: configurable host integration

switchboard operator

Right now, Kapsule gives you some control over how tightly the container integrates with the host. The --session flag on kap create lets you control whether the host's D-Bus session socket gets mounted into the container. That's a good start, but I think I need to go further.

The bigger issue is filesystem mounts. Currently, Kapsule unconditionally mounts / to /.kapsule/host and the user's home directory straight through to /home/user. That means anything running inside the container has full read-write access to your entire home directory.

That's fine when you trust everything you're running, but some tools are less predictable than others. There are horror stories floating around of autonomous coding agents hallucinating paths and rm -rfing directories they had no business touching. Containers are a natural mitigation for this—if something goes sideways, the blast radius is limited to what you explicitly shared. But that only works if you can actually control what gets shared.

The fix is making filesystem mounts configurable. Instead of unconditionally exposing everything, you should be able to say "this container only gets access to ~/src/some-project" and nothing else. Want a fully integrated container that feels like your host? Mount everything. Want a sandboxed environment for running less predictable tools? Expose only what's needed. The trust model should be a dial, not a switch.

A word of caution, though: this is a mitigation for accidents and non-deterministic tools, not a security boundary for running genuinely untrusted workloads. Kapsule containers share the host's Wayland, PulseAudio, and PipeWire sockets—that's a lot of attack surface if you're worried about malicious code. For truly untrusted workloads, VMs would be a much better fit, and Incus already supports those. It's not on my roadmap, but all the building blocks are there if someone wants to explore that direction.

New on the radar: exporting apps

Here's one I didn't see coming. aks asked about running a GUI application installed inside a container and having it show up on the host like a normal app. I realized... I didn't have that functionality at all.

The naive approach is straightforward enough: drop a .desktop file into ~/.local/share/applications that launches the app inside the container. But I really don't like that solution, and here's why:

  • Stale entries. If the container gets deleted, or Kapsule itself gets uninstalled, those .desktop files just sit there like ghosts. You click them, nothing happens, and now you're debugging why your app launcher is showing dead entries.
  • No ownership tracking. There's no built-in mechanism to say "this .desktop file belongs to this container, managed by Kapsule." If something goes wrong, there's no clean way to find and remove all the artifacts.

What I really want is a way to tie exported application entries to their owning container and to Kapsule itself. That way, when a container goes away, its exported apps go away too. Clean. Automatic. No orphans.
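To make that concrete, here's a minimal sketch of what file-level ownership tracking could look like. Everything in it is hypothetical: the X-Kapsule-Container key, the kap run invocation, and the file naming are all made up to illustrate the idea, not how Kapsule actually works, and it's in Rust purely for illustration. (X-prefixed keys are the spec-blessed way to put vendor extensions in .desktop files.)

use std::fs;
use std::io;
use std::path::PathBuf;

// Hypothetical vendor key tying an exported entry to its owning container.
const OWNER_KEY: &str = "X-Kapsule-Container";

fn applications_dir() -> PathBuf {
    PathBuf::from(std::env::var("HOME").expect("HOME not set"))
        .join(".local/share/applications")
}

// Export an app: write a .desktop file that records its owner.
fn export_app(container: &str, app: &str) -> io::Result<()> {
    let entry = format!(
        "[Desktop Entry]\nType=Application\nName={app} (in {container})\n\
         Exec=kap run {container} {app}\n{OWNER_KEY}={container}\n"
    );
    fs::write(
        applications_dir().join(format!("kapsule-{container}-{app}.desktop")),
        entry,
    )
}

// On container deletion (or Kapsule uninstall), sweep every entry we own.
fn remove_exported_apps(container: &str) -> io::Result<()> {
    let marker = format!("{OWNER_KEY}={container}");
    for dirent in fs::read_dir(applications_dir())? {
        let path = dirent?.path();
        if path.extension().is_some_and(|e| e == "desktop")
            && fs::read_to_string(&path)?.lines().any(|l| l == marker)
        {
            fs::remove_file(&path)?;
        }
    }
    Ok(())
}

A sweep like that covers cleanup when a container is deleted, but the desktop environment itself still knows nothing about the ownership.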

This might require changes to kbuildsycoca—the KDE system configuration cache builder—to support some kind of ownership or provenance metadata for .desktop files. I need to investigate whether that's feasible or if there's a better approach entirely. It's the kind of problem where the quick hack is obvious but the right solution requires some thought.

Taking a break (in theory)

chained to a desk

I'm going to do my best to not touch Kapsule until the weekend. I have a day job that I've been somewhat neglecting in favor of hacking on this, and my employer probably expects me to, you know, do the thing they're paying me for.

We'll see how good my self-control is.

Ever since C++20 introduced coroutine support, I've been wondering how this could integrate with Qt. Apparently I wasn’t the only one: before long, QCoro popped up. A really cool library! But it doesn’t use the existing future and promise types in Qt; instead it introduces its own types and mechanisms to support coroutines. I kept wondering why no one just made QFuture and QPromise compatible – it would certainly make for a more lightweight wrapper.

With a recent project at work being a gargantuan mess of QFuture::then() continuations (ever tried async looping constructs with continuations only? 🥴), I had enough of a reason to finally sit down and implement this myself. The result: https://gitlab.com/pumphaus/qawaitablefuture.

Example

#include <qawaitablefuture/qawaitablefuture.h>

QFuture<QByteArray> fetchUrl(const QUrl &url)
{
    QNetworkAccessManager nam;
    QNetworkRequest request(url);

    QNetworkReply *reply = nam.get(request);

    co_await QtFuture::connect(reply, &QNetworkReply::finished);
    reply->deleteLater();

    if (reply->error()) {
        throw std::runtime_error(reply->errorString().toStdString());
    }
    co_return reply->readAll();
}

It looks a lot like what you’d write with QCoro, but it all fits in a single header and uses native QFuture features to – for example – connect to a signal. It’s really just syntax sugar around QFuture::then(). Well, that, and a bit of effort to propagate cancellation and exceptions. Cancellation propagation works both ways: if you co_await a canceled QFuture, the “outer” QFuture of the coroutine will be canceled as well. If you cancelChain() a suspended coroutine-backed QFuture, cancellation will be propagated into the currently awaited QFuture.

What’s especially neat: You can configure where your coroutine will be resumed with co_await continueOn(...). It supports the same arguments as QFuture::then(), so for example:

QFuture<void> SomeClass::someMember()
{
    co_await QAwaitableFuture::continueOn(this);
    co_await someLongRunningProcess();
    // Due to continueOn(this), if "this" is destroyed during someLongRunningProcess(),
    // the coroutine will be destroyed after the suspension point (-> outer QFuture will be canceled)
    // and you won't access a dangling reference here.
    co_return this->frobnicate();
}

QFuture<int> multithreadedProcess()
{
    co_await QAwaitableFuture::continueOn(QtFuture::Launch::Async);

    double result1 = co_await foo();
    // resumes on a free thread in the thread pool
    process(result1);

    double result2 = co_await bar(result1);
    // resumes on a free thread in the thread pool
    double result3 = transmogrify(result2);

    co_return co_await baz(result3);
}

See the docs for QFuture::then() for details.

Also, if you want to check the canceled flag or report progress, you can access the actual QPromise that’s backing the coroutine:

QFuture<int> heavyComputation()
{
    QPromise<int> &promise = co_await QAwaitableFuture::promise();
    promise.setProgressRange(0, 100);

    double result = 0;

    for (int i = 0; i < 100; ++i) {
        promise.setProgressValue(i);
        if (promise.isCanceled()) {
            co_return result;
        }
        frobnicationStep(&result, i);
    }
    co_return result;
} 

Outlook

I’m looking to upstream this. It’s too late for Qt 6.11 (already in feature freeze), but maybe 6.12? There have been some proposals for coroutine support on Qt’s Gerrit already, but none made it past the proof-of-concept stage. Hopefully this one will make it. Let’s see.

Otherwise, just use the single header from the qawaitablefuture repo. It can be included as a git submodule, or you can just vendor the header as-is.

Happy hacking!

Caveat: GCC < 13

There was a nasty bug in GCC’s coroutine support: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101367 It affects all GCC versions before 13.0.0 and effectively prevents you from writing co_await foo([&] { ... }); – i.e. you cannot await an expression involving a temporary lambda. You can rewrite this as auto f = foo([&] { ... }); co_await f; and it will work. But there’s no warning at compile time: as soon as a lambda with captures is a temporary expression inside the co_await, it will crash and burn at runtime. This is fixed in GCC 13+, but it took me a while to figure out why things went haywire on Ubuntu 22.04 (which defaults to GCC 11).

The second maintenance release of the 25.12 series is out with the usual batch of stability fixes and workflow improvements. Highlights of this release include fixes to various monitor issues and a refactoring of the monitor dragging mechanism. See the changelog below for more details.

Kdenlive needs your support

Our small team has been working for years to build an intuitive open source video editor that does not track you, does not use your data, and respects your privacy. However, proper development requires resources, so please consider a donation if you enjoy using Kdenlive - even small amounts can make a big difference.

For the full changelog continue reading on kdenlive.org.

Sunday, 8 February 2026

Minimalism came in like a wrecking ball somewhere around 2013. It delivered a terminal diagnosis to all but a few prevailing designs at the time. One of them, called Oxygen, had reigned supreme in KDE Plasma. As with many others, its demise was inevitable. Anyone aspiring to demonstrate that they were building something new and… Continue Reading →

Saturday, 7 February 2026

Transitous is a project that runs a public transport routing service that aspires to work world-wide. The biggest leaps forward in coverage happened in the beginning, when it was just a matter of finding the right URLs to download the schedules from. Most operators provide them in the GTFS format, which is also used in Google Transit and a few other apps.

However, the number of readily available GTFS schedules (so-called feeds) that we are not using yet is starting to become quite small. As is evident when comparing with Google Transit, there are still a number of feeds that are only privately shared with Google. This is not great from the standpoint of preventing monopolies, and it is also a major problem for free and open-source projects, which don’t have the resources to negotiate with each operator individually or to even buy access to the data from them. Beyond that case, there is still a surprisingly large number of places in the world that do not publish any schedules in a standardized format, and that is something that we can fix.

Source data comes in many shapes and forms, but the ingredients we’ll definitely need are:

  • the lines
  • stop times
  • stop locations
  • and service dates

Sometimes you can find a provider-specific API that returns the needed information, or there is open data in a non-standard format. In the worst case, it might be necessary to scrape the data out of the website’s HTML.

Some examples:

A stop time from the API of ŽPCG (the railway in Montenegro):

Example 1:

{
    "ArrivalTime" : "15:49:16",
    "DepartureTime" : "15:51:16",
    "stop" : {
        "Latitude" : 42.511829,
        "Longitude" : 19.203468,
        "Name_en" : "Spuž",
        "external_country_id" : 62,
        "external_stop_id" : 31111,
        "local" : 1,
    }
}

It is immediately visible that we get the stop times, for some reason with extreme precision. We also get coordinates for the location, which makes conversion to GTFS much easier. Unfortunately, the coordinates from this dataset are not exactly great and can easily be off by multiple kilometers, but they nevertheless provide a rough estimate that we can improve on by matching them to OpenStreetMap.

Railway-data enthusiasts will also notice that we get a UIC country code and a stop code, which we can concatenate to get a full UIC stop identifier (here, 62 and 31111 combine to 6231111). We can make use of that for OSM matching later on.

Example 2:

<ElementTrasa Ajustari="0" CodStaDest="27888" CodStaOrigine="23428" DenStaDestinatie="Ram. Budieni"
    DenStaOrigine="Târgu Jiu" Km="1207" Lungime="250" OraP="45180" OraS="45300" Rci="R" Rco="R" Restrictie="0"
    Secventa="1" StationareSecunde="0" TipOprire="N" Tonaj="500" VitezaLivret="80"/>

This example is from open data for railways in Romania. Unfortunately, this one does not give us coordinates, and the fact that the fields are in abbreviated Romanian doesn’t make it easy to understand for someone like me who does not speak any vaguely related language. However, looking at the numbers, we can figure out that OraP and OraS are seconds since midnight and provide the departure and arrival times. Note that the data does not model the times at stops, but the transitions between the stops, so some reshuffling is necessary, as sketched below.
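For illustration, that reshuffling could look roughly like this. This is a sketch in Rust: the Segment fields mirror the XML attributes above, everything else is made up.

// One <ElementTrasa> row: a transition between two consecutive stations.
struct Segment {
    origin: String, // DenStaOrigine
    dest: String,   // DenStaDestinatie
    dep_s: u32,     // OraP: departure from origin, seconds since midnight
    arr_s: u32,     // OraS: arrival at dest, seconds since midnight
}

// What GTFS wants instead: one record per stop.
struct StopTime {
    station: String,
    arrival_s: Option<u32>,
    departure_s: Option<u32>,
}

fn segments_to_stop_times(segments: &[Segment]) -> Vec<StopTime> {
    let mut stops: Vec<StopTime> = Vec::new();
    for (i, seg) in segments.iter().enumerate() {
        if i == 0 {
            // First station of the trip: departure only.
            stops.push(StopTime {
                station: seg.origin.clone(),
                arrival_s: None,
                departure_s: Some(seg.dep_s),
            });
        } else if let Some(prev) = stops.last_mut() {
            // The previous segment already recorded the arrival at this
            // station; this segment contributes its departure time.
            prev.departure_s = Some(seg.dep_s);
        }
        stops.push(StopTime {
            station: seg.dest.clone(),
            arrival_s: Some(seg.arr_s),
            departure_s: None, // filled in by the next segment, if any
        });
    }
    stops
}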

Example 3:

{
    "ArrivalTimes" : "12:35, 13:37, 14:02, 14:21, 14:52, 14:27, 15:04, 15:50, 16:14, 16:39, 17:08, 17:36, 18:55, 19:03, 19:12, 20:05, 20:33, 21:12, 21:47",
    "Classes" : "2",
    "DepartureTimes" : "12:35, 13:38, 14:03, 14:22, 15:12, 14:42, 15:29, 15:51, 16:15, 16:40, 17:33, 17:37, 18:57, 19:08, 19:13, 20:06, 20:34, 21:13, 21:47",
    "Route" : "Vilnius-Krokuva",
    "RouteStops" : "Vilnius, Kaunas, Kazlų Rūda, Marijampolė, Mockava, Trakiškė/Trakiszki, Suvalkai/Suwałki, Augustavas/Augustów, Dambrava/Dąbrowa Białostocka, Sokulka/Sokółka, Balstogė/Białystok, Balstogė (Žaliakalnio stotis) / Białystok Zielone Wzgórza, Varšuva (Rytinė stotis)/ Warszawa Wschodnia, Varšuva (Centrinė stotis)/ Warszawa Centralna, Varšuva (Vakarinė stotis)/ Warszawa Zachodnia, Opočnas (Pietinė stotis)/Opoczno Południe, Vloščova (Šiaurinė stotis)/Włoszczowa Północ, Mechuvas/Miechów, Krokuva (Pagrindinė stotis)/ Kraków Główny",
    "RunWeekdays" : "1,2,3,4,5,6,7",
    "Spaces" : "WHEELCHAIR, BICYCLE",
    "TrainNumber" : "33/141"
}

This example is from the, to my knowledge, only source of semi-machine-readable information on railway timetables in Lithuania. While JSON is straightforward to parse, this one for some reason does not use JSON arrays, but comma-separated lists. We can still work with this, of course – until a stop appears whose name contains a comma. Oh well. Once again, no coordinates are provided. While it is tempting to think this should be easier to convert than XML in Romanian, this format has some more hidden fun. If you have been to the area the line from the example operates in, you might already have noticed that the time zone changes between Lithuania and Poland. Unfortunately, there is no notice of that in the data, just a randomly backwards-jumping arrival and departure time.

To work with this, we first need to match the stop names to coordinates, then figure out the time zones from that, and then convert the stop times.
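Once the time zones are known (the matching step described in the next section provides the locations), the conversion itself could look like the following sketch, using the chrono and chrono-tz crates. GTFS requires all stop times to be expressed in the agency's time zone:

use chrono::{NaiveDate, NaiveTime, TimeZone};
use chrono_tz::Tz;

// Convert a local wall-clock time from the source data into the feed's
// agency time zone. The concrete service date is needed to resolve DST.
fn to_agency_time(
    date: NaiveDate,
    local: NaiveTime, // e.g. 14:27 as printed for a stop in Poland
    stop_tz: Tz,      // e.g. Europe/Warsaw, derived from the matched location
    agency_tz: Tz,    // e.g. Europe/Vilnius
) -> Option<NaiveTime> {
    let at_stop = stop_tz.from_local_datetime(&date.and_time(local)).single()?;
    Some(at_stop.with_timezone(&agency_tz).time())
}

In the example above, the 14:27 at Trakiškė/Trakiszki is Polish local time; expressed in the Lithuanian agency time zone it becomes 15:27, and the sequence is monotonic again. Times that roll past midnight after conversion then need GTFS's larger-than-24:00 notation.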

Matching stations to locations

OpenStreetMap is the obvious choice for this. It is fairly easy to query the station locations, but matching the strings to the right node in OSM is not trivial. There are often variations in spelling, particularly if the data covers neighbouring countries with different languages. The data may also have latinized names while the country usually uses a different script, and so on.

Since this problem comes up repeatedly, I am slowly improving my Rust library (gtfs-generator) for this, so it can hopefully handle most of these cases automatically at some point. It aims to be very customizable, so the matching criteria need to be supplied by the library user. The following example matches a stop if it has a matching uic_ref tag, which is a strong identifier. If no node has such a tag, all nodes within a radius of 20 km are considered, if their name is a direct match, an abbreviation of the other spelling, sufficiently similar, or shares matching words. The matching radius can be overridden in each query, so if nothing is known yet, the first guess can be the middle of the country with a large enough radius. As soon as one station is known, the ones appearing on the same route must be fairly close. The matching quality strongly depends on making a good guess of the distance from the previous stop, as that greatly reduces the risk of similarly named stations being mismatched.

Since OpenStreetMap provides different multi-lingual name tags, the order that these should be considered in needs to be set as well.

The code for matching will look something like this:

let mut matcher = osm::StationMatcherBuilder::new()
    .match_on(osm::MatchRule::FirstMatch(vec![
        osm::MatchRule::CustomTag {
            name: "uic_ref".to_string(),
        },
        osm::MatchRule::Both(
            Box::from(osm::MatchRule::Position {
                max_distance_m_default: 20000,
            }),
            Box::from(osm::MatchRule::FirstMatch(vec![
                osm::MatchRule::Name,
                osm::MatchRule::NameAbbreviation,
                osm::MatchRule::NameSimilar {
                    min_similarity: 0.8,
                },
                osm::MatchRule::NameSubstring,
            ])),
        ),
    ]))
    .name_tag_precedence(
        [
            "name",
            "name:ro",
            "short_name",
            "name:en",
            "alt_name",
            "alt_name:ro",
            "int_name",
        ]
        .into_iter()
        .map(ToString::to_string)
        .collect(),
    )
    .transliteration_lang(TransliterationLanguage::Bg)
    .download_stations(&["RO", "HU", "BG", "MD"])
    .unwrap();
    
    // Parse input

    let station = matcher.find_matching_station(&osm::StationFacts {
        name: Some(name),
        pos: Some(previous_coordinates.unwrap_or((46.13, 24.81))), // If we know nothing yet, bias to the middle of Romania, so we at least don't end up in the wrong country
        max_distance_m: match (previous_coordinates, previous_time, atime) {
            // We have previous coordinates and times, but no position hint for
            // this station: base the limit on the distance reachable at a
            // reasonable top speed.
            (Some(_prev_coords), Some(departure), Some(arrival)) => {
                let travel_seconds = arrival - departure;
                Some(travel_seconds * 70) // assume at most 70 m/s (252 km/h)
            }
            }
            // No previous location known
            _ => Some(800000),
        },
        values: HashMap::from_iter([("uic_ref".to_string(), uic_ref.clone())]),
    });

For now, until I’m somewhat certain about the API, you’ll need to use the git repository directly to use the OSM matching feature.

Finally writing the GTFS file

After all the ingredients are collected, the actual GTFS conversion should be fairly easy. We now need to sort the data we collected into the main categories of objects represented by GTFS: routes, trips, stops, and stop_times.

Every trip needs to have a corresponding route. The exact distinction between different routes depends on the specific transit system. In the simplest case, if multiple buses operate under the same line number on the same day, they belong to the same route. Each individual run of the bus during the day is its own GTFS trip.

If the system does not have the concept of routes, every trip simply is its own route. This is for example what Deutsche Bahn does for ICE trains, where each journey has its own train number. A sketch of the route and trip side follows the snippet below.

The code for building the GTFS data looks somewhat like this:

    let mut gtfs = GtfsGenerator::new();

    // parse input

    gtfs.add_stop(gtfs_structures::Stop {
        id: stop_time.station_code.to_string(),
        name: Some(stop_time.station_name.to_string()),
        latitude: coordinates.as_ref().map(|(lat, _)| *lat),
        longitude: coordinates.as_ref().map(|(_, lon)| *lon),
        ..Default::default()
    })
    .unwrap();

    gtfs.add_stop_time(gtfs_structures::RawStopTime {
        trip_id: trip_id.clone(),
        arrival_time: Some(atime.unwrap_or(dtime)),
        departure_time: Some(dtime),
        stop_id: stop_time.station_code.to_string(),
        stop_sequence: stop_time.sequence,
        ..Default::default()
    })
    .unwrap();
    
    gtfs.write_to("out.gtfs.zip").unwrap();
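The route/trip distinction from above would map onto the same generator along these lines; these calls would go before the final write_to. This is a hedged sketch: add_route and add_trip are my assumption, by analogy with the add_stop and add_stop_time calls above, and the gtfs_structures field names and types may differ between crate versions.

    // One route per line...
    gtfs.add_route(gtfs_structures::Route {
        id: "vilnius-krokuva".to_string(),
        short_name: Some("33/141".to_string()),
        long_name: Some("Vilnius-Krokuva".to_string()),
        route_type: gtfs_structures::RouteType::Rail,
        ..Default::default()
    })
    .unwrap();

    // ...and one trip per concrete departure of that line, linked to its
    // service dates via the service_id.
    gtfs.add_trip(gtfs_structures::RawTrip {
        id: trip_id.clone(),
        route_id: "vilnius-krokuva".to_string(),
        service_id: "daily".to_string(),
        ..Default::default()
    })
    .unwrap();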

Validating the result

A new GTFS feed is rarely perfect on the first try. I recommend running it through gtfsclean first. After all obvious issues are fixed (missing fields, broken references), you can use the validator of the French government and the canonical GTFS validator. It is worth using both, as they check for slightly different issues.

Once all critical errors reported by the validators are fixed, you can finally test the result in MOTIS. You can get a precompiled static binary from GitHub Releases. Afterwards, create a minimal config file using ./motis config out.gtfs.zip. The API on its own is not too useful for testing, so add

server:
  web_folder: /path/to/ui/directory

to the top of the file to make the web interface available.

Make sure to also enable the geocoding option, so you can search for stops.

Now all that’s needed is loading the data and starting the server:

./motis import
./motis server

You should now be able to search for your stops and find routes:

MOTIS showing a connection between Vilnius and Riga

Publishing

Once it is ready, the GTFS feed needs to be uploaded to a location that provides a stable URL even if the feed is updated. The webserver should also support the Last-Modified header, so the feed is only downloaded when needed. A simple webserver serving a directory, like nginx or Apache, works well here, but something like Nextcloud works equally well if you already have access to an instance.

Since the converted dataset needs to be regularly updated, I recommend setting up a CI pipeline for that purpose. The free CI offered by gitlab.com and GitHub is usually good enough. I also recommend setting up pfaedle in the pipeline to automatically add the exact paths the vehicles take, based on routing over OpenStreetMap data.

Once you have a URL, you can add it to Transitous and to places where other developers can find it, like the Mobility Database.

If you are interested in some examples of datasets generated this way, check out the Mobility Database entries for LTG Link and ŽPCG. You can find a list with some examples of feeds I generate here, including the generator source code based on the gtfs-generator.

You can always ask in the Transitous Matrix channel in case you hit any roadblocks with your own GTFS-converter projects.

This is a sequel to my first blog post, where I’m working on fixing a part of docs.kde.org: PDF generation. The documentation website generation pipeline currently depends on docs-kde-org, and this week I focused on decoupling it.

Moving away from docs-kde-org

I began by copying the script that builds the PDFs in docs-kde-org to ci-utilities as a temporary step towards independence.

I then dropped the bundled dblatex for the one present in the upstream repos, wired up the new paths, and hit build. To no one’s surprise, it failed immediately. The culprit was the custom-made kdestyle LaTeX style. As a temporary measure, I excluded it to get PDFs to generate again; however, several (new) locales broke.

So the pipeline now worked, albeit partially.

(Commit 17bbe090) (Commit 3e8f8dc9)

Trying XeTeX

Most non-ASCII languages (Chinese, Japanese, Korean, Arabic, etc.) are not supported for PDF generation because of wonky Unicode support in pdfTeX (the existing backend driver that converts LaTeX code to a PDF).

So I switched the backend to XeTeX to improve the font and language handling. Now I’m greeted with a new failure :D

xltxtra.sty not found

xltxtra is a package used by dblatex to process LaTeX code with XeTeX. Installing texlive-xltxtra fixed this, and almost everything built (except Turkish).

(Commit 7ede0f94)

The deprecated package rabbit hole

While installing xltxtra, I noticed that it was deprecated and mostly kept for compatibility. So I tried replacing it with modern alternatives (fontspec, realscripts, metalogo).

Back to xltxtra.sty not found

At this point I looked at the upstream dblatex code and found references to xltxtra in a LaTeX style file and in the XSL templates responsible for generating the LaTeX.

I experimented with local overrides, but the dependency is introduced during the XSLT stage, so it isn’t trivial to swap out.

Clearer mental model

A very good outcome of these experiments is that I now understand the pipeline much better:

DocBook XML -> XSLT Engine -> LaTeX Source -> XeLaTeX -> PDF

XSL files are templates for the XSLT engine to convert the DocBook files to LaTeX, and the problem is probably in one of these templates.

Decision with mentor

After discussion, Johnny advised me to:

  • Use texlive-xltxtra for now
  • Open an upstream issue
  • Accept that fixing dblatex itself is outside our scope

So I went ahead and filed the issue https://sourceforge.net/p/dblatex/bugs/134/. Now we keep moving forward while also documenting the technical debt.

Locales experiment

I temporarily re-enabled previously excluded locales (CJK etc.) to see what happens. Unfortunately, most of them silently failed to generate. I’m still unsure about this part and the behavior might differ in other projects, so I’ll revisit it later.

(Commit 04b698a5)

kdestyle strikes back

Next, I fed the absolute path of kdestyle to dblatex (to maintain PDF output parity), which introduced a new set of errors:

  • Option clash for graphicx
  • conflicting driver options in hyperref (dvips, pdftex)

These seem to originate from the style file and are the next major thing to resolve.

(Commit 7ede0f94)

Improved Git skills

I also had a self-inflicted problem this week. My feature branch diverged from master and I merged instead of rebasing; as a result, the history got messed up. I fixed it by:

  • dropping the merge commit
  • rebasing properly
  • force pushing the cleaned branch

Lessons learnt. Working with open source projects has really made Git less intimidating.

Where things stand

Right now:

  • we can build with XeTeX
  • we rely on texlive-xltxtra
  • most locales work (except Turkish of course)
  • kdestyle introduces some problems

Also, here’s a before vs. after of the generated PDFs (the end goal would be to make both of them look identical).

Honestly, I don’t feel like I accomplished much, but the understanding gained should make future changes much faster (and safer :D).

Dev logs log 1, log 2, log 3, log 4

Edit (09-02-2026): Added links to relevant commits based on feedback received

Last weekend I attended this year’s edition of FOSDEM in Brussels again, mostly focusing on KDE and Open Transport topics.

FOSDEM logo

KDE

As usual, KDE had a stand, this time back in the K building and right next to our friends from GNOME. Besides various stickers, t-shirts, and hoodies, a few of the rare amigurumi Konqis were also available. And of course demos of our software on laptops, mobile phones, gaming consoles, and graphics tablets.

KDE stand at FOSDEM showing several members of the KDE crew and the KDE table with phones, laptops and a drawing tablet as well as stickers and t-shirts.
KDE stand (photo by Bhushan Shah)

Several KDE contributors also appeared in the conference program, such as Aleix’s retrospective on 30 years of KDE and Albert’s talk on Okular.

Itinerary

Meeting many people who have just traveled to FOSDEM is also a great opportunity to gather feedback on Itinerary, especially with the many disruptions allowing us to test various special cases.

  • Eurostar’s ticket scanners apparently can’t deal correctly with binary ticket barcodes, even though that’s exactly what the standard UIC SSB barcode they use on their Thalys routes is. Their off-spec workaround of base64-encoding the content is now preserved by Itinerary.
  • With DB’s API being blocked increasingly often from other countries, we now regularly end up with data mixed from different sources. That exposed some issues with merging data that has a different number of intermediate stops, e.g. due to ÖBB’s API listing border points as well.

Open Transport Community

For the fourth time, FOSDEM had a dedicated Railways and Open Transport track. It’s great to see this continue to evolve, with policymakers and regulators now not just attending but actively involved. And not just at FOSDEM: I’m very happy to see community members being consulted and involved in legislation and standardization processes that were largely inaccessible to us until not too long ago.

Photo of the opening of the Railway and Open Transport track, with the announcement of the Open Transport Community Conference 2026 on the projector.
Railway and Open Transport track opening.

Just in time for FOSDEM, we also got a commitment for a venue for the next iteration of the Open Transport Community Conference: October, in Bern, Switzerland, at the SBB headquarters. More on that in a future post.

Transitous

Transitous got started at FOSDEM 2024. What we have today goes far beyond what seemed achievable back then, just two years ago. And it’s being put to use: the Transitous website lists a dozen applications built on top of it, and quite a few of the talks in the Railways and Open Transport track at FOSDEM referenced it.

And more

The above doesn’t capture all of FOSDEM, of course, as every couple of meters you run into somebody else to talk to. Among other things, I also got to discuss new developments in standardization around semantic annotations in emails, OSM indoor mapping, and new map vector tile formats.

It’s been a few months since I last blogged about KDE Linux, KDE’s operating system of the future. So I thought it was time to fill people in on recent goings-on! It hasn’t been quiet, that’s for sure:

Project health is looking good

KDE Linux hit its alpha release milestone last September, encompassing basic usability for developers, internal QA people, and technical contributors. Our marketing-speak goal was “The best alpha release you’ve ever used”.

I’d say it’s been a success, with folks in the KDE ecosystem starting to use and contribute to the project. A few months ago, most commits in KDE Linux were made by just 2 or 3 of us; more recently, there’s a healthy diversity of contributors. Check out the last few days of commits in the repository and you’ll see it!

The next step is working towards a beta release. This is something we can consider the equal of other traditional Linux OSs focused on traditional Linux users: the people who are slightly to fairly technical and computer-literate, but not necessarily developers. Solidly “two-dots-in-computers” users. We’re 62% of the way there as of the time of writing.

First public Q&A and development call

KDE Linux developers held their first public meeting today! The notes can be found here. This is the first of many, and these meetings will of course be open to all.

In this first meeting, devs fielded questions from technical users and discussed a number of open topics, coming to actionable conclusions on several of them. The vibe was really good.

If you want to know when the next meeting will be held, watch this space for a poll!

Delta updates enabled by default

After months of testing by many contributors, we turned on delta updates.

Delta updates increase update speed substantially by calculating the difference between the OS build you have and the one you’re updating to, only downloading that difference, and then applying it like a patch to build the new OS image.

As a result, each OS update should consume closer to 1-2 GB of network bandwidth, down from around 7 GB right now (that’s if you’re updating daily; longer intervals between updates will result in larger deltas). Still a lot, but now we have a mechanism for reducing the delta between builds even more.

This wonderful system was built by Harald Sitter. Thanks, Harald!

Integrating plasma-setup and plasma-login-manager

KDE Linux now delegates most first-user setup tasks to plasma-setup:

plasma-setup supports the use case of buying a device with KDE Plasma pre-installed where the user is expected to create a user account as part of the initial setup.

Thanks very much to Kristen McWilliam, not only for taking the lead on developing plasma-setup, but also for integrating it into KDE Linux!

In addition, KDE Linux now uses plasma-login-manager instead of SDDM. This is a modern login manager intended to integrate more deeply with Plasma, for operating systems that want that and use systemd (like KDE Linux does). Development was done primarily by David Edmundson and Oliver Beard, with assistance from Nicolas Fella, Harald Sitter, and Neal Gompa. KDE Linux integration work was done by Thomas Duckworth and Harald Sitter.

KDE Linux has been a superb test-bed for developing and integrating these new Plasma components, and now other operating systems get to benefit from them, too!

Better hardware support

As an operating system built for users bringing their own hardware, KDE Linux is fairly liberal about the drivers and hardware support packages that it includes.

Compared to the initial alpha release last September, the latest builds of KDE Linux include better support for scanners, fancy drawing tablets, Bluetooth file sharing, Android devices, Razer keyboards and mice, Logitech keyboards and mice, fancy many-button mice of all kinds, LVM-formatted disks, exFAT and XFS-formatted disks, audio CDs, Yubikeys, smart cards, virtual cameras (e.g. using your phone as one), USB Wi-Fi dongles with built-in flash storage, certain fancy professional audio devices, and Vulkan support on certain GPUs. Phew, that’s a lot!

Thanks to everyone who reported these issues, and to Hadi Chokr, Akseli Lahtinen, Thomas Duckworth, Fabio Bas, Federico Damián Schonborn, Giuseppe Calà, Andrew Gigena, and others who fixed them!

There’s still more to do. KDE Linux regularly receives bug reports from people saying their devices aren’t supported as well as they could be, or at all — especially older printers, and newer laptops from Apple and Microsoft. No huge surprises here, I guess! But still, it’s a big topic.

Better performance

Thomas Duckworth, Hadi Chokr, and I dug into performance and efficiency, improving the configuration of the kernel and various middleware layers like PulseAudio and PipeWire. The changes include using the Zen kernel, optimizing kernel performance, increasing various internal limits, and optimizing for low-latency audio.

Thanks very much to the CachyOS folks who blazed many of these trails, and whose config files we learned from.

Quieter boot process

Previously, the OS image chooser was shown on every boot. This is good for safety, but a waste of time and an unnecessary exposure of technical details in other cases.

Thomas Duckworth hid the boot menu by default, but made it show up if you mash the spacebar, or if the computer was force-restarted, or restarted normally very quickly after login. These are symptoms of instability; in those cases we show the OS image chooser on the next boot so you can roll back to an older OS version if needed.

Appropriately-set wireless regulatory domain

Different countries have different regulations regarding wireless hardware’s maximum transmit power. If you don’t tell the kernel what country your computer is located in, it will default to the lowest transmit power allowed anywhere in the world! This can reduce your Wi-Fi performance.

Thanks to Thomas Duckworth, KDE Linux now sets the wireless regulatory domain appropriately, looking it up from your time zone, and letting your hardware use all the power it legally can. It updates the value if you change the time zone, too! And also thanks to Neal Gompa for building the tool we integrated into KDE Linux for this.
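For the curious, a timezone-to-country lookup needs nothing beyond the tzdata already on disk. Here's an illustrative sketch (in Rust; this is not the actual tool KDE Linux integrated) that reads the mapping out of zone1970.tab:

use std::fs;

// Map a timezone like "Europe/Berlin" to an ISO 3166 country code ("DE")
// using tzdata's zone1970.tab (tab-separated: codes, coordinates, TZ name).
// The resulting code is what you'd hand to `iw reg set`.
fn country_for_zone(zone: &str) -> Option<String> {
    let tab = fs::read_to_string("/usr/share/zoneinfo/zone1970.tab").ok()?;
    for line in tab.lines().filter(|l| !l.starts_with('#')) {
        let cols: Vec<&str> = line.split('\t').collect();
        if cols.len() >= 3 && cols[2] == zone {
            // Some zones span several countries ("DE,DK"); take the first.
            return cols[0].split(',').next().map(str::to_owned);
        }
    }
    None
}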

The idea for this one came from reading CachyOS docs asking users to do it manually. Maybe we have something worth copying now!

RAR support

Hadi Chokr added RAR support to our builds of Ark, KDE’s un-archiver. Now you can keep on modding your old games!

“Command not found” handler

I built a simple “command not found” handler that tries its best to steer people in the right direction when they run a command that isn’t available on KDE Linux.

Better Zsh config

KDE Linux now includes a default Zsh config, and it’s been refined over time by multiple people who clearly love their Zsh!

Thank you to Thomas Duckworth, Clément Villemur, and Daniele for this work.

Documentation moved to a more official location

KDE Linux documentation was wiki-based for the past year and a half, and benefited from the sort of organic growth easily possible there. However, it’s now found a more permanent and professional-looking home: https://kde.org/linux/docs.

This will be kept up to date and expanded over time just like the old wiki docs — which now point at the new locations. This work was done by me.

Easy setup for KDE development

KDE developers are a major target audience of KDE Linux. To that end, I wrote some setup tools that make it really easy for people to get started with KDE development. It’s all documented here; basically just run set-up-system-development in a terminal window and you’re ready! The tool will even tell you what to do next.

Saying hello to KCalc, Qrca, Kup, and new CLI tools

KDE Linux includes an intentionally minimal set of GUI apps, leaning on users to discover apps themselves — and if that sucks, we need to fix it. But we decided that a calculator app made sense to include by default. After much hemming and hawing between KCalc and Kalk (it was a tough call!), we eventually settled on KCalc, and now it’s pre-installed.

We’re also now including Qrca, a QR code scanner app. This supports the Network widget’s “scan QR code to connect to network” feature.

Next up is KDE’s Kup backup program for off-device backups! Kup is not nearly as popular as it should be, and I hope more exposure helps to get it additional development attention, too.

Finally, we pre-installed some useful command-line debugging and administration tools, including kdialog, lshw, drm_info, cpupower, turbostat, plocate, fzf, and various Btrfs maintenance tools.

This work was done by me, Ryan Brue, Kristen McWilliam, and Akseli Lahtinen.

Waving goodbye to Snap, Homebrew, Kate, Icon Explorer, Elisa, and iwd

Since the beginning, KDE Linux included Snap as part of an “all of the above” approach to getting software.

Snap works fine (in fact, better than Flatpak in some ways), but came with a big problem for us: It’s only available in the Arch User Repository (AUR). Getting software from AUR isn’t great, and we’ve been moving away from it, with an explicit goal of not using AUR at all by the time we complete our beta release.

Conversations with Arch folks revealed that there was no practical path to moving Snap out of AUR and into Arch Linux’s main repos, and we didn’t fancy building such a large and complex part of the system ourselves. So unfortunately that meant it had to go. We’re now all-in on Flatpak.

Homebrew was another solution for getting software not available in Discover, especially technical software libraries needed for software development. We never pre-installed Homebrew, but we did officially document and recommend it. However the problem of Homebrew-installed packages overriding system libraries was worse than we originally thought; there were reports of crashing and “doesn’t boot” issues not resolvable by rolling back the OS image, because Homebrew installs stuff in your home folder rather than a systemwide location. Accordingly, we’ve removed our recommendation, replacing it with a warning against using Homebrew in our documentation. Use Distrobox until we come up with something more suitable.

Another removal was Kate. Kate is amazing, but we already pre-install KWrite, and the two apps overlap significantly in functionality. Eventually we reasoned that it made sense to only pre-install KWrite as a general text editor and keep Kate as an optional thing for experts who need it.

We also removed Icon Explorer from the base image because developers who need it can now get a Flatpak build of it from Flathub.

Next up was Elisa. Local music library manager apps are not very popular these days, and the pre-installed Haruna app can already play audio files. So out it went, I’m afraid. Anyone who uses it (like I do!) can of course manually install it, no problem.

And finally, the iwd wireless daemon leaves KDE Linux. It was never enabled by default; it was just an option for those who needed it. And the one user who did need it eventually found a better solution to their wireless card issues. With news of Intel divesting from iwd, we decided it didn’t have a sunny future in KDE Linux anymore and removed it.

This work was done by me.

And lots more

These are just the larger user-facing changes. Tons of smaller and more technical changes were merged as well. It’s a fairly busy project.

You can use it!

It’s also not a theoretical project; KDE Linux is released and I typed this blog post on it! I’ve developed Plasma on it and run a business on it, too. It’s been my daily driver since last August.

You can probably install KDE Linux on your computer too, and become a part of the future. Even if you’re worried about using alpha software because you’re not a software developer or a mega nerd, it’s perfect for a secondary computer. KDE Linux is quite stable, and the OS rollback functionality reduces risk even more.

You can help build it!

If any of this is exciting, come help us build it! Working on KDE Linux is pretty easy, and there’s lots of support.

Welcome to a new issue of This Week in Plasma!

This week the Plasma team continued polishing up Plasma 6.6 for release in a week and a half. With that being taken care of, a lot of fantastic contributions rolled in on other diverse subjects, adding cool features and improving user interfaces. Check ’em out here:

Notable New Features

Plasma 6.7.0

The Window List widget now supports sorting and shows section headers in its full view, making it easier to navigate windows by virtual desktops, activities, or alphabetically. (Shubham Arora, plasma-desktop MR #3434)

Window List widget showing off its sorting and grouping capabilities

Notable UI Improvements

Plasma 6.6.0

System Settings’ Touchscreen Gestures page now hides itself when there are no touchscreens. This completes the project to hide all inapplicable hardware pages! (Alexander Wilms and Kai Uwe Broulik, KDE Bugzilla #492718 and systemsettings MR #391)

The “Enable Bluetooth” switch in the Bluetooth widget no longer randomly displays a blue outline on its handle even when not clearly focused. (Christoph Wolk, KDE Bugzilla #515243)

Plasma 6.7.0

Plasma’s window manager now remembers tiling padding per screen. (Tobias Fella, KDE Bugzilla #488138)

The wallpaper selection dialog now starts in the location you navigated to the last time you used it. (Sangam Pratap Singh, KDE Bugzilla #389554)

Theme previews on System Settings’ cursor settings page now scale better when using massive cursor sizes. (Kai Uwe Broulik, plasma-workspace MR #6244)

Update items on Discover’s updates page now have better layout and alignment. (Nate Graham, discover MR #1252)

Completed the project to make the delete buttons on System Settings’ theme chooser pages consistent. (Sam Crawford, plasma-desktop MR #3506, sddm-kcm MR #101, and plymouth-kcm MR #47)

The System Tray icon that Discover uses to represent an in-progress automatic update now looks a lot more like the other update icons, and a lot less like Nextcloud’s icon. (Kai Uwe Broulik, discover MR #1258 and breeze-icons MR #526)

Notable Bug Fixes

Plasma 6.5.6

It’s no longer possible to accidentally close the “Keep display configuration?” confirmation dialog by panic-clicking, unintentionally keeping bad settings instead of reverting them. (Nate Graham, kscreen MR #460)

Fixed a regression in sRGB ICC profile parsing that reduced color accuracy. (Xaver Hugl, KDE Bugzilla #513691)

3rd-party wallpaper plugins that include translations now show that translated text as expected. (Luis Bocanegra, KDE Bugzilla #501400)

Plasma 6.6.0

Fixed multiple significant issues on the lock screen that could be encountered with fingerprint authentication enabled: one that could break fingerprint unlocking, and another that could leave you with an “Unlock” button that did nothing when clicked. (David Edmundson, KDE Bugzilla #506567 and KDE Bugzilla #484363)

Fixed a Plasma crash caused by applying a global theme that includes a malformed layout script. (Marco Martin, KDE Bugzilla #515385)

Panel tooltips no longer inappropriately respect certain window placement policies on Wayland. (Tobias Fella, KDE Bugzilla #514820)

User-created global shortcuts are now always categorized as “Applications”, resolving an issue whereby apps added by choosing an executable using the file picker dialog would be inappropriately categorized as system services and couldn’t be edited or deleted. (Tobias Fella, KDE Bugzilla #513565)

Fixed two issues with recent files and folders in the Kickoff application launcher: now it shows the correct file type icons for items, and no longer sometimes shows a weird duplicate “files” section. (Christoph Wolk, KDE Bugzilla #496179 and KDE Bugzilla #501903)

Spectacle now shows the correct resolution in its tooltip for rectangular region screenshots when using a fractional scale factor on a single screen. (Noah Davis, KDE Bugzilla #488034)

The “Open With” dialog now filters its view properly when opened from a Flatpak app. (David Redondo, KDE Bugzilla #506513)

Keyboard focus no longer gets stuck in the Search widget after its search results appear. (Christoph Wolk, KDE Bugzilla #506505)

Frameworks 6.23

Fixed a complicated issue that could sometimes break automatic KWallet wallet unlocking on login. (Bosco Robinson, KDE Bugzilla #509680)

Fixed a visual regression with certain item lists that made the last one touch the bottom of its view or popup. (Marco Martin, KDE Bugzilla #513459)

Notable in Performance & Technical

Frameworks 6.23

Reduced KRunner’s maximum memory usage while file searching is enabled. (Stefan Brüns, KDE Bugzilla #505838)

How You Can Help

KDE has become important in the world, and your time and contributions have helped us get there. As we grow, we need your support to keep KDE sustainable.

Would you like to help put together this weekly report? Introduce yourself in the Matrix room and join the team!

Beyond that, you can help KDE by directly getting involved in any other projects. Donating time is actually more impactful than donating money. Each contributor makes a huge difference in KDE — you are not a number or a cog in a machine! You don’t have to be a programmer, either; many other opportunities exist.

You can also help out by making a donation! This helps cover operational costs, salaries, travel expenses for contributors, and in general just keeps KDE bringing Free Software to the world.

To get a new Plasma feature or a bugfix mentioned here

Push a commit to the relevant merge request on invent.kde.org.

Friday, 6 February 2026

Let’s go for my web review for the week 2026-06.


The Retro Web

Tags: tech, hardware, history

This is a nice resource trying to document the history of computer hardware. Really cool stuff.

https://theretroweb.com/


IndieWebify.Me? Yes please!

Tags: tech, web, blog, self-hosting, indie

Looks like an interesting tool to check that you’re doing “everything right” on your blog. That said, it looks like quite a few hoops to jump through. I wish there were a way to make all this a bit easier.

https://blog.rickardlindberg.me/2026/02/04/indie-webify-me-yes-please.html


“IG is a drug”: Internal messages may doom Meta at social media addiction trial

Tags: tech, social-media, attention-economy, law

Clearly a trial to keep an eye on. Some of those internal memos might prove decisive.

https://arstechnica.com/tech-policy/2026/01/tiktok-settles-hours-before-landmark-social-media-addiction-trial-starts/


Backseat Software

Tags: tech, product-management, metrics, ux, attention-economy, surveillance, history

Excellent historical perspective on how we ended up with applications filled with annoying interruptions and notifications. It was indeed done one step at a time, and it has really led to poor UX.

https://blog.mikeswanson.com/backseat-software/


AdNauseam

Tags: tech, web, browser, advertisement, attention-economy, privacy

I’m not sure I’m quite ready to use this… Still, I like the idea: make some noise and have the companies behind those invasive ads just pay for nothing. The more users the better, I guess.

https://adnauseam.io/


Europe’s tech sovereignty watch

Tags: tech, europe, business, politics, vendor-lockin

Despite clearly being an advertisement for Proton’s offering, this shows how reliant European companies are on vendors that pose strategic problems. We can cheer the EU policies when they go in the right direction. It’s probably still not enough, and European companies are clearly asleep at the wheel.

https://proton.me/business/europe-tech-watch


GDPR is a failure

Tags: tech, law, gdpr

The ideas behind GDPR are sound. The enforcement is severely lacking though. Thus its effects are too limited.

https://nikolak.com/gdpr-failure/


Mobile carriers can get your GPS location

Tags: tech, mobile, gps, privacy, surveillance, protocols

Yep, it’s worse than the usual triangulation everyone thinks about. It’s right there in the protocol, which is why you’d better not leave the GPS on all the time.

https://an.dywa.ng/carrier-gnss.html


Meet Rayhunter: A New Open Source Tool from EFF to Detect Cellular Spying

Tags: tech, spy, surveillance, mobile, hardware

Time to spy on the spies. Or at least know when they’re around.

https://www.eff.org/deeplinks/2025/03/meet-rayhunter-new-open-source-tool-eff-detect-cellular-spying


What If? AI in 2026 and Beyond

Tags: tech, ai, machine-learning, gpt, copilot, business, economics

Interesting analysis. It gives a balanced view on the possible scenarios around the AI hype.

https://www.oreilly.com/radar/what-if-ai-in-2026-and-beyond/


Selfish AI

Tags: tech, ai, machine-learning, gpt, copilot, copyright, ecology, economics, ethics

Let’s not forget the ethical implications of those tools indeed. Too often, people set them aside out of the “oooh, shiny toys” or the “I don’t want to be left behind” reactions. Both lead to a very unethical situation.

https://www.garfieldtech.com/blog/selfish-ai


The API Tooling Crisis: Why developers are abandoning Postman and its clones?

Tags: tech, web, api, tests

Another space with rampant enshittification… No wonder users are jumping between alternatives.

https://efp.asia/blog/2025/12/24/api-tooling-crisis/


What’s up with all those equals signs anyway?

Tags: tech, email, encodings

If you didn’t know about quoted-printable encoding, this is a good way to understand it.

https://lars.ingebrigtsen.no/2026/02/02/whats-up-with-all-those-equals-signs-anyway/


The Disconnected Git Workflow

Tags: tech, git, email

A good reminder that Git doesn’t force you to use a web application to collaborate on code.

https://ploum.net/2026-01-31-offline-git-send-email.html


4x faster network file sync with rclone (vs rsync)

Tags: tech, networking, syncing

Need to move many files around? Rsync might not be the best option anymore.

https://www.jeffgeerling.com/blog/2025/4x-faster-network-file-sync-rclone-vs-rsync/


From Python 3.3 to today: ending 15 years of subprocess polling

Tags: tech, python, processes, system

A nice improvement in Python for waiting on the end of a subprocess. It nicely explains the underlying options and the available syscalls, in case you need to do the same in your code.

https://gmpy.dev/blog/2026/event-driven-process-waiting


Django: profile memory usage with Memray

Tags: tech, python, memory, profiling, django

Looks surprisingly easy to profile the Django startup. It probably also makes sense to profile other parts of your application, but that is likely a bit more involved.

https://adamj.eu/tech/2026/01/29/django-profile-memray/


Flavours of Reflection

Tags: tech, reflection, type-systems, c++, java, python, dotnet, rust

Looking at several languages and their reflection features. What’s coming with C++26 is really in a class of its own. I just have concerns about its readability, though.

https://semantics.bernardteo.me/2026/01/30/flavours-of-reflection.html


In Praise of –dry-run

Tags: tech, tools, tests, command-line

This is indeed a very good option to have when you make a command line tool.

https://henrikwarne.com/2026/01/31/in-praise-of-dry-run/


Some Data Should Be Code

Tags: tech, data, programming, buildsystems, infrastructure, automation

There is some truth to this. Moving some things to data brings interesting properties, but it’s a two-edged sword: things are simpler to use when kept as code. Maybe the answer is code emitting structured data.

https://borretti.me/article/some-data-should-be-code


Plasma Effect

Tags: tech, graphics, shader

Neat little shader for a retro demo effect.

https://www.4rknova.com/blog/2016/11/01/plasma


Forget technical debt

Tags: tech, technical-debt, engineering, organisation

Interesting insight. Gives a lot to ponder indeed. Focusing on technical debt alone probably won’t improve a project much; it’s thus important to take a broader view for long-lasting improvements.

https://www.ufried.com/blog/forget_technical_debt/



Bye for now!