Skip to content

Saturday, 30 August 2025

Every year the KDE Community conducts a large-scale field test of KDE Itinerary, their fantastic travel companion app, disguised as an annual community conference. This year’s Akademy takes place in Berlin, the capital of Germany. I have decided to try and exclusively use KDE Itinerary (the full trip planner app) and KTrip (focused on public transport) for all my travel needs to and from the venue as well as its accompanying events.

KDE Itinerary travel companion app “Select Departure Stop” page with a grayed out list of recently searched for stops. “Current Location” is highlighted in a “Determining Location…” state
WIP: Finding your way home from dinner with Itinerary

Did you know that you can import various parts of the Akademy website into Itinerary? For example, if you go to the Social Event page and “share” it with Itinerary, it will add date, time, and location of the event to your timeline. This even works with many other websites, such as hotels. Of course, it’s preferred to import the actual reservation but if you don’t have one, you can just paste the hotel website URL into Itinerary, add your check-in and check-out dates, and you’re good to go.

While I won’t be needing it for Berlin, another feature I’ve wanted for a long time is a live currency converter. Itinerary has always displayed the exchange rate when travelling to a foreign country, but it didn’t let you input an arbitrary amount and convert it for you. Now, on every trip to a country that uses a currency other than your home country’s, a handy currency converter is displayed.

KDE Itinerary app displaying details of a trip to Atlanta: No compatible power plugs. An interactive unit converter from USD to EUR is pointed at.
Anything you need to know about your trip, now with an interactive currency converter

I realized that we actually didn’t download the currency conversion table anymore. In KDE Frameworks 5, KUnitConversion did that implicitly when you used it, but it’s a simple value-based synchronous API, and doing an unsolicited blocking network request in there was obviously quite problematic – not just because KRunner is multi-threaded. In Frameworks 6, there’s instead a dedicated update job, but you have to run it yourself when needed. On the desktop, KRunner’s unit converter likely already did that, which is why this probably went unnoticed, but on Android the app certainly cannot rely on that.

In my opinion, the biggest issue with using Itinerary/KTrip as a daily driver to navigate a foreign city, however, was the lack of a “Current Location” option when searching for a departure stop. Sure, when I know where I am, I can search for the stop by name, but what about finding my way back to the hotel from my dinner place? I therefore wrote a prototype that adds a button to the stop picker page. It uses Qt Location to get the device’s location and feeds it into a journey request. It’s still very much a work in progress and definitely not merged in time for Akademy, so in the meantime you’ll have to use a custom build.
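The core of such a lookup is just a one-shot position request. Here is a minimal sketch using the Qt Positioning classes (QGeoPositionInfoSource and friends are real Qt APIs; the helper class and signal around them are made up for illustration and are not the actual Itinerary code):

#include <QGeoCoordinate>
#include <QGeoPositionInfo>
#include <QGeoPositionInfoSource>
#include <QObject>

class CurrentLocationHelper : public QObject
{
    Q_OBJECT
public:
    explicit CurrentLocationHelper(QObject *parent = nullptr)
        : QObject(parent)
        , m_source(QGeoPositionInfoSource::createDefaultSource(this))
    {
        if (!m_source) {
            return; // no positioning backend available on this device
        }
        connect(m_source, &QGeoPositionInfoSource::positionUpdated, this,
                [this](const QGeoPositionInfo &info) {
            // feed coordinate().latitude() / longitude() into the journey request here
            Q_EMIT located(info.coordinate());
        });
        m_source->requestUpdate(30000); // single update, 30 second timeout
    }

Q_SIGNALS:
    void located(const QGeoCoordinate &coordinate);

private:
    QGeoPositionInfoSource *m_source = nullptr;
};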

In order to test my changes on an actual Android phone, I used Craft, KDE’s meta build system and package manager. It is super easy to set up for building libraries and applications for any major platform KDE software supports. For Android, it comes with a ready-to-use Docker image containing all the necessary Qt and Android libraries. I just had to configure it to use my local git checkout where I already made some changes and was able to produce a working APK within a couple of minutes. All of that without even touching Android Studio or worrying about getting the right NDK and Gradle version and what not. Great stuff!

I am very much looking forward to seeing you all in Berlin very soon!

Going to Akademy
6 – 11 September 2025
Technische Universität Berlin, Germany

A somewhat recurring problem I encounter in things I work on is the need to compute simplified geographic polygons, or more specifically, simplified hulls of geographic polygons. Here’s an overview on the currently used approach, maybe someone has pointers to better algorithms for this.

Coverage polygons

Geographic polygons are used in a few different places:

  • The Transport API Repository and consumers of it like KPublicTransport use them for describing areas covered by a public transport routing service, to automatically pick a suitable one at a given location.
  • The Emergency and Weather Aggregation Service uses them to describe areas affected by an emergency. Alerts that don’t carry explicit geometry but refer to an external dataset using a geographic code are particularly affected here, as those datasets (e.g. derived from OSM boundaries) are used for many different purposes and thus are often extremely detailed.

Meter-resolution geometry isn’t needed for any of those use-cases; hundreds of meters or even a kilometer are more than sufficient. And using a higher resolution does come at a cost: larger geometry needs to be stored and transferred, and rendering or doing intersection tests becomes significantly more expensive.

Simplified coverage

Applying generic geometry simplification algorithms here isn’t ideal, as we want a result that still covers the original geometry, i.e. a simplified hull. In the above use-cases covering more area is merely inefficient or maybe mildly annoying, while covering a smaller area can be catastrophic (e.g. someone potentially affected by an emergency not getting alerted).

A bounding box is the extreme case fulfilling that requirement and minimizing the resulting geometry, but we are obviously looking for a more reasonable trade-off between additionally covered area and geometry complexity.

Also, not all additionally covered area is equal here: covering extra area on land potentially impacts more people than covering extra area at sea. Or, more generally, additionally covered area hurts more the more densely populated it is, with the sea just being the extreme (although not uncommon) case. None of the below takes any of this into consideration though, but it’s one aspect where I’d expect some room for improving the result.

Douglas-Peucker

A common way to simplify polylines and polygons is the Douglas-Peucker algorithm. What it does is check whether, for two points, the maximum distance of any point in between to the line from the first to the last one is below a given threshold. If yes, all intermediate points can be discarded; otherwise this is recursively repeated separately for the two parts split at the point with the maximum distance.

Animation of the Douglas-Peucker algorithm applied to a polyline with 10 points.
Source: Wikipedia, CC-BY-SA 3.0

This is fairly easy to implement, and we have multiple copies of this around in different languages and for different polygon or polyline data structures, e.g. in Marble, KPublicTransport, KPublicAlerts, FOSS Public Alert Server and Transport API Repository.
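For reference, a minimal recursive implementation of the idea could look like this. It is just a sketch on a plain point list, not one of the copies linked above, and the threshold is in the same unit as the coordinates:

#include <cmath>
#include <cstddef>
#include <vector>

struct Point { double lat, lon; };

// Perpendicular distance of p from the line through a and b
// (coordinates treated as planar, good enough for small features).
static double distanceToLine(const Point &p, const Point &a, const Point &b)
{
    const double dx = b.lon - a.lon, dy = b.lat - a.lat;
    const double len = std::hypot(dx, dy);
    if (len == 0.0)
        return std::hypot(p.lon - a.lon, p.lat - a.lat);
    return std::abs(dy * p.lon - dx * p.lat + b.lon * a.lat - b.lat * a.lon) / len;
}

static void douglasPeucker(const std::vector<Point> &in, std::size_t first, std::size_t last,
                           double threshold, std::vector<Point> &out)
{
    double maxDist = 0.0;
    std::size_t maxIdx = first;
    for (std::size_t i = first + 1; i < last; ++i) {
        const double d = distanceToLine(in[i], in[first], in[last]);
        if (d > maxDist) { maxDist = d; maxIdx = i; }
    }
    if (maxDist > threshold) {
        // keep the farthest point and recurse into both halves
        douglasPeucker(in, first, maxIdx, threshold, out);
        douglasPeucker(in, maxIdx, last, threshold, out);
    } else {
        out.push_back(in[last]); // all intermediate points get dropped
    }
}

std::vector<Point> simplify(const std::vector<Point> &poly, double threshold)
{
    std::vector<Point> out{poly.front()};
    douglasPeucker(poly, 0, poly.size() - 1, threshold, out);
    return out;
}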

For sufficiently small thresholds and relatively “smooth” geometries this works quite well, although on its own it doesn’t guarantee the result is a hull of the input.

Results however deteriorate when using a threshold in the same order of magnitude as the features in the geometry, say a kilometer threshold on a geometry containing a highly detailed fjord-like coastline.

A set of islands in Alaska, with an overlay polygon perfectly covering their outlines.
2.6MB GeoJSON input with highly detailed complex coast lines.
The same islands, this time with a very jagged coverage polygon.
Douglas-Peucker result with a threshold of ~1km (0.5% of the overall area width).

And if you push this further, it can also result in self-intersecting polygons, which is something we generally want to avoid here.

Polygon offsetting

To deal with the fact that Douglas-Peucker doesn’t produce a hull, we could offset or buffer the polygon. That is, enlarging it by moving all points to the “outside” by a given amount.

Implementing that is a bit more involved, so this is usually done using the Clipper2 library, which seems to be somewhat of a reference implementation for the more complex polygon manipulation algorithms.

There are a few small surprises in there, such as it working with integer coordinates and typos in its API (“childs”), but beyond that it’s reasonably easy to use both from C++ and Python (examples in Marble, KPublicTransport, FOSS Public Alert Server and Transport API Repository).
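In Clipper2 terms, offsetting a polygon outwards then boils down to a single call. A rough sketch, where the degree-to-meter conversion and the scale factor are simplifications for illustration rather than what the linked projects do:

#include <clipper2/clipper.h>

using namespace Clipper2Lib;

// Offset (buffer) a polygon outwards by approximately offsetMeters.
// Clipper2's core works on integer coordinates, so lat/lon degrees are
// scaled up before the call; one degree is treated as roughly 111 km.
Paths64 bufferPolygon(const Paths64 &polygon, double offsetMeters, double scale = 1e7)
{
    const double delta = offsetMeters / 111000.0 * scale; // meters -> scaled degrees
    return InflatePaths(polygon, delta, JoinType::Round, EndType::Polygon);
}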

Simplifying by offsetting

These two steps can also be combined in another way to obtain a simplified hull:

  • Offset the polygon first, by a value large enough that unwanted details get merged together. E.g. when offsetting by half their width, narrow concave areas such as fjords disappear, islands get “swallowed” by nearby land, etc.
  • Apply Douglas-Peucker, with a threshold below the offset size. The result will then still be a hull of the original geometry, and since the previous step removed small details that would interfere we get a “smoother” result.
  • Apply a negative offset, to shrink the polygon closer to its original size again. The maximum value to use for that is the value the polygon was initially offset with minus half the threshold used for Douglas-Peucker.

This has proven to be very effective for polygons with highly detailed, relatively small concave features, such as land polygons with a fjord-heavy coastline. And as a welcome side-effect it often also automatically fixes self-intersecting geometry in the input data.
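Reusing the helpers sketched above, the combined procedure is roughly the following. douglasPeuckerPaths is a hypothetical helper standing in for applying the earlier Douglas-Peucker sketch to each path, with its threshold taken in meters for symmetry:

// 1. grow by 'offset', 2. simplify with a threshold below the offset, 3. shrink
// back by at most offset - threshold / 2 so the result remains a hull of the input.
Paths64 simplifiedHull(const Paths64 &polygon, double offsetMeters, double thresholdMeters)
{
    Paths64 result = bufferPolygon(polygon, offsetMeters);
    result = douglasPeuckerPaths(result, thresholdMeters); // hypothetical helper, see above
    return bufferPolygon(result, -(offsetMeters - thresholdMeters / 2.0));
}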

The same islands again, this time with a smoother overlay polygon due to also covering many smaller water features.
Simplified hull, reduced to just 17kB.

It’s not quite as effective on the inverse though, i.e. geometry with fine-grained convex features, such as the sea polygon in the above example.

Truncating numbers

There’s a technical simplification orthogonal to the above also worth mentioning when using GeoJSON or other textual formats such as CAP. Printed out naively, floating point numbers easily end up with 12 or 13 decimals. That’s enough for sub-millimeter resolution, which is pointless when working with kilometer-scale features.

It’s therefore also useful to implement adaptive rounding and truncation of the coordinates based on the size of the overall geometry, which usually results in just four to six decimals in our use-cases and a two- to three-fold reduction in (textual) size (example).
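A sketch of such adaptive rounding; the decimal cut-offs here are made-up values for illustration, not the ones used in the linked example:

#include <cmath>

// Pick the number of decimals from the extent of the geometry (in degrees):
// four decimals is roughly 10 m resolution, six roughly 10 cm.
int decimalsForExtent(double extentDegrees)
{
    if (extentDegrees > 1.0)
        return 4;
    if (extentDegrees > 0.1)
        return 5;
    return 6;
}

double roundCoordinate(double value, int decimals)
{
    const double factor = std::pow(10.0, decimals);
    return std::round(value * factor) / factor;
}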

This only helps with storage and transfer volume of course; the computational complexity of processing the geometry doesn’t change.

Can this be done better?

As this isn’t really such an uncommon problem, are there better ways to do this? I.e. are there ways to achieve a higher-quality geometry at the same size, or ways to further reduce the size without compromising quality? Are there other algorithms and approaches worth trying?

Welcome to a new issue of This Week in Plasma!

This week saw huge improvements to the Plasma clipboard, KRunner, and drawing tablet support — not to mention a bunch of UI improvements in Discover, and plenty more, too! So without further ado…

Notable New Features

Plasma’s clipboard now lets you mark entries as favorites, and they’ll be permanently saved so you can always access them easily! This is very useful when you find yourself pasting the same common text snippets all the time. The feature request was 22 years old; this may be a new record for oldest request ever implemented in KDE! (Kendrick Ha, link)

Starred / saved clipboard items

Plasma now lets you configure touch rings on your drawing tablet! (Joshua Goins, link)

Discover now lets you install hardware drivers that are offered in your operating system’s package repos! (Evan Maddock, link)

KRunner and KRunner-powered searches can now find global shortcuts! (Fushan Wen, link)

Global shortcuts/actions in KRunner

Notable UI Improvements

Plasma 6.5.0

KRunner and KRunner-powered searches now use fuzzy matching for applications. (Harald Sitter, link)

Fuzzy match in KRunner for “Thunderbirb”

Improved the way Discover presents error messages to be a bit more user-friendly and compliant with KDE’s Human Interface Guidelines. (Oliver Beard and Nate Graham, link 1 and link 2)

Discover now lets you write a review for apps that don’t have any reviews yet. (Nate Graham, link)

On operating systems using RPM-OSTree (like Fedora Kinoite), there’s no longer an awkward red icon used in the sidebar and other places you’d expect black or white icons. (Justin Zobel, link)

KDE Gear 25.12.0

Opening a disk in KDE Partition Manager from its entry in Plasma’s Disks & Devices widget no longer mounts the disk, which was annoying since you’d then have to unmount it in the app before you could do anything with it. (Joshua Goins, link 1 and link 2)

Notable Bug Fixes

Plasma 6.4.5

Fixed a critical issue that could cause the text of a sticky note on a panel to be permanently lost if that panel was cloned and then later deleted. This work also changes handling for deleted notes’ underlying data files: now they’re moved to the trash, rather than being deleted immediately. Should be a lot safer now! (Niccolò Venerandi, link 1 and link 2)

Fixed a very common KWin crash when changing display settings that was accidentally introduced recently. (David Edmundson, link)

Made a few strings in job progress notifications translatable. (Victor Ryzhykh, link)

Fixed an issue that could allow buttons with long text to overflow from System Monitor’s process killer dialog when the window was very very small. (Nate Graham, link)

Fixed an issue in the time zone chooser map that would cause it to not zoom to the right location when changing the time zone using one of the comboboxes. (Kai Uwe Broulik, link)

The warnings shown by System Settings’ Fonts page in response to various conditions will now be shown when you adjust all the fonts at once, not only when you adjust one at a time. (Nate Graham, link)

Plasma 6.5.0

Fixed a case where Plasma could crash while you were configuring the weather widget. (Bogdan Onofriichuk, link)

Fixed an issue that could cause System Settings to crash while quitting when certain pages were open. (David Redondo, link)

Plasma is now better at remembering if you wanted Bluetooth on or off on login. (Nicolas Fella, link)

Panels in Auto-Hide, Dodge Windows, and Windows Go Below modes will now respect the opacity setting. (Niccolò Venerandi, link)

Frameworks 6.18

Fixed an issue that caused Plasma to crash when dragging files from Dolphin to the desktop or vice versa when the system was set up with certain types of mounts. (David Edmundson, link)

Other bug information of note:

Notable in Performance & Technical

Plasma 6.5.0

Implemented support for “overlay planes” on single-output setups, which have the potential to significantly reduce GPU and power usage for compatible apps displaying full-screen content. Note that NVIDIA GPUs are currently opted out because of unresolved driver issues. (Xaver Hugl, link)

Implemented support for drag-and-drop to and from popups created by Firefox extensions, and presumably other popups implemented with the same xdg_popup system, too. (Vlad Zahorodnii, link)

Fixed an issue that would cause V-Sync to be inappropriately disabled in certain games using the SDL library. (Xaver Hugl, link)

Undetermined release date

The annotating feature in Spectacle has been extracted into a re-usable library so that it can also be used in other apps in the future! Such integration is still in progress (as is working out a release schedule for the git repo that the library lives in now), but you’ll hear about it once it’s ready! (Noah Davis and Carl Schwan, link)

How You Can Help

KDE has become important in the world, and your time and contributions have helped us get there. As we grow, we need your support to keep KDE sustainable.

You can help KDE by becoming an active community member and getting involved somehow. Each contributor makes a huge difference in KDE — you are not a number or a cog in a machine! You don’t have to be a programmer, either; many other opportunities exist, too.

You can also help us by making a donation! A monetary contribution of any size will help us cover operational costs, salaries, travel expenses for contributors, and in general just keep KDE bringing Free Software to the world.

To get a new Plasma feature or a bugfix mentioned here, feel free to push a commit to the relevant merge request on invent.kde.org.

Friday, 29 August 2025

When travelling, I tend to rely a lot on public Wi-Fi hotspots, for example on trains, while waiting at the station, in cafés, and so on.

Accepting the same terms and conditions every time gets annoying pretty quickly, so a few years ago I decided to automate this. The project that came out of that is freewifid.

It continuously scans for Wi-Fi networks it knows and sends you a notification when it finds one it can automatically connect to. You can then allow it to connect to that network automatically in the future.

A freewifid notification asking whether to connect to a known network

Adding support for new captive portals

Adding support for a new kind of captive portal is pretty easy. You just need to implement a small Rust trait that includes a function that sends the specific request for the captive portal. Often this is very simple and looks like this:

pub struct LtgLinkProvider {}

impl LtgLinkProvider {
    pub fn new() -> LtgLinkProvider { LtgLinkProvider {} }
}

impl CaptivePortal for LtgLinkProvider {
    fn can_handle(&self, ssid: &str) -> bool {
        ["Link WIFI"].contains(&ssid)
    }

    fn login(&self, http_client: &ureq::Agent) -> anyhow::Result<()> {
        // Store any cookies the landing page might send
        common::follow_automatic_redirect(http_client)?;

        http_client
            .post("http://192.168.1.100:8880/guest/s/default/login")
            .send_form([
                ("checkbox", "on"),
                ("landing_url", crate::GENERIC_CHECK_URL),
                ("accept", "PRISIJUNGTI"),
            ])?;

        Ok(())
    }
}

For finding out what needs to be sent, you can use your favourite browser’s inspection tools.

For testing, Plasma’s feature for assigning a random MAC address comes in very handy.

Integration with Plasma

It could be interesting to write a KCM for freewifid, so you can graphically remove networks again. Support for ignoring public networks in the presence of a given SSID is also already implemented, but currently needs to be enabled by editing the config file. Writing a KCM is not high on my list of priorities right now, but if this sounds like something you’d like to do, I’d happily help with the freewifid interfacing parts.

Project

The project is hosted on Codeberg. I’ll happily accept merge requests for additional captive portals there.

There are some prebuilt release binaries, but I’m not too certain they’ll work on every distribution. With a Rust compiler installed, the project is very simple to build (cargo build). A systemd unit is provided in the repository, which you can use to run freewifid as a user unit.

Freewifid also supports running as a system service non-interactively for use in embedded projects.

Thursday, 28 August 2025

After I took a longer break from KDE development, I’ve been back in action for a few months now. It’s really nice to be back among friends, hacking on what I like most: Plasma. My focus has been on Plasma Mobile with some work naturally bleeding over into other areas.

Plasma on more Devices

I’d like to share some bits and pieces that I’ve worked on in the past months. Most of my efforts have revolved around making Plasma Mobile suitable for a wider range of devices and use-cases. The purpose of this work is to make Plasma Mobile a more viable base for all kinds of products, not just mobile phones. We have a really mature software stack and great tools and applications, which make it relatively easy for companies to create amazing products without having to hire large teams or spend many years getting the product ready for their market. This is, I think, a very interesting and worthwhile niche for Plasma to get into, and I’m sure that Valve is not the only company that understands this.

Convergence Improvements

Convergence, or rather being able to support and switch between form factors and usage patterns, has always been a pet peeve of mine and still is.
One area was improving how the available screen real estate is used on landscape displays (Plasma Mobile has quite naturally been rather “portrait-focused”, though a few smaller patches go a long way).

Configurable number of columns in the Quicksettings drawer

I also improved usability with different pixel densities in the mobile shell by making the size of the top panel configurable. Also, when plugging in a second monitor, Plasma Mobile now switches from “all apps are maximized” to normal window management. (I’m currently working on KWin supporting more fine-grained window management. Currently, we just maximize all windows, which causes problems especially with modal dialogs.)

One changeset I worked on earlier this year makes it possible to ship multiple user interfaces for settings modules (“KCMs”). An example is the “remote desktop” KCM, which now shows a mobile-focused UI in Plasma Mobile. What happens here is that we load a main_phone.qml file in Plasma Mobile (where “phone” is picked from a list of form factors set in the environment of the session), so basically the “main” QML file gets picked based on the device. This mechanism allows us to share components quite easily, reducing the delta between different device UIs.

Mobile and Desktop RDP settings

This actually builds on top of work I did ten years ago, which added support for form factors to our plugin metadata system.
I’ve also made the “Display & Monitor” KCM usable on mobile; this is a pretty important thing to have working when you want to be able to plug an external monitor into your device. I have a mobile version of the keyboard KCM in the pipeline, too, but it will need a bit more work before it’s ready for prime time.

More Features

There’s a new page in the mobile Wi-Fi settings module, showing connection details and transfer speeds. The code for this was amazingly simple since I could lift most of the functionality from the desktop panel widget. A shared code-base across devices really speeds up development.

Connection details for the mobile wifi settings

I’ve also been adding useful features here and there, such as filtering the list of available Bluetooth devices by default so it only shows devices which actually make sense to pair (with an option to “Show all devices”, in good Plasma manner). This feature isn’t mobile-specific, so desktop and laptop users will benefit as well.

Welcome to Okular Mobile

Not all my work goes into infrastructural and “shell” bits. The mobile Okular version has now kind of caught up with the desktop version, since it got a nice welcome screen when opened. This allows the user to easily open a document, either from the “Documents” directory on disk (this is actually configurable) or from one of the recently viewed files.

Okular Mobile Welcome Screen

Going to Akademy ’25

After having missed our yearly world conference for a number of years, this year I will be at Akademy again. I’m really looking forward to seeing everybody in person again!

I’m going to Akademy!

See you in Berlin!

Hello again everyone!

I’m Derek Lin also known as kenoi, a second-year Math student at the University of Waterloo.

Through Google Summer of Code 2025 (GSoC), mentored by Harald Sitter, Tobias Fella, and Nicolas Fella, I have been developing Karton, a virtual machine manager for KDE.

As the program wraps up, I thought it would be a good idea to put together what I’ve been able to accomplish as well as my plans going forward.

A final look at Karton after the GSoC period.

Research and Initial Work

The main motivation behind Karton is to provide KDE users with a more Qt-native alternative to GTK-based virtual machine managers, as well as an easy-to-use experience.

I had first expressed interest in working on Karton in early February, when I made the initial full rewrite (see MR #4), using libvirt and a new UI wrapping the virt-install and virt-viewer CLIs. During this time, I had been doing research, writing a proposal, and trying out different virtual machine managers like GNOME Boxes, virt-manager, and UTM.

You can read more about it in my project introduction blog!

A screenshot of my rewrite on March 8, 2025.

VM Installation

One of my goals for the project was to develop a custom libvirt domain XML generator using Qt libraries and the libosinfo GLib API. I started working on the feature in advance in April and was able to have it ready for review before the official GSoC coding period.

I created a dialog to accept a VM name, installation media, storage, allocated RAM, and CPUs. libosinfo will attempt to identify the ISO file and return an OS short ID (e.g. fedora40, ubuntu24.04); otherwise users will need to select one from the displayed list.

Through the OS ID, libosinfo can provide certain specifications needed in the libvirt domain XML. Karton then fills in the rest, generating a UUID, a MAC address to configure a virtual network, and sets up display, audio, and storage devices. The XML file is assembled through QDomDocument and passed into a libvirt call that verifies it before adding the VM.
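As a toy illustration of the QDomDocument part: the element names below follow the libvirt domain XML format, but this is a heavily reduced sketch with placeholder values, not Karton’s actual generator:

#include <QDomDocument>
#include <QUuid>

QString buildDomainXml(const QString &name, int memoryMiB, int vcpus)
{
    QDomDocument doc;
    QDomElement domain = doc.createElement(QStringLiteral("domain"));
    domain.setAttribute(QStringLiteral("type"), QStringLiteral("kvm"));
    doc.appendChild(domain);

    auto addTextElement = [&doc, &domain](const QString &tag, const QString &text) {
        QDomElement element = doc.createElement(tag);
        element.appendChild(doc.createTextNode(text));
        domain.appendChild(element);
    };

    addTextElement(QStringLiteral("name"), name);
    addTextElement(QStringLiteral("uuid"), QUuid::createUuid().toString(QUuid::WithoutBraces));
    addTextElement(QStringLiteral("memory"), QString::number(memoryMiB * 1024)); // libvirt default unit is KiB
    addTextElement(QStringLiteral("vcpu"), QString::number(vcpus));

    // ... devices (disk, network, graphics, audio) are appended the same way

    return doc.toString();
}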

VM information (id, name, state, paths, etc) in Karton is parsed explicitly from the saved libvirt XML file found in the libvirt QEMU folder, ~/.config/libvirt/qemu/{domain_name}.xml.

All in all, this addition (see MR #8), although barebones, completely removed the virt-install dependency.

A screenshot of the VM installation dialog.

The easy VM installation process of GNOME Boxes has been an inspiration for me, and I’d like to improve Karton’s by adding a media installer and better error handling later on.

Official Coding Begins!

A few weeks into the official coding period, I had been addressing feedback and polishing my VM installer merge request. This introduced much cleaner class interface separation with regard to storing individual VM data.

SPICE Client and Viewer

My previous use of virt-viewer for interacting with virtual machines was meant as a temporary addition, as it is a separate application, is poorly integrated into Qt/Kirigami, and lacks needed customizability.

Previously, clicking the view button would open a virt-viewer window.

As such, the bulk of my time was spent working with SPICE directly, using the spice-client-glib library, in order to create a custom Qt SPICE client and viewer (see MR #15). This needed to manage the state of connections to VM displays and render them to KDE (Kirigami) windows. Other features such as input forwarding and audio playback also needed to be implemented.

I had configured all Karton-created VMs to be set to autoport for graphics which dynamically assigns a port at runtime. Consequently, I needed to use a CLI tool, virsh domdisplay, to fetch the SPICE URI to establish the initial connection.

The viewer display works through a frame buffer. The approach I took was rendering the pixel array I received to a QImage which could be drawn onto a QQuickItem to be displayed on the window. To know when to update, it listens to the SPICE primary display callback.

You can read more about it in my Qt SPICE client blog. As noted, this approach is quite inefficient as it needs to create a new QImage for every frame. I plan on improving this in the future.
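In simplified form, the rendering path looks roughly like the sketch below. It uses QQuickPaintedItem (a QQuickItem subclass) with invented class and method names, assuming a 32-bit RGB frame; it is not Karton’s actual viewer code:

#include <QImage>
#include <QPainter>
#include <QQuickPaintedItem>

class VmDisplayItem : public QQuickPaintedItem
{
    Q_OBJECT
public:
    using QQuickPaintedItem::QQuickPaintedItem;

    // Called from the SPICE primary display callback with the raw frame data.
    void updateFrame(const uchar *pixels, int width, int height, int stride)
    {
        // copy() detaches from the SPICE-owned buffer; this is the per-frame cost mentioned above
        m_frame = QImage(pixels, width, height, stride, QImage::Format_RGB32).copy();
        update(); // schedule a repaint
    }

    void paint(QPainter *painter) override
    {
        if (!m_frame.isNull()) {
            painter->drawImage(boundingRect(), m_frame); // scales to the item size
        }
    }

private:
    QImage m_frame;
};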

Screenshots of my struggles getting the display to work properly.

I had to manage receiving and forwarding Qt input. Sending QMouseEvents (mouse button clicks) was straightforward, as they can be mapped directly to SPICE protocol mouse messages. Keystrokes arrive as QKeyEvents, and the received scancodes, in evdev, are converted to PC XT for SPICE through a map generated by QEMU. Implementing scroll and drag followed similarly.

I also needed to manage receiving audio streams from the SPICE playback callback, writing them to a QAudioSink. One thing I liked is how well this approach supports multiple SPICE connections. For example, opening multiple VMs will create separate audio sources for each, so users can adjust volume levels accordingly.

Later on, I added display frame resizing when the user resizes the Karton window, as well as a fullscreen button. I noticed that doing so still causes the resolution to appear quite bad, so proper resizing done through the guest machine will have to be implemented in the future.

Now, we can watch Pepper and Carrot, somewhat! (no hardware acceleration yet)

UI

My final major MR was a rework of my UI to make better use of screen space (see MR #25). I moved the existing VM ListView into a sidebar displaying only the name, state, and OS ID. The right side then shows the detailed information of the selected VM. One of my inspirations was macOS UTM’s screenshot of the last active frame.

When a user closes the Karton viewer window, the last frame is saved to $HOME/.local/state/KDE/Karton/previews. Implementing cool features like these is much easier now that we have our own viewer! I also added some opacity and hover animation effects to make it look nice.

Finally, I worked on media disc ejection (see MR #26). This uses a libvirt call to simulate the installation media being removed from the VM, so users can boot into their virtual hard drive after installing.

Demo Usage

As a final test of the project, I decided to create, configure and use a Fedora KDE VM using Karton. After setting specifications, I installed it to the virtual disk, ejected the installation media, and properly booted into it. Then, I tried playing some games. Overall, it worked pretty well!

List of MRs

Major Additions:

Subtle Additions:

Difficulties

My biggest regret was having a study term during this period. I had to manage my time really well, balancing studying, searching for job positions, and contributing. There was a week where I had 2 midterms, 2 interviews, and a final project, and I found myself pulling some late nights writing code at the school library. Though it’s been an exhausting school term, I am still super glad to have been able to contribute to a really cool project and get something working!

I was also new to both C++ and Qt development. Funnily enough, I had been taking, and struggling with, my first course in C++ while working on Karton. I also spent a lot of time reading documentation to familiarize myself with the different APIs (libspice, libvirt, and libosinfo).

Left: Karton freezes my computer because I had too many running VMs.

Right: 434.1 GiB of virtual disks; my reminder to implement disk management.

What’s Next?

There is still so much to do! Currently, I am on vacation, and I will be attending Akademy in Berlin in September, so I won’t be able to work much until then. In the fall, I will finally be off school for a 4-month internship (yay!!). I’m hoping I will have more time to contribute again.

There’s still a lot left especially with regards to the viewer.

Here’s a bit of an unorganized list:

  • Optimize VM display frame buffer with SPICE gl-scanout
  • Improved scaling and text rendering in viewer
  • File transfer and clipboard passthrough with SPICE
  • Full VM snapshotting through libvirt (full duplication)
  • Browse and installation tool for commonly installed ISOs through QEMU
  • Error handling in installation process
  • Configuration support to allow modifying existing VMs in the application
  • Others on the issue tracker

Release?

In its current state, Karton is not feature complete and not ready for official packaging and release. In addition to the missing features listed above, there have been a lot of new and moving parts throughout this coding period, and I’d like to have the chance to thoroughly test the code to prevent any major issues.

However, I do encourage you to try it out (at your own risk!) by cloning the repo. Let me know what you think and when you find any issues!

In other news, there are some discussions of packaging Karton as a Flatpak eventually and I will be requesting to add it to the KDE namespace in the coming months, so stay tuned!

Conclusion

Overall, it has been an amazing experience completing GSoC under KDE, and I really recommend it to anyone who is looking to contribute to open source. I’m quite satisfied with what I’ve been able to accomplish in this short period of time and I hope to continue working and learning with the community.

Working through MRs has given me a lot of valuable and relevant industry experience going forward. A big thank you to my mentor, Harald Sitter, who has been reviewing and providing feedback along the way!

As mentioned earlier, Karton still definitely has a lot to work on, and I plan on continuing my work after GSoC as well. If you’d like to read more about my work on the project in the future, please check out my personal blog and the development Matrix room, karton:kde.org.

Thanks for reading!

Socials

Website: https://kenoi.dev/

Mastodon: https://mastodon.social/@kenoi

GitLab: https://invent.kde.org/kenoi

GitHub: https://github.com/kenoi1

Matrix: @kenoi:matrix.org

Discord: kenyoy

Catching Up

These last few weeks have been pretty hectic due to me moving countries and such, so I have not had the time to write a blog post detailing my weekly progress. Because of this, I have decided to compress it all into a single blog post covering all the changes I have been working on and what I plan on doing in the future.


The NewMailNotifier Agent

In my last blog post I talked about the progress that had been made on the newmailnotifier agent, and how in the following weeks I would finish implementing the changes and testing its functionality. Well, it ended up taking quite a bit longer, as I found that several other files also had to be moved to KMail from KDE PIM Runtime, and these were still being used in the runtime repo. The files I have found so far and have been looking into are:

  • newmailnotificationhistorybrowsertext.cpp
  • newmailnotificationhistorybrowsertext.h
  • newmailnotificationhistorybrowsertextwidget.cpp
  • newmailnotificationhistorybrowsertextwidget.h
  • newmailnotificationhistorydialog.cpp
  • newmailnotificationhistorydialog.h
  • newmailnotificationhistorywidget.cpp
  • newmailnotificationhistorywidget.h
  • newmailnotifieropenfolderjob.cpp
  • newmailnotifieropenfolderjob.h
  • newmailnotifiershowmessagejob.cpp
  • newmailnotifiershowmessagejob.h

The Troublesome Migration Agent

The MR for the singleshot capability in the Akonadi repo was given the green light and recently got merged. On the other hand, the MR with the changes for the agent received feedback, and several improvements were requested.

Most importantly, Carl brought to my attention that recent MRs by Nicolas Fella removed the job tracker from the migration agent, making it unnecessary to add it as a temporary folder. Both the requested changes and the removal of the folder have been carried out. While doing so, I even realized that in my singleshot MR I was missing the addition of the new finished() signal in the agentbase header file, which I have now also added.

After doing this, though, I once again focused on the problem that persisted: the singleshot capability not working properly. The migration agent would initialize without issue when running the Akonadi server but would then not shut down after completing its tasks. I knew that the isPluginOpen() method worked in sending the finished signal, as the agent would shut down correctly when I opened and closed the plugin.

With the help of my mentor Claudio, we found that the migrations were in fact not even running: the agent would start, but the jobs would fail to run. Because of this, the logic implemented to signal the finalization of a job never had the chance to run, and thus isPluginOpen() remained untouched.

Furthermore, the way I had designed the plugin’s means of letting the agent know that it was open proved insufficient: the migrations (once we get them to run as intended) would emit the jobFinished() signal after concluding, triggering the isPluginOpen() method with the default value of false and shutting down the agent even if the plugin was still open.

The times the singleshot capability did work (when opening and closing the plugin), we also found that the status would show as “Broken” and the statusMessage as “Unable to start”, which may need changing. Most troubling, though, was that opening the plugin would not restart the agent, therefore only showing an empty config window. I need to find a way to either restart from the agent itself or notify Akonadi so that it restarts the agent when the plugin runs.


Current Status and What’s Next

GSoC concludes next week, and these last few weeks have not seen any merge requests from my part, so my plan is to continue with the refactoring beyond the end of the programme, working on completing the NewMailNotifier and Migration agents, as well as dealing with a few of the agents in KMail, namely MailFilter, MailMerge and the UnifiedMailBox.

As of now, the identified issues to solve regarding the Migration agent are:

  • The agent not knowing if the plugin is open or closed when emitting the finished() signal.
  • The migrations not running.
  • The status and statusMessage showing as “Broken” and “Unable to start”, respectively.
  • The agent not being able to restart itself.

In the case of the NewMailNotifier:

  • Complete the transfer of the UI related logic to KMail
  • Test the D-Bus connection and the modified slotShowNotificationHistory()

While there’s still work ahead, I feel that these weeks have been invaluable in terms of learning, debugging, and understanding the bigger picture of how the different Akonadi agents fit together. The experience has been both challenging and rewarding, and I’m looking forward to tackling the remaining issues with a clearer path forward.

Although GSoC is officially ending, this is just a milestone rather than a finish line, and I’m excited to continue contributing to Merkuro and the KDE ecosystem as a whole.

Monday, 25 August 2025

Today marks both a milestone and a turning point in my journey with open source software. I’m proud to announce the release of KDE Gear 25.08.0 as my final snap package release. You can find all the details about this exciting update at the official KDE announcement.

After much reflection and with a heavy heart, I’ve made the difficult decision to retire from most of my open source software work, including snap packaging. This wasn’t a choice I made lightly – it comes after months of rejections and silence in an industry I’ve loved and called home for over 20 years.

Passing the Torch

While I’m stepping back, I’m thrilled to share that the future of KDE snaps is in excellent hands. Carlos from the Neon team has been working tirelessly to set up snaps on the new infrastructure that KDE has made available. This means building snaps in KDE CI is now possible – a significant leap forward for the ecosystem. I’ll be helping Carlos get the pipelines properly configured to ensure a smooth transition.

Staying Connected (But Differently)

Though I’m stepping away from most development work, I won’t be disappearing entirely from the communities that have meant so much to me:

  • Kubuntu: I’ll remain available as a backup, though Rik is doing an absolutely fabulous job getting the latest and greatest KDE packages uploaded. The distribution is in capable hands.
  • Ubuntu Community Council: I’m continuing my involvement here because I’ve found myself genuinely enjoying the community side of things. There’s something deeply fulfilling about focusing on the human connections that make these projects possible.
  • Debian: I’ll likely be submitting for emeritus status, as I haven’t had the time to contribute meaningfully and want to be honest about my current capacity.

The Reality Behind the Decision

This transition isn’t just about career fatigue – it’s about financial reality. I’ve spent too many years working for free while struggling to pay my bills. The recent changes in the industry, particularly with AI transforming the web development landscape, have made things even more challenging. Getting traffic to websites now requires extensive social media work and marketing – all expected to be done without compensation.

My stint at webwork was good while it lasted, but the changing landscape has made it unsustainable. I’ve reached a point where I can’t continue doing free work when my family and I are struggling financially. It shouldn’t take breaking a limb to receive the donations needed to survive.

A Career That Meant Everything

These 20+ years in open source have been the defining chapter of my professional life. I’ve watched communities grow, technologies evolve, and witnessed firsthand the incredible things that happen when passionate people work together. The relationships I’ve built, the problems we’ve solved together, and the software we’ve created have been deeply meaningful.

But I also have to be honest about where I stand today: I cannot compete in the current job market. The industry has changed, and despite my experience and passion, the opportunities just aren’t there for someone in my situation.

Looking Forward

Making a career change after two decades is terrifying, but it’s also necessary. I need to find a path that can provide financial stability for my family while still allowing me to contribute meaningfully to the world.

If you’ve benefited from my work over the years and are in a position to help during this transition, I would be forever grateful for any support. Every contribution, no matter the size, helps ease this difficult period: https://gofund.me/a9c55d8f

Thank You

To everyone who has collaborated with me, tested my packages, filed bug reports, offered encouragement, or simply used the software I’ve helped maintain – thank you. You’ve made these 20+ years worthwhile, and you’ve been part of something bigger than any individual contribution.

The open source world will continue to thrive because it’s built on the collective passion of thousands of people like Carlos, Rik, and countless others who are carrying the torch forward. While my active development days are ending, the impact of this community will continue long into the future.

With sincere gratitude and fond farewells,

Scarlett Moore

Intro

In my final week of GSoC with KDE's Krita this summer, I am excited to share this week's progress and reflect on my journey so far. From the initial setup to building the Selection Action Bar, this project has been a meaningful learning experience and a stepping stone toward connecting with Krita's community and open source development.

Final Report

Progress

This week I finalized the Selection Action Bar with my mentor Emmet and made adjustments based on my merge request feedback.

Some key areas of feedback and fixes included:

  • Localization of user-facing strings
  • Removing unused parameters
  • Refactoring naming conventions and standardized styling

These improvements taught me that writing good code is not just about features, but also about clarity, consistency, and collaboration.

Alongside updating my feature merge request, I also worked on documentation to explain how the Selection Action Bar works and how users can use it.

Reflection

Looking back over the past 12 weeks, I realize how much this project has shaped both my technical and personal growth as a developer.

Technical Growth

When I started, navigating Krita's large C++/Qt codebase felt overwhelming. Through persistence, code reviews, and mentorship, I've grown confident in reading unfamiliar code, handling ambiguity, and contributing in a way that fits the standards of a large open source project. Following Krita's style guidelines showed me how important naming conventions and standardized code styling are for long-term maintainability.

Personal Growth

One of the most important lessons I learned is that open source development isn't about rushing to get the next feature in. It's about patience, clarity, and iteration. Code reviews taught me to embrace feedback, ask better questions, and view them as opportunities for growth rather than blockers.

Community Lessons

The most valuable part of this experience was connecting with the Krita and KDE community. I experienced first-hand how collaborative and thoughtful the process of open source development is. Every suggestion, from small style tweaks to broader design decisions, carried the goal of improving the project for everyone. That sense of shared ownership and responsibility is something I want to carry with me in all my future contributions.

Conclusion

These final weeks have been very rewarding. I have grown from starting out by simply reading Krita's large codebase to implementing a feature that enhances users' workflow.

While this marks the end of GSoC for me, it is not the end of my open source journey. My plan moving forward is to:

  • Continue refining the Selection Action Bar based on user feedback
  • Add customization options to the Selection Action Bar
  • Stay involved through ownership of feature creation, bug fixes, community participation, and feature proposals with the Krita and KDE community

Finally, I would like to thank my mentor Emmet, the Krita Developers Dmitry, Halla, Tiar, Wolthera, everyone I interacted with in Krita Chat, and the Krita community for their guidance, patience, and encouragement throughout this project.

I also want to thank Google Summer of Code for making this journey possible and giving me the chance to grow as a developer while contributing to open source.

Contact

To anyone reading this, please feel free to reach out to me. I'm always open to suggestions and thoughts on how to improve as a developer and as a person.
Email: ross.erosales@gmail.com
Matrix: @rossr:matrix.org

Sunday, 24 August 2025


Implementation of Python virtual environment runtime switching

A long‑running backend that evaluates Python code must solve one problem well: switching the active interpreter or virtual environment at runtime without restarting the host process. A reliable solution depends on five pillars: unambiguous input semantics, reproducible version discovery, version‑aware initialization, disciplined management of process environment and sys.path, and transactional switching that can roll back safely on failure. 


The switching workflow begins with a single resolver that accepts either an interpreter executable path or a virtual environment directory. If the input is a file whose basename looks like a Python executable, the resolver treats it as such, and when the path sits under bin or Scripts it walks one directory up to infer the venv root. If the input is a directory, the resolver confirms a venv by checking for pyvenv.cfg or conda‑meta. Inputs that do not meet either criterion are interpreted as requests to use the system Python. One subtle but important detail is to avoid canonicalizing paths during this phase. Symlinked venvs frequently point into system trees; resolving them prematurely would collapse a virtual environment back into “system Python,” undermining the caller’s intent.

Pic 1. Project structure created by venv/virtualenv
Pic 2. Project structure in {virtual_env_path}/bin
Pic 3. Project structure created by conda
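A condensed sketch of such a resolver, with invented helper and type names (the real implementation also handles Windows executable suffixes and further layouts):

#include <QDir>
#include <QFileInfo>
#include <QString>

struct ResolvedTarget {
    QString venvRoot;     // empty => system Python
    QString interpreter;  // resolved interpreter executable, if known
};

ResolvedTarget resolveTarget(const QString &input)
{
    // No canonicalization here, so symlinked venvs keep their own root.
    const QFileInfo info(input);

    if (info.isFile() && info.fileName().startsWith(QStringLiteral("python"))) {
        QDir dir = info.dir();
        const QString dirName = dir.dirName();
        if (dirName == QStringLiteral("bin") || dirName == QStringLiteral("Scripts")) {
            dir.cdUp(); // bin/ or Scripts/ -> venv root
            return {dir.absolutePath(), info.absoluteFilePath()};
        }
        return {QString(), info.absoluteFilePath()}; // bare interpreter path
    }

    if (info.isDir()) {
        const QDir dir(input);
        if (dir.exists(QStringLiteral("pyvenv.cfg")) || dir.exists(QStringLiteral("conda-meta"))) {
            return {dir.absolutePath(), QString()}; // interpreter resolved later from the layout
        }
    }

    return {}; // anything else: fall back to the system Python
}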


Once a target has been identified, the backend determines the interpreter’s major.minor version and applies a session‑level version policy. Virtual environments often publish their version and preferred executable in pyvenv.cfg; the backend reads version, executable and base‑executable if present, falling back to executing the interpreter with a small snippet to print its major and minor components when necessary. For system Python, a small set of common candidates are probed until one responds. At first login, the backend records the initialized major.minor pair and considers subsequent switches compatible only if they match that normalized value. This deliberately conservative choice prevents ABI mismatches inside a single process.

Pic 5.  Content in pyvenv.cfg

Initialization deliberately follows two distinct paths because Python’s embedding APIs changed significantly in 3.8. For older runtimes, the legacy sequence sets the program name and Python home using Py_SetProgramName and Py_SetPythonHome and then calls Py_Initialize. To keep the embedded interpreter’s view of the world coherent, the backend then runs a short configuration script that clears and rebuilds sys.path, sets sys.prefix and sys.exec_prefix, and establishes VIRTUAL_ENV in os.environ. This legacy path also relies on process‑level environment manipulation, which is described below. For modern runtimes, the backend uses the PyConfig API. It constructs an isolated configuration, sets program_name, home, executable and base_executable explicitly, marks module_search_paths_set, and appends each desired search path through PyWideStringList_Append before calling Py_InitializeFromConfig. This approach minimizes dependence on ambient process environment and makes the search space explicit and predictable. It is worth emphasizing that even when switching to the system interpreter on Py≥3.8, module search paths should be set explicitly rather than relying on implicit heuristics.
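For the modern path, the relevant CPython calls look roughly like this. A minimal sketch; the real backend also sets executable and base_executable the same way and appends the full list of search paths:

#include <Python.h>

// Initialize an embedded interpreter for a venv with explicit search paths (Python >= 3.8).
bool initializeForVenv(const wchar_t *home, const wchar_t *sitePackages)
{
    PyConfig config;
    PyConfig_InitIsolatedConfig(&config);

    PyConfig_SetString(&config, &config.program_name, L"python3");
    PyConfig_SetString(&config, &config.home, home);
    // executable and base_executable are set the same way in the real backend

    config.module_search_paths_set = 1; // don't let CPython guess the search paths
    PyWideStringList_Append(&config.module_search_paths, sitePackages);
    // ... the stdlib directories are appended the same way

    const PyStatus status = Py_InitializeFromConfig(&config);
    PyConfig_Clear(&config);
    return !PyStatus_Exception(status);
}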


The legacy initialization path leans on controlled modification of the host process environment. Before entering a venv, the backend saves the current PATH and PYTHONHOME, prepends the venv’s bin or Scripts directory to PATH, unsets PYTHONHOME and clears PYTHONPATH, and sets VIRTUAL_ENV. On restore, PATH and PYTHONHOME are put back, VIRTUAL_ENV and PYTHONPATH are cleared, and a guard bit records that the environment is no longer modified. A frequent source of instability in ad‑hoc implementations is PATH inflation during rapid switching. The fix is straightforward: always rebuild PATH from the original value captured before the first switch rather than stacking new prefixes on top of already mutated values.
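On the Qt side, the save-and-restore dance can be sketched like this (illustrative names; the real code also tracks a "modified" guard flag and uses ';' as the PATH separator on Windows):

#include <QByteArray>
#include <QString>
#include <QtGlobal>

static bool s_envSaved = false;
static QByteArray s_originalPath;
static QByteArray s_originalPythonHome;

void enterVenvEnvironment(const QString &venvRoot, const QString &binDirName) // "bin" or "Scripts"
{
    if (!s_envSaved) {
        // capture the baseline exactly once, before the first switch
        s_originalPath = qgetenv("PATH");
        s_originalPythonHome = qgetenv("PYTHONHOME");
        s_envSaved = true;
    }
    // always rebuild PATH from the original baseline to avoid PATH inflation
    const QByteArray binDir = (venvRoot + QLatin1Char('/') + binDirName).toLocal8Bit();
    qputenv("PATH", binDir + ':' + s_originalPath);
    qunsetenv("PYTHONHOME");
    qunsetenv("PYTHONPATH");
    qputenv("VIRTUAL_ENV", venvRoot.toLocal8Bit());
}

void restoreSystemEnvironment()
{
    if (!s_envSaved)
        return;
    qputenv("PATH", s_originalPath);
    if (!s_originalPythonHome.isEmpty())
        qputenv("PYTHONHOME", s_originalPythonHome);
    qunsetenv("VIRTUAL_ENV");
    qunsetenv("PYTHONPATH");
}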


Search path construction is handled in two places. On the C++ side, we can expand the venv’s library layout into a concrete list of directories—lib/pythonX.Y/site‑packages, lib/pythonX.Y, and lib64 variants—and, if desired, append a fallback set of system paths. On the Python side, a short configuration fragment clears sys.path and appends the new list in order, then sets sys.prefix and sys.exec_prefix to the venv root and publishes VIRTUAL_ENV in the environment. Projects that require strict isolation can omit the system fallback entirely or tie the decision to pyvenv.cfg’s include‑system‑site‑packages.


Switching itself is transactional. Before attempting a change, the backend captures a compact description of the current state—the venv directory and detected version. It then finalizes the current interpreter, applies the new target and logs in. If initialization fails for any reason, the backend finalizes again and restores the previous state, re‑logging in and restoring the prior version record on success. This simple but strict “switch‑or‑rollback” contract prevents half‑initialized sessions and ensures the host remains usable regardless of individual switch failures.
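The switch-or-rollback contract itself is compact. A skeleton with stubbed helpers, where all names are illustrative and the stubs stand in for the finalize and initialize logic described above:

#include <QString>
#include <QtGlobal>

class PythonSession
{
public:
    bool switchEnvironment(const QString &newVenv)
    {
        // capture the current state so it can be restored on failure
        const QString previousVenv = m_currentVenv;
        const QString previousVersion = m_currentVersion;

        finalizeInterpreter();

        if (initializeInterpreter(newVenv)) {
            m_currentVenv = newVenv;
            return true;
        }

        // rollback: bring the previous environment back up and restore its version record
        finalizeInterpreter();
        initializeInterpreter(previousVenv);
        m_currentVenv = previousVenv;
        m_currentVersion = previousVersion;
        return false;
    }

private:
    // stubs standing in for the real logic: Py_FinalizeEx() plus environment cleanup,
    // and resolve / configure / Py_InitializeFromConfig() respectively
    void finalizeInterpreter() {}
    bool initializeInterpreter(const QString &venv) { Q_UNUSED(venv); return true; }

    QString m_currentVenv;
    QString m_currentVersion;
};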


Operational visibility matters both for diagnostics and for UI integration. The backend publishes getters for the current venv directory, the detected Python version, and the chosen interpreter path. It can also discover virtual environments by scanning starting directories for pyvenv.cfg and recognizable layout patterns, returning a list of environment paths with associated versions. For consumption by other components, structured formats such as JSON simplify parsing and future evolution; even when initial implementations return human‑readable strings, migrating to a structured schema pays off quickly.


Several pitfalls recur in real deployments. Symlinked venvs must be treated carefully to avoid collapsing into system paths during resolution. PATH must be rebuilt from an original baseline to avoid unbounded growth during rapid switching. On Py≥3.8, the system interpreter should be initialized with explicit module search paths rather than relying on implicit platform logic. On Windows, hard‑coded “C:/Python” roots are fragile; build paths from CMake‑injected PYTHON_STDLIB/PYTHON_SITELIB or query sysconfig from a known interpreter. Finally, enforcing a stable major.minor within a process, while conservative, prevents obscure ABI issues that are otherwise difficult to reproduce.


A typical backend sequence for switching to a new venv reads cleanly: accept a target path, resolve it to either a venv or the system interpreter, finalize the current interpreter, set the new Python home and program name or PyConfig fields as appropriate, initialize, publish paths, and report success. If any step fails, finalize immediately and restore the previous environment. Switching to the system interpreter follows the same template, with the additional recommendation to populate module_search_paths explicitly for Py≥3.8. Querying the active environment simply returns the cached directory, version, and executable path.


A robust runtime venv switcher is primarily a matter of careful engineering rather than novel algorithms. By unifying input semantics, discovering versions reliably, choosing the correct embedding API for the runtime, treating the host environment and sys.path as controlled resources, and insisting on transactional switching with rollback, the backend achieves predictable, production‑grade behavior without sacrificing flexibility.


Implementation of Python interpreter hot switching in Cantor backend architecture

In Cantor’s backend architecture, the Python interpreter is embedded in a long‑running service process, and the frontend communicates with it via a lightweight protocol over standard input and output. The essence of runtime virtual‑environment switching is not to replace this service process but to terminate the current interpreter and reinitialize a new interpreter context within the same process, thereby avoiding any rebuild of the frontend‑backend communication channel. This approach requires a stable message protocol, controllable interpreter lifecycle management, consistent cross‑platform path and environment injection, and compatibility constraints combined with transactional rollback at the version level to ensure safety and observability during switching.

The message protocol adopts a framed “command–response” model with explicit separators and covers environment switching, environment query, and environment discovery. When a switch is initiated, the frontend issues the switching command and immediately follows with an environment‑information query to validate the state and synchronize the UI. Upon receiving the command, the service process first resolves the target environment, accepting either a virtual‑environment root directory or an interpreter executable, normalizing both into an environment root and interpreter path, while avoiding misclassification of system directories as virtual environments. Environment detection adheres to cross‑platform structural conventions: pyvenv.cfg and bin/python[3] on Unix‑like systems, Scripts/python.exe and conda‑meta on Windows.

The interpreter “hot‑switch” follows an explicit lifecycle sequence: finalize the current interpreter, then initialize a new one. For Python 3.8 and later, the PyConfig isolated‑initialization path is used with explicit settings for the executable, base_executable, home, and module_search_paths to minimize external interference; for earlier versions, traditional APIs are used in conjunction with environment variable and sys.path injection. To ensure semantic equivalence with terminal‑based environment activation, sys.prefix and sys.exec_prefix are rebuilt, module search paths are reconstructed, and key variables such as VIRTUAL_ENV, PATH, PYTHONHOME, and PYTHONPATH are injected when entering the new environment and cleaned when reverting to the system environment.

The compatibility policy enforces equality on the major.minor version. After the first successful initialization, the initialized interpreter version is recorded; subsequent switches are permitted only to environments with the same major.minor, mitigating uncertainty introduced by cross‑version ABI or interpreter‑state differences. The switching operation is transactional: prior to finalization, the current environment and version are cached; if initializing the new environment fails, the system automatically rolls back to the previous environment and restores version information, ensuring the server remains available under exceptional conditions. Observability is provided by returning key details—environment root, interpreter path, and version—through the query command, enabling UI presentation and traceability at interpreter granularity; diagnostic outputs are produced on critical paths such as version mismatch, initialization failure, and environment restoration to facilitate investigation of cross‑platform and resolution issues.

The Settings page’s interpreter selector uses a “lazy‑load plus runtime cache” strategy. On first entry, it recursively scans the user directory and conventional locations, deduplicating and classifying environments based on structural markers and version probing; immediately after rendering, it asynchronously requests the backend’s current environment, and if no response arrives within a bounded timeframe, it falls back to locally detecting the active interpreter to ensure sensible defaults in both the drop‑down and input field. To avoid UI jitter, switching is triggered by an explicit confirm/apply action; once applied, an environment‑change signal is emitted, the session layer issues a combined “switch plus query” command to complete the closed loop, and the results are fed back to the UI. Both success and failure are reported in a uniform response format; on failure, the Settings page raises a one‑time warning for the dialog session and automatically realigns to the last known‑good environment to preserve a stable user experience.

In typical usage, providing an absolute interpreter path is recommended for its determinism and cross‑platform clarity; supplying a virtual‑environment root is also supported, and the system will resolve the corresponding interpreter automatically. Returning to the system interpreter can be achieved via an empty path or a dedicated “system interpreter” option in the UI; the backend will clear injected variables and restore system path semantics. When switching across minor versions is required, a more robust practice is to manage backend instances at the major.minor granularity—or to separate them explicitly in the UI—to reduce the frequency of rollbacks and perceived interruptions.

The end‑to‑end interaction sequence and the Settings page “discover–compare–align–apply” workflow are illustrated by the two diagrams below. The former depicts message exchange and lifecycle management across the Settings page, session layer, service process, and embedded interpreter; the latter details environment enumeration, validation, backend alignment, and user confirmation. Together they constitute an engineering‑grade runtime virtual‑environment switching loop that balances stability, cross‑platform consistency, and observability, meeting both interaction and maintainability requirements.


Pic 6. End-to-end timing of runtime switching


Pic 7.  Set up the "Discover-Compare-Align-Apply" workflow on the page



How to switch Python virtual environment through cantor

1. When you open Cantor, if you do not select a virtual environment in the General Tab of Configure Cantor, the Python installed on the system will be used by default. You can check which environment the current Python interpreter is linked to by entering "sys.path".

2. Open the General Tab of Configure Cantor. You can use the top two options to either import a virtual environment by selecting its folder (by default a 5-level recursive search is performed) or to manually select the Python interpreter of the virtual environment to import.

3. Select the virtual environment you want to switch to and click "Apply" to switch to the new environment


4. Enter "sys.path" again to verify


5. If you select the wrong virtual environment version, the system will prompt an error
                                          

6. If the environment switch fails, the program will fall back to the last successfully switched environment, which is "venv1" in this test