
Welcome to Planet KDE

This is a feed aggregator that collects what the contributors to the KDE community are writing on their respective blogs, in different languages.

Thursday, 19 February 2026

Automating Repetitive GUI Interactions in Embedded Development with Spix


As embedded software developers, we all know the pain: you make a code change, rebuild your project, restart the application, and then spend precious seconds repeating the same five clicks just to reach the screen you want to test. Add a login dialog on top, and suddenly those seconds turn into minutes. Multiply that by a hundred iterations per day, and it’s clear: this workflow is frustrating, error-prone, and a waste of valuable development time.

In this article, we’ll look at how to automate these repetitive steps using Spix, an open-source tool for GUI automation in Qt/QML applications. We’ll cover setup, usage scenarios, and how Spix can be integrated into your workflow to save hours of clicking, typing, and waiting.

The Problem: Click Fatigue in GUI Testing

Imagine this:

  • You start your application.
  • The login screen appears.
  • You enter your username and password.
  • You click "Login".
  • Only then do you finally reach the UI where you can verify whether your code changes worked.

This is fine the first few times - but if you’re doing it 100+ times a day, it becomes a serious bottleneck. While features like hot reload can help in some cases, they aren’t always applicable - especially when structural changes are involved or when you must work with "real" production data.

So, what’s the alternative?

The Solution: Automating GUI Input with Spix

Spix allows you to control your Qt/QML applications programmatically. Using scripts (typically Python), you can automatically:

  • Insert text into input fields
  • Click buttons
  • Wait for UI elements to appear
  • Take and compare screenshots

This means you can automate login steps, set up UI states consistently, and even extend your CI pipeline with visual testing. Unlike manual hot reload tweaks or hardcoding start screens, Spix provides an external, scriptable solution without altering your application logic.

Setting up Spix in Your Project

Getting Spix integrated requires a few straightforward steps:

1. Add Spix as a dependency

  • Typically done via a Git submodule in your project’s third-party folder.
git submodule add git@github.com:faaxm/spix.git 3rdparty/spix

2. Register Spix in CMake

  • Update your CMakeLists.txt with a find_package(Spix REQUIRED) call.
  • Because of CMake quirks, you may also need to manually specify the path to Spix’s CMake modules.
list(APPEND CMAKE_MODULE_PATH ${CMAKE_CURRENT_SOURCE_DIR}/3rdparty/spix/cmake/modules)
find_package(Spix REQUIRED)
  • Add Spix to your target_link_libraries call.
target_link_libraries(myApp
  PRIVATE Qt6::Core
          Qt6::Quick 
          Qt6::SerialPort 
          Spix::Spix
)

3. Initialize Spix in your application

  • Include the two Spix headers in main.cpp: AnyRpcServer (for the RPC communication) and QtQmlBot.
  • Add a few lines of boilerplate code:
    • Create the Spix RPC server (spix::AnyRpcServer).
    • Create a Spix::QtQmlBot.
    • Run the test server on a specified port (e.g. 9000).
#include <Spix/AnyRpcServer.h>
#include <Spix/QtQmlBot.h>
[...]

// Start the Spix RPC server and the QML bot
spix::AnyRpcServer server;
auto bot = new spix::QtQmlBot();
bot->runTestServer(server);

At this point, your application is "Spix-enabled". You can verify this by checking for the open port (e.g. localhost:9000).
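If you want that check scripted rather than done by hand, a quick TCP probe from Python is enough. This is a generic socket check, not part of the Spix API; the port just has to match whatever you passed to the test server.

import socket

def spix_is_listening(host='localhost', port=9000, timeout=1.0):
    # Try to open a plain TCP connection to the Spix RPC port.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if spix_is_listening():
    print('Spix server is reachable on port 9000')
else:
    print('Spix server not found - is the application running?')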

Spix can be a security risk: make sure not to expose it in any production environment, and consider enabling it only in debug builds.

Where Spix Shines

Once the setup is done, Spix can be used to automate repetitive tasks. Let’s look at two particularly useful examples:

1. Automating Logins with a Python Script

Instead of typing your credentials and clicking "Login" manually, you can write a simple Python script that:

  • Connects to the Spix server on localhost:9000
  • Inputs text into the userField and passwordField
  • Clicks the "Login" button (Items marked with "Quotes" are literal That-Specific-Text-Identifiers for Spix)
import xmlrpc.client

session = xmlrpc.client.ServerProxy('http://localhost:9000')

session.inputText('mainWindow/userField', 'christoph')
session.inputText('mainWindow/passwordField', 'secret') 
session.mouseClick('mainWindow/"Login"')

When executed, this script takes care of the entire login flow - no typing, no clicking, no wasted time. Better yet, you can check the script into your repository, so your whole team can reuse it.

For development, integration with Qt Creator can be achieved with a custom startup executable that also launches this Python script.

In a CI environment, this approach is particularly powerful, since you can ensure every test run starts from a clean state without relying on manual navigation.
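The same RPC connection also covers the "wait for UI elements to appear" capability mentioned earlier, which is handy when the post-login screen takes a moment to build. The sketch below assumes the Spix server exposes an existsAndVisible method (check the RPC methods of your Spix version) and uses a made-up item path.

import time
import xmlrpc.client

session = xmlrpc.client.ServerProxy('http://localhost:9000')

def wait_for_item(path, timeout=10.0, interval=0.2):
    # Poll Spix until the item at 'path' exists and is visible, or give up.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if session.existsAndVisible(path):
            return True
        time.sleep(interval)
    return False

if wait_for_item('mainWindow/dashboardView'):
    session.mouseClick('mainWindow/dashboardView')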

2. Screenshot Comparison

Beyond input automation, Spix also supports taking screenshots. Combined with Python libraries like OpenCV or scikit-image, this opens up interesting possibilities for testing.

Example 1: Full-screen comparison

Take a screenshot of the main window and store it first:

import xmlrpc.client

session = xmlrpc.client.ServerProxy('http://localhost:9000')

[...]
session.takeScreenshot('mainWindow', '/tmp/screenshot.png')

Now we can compare it with a reference image:

from skimage import io
from skimage.metrics import structural_similarity as ssim

screenshot1 = io.imread('/tmp/reference.png', as_gray=True)
screenshot2 = io.imread('/tmp/screenshot.png', as_gray=True)

ssim_index = ssim(screenshot1, screenshot2, data_range=screenshot1.max() - screenshot1.min())

threshold = 0.95

if ssim_index == 1.0: 
    print("The screenshots are a perfect match")
elif ssim_index >= threshold:
    print("The screenshots are similar, similarity: " + str(ssim_index * 100) + "%")
else:
    print("The screenshots are not similar at all, similarity: " + str(ssim_index * 100) + "%")

This is useful for catching unexpected regressions in visual layout.
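To turn this comparison into a CI check, the only missing piece is a non-zero exit code when the similarity falls below the threshold. The snippet below simply continues the script above (reusing ssim_index and threshold); the 0.95 threshold is a project-specific choice, not a magic number.

import sys

# Fail the CI job when the screenshots diverge too much.
if ssim_index < threshold:
    print("Visual regression detected, similarity: " + str(ssim_index * 100) + "%")
    sys.exit(1)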

Example 2: Finding differences in the same UI

Use OpenCV to highlight pixel-level differences between two screenshots—for instance, missing or misaligned elements:

import cv2

image1 = cv2.imread('/tmp/reference.png')
image2 = cv2.imread('/tmp/screenshot.png')

diff = cv2.absdiff(image1, image2)

# Convert the difference image to grayscale
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)

# Threshold the grayscale image to get a binary image
_, thresh = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)

contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(image1, contours, -1, (0, 0, 255), 2)

cv2.imshow('Difference Image', image1)
cv2.waitKey(0)

This form of visual regression testing can be integrated into your CI system. If the UI changes unintentionally, Spix can detect it and trigger an alert.

Defective image

The script marked the defective parts of the image compared to the reference image.

Recap

Spix is not a full-blown GUI testing framework like Squish, but it fills a useful niche for embedded developers who want to:

  • Save time on repetitive input (like logins).
  • Share reproducible setup scripts with colleagues.
  • Perform lightweight visual regression testing in CI.
  • Interact with their applications on embedded devices remotely.

While there are limitations (e.g. manual wait times, lack of deep synchronization with UI states), Spix provides a powerful and flexible way to automate everyday development tasks - without having to alter your application logic.

If you’re tired of clicking the same buttons all day, give Spix a try. It might just save you hours of time and frustration in your embedded development workflow.

The post Automating Repetitive GUI Interactions in Embedded Development with Spix appeared first on KDAB.

The Variables: To start a handshake, we need two public numbers that everyone knows: Base (g) = 2 and Modulus (p) = 19. Step 1: The Private Secrets: Two parties, Alice and Shiva, choose secret numbers (Private Keys).
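Since the excerpt walks through a textbook Diffie-Hellman handshake with tiny numbers, here is a minimal Python sketch of that exchange. Only g = 2 and p = 19 come from the excerpt; the private keys 6 and 13 are made up for illustration.

# Public values taken from the excerpt
g = 2   # base
p = 19  # modulus

# Illustrative private keys (not from the excerpt)
a = 6   # Alice's secret
b = 13  # Shiva's secret

# Each side publishes g^secret mod p
A = pow(g, a, p)  # Alice sends this to Shiva
B = pow(g, b, p)  # Shiva sends this to Alice

# Both arrive at the same shared secret
shared_alice = pow(B, a, p)
shared_shiva = pow(A, b, p)
assert shared_alice == shared_shiva
print("Shared secret:", shared_alice)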

Wednesday, 18 February 2026

OSU logo

The UN Open Source Principles comprise eight guidelines and provide a framework to guide the use, development and sharing of open source software across the United Nations. They are part of the UN's Open Source United (OSU) initiative, which aims to coordinate and increase open source efforts across the United Nations system.

According to OSU:

"Across the UN, teams are building powerful digital tools, but much of this work is isolated. Open Source United breaks these silos, encourages collaboration, and makes innovation easier to share and reuse. By working together, we deliver solutions that are more transparent, sustainable, and cost-effective."

Alongside another 119 FLOSS organisations, KDE will support the effort to connect UN teams and their partners, as well as the global community, encouraging them to share, discover and reuse open-source solutions in their work to carry out the UN’s mission worldwide.

Tuesday, 17 February 2026

Qt for MCUs 2.11.2 LTS has been released and is available for download. This patch release provides bug fixes and other improvements while maintaining source compatibility with Qt for MCUs 2.11 (see Qt for MCUs 2.11 LTS released). This release does not add any new functionality.

Measuring activity is not about producing more metrics. It is about supporting better decisions and enabling continuous improvement. We restricted our analysis to main/master to observe validated flow and kept visualizations simple to promote adoption across the community.

In my last post, I made a solemn vow to not touch Kapsule for a week. Focus on the day job. Be a responsible adult.

Success level: medium.

I did get significantly more day-job work done than the previous week, so partial credit there. But my wife's mother and sister are visiting from Japan, and they're really into horror movies. I am not. So while they were watching people get chased through dark corridors by things with too many teeth, I was in the other room hacking on container pipelines with zero guilt. Sometimes the stars just align.

coding while untold horrors occur in the next room

Here's what came out of that guilt-free hack time.

Konsole integration: it's actually done

containers in new tab menu

The two Konsole merge requests from the last post—!1178 (containers in the New Tab menu) and !1179 (container association with profiles)—are merged. They're in Konsole now. Shipped.

Building on that foundation, I've got two more MRs up:

!1182 adds the KapsuleDetector—the piece that actually wires Kapsule into Konsole's container framework. It uses libkapsule-qt to list containers over D-Bus and OSC 777 escape sequences for in-session detection, following the same pattern as the existing Toolbox and Distrobox detectors. It also handles default containers: even if you haven't created any containers yet, the distro-configured default shows up in the menu so you can get into a dev environment in one click.

!1183 is a small quality-of-life addition: when containers are present, a Host section appears at the top of the container menu showing your machine's hostname. Click it, and you get a plain host terminal. This matters because once you set a container as your default, you need a way to get back to the host without going through settings. Obvious in hindsight.

The OSC 777 side of this lives in Kapsule itself—kapsule enter now emits container;push / container;pop escape sequences so Konsole knows when you've entered or left a container. This is how the tab title and container indicator stay in sync.
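For anyone curious what such an escape sequence looks like, here is a rough sketch. The container;push / container;pop keywords come from the post; the exact field layout (name, runtime) follows the Toolbox-style convention and may differ from what kapsule enter actually emits.

import sys

ESC = "\033"

def container_push(name, runtime="kapsule"):
    # Announce to the terminal that a container session begins (OSC 777).
    sys.stdout.write(ESC + "]777;container;push;" + name + ";" + runtime + ESC + "\\")
    sys.stdout.flush()

def container_pop():
    # Announce that the session has returned to the host.
    sys.stdout.write(ESC + "]777;container;pop;;" + ESC + "\\")
    sys.stdout.flush()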

Four merge requests across two repos (Konsole and Kapsule) to get from "Konsole doesn't know Kapsule exists" to "your containers are in the New Tab menu and your terminal knows when you're inside one." Not bad for horror movie time.

Configurable host mounts: the trust dial is real

In the last post, I talked about making filesystem mounts configurable—turning the trust model into a dial rather than a switch. That's shipped now.

--no-mount-home does what it says—your home directory stays on the host, the container gets its own. --custom-mounts lets you selectively share specific directories. And --no-host-rootfs goes further, removing the full host filesystem mount entirely and providing only the targeted socket mounts needed for Wayland, audio, and display to work.

The use case I had in mind was sandboxing AI coding agents and other tools you don't fully trust with your home directory. But it's also useful for just keeping things clean—some containers don't need to see your host files at all.

Snap works now

Here's a screenshot of Firefox running in a Kapsule container on KDE Linux, installed via Snap:

screenshot of firefox in snap in kapsule

I expected this one to be a multi-day ordeal. It wasn't.

Snap apps—like Firefox on Ubuntu—run in their own mount namespace, and snap-update-ns can't follow symlinks that point into /.kapsule/host/. So our Wayland, PipeWire, PulseAudio, and X11 socket symlinks were invisible to anything running under Snap, resulting in helpful errors like "Failed to connect to Wayland display."

The fix was straightforward: replace all those symlinks with bind mounts via nsenter. Bind mounts make the sockets appear as real files in the container's filesystem, so Snap's mount namespace setup handles them correctly. That was basically it.

While I was in there, I batched all the mount operations into a single nsenter call instead of running separate incus exec invocations per socket. That brought the mount setup from "noticeably slow" to "instant"—roughly 10-20x faster on a cold cache. And the mount state is now cached per container, so subsequent kapsule enter calls skip the work entirely.
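Conceptually, the batched version of that fix looks something like the sketch below. The paths, the PID, and the /.kapsule/host/ prefix are illustrative only; the real implementation goes through Incus and handles far more edge cases.

import subprocess

def bind_mount_sockets(container_pid, socket_paths):
    # Illustrative only: enter the container's mount namespace once and turn
    # each socket that used to be a symlink into /.kapsule/host/ into a real
    # bind mount, all in a single nsenter invocation.
    commands = " && ".join(
        "touch " + path + " && mount --bind /.kapsule/host" + path + " " + path
        for path in socket_paths
    )
    subprocess.run(
        ["nsenter", "-t", str(container_pid), "-m", "sh", "-c", commands],
        check=True,
    )

bind_mount_sockets(12345, ["/run/user/1000/wayland-0"])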

NVIDIA GPU support (experimental)

jensen huang with nvidia logo and chip

This one's interesting both technically and in terms of where it's going.

Kapsule containers are privileged by design—that's what lets us do nesting, host networking, and all the other things that make them feel like real development environments. The problem is that upstream Incus and LXC both reject their NVIDIA runtime integration on privileged containers. The upstream LXC hook expects user-namespace UID/GID remapping, and the default codepath wants to manage cgroups for device isolation. Neither applies to our containers.

So I wrote a custom LXC mount hook that runs nvidia-container-cli directly with --no-cgroups (privileged containers have unrestricted device access anyway) and --no-devbind (Incus's GPU device type already passes the device nodes through). This leaves nvidia-container-cli with exactly one job: bind-mount the host's NVIDIA userspace libraries into the container rootfs so CUDA, OpenGL, and Vulkan work without the container image shipping its own driver stack.
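As a very rough sketch, such a hook could look like the following. The --no-cgroups and --no-devbind flags are the ones named above; the LXC_ROOTFS_MOUNT environment variable and the capability flags are assumptions about the setup, not Kapsule's actual hook.

import os
import subprocess

def nvidia_mount_hook():
    # LXC exports the container rootfs location to its mount hooks.
    rootfs = os.environ["LXC_ROOTFS_MOUNT"]
    # Bind-mount the host NVIDIA userspace libraries into the rootfs.
    subprocess.run(
        [
            "nvidia-container-cli", "configure",
            "--no-cgroups",   # privileged containers already have device access
            "--no-devbind",   # Incus's gpu device type passes the nodes through
            "--utility", "--compute", "--graphics",
            rootfs,
        ],
        check=True,
    )

if __name__ == "__main__":
    nvidia_mount_hook()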

There's a catch, though. On Arch Linux, the injected NVIDIA libraries conflict with mesa packages. The container's package manager thinks mesa owns those files, and now there are mystery bind-mounts shadowing them. It works, but it's ugly and will cause problems during package updates. I hit this on Arch first, but I'd be surprised if other distros don't have the same issue—any distro where mesa owns those library paths is going to complain.

So NVIDIA support is disabled by default for now. The plan: build Kapsule-specific container images that ship stub packages for the conflicting files, and have images opt-in to NVIDIA driver injection via metadata. Two independent flags control the behavior: --no-gpu disables device passthrough entirely (still on by default), and --nvidia-drivers enables the driver injection.

Architecture: pipelines all the way down

turtles all the way down meme

The biggest behind-the-scenes change in v0.2.1 is the complete restructuring of container creation. The old container_service.py was a 1,265-line monolith that did everything sequentially in one massive function. It's gone now.

In its place is a decorator-based pipeline system. Container creation is a series of composable steps, each a standalone async function that handles one concern:

Pre-creation:     validate → parse image → build config → store options → build devices
Incus API call:   create instance
Post-creation:    host network fixups → file capabilities → session mode
User setup:       mount home → create account → configure sudo → custom mounts → host dirs → enable linger → mark mapped

Each step is registered with an explicit order number and gaps of 100 between steps, so inserting new functionality doesn't require renumbering everything. The decorator handles sorting by priority with stable tie-breaking, so import order doesn't matter.
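For readers who like to see the shape of such a thing, here is a minimal sketch of a decorator-based step registry with explicit ordering. The names (creation_step, ctx) and the step bodies are invented for illustration, not Kapsule's actual code.

import asyncio

_STEPS = []

def creation_step(order):
    # Register a pipeline step under an explicit order number; gaps of 100
    # leave room to slot new steps in without renumbering existing ones.
    def decorator(func):
        _STEPS.append((order, func))
        return func
    return decorator

@creation_step(order=100)
async def validate(ctx):
    ...

@creation_step(order=200)
async def build_config(ctx):
    ...

async def run_pipeline(ctx):
    # sorted() is stable, so steps that share an order number keep their
    # registration order as the tie-breaker.
    for _, step in sorted(_STEPS, key=lambda entry: entry[0]):
        await step(ctx)

asyncio.run(run_pipeline({"name": "my-container"}))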

This pattern worked well enough that I plan to extend it to other large operations—delete, start, stop—as they accumulate their own pre/post logic.

On the same theme of "define it once, use it everywhere": container creation options are now defined in a single Python schema that serves as the source of truth for the daemon's validation, the D-Bus interface (which now uses a{sv} variant dicts, so adding an option never changes the method signature), and the C++ CLI's flag generation. Add a new option in Python, recompile the CLI, and you've got a --flag with help text and type validation. Zero manual C++ work.
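A stripped-down sketch of that single-schema idea might look like this; the Option class, the option names, and the validation rules are all hypothetical stand-ins for whatever Kapsule actually defines.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    type: type
    default: object
    help: str

# Hypothetical single source of truth for container-creation options.
CREATE_OPTIONS = [
    Option("mount_home", bool, True, "Bind-mount the host home directory"),
    Option("custom_mounts", list, [], "Extra host directories to share"),
]

def validate(options):
    # Daemon-side validation: reject unknown keys and wrong types.
    known = {option.name: option for option in CREATE_OPTIONS}
    for key, value in options.items():
        if key not in known:
            raise ValueError("unknown option: " + key)
        if not isinstance(value, known[key].type):
            raise TypeError(key + " expects " + known[key].type.__name__)
    return options

def cli_flags():
    # The same schema can be rendered into CLI flag definitions.
    return ["--" + option.name.replace("_", "-") for option in CREATE_OPTIONS]

print(cli_flags())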

The long-term plan is to use this same schema to dynamically generate the graphical UI in a future KCM. Define the option once, get the CLI flag, the D-Bus parameter, the daemon validation, and the Settings page widget—all from the same schema.

First external contributor

Marie Ramlow (@meowrie) submitted a fix for PATH handling on NixOS—the first external contribution to Kapsule. I don't have a NixOS setup to test it on, so this one's on trust. That's open source for you: someone shows up, fixes a problem you can't even reproduce, and you merge it with gratitude and a prayer.

Testing

The integration test suite grew substantially. New tests cover host mount modes, custom mount options, OSC 777 escape sequence emission, and socket passthrough. The test runner now does two full passes—once with the default full-rootfs mount and once with --no-host-rootfs—to verify both configurations work.

Bugs caught during testing that would have been embarrassing in production: a race condition in the Incus client where sequential device additions could clobber each other (the client wasn't waiting for PUT operations to complete), and Alpine containers failing because they don't ship /etc/sudoers.d by default.

CI/CD: of all the things to break

oil pipeline fire

I finally built out the CI/CD pipelines. They use the same kde-linux-builder image that builds KDE Linux itself—mainly because it's one of the few CI images with sudo access enabled, which we need for Incus operations.

The good news: the pipeline successfully builds the entire project, packages it into a sysext, deploys it to a VM, and runs the integration tests. That whole chain works. I was pretty pleased with myself for about ten minutes.

The bad news: when the first test tries to actually create a container, the entire CI job dies. Not "the test fails." Not "the runner reports an error." The whole thing just... stops. No exit code, no error message, no logs after that point. Nothing.

I'm fairly sure it's causing a kernel panic in the CI runner's VM. Which is, you know, not great.

Debugging this has been miserable. I can't get any logs after the panic because there are no logs—the kernel is gone. I tried adding debug prints before each step in the container creation pipeline to isolate exactly where it dies. The prints don't come through either, probably because of output buffering, or maybe the runner agent doesn't get a chance to stream the output to GitLab before the entire VM goes down.

The weird part: it's not a nested virtualization issue. Regular Incus works fine on the same runner—you can create containers interactively, no problem. And it doesn't reproduce on KDE Linux at all. Something about the specific combination of the CI environment and Kapsule's container creation path is triggering it, and I have no way to see what.

I've shelved this for now. The pipeline is there, the build and deploy stages work, and the tests would work if the runner didn't kernel panic when Kapsule tries to create a container. If anyone reading this has ideas, I'm all ears.

What's next: custom container images

shipping containers

The biggest item on my plate is custom container images. Right now, Kapsule uses stock distribution images from the Incus image server. They work, but they're not optimized for our use case—things like the NVIDIA stub packages I mentioned above need to live somewhere, and "just install them at container creation time" adds latency and fragility.

Incus uses distrobuilder for image creation, so the plan is straightforward: image definitions live in a directory in the Kapsule repo, a CI pipeline invokes distrobuilder to build them, and the images get published to a server.

The "published to a server" part is where it gets political. I talked to Ben Cooksley about hosting Kapsule images on KDE infrastructure, and he's—understandably—not yet convinced that Kapsule needs its own image server. It's a fair pushback. This is all still experimental, and spinning up image hosting infrastructure for a project that might change direction is a reasonable thing to be cautious about.

So for now, I'll host the images on my own server. They probably won't be the default, since the server is in the US and download speeds won't be great for everyone. But they'll be available for testing and for anyone who wants the NVIDIA integration or other Kapsule-specific tweaks. I'll bug Ben again when the image story is more fleshed out and there's a clearer case for why KDE infrastructure should host them.

Beyond that: get the Konsole MRs (!1182 and !1183) reviewed and merged, and figure out why CI kills the kernel. The usual.

Plasma 6.6 makes your life as easy as possible, without sacrificing the flexibility or features that have made Plasma the most versatile desktop in the known universe.

With that in mind, we’ve improved Plasma’s usability and accessibility, and added practical new features into the mix.

Check out what’s new and how to use it in our (mostly) visual guide below:

A script element has been removed to ensure Planet works properly. Please find it in the original post.

Highlights

On-Screen Keyboard

Enjoy this new and improved on-screen keyboard

Spectacle Text Recognition

Extract text from screenshots in Spectacle

Plasma Setup

Set up a user account after the operating system has been installed

New Features

Those who like tailoring the look and feel of their environment can now turn their current setup into a new global theme! This custom global theme can be used for the day and night theme switching feature.

A more subtle way of modifying the look of your apps is by changing the color intensity of every frame:

Choose emoji (Meta+.) skin tones more easily with a new skin tone selector:

A major focus of Plasma 6.6 has been speeding up common workflows. So if the system has a camera, you can quickly connect to a new Wi-Fi network simply by scanning its QR code:

Hover the pointer over any app’s icon playing sound in the task manager, and scroll to adjust its volume:

And save yourself a click by enabling Open on hover in your Windows List widget. You can also filter out windows not on the current desktop or activity:

Hold down the Alt key and double-click on a file or folder on the desktop to bring up its properties:

Accessibility

To help everyone use and enjoy Plasma, we’ve improved accessibility across the board.

If you have colorblindness, check out the filters on System Settings' Accessibility page, under Color Blindness Correction. Plasma 6.6 adds a new grayscale filter, bringing the total to four filters that account for different kinds of colorblindness:

Still in the area of enhancements for the visually impaired, our Zoom and Magnifier feature has gained a new tracking mode that always keeps the pointer centered on the screen, bringing the total to four modes:

In addition, we added support for “Slow Keys” on Wayland, and the standardized “Reduced Motion” accessibility setting.

Screenshots and Screen Recording

Speaking of accessibility, Spectacle can now recognize and extract text from images it scans. Among other use cases, this makes it easy to write alt texts for visually-impaired users:

You can also filter windows out of a screencast by choosing a special option from the pop-up menu that appears when right-clicking a window’s title bar:

Virtual Keyboard

Plasma 6.6 also features a new on-screen keyboard! Say hello to the brand-new Plasma Keyboard:

Plasma Setup

Plasma Setup is the new first-run wizard for Plasma, and creates and configures user accounts separately from the installation process.

With Plasma Setup, the technical steps of operating system installation and disk partitioning can be handled separately from user-facing steps like setting up an account, connecting to a network, and so on. This facilitates important use cases such as:

  • Companies shipping Plasma pre-installed on devices
  • Businesses or charity organizations refurbishing computers with Plasma to give them new life
  • Giving away or selling a computer with Plasma on it, without giving the new owner access to the previous owner’s data

But That’s Not All…

Plasma 6.6 is overflowing with goodies, including:

  • The ability to have virtual desktops only on the primary screen
  • An optional new login manager for Plasma
  • Optional automatic screen brightness on devices with ambient light sensors
  • Optional support for using game controllers as regular input devices
  • Font installation in the Discover software center, on supported operating systems
  • Choose process priority in System Monitor
  • Standalone Web Browser and Audio Volume widgets can be pinned open
  • Support for USB access prompts and a visual refresh of other permission prompts
  • Smoother animations on high-refresh-rate screens

To see the full list of changes, check out the complete changelog for Plasma 6.6.

In Memory of Björn Balazs

In September, we lost our good friend Björn Balazs to cancer.

An active and passionate contributor, Björn was still holding meetings for his Privact project from bed even while seriously ill during Akademy 2025.

Björn’s drive to help people achieve the privacy and control over technology that he believed they deserved is the stuff FLOSS legends are made of.

Björn, you are sorely missed and this release is dedicated to you.

Plasma Setup, the new wizard that guides users through the initial configuration of KDE Plasma, is making its debut as part of the Plasma 6.6 release!

With Plasma Setup, the technical steps of operating system installation and disk partitioning can be handled separately from user-facing steps like setting up an account, connecting to a network, and so on. This facilitates important use cases such as:

  • Companies shipping Plasma pre-installed on devices
  • Businesses or charity organizations refurbishing computers with Plasma to give them new life
  • Giving away or selling a computer with Plasma on it, without giving the new owner access to the previous owner’s data

This has been several months in the making, as it has been my primary focus ever since I was hired by KDE e.V. last year. The project has seen a ton of work, and we've collaborated with distros and other stakeholders to ensure it meets the needs of the community.

I am very excited to see Plasma Setup finally in the hands of users, and how it will make KDE (and Linux/FOSS in general) more accessible to a wider audience. This is a key piece that was needed in order for Plasma to be more viable and accessible to a whole class of users (non-technical end-users, businesses, governments, etc.).

There are still plenty of improvements that can be made, and contributions are very welcome! If you are interested in contributing, please check out the project on KDE's GitLab: https://invent.kde.org/plasma/plasma-setup

Monday, 16 February 2026

Here is an overview of the new features added to the Quick3D.Particles module for Qt 6.10 and 6.11. The goal was to support effects that look like they are interacting with the scene, and to do this without expensive simulations or calculations. Specifically, we'll be using a rain effect as an example when going through the new features.

Sunday, 15 February 2026

Tellico 4.2 is available, with some improvements and bug fixes. This release now requires Qt6 (> 6.5) as well as KDE Frameworks 6. One notable behavior change is that when images are removed from the collection, the image files themselves are also removed from the collection data folder.

Users have provided substantial feedback in a number of areas to the mailing list recently, which is tremendously appreciated. I’m always glad to hear how Tellico is useful and how it can be better. Back up those data files!

Improvements:

Bug Fixes:

  • Fixed bug with XML generation for user-locale (Bug 512581).