
Monday, 15 September 2025

KWin Gamepad Plugin: Weeks 3-4

Picking up from weeks 1+2 ( research + prototypes with libevdev/uinput ), these past two weeks were about moving from "research-only mode" to programming logic that lives inside KWin: detecting gaming controllers and their input events, keeping Plasma awake on controller activity, handling hot-plug and pre-existing connections on startup, and laying down the first mappings from controller input to keyboard/mouse actions without stepping on other apps that use the controllers.

From the start my mentors and I have had a general idea of the features we wanted to add but weren't too sure how to implement them. After some thinking and experimenting they advised me to start off with a KWin::Plugin. This would allow us to start introducing the gaming controller functionality to KWin while avoiding having to edit the core or guts of KWin. It would also be a great entry point for current and future game controller input objectives, allowing us to start small with a first-party KWin plugin, build on it, and possibly integrate it into core functionality later.

When it comes to creating KWin plugins I had a few options:

  • Scripts: Written in QML/JavaScript and used for automating window management, tiling, shortcuts, etc.
  • Effects: Implement visual effects on windows, the desktop, or transitions.
  • Core/Native: These are built into KWin itself and extend KWin’s internal functionality.

Since the plugin needs low-level device access (monitoring /dev/input/event*, listening for udev hot-plug events, opening fds, and reacting to evdev events), the best choice was a Core/Native plugin. Effect and Script plugins, by contrast, aren't designed to open devices or do long-running I/O; they live inside the rendering and scripting layers.

I started off by searching for an example of how to build a KWin plugin so I could start learning how to build my own. Thankfully my mentor @zamundaaa provided me with some great examples:

  • Example / Tutorial plugin located in src/plugin/examples/plugin
  • Screenshots plugin located in src/plugins

Between both of these examples and mentoring I was able to piece together the scaffolding ( essential parts ) of a KWin plugin and put together the first version of this plugin, the gamepad plugin, located in kwin/src/plugins/gamepad. At this point the plugin is structured as follows:

main.cpp // Entry point; defines the GamepadManagerFactory class
metadata.json // Declares the plugin to KWin and defines information about it
CMakeLists.txt // C++ build/installation/logging wiring
gamepadManager.{cpp/h} // Plugin logic; defines the GamepadManager class
gamepad.{cpp/h} // Game controller object; wrapper class for a physical controller

Implementation notes

GamepadManagerFactory

The GamepadManagerFactory class serves simply as the entry point for the plugin. It's a factory class, i.e. a class whose job is to create objects of another type. Like the examples, it inherits from PluginFactory, declares PluginFactory as its interface, and points to this plugin's metadata.json file. It initializes the plugin through its create() function, which returns a GamepadManager.
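The shape of that factory can be illustrated with a plain C++ sketch; the class and method names here mirror the post, but the types are hypothetical stand-ins, not the actual KWin PluginFactory API:

```cpp
#include <cassert>
#include <memory>
#include <string>

// Hypothetical stand-in for the plugin interface the factory produces.
class Plugin {
public:
    virtual ~Plugin() = default;
    virtual std::string name() const = 0;
};

// Stand-in for the GamepadManager the real factory returns.
class GamepadManager : public Plugin {
public:
    std::string name() const override { return "gamepad"; }
};

// The factory's only job: construct the plugin object on request.
class GamepadManagerFactory {
public:
    std::unique_ptr<Plugin> create() const {
        return std::make_unique<GamepadManager>();
    }
};
```

In the real plugin, KWin calls the factory's create() through the PluginFactory interface and the metadata.json declaration, rather than the caller instantiating it directly.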

GamepadManager

The GamepadManager class serves as the central coordinator (the “brain” or “hub”) of the entire project. While creating it I took a lot of inspiration from src/backend/drm/drm_backend.{cpp/h}, which is itself responsible for handling DRM/GPU devices. GamepadManager covers many responsibilities: it owns and manages all gamepad devices, and handles discovery (startup enumeration, hot-plug), lifecycle (adding/removing), and communication (signals when pads are added/removed, or when their state changes). Overall it's responsible for keeping track of the current set of controllers and their status.

Hot-plug and pre-existing device detection:

For this part many of the DRM backend patterns were reused. The first thing the manager class does on initialization is create two QMetaObject::Connections that monitor the current KWin session for devicePaused and deviceResumed signals. This helps track devices when Plasma goes in and out of sleep/suspend, which causes devices to be paused and resumed. It then enumerates all event devices located in /dev/input/event* to handle any game controllers that were connected before startup. If it discovers such an event device it adds the gamepad ( starts tracking it and its input ).

// On init:
// Enumerate current input nodes to filter and add ONLY event nodes
QDir dir(QStringLiteral("/dev/input"));
const auto files = dir.entryList({QStringLiteral("event*")}, QDir::Files | QDir::Readable | QDir::System);
for (const QString &file : files) {
    const QString path = dir.absoluteFilePath(file);
    if (!isTracked(path)) {
        addGamepad(path);
    }
}

Finally, using udev, it monitors the subsystems and filters for "input" subsystem events only. It uses a QSocketNotifier to produce signal notifications from udev events and creates a connection between that notifier and a member function, handleUdevEvent, that handles events coming from the udev monitor when an input device is detected. Some checks are performed to verify that the device is a gaming controller, such as looking for expected input event codes and types. These include codes like BTN_JOYSTICK and BTN_GAMEPAD, which gaming controllers commonly define, as well as joystick or D-pad capabilities. If the checks pass, the game controller is "added": the device is wrapped in a Gamepad object, kept track of, and its presence monitored.

// setup udevMonitor
if (m_udevMonitor) {
    m_udevMonitor->filterSubsystemDevType("input");
    const int fd = m_udevMonitor->fd();
    if (fd != -1) {
        m_socketNotifier = std::make_unique<QSocketNotifier>(fd, QSocketNotifier::Read);
        connect(m_socketNotifier.get(), &QSocketNotifier::activated, this, &GamepadManager::handleUdevEvent);
        m_udevMonitor->enable();
    }
}
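The "is this a game controller" check can be sketched as pure logic. This hypothetical classifier models the device's supported event codes as sets; the real plugin would query the device through libevdev instead (e.g. with libevdev_has_event_code()), and the exact combination of checks is an assumption here:

```cpp
#include <cassert>
#include <set>

// Event codes from <linux/input-event-codes.h>, inlined for illustration.
constexpr unsigned BTN_JOYSTICK_CODE = 0x120;
constexpr unsigned BTN_GAMEPAD_CODE  = 0x130; // same value as BTN_SOUTH
constexpr unsigned BTN_DPAD_UP_CODE  = 0x220;
constexpr unsigned ABS_X_CODE        = 0x00;

// Hypothetical check: treat a device as a game controller if it reports
// gamepad/joystick buttons, or a D-pad plus an absolute axis (analog stick).
bool looksLikeGameController(const std::set<unsigned> &keyCodes,
                             const std::set<unsigned> &absAxes)
{
    if (keyCodes.count(BTN_GAMEPAD_CODE) || keyCodes.count(BTN_JOYSTICK_CODE)) {
        return true;
    }
    return keyCodes.count(BTN_DPAD_UP_CODE) && absAxes.count(ABS_X_CODE);
}
```

A keyboard or mouse node enumerated from /dev/input/event* would report none of these codes and be skipped.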

Gamepad

Gamepad is a wrapper class. Its purpose is to be tied to a physical controller: one Gamepad object per physical game controller. This enables quick access to the device and allows the physical controller to be treated like an object. The class is also responsible for device input handling, refreshing Plasma's idle timer, and button-to-keyboard/mouse mappings. In the future things might get split up into separate files, but as it stands it handles a lot. As with GamepadManager, this class takes a lot of inspiration from DRM backend patterns.

Detecting Input Events:

Once a gaming controller device is detected, it gets wrapped in a Gamepad object, which in turn wraps the controller in a libevdev object pointer. This gives access to the controller through the libevdev API, making it easier to work with the device and monitor its input events. Like GamepadManager, the first thing this class does is use a QSocketNotifier to produce notifications from the controller's fd, i.e. to monitor for input. It then creates a connection between that notifier and a member function, handleEvdevEvent, which handles all incoming input events from that device.

libevdev *evdev = createEvDevice();
if (evdev) {
    m_evdev.reset(evdev);

    m_notifier = std::make_unique<QSocketNotifier>(m_fd, QSocketNotifier::Read, this);
    connect(m_notifier.get(), &QSocketNotifier::activated, this, &Gamepad::handleEvdevEvent);

    qCDebug(KWIN_GAMEPAD) << "Connected to Gamepad ( new libevdev* ): " << libevdev_get_name(m_evdev.get()) << "at" << m_path;
}

Plasma Idle Refresh On Controller Activity

With the ability to monitor all input events from the device, the plugin uses that information to know when to reset Plasma's idle timer. For this, Gamepad includes the input.h file and calls input()->simulateUserActivity() whenever an input event is detected from the controller. This resets Plasma's idle timer and prevents the system from going into sleep/suspend while only a gaming controller is being used.

// reset idle time
input()->simulateUserActivity();

Controller -> Keyboard & Mouse Mapping

Gamepad uses libevdev API functions to read input events, identify each specific event, and map it to a keyboard or mouse event. Using libevdev_next_event() it fetches the input events coming from the game controller, then identifies each one by its event type, code, and value. To simulate a mouse and keyboard, the core/inputdevice.h file is included and used to declare a GenericInputDevice, which inherits from InputDevice. That GenericInputDevice effectively behaves like a virtual keyboard and mouse inside KWin's input stack.

When specific libevdev input events are identified, such as EV_KEY + BTN_SOUTH ( A button press ) or EV_KEY + BTN_EAST ( B button press ), it calls InputDevice::sendKey() to simulate a keyboard key press and inject the desired keys into KWin's input pipeline; in this case Enter for A ( BTN_SOUTH ) and Escape for B ( BTN_EAST ). To emulate the mouse/pointer, the plugin calls InputDevice::sendPointerButton() for left and right mouse buttons, and InputDevice::sendPointerMotionDelta() for pointer movement.

[architecture diagrams]

Here is a list of all the buttons to keyboard/mouse mappings:

Face Buttons
------------
BTN_SOUTH → Enter (Qt::Key_Return)
BTN_EAST → Escape (Qt::Key_Escape)
BTN_NORTH
BTN_WEST

Bumpers
-------
BTN_TL → Alt (Qt::Key_Alt)
BTN_TR → Tab (Qt::Key_Tab)

Trigger Buttons
---------------
ABS_Z → Mouse Left Click
ABS_RZ → Mouse Right Click

D-Pad
-----
BTN_DPAD_LEFT → Arrow Left (Qt::Key_Left)
BTN_DPAD_RIGHT → Arrow Right (Qt::Key_Right)
BTN_DPAD_UP → Arrow Up (Qt::Key_Up)
BTN_DPAD_DOWN → Arrow Down (Qt::Key_Down)

Analog Sticks
-------------
ABS_RX / ABS_RY → Pointer Motion

Center Buttons
--------------
BTN_SELECT → Show On-Screen Keyboard ( WIP )
BTN_START → Meta/Super (Qt::Key_Meta)
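The button portion of the table above boils down to a lookup. Here is a self-contained sketch of that idea; the function name and the string return type are illustrative only (the real plugin dispatches on libevdev type/code/value and calls InputDevice::sendKey() with Qt key codes):

```cpp
#include <cassert>
#include <map>
#include <string>

// Button codes from <linux/input-event-codes.h>, inlined for illustration.
constexpr unsigned BTN_SOUTH_CODE      = 0x130;
constexpr unsigned BTN_EAST_CODE       = 0x131;
constexpr unsigned BTN_TL_CODE         = 0x136;
constexpr unsigned BTN_TR_CODE         = 0x137;
constexpr unsigned BTN_START_CODE      = 0x13b;
constexpr unsigned BTN_DPAD_UP_CODE    = 0x220;
constexpr unsigned BTN_DPAD_DOWN_CODE  = 0x221;
constexpr unsigned BTN_DPAD_LEFT_CODE  = 0x222;
constexpr unsigned BTN_DPAD_RIGHT_CODE = 0x223;

// Map a controller button code to the name of the emulated key.
// Returns an empty string for unmapped buttons.
std::string mappedKey(unsigned code)
{
    static const std::map<unsigned, std::string> table = {
        {BTN_SOUTH_CODE, "Return"},    {BTN_EAST_CODE, "Escape"},
        {BTN_TL_CODE, "Alt"},          {BTN_TR_CODE, "Tab"},
        {BTN_START_CODE, "Meta"},
        {BTN_DPAD_LEFT_CODE, "Left"},  {BTN_DPAD_RIGHT_CODE, "Right"},
        {BTN_DPAD_UP_CODE, "Up"},      {BTN_DPAD_DOWN_CODE, "Down"},
    };
    const auto it = table.find(code);
    return it != table.end() ? it->second : std::string();
}
```

A table-driven shape like this is also what makes the planned config-file mapping (see "What's next") a natural extension: the table would simply be loaded instead of hard-coded.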

Prevent Stepping On Other Apps

It's essential that the plugin doesn't emulate keyboard and mouse input while another app is reading from the controller; in such cases the device is most likely being used for something else, not for navigating the desktop. To achieve this, the GamepadManager class creates an inotify instance and adds a watch on the device node of each game controller that's added as a Gamepad. Whenever inotify produces a notification, a function, GamepadManager::handleFdAccess, is called, which adjusts a counter in Gamepad, Gamepad::m_usageCount: +1 if the event mask contains IN_OPEN, -1 if it contains IN_CLOSE_WRITE or IN_CLOSE_NOWRITE. The plugin will only attempt to emulate keyboard/mouse when m_usageCount is 0. This prevents emulation of keyboard and mouse while other apps have the game controller open / in use.

// Process all inotify events in the buffer
for (char *ptr = buffer; ptr < buffer + length;) {
    struct inotify_event *event = reinterpret_cast<struct inotify_event *>(ptr);

    auto it = m_watchesToGamepads.find(event->wd);
    if (it != m_watchesToGamepads.end()) {
        Gamepad *pad = it.value();
        if (event->mask & IN_OPEN) {
            pad->countUsage(+1);
        } else if (event->mask & (IN_CLOSE_WRITE | IN_CLOSE_NOWRITE)) {
            pad->countUsage(-1);
        }
        qCDebug(KWIN_GAMEPAD) << "Device" << pad->path() << "in use by:" << pad->usageCount() << " other apps";
    }
    ptr += sizeof(struct inotify_event) + event->len;
}

Opt-In

Many of the native plugins that ship with KWin are enabled by default, but our gaming controller plugin will be disabled by default and made opt-in. This lets users start experimenting with and benefiting from the plugin without risking breaking existing game controller input on their system.

{
    "KPlugin": {
        "Category": "Input",
        "Description": "Enable KWin game controller input detection",
        "EnabledByDefault": false,
        "License": "GPL",
        "Name": "gamepad"
    },
    "X-KDE-ServiceTypes": ["KWin/Plugin"]
}

Note the "EnabledByDefault": false entry: not enabled by default.

Testing

  • Controller awareness at startup and hot-plug: tested in a development session; KWin logs show the plugin picking up controllers in both scenarios, works as expected.
  • Preventing sleep/suspend: tested in a development session. Set the suspend timer to 1 min, repeatedly pressed A and B back and forth, and at 5 min no suspend was initiated; works as expected.
  • USB and Bluetooth connectivity support: tested in a development session; KWin logs show the plugin picking up the controller over both transports, works as expected.
  • Mapping from controller to keyboard and mouse: tested in a development session; all buttons map to the expected keyboard and mouse actions, works as expected.
  • Back-off on grab: tested in a development session. Verified the mappings work, started the Steam app, and verified the mappings were no longer active.

Testing device: 8Bitdo gaming controller (USB/2.4GHz/Bluetooth)

What’s next from here

  • Integration into KWin Proper: Start pushing changes upstream for others to test.
  • Map to Virtual Keyboard: Allow users to navigate and get input from a virtual keyboard. Might open the way to logging in using only a game controller.
  • Test Cases: As per best practices when developing for KWin.
  • KCM integration: A GUI option for users to toggle the plugin on/off. Groundwork for more robust, user-defined button remapping.
  • Use Config for Mapping: Use a config file to keep track of and read all the button-to-keyboard/mouse mappings.

Reference documentation:

Check out the source code here: KWin Gamepad Plugin: https://invent.kde.org/yorisoft/kwin/-/tree/work/yorisoft/gamepad-plugin/src/plugins/gamepad

I was at Akademy 2025 the week before last, where I did some preliminary research on optimizing the VM viewer's display rendering in Karton. After some more work this past week, it's somewhat here! I'm still finishing up the merge request, but exciting news to come!

This has been something I’ve been planning on for quite a while now and will significantly improve the experience using Karton :)

Here's a comparison with an old video I had.

Old Rendering Pipeline

My original approach to rendering listened to the display-primary-create and invalidate-display-primary SPICE signals. Every time it received a callback, it would create a new QImage and render that to the QQuickItem (the viewer window). As you can imagine, this was very inefficient, as it essentially generated a new image for every single frame being rendered. It also suffered a lot from screen tearing any time there were sudden changes on the screen.

You can read more about my experiences in my SPICE client blog.

We can do better!

Rendering via OpenGL can offload a lot of these tasks to the GPU and can significantly improve performance. I had known about GL properties in SPICE for a while now, but I kept putting it off since I really didn’t want to deal with any more graphics stuff after my last attempt.

Fast forward to the week before last: I was attending my first ever KDE Akademy in Berlin and all of a sudden gained some motivation.

It was really exciting hearing talks about all the kool things happening in KDE.

gl-draw

My first order of business was getting the gl-draw signal to properly receive gl-scanouts from my SPICE connection. After setting up the callback, I found out that I had to reconfigure my VMs to properly support it.

This was easy enough, as I wrote the Karton VM installation classes a few months ago using the libvirt domain XML format. VMs need GL and 3D acceleration enabled through the graphics element in the XML. The socket connection to SPICE also had to be switched from TCP to UNIX, which is set to /tmp/spice-vm{uuid}.sock. As a result, VMs previously configured in Karton will no longer work, since the old rendering pipeline has been removed.

<graphics type="spice" socket="/tmp/spice-vm{uuid}.sock">
    <listen type="socket" socket="/tmp/spice-vm{uuid}.sock"/>
    <gl enable="yes"/>
</graphics>
<video>
    <model type="virtio" heads="1" primary="yes">
        <acceleration accel3d="yes"/>
    </model>
    <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
</video>

An example libvirt domain XML snippet generated by Karton

Once properly configured, I was able to get SpiceGlScanout objects from my callback linked to the gl-draw signal. Now, I needed to render these scanouts onto my QQuickItem canvas.

EGL stuff

Having no background in graphics, I pretty much had no idea what I was doing by this point.

The SpiceGlScanout is a struct that looks like this:

struct SpiceGlScanout {
    gint fd;
    guint32 width;
    guint32 height;
    guint32 stride;
    guint32 format;
    gboolean y0top;
};

The width, height, stride, etc. are all parameters that can be used to set up your final rendered frame, but the important field is fd (file descriptor), which is “a drm DMABUF file that can be imported with eglCreateImageKHR”. I didn't know what that was, but at least I learned I should be using the EGL library to do the processing.

I had found some forum articles (Qt forum, Arm developer forum) related to rendering OpenGL textures which used the EGL library and were quite helpful. I also looked at the SPICE GTK widget source code which gave me some ideas on the GL parameters to work with.

From these references, I saw that they pretty much followed the same pattern. Very simply put:

-> create an EGL image from a bunch of attributes/settings
-> generate a texture from the fd
-> bind the texture to a texture target
-> call "glEGLImageTargetTexture2DOES" (binds the EGL image as the texture's storage; still felt like magic to me)
-> destroy the EGL image
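Step one of that pattern, building the import attributes for eglCreateImageKHR, can be shown without a live EGL display. This sketch inlines the EGL_EXT_image_dma_buf_import token values for illustration and only assembles the single-plane attribute list; the function name and plain-int types are my own stand-ins, not real EGL API:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// EGL attribute tokens, inlined for illustration (values from eglext.h).
constexpr int kEglWidth            = 0x3057; // EGL_WIDTH
constexpr int kEglHeight           = 0x3056; // EGL_HEIGHT
constexpr int kEglDrmFourccExt     = 0x3271; // EGL_LINUX_DRM_FOURCC_EXT
constexpr int kEglPlane0FdExt      = 0x3272; // EGL_DMA_BUF_PLANE0_FD_EXT
constexpr int kEglPlane0OffsetExt  = 0x3273; // EGL_DMA_BUF_PLANE0_OFFSET_EXT
constexpr int kEglPlane0PitchExt   = 0x3274; // EGL_DMA_BUF_PLANE0_PITCH_EXT
constexpr int kEglNone             = 0x3038; // EGL_NONE

// Build the attribute list describing a single-plane dma-buf scanout,
// from the fields carried by SpiceGlScanout (fd, width, height, stride, format).
std::vector<int> dmabufAttribs(int fd, uint32_t width, uint32_t height,
                               uint32_t stride, uint32_t fourcc)
{
    return {
        kEglWidth,           static_cast<int>(width),
        kEglHeight,          static_cast<int>(height),
        kEglDrmFourccExt,    static_cast<int>(fourcc),
        kEglPlane0FdExt,     fd,
        kEglPlane0OffsetExt, 0,
        kEglPlane0PitchExt,  static_cast<int>(stride),
        kEglNone, // terminator
    };
}
```

In the real pipeline a list like this would be handed to eglCreateImageKHR with the EGL_LINUX_DMA_BUF_EXT target on a live EGLDisplay, after which the texture binding steps above take over.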

I originally tried setting the GL context properties manually, but there were some issues getting it to detect my display, and apparently with thread synchronization. Then I found out that Qt has the QOpenGLFunctions class, which wraps the GL functions and context handling and made my life a whole bunch easier.

OpenGL texture -> Qt

After a ton of trial and error, it looked like my EGL images were properly being created. Now I needed to render these GL textures to the QQuickItem.

The way you do this is: within the inherited updatePaintNode() function, you return a QSGNode which carries the information for updating that frame. Looking through the Qt documentation, QRhiTexture::NativeTexture is a struct that allows you to store the texture ID of an OpenGL image. With that, you can create a wrapper QRhiTexture from the native texture, with some of the generic context of your display.

Finally, you can use the createTextureFromRhiTexture() function on QQuickWindow, which creates a QSGTexture from that RHI texture for a QSGNode that can be returned by updatePaintNode(). And, we're done! Yay!

To sum it up, here’s the framebuffer pipeline:

gl-draw signal → receive gl-scanout → import GL texture → GL texture ID → QRhiTexture::NativeTexture → QRhiTexture → QSGTexture → QSGNode → QQuickItem

so much smoother! yes, I was very excited.

Socials

Website: https://kenoi.dev/

Mastodon: https://mastodon.social/@kenoi

GitLab: https://invent.kde.org/kenoi

GitHub: https://github.com/kenoi1

Matrix: @kenoi:matrix.org

Discord: kenyoy

Sunday, 14 September 2025

I’m happy to announce the 0.8.2 release of Subtitle Composer.

This release contains bug fixes and a few improvements, including:

  • Fixed issues and crashes with newer Qt6 versions
  • Fixed Waveform and VideoPlayer paint issues
  • Fixed PGS subtitle mime type
  • Improved Wayland compatibility
  • Improved GoogleCloudEngine translations
  • Added configurable whitespace detection to VobSub import
  • Replaced deprecated FFmpeg channel code
  • Require FFmpeg >= 5.1.5

As usual, all binaries are available from the download page.

Source tarball can be downloaded from download.kde.org.

— Mladen

I was able to attend the talks at Akademy this year in Berlin! The last time I attended Akademy in person was in 2022, so it was really nice being able to come back and meet everyone again.

I was unfortunately not able to attend the BoFs (development meetings) because I had to leave early. I did attend some meetings a few months earlier, however; you can read more in my Plasma sprint recap post.

Talks 🔗

Akademy runs two concurrent tracks of talks, so sometimes there were two talks at the same time that I wanted to attend, and I had a hard time deciding! Here are some of the ones I attended:

KDE Linux: Banana Growth Cycle 🔗

Harald released the KDE Linux Alpha to the public during the talk! I hadn't followed the project super closely, but it was awesome getting up to speed on the state of the project and the inner workings of the distribution.

The Role of New Languages in the Future of the Qt Ecosystem 🔗

I was introduced to Qt Bridges, which is an effort to go beyond Qt bindings for other languages and tightly integrate with them (ex. Rust, Python). Once this is more mature, it will likely be an easy recommendation for others to start learning Qt with, who don’t want to use C++!

KDE Goals - One Year Recap 🔗

It was interesting to see all the work that had been done on the KDE Goals so far!

I am actually involved with one of them this time around (“We care about your input”) through my work on plasma-keyboard. Blog post likely coming in a few months, once that work is further along!

Next-Gen Documentation Infrastructure for KDE 🔗

KDE’s reference API documentation has been a bit of a sore spot for me, since it didn’t support QML very well. As a result, I usually go through header files in the source code manually to figure out how to use libraries.

The talk went over Nicolas’s work on doing the mammoth task of porting all of KDE’s API documentation to QDoc from Doxygen, which properly supports QML. The new api.kde.org went live, and boy is it such an improvement! It’s much easier for me to point new developers to the Kirigami documentation now.

Fedora KDE Plasma Desktop Edition is Real, Now What? 🔗

I personally use Fedora on my workstation and laptops, and so it was cool to get some history about how Plasma on Fedora was revived in the past, and plans for the future. Neal also expressed some interest in a Plasma Bigscreen spin (similar to the one for Plasma Mobile), which could be pretty interesting once it becomes more mature!

Plasma Mobile Power Management: Reliable Sleep and Wake Ups 🔗

Bhushan gave an update on his power management work across the Plasma stack! He recently obtained an NLnet grant for the project, detailed on his blog.

Discussions 🔗

I was really happy to meet and discuss with quite a few people during the event.

I met Bart, Luca, Casey and Pablo from the postmarketOS project! As it is the main platform I test and develop Plasma Mobile with, it was really nice to finally meet some of their developers (I had met Bart and Luca at Akademy 2022)! I also was able to finally meet Florian, who has been collaborating with me in contributing to Plasma Mobile in the past few years!

I met Dorota, who has been working on Wayland input related things for the past few years and is in the process of pushing through updates to text-input-v3, and Jakob, who has been working on the KDE side pushing through the input-related KDE goals! We discussed some input-related topics, which was insightful given that I've only worked on the client side through plasma-keyboard (and my limited Wayland knowledge).

I also discussed some Kirigami page navigation topics with Marco. I’m doing a bit of investigation into how we can improve the way we navigate between pages in applications, and perhaps restricting the page left/right gesture to the edge (similar to iOS).

I’m back from Akademy 2025 in Berlin, and what an experience it was.

At this point, I’ve gotten a reputation as a “big picture guy”, so that’s what I’ll focus on here, rather than the details of my experiences in specific events. Lots of other folks are starting to write blog posts you can find on https://planet.kde.org about their Akademy experiences that I’m sure will be full of juicy details!


But basically, to me this year’s Akademy felt like it had a theme: “KDE is on the cusp of something big.”

Here’s one example: at the very cool C-base hackerspace, I was talking with someone who mused that 15 years ago, Akademy was full of KDE hackers talking about the government one day using our software… and then fast-forward 15 years and our two keynote speakers are from the German government talking about using KDE’s software!

Then we had a talk from the “End of 10” crowd about KDE’s campaign encouraging people to upgrade to Linux rather than buying new hardware capable of running Windows 11. And then as if to reflect on the success of this initiative, Patrick Fitzgerald gave a talk about how to do massive migrations from Windows to Linux, with examples provided of cases where literally thousands of machines were migrated to KDE software at a small fraction of the cost of moving to Windows 11.

Till Adam gave a talk about how commercial work changes relationships with respect to his experience in KDAB, a software consultancy founded by KDE contributors. I found this talk highly relevant given that David Edmundson and I just started a KDE-focused company this year ourselves. Alexandra Betouni also gave a talk about rising to the top of a company. Hmm, lots of companies!

We heard about how Mercedes is rolling out a vehicle powered by KDE technology under the hood.

In the “hallway track”, I had a fascinating discussion about how KDE’s efforts to improve accessibility have the potential to be an industry-wide force multiplier.

And then I gave a talk myself about the big picture of all of these trends — that as the world falls apart around us, everything being on fire includes tremendous opportunities for change that KDE is well-positioned to benefit from.


Basically, at age 29, KDE is all grown up now. Our software solves real problems for real people, at scale. It works for governments and big businesses. It saves or earns money for a lot of people. Our competitors are beginning to falter and look weak. But through it all, KDE remains healthy and strong, and grows in stature.

So I found Akademy 2025 to be an unexpectedly serious conference, full of heavy topics and sharing of priceless wisdom from hard-earned experience. There was of course also a lot of fun hacking and group gatherings and renewing of social bonds, but throughout everything was that underpinning that KDE isn’t just a fun little online community anymore, but rather a player with a growing significance on the world stage.

Pretty cool stuff, I think! Personally, I get energized by working on things that matter, and boy did Akademy 2025 leave me with the impression that KDE matters.

From the 3rd to the 5th of September, the Kdenlive team was reunited in Berlin for a sprint and to attend Akademy, KDE's annual conference. This was an occasion for us to meet in person since our team is spread across continents, and to join our forces to make Kdenlive better. And I must say this was one of the most productive sprints in Kdenlive's history!

We were kindly hosted by c-base for our Sprint so a big thanks to the team for welcoming us there!

Let's get into the details of what we did:

We started by reviewing and updating our roadmap, so it is easier to understand what we are working on, what we plan and when. Another important step towards improving our workflows is that we created issues for each of these goals where the details will be discussed, so everyone can follow and possibly help us on the road to success.

Dopesheet

Very exciting news: I received a grant from the NGI Zero Commons Fund through NLnet to work on a dopesheet feature in Kdenlive. This will bring a much improved keyframing interface with powerful features. We discussed what core features we want in it and drafted some ideas on how it would work. This feature won't be ready for the December release, but I will post updates on the progress of this task in the coming months.

We then reviewed specific parts of the UI that we would like to improve. All these ideas will be discussed in specific issues so that we can refine the implementation.

This task started two years ago but we never took the time to finalize it. We made a lot of progress on it and you can expect it to land in the December release. Among the changes, we decided to rename the Project Bin to Media, Render to Export, and to reorganize the menus to make things more logical. We will publish another blog post presenting these changes in detail once this is done.

Timeline toolbar

We want to clean up the UI, make the timeline timecode display cleaner, and get rid of the large Master button that currently takes up a lot of space.

Monitor UI

We plan to move the audio VU meter to a collapsible vertical widget on the right side of the monitor to free some space in the toolbar, make the zone duration always visible, and move the insert/overwrite actions currently in the timeline toolbar there.

Audio monitor

When selecting an audio clip, the Clip Monitor currently displays a huge audio waveform that is not that useful. We reviewed the UI to also display an overview at the top, making it easier to zoom and see where you are in the clip.

Monitor with audio before the Sprint
Monitor with audio a few days after the Sprint

Layout and docks

We have several open issues regarding docking. One of the frequent requests we get is to save the layout per project file, since sometimes you want a very specific layout for a project. We discussed how to make this happen, and we are also evaluating switching the widget-docking library to KDDockWidgets, which would bring some very nice improvements, like being able to detach the timeline or group several undocked widgets together.

Titler

Our current titler does the job for simple tasks, but many users would like to be able to use animation presets to make their titles more dynamic. We discussed the possible options to make this a reality. Among the ideas, we could use Lottie animations, since our video backend MLT already supports playing them through the Glaxnimate module. Another option would be to implement a QML producer for MLT, allowing it to play QML files directly as video. Any help on that topic is welcome.

Website

We have some planned changes to make our website look better and discussed some of the options.

And all the rest

We discussed tons of other things and even managed to shoot some interviews with our team members. Less relevant for users, maybe, but we also reviewed some administrative, trademark, and CI/CD issues.

Akademy 2025

Akademy was also an occasion for interesting exchanges, notably with Glaxnimate's maintainer, Plasma developers, and more. We are now back home with tons of ideas and TODOs, and the next release of Kdenlive, to be launched in December, will shine with some of the improvements we prepared during this week in Berlin!

If you would like to help our small team, you are always welcome to contribute by giving feedback, talking about us, creating a merge request, or donating.

Despite the lack of posts (which we apologise for), the builds have continued to happen on the neon build servers. Packages for Plasma 6.4.5, coupled with KDE Frameworks 6.18 and KDE release service 25.08.1, built on top of Qt 6.9.2, have just been released to the neon user archives. Live image ISOs and containers are available for download from the usual location.

The builds will continue for the foreseeable future, and we hope that everyone enjoys the latest and greatest KDE-created software, if that’s your cup of tea. 😉

Saturday, 13 September 2025

This year’s Akademy was in Berlin at the Technical University of Berlin. The experience, as usual, was amazing. Unlike in previous years, there was a huge emphasis on styling, unification, and graphical work. This whole wave of talks was invigorating.

As a side note, this year our A/V was vastly improved and this should make it much easier for our contributors and viewers online to see and understand what we did. As part of the organization, I will help process these recordings and make sure they are awesome.

Once again I spoke on the progress of the design system. This year’s talk focused on our progress on icons, definitely one of the lengthiest pieces of work coming from the Foundations portion of the design system.

In addition to speaking on these topics, I shared the newly created (and decidedly experimental) Ocean color scheme, light/dark, Ocean Plasma Style, and Ocean icon pack. In case you missed these assets, here you go: https://drive.google.com/file/d/1oLVq0SViOFB6lur3qn0bwV7_gHzqu2KM/view?usp=sharing

One thing to note, and after much discussion during Akademy, we have aligned more properly on the way that we should work in light of the addition of the design system.

In our current process, we use Git as the source for our icons. Anyone can download and apply the icons on their Plasma system. However, this process is not quite geared toward designers. After all, the icons located in the repo are exported icons: one-layer graphics that can only be edited through node work. If a designer needed to work with these assets, they would likely have to recreate them to gain the shape control needed to make the desired changes.

This leads to overhead work and style inconsistencies. Above all, it leads to a state where the real source of the icon doesn’t exist unless we dig through each individual computer where the icon was developed.

With the use of applications like PenPot or Figma, that question is resolved. Users are able to download an asset library owned by the Plasma design team. The source is protected, but it’s also distributed in a way that doesn’t affect the master copies. If changes are needed, change requests can be submitted to master and the design team can decide whether to apply those changes.

Effectively, this means a change in the way that icons are stored. Moving the work from Git to PenPot/Figma seems like the best choice.

This requires communication, habit changes, risk management, etc. While I am speaking of this right now, we are “not” changing our current process to obtain Breeze icons from its repo. However, it means more information will come in the future as we develop a more effective way to work with a design system.

I am so excited to see the progress done in Union, and even more excited to start passing on design system components into Union to see how they fare against the newly created engine. Union is also under heavy development. I encourage you to watch Arjen Hiemstra’s presentation at Akademy when it’s published.

When this happens, it will be the second set of graphical controls executed via Union. I am sure many challenges lie ahead, but I feel energized by it. I am sure we are on our way to resolving long-standing design and development issues that have slowed us down.

As a result of this year’s Akademy, I created a set of action items for myself that I have to review to be able to continue. One major item for me is to develop our master component source in PenPot. Even though Ocean icons are not 100% executable in PenPot, other assets like buttons, sliders, progress bars, and inputs are. I will dedicate the time to create these items and leave Figma for components behind, shedding any legacy branding coming from the design system sources and focusing only on what we need for Ocean styles.

With that, Akademy has been a thrill. I go home energized and happy for what we have accomplished. All this while keeping a vibrant community and a vibrant free desktop system for all Linux users.

Once again, all KDE nerds had their yearly gathering around somewhere in the world. We call this gathering Akademy and this year it was in Berlin.

I don't really have anything in-depth to share, except for my first talk. I spent a lot of time listening to talks and chilling at BoFs. Since I was with my wife, we also went around Berlin looking for fun things, such as the aquarium at the zoo.

TL;DR: I don't remember much, but I had a lot of fun and I had my first talk!

Day 1

We arrived at Berlin airport around 13:00 and spent some time getting to our hotel. After a good nap, we went to the welcome event, where I had a nice hotdog and chatted with various folks.

It was a bit of a blur; I was so sleepy. But I do remember having fun.

Also I was super happy that our planes were finally on time this year, unlike last time...

Day 2

I arrived at the Akademy venue around 9:30 and spent the whole day going to talks and taking notes on them. I will share those notes later in the post.

I also had a lot of discussions with other KDE devs about Union and the like.

Day 3

I spent so much time being anxious about my talk that I don't remember much else.

I have embedded the talk in here.

Here's also a link to the talk: Youtube link.

I rushed the talk a bit, worrying it would take too long; I tend to go "hummmm" a lot. So I forgot to mention two bits:

  • The cosplay in the intro slide is what I wish I had worn for the talk.. :D
  • We should warn newcomers about any of the possible negativity their contributions may gather.

Other than that, it went fine I think.

Later in the evening we visited c-base, a really cool-looking hackerspace. Since I was already out of energy at that point, we left a bit early.

My Akademy Notes file

Here's a link to the notes, excuse my bad handwriting: Akademy 2025 notes.pdf

Day 4

First we went to the aquarium, which was fun. We saw very cool sharks and other huge fish there. I did not even know Koi fish could grow that big. We also saw a lot of different lizards, toads and insects. I tried my best to befriend the iguana in there... But I don't think they spoke Finnish.

Me chatting to an iguana

We then had a Korean sandwich, I bought myself a pair of cool new pants, and we visited a Lego store.

Later in the evening, I went to a dinner with my coworkers, which was really fun.

Day 5

Went to more Akademy BoFs. One of the more interesting ones was the BoF about KDE Linux, where we chatted about it and related issues.

I also went to a BoF around KIO + sandboxing, to see what we can do to make tools that depend on KIO work better in sandboxed environments, such as Flatpaks.

Sadly I don't have many notes from either, since they were rather speedy and I missed parts of them because I was busy tinkering on my KomoDo app.

Day 6

On the last day, we had a scavenger hunt in the morning and then went to a game museum. I was so exhausted that I couldn't even think of walking around Berlin anymore, so I just joined the game museum part. It was rather cool and I spent some time playing various arcade games they had set up.

There was also some "PainGame" that was basically pong but with pickups that would cause actual pain to the other player. The players had to hold their hand on some panel that would heat up, cause electric shocks and whip the hand with some plastic bit.

Well I tried it and pulled my hand off the moment I felt it heating up. I already had enough anxiety at the moment, didn't need to contribute more to it.

After a pizza at a nice little pizza place, we went back to hotel and slept.

The next morning we boarded a plane at 5 am and were soon back home.

Ramblings and thoughts

Berlin is not a good place for me to go to. It's very loud, uh.. fragrant and there's a lot of things moving constantly.

My nerves were constantly shot. I kept constantly looking around for bad shit to happen, I could not relax at all. I managed to mask it to the best of my abilities, but that just drained me further.

So, uh, sorry anyone who thought I was rather hard to approach. I was just constantly anxious. Akademy itself was really nice and people there were really friendly and fun, but Berlin just was too much for me.

I also really enjoyed every single talk and BoF I went to!

I just can't deal with big cities well, I suppose. Next year I will have to limit the time I'm traveling, preferably ~4 days or so. Anything more is out of my limits.

Still, looking forward to where it will be next year. :)

Thanks for reading, I know there wasn't much actual knowledge in this blogpost, but maybe you liked my talk and/or my notes.

Thursday, 11 September 2025

Since the ninth of October 2025’s “Big Tent” comments by frame.work, about how they don’t care whether the people they sponsor are racist and transphobic, my happiness with my Framework 12 has dropped below zero. Don’t buy frame.work, people.

https://community.frame.work/t/framework-supporting-far-right-racists/75986/2

Since 2023 my laptop situation has been pretty awful… I had two laptops (okay, I admit, that doesn’t sound that awful): a Windows Lenovo Yoga laptop and an Apple Macbook Pro. Powerful machines, of course, but the Yoga has a ghastly keyboard that’s prone to double-registering single keypresses, and an even worse touchpad. The touchpad was so bad that the touch screen was really necessary. The Macbook Pro is fast, has nice hardware and a nice screen (no touch, though…), but it runs macOS. And the Yoga runs Windows. First 11, now 10, because 11 is torture.

Also… The Yoga is my Windows Krita dev and test device, the Macbook is my macOS dev and test device.

I wanted a laptop for myself. For watching videos, doing some sketching, writing RPG write-ups, managing my own stuff.

So, when Framework announced the Framework 12 I started getting interested: small, cute, colorful, not a build-system powerhouse, but nice enough specs, touch screen, pen-enabled, enough memory possible. And cute. I pre-ordered one, in bubble-gum with a lavender keyboard. Yeah, it clashes. I love it; it’s so colorful. I also knew I would love the upgradability, repairability, extensibility, and the looks. ‘Cause it’s cute.

Photo of the motherboard of the framework laptop. All components are labeled and there are qr codes to lead you to more information.

Some months after ordering, it arrived last Monday. I was in hospital, after getting my SRS operation (successful, recovery is going better than expected!), but today I was home, and even groggy from recovery, I managed to put it together and install KDE Neon on it. It’s that easy to assemble the colorful, cute self-assembly version of the Framework 12. Fun, too.

And when I was installing Neon (I might switch later on, I don’t need to be too stable here, this one is mine, and it’s for fun!) I noticed something.

Something weird and unexpected.

The keyboard really is GOOD. It’s got good travel, it feels good, it invites typing. The keys have the right texture, and so does the palm rest. It’s the best laptop keyboard I’ve used in ages, and yes, that includes the Macbook Pro M2. If only for the keyboard, I’d buy it again.

The touchpad, too. Colorful, clearly demarcated from the palm rest, giving good feedback — it’s in every way that counts better than the Yoga’s touchpad. And there’s still a touch screen for when my four year trained reflexes take over. The screen is bright, clear and sharp. The resolution isn’t the highest, but then, this is 12″, so it is fine.

The plastic casing feels very solid, too. Sure, it’s not thin enough to shave with, like the Yoga or a Macbook Air, but then, I’ve already had laser treatment, so shavability isn’t an issue anymore.

Such a relief from all the black and grey hardware that has been surrounding me for years.

Photo of a round table. The table is covered with glass; underneath the glass is woven wickerwork. On the table, in the middle is the bubblegum pink framework laptop. Around the laptop are mostly black, sometimes gray tablets, e-readers and laptops.

The Ars Technica reviewer said “Sure, it’s cute and functional, but for the money you can get better specs”. Look… “Cute and functional” is a unique selling point if there ever was one: is there any other laptop on the market that provides that? Better specs might mean faster whatever, but… It is perfectly functional. And cute, that too.

Joe Brokmeier’s review on Linux Weekly News was much closer to my opinion than the Ars review. He chose sage green, which is also quite a delectable colour.

Functional — it does everything I need now, and there’s already a CPU upgrade coming that I could install myself, if I wanted to.

Cute — I ordered a bunch of USB C modules so I can swap out colors according to my mood, too.

Halla is happy now!