
Tuesday, 1 April 2025

Hey everyone!!

Welcome to my blog post. I am Roopa Dharshini, a mentee in Season of KDE 2025 for the KEcoLab project. In this blog, I will explain my work in the SoK mentorship program.

Getting Started With SoK

For my proposal I crafted a detailed timeline for each week. With this detailed plan and with the help of my wonderful fellow contributors and mentors, I was able to complete all the work before the end of the mentorship program.

Various technical documentation tools under consideration. (Screenshot from Roopa Dharshini, published under a CC-BY-SA-4.0 license.)

I spent the first week working to understand the project's codebase, studying KEcoLab's handbook and existing documentation, setting up a GitLab wiki in the forked repository, and discussing the GitLab wiki's Merge Request (MR) feature. I explored and discussed various technical documentation tools with the mentors. Initially we had planned to continue with GitLab, but due to the flexibility of KDE's community wiki, we later chose it as our preferred documentation tool.

Usage scenario script documentation. (Screenshot from Roopa Dharshini, published under a CC-BY-SA-4.0 license.)

I got to work creating an outline for the entire technical documentation. Usage scenario scripts are essential for executing the automation pipeline in KEcoLab. So, my fellow mentees and I started our documentation process with usage scenario scripting: we drafted a short page describing its importance, provided some scripts, and detailed their structure. This documentation is structured in a way that even non-technical contributors are able to follow the guidelines and create their own scripts.

CI/CD Pipeline documentation. (Screenshot from Roopa Dharshini, published under a CC-BY-SA-4.0 license.)

After this, I wrote various texts for the technical documentation (CI/CD pipeline, Home Page) of the KEcoLab project. There was a change in the audience for our documentation: initially we focused on the users of KEcoLab, but later we decided to write documentation both for people who wish to contribute new changes to KEcoLab and for those who use KEcoLab for their software measurements. This change meant writing in-depth technical documentation for developers who wish to change the code for better efficiency. The CI/CD pipeline is essential for the energy measurement automation in KEcoLab. Writing detailed CI/CD pipeline documentation that explains its use, structure, and job execution was challenging, yet rewarding. By the end of the program, we had produced the following documentation:

  1. User Guide documentation for KEcoLab Users
  2. Usage Scenario Script documentation
  3. Accessing result documentation for users
  4. CI/CD pipeline documentation for contributors
  5. Contribution guidelines

How did I apply to Season of KDE?

Accepted Proposal. (Screenshot from Roopa Dharshini, published under a CC-BY-SA-4.0 license.)

Season of KDE is a mentorship program that happens every year between January and March. It is a three-month mentorship where mentees are guided through a project they propose. You start by picking a project from the KDE Ideas page and writing a proposal and timeline for it. You tag the mentors in the issue, and they will review your proposal and check whether you are a good fit. You can check out my proposal for the KEcoLab project. After review, mentors will hopefully mark your proposal as accepted. And that’s how I got into it!

Challenges I faced

Applying to SoK was not easy for me. I ran into my first challenge when I tried to create a new KDE Invent account. I thought there were some technical issues with the website, so I tried every day to create an account (you are limited to one account creation chance per 24-hour period). After a long wait, I reached out to SoK admin Johnny for help, and he assisted me in creating an account. I was really scared to submit my proposal because there was only one week before the submission deadline, but I trusted my skills and submitted it. So, keep in mind that “it is never too late to apply.”

The second challenge was team collaboration. Along with me, two other contributors were selected for this project. I was brand new to KDE. At first it was hard to communicate with the other contributors, but later on we started to work really well together. Those are the main challenges I faced during my contributions to SoK. Challenges are never an end point; they are a stepping stone to move further.

Thank You Note!

Challenges make the journey worthwhile. Without any challenges, I wouldn’t have known the perks of contributing to KDE in SoK. I take a moment here to thank my wonderful mentors Kieryn, Aakarsh, Karanjot, and Joseph for guiding me throughout this journey. I also want to thank my fellow contributors to the project, Shubhanshu and Utkarsh, for collaborating with me to successfully achieve what we proposed. Finally, I am thankful to the KDE e.V. and the KDE community for supporting us new contributors to the amazing KDE project.

KEcoLab is hosted on Invent. Are you interested in contributing? You can join the Matrix channels Measurement Lab Development and KDE Eco and introduce yourself.

Thank you!

Monday, 31 March 2025

KDE Dragon

Introduction -

Over the last 10 weeks, I had the opportunity to contribute to MankalaEngine by exploring and integrating new algorithms for gameplay, as well as working on adding the Pallanguli variant to the engine. My journey involved researching various algorithms such as Monte Carlo Tree Search (MCTS), implementing Q-learning, an ML-based approach, and evaluating their performance against MankalaEngine's existing algorithms. I also assisted in reviewing the implementation of the Pallanguli variant.

Implementing and Testing MCTS

I first explored Monte Carlo Tree Search (MCTS) and implemented it in MankalaEngine. To assess its effectiveness, I tested it against the existing algorithms, such as Minimax and MTDF, which search to depth 7 before each move.

MCTS Performance Results -

Player 1   | Player 2   | MCTS Win Rate
Random     | MCTS       | 80%
MCTS       | Random     | 60%
Minimax    | MCTS       | 0%
MCTS       | Minimax    | 0%
MTDF       | MCTS       | 0%
MCTS       | MTDF       | 0%

The results were not good. This was expected, because the existing Minimax and MTDF algorithms are strong and search to depth 7 before each move.

Moving to Machine Learning: Implementing Q-Learning

Given MCTS's poor performance against strong agents, I explored Machine Learning (ML) techniques, specifically Q-Learning, a reinforcement learning algorithm. After learning its mechanics, I implemented and trained a Q-learning agent in MankalaEngine, testing it against existing algorithms.

Q-Learning Performance Results -

Player 1   | Player 2   | Q-Learning Win Rate
Random     | Q-Learning | 100%
Q-Learning | Random     | 98%
Minimax    | Q-Learning | 100%
Q-Learning | Minimax    | 0%
MTDF       | Q-Learning | 100%
Q-Learning | MTDF       | 10%

Q-learning showed significant improvement, defeating existing algorithms in most cases. However, it still had weaknesses.

Techniques Explored to Improve Q-Learning Results:

To improve performance, I experimented with the following techniques (a sketch of the core update rule follows this list):

  • Using epsilon decay to balance exploration (random moves) and exploitation (using learned strategies).

  • Increasing rewards for wins to reinforce successful strategies.

  • Training Q-learning against Minimax and MTDF rather than only against itself.
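
For illustration, here is a minimal Python sketch of the tabular Q-learning update with epsilon-greedy move selection that these experiments revolve around. The state encoding and game hooks are placeholders of my own, not MankalaEngine APIs.

import random
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.95      # learning rate and discount factor (example values)
EPSILON_DECAY = 0.9995        # multiply epsilon by this after each training episode
epsilon = 1.0                 # start fully exploratory

q_table = defaultdict(float)  # maps (state, move) -> estimated value

def choose_move(state, legal_moves, epsilon):
    # Epsilon-greedy: explore with probability epsilon, otherwise exploit learned values.
    if random.random() < epsilon:
        return random.choice(legal_moves)
    return max(legal_moves, key=lambda m: q_table[(state, m)])

def update(state, move, reward, next_state, next_legal_moves):
    # One-step Q-learning backup: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max((q_table[(next_state, m)] for m in next_legal_moves), default=0.0)
    q_table[(state, move)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, move)])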

Despite these improvements, Q-learning still could not consistently outperform all existing algorithms.

After these experiments and research, I believe more advanced algorithms like DQN or Double DQN are needed to outperform all existing algorithms. This would also be an exciting project for this summer.

Apart from exploring ML algorithms, I also worked on integrating the Pallanguli variant of the Mancala game into MankalaEngine. My contributions included:

  • Reviewing Srisharan’s code, suggesting fixes, and taking part in discussions.

  • Creating a Merge Request (MR) that allows users to input custom initial counters for Pallanguli.

Conclusion -

This journey has been an incredible learning experience, and I am grateful for the guidance of my mentors, Benson Muite and João Gouveia, who were always there to help.

I look forward to continuing my contributions to the KDE Community, as I truly love the work being done here.

Thank you to the KDE Community for this amazing opportunity!

Many people are, understandably, confused about brightness levels in content creation and consumption - both for SDR and for HDR content. Even people that do content creation as their job sometimes get it really wrong.

Why is there so much bad information about it out there?

Before jumping into the actual topic, I want to emphasize that most people that have gaps in their knowledge about HDR and SDR are not to blame for it. The standards that define colorspaces are usually confusingly written, many don’t paint the full picture, finding the one you actually need can be difficult, some you need to pay for to even read, and generally there is not a lot of well organized and free information about this out there.

When you have basically no information, you just go with what you do know - you see how Microsoft Windows does HDR for example, maybe you take a look at a draft for the sRGB specification or simply the Wikipedia pages, and do the best with what you have. The result is often less than ideal.

Having worked on this stuff for a while now, and having read lots about it from people that actually know what they’re doing, I think I know the topic well enough to clear up some misconceptions, but do keep in mind that my knowledge is limited too, and I may still make mistakes. If you’re sure I got anything wrong, tell me about it!

If you want an entry point for way more information than this blog post provides, check out color-and-hdr.

How brightness works with sRGB

sRGB is the colorspace most content uses today. Despite that, very annoyingly, its specification is not openly available… but there’s a draft version that you can download freely here, which is good enough for this topic.

The (draft) specification defines two things that are important when it comes to brightness:

  • a set of reference display conditions
  • a set of reference viewing conditions (I’ll call that “viewing environment” from here on)

The reference display conditions are seemingly quite straightforward. The display luminance is 80cd/m², we have a whitepoint of D65, and a transfer function. Transfer functions describe how to calculate the output luminance from the encoded values of an image, and with sRGB that’s

Y = X ^ 2.2

where Y is the relative luminance on the display, and X is the relative luminance on the input.

The viewing environment has a few more parameters, but it’s conceptually not difficult to understand: It describes how bright your environment is, what color temperature the lights in your room have, and how much your display reflects the environment at you.

sRGB viewing environment

How to create sRGB content “correctly”?

The assumption that many people take from the specification is that you should calibrate your display to 80cd/m². On its own, that information is completely wrong!

It’s obvious when you think about how end users actually view content: They set the brightness level of the display to what they’re comfortable with in the current environment. You make the display really bright when you’re outside, less bright when in a normally lit room, and even darker than that when the lights are off.

The part that’s missing with just calibrating the display to some luminance level is that you must take the viewing environment into account. Either you set up the sRGB reference viewing environment (with measurements!)… or you just don’t. When you create content, in most cases you should do exactly the same thing as the person that will consume the content does: Just set the brightness to what’s comfortable in the environment you’re in. It still helps to keep your viewing environment mostly fixed of course, lots of brightness changes mean you’re constantly readjusting and that’s not good.

There’s another big thing to take into account for sRGB, which is its confusing transfer function.

The sRGB transfer function

The sRGB specification doesn’t just define a transfer function for the display, but it also defines a second transfer function. This sRGB piece-wise transfer function is

if X < 0.04045: Y = X / 12.92
else: Y = ((X + 0.055) / 1.055)^2.4

and it’s slightly different from gamma 2.2 in that it has that linear bit for the very dark area.

The purpose of this transfer function is to optimize encoding of dark parts of the image - with 8 bits per color, gamma 2.2 becomes really small in the lowest few values. 1/255 for example results in roughly 0.0000051 with gamma 2.2, and 0.0003035 with the sRGB piece-wise transfer function.
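
To make that concrete, here is a small Python sketch (my own, not from the specification) that evaluates both transfer functions at the lowest 8-bit code value:

def gamma22(x):
    return x ** 2.2

def srgb_piecewise(x):
    # The piece-wise sRGB transfer function quoted above.
    if x < 0.04045:
        return x / 12.92
    return ((x + 0.055) / 1.055) ** 2.4

x = 1 / 255
print(f"gamma 2.2:  {gamma22(x):.7f}")         # ~0.0000051
print(f"piece-wise: {srgb_piecewise(x):.7f}")  # ~0.0003035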

This difference might sound insignificant, but it is noticeable. The best-known place where the wrong transfer function is used is Microsoft Windows: When you enable HDR in Windows, it uses the piece-wise transfer function for sRGB content, instead of the gamma 2.2 transfer function that your display uses in SDR mode. The result is that dark areas of SDR games and videos are brighter than they should be, and look “washed out”.

So when should you use the sRGB piece-wise transfer function? So far, I don’t know of any case where you should, outside of working around that Windows problem in your application… I’m also only concerned with displaying images though, and not editing or creating them, so take that with a grain of salt.

How brightness works with HDR

Most HDR content uses the SMPTE ST 2084 transfer function. The specification for this is freely available here.

SMPTE ST 2084 is a bit different from the sRGB spec, in that it only defines a transfer function but no complete colorspace or viewing environment. That transfer function is the Perceptual Quantizer (PQ): It tries to compress luminance levels in a way that matches how sensitive human eyes are in specific luminance ranges, and it’s defined in absolute luminance - a PQ value of 0.0 means <= 0.005cd/m², and 1.0 maps to 10000 cd/m².
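
For reference, here is a small Python sketch of the PQ EOTF using the constants published in ST 2084; treat the code itself as an illustrative sketch rather than a drop-in implementation.

# ST 2084 (PQ) EOTF: encoded value in [0, 1] -> absolute luminance in cd/m².
M1 = 2610 / 16384        # 0.1593017578125
M2 = 2523 / 4096 * 128   # 78.84375
C1 = 3424 / 4096         # 0.8359375
C2 = 2413 / 4096 * 32    # 18.8515625
C3 = 2392 / 4096 * 32    # 18.6875

def pq_eotf(encoded):
    e = encoded ** (1 / M2)
    y = (max(e - C1, 0.0) / (C2 - C3 * e)) ** (1 / M1)
    return 10000.0 * y

print(pq_eotf(0.0))  # 0.0, i.e. at or below the 0.005 cd/m² floor mentioned above
print(pq_eotf(1.0))  # 10000.0 cd/m²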

The missing parts are defined by different specifications, rec.2100 and BT.2408. More specifically, rec.2100 uses the BT.2020 primaries with the PQ transfer function (or the HLG transfer function, but we’ll ignore that here) and a recommended viewing environment for such HDR content:

rec.2100 viewing environment

BT.2408 expands on that with an HDR reference white and graphics white, at 203cd/m². This is mostly meant for the context of broadcasts, where “graphics” refers to logos or subtitles in the video stream.

Despite the transfer function being “absolute”, just like with sRGB, the luminance numbers don’t mean anything in isolation. When displaying HDR content, just like with SDR, we need to take the viewing environment into account, and adjust luminance levels accordingly.

How is this handled in Wayland?

Every transfer function in the color management protocol has reference display conditions and a viewing environment attached to it, defined by a few parameters. Most relevant for this topic are

  • a reference luminance, also known as HDR reference white, graphics white or SDR white
  • minimum and maximum mastering luminances, basically how dark and bright the display the content was made for can go

When content is displayed on the screen, the compositor translates between the viewing environment of the content and the viewing environment of the user. While we don’t usually have full knowledge of what exactly that viewing environment is like, the brightness slider in KDE Plasma provides a very good approximation by configuring the reference luminance to be used for content on the display. The calculation for this brightness adjustment is rather simple; in linear space you just do

output = input * output_reference / input_reference
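
As a toy example (my own sketch, not compositor code), applying that adjustment to a linear-light value looks like this:

def adjust_brightness(linear_value, input_reference, output_reference):
    # Scale content from its own reference white to the display's current reference white.
    return linear_value * output_reference / input_reference

# Content mastered for a 203 cd/m² reference white, shown while the brightness
# slider corresponds to a 120 cd/m² reference luminance (example values):
print(adjust_brightness(1.0, input_reference=203, output_reference=120))  # ~0.59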

You can configure the maximum reference luminance (brightness slider at 100%) with the “Maximum SDR Brightness” in the display settings of Plasma 6.3. The minimum and maximum luminance your display can achieve can only be configured with the kscreen-doctor command line tool right now, but an easy to use calibration utility for this is nearly finished (and the default values are usually fine too).

In general, this system is working really well… with one rather big exception.

HDR in Windows games

As mentioned before, Windows in HDR mode does sRGB wrong, but the story with HDR content is kind of worse.

When you use Windows 11 on a desktop monitor and enable HDR, you get an “SDR content brightness” slider in the settings - treating HDR content as something completely separate that’s somehow independent of the viewing environment, and that you cannot adjust the brightness of. With laptop displays however, you get a normal brightness slider, which applies to both SDR and HDR content.

The vast majority of Windows games expect the desktop monitor case: Static, never changing luminance levels, which are displayed on the screen without any adjustments whatsoever. Windows also didn’t have a built-in HDR calibration tool until Windows 11, so nearly every Windows game ships with its own HDR calibration settings and completely ignores system settings. This doesn’t just cause issues for Windows 11 laptops of course, but also for playing these same games with HDR on Linux.

Until Plasma 6.2, we worked around that by mostly not doing brightness adjustments either, and the result was that those HDR calibration settings in games worked basically like on Windows. However, these workarounds broke Linux native applications that want to mix HDR and SDR in their own windows, made tone mapping worse, and blocked features like HDR on “SDR” laptop displays, so in Plasma 6.3 we had to drop them.

This doesn’t mean you can’t play Windows games with HDR in 6.3 anymore, you just have to adjust their configuration to match the changed brightness levels. In most cases, this means you set the HDR paper white in games to 203cd/m², and then set the maximum luminance with the game’s configuration screen, like this one from Baldur’s Gate 3:

Baldur's Gate 3 HDR calibration

How to implement good HDR

After ranting about how Windows games do it wrong, I should end this blog post by also explaining how to do it right. I will skip most of the implementation details, but on a high level if you’re implementing HDR in a Wayland native application or toolkit, you should

  • use the Wayland color management protocol
  • get the capabilities of the compositor and/or graphics driver, specifically the transfer functions they support
  • get the preferred image description from the compositor, and the luminances you’re supposed to target from that. When using these luminance values, keep in mind the reference luminance adjustment the compositor will do!
  • every time the preferred image description changes, get the new one and adjust your application to it
  • now render for these parameters, and set the image description you actually ended up targeting on the surface, either through Vulkan or with the Wayland protocol (not both at the same time!)
  • SDR things, like user interfaces in games, should use the reference luminance too
  • if your application has some need to differentiate between “SDR” and “HDR” displays (to change the buffer format for example), you can do so by checking if the maximum mastering luminance is greater than the reference luminance
  • now you can, and really should, drop all HDR settings from your application. If HDR has a performance penalty in your application, a toggle to limit the app to SDR could still be useful, but everything else should be completely automatic and the user should not be bothered with calibration screens or similar annoyances

Saturday, 29 March 2025

Kaidan 0.12.2 fixes some bugs. Have a look at the changelog for more details.

Changelog

Bugfixes:

  • Fix removing corrected message (melvo)
  • Fix showing message bubble tail only for first message of sender (melvo)

Download

Or install Kaidan for your distribution:

Packaging status

OSPP banner with text in English and Mandarin.

The KDE community will again participate in the Open Source Promotion Plan (OSPP), a program in which students can contribute to open source projects. Burgess Chang is the KDE community contact.

As part of OSPP 2024, Hànyáng Zhāng (张汉阳) added Android support to Blinken. The work done is described in a series of blog posts, available in both English and Mandarin. The Android version is available from the KDE F-Droid nightly repository.

Unlike Google Summer of Code, where stipends are funded by a company, OSPP stipends are primarily funded by the Chinese government, with the option for open source communities to fund additional stipends if they wish to have more students participate in their projects than they are allocated. It is good that there is recognition that contributing to open source software is a skill that students should acquire.

The range of contributions that can be made in OSPP is not limited to programming; contributions to other aspects that improve the open source software ecosystem, such as translation and documentation, are also welcome. As it is a government-funded program, there is a little more oversight to ensure taxpayer funds are well spent. In particular, for most projects, contributions should be made to a publicly available repository associated with the project, and student participants are selected primarily based on their project application.

The plan aims to increase the programming and software engineering skills of students by encouraging them to participate in real world projects during their vacation period. While it is funded by the Chinese people, open source projects with contributors from all over the world apply to participate, and students from any part of the world can also apply to participate.

Mandarin and English are the official communication languages for the program; knowledge of one of them is sufficient to participate.

The OSPP website lists the dates for each phase of the program. Important dates for this year are:

  • 04 April - 04 May: Project submission period for approved open source communities
  • 09 May - 09 June: Student project application period
  • 01 July - 30 September: Coding and development period for accepted projects

Friday, 28 March 2025

Kaidan 0.12.1 fixes some bugs. Have a look at the changelog for more details.

Changelog

Bugfixes:

  • Do not highlight unpinned chats when pinned chat is moved (melvo)
  • Fix deleting/sending voice messages (melvo)
  • Fix crash during login (melvo)
  • Fix opening chat again after going back to chat list on narrow window (melvo)
  • Increase tool bar height to fix avatar not being recognizable (melvo)
  • Fix width of search bar above chat list to take available space while showing all buttons (melvo)
  • Fix storing changed password (melvo)
  • Fix setting custom host/port for account registration (melvo)
  • Fix crash on chat removal (fazevedo)
  • Move device switching options into account details to fix long credentials not being shown and login QR code being temporarily visible on opening dialog (melvo)
  • Allow setting new password on error to fix not being able to log in after changing password via other device (melvo)

Download

Or install Kaidan for your distribution:

Packaging status

Thursday, 27 March 2025

Twinimation Studios have released a new Krita workshop, and we wanted to give them a chance to introduce their new offering to Krita's users:

Greetings everyone! Entering the art world is sometimes seen as an expensive endeavor. From art schools to subscription-based software, artists across different fields tend to have notable expenses. But have you ever wondered if you can become an artist without spending a fortune? Twinimation Studios is back to answer the question with our very first full workshop! Becoming an Artist on a Budget is a specially made guide to help aspiring artists begin their artistic journey WITHOUT breaking the bank. This workshop consists of 9 main videos bundled into one easy-to-digest package, along with some special bonus showcase videos as well. Also included is a bonus freebie list of numerous artistic product ideas to begin a paid art hobby or career.

Within this workshop, we provide tips and tricks on how one can begin their art journey completely free of charge. After reviewing a list of affordable resources to learn art skills, we recommend numerous free art programs with a special spotlight on Krita! We explain how versatile Krita is, and how it can be used across numerous different art fields, such as animation, comics, and painting! Following some other drawing tutorials, the workshop concludes with a special lesson on entrepreneurship, where we explain how aspiring artists can create a paid hobby or full business through their artwork while remaining on a budget.

With so many people wanting to enter the art scene and build a career from it, we hope this workshop will be a helpful guide for all of those who wish to create their own artistic brand. Additionally, we have many other Krita focused animation courses on our website!

Twinimation Studios was founded by instructors Andria and Arneisha Jackson; MFA graduates who've studied animation for 9 years and want to share their professional knowledge with the world. We provide tutorials on different styles of animation, character design, illustration, film creation and so much more! Look forward to our future tutorials and workshops where we will continue to expand our repertoire to fit several different art fields.

Here is a link to the workshop:

Become an Artist on a Budget

Wednesday, 26 March 2025

Plasma's login experience is an area that we know requires some improvement — it works OK in the basic case, but it's very barebones and doesn't handle anything beyond that.

As a complete desktop experience, it's our job to provide support for the edge cases too.

What we want

  • Great out-of-the-box experience with multi-monitor, high-DPI, and HDR setups
  • Keyboard layout switching
  • Virtual keyboards
  • Easy Chinese/Japanese/Korean/Vietnamese (CJK) input
  • Display and keyboard brightness control
  • Full power management
  • Screen readers for blind people (which then means volume control)
  • Pairing trusted bluetooth devices
  • Login to known Wi-Fi for remote LDAP
  • Remote (VNC/RDP) support from startup

A brief history

In Plasma 5, we retired our own bespoke display manager KDM in favour of SDDM, a display manager started for multiple lightweight Qt desktops. It was modern at the time, making use of the then-new QML for the front end, which was a big selling point at a time when Plasma was also adopting it.

SDDM's Big Architecture Problem

We ran into a problem, though. SDDM is designed to show a single greeter window, loading arbitrary QML from the specified theme.

Whilst this all sounded great for Plasma, the abstraction is at the wrong level — for our wishlist we need a lot more tight integration from our login screen than just a window showing sessions and users.

With SDDM, power management is reinvented from scratch with bespoke configuration. We can't integrate with Plasma's network management, power management, volume controls, or brightness controls without reinventing them in the desktop-agnostic backend.

SDDM was already having to duplicate too much functionality we have in KDE, which was very frustrating when we're left maintaining it.

The Competition

GNOME's GDM is the gold-standard of display managers, and it achieves this higher level of quality by running half of a GNOME session.

Gnome's GDM

SDDM got closer when it added Wayland support — it had to use a compositor such as KWin. But because the project tries to be agnostic between desktops, it has to support any compositor. There aren't compositor-agnostic ways to do even simple things like set the keyboard layout, so in the end this compositor agnosticism goal simply didn't work.

Theme Problems

A major mistake we made throughout Plasma 5 was conflating “writing UI in a high level scripting language” with “it's themable” — they are not the same thing. QML does make it easy to modify and iterate without programming skills, but it still contains business logic. It should not be the primary method of customisation that we expose to users.

Ultimately, this was poor for end users. We pushed back on adding support for configurable theme options, because building a theme engine within a theme was wasteful! But often people want to just change a few things. Choosing a theme meant finding a combination that had everything. The store filled up with themes that are 99.9% identical code-wise; most are just wallpaper mods.

It was also poor for theme developers. Not only do they have to modify the visuals, but they also have to re-implement focus handling, accessibility, and the same boring logic again. They can't benefit from widespread testing, so regressions in these functionalities are common. Reddit is full of screenshots of broken SDDM themes.

Finally, it's poor for us Plasma developers: theme support holds us back from adding new features or tidying code; if we want to add a new feature that in any way affects existing UIs, the situation gets very messy very fast. The end result is that features just don't land, and the end user is the one that misses out.

So, what's the plan?

It's worth stressing that nothing is official or set in stone yet, although it has come up in previous Plasma online meetings and at Akademy 2023. I'm posting this whilst starting a more official discussion on the plasma-devel mailing list.

Oliver Beard and I have made a new multi-process greeter that uses the same startup mechanism as the desktop session. It doesn't yet have all the features we propose at the start of this post, but it has an architecture where features and services can be slowly and safely added.

For customisation we intend to expose the same familiar settings that exist in Plasma and bring the design more in-line with the existing screenlocker where we also dropped arbitrary QML years ago. We'll make the wallpaper configurable with any existing wallpaper plugins, and expose the existing plasma theme and colour settings. Syncing will be a case of copying files, not re-inventing things.

The backend

When starting work on this, I explored the alternative backends out there, even those with fully working implementations, but in practice nothing actively maintained matched our requirements. SDDM has been proven in the real world, so we have taken that and stripped it down to cover what we want moving forwards. I also aim to incubate it into the KDE ecosystem to have full autonomy over the project and the ability to merge stuck patches.

Current State

All of this has been implemented as two new repositories: Plasma Login Manager, a continuation of SDDM, and Plasma Login for the front-end and KCM (settings) code. These might be merged at some point.

The new code all works, and is at roughly feature parity with what we're replacing. A screenshot looks roughly the same as a stock Plasma SDDM setup. Whilst this is at a state where developers can opt-in, I would not want distros to be packaging things at this point.

Plasma Login, looks the same

Please do reach out if this sounds interesting, either directly or in the Plasma Matrix room - or with merge requests!

KdeGuiTest (previously called KdeEcoTest) is an automation and testing tool which allows one to record and simulate user interactions with the GUI of an application. It is being developed as part of the KDE Eco initiative to create usage scenario scripts for measuring the energy consumption of software. The main goals in Season of KDE 2025 are (i) to debug remaining issues, and (ii) to make KdeGuiTest more user-friendly by creating a Graphical User Interface so it is easier to create, edit, and run emulation scripts.

Progress in Season of KDE so far:

  • Creation of a 4-cross PDF to be used for testing the coordinate shift.
  • Detection of the issues causing the coordinate shift.
  • Fixed the “Platform not supported” error due to pynput.
  • Integration of KdeGuiTest with its own newly-built Graphical User Interface.

COORDINATE SHIFT ERROR

There is a difference between what is recorded when creating a script and what is played when running the script.

How I planned to fix this:

  • Identify the root of the shift error by creating a script to click on four target crosses, and then running the script to locate and solve differences.
  • Develop and implement a correction mechanism to accurately map recorded coordinates to playback positions (a rough sketch of such a mapping follows this list).
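
As a rough illustration of what such a mapping could look like (purely hypothetical code, not KdeGuiTest's actual implementation), the idea is to normalise recorded clicks against the window geometry at record time and re-project them onto the geometry at playback time:

def map_coordinates(recorded, recorded_window, playback_window):
    # recorded:        (x, y) position captured while creating the script
    # recorded_window: (x, y, width, height) of the target window at record time
    # playback_window: (x, y, width, height) of the target window at playback time
    rx, ry, rw, rh = recorded_window
    px, py, pw, ph = playback_window
    norm_x = (recorded[0] - rx) / rw
    norm_y = (recorded[1] - ry) / rh
    return (px + norm_x * pw, py + norm_y * ph)

# Example: a click at (500, 400) in a 1280x720 window at (100, 50), replayed
# on a 1920x1080 window at (0, 0), maps to (600.0, 525.0).
print(map_coordinates((500, 400), (100, 50, 1280, 720), (0, 0, 1920, 1080)))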

My progress so far on the shift error:

  1. Creation of the 4-crosses PDF to test shift error: A target PDF is created with 4 crosses horizontally and vertically opposite each other.
This is the target PDF used to detect the shift error. (Image from Oreoluwa Oluwasina, published under a CC-BY-SA-4.0 license.)
  2. Created a test script on the target PDF to detect shift error: I created a test script that clicks on the four crosses. I tested the KdeGuiTest tool on the target PDF and identified the issues affecting the shift error, namely:

    • Difference in mouse coordinates between script creation and playback
    • Screen position
    • Screen resolution

I am still working on a fix for this. See my comments at the end of this post for more.

FIXED “PLATFORM NOT SUPPORTED” ERROR DUE TO PYNPUT

While creating a script with KdeGuiTest we encountered the error “Platform not supported”. It was not recognizing the pynput backend in the code, which is the main technology used in KdeGuiTest to simulate user interactions in software applications. pynput is a Python library that allows you to control and monitor input devices. It is used for interacting with your keyboard and mouse through Python code. Read more from Mohamed Ibrahim in SoK23 here, Athul Raj K in SoK24 here, and Amartya Chakraborty in SoK24 here.

The error was fixed by installing the pynput package from https://github.com/krathul/pynput along with other dependencies such as Rust.
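
For readers unfamiliar with pynput, here is a minimal sketch of the kind of input simulation KdeGuiTest builds on; the coordinates and text are arbitrary example values of my own.

from pynput.mouse import Button, Controller as MouseController
from pynput.keyboard import Controller as KeyboardController

mouse = MouseController()
keyboard = KeyboardController()

mouse.position = (640, 360)   # move the pointer to an absolute screen position
mouse.click(Button.left, 1)   # simulate a single left click
keyboard.type("text entered by a usage scenario script")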

GRAPHICAL USER INTERFACE FOR KDEGUITEST USING PyQt5

One of the main goals in Season of KDE 2025 is to make KdeGuiTest more user-friendly and to make it easier to create, edit, and run final scripts using a Graphical User Interface.

To better understand the idea of the interface Emmanuel had in mind, I presented a prototype in one of our weekly meetings. Fortunately, it fit the proposed idea! To implement the prototype, I built the GUI from scratch with some feedback from my mentor Emmanuel. PyQt5 is used for creating graphical user interfaces with Python. It's a powerful and versatile toolkit that allows developers to build desktop applications that look and feel native on various operating systems, including Windows, macOS, and GNU/Linux.
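
To give an idea of what building such an interface involves, here is a minimal PyQt5 sketch. The button labels mirror some of the commands listed below, but the wiring is purely illustrative and not KdeGuiTest's actual code.

import sys
from PyQt5.QtWidgets import (QApplication, QPushButton, QTextEdit,
                             QVBoxLayout, QWidget)

app = QApplication(sys.argv)

window = QWidget()
window.setWindowTitle("KdeGuiTest - Create Script (sketch)")

layout = QVBoxLayout(window)
script_view = QTextEdit()   # stands in for the final script widget
layout.addWidget(script_view)

for command in ("dw - define window", "ac - add clicks", "ws - write to the screen"):
    button = QPushButton(command)
    # In this sketch each button simply appends its command code to the script view.
    button.clicked.connect(lambda _, c=command: script_view.append(c.split(" - ")[0]))
    layout.addWidget(button)

window.show()
sys.exit(app.exec_())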

The GUI has the following features:

  • Create script interface
  • Buttons for the following commands:
    • dw - define window
    • ac - add clicks
    • sc - stop add clicks
    • ws - write to the screen
    • wtl - write test timestamp to log
    • wmtl - write message to log
  • Action buffer widget
  • Final script widget
  • A run script interface
  • File dialog

You can see the current version of the scripting interface below.

This is the create script interface. (Image from Oreoluwa Oluwasina, published under a CC-BY-SA-4.0 license.)

LOOKING TO THE FUTURE OF KDEGUITEST

First, fixing the shift error: The main shift error identified is due to differences in mouse coordinates. The mouse coordinates are recorded in the GUI in order to track them. I will then develop and implement a correction mechanism to accurately map recorded coordinates to playback positions. Second (time-permitting), developing three test scripts in collaboration with KEcoLab to compare energy consumption of PDF readers:

  1. GNU/Linux + Okular Script:
  • Open PDF document
  • Simulate typical reading patterns (scrolling, page turns)
  • Test PDF search functionality
  • Change to different view modes (single page, continuous)
  • Measure annotation and highlighting features
  • Document energy consumption metrics
  2. Windows + Okular script:
  • Replicate testing scenarios from GNU/Linux + Okular script
  • Adapt window management code for Windows environment
  • Collect equivalent energy consumption data points
  3. Windows + Adobe Acrobat Script:
  • Mirror the same testing scenarios as Okular script for a direct comparison
  • Account for Adobe Acrobat's specific UI elements and behaviors
  • Test comparable features (navigation, search, annotations)
  • Measure energy consumption patterns

Interested In Contributing?

KdeGuiTest is hosted here. If you are interested in contributing, you can join the Matrix channels KdeGuiTest, KDE Eco, and Measurement Lab Development and introduce yourself. Thank you to the Season of KDE 2025 admin and mentorship team, the KDE e.V., and the incredible KDE community for supporting this project.

Please feel free to contact me here: <@oree_x:matrix.org>