
Thursday, 11 September 2025

Becoming the official Maintainer of Clazy

During this year’s Akademy in Berlin, I gave a talk about the awesome features & use cases of Clazy. Afterwards, I also talked with Ivan Čukić about the topic of maintainership. Sérgio Martins was the original author and long-time maintainer. Ivan took over the role for a while, but since I became quite active in the project, he thought it would be a good idea for me to be the official maintainer.

I am very honored that he transferred maintainership of Clazy to me. I’ll do my best to continue pushing the project forward. Also thanks to Sérgio and Ivan for their previous work and KDAB for supporting it. Also thanks to all the other people who have contributed so far!

In case you want to get started in Clazy development, your contributions are welcome! Feel free to reach out to me if you have issues with getting started :)

Tuesday, 9 September 2025

I’ve just returned from this year’s Akademy in Berlin. Unfortunately, I couldn’t attend the entire conference, which is still ongoing, but the weekend of talks, welcome events, and social events has been magnificent. I’ve also met a couple of fellow KDE hackers whom I hadn’t seen in a decade.

A view into an atrium with glass ceiling, marble columns, on three floors, a statue below
Lichthof at TU Berlin, just outside our conference rooms

Friday afternoon I arrived at Berlin Hauptbahnhof on an ICE that was, I kid you not, punctual to the minute. That of course meant there wasn’t much interesting to collect for KDE Itinerary. Nevertheless, I took an ODEG regional train to the hotel and tried to gather as much data from its onboard Wifi portal as I could from my phone between Hauptbahnhof and Alexanderplatz. Turns out: they use GraphQL for their live status website. While KPublicTransport has facilities for GraphQL, any other operator we support just polls a URL that yields a JSON feed. Therefore, a lot more work and testing is needed in order to support ODEG in KDE Itinerary’s live status page.

The welcome event at Schleusenkrug was fun and the weather was a lot better than anticipated. I was very glad I did pack a pair of shorts after all! Saturday the conference started with an interesting keynote on digital sovereignty. As always it was quite difficult to choose which of the parallel tracks to attend. I enjoyed Saturday’s “Lessons Learned” presentation on Plasma. Next, I was skeptical about the use of CSS in the Union styling effort (particularly since I’ve had to deal with Qt CSS in Qt Widgets and GTK 3 CSS lately) but the current state looks fairly promising. The lightning talks on KPublicAlerts and Clazy were another personal highlight.

Edit Hotel Reservation page, edit fields for check in and check out time and date. Below, a description field with "Room 123" and "Door Code 123456" pointed at by the mouse cursor
Don’t forget your room number with Itinerary

During every conference, there are some real-world improvements to be made in our software stack. For instance, annoyed by the fact that I couldn’t just right-click a Wifi network in order to change its settings, I added functionality in ExpandableListItem to also show its expandable actions in a context menu. There was also an issue with the network connectivity monitor (the component that tells you when you need to log into a captive portal) sending spurious notifications about limited connectivity when in fact it was just losing connection before the laptop went to sleep. Finally, KDE Itinerary now lets you add a description to a hotel reservation so you can note down your room number and/or door key code. Unfortunately, Schema.org has no dedicated fields for that yet.

Sunday, I attended a talk on how to deal with negative feedback. I think many of us have unfortunately been in the situation where we were proud of a change or feature we made and then were almost burnt out by negative feedback and harassment on the internet. Another important presentation put emphasis on the fact that maintainers don’t grow on trees and how to make sure a project remains alive even when the original author had to move on with life. That evening’s social event took place at c-base. It’s a hacker space that cosplays as a fictional crashed space station. How cool is that?! While we were down low near the Spree, I could still grab a glimpse of the lunar eclipse that happened that evening. I hope you could, too.

On Monday, I made my way back home. Thanks to everyone involved in organizing this conference and the sponsors to make this event happen! I am looking forward to seeing you all again soon!

Saturday, 6 September 2025

Today I have something very exciting to share: the Alpha release of KDE Linux, KDE’s new operating system!

Many of you may be familiar with KDE Linux already through Harald Sitter’s 2024 Akademy talk about it (slides; recording), or the Wiki page, or its web page on kde.org.

For everyone else, let me briefly explain: KDE Linux is a new operating system intended for daily driving that showcases Plasma and KDE software in the best light, and makes use of modern technologies.

KDE Linux uses Arch packages as a base, but it’s definitely not an “Arch-based distro!” There’s no package manager, and everything except for the base OS is either compiled from source using KDE’s kde-builder tool, or Flatpak. Sounds weird, huh?! We’ll get to that later.

Harald has been leading the charge to build KDE Linux, assisted by myself, Hadi Chokr, Lasath Fernando, Justin Zobel, and others. We’ve built it up to an Alpha release that’s officially ready for use by QA testers, KDE developers, and very enthusiastic KDE fans.

What’s in the Alpha release?

Today we’re releasing KDE Linux’s Testing Edition. This edition provides unreleased KDE software built from source code; a preview of what will become the next stable release.

In practice, we’re being quite conservative, and it’s already pretty darn stable for daily use. In fact, I’ve had KDE Linux on my home theater PC for about six months, and it’s been on my daily driver laptop for one month. Since then, I’ve done all my KDE development on it, as well as everything else I use a laptop for. It really does work. It’s not a toy or a science experiment.

KDE Linux running on an HTPC, a 2 year-old laptop, and a 10-year old laptop — all pretty much flawlessly

Since KDE Linux offers unreleased, in-progress software, you’ll probably notice some bugs if you use it. Good, that’s the point of the testing edition! Report those bugs so we can fix them before they end up shipping to people using released software.

But where to?

  • If the bug appears to be caused by KDE Linux’s design or integration, use invent.kde.org. Ignore the scary red banner at the top of the page.
  • If the bug appears to be in a piece of KDE software itself, like Plasma or Dolphin (such that it would eventually manifest on other operating systems as well), use bugs.kde.org.

So if this is an Alpha release, what’s known to be broken?

Great question. To start with, some things are intentionally unsupported right now; see also https://community.kde.org/KDE_Linux#Non-goals. For example:

  • Computers with only BIOS support (not UEFI)
  • Loading new kernel modules at runtime
  • Proprietary drivers for pre-Turing NVIDIA GPUs

There are also quite a few things that need work. You can find specific notable examples at https://community.kde.org/KDE_Linux#Current_state. A few are:

  • No secure boot
  • Pre-Turing NVIDIA GPUs require manual setup
  • Immature QA and testing infrastructure
  • User experience papercuts in Flatpak KDE apps and updating the system using Discover

We’d love your help getting these and other topics straightened out!

Just what the world needs, another Linux distro…

A sentiment I have in the past expressed myself.

However, there’s a method to our madness. KDE is a huge producer of software. It’s awkward for us to not have our own method of distributing it. Yes, KDE produces source code that others distribute, but we self-distribute our apps on app stores like Flathub and the Snap and Microsoft stores, so I think it’s a natural thing for us to have our own platform for doing that distribution too, and that’s an operating system. I think all the major producers of free software desktop environments should have their own OS, and many already do: Linux Mint and ElementaryOS spring to mind, and GNOME is working on one too.

Besides, this matter was settled 10 years ago with the creation of KDE neon, our first bite at the “in-house OS” apple. The sky did not fall; everything was beautiful and nothing hurt.

Speaking of KDE neon, what’s going on with it? Is it canceled? If not, doesn’t this amount to unnecessary duplication?

KDE neon is not canceled. However it has shed most of its developers over the years, which is problematic, and it’s currently being held together by a heroic volunteer. KDE e.V. has been reaching out to stakeholders to see if we can help put in place a continuity or transition plan. No decision has yet been made about its future.

While neon continues to exist, KDE Linux therefore does represent duplication. As for unnecessary? I’m less sure about that. Harald, myself, and others feel that KDE neon has somewhat reached its limit in terms of what we can do with it. It was a great first product for KDE to distribute our own software and prepare the world for the idea of KDE in that role, and it served admirably for a decade. But technological and conceptual issues limit how far we can continue to develop it.

See also https://community.kde.org/KDE_Linux#Differences_from_KDE_neon/Prior_art

Time will tell how these two products relate to one another in the future. Nothing is settled.

What are the architecture choices? Why did you build KDE Linux the way you did?

For detailed information about this, see https://community.kde.org/KDE_Linux#Architecture.

We wanted KDE Linux to be super safe by default, providing guardrails and tools for rolling back when there are issues.

For example, KDE Linux preserves the last few OS images on disk, automatically. Get a bad build? Simply roll back to the prior one! It’s as easy as pie too; they show up right there on the boot menu:

It’s like being able to roll back to an older kernel, but for the whole OS.

To make this possible, KDE Linux has an “immutable base OS”, shipped as a single read-only image. Btrfs is the base file system, Wayland is the display server, PipeWire is the sound server, Flatpak gets you apps, and Systemd is the glue to hold everything together.

We also wanted to settle on a specific KDE software development story, with the OS built in the same way we compile our software locally — using kde-builder and flatpak-builder. This should minimize differences between dev setups and user packages that cause un-reproducible bugs (yes, this means we would love for you to use the Flatpak versions of KDE software!). There are genuine benefits for KDE developers here.

If these technologies aren’t your cup of tea, that’s fine. Feel free to ignore KDE Linux and continue using the operating system of your choice. There are plenty of them!

Why an immutable base OS? Isn’t that really limiting?

In some ways, yes. But in other ways, it’s quite freeing.

In my opinion, the traditional model of package management in the FOSS world has been one of our strongest features as well as our most bitter curse. A system made of mutable packages you can swap out at runtime is amazing for flexibility and customization, but terrible for long-term stability. I guarantee that every single person reading these words who’s used a terminal-based package manager has used it to blow up their system at least once. C’mon, admit it, you know it’s true! 😀 And in some distros, even using the GUI tools can get you into an unbootable state after an upgrade. If this has never happened to you on a traditional mutable Linux distro… I don’t believe you!

The pitfalls for non-experts are nearly infinite, their consequences can be showstopping, and the recovery procedures usually involve asking an expert for help. No expert around? Back to Windows.

Over the past 30 years, many package-based operating systems have made improvements to their own system-level package management tools to smooth out some of these sharp edges, but the essence of danger remains. It’s inherent in the system.

So in KDE Linux, we take on that risk and do the system-level package management for you, delivering a complete OS all in one piece. If it’s broken, it’s our fault, not yours. And then you’ll roll back to the previous build, yell at us, and we’ll fix it.

By delivering the OS in a complete image-based package, we can perform safe updates by simply swapping out the old OS image for the new one. There’s no risk of a “half-applied update” or “local package conflicts”, or anything like that. It’s also super-fast (once the new OS image is downloaded, that is), unlike the “offline update” system used by PackageKit, where you have to wait minutes on boot following an update. Those issues don’t exist on KDE Linux.

Wait… if the whole system is one piece and you can’t change it, how do you install new software?

Well, only the base OS in /usr is immutable; /etc is writable for making system-level config changes, and your entire home folder is of course yours to do what you want with, including installing software into it. So that’s what you do: use Discover to get software, mostly from Flathub at this point in time, but Snap is also technically supported and you can use snap in a terminal window (support in Discover may arrive later).

That’s fine for apps in Flathub and the Snap Store, but what about software not available there? What about CLI tools and development libraries?

Containers offer a modern approach: essentially you download a tiny Linux-based OS into a container, and then you can install whatever that OS’s own package management system provides into the container. KDE Linux ships with support for Distrobox and Toolbx.

That’s right, after trashing package management, now I’m endorsing package management! The difference? This is user-level packaging and not system-level packaging. System-level packaging is what’s dangerous. Take away the danger by doing it in your home folder, and you regain almost all of the benefits, with almost none of the risks.

AppImage apps work too.

Homebrew also works; it’s an add-on system-level package manager that allows you to download tons of stuff you might want for power user and development purposes. Note that Homebrew packages are not segregated, so they can override system libraries and present problems. This should be considered an experts’ tool.

Compiling anything you want from source code is also possible — provided the dependencies are available, and Homebrew or containers can be used for this.

Finally, there’s nothing stopping folks from making command-line tools available via Flathub or another 3rd-party Flatpak repository. Some are already there. So this could be a potential avenue of improvement too.

But as you can see, the story here is fragmented, with a menu of imperfect options rather than a single unified approach. That’s a valid critique, and it’s something that needs more work if we want an obvious default choice here.

For more information about this topic, see https://community.kde.org/KDE_Linux#Installing_software_not_available_in_Discover

That’s not enough power! I want to change the base OS!

Actually I lied. There’s another option for developers and super power users, one that does allow intermingling stuff: you can use systemd-sysext to overlay files of your choice on top of the base OS.

It’s a really cool tool you might not be aware of. I’ve started using it in KDE Linux to overlay my built-from-source KDE software on top of the base system for development and testing purposes, and it’s just been a super great experience. Way better than compiling stuff to a prefix in $HOME. No more weird random DBus and Polkit failures or stale file conflicts.

Now, this is quite a bit riskier as you can destabilize the OS itself by overlaying broken parts on top of working parts. But undoing any such changes is super simple, since, again, it’s all self-contained. That’s gonna be a common theme here.

However, I think the better answer for “I want to change the base OS” is “please get involved with developing KDE Linux!” That way if your changes are amazing, the whole world can benefit from them, and the burden of maintaining them over time can be shared with others.

See also https://kde.org/linux/#this-is-so-cool-how-can-i-get-involved-with-development

Still not enough power! I need to be able to swap out kernel modules and base packages at runtime!

Wow, you really are sounding like an OS developer. Maybe you want to help us develop KDE Linux? The OS could benefit tremendously from your skills and experience!

That said, there’s some truth to the idea that an immutable OS like KDE Linux isn’t the best choice for doing this kind of really low-level development or system optimization. That’s fine; there are hundreds of other traditional Linux-based operating systems out there that can serve this purpose.

If your goal really is to build your own OS for your own personal or commercial purposes, it’s hard to go wrong with Arch Linux; it’s one of the tools we used to build KDE Linux, in fact. In a lot of ways it’s more of an OS building toolkit than it is an OS itself.

If it’s all Flatpak and containers and stuff, does it really showcase Plasma and KDE software in the best light? Really?

Well, we’re kind of cheating a bit here. A couple of KDE apps are shipped as Flatpaks, and the rest you download using Discover will be Flatpak’d as well, but we do ship Dolphin, Konsole, Ark, Spectacle, Discover, Info Center, System Settings, and some other system-level apps on the base image, rather than as Flatpaks.

The truth is, Flatpak is currently a pretty poor technology for system-level apps that want deep integration with the base system. We tried Dolphin and Konsole as Flatpaks for a while, but the user experience was just terrible.

So for the Alpha release, these apps are on the base OS where they can properly integrate with the rest of the system. There’s no reason to torture people with issues that we know won’t be fixed anytime soon!

Other apps not needing as deep a level of system integration are fine as Flatpaks right now, and we’re engaging with Flatpak folks to see how we can push the technology forward to work better for the deep integration use cases.

This is because one of KDE Linux’s other goals is to be a test-bed for bringing new technologies to KDE. Our software that behaves awkwardly when sandboxed or run on a filesystem other than Ext4 represents something for us to fix, because those technologies aren’t going away. We need to be embracing that future, not ignoring it. KDE Linux both helps and forces us to do it.

This should be exciting. New technology is fun! You get to help guide the future. Let’s not get caught up yelling at clouds here!

I’m a KDE developer. Why should I migrate to KDE Linux, and how does KDE development work?

Easy setup, speed, safety, DBus and Polkit finally work properly, space savings, consistent platform targets, and more. There’s a lot to like. See also https://kde.org/linux/#im-a-kde-developer-why-should-i-use-kde-linux-and-how-does-kde-development-work

Forget the haters, this project is cool! How can I help?

Great! For starters, install it on your computers. 🙂 We’re looking to get more feedback from daily drivers. The Matrix room is a great place to get in touch with us.

You can help out with some of the tasks and projects mentioned at https://community.kde.org/KDE_Linux#Current_state. Those are high priority. And lots more easier, lower-priority tasks can be found here. You can submit Issues or Merge Requests on invent.kde.org.

And finally, help spread the news! If you couldn’t tell, I’m really excited about this project, and I think a lot of other folks will be as well… once they hear about it!

Friday, 5 September 2025

The 2025 edition of KDE's annual community gathering starts today. Unfortunately circumstances mean I won't be attending, BUT I was fortunate enough to attend my first ever Akademy last year. Indeed, that is one whole year ago, but neurodivergent scriptophobia and an incredible ability to procrastinate have kept me from writing about it, until now.

Attending talks and sprints was certainly a blast from the past, not that I had ever attended another tech-based conference before. It certainly brought back memories of campus life in my late teens studying anthropology/sociology. The real highlight was being able to meet fellow attendees for the first time in person, all of whom I had only ever previously chatted to in Matrix rooms.

Over the past year we've continued to provide the community with regular package updates from KDE neon, and many aspects of how we release packages have changed, but this probably deserves its own post on the neon blog. Aside from this, I've immersed myself in the innards of KDE's sysadmin tooling whilst helping with the rollout of the shiny new ephemeral VM builders that fellow antipode Ben has lovingly crafted. I'll definitely use this new tech as a foundation to help automate not only Snap publishing, but also Flatpaks and AppImages. After all, in KDE it's all about the apps!

I'd like to thank everyone in the community who has been so welcoming and supportive, and KDE e.V. for helping me attend last year's event, and I wish all of this year's attendees a fantastic experience. Cheers!


Thursday, 4 September 2025

On Android, apps need special permissions to access certain things like the camera and microphone. When an app tries to access something that needs special permission for the first time, you will be prompted once, and afterwards the permission can be removed again in the app settings.
For sandboxed apps on the desktop, as done by Flatpak or Snap for example, the situation is similar. Such apps can’t access all system services; instead they have to go through xdg-desktop-portal, which will show a dialog where the user can grant permission to the app. Unfortunately, we lacked the “configure permissions” part of it, which means granted permissions disappear into the void and pre-authorization is not possible.
This changes with Plasma 6.5 which will include a new settings page where you can configure application permissions!

Features

Main view showing Application Permissions

The main view after selecting an application shows all the permissions you can configure. The ones at the top are controlled by simple on/off switches and are turned on by default – applications are currently allowed to do those things by the portal frontend except when explicitly denied.

The settings that follow are a bit more interesting: you can configure whether application requests to do those things are allowed or not. Additionally, the “Always Ask” setting will make it so that a dialog is always shown when the app tries to take a screenshot, for example. The default state for these settings after you install an app is “Ask Once”: a dialog will be shown, and depending on whether you click yes or no, future requests are allowed or denied. The Location setting is a bit special as it allows configuring the accuracy with which the current location is fetched.
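As a rough illustration of those states, here is a hypothetical sketch of the decision logic (invented names, not the actual portal implementation):

```python
# Hypothetical sketch (not the real xdg-desktop-portal code) of how the
# permission states described above could drive a request decision.
def handle_request(state, ask_user):
    """state: one of 'allow', 'deny', 'always-ask', 'ask-once'.
    ask_user: callable that shows the dialog and returns True/False.
    Returns (allowed, new_state)."""
    if state == "allow":
        return True, "allow"
    if state == "deny":
        return False, "deny"
    if state == "always-ask":
        # A dialog is shown every time; the stored state never changes
        return ask_user(), "always-ask"
    # "ask-once": the user's first answer becomes the persistent state
    allowed = ask_user()
    return allowed, "allow" if allowed else "deny"

print(handle_request("ask-once", lambda: True))   # (True, 'allow')
```

The key difference this sketch captures: “Ask Once” converts the first answer into a permanent setting, while “Always Ask” keeps prompting forever.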

Configuring saved screen cast and remote desktop sessions

Finally you can configure screen cast and remote desktop sessions that the app is able to restore in the future. Here you can see exactly what the application is able to record and control and revoke those permissions. The Plasma specific override for remote control can also be enabled here.

A Note on Non-Sandboxed Apps and X11

For non-sandboxed (“host”) apps only a subset of settings will be shown. The reason is simple: some portals just forward a request from the application to another service. Denying host apps access to such portals would have no effect, since such apps either aren’t using the portal in the first place or can always talk to the service directly anyway. However, some things such as recording screen contents or sending fake input events always require that these apps use the portal, because they are simply not possible through other means, so those settings will be shown. On Wayland, anyway – on X11 everything can do everything. Even so, these settings will also be shown on X11 if you are using an app that uses the portal to do these things.

Outlook

Of course, as we implement new portals, support for these will also be added here if suitable. For existing portals, permission support can be added – preferably upstream. One such system is currently in development for the input capture portal. If you think there is a portal dialog that could be hooked up to a permission system but currently isn’t, please file a bug report and we can investigate it.

Why PyCups3 is So Damn Intelligent

In my last blog, I shared just how smart PyCups3 is. This time, let’s go one layer deeper and see why it’s so intelligent.

But first, let’s warm up with a bit of Python magic. ✨


What the Heck is a Dunder Method?

Paraphrasing the Python 3.x docs:

Dunder methods (a.k.a. magic methods) are special methods you can define in your classes to give them superpowers. These methods let you make your class objects behave like built-in Python types. They’re called dunder because they start and end with a double underscore — like __init__.
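To make that concrete, here is a tiny self-contained example (the class and job names are invented for illustration): defining `__len__`, `__getitem__`, and `__repr__` lets a custom class behave like a built-in sequence.

```python
# A minimal example of dunder ("double underscore") methods in action.
class PrinterQueue:
    def __init__(self, jobs):
        self._jobs = list(jobs)

    def __len__(self):
        # Called by the built-in len()
        return len(self._jobs)

    def __getitem__(self, index):
        # Called by indexing syntax, e.g. queue[0]
        return self._jobs[index]

    def __repr__(self):
        # Called when the object is printed or inspected
        return f"PrinterQueue({self._jobs!r})"

queue = PrinterQueue(["report.pdf", "photo.png"])
print(len(queue))    # 2
print(queue[0])      # report.pdf
```

Because `__getitem__` is defined, the object even works in a `for` loop and with slicing, for free. That is the kind of “superpower” dunder methods grant.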

Wednesday, 3 September 2025

So, after the font family, OpenType handling, and font metrics rework, there are some text properties left that aren’t particularly related to one another. Furthermore, at the time of writing I have also tackled better UI elements, as well as style presets, so I’ll talk a bit about those too.

Language

The first few properties are all affected by the language set on the text shape. So the first thing I tackled was a language selector.

The format accepted by SVG is an xml:lang attribute, which takes a BCP47 language code. This is unlike QLocale, which uses a POSIX-style string of lang_territory_script@modifier, where it ignores the modifier.

While you’d think that’d be good enough, that modifier is actually kinda important. For example, when parsing PSD files, the BCP47 code associated with the extremely American term “Old German” is de-1901, “German according to the 1901 orthography”.

So the first thing I ended up doing is creating a little BCP47 parser, that can utilize QLocale for the proper name, without losing the variants and extensions and what have you.
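Krita's parser is C++, but the idea can be sketched in Python roughly like this (a simplified illustration, not the actual Krita code; real BCP47 has more subtag types than shown here):

```python
# Rough sketch of splitting a BCP47 tag into subtags, so that variants
# like "1901" in "de-1901" are preserved instead of being thrown away.
def parse_bcp47(tag):
    parts = tag.split("-")
    result = {"language": parts[0].lower(), "script": None,
              "region": None, "variants": []}
    for part in parts[1:]:
        if len(part) == 4 and part.isalpha():
            result["script"] = part.title()          # e.g. "Hant"
        elif len(part) == 2 and part.isalpha():
            result["region"] = part.upper()          # e.g. "TW"
        elif len(part) >= 5 or (len(part) == 4 and part[0].isdigit()):
            result["variants"].append(part.lower())  # e.g. "1901"
    return result

print(parse_bcp47("de-1901"))
# {'language': 'de', 'script': None, 'region': None, 'variants': ['1901']}
```

The language subtag can then be handed to QLocale for a display name, while the variants stay available for round-tripping the original tag.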

For the UI widget, it is at its most basic a combobox. Instead of showing all possible BCP47 codes (which’d be too much), we instead only show previously used languages in this session and “favourited” languages, with the latter defaulting to the current languages of the application. Users can type full BCP47 language codes into the combobox and have it parsed.

By default the language dropdown shows the current list of locales
Artists will be able to type in any valid BCP47 code, and Krita will try its best to parse and display it.

However, it’s likely someone may not know the BCP47 code for a given language, and while they could find that on, say, Wikipedia, that still is a considerable amount of friction. So instead we use QLocale to populate a model with the basic names and territory codes. This model then is set onto a ProxySortFilterModel, which in turn is used as a sort of completer model. This allows users to type in partial names, and it’ll show an overlay with matches.

Artists will be able to type in a language and Krita will provide a search.
Languages that have been used this session will be added to the dropdown, where they can be made persistent.

There’s a bit of a problem here though: QLocale can only provide the native name and the English name of a locale, but not, say, the Mandarin Chinese name of French. I don’t know if we can ever fix this, as I don’t particularly feel like making translators translate every single language name.

Either way, this should allow someone who uses their computer in English, but speaks French to type in fr and then go into the dropdown to mark that as a favourite so it is easily accessible in all future text shapes.

Most programs use text language to provide spellcheck or hyphenation. Krita does neither right now, but that doesn’t mean it doesn’t use language. For one, some OpenType fonts have a locl feature that gets toggled when you provide the shaper with the appropriate language. There’s two other places we use the language as well:

Line Break and Word Break

Line break and word break both modify the soft wrapping in a piece of text.

Line break in particular can be modified to allow line breaks before or after certain punctuation, with loose line breaking allowing breaks at all possible places, and strict line breaking not allowing breaks before and after certain punctuation marks. Which marks those are depends on the content language, and CSS defines several rules for Chinese and Japanese content.

LibUnibreak, which we’re using, doesn’t have loose rules for any languages, but it does allow for strict. A project like KoReader is able to customize the result of LibUnibreak to support more languages, and that might be a future solution. However, it might also be that we look for a different line breaking library as LibUnibreak is going to be limited to Unicode 15.0: Unicode 15.1 introduced improvements for Brahmic scripts, and because those scripts are quite complex, it won’t work with LibUnibreak’s architecture.

The Krita slogan in Korean. Left has word-break: normal, right word-break: keep all, which means soft breaks happen only at the spaces.

Word break is somewhat similar to line break. Some scripts, in particular Korean and Ethiopian, have two major ways of line breaking: word-based and grapheme-based, and depending on the context, you may need one or the other. Word break provides a toggle for that choice.

Text Transform

Text transform is also affected by language. I didn’t really need to do much here, however; I just forgot to have it inherit properly. I don’t recall if I touched upon this before, but another annoying thing with text-transform, given that it can change the number of codepoints in a sentence, is that you need to take care to keep track of those changes, and ideally I’d rework the code so this happens at the last possible moment.
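A one-line illustration of why that bookkeeping is needed: some case transforms change the codepoint count, so indices into the source text no longer line up with the transformed text.

```python
# Uppercasing German "straße" yields "STRASSE": the single codepoint
# U+00DF (ß) becomes two codepoints ("SS"), shifting every index after it.
source = "straße"
transformed = source.upper()
print(len(source), len(transformed))   # 6 7
```

Any cursor position or style-range boundary past the ß would therefore need remapping after the transform, which is exactly the tracking problem described above.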

Spacing

Then there’s a number of spacing/white space related features. Letter and word spacing were not touched in these last few weeks, though they do have some peculiarities, and I will discuss them when talking about phase 3 features, when the time comes.

White Space

The CSS white-space rule is my enemy. It exists to facilitate hand-written XML files, and in those, you might want to manually line break text, and have the computer automatically unbreak said text and remove unnecessary white space. And that is also the default behaviour.

In the context of a what-you-see-is-what-you-get text editor, however, this default behaviour mostly gives us a text interaction where spaces are getting eaten for no particular reason. Not just that, but this painful default behaviour is called “normal”. Which means that, if we set white space to the much more preferable “pre-wrap”, users will see a toggle they don’t know. And if they set the toggle to what they would consider the default behaviour, “normal”, they get distinctly not “normal” behaviour.

This’ll probably be fixed by CSS-Text-4, where white-space has been split into white-space-collapse and text-wrap, with the former having much more descriptive “collapse” and “preserve” modes. Until we handle that properly though, the white-space property is going to be hidden by default so we don’t have to troubleshoot that endlessly.

Beyond that, the thing that got fixed code-wise is that white space can now be applied per range. To do this, you need to take the whole string inside the paragraph and then process each range separately, as white space present in a previous range can affect whether collapses happen in the current one.
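As an illustration of why ranges cannot be collapsed independently, here is a minimal sketch (plain C++, not Krita’s actual code) where the collapse state carries over between ranges:

```cpp
#include <string>
#include <vector>

// Whether a leading space in a range collapses depends on whether the
// previous range already ended in a space, so the state must carry over.
std::string collapseRanges(const std::vector<std::string> &ranges)
{
    std::string out;
    bool prevWasSpace = false; // carried across range boundaries
    for (const std::string &range : ranges) {
        for (char c : range) {
            const bool isSpace = (c == ' ' || c == '\n' || c == '\t');
            if (isSpace && prevWasSpace)
                continue;             // collapse the run into one space
            out += isSpace ? ' ' : c; // line breaks also become spaces
            prevWasSpace = isSpace;
        }
    }
    return out;
}
```

For example, the two ranges `"a  "` and `" b"` collapse to `"a b"`: the space at the start of the second range is swallowed because the first range already ended in one.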

Tab size and Indentation

So, the way tabs work is that instead of simply adding spacing, a tab adds enough spacing that the next grapheme starts at a multiple of the tab-size. This required some rework, in particular for line breaking and especially for wrapping text in a shape, but I’ve managed to get it to work.
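The tab-stop computation itself is small. A sketch, where `tabDistance` stands in for the resolved tab-size in layout units (an assumed name, not Krita’s code):

```cpp
#include <cmath>

// A tab does not add a fixed amount of spacing; it advances the pen to the
// next multiple of the tab distance. If the pen sits exactly on a stop, it
// still advances to the following one.
double nextTabStop(double penX, double tabDistance)
{
    return (std::floor(penX / tabDistance) + 1.0) * tabDistance;
}
```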

Showcasing the tab-size property. This is probably not going to be the most used property, but it is good to have it functional

Text indentation is similar, though I don’t quite understand how it should work for text-in-shape. Should the indentation be equal for every new line (the current behaviour), or should it instead be similar to tab-size and snap to a multiple?

Showcasing text-indent. Both paragraphs have text-indent set to 1 em, but the left one only sets text-indent per new line, while the right one uses hanging indentation.

Text Align and Text Anchoring

Text align didn’t need any changes code-wise. It did, however, need some fixing UI-wise. You see, for text-in-shape, SVG 2 uses text-align, while for all other text types, including inline-size wrapping, it uses text-anchor.

This is because SVG was originally a lot dumber with regard to how text works, and the most important thing it needed to know was where the start (anchor) of a given chunk of text is. Inline-size, meanwhile, is basically the simplest form of wrapping bolted onto SVG 1.1 text, and thus uses text-anchor. This does mean that this simple wrapping doesn’t have access to full justification of text, as that’s only in text-align.

On the flip side, text-align and text-align-last only apply to text-in-shape and come from CSS Text. There are many different options there, and some of them change behaviour depending on the text direction. Text-align-last even only really applies when the text is fully justified.

The UI for text-anchor and text-align. The top 4 buttons are what most people will see, while clicking on the arrow will fold out the precise options.

These are all tiny implementation details that I didn’t want to bore people with, so I tried to simplify them to three buttons (start/middle/end) and a toggle for justification. People can still set each property separately, but most people won’t ever have to touch this.
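A hypothetical sketch of that simplification (names assumed, not Krita’s actual code): one of three buttons plus a justify toggle gets mapped onto both underlying properties, since text-anchor drives most text types while text-align drives text-in-shape.

```cpp
#include <string>
#include <utility>

// Map the simplified UI state to (text-anchor, text-align) values.
// text-anchor only knows start/middle/end; text-align spells the middle
// value "center" and is the only property with a "justify" option.
std::pair<std::string, std::string> simplifiedAlignment(const std::string &button, bool justify)
{
    const std::string anchor = button;                            // start/middle/end as-is
    std::string align = (button == "middle") ? "center" : button; // text-align naming
    if (justify)
        align = "justify"; // full justification only exists in text-align
    return {anchor, align};
}
```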

First lines of Krita's application description typeset very formally as if it were a roman text.
SVG 2 only allows justification when the text is in a shape, but it is otherwise functional.

Text Rendering

Within text layout there is this concept of “hinting”, that is, hints on how to properly rasterize vector glyphs onto a limited number of pixels. A fully vector program like Inkscape doesn’t really have any use for this, but within Krita text is rasterized before compositing.

Now, SVG has the text-rendering property, which allows us to decide what kind of rendering we get. For Krita, we use “optimizeSpeed” to mean no antialiasing and full hinting, great for pixel-art fonts, while “optimizeLegibility” only does full hinting for vertical text and light hinting otherwise. Then there’s “geometricPrecision”, which uses no hinting whatsoever.

If your system does upscaling it may not be visible, but each of the above texts is rendered with a pixel-art font and optimizeSpeed enabled; in some cases letter spacing and baseline shift have been enabled as well. The text layout will snap such offsets to the nearest pixel so the text remains crisp.

Besides toggling hinting and antialiasing, these options also get used to ensure that offsets like line height, baseline shift, tab-size, text-indent, dominant baseline and text-decoration get adjusted to the nearest pixel. This should make it easier to produce nice pixel-perfect lettering, which is something artists have expressed a need for.
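Such snapping boils down to rounding an offset to the device pixel grid. A minimal sketch (assumed helper, not Krita’s actual code):

```cpp
#include <cmath>

// Convert an offset in points to device pixels, round to the pixel grid,
// and convert back, so offsets like line height and baseline shift land on
// whole pixels when hinting is active.
double snapOffsetToPixel(double offsetPt, double pixelsPerPt)
{
    return std::round(offsetPt * pixelsPerPt) / pixelsPerPt;
}
```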

QML items and Docker rework

One of the things that was important to me when designing the text properties docker was to ensure folks wouldn’t get overwhelmed. Because of this, I spent a lot of time trying to make sure only relevant properties are shown. CSS’ structure is designed to facilitate this, as it allows for partial stylesheets and you don’t need to provide a definition for each style, so I decided to have that reflected inside the docker. Furthermore, when adding new properties I made sure that the search can find alternate key terms, so that a term of art like “tracking” brings up the property “letter-spacing”. The docker is also set up to make it easy to revert properties, so that people can experiment with new properties and, if they don’t like them, revert them with a single click.

One of the original mockups for the text properties docker. Design 1 was one that tried to hide all properties under fold-outs. Removal of inherited properties was also in this design.
UI wireframe with a number of comments like "Even in this design, properties need to be grouped" and "Search should provide results for aliases, like tracking showing letterspacing".
In design 2 I focused on making the UI less busy and more experimentation friendly. This is the one we went with at the end for that precise reason.

Some people in the text tool thread weren’t too happy with this; things that appear and disappear are generally considered scary UI. But to me this isn’t much different from layers inside a layer stack. Furthermore, disappearing UI elements are generally most annoying on systems where this is not part of the internal representation of the data, as that means there’s a good chance the UI and the data get out of sync. Because CSS does have partial sheets, figuring out which controls are currently relevant is trivial.

The final docker as it is shown in Krita. The biggest change is that properties which have a main widget (like font substyle selection) can be expanded to show the precise options. Similarly, the button at the start will show a multi-headed icon when selecting a range of text with multiple different values.

Revert buttons also had many critics, because other text applications don’t have them, and they’re extra buttons, and extra buttons are bad UI. This is a bit of a UI myth: a good UI is one that lets you stay in control of the data object you’re manipulating. Sometimes that can mean fewer buttons, but with the complexity of CSS I felt it was important that you always know whether a property is currently being applied or inherited. Similarly, being able to quickly revert a property should facilitate experimentation. And plenty of people have gone half-mad with frustration trying to figure out which text properties are being set, and how to unset them. In many applications this leads to liberal use of “remove all formatting”, which shouldn’t be as necessary with this docker.

Still, some good points were made: some basic properties, like font-size and font-family, always need to be shown, to avoid confusion. Similarly, we always want to hide some properties, like our footgun friend white-space. Finally, some folks just always want to see all properties.

Some people want to always see all properties at all times. A part of me still wonders if they fully understand how many properties Krita supports, but regardless, it is possible. Note that the search bar becomes a filter in this situation.

For this reason, I made it so that every property can be set to “always shown”, “when relevant”, “when set”, “never shown” or “follow default”. (The difference between “when relevant” and “when set” is that the former also shows when CSS inheritance is happening; “follow default” allows for quickly switching all properties.) When no properties are situationally visible, the search to add a property is replaced with a filter search. This way, we should be able to have a default behaviour suited for progressive disclosure, while people who hate disappearing widgets can disable those. And, as a final plus, I won’t have to deal with people complaining about toggles they don’t use in their language and how I should remove this unnecessary feature.

Both when showing all properties and when showing only relevant ones, the search at the bottom can find results for different keywords. In this case, letter-spacing, word-spacing and font-kerning all have “tracking” defined as a possible keyword, so the rest are filtered away. This should help people with a typography background find properties even without knowing CSS all that well.

This was all facilitated by the docker using QML. We’ve previously had attempts at QML integration in Krita, but those were always either a separate UI (Krita Sketch) that easily broke, or a somewhat unmaintained docker like the touch docker. This time, however, we’re committed to actually getting this to work properly, and I have also started using QML for the glyph palette.

One annoying thing with using QML, even with QtQuick.Controls, is that we no longer had access to our QWidget controls, meaning we didn’t have sliders. Thankfully another contributor, Deif Lou, was kind enough to hack together a port of those sliders, and I was able to import them into Krita to use in the text properties docker.

Style Presets

With a docker that only shows relevant properties, and can show somewhere between 30 and 50 of them, the need for presets became self-evident. CSS has support for style classes built in; that is even its primary use case. Krita doesn’t support style classes themselves (it can only parse them), but that doesn’t mean we cannot provide presets.

For the presets, I decided to use a small SVG file with a single text object as the storage method. We identify the element that contains the stored style by adding a krita:style-type attribute to that element:

<?xml version="1.0" encoding="UTF-8"?>
<svg xmlns="http://www.w3.org/2000/svg" height="14pt" width="40pt" viewBox="0 0 40 14" xmlns:krita="http://krita.org/namespaces/svg/krita">
    <text transform="translate(0, 12)" font-size="12"><title>Superscript</title><desc>This halves the font-size and offsets the text towards the top.</desc>Super<tspan krita:style-type="character" style="baseline-shift:super; font-size: 0.5em;">script</tspan></text>
</svg>

The reason for using a text object instead of a plain CSS class is that this allows us to provide a good demonstrative sample. If you create a style like a synthetic superscript, you will want to show that superscript in the context of the text it will be situated in, because otherwise it is invisible.

The style preset selector. Not mentioned is the significant amount of time that was spent making the style and font previews look good. I do think it’s time well spent, as the nice big previews make it very inviting to use.

There’s also the possibility to create a “pixel-relative” style. File-wise, this means that krita:style-resolution="72dpi" gets added as an attribute. In practice, this is because CSS itself has a hardcoded relationship between pixels, points and inches of 96:72:1, which can be worked with in a vector-only environment, but not in a raster one. For this reason, Krita only really stores points and font-relative units. A pixel-relative style then gets scaled depending on the document resolution, so that users can create styles for, say, 12-pixel-high text (again, this is because of the pixel-perfect font requirement).
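A sketch of that scaling under my reading of the post (hypothetical helper, not Krita’s actual code; the 72 dpi reference comes from the krita:style-resolution attribute):

```cpp
// CSS fixes pixels:points:inches at 96:72:1. A pixel-relative style stores
// its values at the 72 dpi reference, so they must be rescaled to the
// actual document resolution before use; regular styles pass through.
double styleSizeInPoints(double storedPt, double documentDpi, bool pixelRelative)
{
    if (!pixelRelative)
        return storedPt;                    // resolution independent
    return storedPt * (72.0 / documentDpi); // e.g. 12 at 300 dpi -> 2.88 pt
}
```

Under this reading, text authored as 12 pixels high keeps a 12-pixel height regardless of whether the document is 72, 144 or 300 dpi.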

There’s more work that needs to be done here. For example, right now, we cannot edit the fill/stroke, so we’re just not storing that. We also have a very harsh split between paragraph (block level) and character (inline-level) properties, and it would be more ergonomic if we could remove that.

Next Steps

Phase 2 is nearly done. Before finishing this, I’d like to rework the properties a bit more because I got a bit better at QML, and I think I can handle some things more elegantly now. I also want to see if I can get rid of the harsh paragraph/character properties distinction.

After that, phase 3 will start. I have already started on it with some converters so that old Krita 5.1 texts can be converted to wrapped text. I also want to replace the text tool options so new texts can be made on the basis of a style preset or whatever is in the docker at that moment. I want to remove the old rich text editor. And I want to work on an SVG character-transforms editing mode, called typesetting mode.

Finally, there’s still the issue of the text paths, that is, “text-on-path” and “text-in-shape”. That one is being blocked by the sheer complexity of editing the paths themselves (the text-layout parts work fine). We’ve recently had a bunch of meetings to figure out the proper solution here, so I’ll be working on that soon as well.

Appendix

Old German

The reason I find “Old German” very American is because terms like “Old German”, “Old English” and “Old Dutch” all refer to languages that predate the establishment of the United States of America. So, only someone from that country would think that these are appropriate names for what’s basically an older standard spelling variant.

Adobe uses them largely to pick the older style hyphenation dictionaries, which is important for backwards compatibility, but I can’t really understand why “German 1901” was not considered instead of “Old German”.

As an aside, despite having a special “Old Dutch” category inside Adobe products, there isn’t a special BCP47 variant for it. I honestly don’t know which version of Dutch they mean by “Old Dutch”, because in my lifetime, spelling rules have changed significantly, twice, and the latter was so disliked that there’s a second version of it used by certain newspapers. Similarly, you will also not find “nl-1996” or “nl-white-booklet” in BCP47, meaning that even if I did figure it out, we still wouldn’t be able to assign a meaningful BCP47 code to it.

Sunday, 31 August 2025

Window Activation, what’s that?

In short: for some actions you want to activate your application window. How to do that differs from platform to platform.

Some weeks ago there was a post by Kai about window activation on Wayland.

It explains nicely how the XDG Activation protocol works and how the different parts of the KDE & Qt stack implement it to correctly activate the right windows in your KDE Plasma Wayland session.

What’s the issue?

For the most part this now works flawlessly, but there is one corner case that is not handled that well at the moment: stuff launched from our terminal emulator Konsole.

This is no large issue for many users, but in my usual workflow with Kate I rely a lot on just being able to do

❯ kate mycoolfile.cpp

in my Konsole window and, if Kate is already running, to have the existing window activated.

That worked just fine on X11, but on Wayland the safeguards against misuse of window activation stopped it. There I just got some flashing of the Kate icon in the task switcher.

After some years of ignoring that issue myself and hoping a solution would just materialize, I now gave fixing it a try myself.

The hack (solution)

In most cases it is easy to get the application that launches stuff to export a new XDG_ACTIVATION_TOKEN environment variable for the new process, and everything just works.

Unfortunately that is not that easy for a terminal emulator.

Naturally, Konsole could create and export one before starting the shell, but such a token is one-time use. In the best case, the first launched Kate or other application could use it once.

The current solution is to allow applications to query an XDG_ACTIVATION_TOKEN via D-Bus from Konsole, if they can prove they were started inside the shell session by passing a KONSOLE_DBUS_ACTIVATION_COOKIE that Konsole exported to the shell.

This leads to the following client code in Kate/KWrite and Konsole itself.

    // on wayland: init token if we are launched by Konsole and have none
    if (KWindowSystem::isPlatformWayland()
        && qEnvironmentVariable("XDG_ACTIVATION_TOKEN").isEmpty()
        && QDBusConnection::sessionBus().interface()) {
        // can we ask Konsole for a token?
        const auto konsoleService = qEnvironmentVariable("KONSOLE_DBUS_SERVICE");
        const auto konsoleSession = qEnvironmentVariable("KONSOLE_DBUS_SESSION");
        const auto konsoleActivationCookie
            = qEnvironmentVariable("KONSOLE_DBUS_ACTIVATION_COOKIE");
        if (!konsoleService.isEmpty() && !konsoleSession.isEmpty()
            && !konsoleActivationCookie.isEmpty()) {
            // we ask the current shell session
            QDBusMessage m =
                QDBusMessage::createMethodCall(konsoleService,
                    konsoleSession,
                    QStringLiteral("org.kde.konsole.Session"),
                    QStringLiteral("activationToken"));

            // use the cookie from the environment
            m.setArguments({konsoleActivationCookie});

            // get the token, if possible and export it to environment for later use
            const auto tokenAnswer = QDBusConnection::sessionBus().call(m);
            if (tokenAnswer.type() == QDBusMessage::ReplyMessage
                && !tokenAnswer.arguments().isEmpty()) {
                const auto token = tokenAnswer.arguments().first().toString();
                if (!token.isEmpty()) {
                    qputenv("XDG_ACTIVATION_TOKEN", token.toUtf8());
                }
            }
        }
    }

The KONSOLE_DBUS_SERVICE and KONSOLE_DBUS_SESSION environment variables already existed before; the only new one is the secret KONSOLE_DBUS_ACTIVATION_COOKIE, which Konsole fills with some random string per shell session.

The org.kde.konsole.Session interface now has a new activationToken method to retrieve a token. That method will only work if the right cookie is passed and if Konsole itself is the currently active window (otherwise it will not be able to request a token internally).

As is easily visible from the naming alone, this is a Konsole-only interface, and applications that want to make use of it need their own code.

Therefore this is more of a hack than a proper solution.

If it inspires others to come up with something generic, that would be awesome!

I will happily help to implement that in Kate and Co., but until that happens, at least my workflow is back working in my Wayland session.

Feedback

You can provide feedback on the matching Lemmy and Reddit posts.

Hello again! I'm Ajay Chauhan, and this update continues my Google Summer of Code 2025 journey with Kdenlive. Over the past few months, I've been working on transforming Kdenlive's marker system from simple point markers to range-based markers. Let me take you through the progress we've made since my last update.

From Backend to Frontend:

Building the User Interface

The real magic happened when we started building the user interface. I began with the Marker Dialog - the window where users create and edit markers. This was a significant challenge because we needed to maintain the simplicity of point markers while adding the complexity of range functionality.

I added three new UI elements:

  • A checkbox to toggle between point and range markers
  • An "End Time" field for setting the marker's end position
  • A "Duration" field that automatically calculates and displays the time span

The trickiest part was keeping these fields synchronized. When a user changes the start time, the duration updates automatically. When they modify the duration, the end time adjusts accordingly. It's like having three interconnected gears that always move together.
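The synchronization described above can be sketched as a tiny struct (hypothetical, not Kdenlive's actual code): whichever field the user edits, one of the other values is recomputed so that end = start + duration always holds.

```cpp
// Keep the three dialog fields consistent, mirroring the behaviour in the
// text: editing the start keeps the end and recomputes the duration, while
// editing the duration adjusts the end.
struct RangeFields {
    int start = 0, end = 0, duration = 0;

    void setStart(int s)    { start = s; duration = end - start; }    // end stays put
    void setEnd(int e)      { end = e; duration = end - start; }      // start stays put
    void setDuration(int d) { duration = d; end = start + duration; } // end follows
};
```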

🔗 Commit: feat(range-marker): implement range marker functionality in MarkerDialog

Visual Magic in the Monitor

Next came the Monitor Ruler - the horizontal timeline you see above the video preview. This is where range markers first become visible alongside the video preview.

I implemented a visual system:

  • Range Span: A semi-transparent colored rectangle that shows the marker's duration
  • Start and End Lines: Vertical lines marking the beginning and end of each range
  • Color Consistency: Each marker type gets its own color, with range markers using that color for the entire span

The core of this visualization is the rangeSpan rectangle:

Rectangle {
    id: rangeSpan
    visible: guideRoot.isRangeMarker
    x: (model.frame * root.timeScale) - ruler.rulerZoomOffset
    width: Math.max(1, guideRoot.markerDuration * root.timeScale)
    height: parent.height
    color: Qt.rgba(model.color.r, model.color.g, model.color.b, 0.5)
}

What this does:

  • visible: guideRoot.isRangeMarker - Only shows for range markers (not point markers)
  • width: Math.max(1, guideRoot.markerDuration * root.timeScale) - Width represents duration, with a minimum of 1 pixel
  • color: Qt.rgba(...) - Uses the marker's category color with 50% transparency

But the real challenge was making these markers interactive. Users can now drag the left or right edges of a range marker to resize it in real-time. The visual feedback is immediate - you see the marker grow or shrink as you drag, making it incredibly intuitive.

🔗 Commit: feat(monitor-ruler): Display the range marker color span in the Clip Monitor

Timeline Integration

The Timeline Ruler was the next frontier. This is the main timeline where users spend most of their editing time, so the range markers needed to be even more sophisticated here.

I added several visual features similar to the monitor ruler.

The timeline implementation also includes the same drag-to-resize functionality, but with additional constraints to prevent markers from extending beyond clip boundaries.

🔗 Commit: feat(markers): drag-to-resize range markers in monitor and timeline

The drag-to-resize

The drag-to-resize functionality was perhaps the most technically challenging feature. I had to implement:

  • Left and Right Handle Resizing: Dragging the left edge changes both the start position and the duration, while dragging the right edge changes only the duration
  • Live Preview: Visual feedback during resize operations
  • Constraint Handling: Preventing invalid durations (minimum 1 frame)
  • Binding Management: Restoring Qt's automatic updates after resize completion

The resize handles only appear when range markers are wide enough, and they provide visual feedback through color changes and opacity adjustments. Here's how the left resize handle works (the right handle is analogous):

Rectangle {
    id: leftResizeHandle
    visible: guideRoot.isRangeMarker && rangeSpan.width > 10
    width: 4
    height: parent.height
    x: rangeSpan.x
    color: Qt.darker(model.color, 1.3)
    opacity: leftResizeArea.containsMouse || leftResizeArea.isResizing ? 0.8 : 0.5

    MouseArea {
        id: leftResizeArea
        anchors.fill: parent
        anchors.margins: -2  // Extends clickable area
        cursorShape: Qt.SizeHorCursor
        preventStealing: true

        onPositionChanged: {
            if (isResizing) {
                // globalStartX, startPosition and startDuration are assumed
                // to be captured in onPressed when the resize begins
                var globalCurrentX = mapToGlobal(Qt.point(mouseX, 0)).x
                var realDeltaX = globalCurrentX - globalStartX
                var deltaFrames = Math.round(realDeltaX / root.timeScale)
                var newStartPosition = Math.max(0, startPosition + deltaFrames)
                // Moving the left edge keeps the right edge fixed, so the
                // duration shrinks or grows by the same amount
                var newDuration = Math.max(1, startPosition + startDuration - newStartPosition)

                // Live preview updates
                rangeSpan.x = (newStartPosition * root.timeScale) - ruler.rulerZoomOffset
                rangeSpan.width = Math.max(1, newDuration * root.timeScale)
            }
        }
    }
}

Key implementation details:

  • anchors.margins: -2 - Extends the clickable area beyond the visible handle
  • preventStealing: true - Prevents other mouse areas from interfering
  • Global coordinate tracking ensures accurate resize calculations across zoom levels
  • Live preview updates provide immediate visual feedback

🔗 Commit: feat(monitor-ruler): enable right-click capture for resizing markers

Zone-to-Marker

One of the other features I implemented was the Zone-to-Marker Conversion system. This feature allows users to define a time zone in the monitor and instantly create a range marker from it, bridging the gap between Kdenlive's existing zone functionality and the new range marker system.

Before this feature, users would have to manually create range markers.

This was time-consuming and error-prone, especially when working with precise time ranges that were already defined as zones.

How It Works

The zone-to-marker system works in two ways:

Method 1: Context Menu Integration Users can right-click on the monitor ruler and select "Create Range Marker from Zone" from the context menu. This instantly creates a range marker spanning the currently defined zone.

Method 2: Quick Action A dedicated action that can be triggered from the main window, allowing users to quickly convert zones to markers without navigating through menus.

1. Monitor Proxy Enhancement: I added a new method to the MonitorProxy class that handles the zone-to-marker conversion:

bool MonitorProxy::createRangeMarkerFromZone(const QString &comment, int type)
{
    // Validate zone boundaries
    if (m_zoneIn <= 0 || m_zoneOut <= 0 || m_zoneIn >= m_zoneOut) {
        return false;
    }

    std::shared_ptr<MarkerListModel> markerModel;

    // Determine which marker model to use based on monitor type
    if (q->m_id == int(Kdenlive::ClipMonitor)) {
        auto activeClip = pCore->monitorManager()->clipMonitor()->activeClipId();
        if (!activeClip.isEmpty()) {
            auto clip = pCore->bin()->getBinClip(activeClip);
            if (clip) {
                markerModel = clip->getMarkerModel();
            }
        }
    } else {
        // For project monitor, use the timeline guide model
        if (pCore->currentDoc()) {
            markerModel = pCore->currentDoc()->getGuideModel(pCore->currentTimelineId());
        }
    }

    if (!markerModel) {
        return false;
    }

    // Convert zone to range marker
    GenTime startPos(m_zoneIn, pCore->getCurrentFps());
    GenTime duration(m_zoneOut - m_zoneIn, pCore->getCurrentFps());
    QString markerComment = comment.isEmpty() ? i18n("Zone marker") : comment;

    // Use default marker type if none specified
    if (type == -1) {
        type = KdenliveSettings::default_marker_type();
    }

    bool success = markerModel->addRangeMarker(startPos, duration, markerComment, type);
    return success;
}

User Experience Features

1. Smart Validation The system validates zone boundaries before creating markers:

  • Ensures zone start is before zone end
  • Prevents creation of zero-duration zones
  • Handles edge cases gracefully

2. Automatic Naming If no comment is provided, the system automatically generates a descriptive name like "Zone marker" or uses the existing zone name if available.

3. Feedback System Users receive immediate feedback through status messages:

  • Success confirmation when markers are created
  • Error messages for invalid operations
  • Warning messages for missing zones

Features

Timeline Controller Integration: Added methods like resizeGuide and suggestSnapPoint to make range markers work seamlessly with Kdenlive's existing timeline operations.

The backend integration happens through the resizeMarker method in the monitor proxy:

void MonitorProxy::resizeMarker(int position, int duration, bool isStart, int newPosition)
{
    std::shared_ptr<MarkerListModel> markerModel;

    // Determine appropriate marker model based on monitor type
    if (q->m_id == int(Kdenlive::ClipMonitor)) {
        auto activeClip = pCore->monitorManager()->clipMonitor()->activeClipId();
        if (!activeClip.isEmpty()) {
            auto clip = pCore->bin()->getBinClip(activeClip);
            if (clip) {
                markerModel = clip->getMarkerModel();
            }
        }
    }

    if (markerModel) {
        GenTime pos(position, pCore->getCurrentFps());
        bool exists;
        CommentedTime marker = markerModel->getMarker(pos, &exists);

        if (exists && marker.hasRange()) {
            GenTime newDuration(duration, pCore->getCurrentFps());
            // When the left edge was dragged, the marker gets a new start
            // time; otherwise it keeps its current position
            GenTime newStartTime = isStart ? GenTime(newPosition, pCore->getCurrentFps()) : pos;
            // Apply constraints and update the marker
            if (newDuration < GenTime(1, pCore->getCurrentFps())) {
                newDuration = GenTime(1, pCore->getCurrentFps());
            }
            markerModel->editMarker(pos, newStartTime, marker.comment(),
                                    marker.markerType(), newDuration);
        }
    }
}

🔗 Commit: feat: add functionality to create range markers from defined zones

What This Means for Kdenlive Users

Before Range Markers

Users could only place markers at specific points in time. To mark a section, they'd need multiple point markers and remember which ones belonged together.

After Range Markers

Users can now:

  • Mark Complete Sections: Create a single marker that spans an entire intro, chapter, or highlight
  • Visual Organization: See at a glance which parts of their project are marked and how long each section is
  • Efficient Editing: Resize markers to adjust section boundaries without recreating them
  • Better Collaboration: Share projects with clear, visual section markers

Final Thoughts

This GSoC project has been an incredible journey. From the initial concept of extending Kdenlive's marker system to the final implementations of a fully featured range marker interface, every step has been a learning experience. I still have some things to improve in the Merge Request, but I'm happy with the progress I've made.

I'm grateful to my mentor Jean-Baptiste Mardelle for his guidance throughout this project, and to the entire Kdenlive community for their support and feedback in the Merge Request.

As I move forward in my studies and career, I'll always remember this summer spent improving Kdenlive's marker system. The skills I've developed, the challenges I've overcome, and the community I've been part of will continue to influence my work for years to come.


Saturday, 30 August 2025

Hello again!

This is the second part of my GSoC journey with the KDE community.
In my previous blog, I introduced my project “Modernize Account Management with QML” and shared how I worked on building a shared infrastructure for Akonadi and ported the Knut configuration dialog to QML.

Now, I’m excited to share the final part of my work and wrap up this amazing journey.


Work Accomplished

Since my last update, I successfully ported the configuration dialogs for singlefileresource-based resources to QML, marking a significant milestone in modernizing account management.

The new architecture leverages the shared infrastructure I built earlier (QuickAgentConfigurationBase) and consists of two main parts:

  1. Common QML Component (SingleFileConfig.qml):

    • A reusable, Kirigami-based form component.
    • Handles universal settings: file path selection, display name, read-only mode, monitoring, and periodic updates.
    • Eliminates code duplication and ensures a consistent look and feel.
  2. Resource-Specific QML Wrappers:

    • Each resource (Ical, Vcard, Mbox) now has its own Main.qml.
    • Uses a TabBar layout to organize common settings from SingleFileConfig and resource-specific tabs (like compaction for Mbox or activities configuration).

Resources Ported

  • Ical Resource:

    • Migrated calendar file (.ics) configuration.
    • Now provides a clean, two-tab interface (File and Activities) built with Kirigami FormCard components.
    • Offers a more intuitive user experience.
  • Vcard Resource:

    • Migrated address book (.vcf) configuration.
    • Confirmed the reusability of the SingleFileConfig component.
  • Mbox Resource:

    • The most complex port.
    • Included unique tabs for Compact Frequency and Lock Method.
    • Old QWidget .ui files and C++ classes replaced with pure QML components (CompactPage.qml, LockMethodPage.qml).
    • Integrated directly into the new configuration base class.

Lessons Learned

  • Navigating Large Codebases: Learned how to work in KDE’s modular ecosystem, tracing dependencies and understanding project structure.
  • Debugging & Documentation: Improved my debugging skills across QML/C++ and writing clear documentation for future developers.
  • Mentorship & Feedback: Mentor guidance helped refine my coding style, problem-solving approach, and overall contributions.

Looking Ahead

While the main scope of my project is complete, there’s still plenty of room to grow.
I plan to continue contributing to KDE by porting more configuration dialogs to QML as time permits.

I’m deeply grateful to my mentors — Carl Schwan, Claudio Cambra, and Aakarsh MJ — and every KDE developer for their constant guidance, patience, and support throughout this journey. Thank you so much!