
Friday, 5 September 2025

The yearly release of Apple's operating systems, including macOS 26 Tahoe, is just around the corner, and we've been busy weeding out issues in Qt to prepare for the final release. Here's what you need to know.

The 2025 edition of KDE's annual community gathering starts today. Unfortunately, circumstances mean I won't be attending, but I was fortunate enough to attend my first ever Akademy last year. Indeed, that was one whole year ago, but neurodivergent scriptophobia and an incredible ability to procrastinate have kept me from writing about it, until now.

Attending talks and sprints was certainly a blast from the past, not that I had ever attended another tech-based conference before. It certainly brought back memories of campus life in my late teens studying anthropology/sociology. The real highlight was being able to meet fellow attendees in person for the first time, all of whom I had only ever previously chatted with in Matrix rooms.

Over the past year we've continued to provide the community with regular package updates from KDE neon, and many aspects of how we release packages have changed, but that probably deserves its own post on the neon blog. Aside from this, I've immersed myself in the innards of KDE's sysadmin tooling whilst helping with the rollout of the shiny new ephemeral VM builders that fellow antipodean Ben has lovingly crafted. I will definitely use this new tech as a foundation to help automate not only Snap publishing, but also Flatpaks and AppImages. After all, in KDE it's all about the apps!

I'd like to thank everyone in the community who has been so welcoming and supportive, and the KDE e.V. for helping me attend last year's event, and I wish all of this year's attendees a fantastic experience. Cheers!

 

Thursday, 4 September 2025

On Android, apps need special permissions to access certain things like the camera and microphone. When an app tries to access something that needs special permission for the first time, you will be prompted once, and afterwards the permission can be removed again in the app settings.
For sandboxed apps on the desktop, as done by Flatpak or Snap for example, the situation is similar. Such apps can’t access all system services; instead they have to go through xdg-desktop-portal, which will show a dialog where the user can grant permission to the app. Unfortunately we lacked the “configure permissions” part of it, which means granted permissions disappear into the void and pre-authorization is not possible.
This changes with Plasma 6.5 which will include a new settings page where you can configure application permissions!

Features

Main view showing Application Permissions

The main view after selecting an application shows all the permissions you can configure. The ones at the top are controlled by simple on/off switches and are turned on by default – applications are currently allowed to do those things by the portal frontend except when explicitly denied.

The settings that follow are a bit more interesting: you can configure whether application requests to do those things are granted or not. Additionally, the “Always Ask” setting will make it so that a dialog is always shown when the app tries to take a screenshot, for example. The default state for these settings after you install an app is “Ask Once”: a dialog will be shown, and depending on whether you click yes or no, future requests are allowed or denied. The Location setting is a bit special, as it allows configuring the accuracy with which the current location is fetched.

Configuring saved screen cast and remote desktop sessions

Finally, you can configure screen cast and remote desktop sessions that the app is able to restore in the future. Here you can see exactly what the application is able to record and control, and revoke those permissions. The Plasma-specific override for remote control can also be enabled here.

A Note on Non-Sandboxed Apps and X11

For non-sandboxed (“host”) apps only a subset of settings will be shown. The reason is simple: some portals just forward a request from the application to another service. Denying host apps access to such portals would have no effect, since such apps either are not using the portal in the first place or can always talk to the service directly anyway. However, some things, such as recording screen contents or sending fake input events, always require that these apps use the portal, because they are simply not possible through other means, so these settings will be shown. On Wayland, anyway – on X11 everything can do everything. Even so, these settings will also be shown on X11 if you are using an app that uses the portal to do these things.

Outlook

Of course, as we implement new portals, support for them will also be added here where suitable. For existing portals, permission support can be added – preferably upstream. One such system is currently in development for the input capture portal. If you think there is a portal dialog that could be hooked up to a permission system but currently isn’t, please file a bug report and we can investigate it.

Why PyCups3 is So Damn Intelligent

In my last blog, I shared just how smart PyCups3 is. This time, let’s go one layer deeper and see why it’s so intelligent.

But first, let’s warm up with a bit of Python magic. ✨


What the Heck is a Dunder Method?

Straight from the Python 3.x docs:

Dunder methods (a.k.a. magic methods) are special methods you can define in your classes to give them superpowers. These methods let you make your class objects behave like built-in Python types. They’re called dunder because they start and end with a double underscore — like __init__.

Wednesday, 3 September 2025

…might be the one you already have.

I’m sure some of you are chuckling over my realization of something so obvious! Yeah, fair enough. Perhaps this will at least burnish my KDE Eco credentials a bit.


Last October, the loud fan noise, poor multi-core CPU performance, and at best 4-hour battery life of my daily driver Lenovo ThinkPad X1 Yoga laptop were starting to become significant hindrances. It’s still a good laptop, but it just wasn’t good for me and my use cases anymore. It now belongs to my daughter.

I’ve gotten pickier over the years as I’ve discovered what I really want in a laptop, and looked for a cheap “good-enough” stop-gap that could be recycled to another family member after I located the perfect final replacement.

I found an amazing deal on a refurbished 2023 HP Pavilion Plus 14 and pulled the trigger! Here it is driving my workstation:

This basic little Pavilion is the best laptop I’ve ever used.

With a large 68 Watt-hour battery, its energy-efficient AMD 7840U CPU delivers a true 9-hour battery life with normal usage. For real! It also ramps up to do a clean build of KWin in less than 10 minutes! The laptop’s 2.8K 120Hz OLED screen is magnificent. Its keyboard and touchpad are truly the best I’ve ever used on a PC laptop. Linux compatibility is excellent too. Everything works out of the box. It’s just… great.

The problem is, it isn’t perfect. The speakers are awful, the aluminum casing is fairly thin and prone to denting while traveling, and there’s no touchscreen or fingerprint reader. USB 4 ports would also be nice, as would putting one on each side, rather than both on the right.

So I kept looking for the perfect replacement!

I still haven’t found one.

Everything out there sucks. Something important is always bad: usually the battery life, screen, or speakers. Often the keyboard layout is either bad, or just not to my liking. Other times the touchpad is laggy. Or it’s not physically rugged. Or there’s no headphone jack (what the heck). Or the palmrest is coated in some kind of gummy sticky material that will be disgusting with caked-on sweat and skin in a matter of weeks. Or Linux compatibility is poor. Or it’s absurdly expensive.

So for now, I’ll stick with the little Pavilion that could.

If only HP made this exact laptop with a thicker case and better speakers! A fingerprint reader and a touchscreen would be nice-to-haves as well. Replaceable RAM would easily be possible with a small redesign, as there’s empty space in the case. A USB 4 port on each side would be the cherry on top.

Ah well. Nothing’s perfect, and it’s good enough.

So, after the font family, OpenType handling and font metrics rework, there are some text properties left that aren’t particularly related to one another. Furthermore, at the time of writing I have also tackled better UI elements, as well as style presets, so I’ll talk a bit about those too.

Language

The first few properties are all affected by the language set on the text shape. So the first thing I tackled was a language selector.

The format accepted by SVG is an xml:lang, which takes a BCP47 language code. This is unlike QLocale, which uses a POSIX-style string of the form language_script_territory@modifier, where it ignores the modifier.

While you’d think that’d be good enough, that modifier is actually kinda important. For example, when parsing PSD files, the BCP47 code associated with the extremely American term “Old German” is de-1901, “German according to the 1901 orthography”.

So the first thing I ended up doing was creating a little BCP47 parser that can utilize QLocale for the proper name, without losing the variants and extensions and what have you.
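
To give an idea of the approach (a minimal sketch with invented names, assuming Qt 6, and not Krita's actual parser): split the tag into subtags, hand only the language/script/region part to QLocale for display purposes, and keep everything else verbatim.

#include <QLocale>
#include <QString>
#include <QStringList>

struct Bcp47Tag {
    QLocale locale;           // best-effort QLocale, used only for display names
    QStringList extraSubtags; // variants/extensions QLocale would otherwise drop
};

Bcp47Tag parseBcp47(const QString &tag)
{
    const QStringList subtags = tag.split(QLatin1Char('-'), Qt::SkipEmptyParts);
    QStringList localePart; // language, optionally followed by script and region
    QStringList rest;       // everything else: variants, extensions, private use
    for (const QString &sub : subtags) {
        const bool isScript = sub.size() == 4 && sub.at(0).isLetter();
        const bool isRegion = (sub.size() == 2 && sub.at(0).isLetter())
                           || (sub.size() == 3 && sub.at(0).isDigit());
        if (localePart.isEmpty() || (rest.isEmpty() && (isScript || isRegion)))
            localePart << sub;
        else
            rest << sub;
    }
    Bcp47Tag result;
    result.locale = QLocale(localePart.join(QLatin1Char('_')));
    result.extraSubtags = rest;
    return result;
}

With "de-1901", QLocale still yields a readable name for German, while the "1901" variant survives in extraSubtags instead of being silently eaten.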

For the UI widget, it is at its most basic a combobox. Instead of showing all possible BCP47 codes (which’d be too much), we instead only show languages previously used in this session and “favourited” languages, with the latter defaulting to the current languages of the application. Users can type full BCP47 language codes into the combobox and have them parsed.

By default the language dropdown shows the current list of locales
Artists will be able to type in any valid BCP47 code, and Krita will try its best to parse and display it.

However, it’s likely someone may not know the BCP47 code for a given language, and while they could find that on, say, Wikipedia, that still is a considerable amount of friction. So instead we use QLocale to populate a model with the basic names and territory codes. This model is then set onto a ProxySortFilterModel, which in turn is used as a sort of completer model. This allows users to type in partial names, and it’ll show an overlay with matches.
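
Roughly, the model side of that looks like the following sketch, which uses stock Qt classes (QStandardItemModel and QSortFilterProxyModel, assuming Qt 6) rather than Krita's own ProxySortFilterModel:

#include <QLocale>
#include <QSortFilterProxyModel>
#include <QStandardItemModel>

QStandardItemModel *buildLanguageModel(QObject *parent)
{
    auto *model = new QStandardItemModel(parent);
    const QList<QLocale> locales = QLocale::matchingLocales(QLocale::AnyLanguage,
                                                            QLocale::AnyScript,
                                                            QLocale::AnyTerritory);
    for (const QLocale &locale : locales) {
        // Display the English name plus the territory and the BCP47 code,
        // so a partial match on either will surface the entry.
        auto *item = new QStandardItem(QStringLiteral("%1 (%2) - %3")
                                           .arg(QLocale::languageToString(locale.language()),
                                                QLocale::territoryToString(locale.territory()),
                                                locale.bcp47Name()));
        item->setData(locale.bcp47Name(), Qt::UserRole);
        model->appendRow(item);
    }
    return model;
}

The filtering half is then just a QSortFilterProxyModel on top with setFilterCaseSensitivity(Qt::CaseInsensitive), whose filter string is updated as the user types.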

Artists will be able to type in a language and Krita will provide a search.
Languages that have been used this session will be added to the dropdown, where they can be made persistent.

There’s a bit of a problem here though: QLocale can only provide the native name and the English name of a locale, but not, say, the Mandarin Chinese name of French. I don’t know if we can ever fix this, as I don’t particularly feel like making translators translate every single language name.

Either way, this should allow someone who uses their computer in English, but speaks French to type in fr and then go into the dropdown to mark that as a favourite so it is easily accessible in all future text shapes.

Most programs use text language to provide spellcheck or hyphenation. Krita does neither right now, but that doesn’t mean it doesn’t use language. For one, some OpenType fonts have a locl feature that gets toggled when you provide the shaper with the appropriate language. There are two other places where we use the language as well:

Line Break and Word Break

Line break and word break both modify the soft wrapping in a piece of text.

Line break in particular can be modified to allow line breaks before or after certain punctuation, with loose line breaking allowing breaks at all possible places, and strict line breaking not allowing breaks before or after certain punctuation marks. Which marks those are depends on the content language, and CSS defines several rules for Chinese and Japanese content.

LibUnibreak, which we’re using, doesn’t have loose rules for any languages, but it does allow for strict ones. A project like KoReader is able to customize the result of LibUnibreak to support more languages, and that might be a future solution. However, it might also be that we look for a different line-breaking library, as LibUnibreak is going to be limited to Unicode 15.0: Unicode 15.1 introduced improvements for Brahmic scripts, and because those scripts are quite complex, supporting them won’t work with LibUnibreak’s architecture.
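
For reference, this is roughly how the library gets driven; a small standalone example of the LibUnibreak C API (not Krita's layout code), where the content language is passed straight into set_linebreaks_utf8() and any language-specific tailoring happens inside the library:

#include <linebreak.h>
#include <cstdio>
#include <cstring>
#include <vector>

int main()
{
    init_linebreak();
    const char *text = "Hello, world";
    const size_t length = std::strlen(text);
    std::vector<char> breaks(length);
    // "ja" stands in for the BCP47 language of the text run.
    set_linebreaks_utf8(reinterpret_cast<const utf8_t *>(text), length, "ja", breaks.data());
    for (size_t i = 0; i < length; ++i) {
        if (breaks[i] == LINEBREAK_ALLOWBREAK)
            std::printf("a soft break is allowed after byte %zu\n", i);
    }
    return 0;
}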

The Krita slogan in Korean. Left has word-break: normal, right has word-break: keep-all, which means soft breaks happen only at the spaces.

Word break is somewhat similar to line break. Some scripts, in particular Korean and Ethiopian, have two major ways of line breaking: word-based and grapheme-based, and depending on the context, you may need one or the other. Word break provides a toggle for that choice.

Text Transform

Text transform is also affected by language. I didn’t really need to do much here, however; I had just forgotten to have it inherit properly. I don’t recall if I touched upon this before, but another annoying thing with text-transform, given that it can change the number of codepoints in a sentence, is that you need to take care to keep track of those changes, and ideally I’d rework the code so this happens at the last possible moment.

Spacing

Then there’s a number of spacing/white space related features. Letter and word spacing were not touched in these last few weeks, though they do have some peculiarities, and I will discuss them when talking about phase 3 features, when the time comes.

White Space

The CSS white-space rule is my enemy. It exists to facilitate hand-written XML files, and in those, you might want to manually line break text, and have the computer automatically unbreak said text and remove unnecessary white space. And that is also the default behaviour.

In the context of a what-you-see-is-what-you-get text editor however, this default behaviour mostly gives us a text interaction where spaces are getting eaten for no particular reason. Not just that, but this painful default behaviour is called “normal”. Which means that, if we set white space to the much more preferable “pre-wrap”, users will see a toggle they don’t know. And if they set the toggle to what they would consider the default behaviour, “normal”, they get distinctly not “normal” behaviour.

This’ll probably be fixed by CSS-Text-4, where white-space has been split into white-space-collapse and text-wrap, with the former having much more descriptive “collapse” and “preserve” modes. Until we handle that properly though, the white-space property is going to be hidden by default so we don’t have to troubleshoot that endlessly.

Beyond that, the thing that got fixed code-wise is that white space can now be applied per range. To do this, you need to get the whole string inside the paragraph, and then process each range separately, as white spaces present in a previous range can affect whether collapses happen in the current one.
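
The important part is the single piece of state carried from one range into the next; a simplified sketch (invented types, not the actual layout code) of that idea:

#include <QList>
#include <QString>

struct StyledRange {
    QString text;
    bool collapseSpaces; // true for "normal", false for "pre-wrap"-like values
};

QString collapseWhiteSpace(const QList<StyledRange> &ranges)
{
    QString out;
    bool previousWasSpace = false; // carried across range boundaries
    for (const StyledRange &range : ranges) {
        for (const QChar c : range.text) {
            const bool isSpace = c == QLatin1Char(' ') || c == QLatin1Char('\t') || c == QLatin1Char('\n');
            if (range.collapseSpaces && isSpace) {
                if (!previousWasSpace)
                    out += QLatin1Char(' '); // first space of a run survives
                previousWasSpace = true;     // the rest collapse, even into the next range
            } else {
                out += c;
                previousWasSpace = isSpace; // a preserved space still absorbs a following collapsible one
            }
        }
    }
    return out;
}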

Tab size and Indentation

So, the way tabs work is that instead of just adding spacing, a tab adds spacing so that the next grapheme starts at a multiple of the tab-size. This required some rework, in particular with line breaking and especially with wrapping text in a shape, but I’ve managed to get that to work.
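
The arithmetic itself is tiny; what needed the rework was feeding the resulting advance back into line breaking and wrapping. The idea, in an illustrative snippet:

#include <QtGlobal>
#include <cmath>

// A tab advances the pen to the next multiple of tabSize, measured from the
// start of the line, instead of adding a fixed amount of space.
qreal advanceToNextTabStop(qreal currentX, qreal tabSize)
{
    if (tabSize <= 0)
        return currentX;
    return std::floor(currentX / tabSize + 1.0) * tabSize;
}
// With tabSize = 64: 10 -> 64, 63 -> 64, 64 -> 128.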

Showcasing the tab-size property. This is probably not going to be the most used property, but it is good to have it functional.

Text indentation is similar, though I don’t quite understand how it should work for text-in-shape. Like, should the indentation be equal for every new line (current behaviour) or instead be similar to tab-size, and snap to a multiple?

Showcasing text-indent. Both paragraphs have text-indent set to 1 em, but the left one only sets text-indent per new line, while the right one uses hanging indentation.

Text Align and Text Anchoring

Text align didn’t need any changes code-wise. It did however need some fixing UI-wise. You see, for text-in-shape, SVG 2 uses text-align, while for all other text types, including inline-size wrapping, it uses text-anchor.

This is because SVG was originally a lot dumber in regards to how text works, and the most important thing it needed to know is where the start (anchor) of a given chunk of text is. Inline-size, meanwhile, is basically the simplest form of wrapping bolted onto SVG 1.1 text, and thus uses text-anchor. This does mean that this simple wrapping doesn’t have access to full justification of text, as that’s only in text-align.

On the flip side, text-align and text-align-last only apply to text-in-shape and come from CSS-text. There are many different options there, and some of them also change behaviour depending on the text direction. Text-align-last even only really applies when the text is fully justified.

The UI for text-anchor and text-align. The top 4 buttons are what most people will see, while clicking on the arrow will fold out the precise options.

These are all tiny implementation details that I didn’t want to bore people with, so I tried to simplify it to three buttons (start/middle/end) and a toggle for justification. People can still set each property separately, but most people won’t ever have to touch this.

First lines of Krita's application description typeset very formally as if it were a roman text.
SVG 2 only allows justification when the text is in a shape, but it is otherwise functional.

Text Rendering

Within text layout, there is this concept of “hinting”, that is, hints as to how to properly rasterize the vector glyphs onto a limited number of pixels. A full vector program like Inkscape doesn’t really have any use for this, but within Krita text is rasterized before compositing.

Now, SVG has the text-rendering property, which allows us to decide what kind of rendering we have. For Krita, we use “optimizeSpeed” to mean no antialiasing and full hinting, great for pixel art fonts, while “optimizeLegibility” only does full hinting for vertical text, and light hinting for regular. Then there’s “geometricPrecision”, which uses no hinting whatsoever.

If your system does upscaling it may not be visible, but each of the above texts is rendered with a pixel art font and optimizeSpeed enabled; in some cases letter spacing and baseline shift have been enabled as well. The text layout will snap such offsets to the nearest pixel so the text remains crisp.

Besides toggling hinting and antialiasing, these options also get used to ensure that offsets like line height, baseline shift, tab-size, text-indent, dominant baseline and text-decoration get adjusted to the nearest pixel. This should make it easier to have nice pixel-perfect lettering, which is something artists have expressed a need for.

QML items and Docker rework

One of the things that was important to me when designing the text properties docker was to ensure folks wouldn’t get overwhelmed. Because of this, I spent a lot of time trying to make sure only relevant properties are seen. CSS’ structure is designed to facilitate this, as it allows for partial stylesheets and you don’t need to provide a definition for each style, so I decided to have that reflected inside the docker. Furthermore, when adding new properties I made sure that the search is able to search for alternate key terms, so that a term of art like “tracking” brings up the property “letter-spacing”. The docker is also set up to make it easy to revert properties, so that people can experiment with new properties and, if they don’t like them, revert them with a single click.

One of the original mockups for the text properties docker. Design 1 was one that tried to hide all properties under fold-outs. Removing of inherited properties was also in this design.
UI wireframe with a number of comments like "Even in this design, properties need to be grouped" and "Search should provide results for aliases, like tracking showing letterspacing".
In design 2 I focused on making the UI less busy and more experimentation friendly. This is the one we went with in the end, for that precise reason.

Some people in the text tool thread weren’t too happy with this; things that appear and disappear are generally considered scary UI. But to me this isn’t much different from layers inside a layer stack. Furthermore, disappearing UI elements are generally the most annoying on systems where the visibility is not backed by an internal representation inside the data, as that means there’s a good chance the UI and the data get out of sync. Because CSS does have partial sheets, figuring out which controls are currently relevant is trivial.

The final docker as it is shown in Krita. The biggest changes are that properties which have a main widget (like font substyle selection) can be expanded to show the precise options. Similarly, the button at the start will show a multi-headed icon when selecting a range of text with multiple different values.

Revert buttons also had many critics, because other text applications don’t have them, and they’re extra buttons, and extra buttons are bad UI. This is a bit of a UI myth, as a good UI is one that allows you to stay in control of the data object you’re manipulating. Sometimes this can mean that there are fewer buttons, but with the complexity of CSS, I felt it was important that you would always know whether a property was currently being applied or inherited. Similarly, the fact that you can quickly revert a property should facilitate experimentation. And there are plenty of people out there who have gone insane with frustration trying to figure out which text properties are being set, and how to unset them. In many applications, this leads to liberal use of “remove all formatting”, which shouldn’t be as necessary with this docker.

Still, some good points were made: some basic properties always need to be shown, like font-size and font-family, so as to avoid confusion. Similarly, we always want to hide some properties, like our footgun friend white-space. Finally, some folks just always want to see all properties.

Some people want to always see all properties at all times. A part of me still wonders if they fully understand how many properties Krita supports, but regardless, it is possible. Note that the search bar becomes a filter in this situation.

For this reason, I made it so that all properties can be set to “always shown”, “when relevant”, “when set”, “never shown” or “follow default” (the difference between “when relevant” and “when set” is that the former also shows when CSS inheritance is happening; “follow default” allows for quickly switching all properties). When there are no properties that are situationally visible, the search to add a property is replaced with a filter search. This way, we should be able to have a default behaviour suited for progressive disclosure, while people who hate disappearing widgets can disable those. And, as a final plus, I won’t have to deal with people complaining about toggles they don’t use in their language and how I should remove this unnecessary feature.
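
The decision logic behind those modes is small; spelled out with invented names, purely to illustrate the difference between “when relevant” and “when set”:

enum class PropertyVisibility { AlwaysShown, WhenRelevant, WhenSet, NeverShown, FollowDefault };

bool isPropertyVisible(PropertyVisibility mode, bool explicitlySet, bool inherited,
                       PropertyVisibility dockerDefault)
{
    if (mode == PropertyVisibility::FollowDefault)
        mode = dockerDefault; // lets one switch change all properties at once
    switch (mode) {
    case PropertyVisibility::AlwaysShown:  return true;
    case PropertyVisibility::NeverShown:   return false;
    case PropertyVisibility::WhenSet:      return explicitlySet;
    case PropertyVisibility::WhenRelevant: return explicitlySet || inherited;
    default:                               return false;
    }
}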

Both when showing all properties and when only showing relevant ones, the search at the bottom can find results for different keywords. In this case, letter-spacing, word-spacing and font-kerning all have “tracking” defined as a possible keyword, and thus the rest are filtered away. This should help people with typography backgrounds find properties even without knowing CSS all that well.

This was all facilitated by the docker using QML. We’ve previously had attempts at QML integration in Krita, but those were always either a separate UI (Krita Sketch) that easily broke, or a somewhat unmaintained docker like the touch docker. This time however, we’re committed to actually getting this to work properly, and I have also started using QML for the glyph palette.

One annoying thing with using QML, even with QtQuick.Controls, is that we didn’t have access to our QWidget controls anymore, meaning we didn’t have sliders. Thankfully another contributor, Deif Lou, was so kind as to hack a port of those sliders together, and I was able to import those into Krita to use in the text properties docker.

Style Presets

With a docker that only shows relevant properties, and can show somewhere between 30-50 of them, the need for presets became self-evident. CSS has support for style classes built in, that being its primary use case even. Krita doesn’t have support for style classes themselves (it can only parse them), but that doesn’t mean we cannot provide presets.

For the presets, I decided to use a small SVG file with a single text object as the storage method. We identify the element that contains the stored style by adding a krita:style-type attribute to that element:

<?xml version="1.0" encoding="UTF-8"?>
<svg xmlns="http://www.w3.org/2000/svg" height="14pt" width="40pt" viewBox="0 0 40 14" xmlns:krita="http://krita.org/namespaces/svg/krita">
    <text transform="translate(0, 12)" font-size="12"><title>Superscript</title><desc>This halves the font-size and offsets the text towards the top.</desc>Super<tspan krita:style-type="character" style="baseline-shift:super; font-size: 0.5em;">script</tspan></text>
</svg>

The reason for using a text object instead of a plain CSS class is that this allows us to provide a good demonstrative sample. If you create a style like a synthetic superscript, you will want to show that superscript in the context of the text it will be situated in, because otherwise it is invisible.

The style preset selector. Not mentioned is the significant amount of time that was spent making the style and font previews look good. I do think it’s time well spent, as the nice big previews make it very inviting to use.

There’s also the possibility to create a “pixel-relative” style. File-wise, this means that krita:style-resolution="72dpi" gets added as an attribute. In practice, this is because CSS itself has a hardcoded relationship between pixels:points:inches of 96:72:1, which can be worked with if you’re working in a vector-only environment, but not in a raster one. For this reason, Krita only really stores points and font-relative units. A pixel-relative style then gets scaled depending on the document resolution, so that users can create styles for, say, 12-pixel-high text (again, this is because of the pixel-perfect font requirement).
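
As a concrete illustration of that scaling (a sketch of the idea only, not Krita's actual code path):

#include <QtGlobal>

// A style saved with krita:style-resolution="72dpi" treats its numbers as
// pixels at 72 dpi, where 1 px == 1 pt. To keep text a given number of
// document pixels high, the value is rescaled through the document's resolution.
qreal pixelRelativeToPoints(qreal stylePixels, qreal documentDpi)
{
    return stylePixels * 72.0 / documentDpi;
}
// 12 px in a 300 dpi document -> 2.88 pt, which rasterizes back to exactly
// 12 document pixels (2.88 pt * 300 dpi / 72 = 12).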

There’s more work that needs to be done here. For example, right now, we cannot edit the fill/stroke, so we’re just not storing that. We also have a very harsh split between paragraph (block level) and character (inline-level) properties, and it would be more ergonomic if we could remove that.

Next Steps

Phase 2 is nearly done. Before finishing this, I’d like to rework the properties a bit more because I got a bit better at QML, and I think I can handle some things more elegantly now. I also want to see if I can get rid of the harsh paragraph/character properties distinction.

After that, phase 3 will start. I have already started on it with some converters so old Krita 5.1 texts can be converted to wrapped text. I also want to replace the text tool options so new texts can be made on the basis of a style preset or whatever is currently in the docker. I want to remove the old rich text editor. And I want to work on an SVG character transforms editing mode, called typesetting mode.

Finally, there’s still the issue of the text paths, that is, “text-on-path” and “text-in-shape”. That one is being blocked by the sheer complexity of editing the paths themselves (the text layout parts work fine). We’ve recently done a bunch of meetings to figure out the proper solution here, so I’ll be working on that soon as well.

Appendix

Old German

The reason I find “Old German” very American is because terms like “Old German”, “Old English” and “Old Dutch” all refer to languages that predate the establishment of the United States of America. So, only someone from that country would think that these are appropriate names for what’s basically an older standard spelling variant.

Adobe uses them largely to pick the older style hyphenation dictionaries, which is important for backwards compatibility, but I can’t really understand why “German 1901” was not considered instead of “Old German”.

As an aside, despite having a special “Old Dutch” category inside Adobe products, there isn’t a special BCP47 variant for it. I honestly don’t know which version of Dutch they mean by “Old Dutch”, because in my lifetime, spelling rules have changed significantly, twice, and the latter was so disliked that there’s a second version of it used by certain newspapers. Similarly, you will also not find “nl-1996” or “nl-white-booklet” in BCP47, meaning that even if I did figure it out, we still wouldn’t be able to assign a meaningful BCP47 code to it.

Sunday, 31 August 2025

Window Activation, what’s that?

In short: for some actions you want to activate your application window. How to do that differs from platform to platform.

Some weeks ago there was a post by Kai about window activation on Wayland.

It explains nicely how the XDG Activation protocol works and how the different parts of the KDE & Qt stack implement it to correctly allow activating the right windows in your KDE Plasma Wayland session.

What’s the issue?

For the most part this now works flawlessly, but there is a corner case that is not handled that well at the moment: stuff launched by our terminal emulator Konsole.

This is no large issue for many users, but in my usual workflow with Kate I rely a lot on just being able to do

❯ kate mycoolfile.cpp

in my Konsole window and, if Kate is already running, have the existing window activated.

That worked just fine on X11, but on Wayland the safeguards to avoid misuse of window activation stopped that. There I just get some flashing of the Kate icon in the task switcher.

After some years of ignoring that issue myself and hoping a solution would just materialize, I have now given fixing it myself a try.

The hack (solution)

In most cases it is easy to get the application that launches stuff to export a new XDG_ACTIVATION_TOKEN environment variable for the new process, and everything just works.

Unfortunately that is not that easy for a terminal emulator.

Naturally Konsole could create and export one before starting the shell, but that is a one-time-use token. In the best case the first launched Kate or other application could use it once.

The current solution is to allow applications to query an XDG_ACTIVATION_TOKEN via DBus from Konsole, if they can prove they got started inside the shell session by passing a KONSOLE_DBUS_ACTIVATION_COOKIE that got exported by Konsole to the shell.

This leads to the following client code in Kate/KWrite and Konsole itself.

    // on wayland: init token if we are launched by Konsole and have none
    if (KWindowSystem::isPlatformWayland()
        && qEnvironmentVariable("XDG_ACTIVATION_TOKEN").isEmpty()
        && QDBusConnection::sessionBus().interface()) {
        // can we ask Konsole for a token?
        const auto konsoleService = qEnvironmentVariable("KONSOLE_DBUS_SERVICE");
        const auto konsoleSession = qEnvironmentVariable("KONSOLE_DBUS_SESSION");
        const auto konsoleActivationCookie
            = qEnvironmentVariable("KONSOLE_DBUS_ACTIVATION_COOKIE");
        if (!konsoleService.isEmpty() && !konsoleSession.isEmpty()
            && !konsoleActivationCookie.isEmpty()) {
            // we ask the current shell session
            QDBusMessage m =
                QDBusMessage::createMethodCall(konsoleService,
                    konsoleSession,
                    QStringLiteral("org.kde.konsole.Session"),
                    QStringLiteral("activationToken"));

            // use the cookie from the environment
            m.setArguments({konsoleActivationCookie});

            // get the token, if possible and export it to environment for later use
            const auto tokenAnswer = QDBusConnection::sessionBus().call(m);
            if (tokenAnswer.type() == QDBusMessage::ReplyMessage
                && !tokenAnswer.arguments().isEmpty()) {
                const auto token = tokenAnswer.arguments().first().toString();
                if (!token.isEmpty()) {
                    qputenv("XDG_ACTIVATION_TOKEN", token.toUtf8());
                }
            }
        }
    }

The KONSOLE_DBUS_SERVICE and KONSOLE_DBUS_SESSION environment variables already existed before; the only new one is the secret KONSOLE_DBUS_ACTIVATION_COOKIE that is filled by Konsole with some random string per shell session.

The org.kde.konsole.Session interface now has a new activationToken method to retrieve a token. That method will only work if the right cookie is passed and if Konsole itself is the currently active window (otherwise it will not be able to request a token internally).
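
In hypothetical form, the guard amounts to something like this (not the actual Konsole code, just the shape of the check, with invented names):

#include <QString>

// callerCookie comes in over DBus, sessionCookie is the random string this
// shell session exported as KONSOLE_DBUS_ACTIVATION_COOKIE, and freshToken is
// whatever the compositor just handed out - empty if the request was refused,
// e.g. because Konsole is not the active window.
QString activationTokenFor(const QString &callerCookie,
                           const QString &sessionCookie,
                           const QString &freshToken)
{
    if (callerCookie.isEmpty() || callerCookie != sessionCookie)
        return QString(); // wrong or missing cookie: hand out nothing
    return freshToken;
}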

As is easily visible from the naming alone, this is a Konsole-only interface, and applications that want to make use of it need their own code.

Therefore this is more of a hack than a proper solution.

If it inspires others to come up with something generic, that would be awesome!

I will happily help to implement that in Kate and Co., but until that happens, at least my workflow works again in my Wayland session.

Feedback

You can provide feedback on the matching Lemmy and Reddit posts.

Hello again! I'm Ajay Chauhan, and this update continues my Google Summer of Code 2025 journey with Kdenlive. Over the past few months, I've been working on transforming Kdenlive's marker system from simple point markers to range-based markers. Let me take you through the progress we've made since my last update.

From Backend to Frontend:

Building the User Interface

The real magic happened when we started building the user interface. I began with the Marker Dialog - the window where users create and edit markers. This was a significant challenge because we needed to maintain the simplicity of point markers while adding the complexity of range functionality.

I added three new UI elements:

  • A checkbox to toggle between point and range markers
  • An "End Time" field for setting the marker's end position
  • A "Duration" field that automatically calculates and displays the time span

The trickiest part was keeping these fields synchronized. When a user changes the start time, the duration updates automatically. When they modify the duration, the end time adjusts accordingly. It's like having three interconnected gears that always move together.
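
Conceptually it boils down to a guard flag plus two update directions. Here is a simplified sketch with plain spin boxes and invented names (Kdenlive's dialog uses its own timecode widgets, so this is not the actual MarkerDialog code):

#include <QObject>
#include <QSpinBox>

class RangeFieldsSync : public QObject
{
public:
    RangeFieldsSync(QSpinBox *start, QSpinBox *end, QSpinBox *duration, QObject *parent = nullptr)
        : QObject(parent), m_start(start), m_end(end), m_duration(duration)
    {
        auto changed = qOverload<int>(&QSpinBox::valueChanged);
        connect(m_start, changed, this, &RangeFieldsSync::syncFromStartOrEnd);
        connect(m_end, changed, this, &RangeFieldsSync::syncFromStartOrEnd);
        connect(m_duration, changed, this, &RangeFieldsSync::syncFromDuration);
    }

private:
    void syncFromStartOrEnd()
    {
        if (m_updating) return;   // guard against the update loop
        m_updating = true;
        m_duration->setValue(qMax(1, m_end->value() - m_start->value())); // at least one frame
        m_updating = false;
    }
    void syncFromDuration()
    {
        if (m_updating) return;
        m_updating = true;
        m_end->setValue(m_start->value() + m_duration->value()); // end follows duration
        m_updating = false;
    }

    QSpinBox *m_start, *m_end, *m_duration;
    bool m_updating = false;
};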

🔗 Commit: feat(range-marker): implement range marker functionality in MarkerDialog

Visual Magic in the Monitor

Next came the Monitor Ruler - the horizontal timeline you see above the video preview. This is where range markers work visually with the video preview.

I implemented a visual system:

  • Range Span: A semi-transparent colored rectangle that shows the marker's duration
  • Start and End Lines: Vertical lines marking the beginning and end of each range
  • Color Consistency: Each marker type gets its own color, with range markers using that color for the entire span

The core of this visualization is the rangeSpan rectangle:

Rectangle {
    id: rangeSpan
    visible: guideRoot.isRangeMarker
    x: (model.frame * root.timeScale) - ruler.rulerZoomOffset
    width: Math.max(1, guideRoot.markerDuration * root.timeScale)
    height: parent.height
    color: Qt.rgba(model.color.r, model.color.g, model.color.b, 0.5)
}

What this does:

  • visible: guideRoot.isRangeMarker - Only shows for range markers (not point markers)
  • width: Math.max(1, guideRoot.markerDuration * root.timeScale) - Width represents duration, with a minimum of 1 pixel
  • color: Qt.rgba(...) - Uses the marker's category color with 50% transparency

But the real challenge was making these markers interactive. Users can now drag the left or right edges of a range marker to resize it in real-time. The visual feedback is immediate - you see the marker grow or shrink as you drag, making it incredibly intuitive.

🔗 Commit: feat(monitor-ruler): Display the range marker color span in the Clip Monitor

Timeline Integration

The Timeline Ruler was the next frontier. This is the main timeline where users spend most of their editing time, so the range markers needed to be even more sophisticated here.

I added several visual features similar to the monitor ruler.

The timeline implementation also includes the same drag-to-resize functionality, but with additional constraints to prevent markers from extending beyond clip boundaries.
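
Those constraints amount to a small clamp applied before the model is updated; an illustrative sketch with invented names, not the actual Kdenlive code:

#include <QtGlobal>

struct MarkerRange {
    int startFrame;
    int durationFrames;
};

// Keep a resized range marker inside the clip: it may not start before the
// clip's first frame, must be at least one frame long, and may not run past
// the clip's last frame.
MarkerRange clampToClip(MarkerRange range, int clipFirstFrame, int clipLastFrame)
{
    range.startFrame = qBound(clipFirstFrame, range.startFrame, clipLastFrame);
    const int maxDuration = clipLastFrame - range.startFrame + 1;
    range.durationFrames = qBound(1, range.durationFrames, maxDuration);
    return range;
}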

🔗 Commit: feat(markers): drag-to-resize range markers in monitor and timeline

The drag-to-resize

The drag-to-resize functionality was perhaps the most technically challenging feature. I had to implement:

  • Left and Right Handle Resizing: Dragging the left and right edges changes both the start position and duration
  • Live Preview: Visual feedback during resize operations
  • Constraint Handling: Preventing invalid durations (minimum 1 frame)
  • Binding Management: Restoring Qt's automatic updates after resize completion

The resize handles only appear when range markers are wide enough, and they provide visual feedback through color changes and opacity adjustments. Here's how the left and right resize handle works:

Rectangle {
    id: leftResizeHandle
    visible: guideRoot.isRangeMarker && rangeSpan.width > 10
    width: 4
    height: parent.height
    x: rangeSpan.x
    color: Qt.darker(model.color, 1.3)
    opacity: leftResizeArea.containsMouse || leftResizeArea.isResizing ? 0.8 : 0.5

    MouseArea {
        id: leftResizeArea
        anchors.fill: parent
        anchors.margins: -2  // Extends clickable area
        cursorShape: Qt.SizeHorCursor
        preventStealing: true

        onPositionChanged: {
            if (isResizing) {
                var globalCurrentX = mapToGlobal(Qt.point(mouseX, 0)).x
                var realDeltaX = globalCurrentX - globalStartX
                var deltaFrames = Math.round(realDeltaX / root.timeScale)
                var newStartPosition = Math.max(0, startPosition + deltaFrames)

                // Live preview updates
                rangeSpan.x = (newStartPosition * root.timeScale) - ruler.rulerZoomOffset
                rangeSpan.width = Math.max(1, newDuration * root.timeScale)
            }
        }
    }
}

Key implementation details:

  • anchors.margins: -2 - Extends the clickable area beyond the visible handle
  • preventStealing: true - Prevents other mouse areas from interfering
  • Global coordinate tracking ensures accurate resize calculations across zoom levels
  • Live preview updates provide immediate visual feedback

🔗 Commit: feat(monitor-ruler): enable right-click capture for resizing markers

Zone-to-Marker

One of the other features I implemented was the Zone-to-Marker Conversion system. This feature allows users to define a time zone in the monitor and instantly create a range marker from it, bridging the gap between Kdenlive's existing zone functionality and the new range marker system.

Before this feature, users would have to manually create range markers.

This was time-consuming and error-prone, especially when working with precise time ranges that were already defined as zones.

How It Works

The zone-to-marker system works in two ways:

Method 1: Context Menu Integration
Users can right-click on the monitor ruler and select "Create Range Marker from Zone" from the context menu. This instantly creates a range marker spanning the currently defined zone.

Method 2: Quick Action
A dedicated action that can be triggered from the main window, allowing users to quickly convert zones to markers without navigating through menus.

1. Monitor Proxy Enhancement
I added a new method to the MonitorProxy class that handles the zone-to-marker conversion:

bool MonitorProxy::createRangeMarkerFromZone(const QString &comment, int type)
{
    // Validate zone boundaries
    if (m_zoneIn <= 0 || m_zoneOut <= 0 || m_zoneIn >= m_zoneOut) {
        return false;
    }

    std::shared_ptr<MarkerListModel> markerModel;

    // Determine which marker model to use based on monitor type
    if (q->m_id == int(Kdenlive::ClipMonitor)) {
        auto activeClip = pCore->monitorManager()->clipMonitor()->activeClipId();
        if (!activeClip.isEmpty()) {
            auto clip = pCore->bin()->getBinClip(activeClip);
            if (clip) {
                markerModel = clip->getMarkerModel();
            }
        }
    } else {
        // For project monitor, use the timeline guide model
        if (pCore->currentDoc()) {
            markerModel = pCore->currentDoc()->getGuideModel(pCore->currentTimelineId());
        }
    }

    if (!markerModel) {
        return false;
    }

    // Convert zone to range marker
    GenTime startPos(m_zoneIn, pCore->getCurrentFps());
    GenTime duration(m_zoneOut - m_zoneIn, pCore->getCurrentFps());
    QString markerComment = comment.isEmpty() ? i18n("Zone marker") : comment;

    // Use default marker type if none specified
    if (type == -1) {
        type = KdenliveSettings::default_marker_type();
    }

    bool success = markerModel->addRangeMarker(startPos, duration, markerComment, type);
    return success;
}

User Experience Features

1. Smart Validation
The system validates zone boundaries before creating markers:

  • Ensures zone start is before zone end
  • Prevents creation of zero-duration zones
  • Handles edge cases gracefully

2. Automatic Naming
If no comment is provided, the system automatically generates a descriptive name like "Zone marker" or uses the existing zone name if available.

3. Feedback System
Users receive immediate feedback through status messages:

  • Success confirmation when markers are created
  • Error messages for invalid operations
  • Warning messages for missing zones

Features

Timeline Controller Integration: Added methods like resizeGuide and suggestSnapPoint to make range markers work seamlessly with Kdenlive's existing timeline operations.

The backend integration happens through the resizeMarker method in the monitor proxy:

void MonitorProxy::resizeMarker(int position, int duration, bool isStart, int newPosition)
{
    std::shared_ptr<MarkerListModel> markerModel;

    // Determine appropriate marker model based on monitor type
    if (q->m_id == int(Kdenlive::ClipMonitor)) {
        auto activeClip = pCore->monitorManager()->clipMonitor()->activeClipId();
        if (!activeClip.isEmpty()) {
            auto clip = pCore->bin()->getBinClip(activeClip);
            if (clip) {
                markerModel = clip->getMarkerModel();
            }
        }
    }

    if (markerModel) {
        GenTime pos(position, pCore->getCurrentFps());
        bool exists;
        CommentedTime marker = markerModel->getMarker(pos, &exists);

        if (exists && marker.hasRange()) {
            // Derive the new start from the drag parameters: only a left-edge
            // resize moves the start position.
            GenTime newStartTime = isStart ? GenTime(newPosition, pCore->getCurrentFps()) : pos;
            GenTime newDuration(duration, pCore->getCurrentFps());
            // Apply constraints and update the marker
            if (newDuration < GenTime(1, pCore->getCurrentFps())) {
                newDuration = GenTime(1, pCore->getCurrentFps());
            }
            markerModel->editMarker(pos, newStartTime, marker.comment(),
                                  marker.markerType(), newDuration);
        }
    }
}

🔗 Commit: feat: add functionality to create range markers from defined zones

What This Means for Kdenlive Users

Before Range Markers

Users could only place markers at specific points in time. To mark a section, they'd need multiple point markers and remember which ones belonged together.

After Range Markers

Users can now:

  • Mark Complete Sections: Create a single marker that spans an entire intro, chapter, or highlight
  • Visual Organization: See at a glance which parts of their project are marked and how long each section is
  • Efficient Editing: Resize markers to adjust section boundaries without recreating them
  • Better Collaboration: Share projects with clear, visual section markers

Final Thoughts

This GSoC project has been an incredible journey. From the initial concept of extending Kdenlive's marker system to the final implementation of a fully featured range marker interface, every step has been a learning experience. I still have some things to improve in the Merge Request, but I'm happy with the progress I've made.

I'm grateful to my mentor Jean-Baptiste Mardelle for his guidance throughout this project, and to the entire Kdenlive community for their support and feedback in the Merge Request.

As I move forward in my studies and career, I'll always remember this summer spent improving Kdenlive's marker system. The skills I've developed, the challenges I've overcome, and the community I've been part of will continue to influence my work for years to come.


Saturday, 30 August 2025

Hello again!

This is the second part of my GSoC journey with the KDE community.
In my previous blog, I introduced my project “Modernize Account Management with QML” and shared how I worked on building a shared infrastructure for Akonadi and ported the Knut configuration dialog to QML.

Now, I’m excited to share the final part of my work and wrap up this amazing journey.


Work Accomplished

Since my last update, I successfully ported the configuration dialogs for singlefileresource-based resources to QML, marking a significant milestone in modernizing account management.

The new architecture leverages the shared infrastructure I built earlier (QuickAgentConfigurationBase) and consists of two main parts:

  1. Common QML Component (SingleFileConfig.qml):

    • A reusable, Kirigami-based form component.
    • Handles universal settings: file path selection, display name, read-only mode, monitoring, and periodic updates.
    • Eliminates code duplication and ensures a consistent look and feel.
  2. Resource-Specific QML Wrappers:

    • Each resource (Ical, Vcard, Mbox) now has its own Main.qml.
    • Uses a TabBar layout to organize common settings from SingleFileConfig and resource-specific tabs (like compaction for Mbox or activities configuration).

Resources Ported

  • Ical Resource:

    • Migrated calendar file (.ics) configuration.
    • Now provides a clean, two-tab interface (File and Activities) built with Kirigami FormCard components.
    • Offers a more intuitive user experience.
  • Vcard Resource:

    • Migrated address book (.vcf) configuration.
    • Confirmed the reusability of the SingleFileConfig component.
  • Mbox Resource:

    • The most complex port.
    • Included unique tabs for Compact Frequency and Lock Method.
    • Old QWidget .ui files and C++ classes replaced with pure QML components (CompactPage.qml, LockMethodPage.qml).
    • Integrated directly into the new configuration base class.

Lessons Learned

  • Navigating Large Codebases: Learned how to work in KDE’s modular ecosystem, tracing dependencies and understanding project structure.
  • Debugging & Documentation: Improved my debugging skills across QML/C++ and writing clear documentation for future developers.
  • Mentorship & Feedback: Mentor guidance helped refine my coding style, problem-solving approach, and overall contributions.

Looking Ahead

While the main scope of my project is complete, there’s still plenty of room to grow.
I plan to continue contributing to KDE by porting more configuration dialogs to QML as time permits.

I’m deeply grateful to my mentors — Carl Schwan, Claudio Cambra, and Aakarsh MJ — and every KDE developer for their constant guidance, patience, and support throughout this journey. Thank you so much!

Hello everyone, this is going to be the final blog post of my GSoC 2025 project. In this post, I will summarize the progress made during the project and discuss the future plans for expanding OSS-Fuzz integration across KDE libraries.

Quick Recap Of Progress

So far, I have integrated several KDE libraries into OSS-Fuzz, including KMime, KIO-Extras/thumbnail, and KFileMetaData (submitted for integration).

I have also moved existing projects from the OSS-Fuzz repository to KDE repositories.

Progress After Midterm

After the midterm, I first focused on integrating new thumbnailers into OSS-Fuzz. I had already integrated KIO-Extras/thumbnail, and I continued with KDEGraphics-Thumbnailers, KDESDK-Thumbnailers, and FFMpeg-Thumbs.

After that, I mostly worked on improving the existing integration, i.e., testing the fuzzers and fixing any issues, moving to a CMake-based setup instead of manual compilation of the fuzzers, and adding documentation for local testing of the fuzzers.

The CMake setup allowed for easier maintenance; however, it wasn’t as simple as it may seem. Since OSS-Fuzz recommends using static builds, many of the libraries didn’t link to their transitive dependencies correctly for static builds. This required changes to those libraries (.pc files, .cmake files, etc.) for proper static linking.

The existing setup also lacked documentation for local testing of the fuzzers. I have added documentation for almost all of the fuzzers. This will be helpful for developers to integrate new fuzzers (such as new thumbnailers or KFileMetaData extractors).

Future Plans

With the initial setup of the thumbnailers and KFileMetaData done, it is easy to integrate new thumbnailers and KFileMetaData extractors. Currently there are a few more thumbnailers that could be integrated into OSS-Fuzz; I plan to work on integrating them soon as well. The list is here:


Thank You

I would like to thank my mentor, Albert Astals Cid, and the KDE community for their guidance throughout this project. Their feedback was helpful in successfully expanding OSS-Fuzz integration across KDE libraries.