
Wednesday, 11 September 2024

At Akademy 2024, during my talk “KDE to Make Wines”, I promised there would be a companion blog post focusing more on the technical details of what we did. This is the article in question.

This is a piece I also wrote for the enioka blog, so there is a French version available.

Where we set the stage

After years of working in the service industry, one thing that never ceases to amaze me is the variety of needs our customers have and how we can still be surprised by them. They sometimes lead us in unexpected directions. This is obviously something I particularly like.

Today I’ll tell you the tale of a nice relationship enioka Haute Couture built with a customer down under.

It all started with an email on a KDE mailing list a couple of years ago. People were looking for help. It was spotted by my colleague Benjamin Port, who reached out. It turned out those people were from De Bortoli Wines, an Australian winemaking company. They are known for using Free Software quite a bit and contributing when they can. They were even interviewed on KDE’s Dot ten years ago!

Not much really happened after our first contact but we kept the communication open… Until last year when they reached out to us for some help with Okular, the universal document viewer made by KDE.

They wanted to get rid of Acrobat Reader for Linux in favor of Okular. They had one issue though: overprint preview support was visibly broken in Okular. This feature is essential when working with a reprographics company, which will send back PDFs based on how they layer colors (overprinting). You need the overprint preview to simulate what the final rendering will be. We were of course motivated to help them drop Acrobat Reader for Linux, since it is an old and stale piece of proprietary software.

Getting serious about PDF rendering

This looked at first like an easy task. Poppler (used for rendering PDFs) was exposing some API for the overprint preview, but Okular didn’t make use of it. We quickly made a patch to use the overprint preview API.

Alas, doing so uncovered another issue. As soon as we turned on the overprint preview support, the application would crash. We tracked it down one level deeper: the crash was hiding in the Poppler-Qt bindings.

After further exploration, it turned out the binding was wrongly determining the row size of the raster images to generate. There is a color space conversion occurring between the initial memory representation and the target raster image. The code was getting the row size before the transform occurred… and then ended up stuffing the wrong amount of data into the target raster. This couldn’t go well. Another patch was thus produced to address this.
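To illustrate the class of bug (a hypothetical sketch, not Poppler’s actual code): when a color-space conversion changes the bytes-per-pixel, the row size must be computed from the target format; taking it from the source representation sizes every row wrongly.

```python
# Illustrative sketch of a row-size (stride) bug: the stride must be
# derived from the *target* pixel format, not the source one.

def row_size(width, bytes_per_pixel):
    """Bytes needed to store one row of pixels."""
    return width * bytes_per_pixel

def convert_row(row, src_bpp, dst_bpp):
    """Toy color-space conversion: expand each pixel to dst_bpp bytes."""
    out = bytearray()
    for i in range(0, len(row), src_bpp):
        pixel = row[i:i + src_bpp]
        out += pixel.ljust(dst_bpp, b"\x00")[:dst_bpp]
    return bytes(out)

width = 4
gray_row = bytes(range(width))          # 1 byte per pixel before conversion
rgb_row = convert_row(gray_row, 1, 3)   # 3 bytes per pixel afterwards

buggy_stride = row_size(width, 1)    # computed before the transform
correct_stride = row_size(width, 3)  # computed for the target format

assert len(rgb_row) == correct_stride  # the target raster needs 12 bytes/row
assert len(rgb_row) != buggy_stride    # a 4-byte stride would corrupt it
```

The same mismatch, applied to a whole image buffer, is exactly the kind of thing that stuffs the wrong amount of data into a raster and crashes.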

The good news is that we managed to fix the issue in less time than the budget the customer allocated to us. So we gave them the choice between stopping here or using the remaining budget to address something else.

We used the Ghent Workgroup PDF Output suite to validate our work beyond the samples provided by our contact. While doing so we noticed Poppler was failing at properly rendering some other cases. So we proposed to investigate those as well.

After spending some time on those… we made tentative fixes, but unfortunately they led to regressions. So, in agreement with the customer, we wrapped up and created a detailed bug report instead, so as not to waste their budget. This helped the Poppler community figure out the problem and produce a fix. That’s when we realized we had come really close to the right fix at some point. Clearly an expert view on how the various PDF color spaces work was required, so it was a good call to create the detailed report.

With some budget still left, our contact proposed that we also bring overprint preview support to Okular printing. This was initially left out of scope, but it is necessary when you want to print your preview on a regular laser printer. It required adjusting Okular and adjusting Poppler-Qt once more. It was ultimately done within budget.

CIFS mount woes

Since the customer was satisfied with the work, they came back for more. We set up a budget line for them to bring up issues to fix throughout the year.

Around that time, their focus moved to CIFS mounts, which they use extensively for their remote office branches. As active users of that kernel feature, they encounter issues in user-facing software that you would otherwise not suspect.

File copy failures

They were affected by a bug preventing Okular from saving on CIFS mounts. It is one of those bugs which had been lingering for a bit more than a year without a solution in sight. Some applications could modify and save a file opened on a CIFS mount, but somehow not Okular.

It turned out to be due to some code in KIO itself (the KDE Frameworks API used for network transparent file operations) interacting in an unwanted way with CIFS mounts.

Indeed, the behavior of unlink() (file deletion) on CIFS mounts can be a bit “interesting”. If the file one tries to delete is open in another process, the operation is claimed to have succeeded, but the filename stays visible in the file hierarchy until the last handle is closed. This is unlike the usual UNIX behavior: outside of CIFS mounts the file wouldn’t be visible in the hierarchy anymore. We were seeing the issue because Okular keeps a file handle open on the file.

Now, KIO rightly attempts to write under a temporary name, delete the original file, and rename to the final name during its file copy operation. On a CIFS mount this would fail: the unlink() call would succeed, but the rename would unexpectedly fail because of the lingering file in the hierarchy.

So we proposed a patch for KIO which does a direct copy for files on CIFS mounts. Direct overwrites succeed, and so the bug experienced with Okular was solved.
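A toy model of the failure, with made-up names and a dict standing in for the filesystem (this is not KIO code, just the semantics described above):

```python
# Toy model of why "write temp, unlink, rename" fails on a CIFS mount
# while a direct overwrite succeeds. Names and classes are invented.

class CifsLikeFs:
    def __init__(self):
        self.files = {}          # name -> content
        self.open_handles = set()

    def unlink(self, name):
        # CIFS quirk: if a process holds a handle, unlink "succeeds"
        # but the name stays visible until the last handle is closed.
        if name in self.open_handles:
            return  # reported as success, entry lingers
        del self.files[name]

    def rename(self, src, dst):
        if dst in self.files:
            raise OSError(f"{dst} still exists")  # the lingering entry
        self.files[dst] = self.files.pop(src)

fs = CifsLikeFs()
fs.files["doc.pdf"] = b"old"
fs.open_handles.add("doc.pdf")   # the viewer keeps the file open

# Strategy 1: temp file + unlink + rename -> fails on this mount.
fs.files["doc.pdf.tmp"] = b"new"
fs.unlink("doc.pdf")
try:
    fs.rename("doc.pdf.tmp", "doc.pdf")
    renamed = True
except OSError:
    renamed = False
assert not renamed

# Strategy 2: overwrite the file directly -> works.
fs.files["doc.pdf"] = b"new"
assert fs.files["doc.pdf"] == b"new"
```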

Slow directory listing

This wasn’t the end of the issues with CIFS mounts (far from it). They were also experiencing performance problems when listing folders. Interestingly, they would experience it only in the details view of the KDE file dialog.

At its core, the issue was due to requesting too much information. When listing a folder known to be remote by KIO (e.g. when going through an smb:// URL), the view would limit the amount of information it requested about the sub-folders. In particular, it wouldn’t try to determine the number of files in each sub-folder. This operation is fast nowadays on modern disks, but it incurs extra traffic and latency over the network.

This sounded like a simple fix… but in fact it was a bit more work than expected.

Unsurprisingly, we quickly found that the code decided to request more or fewer details based solely on the URL. Since CIFS mounts get file:/ URLs, they’d end up treated as local files… and thus queried rather aggressively. There is an isSlow() method on the items in the detail view tree, which we extended to check for CIFS mounts.

This wasn’t enough though: we immediately realized that the new bottleneck was the calls to isSlow() itself. Each one would lead to several statfs calls, which are expensive as well. The way out was thus to cache the information in the items, and for children to query the cache in their parent. Indeed, if the parent is considered slow, we decided to consider the children in the folder slow as well. This heuristic allowed us to remove all the subsequent isSlow() calls after the one on the mount point folder itself.
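The heuristic can be sketched like this (hypothetical names; the actual KIO code differs):

```python
# Sketch of the caching heuristic: an item asks its parent before
# running an expensive mount-type probe, so statfs-like calls happen
# only once per mount point. All names here are invented.

probe_calls = 0

def expensive_mount_probe(path):
    """Stand-in for a statfs-based check for network filesystems."""
    global probe_calls
    probe_calls += 1
    return path.startswith("/mnt/cifs")

class Item:
    def __init__(self, path, parent=None):
        self.path = path
        self.parent = parent
        self._slow = None  # cached result

    def is_slow(self):
        if self._slow is None:
            if self.parent is not None:
                # Children of a slow folder are considered slow too.
                self._slow = self.parent.is_slow()
            else:
                self._slow = expensive_mount_probe(self.path)
        return self._slow

root = Item("/mnt/cifs/share")
children = [Item(f"/mnt/cifs/share/f{i}", parent=root) for i in range(100)]
assert all(c.is_slow() for c in children)
assert probe_calls == 1  # only the mount point itself was probed
```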

This was a very old piece of code we touched there, so some time was also spent cleaning things up, refactoring, and renaming to align it better with other KIO parts.

LibreOffice backup files during save

They are such active users of CIFS mounts that they found yet another issue! This time with an old version of LibreOffice. In their version it would show up only if the KDE integration was used. I can tell you we were a bit surprised by this. The investigation wasn’t that easy but we managed to track it down.

The issue showed up after opening a file sitting on a CIFS mount with LibreOffice. If you made a change to the file, then clicked “Save As…” and selected the same file to overwrite it, you would get a “Could not create a backup copy” error and the file wouldn’t be saved if the KDE integration was active. All would be fine without this integration though.

What would be different with and without the KDE integration? Well, in one case there is an extra process! When the file dialog opens, if it is the KDE file dialog, the listing is delegated to a KIO Worker. In this particular case it would matter. Indeed, we figured out that LibreOffice keeps an open file descriptor on the opened file. Not only that, it also holds a read lock on the file. The KIO Worker is forked from LibreOffice, so it too ends up with the open file descriptor and its lock.

This made us realize that there was a file descriptor leak, which is in itself not a good thing. So we changed KIO to clean up open file descriptors when spawning workers; it’s always a good idea to be tidy. And as soon as we removed the file descriptor leak, the issue was gone. Nice, this was an easy fix.
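One standard mechanism for this kind of descriptor hygiene, shown here as a minimal sketch rather than KIO’s actual change, is marking descriptors close-on-exec so they don’t leak into spawned helper processes:

```python
# Minimal illustration of descriptor hygiene: a close-on-exec flag
# keeps a descriptor from leaking into exec'd helper processes.
# (This only demonstrates the mechanism, not the real KIO patch.)
import fcntl, os, tempfile

fd, path = tempfile.mkstemp()

flags = fcntl.fcntl(fd, fcntl.F_GETFD)
fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)
cloexec_set = bool(fcntl.fcntl(fd, fcntl.F_GETFD) & fcntl.FD_CLOEXEC)

os.close(fd)
os.unlink(path)
assert cloexec_set  # this descriptor would not survive an exec()
```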

Just to be thorough, we tried again, but this time with the latest LibreOffice. And even with the patched KIO we would get the “Could not create a backup copy” error! This time it showed up even without the KDE integration. So back to hunting… Without getting too deep into the internals of LibreOffice, it turned out the more recent version had extra code activated to produce said backup copy, and it keeps two file descriptors open on the same file. So we were back to the problem of two file descriptors and read locks being involved. But this time it was not due to a leak towards another process, and the architecture of LibreOffice didn’t make it easy to reuse the original file descriptor created when opening the file.
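Why two descriptors on one file are dangerous with POSIX locks: per POSIX, record locks belong to the process, and closing any descriptor for a file releases all of the process’s locks on it, even locks taken through a different descriptor. A minimal Linux demonstration (an invented sketch, unrelated to LibreOffice’s actual code):

```python
# The classic POSIX fcntl() lock pitfall: closing *any* descriptor
# for a file drops all of the process's locks on that file.
import fcntl, os, tempfile

fd0, path = tempfile.mkstemp()
os.close(fd0)  # close before locking, so this close drops nothing

def other_process_can_lock():
    """Fork a helper that tries a non-blocking exclusive lock."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:  # child: attempt the lock, report, exit
        cfd = os.open(path, os.O_RDWR)
        try:
            fcntl.lockf(cfd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            os.write(w, b"\x01")
        except OSError:
            os.write(w, b"\x00")
        os._exit(0)
    os.close(w)
    got = os.read(r, 1)
    os.close(r)
    os.waitpid(pid, 0)
    return got == b"\x01"

fd1 = os.open(path, os.O_RDWR)
fcntl.lockf(fd1, fcntl.LOCK_EX)      # lock taken through fd1
locked_out = not other_process_can_lock()

fd2 = os.open(path, os.O_RDWR)       # a second descriptor, same file
os.close(fd2)                        # POSIX: this drops *all* our locks
lock_lost = other_process_can_lock()

os.close(fd1)
os.unlink(path)
assert locked_out   # the lock was visible to other processes...
assert lock_lost    # ...until the unrelated close released it
```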

The only thing we could do at this point was to simply not lock when the file is on a CIFS mount. It is not a very satisfying solution but it did the job while fitting the allocated time.

Somehow we couldn’t leave things like this though. If you know the much-criticized POSIX file locks well (they have extra challenges over CIFS), something still doesn’t feel quite right. We have two processes with a file descriptor on the file to save, and yet a single process holding a read lock. During the save, when the failure happens, LibreOffice is still making the “backup copy”: it is not writing to the file yet, only reading it… surely this should be allowed!
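That expectation is backed by how POSIX read locks behave on a local filesystem: they are shared, so a second process can take its own read lock while the first still holds one. A small sketch (invented, not LibreOffice’s actual code) showing the pattern that should work:

```python
# POSIX read locks are shared: two processes can each hold one on
# the same file. The CIFS bug made this legitimate pattern fail.
import fcntl, os, tempfile

fd0, path = tempfile.mkstemp()
os.close(fd0)

fd = os.open(path, os.O_RDONLY)
fcntl.lockf(fd, fcntl.LOCK_SH)   # first process: a read lock

r, w = os.pipe()
pid = os.fork()
if pid == 0:  # second process: try its own read lock
    fd2 = os.open(path, os.O_RDONLY)
    try:
        fcntl.lockf(fd2, fcntl.LOCK_SH | fcntl.LOCK_NB)
        os.write(w, b"\x01")
    except OSError:
        os.write(w, b"\x00")
    os._exit(0)
os.close(w)
shared_ok = os.read(r, 1) == b"\x01"
os.waitpid(pid, 0)
os.close(fd)
os.unlink(path)

assert shared_ok  # both processes held a read lock simultaneously
```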

We thus started to suspect a problem with the kernel itself… and our contact was willing to explore it further. After investigation, it was confirmed to be a kernel bug. For this, the customer hooked us up with Andrew Bartlett from Catalyst, as they knew he could help us flesh out ideas in this space. This proved valuable indeed. Thanks to tests we had written previously and conversations with Andrew, we quickly figured out that, depending on the options passed at mount time, the CIFS driver would handle the locks properly or not. We discussed a couple of fixes with the maintainer of the CIFS driver. They were merged last week (end of August 2024).

A customer who sparks joy

It’s really a joy to have a customer like this. As can be seen from their willingness to dig deeper into what others would consider obscure issues, they demonstrate they are thinking long term. It also means they come with interesting and challenging issues… and they’re appreciative of what we achieved for them!

Indeed we got the pleasure to receive this by email during our conversations:

We very much appreciate the work you’ve done, and difficulty surrounding the challenges you’re working through. Your approach/results are spectacular, and we are very grateful to be able to be a part of it.

Of course, if you have projects involving Free Software communities or issues closer to the system, feel free to reach out, and we’ll see what we can do to help you. Maybe you too can be a forward-thinking customer who sparks joy!

Tuesday, 10 September 2024

https://openuk.uk/openuk-september-2024-newsletter-1/

https://www.linkedin.com/feed/update/urn:li:activity:7238138962253344769/

Our 5th annual Awards are open for nominations and our 2024 judges are waiting for your nominations! Hannah Foxwell, Jonathan Riddell, and Nicole Tandy will be selecting winners for 12 categories.

The OpenUK Awards 2024 are open for nominations until Sunday, September 15. Our 5th Awards again celebrate the UK’s leadership and global collaboration in open technology!

Nominate now! https://openuk.uk/awards/openuk-awards-2024/

Up to 3 shortlisted nominees will be selected in each category by early October and each nominee will be given one place at the Oscars of Open Source, the black tie Awards Ceremony and Gala Dinner for our 5th Awards held at the House of Lords on 28 November, thanks to the sponsorship of Lord Wei.

Monday, 9 September 2024

Today, we bring you a report on the brand-new release of the Maui Project.

We are excited to announce the latest release of MauiKit version 4.0.0, our comprehensive user interface toolkit specifically designed for convergent interfaces, its accompanying frameworks, and an in-house developed set of convergent applications.

Built on the solid foundations of Qt Quick Controls, QML, and the power and stability of C++, MauiKit empowers developers to create adaptable and seamless user interfaces across a range of devices, and with this release, we have finally migrated to Qt6 and made available the documentation for the frameworks.

Join us on this journey as we unveil the potential of MauiKit 4 for building convergent interfaces, and finally discover the possibilities offered by the enhanced Maui App stack.

Community

To follow the Maui Project’s development or to just say hi, you can join us on our Telegram group @mauiproject

We are present on X and Mastodon:

Thanks to the KDE contributors who have helped to translate the Maui Apps and Frameworks!

Downloads & Sources

You can get the stable release packages [APKs, AppImage, TARs] directly from the KDE downloads server at https://download.kde.org/stable/maui/

All of the Maui repositories have the newly released branches and tags. You can get the sources right from the Maui group: https://invent.kde.org/maui

Qt6

With this version bump the Maui team has finalized the migration to Qt6, which brings more stability and better performance thanks to Qt’s upgraded Qt Quick Controls engine; but it also means that some features have been removed or did not make the cut, and will need more time to be brought back in later releases.

MauiKit 4 Frameworks & Apps

Currently, there are over 10 frameworks, with two new ones recently introduced. For the most part, they have been fully documented, and although the KDE doxygen agent has some minor issues when publishing some parts, you can find the documentation online at https://api.kde.org/mauikit/ (and if you find missing parts, confusing bits, or overall sections to improve, you can open a ticket at any of the framework repos and it shall be fixed shortly after).

 

A script element has been removed to ensure Planet works properly. Please find it in the original post.

Core & Others

MauiKit Core controls also include the MauiKit Style, which, along with the core controls, has been revised and improved in the migration. New features have been introduced and some minor changes to the API have been made.

A good way to visually test the new changes is via the MauiDemo application: when building MauiKit from source, just add the -DBUILD_DEMO=ON flag and then launch it as MauiDemo4.

All of the other frameworks have also been fully ported and reviewed, though some features are absent – for example, in ImageTools the image editor is missing on Android due to KQuickImageEditor problems.

Comic book support is missing in MauiKit-Documents, due to a big pending refactoring.

Finally, the migration of TextEditor to its new backend rendering engine has yet to be started.

Most of these pending issues will be tackled in the next releases bit by bit.

More details can be found in the previous blog posts:

Maui Release Briefing # 4

Maui Release Briefing #5

 

Archiver & Git

MauiKit-Archiver is a new framework, and it was created to share components and code between different applications that were duplicating the same code: Index, Arca, and Shelf.

The same goes for MauiKit-Git, which will help unify the code base for implementations made in Index, Bonsai, and Strike, so all of those apps can benefit from a single cohesive and curated code base in the form of a framework.

Archiver still needs to be documented, and Git still needs to be finished for its first stable release.


Known Issues

  • MauiKit-Documents comic book support is stalled until the next release due to heavy refactoring under Android.
  • MauiKit-ImageTools under Android does not include the image editor, since KQuickImageEditor is not working correctly under Android.
  • Clip is not working under Android due to libavformat not finding openssl.so when packaging the APK; this is still under review.
  • MauiKit-Git is still being worked on, and due to this Bonsai is not included in this stable release, as it is being ported over to MauiKit-Git.

 


Maui Shell

Although Maui Shell has been ported over to Qt6 and is working with the latest MauiKit4, a lot of pending issues are still present and being worked on. The next release will be dedicated fully to Maui Shell and all of its subprojects, such as Maui Settings, Maui Core, CaskServer, etc.

That’s it for now. Until the next blog post, which will be a bit closer to the 4.0.1 stable release.

Release schedule

The post Maui Release Briefing #6 appeared first on MauiKit — #UIFramework.

After years and years of working together on KPhotoAlbum, a considerable part of the devs team (Johannes and me ;-) finally met in person, at Akademy in Würzburg!

Johannes Zarl-Zierl and Tobias Leupold at Akademy 2024 in Würzburg

It was a very nice and pleasurable meeting, with lots of information around KDE, e.g. community goals, where we stand with Qt 5 and 6 and where we want to go, programming, sustainability and so on. Thoroughly nice and friendly people (esp. the two of us of course ;-), with whom one could have nice and productive conversations. If you can, go to Akademy – it's worth it!

Also, we hopefully could once again emphasize – in person – the importance of a Qt6/KF6 port of Marble for KPhotoAlbum and also KGeoTag. We are now actively working on porting KPA to Qt6/KF6, but we need Marble to be able to finally release it. We’re confident everything will work out, though.

Hopefully, this won't be the last time we meet!

— Tobias

Saturday, 7 September 2024

KDE Goals logo

The KDE community has charted its course for the coming years, focusing on three interconnected paths that converge on a single point: community. These paths aim to improve user experience, support developers, and foster community growth.

Streamlined Application Development Experience

This goal focuses on improving the application development process. By making it easier for developers to create applications, KDE hopes to attract more contributors and deliver better software for both first-party and third-party applications. A notable task within this goal is enhancing the experience of building KDE apps with languages beyond C++, such as Rust or Python.

Champions: Nicolas Fella and Nate Graham

We care about your Input

KDE has a diverse user base with unique input needs: artists using complex monitor and drawing tablet setups; gamers with controllers, fancy mice, and handhelds; users requiring accessibility features or typing in languages best served by complex input methods; students with laptops, 2-in-1s, and tablets — and more! While KDE has made significant progress in supporting these diverse sources of input over the years, there are still gaps to be addressed. This goal aims to close those gaps and deliver a truly seamless input experience for everyone.

Champions: Gernot Schiller, Jakob Petsovits and Joshua Goins

KDE Needs You! 🫵

KDE’s growth depends on new contributors, but a lack of fresh involvement in key projects like Plasma, Kdenlive, Krita, GCompris, and others is a concern. This goal focuses on formalizing and enhancing recruitment processes, not just for individuals but also for institutions, ensuring that bringing in new talent becomes a continuous and community-wide priority, vital for KDE's long-term sustainability.

Champions: Aniqa Khokhar, Johnny Jazeix and Paul Brown

Join us!

Your voice, your code, and your ideas are what will shape the KDE of tomorrow — whether you're a user, developer, or contributor. Let’s go on this journey together and make these goals a reality!

Join the Matrix room and keep an eye on the website for the latest KDE Goals updates.

Friday, 6 September 2024

On my way to Akademy, looking forward to meeting people there. Even though I’m traveling with spotty Internet access for now, let’s not lose good habits. Here is my web review for the week 2024-36.


The Internet Archive just lost its appeal over ebook lending - The Verge

Tags: tech, copyright, law, library

This is really bad news… Clearly the publishers cartel would try to outlaw libraries if they were invented today.

https://www.theverge.com/2024/9/4/24235958/internet-archive-loses-appeal-ebook-lending


In Leak, Facebook Partner Brags About Listening to Your Phone’s Microphone to Serve Ads for Stuff You Mention

Tags: tech, privacy, surveillance, advertisement

There, now this seems like a real thing… your phone recording you while you’re not aware for advertisement purposes. Nice surveillance apparatus. Thanks but no thanks.

https://futurism.com/the-byte/facebook-partner-phones-listening-microphone


Why A.I. Isn’t Going to Make Art

Tags: tech, ai, machine-learning, gpt, art, learning, cognition

An excellent essay about generative AI and art. It goes deep into the topic and explains very well why you can hardly make art with those tools: it’s just too remote from how they work. I also particularly like the distinction between skill and intelligence. Indeed, we can make highly skilled but not intelligent systems using this technology.

https://www.newyorker.com/culture/the-weekend-essay/why-ai-isnt-going-to-make-art


Challenging The Myths of Generative AI

Tags: tech, ai, machine-learning, gpt, criticism

Does a good job listing the main myths the marketing around generative AI is built on. Don’t fall for the marketing, exert critical thinking and rely on real properties of those systems.

https://www.techpolicy.press/challenging-the-myths-of-generative-ai/


Is Linux collapsing under its own weight? On Rust for Linux

Tags: tech, linux, rust, kernel, foss, politics

Interesting analysis. For sure the Rust for Linux drama tells something about the Linux kernel community and its complicated social norms.

https://sporks.space/2024/09/05/is-linux-collapsing-under-its-own-weight-on-rust-for-linux/


Rust for Linux revisited

Tags: tech, linux, rust, kernel, politics, foss

Politics in the Linux kernel can indeed be tough. The alternative path proposed to the Rust-for-Linux team is an intriguing one; it could bear interesting results quickly.

https://drewdevault.com/2024/08/30/2024-08-30-Rust-in-Linux-revisited.html


LSP: the good, the bad, and the ugly

Tags: tech, semantic, programming, protocols, language

Interesting view about the LSP specification, where it shines, and where it falls short.

https://www.michaelpj.com/blog/2024/09/03/lsp-good-bad-ugly.html


Python Programmers’ Experience

Tags: tech, python, community, data-visualization

Indeed this is a much better visualization. It shows quite well how the Python programmers pool is growing.

https://two-wrongs.com/python-programmers-experience


OAuth from First Principles

Tags: tech, oauth, security

Nice post explaining the basics of OAuth. If you wonder why the flow seems so convoluted, this article is for you.

https://stack-auth.com/blog/oauth-from-first-principles


Things I Wished More Developers Knew About Databases

Tags: tech, databases

Lots of things to keep in mind when dealing with databases. This is a nice list of “must know” for developers; false assumptions are widespread (and I fall into some of those traps myself from time to time).

https://rakyll.medium.com/things-i-wished-more-developers-knew-about-databases-2d0178464f78


The Latency/Throughput Tradeoff: Why Fast Services Are Slow And Vice Versa

Tags: tech, performance, latency

An old article but a good reminder: you have to choose between latency and throughput, you can’t have both in the same system.

https://blog.danslimmon.com/2019/02/26/the-latency-throughput-tradeoff-why-fast-services-are-slow-and-vice-versa/


The impact of memory safety on sandboxing · Alex Gaynor

Tags: tech, security, safety, memory, sandbox

Interesting point. As the memory safety of our APIs increases, can we reduce the amount of sandboxing we need? It will never completely remove the need, if only because of logic bugs, but surely we could become more strategic about it.

https://alexgaynor.net/2024/aug/30/impact-of-memory-safety-on-sandboxing/


LazyFS: A FUSE Filesystem which can be used to simulate data loss on unsynced writes

Tags: tech, tests, storage

That sounds like a very interesting tool to simulate and test potential data loss scenarios. This is generally a bit difficult to do, should make it easier.

https://github.com/dsrhaslab/lazyfs


Greppability is an underrated code metric

Tags: tech, programming, craftsmanship

Good advice on naming variables, types, etc. Indeed, this makes things easier to find in code bases.

https://morizbuesing.com/blog/greppability-code-metric/


Reasons to write design docs

Tags: tech, architecture, design, documentation, communication

Very good article. I wish I’d see more organisations writing such design documents. They help a lot, and they provide a way to track changes in the design. To me they’re part of the minimal set of documentation you’d want on any non-trivial project.

https://ntietz.com/blog/reasons-to-write-design-docs/


Differing Values In A Team Are Costly

Tags: tech, values, organization, team, management

Aligning people with differing core values in a team is indeed necessary but difficult. It can kill your project in small teams; for larger teams you will likely need to design your organization with the misalignment in mind.

https://rtpg.co/2024/08/31/cost-of-a-values-gap/


Twelve rules for job applications and interviews

Tags: tech, hiring, interviews

Good advice. I wish more people applying for a job would follow it.

https://vurt.org/articles/twelve-rules/


Is My Blue Your Blue?

Tags: colors, cognition, funny

One of those essential questions in life now has some form of answer. Where is the blue/green boundary for you?

https://ismy.blue/



Bye for now!

Thursday, 5 September 2024

In April we had the combined goals sprint, where a fine group of KDE people working on things around Automation & Systematization, Sustainable Software, and Accessibility got together. It was a nice cross-over of the KDE goals, taking advantage of having people in one room for a weekend to directly discuss topics of the goals and interactions between them. David, Albert, Nate, Nico, and Volker wrote about their impressions from the sprint.

So what happened regarding the Sustainable Software goal at the sprint and where are we today with these topics? There are some more detailed notes of the sprint. Here is a summary of some key topics with an update on current progress.

Kick-Off for the Opt-Green project

The Opt-Green project is the second funded project of the KDE Eco team. The first one was the Blue Angel for Free Software project, where we worked on creating material helping Free Software projects to assess and meet the criteria for the Blue Angel certification for resource and energy-efficient software products.

The Opt-Green project promotes extending the operating life of hardware with Free Software in order to reduce electronic waste. It’s funded for two years by the German Federal Environment Agency and the Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection, and runs from April 2024 to March 2026.

Opt-Green presentation

Joseph introduced the project: why it’s important, how the environment suffers from software-induced hardware obsolescence, and how Free Software in general and KDE specifically can help fight it. The approach of the project is to go beyond our typical audience and introduce people who are environmentally aware but not necessarily very technical to the idea of running sustainable, up-to-date Free Software on their computers, even on devices they may think are no longer usable due to a lack of vendor support. In many cases this is a perfectly fine solution, and it’s surprisingly attractive to a number of people who care about sustainability but haven’t really been introduced to Free Software yet.

Where we are today

The project is in full swing. It has already been present at quite a number of events to motivate people to install Free Software on their (old) devices and to support them in doing it. See for example the report about the Academy of Games for upcoming 9th graders in Hannover, Germany.

Revamping the KDE Eco website

We had a great session putting together ideas and concepts about how we could improve the KDE Eco website. From brainstorming ideas to sketching a wireframe as a group, we discussed and agreed on a direction of how to present what we are doing in the KDE Eco team.

KDE Eco website sketches

The key idea is to focus on three main audiences (end users, advocates, and developers) and present specific material targeted at each group. This nicely matches what we already have, e.g., the KDE Eco handbook on how to fulfill the Blue Angel criteria for developers, or the material being produced for events reaching out to end users, while giving it a much more focused presentation.

Where we are today

The first iteration of the new design is now live on eco.kde.org. There is more to come, but it already gives an impression where this is going. Anita created a wonderful set of design elements which will help to shape the visual identity of KDE Eco going forward.

Surveying end users about their attitude to hardware reuse

Making use of old hardware by installing sustainable free software on it is a wide field. There are many different variations of devices and what users do with them also varies a lot. What are the factors that might encourage users to reuse old hardware, what is holding them back?

To get a bit more reliable answers to these questions we came up with a concept for a user survey which can be used at events where we present the Opt Green project. This includes questions about what hardware people have and what is holding them back from installing new software on it.

Where we are today

The concept has been implemented with an online survey on KDE's survey service. It's available in English and German and is being used at the events where the Opt Green project is present.

Opt-Green Survey

Sustainable AI

One of the big hype topics of the last two years has been Generative AI and the Large Language Models which are behind this technology. They promise to bring revolutionary new features, much closer to how humans interact in natural language, but they also come with new challenges and concerns.

One of the big questions is how this new technology affects our digital freedoms. How does it relate to Free Software? How does licensing and openness work? How does it fit KDE's values? Where does it make sense to use its technology? What are the ethical implications? What are the implications in terms of sustainability?

We had a discussion around the possible idea of adopting something like Nextcloud's Ethical AI rating in KDE as well. This would make it more transparent to users how the use of AI features affects their freedoms and give them a choice to use what they consider satisfactory.

Where we are today

This is still pretty much an open question. The field is moving fast, there are legal questions around copyright and other aspects still to be answered. Local models are becoming more and more an option. But what openness means in AI has become very blurry. KDE still has to find a position here.

Wednesday, 4 September 2024

A while ago a colleague of mine asked about our crash infrastructure in Plasma and whether I could give some overview on it. This seems very useful to others as well, I thought. Here I am, telling you all about it!

Our crash infrastructure comprises a number of different components.

  • KCrash: a KDE Framework performing crash interception and preparation for handover to…
  • coredumpd: a systemd component performing process core collection and handover to…
  • DrKonqi: a GUI for crashes sending data to…
  • Sentry: a web service and UI for tracing and presenting crashes for developers

We’ve already looked at KCrash and coredumpd. Now it is time to look at DrKonqi.

DrKonqi notification

DrKonqi

DrKonqi is the UI that comes up when a crash happens. We’ll explore how it integrates with coredumpd and Sentry.

Crash Pickup

When I outlined the functionality of coredumpd, I mentioned that it starts an instance of systemd-coredump@.service. This not only allows the core dumping itself to be controlled by systemd’s resource control and configuration systems, but it also means other systemd units can tie into the crash handling as well.

That is precisely what we do in DrKonqi. It installs drkonqi-coredump-processor@.service which, among other things, contains the rule:

WantedBy=systemd-coredump@.service

…meaning systemd will not only start systemd-coredump@unique_identifier but also a corresponding drkonqi-coredump-processor@unique_identifier. This is similar to how services start as part of the system boot sequence: they all are “wanted by” or “want” some other service, and that is how systemd knows what to start and when (I am simplifying here 😉). Note that unique_identifier is actually a systemd feature called “instances” — one systemd unit can be instantiated multiple times this way.
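As a concrete illustration of such a template unit, a hypothetical processor unit file could look roughly like the sketch below. This is deliberately simplified; the path, description, and service type here are invented for illustration, not copied from DrKonqi’s actual unit:

```ini
# Hypothetical sketch of a systemd "template" unit that ties into
# coredumpd. All paths and names here are illustrative.
[Unit]
Description=Process crash %i for DrKonqi

[Service]
Type=oneshot
# %i expands to the instance name, i.e. the unique_identifier shared
# with the systemd-coredump@%i instance that pulled this unit in.
ExecStart=/usr/lib/drkonqi-coredump-processor %i

[Install]
# Started whenever a systemd-coredump@ instance starts.
WantedBy=systemd-coredump@.service
```

The `@` in the filename is what makes it a template: systemd substitutes the instance name for every `%i` it finds in the unit body.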

drkonqi-coredump-processor

When drkonqi-coredump-processor@unique_identifier runs, it first has some synchronization to do.

As a brief recap from the coredumpd post: coredumpd’s crash collection ends with writing a journald entry that contains all collected data. DrKonqi needs this data, so we wait for it to appear in the journal.

Once the journal entry has arrived, we are good to go and will systemd-socket-activate a helper in the relevant user’s session.

The way this works is a bit tricky: drkonqi-coredump-processor runs as root, but DrKonqi needs to be started as the user the crash happened to. To bridge this gap a new service drkonqi-coredump-launcher comes into play.

drkonqi-coredump-launcher

Every user session has a drkonqi-coredump-launcher.socket systemd unit running that provides a socket. This socket gets connected to by the processor (remember: it is root so it can talk to the user socket). When that happens, an instance of drkonqi-coredump-launcher@.service is started (as the user) and the processor starts streaming the data from journald to the launcher.

The crash has now traveled from the user, through the kernel, to system-level systemd services, and has finally arrived back in the actual user session.

Having been started by systemd and handed the crash data by the processor, drkonqi-coredump-launcher will now augment that data with the metadata originally saved to disk by KCrash. Once the crash data is complete, the launcher only needs to find a way to “pick up” the crash. This will usually be DrKonqi, but technically other types of crash pickup are also supported. Most notably, developers can set the environment variable KDE_COREDUMP_NOTIFY=1 to receive system notifications about crashes with an easy way to open gdb for debugging. I’ve already written about this a while ago.

When ready, the launcher will start DrKonqi itself and pass over the complete metadata.

the crashed application
└── kernel
    └── systemd-coredump
        ├── systemd-coredump@unique_identifier.service
        └── drkonqi-coredump-processor@unique_identifier.service
            ├── drkonqi-coredump-launcher.socket
            └── drkonqi-coredump-launcher@unique_identifier.service
                └── drkonqi

What a journey!

DrKonqi

Crash Processing

DrKonqi kicks off crash processing. This is hugely complicated and probably worth its own post. But let’s at least superficially explore what is going on.

The launcher has provided DrKonqi with a mountain of information, so it can now use coredumpctl, the CLI for systemd-coredump, to access the core dump and attach an instance of the debugger GDB to it.

GDB runs as a two-step automated process:

Preamble Step

As part of this automation, we run a service called the preamble: a Python program that interfaces with GDB’s Python API. Its most important job is to create a well-structured backtrace that can be converted into a Sentry payload. Sentry, for the most part, doesn’t ingest platform-specific core dumps or crash reports; instead it relies on an abstract payload format generated by so-called Sentry SDKs. DrKonqi essentially acts as such an SDK for us. Once the preamble is done, the payload is transferred into DrKonqi and the next step can continue.
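To give a feel for the shape of such a payload, here is a pure-Python sketch of converting a list of stack frames into a Sentry-style event structure. This is not DrKonqi’s actual preamble code (which works against GDB’s Python API inside gdb); the input format, function name, and signal default are invented for illustration, and the payload is reduced to the bare essentials of Sentry’s documented event schema:

```python
# Simplified, illustrative sketch: turn raw frames into a minimal
# Sentry-style payload. Only "exception", "values", "stacktrace",
# "frames", "function", "filename", "lineno" follow Sentry's schema;
# everything else here is made up.

def frames_to_sentry_payload(frames, signal="SIGSEGV"):
    """frames: list of (function, file, line) tuples, innermost first."""
    stacktrace = {
        "frames": [
            # Sentry expects frames ordered oldest-to-newest,
            # so reverse the innermost-first input.
            {"function": func, "filename": filename, "lineno": line}
            for func, filename, line in reversed(frames)
        ]
    }
    return {
        "exception": {
            "values": [
                {"type": signal, "stacktrace": stacktrace}
            ]
        }
    }

payload = frames_to_sentry_payload([
    ("KCrash::defaultCrashHandler", "kcrash.cpp", 123),
    ("main", "main.cpp", 42),
])
```

The real preamble of course has far more to do: symbolication, thread handling, and attaching all the crash metadata collected earlier.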

Trace Step

After the preamble, DrKonqi executes an actual GDB trace (i.e. the literal backtrace command in gdb) to generate the developer output. This is also the trace that gets sent to KDE’s Bugzilla instance at bugs.kde.org if the user chooses to file a bug report. It is kept separate from the already created backtrace mostly for historic reasons. The trace is then routed through a text parser to figure out whether it is of sufficient quality; only when that is the case will DrKonqi allow filing a report in Bugzilla.

Transmission

With all the trace data assembled, we just need to send it off to Bugzilla or Sentry, depending on what the user chose to do.

Bugzilla

The Bugzilla case is simply sending a very long string of the backtrace to the Bugzilla API (albeit surrounded by some JSON).

Sentry

The Sentry case on the other hand requires more finesse. For starters, the Sentry code also works when offline. The trace and optional user message get converted into a Sentry envelope tagged with a receiver address — a Sentry-specific URL for ingestion so it knows under which project to file the crash. The envelope is then written to ~/.cache/drkonqi/sentry-envelopes/. At this point, DrKonqi’s job is done; the actual transmission happens in an auxiliary service.

Writing an envelope to disk triggers drkonqi-sentry-postman.service, which will attempt to send all pending envelopes to Sentry using the URL inside the payload. It also retries periodically in case envelopes are still pending, making sure crashes collected while offline eventually make it to Sentry. Once sent successfully, the envelopes are archived in ~/.cache/drkonqi/sentry-sent-envelopes/.
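The postman’s core loop can be approximated in a few lines. This is a sketch under assumptions, not the real implementation: the directory layout follows the paths mentioned above, the function name is invented, and the actual HTTP delivery is stood in for by an injected `send` callable:

```python
# Illustrative sketch of a "postman": try to send every pending
# envelope, archive the ones that went through, and keep the rest
# on disk for the next attempt.
from pathlib import Path

def deliver_pending(pending_dir, sent_dir, send):
    """send(path) -> bool; True means the envelope reached Sentry."""
    pending = Path(pending_dir)
    sent = Path(sent_dir)
    sent.mkdir(parents=True, exist_ok=True)
    delivered = []
    for envelope in sorted(pending.glob("*")):
        if send(envelope):  # e.g. an HTTP POST to the URL in the payload
            envelope.rename(sent / envelope.name)  # archive on success
            delivered.append(envelope.name)
        # on failure the envelope simply stays in pending_dir for a retry
    return delivered
```

The important property is that failure is the cheap path: nothing is deleted until delivery succeeded, so an offline machine just accumulates envelopes until the next successful run.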

This concludes DrKonqi’s activity. There’s much more detail going on behind the scenes but it’s largely inconsequential to the overall flow. Next time we will look at the final piece in the puzzle — Sentry itself.

I’m excited to make Tellico 4.0 available as the first version to leverage the new Qt6 and KDE Frameworks 6 libraries. Tellico 4.0 also continues to build with Qt5/KF5 for those who haven’t yet transitioned to the newer versions.

Since this release contains many updates and changes in the underlying library code, please back up your data before switching to the new version. A full backup can be created with the Export to Zip option, which produces a file containing all your images together with the main collection.

Please let me know of any compilation issues or bugs, particularly since I haven’t tested this on a wide range of Qt6/KF6 releases. The KDE builds are all working, which certainly helps my confidence, but one never knows.

Improvements and Fixes

  • Building with Qt6 is enabled by default, falling back to Qt5 for older versions of ECM or when the BUILD_WITH_QT6=off flag is used.
  • Book and video collections can be imported from file metadata (Bug 214606).
  • All entry templates were updated to include any loan information (Bug 411903).
  • Creating and viewing the internal log file is supported through the --log and --logfile command-line options (Bug 426624).
  • The DBUS interface can output to stdout using -- as the file name.
  • Choice fields are now allowed to have multiple values (Bug 483831).
  • The iTunes, Discogs, and MusicBrainz sources now separate multi-disc albums (Bug 479503).
  • A configurable locale was added to the IMDb data source.
  • The Allocine and AnimeNFO data sources were removed.

Whoops, it's already been months since I last blogged. I've been actively involved with Plasma and especially its power management service PowerDevil for over a year now. I'm still learning about how everything fits together.

Turns out though that a little bit of involvement imbues you with just enough knowledge and confidence to review other people's changes as well, so that they can get merged into the next release without sitting in limbo forever. Your favorite weekly blogger for example, Nate Graham, is a force of nature when it comes to responding to proposed changes and finding a way to get them accepted in one form or another. But it doesn't have to take many years of KDE development experience to provide helpful feedback.

Often we simply need another pair of eyes trying to understand the inner workings of a proposed feature or fix. If two people think hard about an issue and agree on a solution, chances are good that things are indeed changing for the better. Three or more, even better. I do take pride in my own code, but just as much in pushing excellent improvements like these past the finish line:

In turn, responsible developers will review your own changes so we can merge them with confidence. Xaver, Natalie and Nate invested time into getting my big feature merged for Plasma 6.2, which you've already read about:

Per-display Brightness Controls

Nate's screenshot is nice and hi-res, I'll just reuse it here

This was really just a follow-up project to the display support developments mentioned in the last blog post. I felt it had to be done, and it lined up nicely with what KWin's been up to recently.

So how hard could it be to add another slider to your applet? Turns out there are indeed a few challenges.

In KDE, we like to include new features early on and tweak them over time. As opposed to, say, the GNOME community, which tends to discuss them for a loooong time in an attempt to merge the perfect solution on the first try. Both approaches have advantages and drawbacks. Our main drawback is that imperfect code becomes harder to change, because it's easy to break functionality for users who already rely on it.

Every piece of code has a set of assumptions embedded into it. When those assumptions don't make sense for the next, improved, hopefully perfect solution (definitely perfect this time around!) then we have to find ways to change our thinking. The code is updated to reflect a more useful set of assumptions, ideally without breaking anyone's apps and desktop. This process is called "refactoring" in software development.

But let's be specific: What assumptions am I actually talking about?

There is one brightness slider for your only display

This one's obvious. You can use more than just one display at a time. However, our previous code only used to let you read one brightness value, and set one brightness value. For which screen? Well... how about the code just picks something arbitrarily. If you have a laptop with an internal screen, we use that one. If you have no internal screen, but your external monitor supports DDC/CI for brightness controls, we use that one instead.

What's that, you have multiple external monitors that all support DDC/CI? We'll set the same value for all of them! Even if the first one counts from 0 to 100 and the second one from 0 to 10,000! Surely that will work.

No it won't. We only got lucky that most monitors count from 0 to 100.

The solution here is to require all software to treat each display separately. We'll start watching for monitors being connected and disconnected. We tell all the related software about it. Instead of a single set-brightness and a single get-brightness operation, we have one of each per display. When the lower layers require this extra information, software higher up in the stack (for example, a brightness applet) is forced to make better choices about the user experience in each particular case. For example, presenting multiple brightness sliders in the UI.
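To see why per-display ranges matter, here is a small illustrative model of translating one percentage into each display's raw range. This is not PowerDevil's actual code; the class and field names are invented, and real DDC/CI handling is far more involved:

```python
# Each display carries its own brightness range; a shared percentage
# is translated per display instead of writing one raw value to all.

class Display:
    def __init__(self, name, max_brightness):
        self.name = name
        self.max_brightness = max_brightness  # e.g. 100 or 10000
        self.brightness = max_brightness      # raw, device-specific units

    def set_percent(self, percent):
        # Convert the shared percentage into this display's raw range.
        self.brightness = round(percent / 100 * self.max_brightness)

displays = [Display("internal", 100), Display("external DDC", 10000)]
for d in displays:
    d.set_percent(50)  # same perceived level, different raw values
```

With the old "one value for everyone" approach, writing 50 raw units to the second monitor would have set it to 0.5% brightness rather than 50%.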

A popup indicator shows the new brightness when it changes

So this raises new questions. With only one display, we can indicate any brightness change by showing you the new brightness on a percentage bar:

OSD indicator with percentage bar, but no display name

Now you press the "Increase Brightness" key on your keyboard, and multiple monitors are connected. This OSD popup shows up on... your active screen? But did the brightness only change for your active screen, or for all of them? Which monitor is this one popup representing?

Ideally, we'd show a different popup on each screen, with the name of the respective monitor:

OSD indicator with percentage bar plus display name. Early screenshot, the final version uses the same small icon as the original popup

That's a good idea! But Plasma's OSD component doesn't have a notion of different popups being shown at the same time on different monitors. It may even take further changes to ask KWin, Plasma's compositor component, about that. What we did for Plasma 6.2 was to provide Plasma's OSD component with all the information it needs to do this eventually. But we haven't implemented our favorite UI yet; instead we hit the 6.2 deadline and pack multiple percentages into a single popup:

OSD indicator with display names and percentages both as text

That's good enough for now, not the prettiest but always clear. If you only use or adjust one screen, you'll get the original fancy percentage bar you know and love.

The applet can do its own brightness adjustment calculations

You can increase or decrease brightness by scrolling on the icon of the "Brightness and Color" applet with your mouse wheel or touchpad. Sounds easy to implement: read the brightness for each display, add or subtract a certain percentage, set the brightness again for the same display.

Nope, not that easy.

For starters, we handle brightness key presses in the background service. You'd expect the "Increase Brightness" key to behave the same as scrolling up with your mouse wheel, right? So let's not implement the same thing in two different places. The applet has to say goodbye to its own calculations, and instead we add an interface to the background service that the applet can use.

Then again, the background service never had to deal with high-resolution touchpad scrolling. It's so high-resolution that each individual scroll event might amount to less than a single brightness step on your screen. The applet contained code to add up all of these tiny changes so that many scroll events taken together will change your screen by at least one step.

Now the service provides this functionality instead, but it adds up the tiny changes for each screen separately. Not only that, it allows you to keep scrolling even if one of your displays has already hit maximum brightness. When you scroll back afterwards, both displays don't just count down from 100% equally, but the original brightness difference between both screens is preserved. Scroll up and down to your heart's content without messing up your preferred setup.
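The accumulate-and-clamp behavior described above can be modeled in a few lines. This is an illustrative sketch with invented names, not the actual service code: the trick is to keep an unclamped target per display and clamp only the value applied to the hardware, so the gap between displays survives one of them hitting 100%:

```python
# Per-display accumulator for high-resolution scroll events.
# Tiny deltas are summed into an unclamped target; only the value
# actually applied to the display is clamped to 0..100, so the
# difference between displays survives hitting the maximum.

class ScrollAccumulator:
    def __init__(self, percent):
        self.target = percent       # unclamped, may exceed 0..100
        self.applied = percent      # what the display is actually set to

    def scroll(self, delta_percent):
        self.target += delta_percent
        # Clamp only the hardware-facing value, not the target.
        self.applied = max(0, min(100, round(self.target)))
        return self.applied

a = ScrollAccumulator(90)
b = ScrollAccumulator(70)
for _ in range(40):          # forty tiny up-scrolls of 0.5% (+20%)
    a.scroll(0.5)
    b.scroll(0.5)
# a is pinned at 100 while its target keeps counting; b reached 90
for _ in range(40):          # scroll back down again (-20%)
    a.scroll(-0.5)
    b.scroll(-0.5)
# both displays are back at their original values, gap intact
```

Whether the real service keeps the overshoot indefinitely or resets it at some point is an implementation detail I'm glossing over here; the sketch only shows why clamping the target itself would destroy the offset.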

Dimming will turn down the brightness, then restore the original value later

Simple! Yes? No. As you may guess, we now need to store the original brightness for each display separately so we can restore it later.

But that's not enough: What if you unplug your external screen while it's dimmed? And then you move your mouse pointer again, so the dimming goes away. Your monitor, however, was not there to get its brightness restored to the original value. Next time you plug it in, it starts out with the dimmed lower brightness as a new baseline, and Plasma will gladly dim even further.

Full disclosure, this was already an issue in past releases of Plasma and is still an issue. Supporting multiple monitors just makes it more visible. More work is needed to make this scenario bullet-proof as well. We'll have to see if a small and safe enough fix can still be made for Plasma 6.2, or if we'll have to wait until later to address this more comprehensively.

Anyway, these kinds of assumptions are what eat up a good amount of development time, as opposed to just adding new functionality. Hopefully users will find the new brightness controls worthwhile.

So let's get to the good news

Your donations allowed KDE e.V. to approve a travel cost subsidy in order to meet other KDE contributors in person and scheme the next steps toward world domination. You know what's coming, I'm going to:

Akademy 2024: 7th to 12th September 2024 in Würzburg, Germany

Akademy is starting in just about two days from now! Thank you all for allowing events like this to happen, I'll try to make it count. And while not everyone can get to Germany in person, keep in mind that it's a hybrid conference and especially the weekend talks are always worth watching online. You can still sign up and join the live chat, or take a last-minute weekend trip to Würzburg if you're in the area, or just watch the videos shortly afterwards (I assume they'll go up on the PeerTube Akademy channel).

I'm particularly curious about the outcome of the KDE Goals vote for the upcoming two years, given that I co-drafted a goal proposal this time around. Whether or not it gets elected, I haven't forgotten about my promise of working on mouse gesture support on Plasma/Wayland. Somewhat late due to the aforementioned display work taking longer. Other interesting things are starting to happen as well on my end. I'll have to be mindful not to stretch myself too thin.

Thanks everyone for being kind. I'll be keeping an eye out for your bug reports when the Plasma 6.2 Beta gets released to adventurous testers in just over a week from today.

Discuss this post on KDE Discuss.