
Friday, 8 November 2024

Make sure you commit anything you want to end up in the KDE Gear 24.12 releases to the relevant release branches.

Next Dates:

  •   November 14, 2024: 24.12 freeze and beta (24.11.80) tagging and release
  •   November 28, 2024: 24.12 RC (24.11.90) tagging and release
  •   December  5, 2024: 24.12 tagging
  •   December 12, 2024: 24.12 release


https://community.kde.org/Schedules/KDE_Gear_24.12_Schedule

I spent the past three weeks refactoring and fixing legacy code (the oldest of which dates from 2013) that handled positioning Plasma desktop icons, and how this data was saved and loaded.

Here's the merge request if you're curious: plasma-desktop: Refactor icon positioner saving and loading

The existing code worked sometimes, but there were some oddities, like race conditions (icon positioning happened in a weird order) and backend code mixed in with frontend code.

Now, I am not blaming anyone for this. Code has a tendency to get a bit weird, especially over long periods of time, and especially in open source projects where anyone can tinker with it.

You know how wired earbuds always, always get tangled when you place them in a drawer or your pocket for a few seconds? Codebases do the exact same thing when multiple people are working on things and fixing each other's bugs. Everyone has a different way of thinking, so it's only natural that things get a bit tangled up over time.

So sometimes you need someone to look at the tangled codebase and try to clear it up a bit.

Reading code is the hardest part

When going through old code, especially code with barely any comments, it can take a very long time to understand what is actually going on. I honestly spent most of my time just trying to understand how the thing even works: what is called when, where the icon positions are updated, and so on.

When I finally had some understanding of what was happening, I could start cleaning things up. I renamed a lot of the old methods to be, hopefully, more descriptive, and moved backend code — like saving icon positions — from the frontend back to the backend.

Screens and icons

Every screen (PC monitor, TV…) tends to have its own quirks. Some, when connected through a DisplayPort adapter, tell your PC they're disconnected when the PC goes into screen-saving mode. Others stay connected but show a blank screen.

One big issue with the icon positions was that when a screen got turned off, the code thought there was no screen anymore and started removing items from the desktop.

That's fair. Why show desktop icons on a screen that doesn't exist? But when you have a monitor that tells your PC, "Okay, I'm disconnecting now!" when the PC says it's time to sleep, the wrong things would happen.

This condition is now handled by checking whether the screen is in use or not. When the screen is not in use, we just do nothing with the icons. No need to touch them at all.
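As a minimal sketch (with invented names, not the actual plasma-desktop API), the new guard boils down to something like this:

#include <iostream>

// Illustrative only: these types stand in for the real plasma-desktop
// classes, which have different names and APIs.
struct Screen {
    bool inUse = true; // false while the display sleeps or reports itself off
};

void repositionIcons(const Screen &screen)
{
    // The guard added by the refactor: a screen that is not in use keeps
    // its icon layout untouched instead of having items removed.
    if (!screen.inUse) {
        return; // do nothing with the icons
    }
    std::cout << "screen active, laying out icons\n";
}

int main()
{
    Screen monitor;
    monitor.inUse = false;    // the monitor just went to sleep
    repositionIcons(monitor); // leaves the layout alone
}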

Stripes and screen resolution

Our icon positioning algorithm uses something called "stripes."

Every resolution has its own number of stripes. Stripes contain an array of icons, or blank spots.

So if your screen resolution is, let's say, 1920x1080, we calculate how many stripes and how many items per stripe will fit on that screen.

Stripe1: 1 2 3 4 5 6 7
Stripe2: 1 2 3 4 5 6 7
Stripe3: 1 2 3 4 5 6 7

And so on..
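As a toy sketch of the idea (the cell size is made up; the real algorithm also accounts for margins, spacing and scale factors), the stripe layout could be derived from the screen size like this:

#include <iostream>

// How many stripes fit on the screen, and how many icons fit per stripe.
struct StripeLayout {
    int stripes;
    int itemsPerStripe;
};

StripeLayout layoutFor(int screenWidth, int screenHeight, int cellSize)
{
    return StripeLayout{
        screenHeight / cellSize, // stripes stack vertically
        screenWidth / cellSize,  // icons fill each stripe horizontally
    };
}

int main()
{
    // Hypothetical 120px grid cell on a 1920x1080 screen.
    const StripeLayout layout = layoutFor(1920, 1080, 120);
    std::cout << layout.stripes << " stripes of "
              << layout.itemsPerStripe << " items\n"; // 9 stripes of 16 items
}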

But when you change your screen resolution or scale factor, how many icon stripes you have and how many icons fit on each stripe will change.

So if you have one of those screens that looks to the system like it's been unplugged when it goes into sleep mode, the stripe layout would previously change to 1 row, 1 column, and the icon positioner would panic and shove all the icons into that single 1,1 slot.

Then, when you'd turn the screen back on, the icon positioner would wonder what just happened and restore the proper stripe number and size. But by that point it would have lost all our positioning coordinate data while shoving the icons into that one minuscule slot, so instead it would reset the icon positions… leaving users wondering why their desktop icon arrangement is now gone.

Here, too, we have to check whether the screen is in use. But there were other problems.

Saving icon positions

The prior code saved the icon positions every time the positions changed. Makes sense.

But it didn't account for the screen being off… so the icon positions would get saved while the desktop was in a faulty state. This also caused frustration: someone would arrange the icons how they wished, then the screen would do something weird, and the icons would be saved in the wrong places again.

Our icon positions were updated after almost every draw call, if the positions changed. This meant saving happened rather often, no matter what moved the icons.

We had to separate user actions from computer actions. If the computer moves the icons, we ideally do not save their positions, unless something drastic has happened, like a resolution change.

The icon positions are saved per resolution, so if you move icons around while they're displayed on a 3440x1440 screen and then change the resolution to 1920x1080, each will have its own arrangement. This part of the codebase did not previously work: it would always override the old configuration, which caused headaches.

So now we only save icon positions when:

  • The user adds or removes a desktop icon
  • The user moves a desktop icon
  • The user changes the desktop resolution

This makes the icon position saving much less random, since it happens only after explicit user actions.
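In code, the rule might look like this sketch (the enum and function are invented for illustration, not taken from plasma-desktop):

#include <iostream>

// Why did the icons move? Only user-driven reasons should persist.
enum class MoveReason { UserAction, PanelResize, ScreenWake, ResolutionChange };

bool shouldSavePositions(MoveReason reason)
{
    switch (reason) {
    case MoveReason::UserAction:       // user added, removed or moved an icon
    case MoveReason::ResolutionChange: // user changed the desktop resolution
        return true;
    default:                           // the computer moved them: don't save
        return false;
    }
}

int main()
{
    std::cout << shouldSavePositions(MoveReason::PanelResize) << '\n'; // 0
    std::cout << shouldSavePositions(MoveReason::UserAction) << '\n';  // 1
}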

Margin errors

The last thing that caused headaches with the icon positioning was that the area available on the desktop for icons was determined before the panels were loaded. When the panels loaded, they would reduce the amount of space for desktop icons, and that area would keep resizing until everything was ready.

In the previous code, this would cause icons to move, which updated their positions, which then saved their positions.

So let's say you arrange your icons nicely, but the next time you boot into Plasma, your panels start shoving the poor icon area around and the icons have to move out of the way… and now they're all in the wrong places.

This was already partially fixed by not saving when the computer moves the icons around: we now load the icon positions only when the screen is in use and we are done listing the icons on the desktop. Part of the margin changes happen while the screen is off.

We still need to fix the loading part; ideally we'd load the icon area last, so that it gets the margins it expects and doesn't shuffle around while panels are still appearing. But that was out of scope for this merge request.

Conclusions

It may not sound like much, but this was a lot of work. I spent days just thinking about this problem, trying to understand what was happening and how to improve it.

Luckily, with a lot of help from reviewers and testers, I got things to work much better than they used to. I am quite "all-over-the-place" when I solve problems, so I appreciate the patience they had with me and my questions. :D

What I mostly wished for when working on this was more inline code comments. You don't need to comment the obvious things, but everything else could use something. It's hard to gauge what is obvious and what is not, but that kind of answers the question: if you don't know whether it's obvious, it's likely not, so add a comment about it.

I do hope that the desktop icons act more reliably after all these changes. If you spot bugs, do report them at https://bugs.kde.org.

Thanks for reading! :)

PS. The funniest thing to me about all of this is that I do not like having any icons on my desktop. :'D

Thursday, 7 November 2024

The last maintenance release of the 24.08 series is out. We fixed some issues, so project opening should be working again.

For the full changelog continue reading on kdenlive.org.

Tuesday, 5 November 2024

This one required a few other features to be implemented first, so let’s jump right in.

Matching reference luminances

A big part of what a desktop compositor needs to get right with HDR content is to show SDR and HDR content properly side by side. KWin 6.0 added an SDR brightness slider for that purpose, but that’s only half the equation - what about the brightness of HDR content?

When we say “HDR”, usually that refers to a colorspace with the rec.2020 primaries and the perceptual quantizer (PQ) transfer function. A transfer function describes how to calculate a real brightness value from the “electrical” signal encoded in the content - PQ specifically has encoded values from 0 to 1 and brightness values from 0 to 10000 nits. For reference, your typical office monitor does around 300 or 400 nits at maximum brightness setting, and many newer phones can go a bit above 1000 nits.
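For the curious, the PQ transfer function is defined in SMPTE ST 2084, and its EOTF fits in a few lines; the constants below come from the spec, while the program around them is just scaffolding:

#include <algorithm>
#include <cmath>
#include <iostream>

// PQ (SMPTE ST 2084) EOTF: maps an encoded value in [0, 1] to nits.
double pqToNits(double encoded)
{
    const double m1 = 2610.0 / 16384.0;
    const double m2 = 2523.0 / 4096.0 * 128.0;
    const double c1 = 3424.0 / 4096.0;
    const double c2 = 2413.0 / 4096.0 * 32.0;
    const double c3 = 2392.0 / 4096.0 * 32.0;

    const double e = std::pow(encoded, 1.0 / m2);
    const double y = std::pow(std::max(e - c1, 0.0) / (c2 - c3 * e), 1.0 / m1);
    return 10000.0 * y;
}

int main()
{
    std::cout << pqToNits(1.0) << " nits\n";  // 10000, the top of the PQ range
    std::cout << pqToNits(0.58) << " nits\n"; // ~203, the PQ reference white
}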

Now if we want to show HDR content on an HDR screen, the most straightforward thing to do would be to just calculate the brightness values, write them to the screen and be done with it, right? That's what KWin did up to Plasma 6.1, but it's far from ideal. Even if your display can show the full range of requested brightness values, you might want to adjust the brightness to match your environment - be it brighter or darker than the room the content was optimized for - and when there are SDR things in HDR content, like subtitles in a video, they should ideally match other SDR content on the screen as well.

Luckily, there is a preexisting relationship between HDR and SDR that we can use: The reference luminance. It defines how bright SDR white is - which is why another name for it is simply “SDR white”.

As we want to keep the brightness slider working, we won't map SDR content to the reference luminance of any HDR transfer function; instead, we map both SDR and HDR content to the SDR brightness setting. If we have an HDR video that uses the PQ transfer function, its reference luminance is 203 nits. If your SDR brightness setting is at 406 nits, KWin will just multiply the brightness of the HDR video by a factor of 2.
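That mapping is just a scale factor. A tiny sketch with the numbers from above (names and numbers are illustrative):

#include <iostream>

// Both SDR and HDR content are scaled so that their reference white lands
// on the SDR brightness setting.
double hdrScaleFactor(double sdrBrightnessNits, double contentReferenceNits)
{
    return sdrBrightnessNits / contentReferenceNits;
}

int main()
{
    // A PQ video has a 203 nit reference white; with the SDR brightness
    // setting at 406 nits, the video is simply multiplied by 2.
    std::cout << hdrScaleFactor(406.0, 203.0) << '\n'; // 2
}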

This doesn't only mean that we can make SDR and HDR content fit together nicely on HDR screens; it also means we now know what to do when we have HDR content on an SDR screen: we map the reference luminance from the video to SDR white on the screen. That's of course not enough to make it look nice though…

Tone mapping

Especially with HDR presented on an SDR screen, but also on many HDR screens, it will happen that the content brightness exceeds the display capabilities. To handle this, starting with Plasma 6.2, whenever the HDR metadata of the content says it’s brighter than the display can go, KWin will apply tone mapping.

Doing this tone mapping in RGB can change the content quite badly though. Let's take a look using the simplest "tone mapping" function there is, clipping: it just limits the red, green and blue values separately to the brightness that the screen can show.

If we have a pixel with the value [2.0, 0.0, 2.0] and a maximum brightness of 1.0, it gets mapped to [1.0, 0.0, 1.0] - the same purple, just darker. But if the pixel has the values [2.0, 0.0, 1.0], then that also gets mapped to [1.0, 0.0, 1.0], even though the source color was significantly more red!

To fix that, KWin’s tone mapping uses ICtCp. This is a color space developed by Dolby, in which the perceived brightness (aka Intensity) is separated from the chroma components (Ct = blue-yellow, Cp = red-green), which is perfect for tone mapping. KWin’s shaders thus transform the RGB content to ICtCp, apply a brightness mapping function to only the intensity component, and then convert back to RGB.
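Structurally, that shader logic looks like the following sketch. Note that the conversions here are identity stubs (the real ones are a matrix transform to LMS, the PQ nonlinearity, and another matrix to I/Ct/Cp, plus the reverse), and the curve is a naive clamp rather than KWin's actual mapping function:

#include <algorithm>
#include <array>
#include <iostream>

using Vec3 = std::array<double, 3>;

// Placeholder conversions, kept as identities so the sketch stays runnable.
Vec3 rgbToICtCp(const Vec3 &rgb) { return rgb; }
Vec3 ictcpToRgb(const Vec3 &ictcp) { return ictcp; }

// Compress only the intensity channel; Ct and Cp keep the hue intact.
Vec3 toneMap(const Vec3 &rgb, double maxIntensity)
{
    Vec3 ictcp = rgbToICtCp(rgb);
    ictcp[0] = std::min(ictcp[0], maxIntensity); // naive "clip" curve on I only
    return ictcpToRgb(ictcp);
}

int main()
{
    const Vec3 mapped = toneMap({2.0, 0.0, 1.0}, 1.0);
    std::cout << mapped[0] << ' ' << mapped[1] << ' ' << mapped[2] << '\n';
}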

The result of that algorithm looks like this:

[Images: the same HDR frame with RGB clipping, with KWin 6.2's tone mapping, and with MPV's tone mapping]

As you can see, there are still some color changes going on in comparison to MPV's algorithm; this is partially because the tone mapping curve still needs some more adjustments, and partially because we also still need to do a similar mapping for colors that the screen can't actually show. It's already a large improvement though, and does better than the built-in tone mapping functionality of many HDR screens.

When tone mapping HDR content on SDR screens, we always end up reducing the brightness of the overall image, so that we have some brightness values to map the really bright highlights in the video to - otherwise everything just slightly over the reference luminance would look like an overexposed blob of color, as you can see in the “RGB clipping” image. There are ways around that though…

HDR on SDR laptop displays

To explain the reasoning behind this, it helps to first have a look at what even makes a display “HDR”. In many cases it’s just marketing nonsense, a label that’s put on displays to make them seem more fancy and desirable, but in others there’s an actual tangible benefit to it.

Let's take OLED displays as an example, as it's considered one of the display technologies where HDR really shines. When you drive an OLED at high brightness levels, it becomes quite inefficient: it draws a lot of power and generates a lot of heat. Both of these things can only be dealt with to a limited degree, so OLED displays can generally only be used with relatively low average brightness levels. They can go a lot brighter than the average in a small part of the screen though, and that's why they benefit so much from HDR - you can show a scene that's on average only 200 nits bright, with the sky in the image going up to 300 nits, the sun going up to 1000 nits and the ground only doing 150 nits.

Now let’s compare that to SDR laptop displays. In the case of most LCDs, you have a single backlight LED for the whole screen, and when you move the brightness slider, the power the backlight is driven at is changed. So there’s no way to make parts of the screen brighter than the rest on a hardware level… but that doesn’t mean there isn’t a way to do it in software!

When we want to show HDR content and the brightness slider is below 100%, KWin increases the backlight level to get a peak brightness that matches the relative peak brightness of that content (as far as that’s possible). At the same time it changes the colorspace description on the output to match that change: While the reference luminance stays the same, the maximum luminance of the transfer function gets increased in proportion to the increase in backlight brightness.

The result is that SDR white gets mapped to a reduced RGB value, which is supposed to exactly counteract the increase in brightness that we're applying with the backlight, while HDR content that goes beyond the reference luminance gets to use the full brightness range.
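A small sketch of that arithmetic, with made-up numbers:

#include <iostream>

int main()
{
    // Hypothetical values: the user asked for 150 nits, and the backlight
    // was raised so the panel peaks at 450 nits for HDR content.
    const double sliderBrightness = 150.0;
    const double boostedBacklight = 450.0;
    const double boost = boostedBacklight / sliderBrightness; // 3x

    // SDR white is encoded at a lower level to cancel the boost exactly...
    std::cout << "SDR white encodes to " << 1.0 / boost << " of max\n"; // ~0.33
    // ...while HDR highlights get the headroom above the reference.
    std::cout << "HDR headroom: " << boost << "x the reference\n";
}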

Increasing the backlight power of course doesn’t come without downsides; black levels and power usage both get increased, so this is only ever active if there’s HDR content on the screen with valid HDR metadata that signals brightness levels going beyond the reference luminance.

As always, capturing HDR content with a phone camera is quite difficult, but I think you can at least sort of see the effect:

[Photos: the same content without and with the backlight adjustment]

This feature has been merged into KWin’s git master branch and will be available on all laptop displays starting with Plasma 6.3. I really recommend trying it for yourself once it reaches your distribution!

KDAB Training Day - May 8th, 2025

Early-Bird tickets are on sale for the KDAB Training Day 2025 until 2025-03-31 23:59


The KDAB Training Day 2025 will take place in Munich on May 8th, right after the Qt World Summit on May 6th-7th. Choose to buy a combo ticket here (for access to QtWS and Training Day) or here (for access to Training Day only).

Seats are limited, so don't wait too long if you want to participate in a specific course. Tickets include access to the selected training course, training material, lunch buffet, beverages, and coffee breaks. Note: The Training Day is held at Hotel NH Collection München Bavaria, located at the Munich Central Station (not the same location as Qt World Summit).

Get your ticket

Why should you attend the KDAB Training Day?

With over 20 years of experience and a rich store of well-structured, constantly updated training material, KDAB offers hands-on, practical programming training in Qt/QML, Modern C++, 3D/OpenGL, Debugging & Profiling, and lately Rust - both for beginners as well as experienced developers.

All courses provided at the Training Day include central parts of the regular 3- to 4-day courses available as scheduled or customized on-site training. Choosing a compact, learning-rich one-day course lets you experience the quality and effectiveness of KDAB's usual training offerings.

Courses available at the KDAB Training Day 2025

QML Application Architecture

In this training, we do a step-by-step walkthrough of how to build a QML-based embedded application from the ground up and discuss some challenges that are typically met along the way.

An important part of that journey is an investigation of where to put the boundaries between what you do in C++ and what you do in QML. We also look at some of the tools and building blocks we have available in QML that can help us achieve well-performing, well-structured, and maintainable applications.

This course is for

(Qt) developers looking to improve their understanding of how to construct maintainable and efficient larger-scale QML applications.

Prerequisite

Some real-world experience working on QML applications as well as a basic understanding of Qt and C++.

Get your ticket

QML/C++ Integration

In this training, we start with a recap of fundamentals:

  • How do we expose C++ API to QML?
  • How do we make data available to QML?

Afterward, we explore several more advanced techniques, often widely deployed within Qt's QML modules, such as Qt Quick.

This will answer questions such as:

  • How would I do a Loader-like component?
  • How would I do a Layout-like component?

This course is for

Qt/QML developers who are familiar with the QML APIs of QtQuick and related modules and who have wondered how these are implemented and want to use similar techniques in their project-specific APIs.

Prerequisite

Some real-world experience working on QML applications as well as a basic understanding of Qt and C++.

Get your ticket

Modern C++ Paradigms

Modern C++ emphasizes safer, more efficient, and maintainable code through higher-level abstractions that reduce error-prone manual work.

This training will explore key paradigms shaping recent C++ evolution, starting with value semantics in class design, which enhances code safety, local reasoning, and thread safety. We will examine modern C++ tools for creating value-oriented types, including move semantics, smart pointers, and other library enablers.

Next, we will look at expressive, type and value-based error handling.

Finally, we'll cover range-based programming, which enables clean, declarative code and unlocks new patterns through lazy, composable transformations.

This course is for

C++ developers who wish to improve the quality of their code, in particular those who wish to write future-proof APIs.

Prerequisites

Prior professional experience in C++. Experience with the latest C++ standards (C++20/23/26) is a plus. We will use several examples inspired by Qt APIs, so Qt knowledge is also a plus (but this is not going to be a Qt training).

Get your ticket

Integrating Rust into Qt Applications

In this step-by-step course, we start with a Qt/C++ application and add Rust code to it piece by piece. To achieve this, we will cover:

  • Use of Cargo (Rust's build system) with CMake
  • Accessing Rust code from C++ with CXX (and vice versa)
  • Defining your own QObject types in Rust with CXX-Qt

We discuss when to use Rust compared to C++ to make the best of both languages and how to use them together effectively to make Qt applications safer and easier to maintain.

This course is for

Qt/C++ Developers with an interest in Rust who want to learn how to use Rust in their existing applications.

Prerequisites

Basic Qt/C++ knowledge, as well as basic Rust knowledge, is required. A working Qt installation with CMake and a working Rust installation is needed. We will provide material before the training day that participants should use to check their setup before the training.

Get your ticket

Effective Modern QML

In this training, we look into all the new developments in QML over the last few years and how they lead to more expressive, performant, and maintainable code.

This includes:
- The qt_add_qml_module CMake API
- Declarative type registration
- The different QML compilers
- New language and library features
- New developments in Qt Quick Controls
- Usage of tools like qmllint, QML Language Server, and qmlformat

The focus will be on gradually modernizing existing codebases with new tools and practices.

This course is for

Developers who learned QML back in the days of Qt 5 and want to catch up with recent developments in QML and modernize their knowledge as well as codebases.

Prerequisites

Some real-world experience with QML and a desire to learn about modern best practices.

Get your ticket

Integrating Custom 3D Renderers with Qt Applications

Qt has long offered ways of using low-level 3D libraries such as OpenGL to do custom rendering. Whether at the Window, the Widget, or the Quick Item level, the underlying rendering system can be accessed in ways that make it safe to integrate such third-party renderers. This remains true in the Qt 6 timeline, although the underlying rendering system has changed and OpenGL has been replaced by RHI.

In this course, we look at how the graphic stack is structured in Qt 6 and how third-party renderers can be integrated on the various platforms supported by Qt.

We then focus on the specific case of integrating Vulkan-based renderers. Vulkan is the successor to OpenGL; it's much more powerful but harder to learn. To facilitate the initial use of Vulkan, we introduce KDGpu, a library that encapsulates Vulkan while preserving the underlying concepts of pipeline objects, buffer handling, synchronization, etc.

This course is for

This course targets developers wanting to understand the current state of the graphics stack in Qt, discover the fundamental principles of modern graphics APIs, and integrate their custom renderers in their applications.

Prerequisite

Prior knowledge of Qt will be required. A basic understanding of 3D graphics would be beneficial.

Get your ticket

Video from KDAB Training Day 2023 held in Berlin

The post KDAB Training Day - May 8th, 2025 appeared first on KDAB.

In many cases, importing data into LabPlot for further analysis and visualization is the first step in the application.

LabPlot supports many different formats (CSV, Origin, SAS, Stata, SPSS, MATLAB, SQL, JSON, binary, OpenDocument Spreadsheets (ods), Excel (xlsx), HDF5, MQTT, Binary Logging Format (BLF), FITS, NetCDF, ROOT (CERN), LTspice, Ngspice) and we plan to add support for even more formats in the future. All of these formats have their reasons for existence as well as advantages and disadvantages. However, the performance of reading the data varies greatly between the different formats and also between the different CPU generations. In this post, we’ll show how long it takes to import a given amount of data in four different formats – ASCII/CSV, Binary, HDF5, and netCDF.

This post is not about promoting any of the formats, nor is it about doing very sophisticated measurements with different amounts and types of data and extensive CPU benchmarking. Rather, it’s about what you can (roughly) expect in terms of performance on the new and not so new hardware with the current implementation in LabPlot.

For this exercise, we import a data set with 1 integer column (the index) and 5 columns of float values (Brownian motion for 5 "particles") with 50 million rows, which results in 300 million numerical values.

We take 6 measurements for each format, ignore the first one (it is almost always an outlier, since the kernel's disk cache makes subsequent reads of the same file faster), and calculate the averages.

As expected, the file formats that deal with binary representation internally (Binary, HDF5, NetCDF) provide significantly better performance compared to ASCII, and this difference becomes larger the slower the CPU is. The performance of HDF5 and NetCDF is almost the same because the newer version of NetCDF is based on HDF5.

The implementation in the data import code is straightforward. Ignoring for a moment the complexity of the different options affecting the behavior of the parser, the different data types and other subtleties, once everything is set up it's just a matter of iterating over the data, parsing it and converting it into the internal structures. The logic inside the loop is fixed, so linear behavior with respect to the total number of values to read is expected. This expectation is confirmed using the same CPU (we took the fastest CPU from the table above) and varying the total number of rows with a fixed number of columns.
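As a toy illustration of that linearity (this is not LabPlot's actual import code), a minimal ASCII reader does a fixed amount of work per value, so doubling the rows doubles the time:

#include <chrono>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main()
{
    std::ifstream file("data.csv"); // hypothetical input file
    std::vector<std::vector<double>> columns(6); // 1 index + 5 value columns

    const auto start = std::chrono::steady_clock::now();
    std::string line;
    while (std::getline(file, line)) {
        std::istringstream row(line);
        std::string cell;
        for (std::size_t c = 0; c < columns.size() && std::getline(row, cell, ','); ++c)
            columns[c].push_back(std::stod(cell)); // parse and store one value
    }
    const auto end = std::chrono::steady_clock::now();

    std::cout << "imported " << columns[0].size() << " rows in "
              << std::chrono::duration<double>(end - start).count() << " s\n";
}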

The performance of the import is even more critical when dealing with external data that is frequently modified. In order to provide a smooth visualization in LabPlot for such "live data", it's important to optimize all the steps involved here: the import of the new data itself, as well as the recalculation in the algorithms (smoothing, etc.) and in the visualization part. For the next release(s), we're working to further optimize the implementation to handle more performance-critical scenarios. The results of these activities, funded by the NLnet grant, will be the subject of a dedicated post soon.

Friday, 1 November 2024

In Plasma 6.2, KWin switched from doing linear blending with HDR to blending in a gamma 2.2 space. Let’s take a look at what that means, and why it was done.

What is blending?

When KWin composites, it paints window by window, going by the order of how the windows are stacked - the bottom-most window first, the topmost window last. When a window is opaque, you just overwrite the pixels in the framebuffer with the ones from the window. When a window is semi-transparent though, we need to additionally do blending.

To do blending, the GPU calculates the value that the framebuffer should have with some equation that takes the pixel from the window and the existing value in the framebuffer, and outputs some appropriate value. Usually that equation is [1]

framebuffer = framebuffer.rgb * (1 - window.alpha) + window.rgb * window.alpha

where window.alpha is a per-pixel value that describes how opaque the pixel is.

Blending with color management

In Plasma 5, compositing happened in the display color space. As displays could be assumed to be roughly the same, with brightness levels encoded in sRGB, that resulted in blending looking very similar everywhere.

With HDR in Plasma 6 however, that assumption was no longer true, so we had to do something else. As it was considered the most “correct” thing and allows us to ensure the exact same blending result even with displays that have wildly different colorspaces, linear blending was chosen for Plasma 6.0. The result of linear blending would look different from sRGB, but it would at least be consistent everywhere.

With linear blending, instead of just taking the rgb values as is, you first convert them into light-linear values, for example with sRGB you’d just do

rgb_linear = pow(rgb_sRGB, 2.2)

then apply the blending equation, and at the end convert it back to whatever encoding the screen needs.

Because of limitations in KWin's renderer and the ancient OpenGL versions KWin supports, the only way to do this was to first render into a so-called shadow buffer [2] with linear values; after compositing, a second shader pass would run, taking that shadow buffer as the input and outputting a buffer with non-linear values suitable for sending to the screen.

The caveats of linear blending

It’ll be pretty obvious to most that doing a fullscreen copy each frame is not ideal for performance or battery life, but there was even a second problem: If you use only 8 or 10 bits per color (bpc) to store linear brightness values, you get visible banding in the dark areas of the image, as human vision is very non-linear. So on top of doing fullscreen copies, KWin also had to use a floating point buffer with 16 bpc, which uses twice the memory bandwidth vs. 8 bpc and makes performance and power usage even worse.

With that performance hit, we couldn’t enable this on all hardware, but had to restrict linear blending to those displays where HDR or an ICC profile is enabled. This threw the whole consistency benefit out of the window, because now a semi-transparent surface would look a lot more transparent just because you enabled HDR… causing the very problem we wanted to avoid!

[Images: a translucent surface on SDR (gamma 2.2 blending) vs. HDR (linear blending)]

We had to do something when blending in HDR though, so a different approach had to be taken.

Custom transfer functions

When storing colors in buffers, usually we use non-linear encodings with transfer functions like sRGB or PQ. All the transfer functions have some sort of luminance levels attached to them; for example, an sRGB value of 1.0 is defined to result in a luminance of 80 nits in its reference viewing environment [3].

There's no reason to be restricted to those definitions though - we can make transfer functions that mean whatever we want or need. In KWin's case, we switched the shadow buffer from using a linear encoding to a gamma 2.2 encoding, in which 1.0 means the maximum luminance of your monitor. This means we do blending in a very similar way as on SDR screens [4], and translucent surfaces on the screen look pretty much the same again, no matter what display settings you have.
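A tiny sketch of such a custom encoding, with a hypothetical 400 nit display:

#include <cmath>
#include <iostream>

// Gamma 2.2 encoding where 1.0 means the display's maximum luminance.
double encode(double nits, double maxLuminance)
{
    return std::pow(nits / maxLuminance, 1.0 / 2.2);
}

double decode(double encoded, double maxLuminance)
{
    return maxLuminance * std::pow(encoded, 2.2);
}

int main()
{
    const double maxLuminance = 400.0; // made-up display peak
    // SDR white at 203 nits lands well inside [0, 1], leaving plenty of
    // code values for the dark end of the image.
    std::cout << encode(203.0, maxLuminance) << '\n';    // ~0.73
    std::cout << decode(1.0, maxLuminance) << " nits\n"; // 400
}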

As the gamma 2.2 encoding allocates more numbers to darker parts of the image, we can also avoid banding with fewer bits per color. Whenever HDR is enabled and the driver supports buffers with 10 bpc, KWin now prefers to use that instead of 16 bpc, alleviating some of the performance issues. There was still more that could be done though…

KMS offloading

On most graphics cards, the scanout hardware that takes care of sending the image to the display has some fixed function blocks to change the colors in various ways - the most common being a simple look up table (LUT) per color channel.

Through the DRM/KMS API, the compositor can set this LUT, so we can use it to change the encoding from whatever we did blending in to the one the display needs. With HDR screens, this means we

  • convert from our shadow buffer encoding with gamma 2.2 and the maximum luminance of the screen to linear
  • possibly apply rgb factors for night light
  • convert from linear to the PQ encoding the screen needs

With that LUT in place, we don’t have to run a shader pass to convert from the shadow buffer encoding to the screen anymore, so the whole fullscreen copy falls away.
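To make those steps concrete, here is a sketch of how such a per-channel LUT could be filled. The PQ constants are from SMPTE ST 2084; the LUT size, the night light factor and everything else are illustrative rather than KWin's actual code:

#include <algorithm>
#include <cmath>
#include <vector>

// Inverse PQ EOTF: nits -> the encoded PQ signal the screen expects.
double nitsToPq(double nits)
{
    const double m1 = 2610.0 / 16384.0, m2 = 2523.0 / 4096.0 * 128.0;
    const double c1 = 3424.0 / 4096.0;
    const double c2 = 2413.0 / 4096.0 * 32.0, c3 = 2392.0 / 4096.0 * 32.0;
    const double y = std::pow(std::clamp(nits / 10000.0, 0.0, 1.0), m1);
    return std::pow((c1 + c2 * y) / (1.0 + c3 * y), m2);
}

std::vector<double> buildLut(double maxLuminance, double nightLightFactor)
{
    std::vector<double> lut(1024); // LUT size chosen arbitrarily here
    for (std::size_t i = 0; i < lut.size(); ++i) {
        const double encoded = double(i) / (lut.size() - 1);
        // 1. shadow buffer encoding (gamma 2.2, 1.0 = screen max) -> nits
        const double nits = maxLuminance * std::pow(encoded, 2.2);
        // 2. night light factor in linear space, 3. nits -> PQ signal
        lut[i] = nitsToPq(nits * nightLightFactor);
    }
    return lut;
}

int main()
{
    const auto lut = buildLut(400.0, 0.9); // hypothetical screen and factor
    return lut.empty() ? 1 : 0;
}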

Except for a few additional instructions in the shaders used for compositing, enabling HDR in Plasma 6.2 thus has no performance impact anymore on the vast majority of hardware!





  1. in practice, window.rgb is already “pre-multiplied” with window.alpha, but that doesn’t really matter here 

  2. it’s called that because it’s never actually shown on the screen 

  3. the viewing environment basically defines a standardized room, in which the content can be viewed as intended without doing any brightness adjustments 

  4. it's not exactly the same though. If you need some very specific blending behavior in your application, it's best to do it yourself 

Tuesday, 29 October 2024

Fedora 41 has been released! 🎉 So let’s see what comes in this new release for the Fedora Atomic Desktops variants (Silverblue, Kinoite, Sway Atomic and Budgie Atomic).

Note: You can also read this post on the Fedora Magazine.

bootupd enabled by default for UEFI systems (BIOS coming soon)

After a long wait and a lot of work and testing, bootloader updates are finally enabled by default for Atomic Desktops.

For now, only UEFI systems will see their bootloader automatically updated on boot as it is the safest option. Automatic updates for classic BIOS systems will be enabled in the upcoming weeks.

If you encounter issues when updating old systems, take a look at the Manual action needed to resolve boot failure for Fedora Atomic Desktops and Fedora IoT Fedora Magazine article which includes instructions to manually update UEFI systems.

Once you are on Fedora 41, there is nothing more to do.

See the Enable bootupd for Fedora Atomic Desktops and Fedora IoT change request and the tracking issue atomic-desktops-sig#1.

First step towards Bootable Containers: dnf5 and bootc

The next major evolution for the Atomic Desktops will be to transition to Bootable Containers.

We have established a roadmap (atomic-desktops-sig#26) and for Fedora 41, we added dnf5 and bootc to the Bootable Container images of Atomic Desktops.

Those images are currently built in the Fedora infrastructure (example) but not pushed to the container registry.

The images currently available on quay.io/fedora (Silverblue, Kinoite, etc.) are mirrored from the ostree repository and thus do not yet include dnf5 and bootc.

Once releng#12142 has been completed, they will be replaced by the Bootable Container images.

In the meantime, you can take a look at the unofficial images (see the Changes in unofficial images section below).

See the DNF and bootc in Image Mode Fedora variants change request and the tracking issue atomic-desktops-sig#48.

What’s new in Silverblue

GNOME 47

Fedora Silverblue comes with the latest GNOME 47 release.

For more details about the changes that arrive alongside GNOME 47, see What’s new in Fedora Workstation 41 on the Fedora Magazine and Fedora Workstation development update – Artificial Intelligence edition from Christian F.K. Schaller.

Ptyxis as default terminal application

Ptyxis is a terminal for GNOME with first-class support for containers, and thus works really well with Toolbx (and Distrobox). This is now the default terminal app and it brings features such as native support for light/dark mode and user-customizable keyboard shortcuts.

See Ptyxis’ website.

Wayland only

Fedora Silverblue is now Wayland only by default. The packages needed for the X11 session will remain available in the repositories maintained by the GNOME SIG and may be overlayed on Silverblue systems that require them.

See the Wayland-only GNOME Workstation Media change request and the tracking issue: atomic-desktops-sig#41.

What’s new in Kinoite

KDE Plasma 6.2

Fedora Kinoite ships with Plasma 6.2, Frameworks 6.7 and Gear 24.08.

See also What’s New in Fedora KDE 41? on the Fedora Magazine.

Kinoite Mobile

Kinoite Mobile is currently only provided as unofficial container images. See the Changes in unofficial images section below.

See the KDE Plasma Mobile Spin and Fedora Kinoite Mobile change request.

What’s new in Sway Atomic

Fedora Sway Atomic comes with the latest 1.10 Sway release.

What’s new in Budgie Atomic

Nothing specific this release. The team is working on Wayland support.

Changes in unofficial images

Until we complete the work needed in the Fedora infrastructure to build and push official container images for the Atomic Desktops (see releng#12142), I am providing unofficial builds of those. They are built on GitLab.com CI runners, use the official Fedora packages and the same sources as the official images.

You can find the configuration and list on gitlab.com/fedora/ostree/ci-test and the container images at quay.io/organization/fedora-ostree-desktops.

New unofficial images: Kinoite Mobile & COSMIC Atomic

With Fedora 41, we are now building two new unofficial images: Kinoite Mobile and COSMIC Atomic. They join our other unofficial images: XFCE Atomic and LXQt Atomic.

See How to make a new rpm-ostree desktop variant in Fedora? if you are interested in making those images official Fedora ones.

See the KDE Plasma Mobile Spin and Fedora Kinoite Mobile change request and the Fedora COSMIC Desktop Environment Special Interest Group (SIG) page.

Renaming the Sericea and Onyx unofficial images to Sway Atomic and Budgie Atomic

If you are using the Sericea or Onyx container images, please migrate to the new Atomic names (sway-atomic and budgie-atomic), as I will remove the images published under the old names soon, likely before Fedora 42.

We will likely rename the official container images as well.

Smaller changes common to all desktops

Unprivileged updates

The polkit policy controlling access to the rpm-ostree daemon has been updated to:

  • Enable users to update the system without having elevated privileges or typing a password. Note that this change only applies to system updates and repository metadata updates; no other operations.
  • Reduce access to the most privileged rpm-ostree operations (such as changing the kernel arguments, or rebasing to another image) for administrators, to avoid mistakes. Only the following operations will remain password-less, to match the behavior of package mode Fedora with the dnf command:
    • install and uninstall packages
    • upgrade the image
    • rollback the image
    • cancel transactions
    • cleanup deployment

See the Unprivileged updates for Fedora Atomic Desktops change request and the tracking issue atomic-desktops-sig#7.

“Alternatives” work again

The alternatives command (alternatives(8)) is now working on Atomic Desktops.

See the tracking issue atomic-desktops-sig#51 for more details and documentation.

Fixes for LUKS unlock via TPM

Support for unlocking a LUKS partition with the TPM is now included in the initramfs.

See the tracking issue atomic-desktops-sig#33 and the in progress documentation silverblue-docs#176.

Universal Blue, Bluefin, Bazzite and Aurora

Our friends in the Universal Blue project have prepared the update to Fedora 41 already. For Bazzite, you can find all the details in Bazzite F41 Update: New Kernel, MSI Claw Improvements, VRR Fixes, Better Changelogs, GNOME 47 & More.

For Bluefin (and similarly for Aurora), see Bluefin GTS is now based on Fedora 40.

I highly recommend checking them out, especially if you feel like some things are missing from the Fedora Atomic Desktops and you depend on them (NVIDIA proprietary drivers, extra media codecs, etc.).

What’s next

We have made a lot of progress since last time, so this section is going to be more exciting!

Roadmap to Bootable Containers

As I mentioned in First step towards Bootable Containers: dnf5 and bootc, the next major evolution for the Atomic Desktops will be the transition to Bootable Containers. See also the Fedora bootc documentation.

We have established a roadmap (atomic-desktops-sig#26) and we need your help to make this a smooth transition for all of our existing users.

composefs

Moving to composefs is one of the items on the roadmap to Bootable Containers. composefs is the next step for ostree based systems and will enable us to provide better integrity and security in the future.

For Fedora 41, we moved Fedora CoreOS and Fedora IoT to composefs.

For the Atomic Desktops, this is planned for Fedora 42 as we still have a few issues to resolve. See the Enabling composefs by default for Atomic Desktops change request and the tracking issue atomic-desktops-sig#35.

Custom keyboard layout set on installation

This fix is important for setups where the root disk is encrypted with LUKS and the user is asked for a passphrase on boot. Right now, the keyboard layout is not remembered and defaults to the US QWERTY layout. Unfortunately, this fix did not land in time for Fedora 41, but it will be part of the Fedora 42 installation ISOs. Help us test this by installing systems from a Rawhide ISO to confirm that the issue is fixed.

If you are impacted by this issue, see the tracking issue atomic-desktops-sig#6 for the manual workarounds.

Unifying the Atomic Desktops documentation

We would like to unify the documentation for the Fedora Atomic Desktops into a single set of docs, instead of having per-desktop-environment docs which are mostly duplicates of one another and need to be constantly kept in sync.

See the tracking issue atomic-desktops-sig#10.

Where to reach us

We are looking for contributors to help us make the Fedora Atomic Desktops the best experience for Fedora users.

I've had my Creality CR-6 SE for quite a while now and it's worked very well. I've even moved with it a couple of times. However, it seems that now was the time for it to give up the ghost, as the extruder casing developed a crack. Apparently, that's not completely uncommon.

The extruder casing removed. The spring is quite powerful.
The crack.

So I searched the internet for spare parts before I realized that this is a common failure mode and that there are all-metal replacements. A few clicks later, I had one ordered from Amazon. I took a chance and ordered one for CR-10, as it looked like it would fit from the photos, and it did (phew). Here is a link to the one I got: link.

The replacement extruder installed.

The installation went smoothly. The only tricky part was getting the threads of the screw pushed by the spring right. The spring is quite strong, so it is really a three-hand operation in an area where my fingers have a hard time fitting. Having done that, it seems like it just works straight out of the box.

First print.

The print has been going on for a couple of hours now, and there have been no hiccups. Big shout-out to OctoPrint while I'm at it. Being able to keep an eye on things without having to sit next to the printer is just great (and not having to fiddle around with SD cards is also really nice).

Getting There!

Wow! What a trip! 20 hours across 2 flights, 2 hours on the train with travel buddies Nate Graham and Bhushan Shah, and several bus "adventures" with Nate Graham to our hotel. The hotel… let's not go into too much detail; suffice it to say it was an absolute mess.

This being my first Akademy in person, getting there was a very anxious experience, but with global roaming on my phone to keep communication flowing and a few travel buddies, it was certainly made much better!

But once we were settled in and unpacked, it was off to the first event!

The Welcome Event!

Wow, was it chaos once all the people showed up, but amazing to see so many KDE users and developers! A few locals even popped their heads in, confused by the packed-out venue. We thankfully managed to get a ride to the venue in Adriaan de Groot's smooth EV and found a few others after parking.

The place had a great vibe, and the free drinks and snacks courtesy of KDE went down a treat! I quickly connected to the free Wi-Fi, spun up some translations of the menu and grabbed the Mexican Fries with guacamole, tomatoes and onions. It was amazing with a few drinks to wash it all down!

On to the main event!

Saturday Talks!

Having the talks start a little later was great, as it meant we didn't have to get up early and go rushing out the door!

Adriaan also brought Stroopwafels with him! So delicious!

Only Hackers Will Survive

The first talk (barring the brief opening by the Akademy team) went right into the thick of it: circular economies, electronic waste, and how we, the hackers, have the right mindset to keep things working well beyond manufacturers' expected lifetimes, whether they are limited by a lack of software updates or pushed out of the market by ever more demanding software performance requirements.

The scenes of landfills around the globe, with people in Third World countries sorting through them to reclaim precious metals, really brought home the true impact. Here's hoping KDE's software and projects can help with this, especially with the Blue Angel Certification. I am really hoping to see Plasma Mobile continue to take charge in this area, where mobile devices are often discarded rapidly in favour of the newest, shiniest thing.

Goals Wrap Up!

Every two years, KDE chooses new goals to focus on to make KDE's software better for all. At this year's Akademy it was time to review the goals outcomes for the last two years.

Carl Schwan went through his accessibility goal outcomes, which included a discussion about how our hardware partners want to improve accessibility, 190 accessibility-related code changes, and a new accessibility inspector application! There were also notes around the Appium CI/CD test suite and the Selenium driver test suite that help ensure accessibility in our software, as well as Project Spiel, a new text-to-speech engine for the Free Desktop, plus turning accessibility into a permanent goal for KDE.

Nate Graham wrapped up his goals around automation and systematisation. These were about being lazier in a good way: automating the boring, trivial things we spend time doing all the time; creating policies to reduce debates where opinions, rather than policy, drive what we do; working as teams instead of alone; and documenting how we do things instead of keeping tribal knowledge locked away in our brains.

I wasn't able to attend much of Cornelius Schumacher's KDE Eco goal wrap-up, so I didn't get any notes on that, but I do love that KDE cares for the environment, and this should be an ongoing goal to make sure our software (and hardware partners) act responsibly to ensure a brighter future for the next generations.

Report of the Board

The KDE board assembled in person to give us a run-down of 2023, including all their favourite things that happened, like paying members going from 53 to 719, GitHub sponsors up from 67 to 132, and recurring non-member donations on Donorbox up from 130 to 170! On top of that, the best fundraising campaign ever in December 2023. A monumental task was also completed with the release of Plasma 6 alongside Frameworks and Gear at the same time! The KDE Eco grant was extended, and they hired new staff as part of the Make a Living goal, including someone to manage the goals. The financial situation improved vastly thanks to a successful Donorbox rollout and fundraising campaign.

Looking to the future, the board is encouraging more community members to get on board with the KDE Goals, and with the improved financial situation they are hoping to leverage it to continue improving KDE's offerings. Conferences and in-person sprints are now viable post-apocalypse! Ask the board for funding/organisation if you want to organise or attend one and represent KDE! They are also aiming to move beyond the Make a Living effort and leverage current staff for further work without losing sight of their goals, and to improve the management of staff and contracts and keep staff on. They're also hoping to expand our app store presence on Google Play, the Windows Store and Flathub, to get more apps on and to investigate possible revenue sources from this.

Adapt or Die: How new Linux packaging approaches affect wider KDE

David Edmundson gave an awesome talk about Linux packaging and where we are headed. Flatpaks! Oh, and I guess Snaps and AppImages and that new one from Deepin… But his talk was generally focussed on how KDE as a community will need to face the challenges of containerised applications, since immutable distros are appearing all the time now. In this well-researched discussion, he raised several case studies he has identified as problem points for progress. This was one of my favourite talks, as I am a big proponent of Flatpak as the future of application distribution and plan on working with David to that end.

Looking Back: What's Next

Wait, what? What does that even mean? Nicolas Fella decided to share with us his run-down of how the porting from Qt5 to Qt6 went as a whole, from the timeline stretching way back to 2019 to the KDE Megarelease of 2024, including how each step was planned out and then managed into reality. He also gave us a run-down of the good and the bad from his viewpoint. The good included lots of great new features and loads of pre-release testing. The bad included bugs (but you can never get rid of all of these), controversial decisions and broken distros as of release. He also noted a lack of documentation for third-party users of KDE Frameworks, and some things got left on the cutting room floor due to a lack of time to get them in. Those have been marked TODO KF7 😂

An Operating System of Our Own

Harald Sitter gave us a run-down of his new operating system, KDE Linux (previously - and my favourite name - Project Banana). This is a new image-based distro that uses BTRFS and images of the OS, with easy switching between them. It is designed for anyone to use, from KDE developers to users and hardware vendors! It will bring apps from Flatpak (my favourite) and Snap to keep the OS and applications separate. I had been watching this for a little while before Akademy, so I was happy to see it announced; it has attracted many people onboard and its development has accelerated immensely! Come join us @ #kde-linux:kde.org on Matrix!

A look on the Bright Side of Life

Harald gave us a quick lightning talk about remaining calm and enjoying what we do. Sometimes things are annoying or frustrating, but sometimes we need to step back and look at all the awesome things we make and the amazing people we make them with. If in doubt, ask for help or a rubber duck, and come back to fight another day.

Sunday Talks!

Sadly I didn't take a lot of notes on Sunday but I attended a few talks.

  • Openwashing - How do we handle (and enforce?) OSS policies in products? by Markus Feilner, Holger Dyroff, Richard Heigl, Leonhard Kugler and Cornelius Schumacher. A discussion on companies using the title and reputation of open source without actually being open or contributing anything to the community.

  • Group Photo - My first time in a KDE related photo!

  • Contributing is more than just code by Kai Uwe Broulik was a really important talk for me, as I can't code. It's just not how my brain works, but the topic of his talk resonated with me so much. KDE has lots of frameworks, apps and libraries, and that requires a lot of code. But there is SO much more work to be done around that code: translations, quality assurance, bug triaging, documentation, just to name a tiny few. If you're interested in contributing to KDE, there are LOTS of things you can help with!

  • Financial support for working on KDE - Jos van den Oever gave a great talk about how you, as a KDE contributor, can apply for funding. It was mostly focussed on NLnet, who had a representative there to encourage us to apply for funding and explain the application and approval processes. They also stuck around for the next few days to help anyone who wanted to apply submit their application!

Daily driving Plasma Mobile and what's still lacking

Another talk that really hit home for me was by Bart Ribbers on Plasma Mobile. We've got amazing stories for our desktop offering, but the mobile space is a huge part of most people's daily lives. So many people don't own a computer any more; they're living their digital life through a mobile phone. This talk gave insight into Bart's daily life with his Plasma Mobile enabled phone and where we need to narrow our focus to improve the experience.

BoFs & Daytrip

Over the following days I attended many BoFs:

  • Opt Green: Website Bloat and Green Web, in which we discussed how our website content affects the environment, from intensive JavaScript and oversized or large images increasing CPU usage and bandwidth, to the core of the web itself and what data centres and traffic hubs use for their electricity: is it renewable, or fuelled by dirty fossil fuels?

  • Opt Green / KDE Eco, which was all about KDE Eco's certification of Okular, their involvement in defining the Blue Angel specification, getting more KDE apps certified, and how KDE was recognised as an expert in sustainability in the German parliament!

  • KDE Goals - We care about your Input, where discussions happened around how we improve input methods, from alternate languages and alphabets to game controllers, digital input tablets and other devices.

  • KDE's release schedules was all about starting discussions on when and how we release software, including how we can better cooperate with the downstream distros who kindly distribute our software.

  • Plasma Discover by Aleix gave us insight into the recent performance changes by Harald and himself, and was just a great general discussion around Plasma Discover's current state and what is needed for the future of software stores.

  • KDE Goals - Streamlined Application Development Experience was a rather interesting chat about how we currently develop apps, with many discussions about how we improve that process, introduce new blood to the community and make it seriously easy to start a new KDE application.

Food!

This worried me in the lead-up and on my way to Germany, as I had decided earlier this year to go vegetarian. However, I was delightfully surprised at the quality and variety of food available, with the food organised by the Akademy team being inclusive of my requirements and tasty! The food at the university cafeteria was also amazing, to my surprise, and I learned that they had won awards!

I also thankfully found some students who sold me a crate of Cola for 10 Euros. I grabbed one and gave the rest to the Akademy help desk for them to give out to attendees for a 1 Euro donation ;)

Outro

  • What an amazing event! Thank-yous, in no particular order, go to Nate Graham; the KDE e.V. for sponsoring my travel; the Akademy team and everyone I met at Akademy for making it an awesome experience, especially for someone who hasn't been to a conference before; all of the KDE community for making awesome software; and the Akademy and KDE sponsors who make these events possible!