
Friday, 30 August 2024

Let’s go for my web review for the week 2024-35.


Telegram is neither “secure” nor “encrypted”

Tags: tech, telegram, security, privacy

Here’s a good reminder that Telegram’s PR is highly misleading. It’s not very secure, and they don’t really care about your privacy.

https://rys.io/en/171.html


How Telegram’s Founder Pavel Durov Became a Culture War Martyr

Tags: tech, telegram, law

Yes, such an arrest is concerning. Now, lots of people are voicing the wrong concerns… this article actually does a good job explaining it.

https://www.404media.co/how-telegrams-founder-pavel-durov-became-a-culture-war-martyr/


Is the Open Source Bubble about to Burst?

Tags: tech, foss, values, licensing, fundraising

I’m not sure the “bubble” comparison properly applies. Still, there are indeed signs of the Open Source movement getting into trouble. It’ll be all the more important to stick to the Free Software values.

https://tarakiyee.com/is-the-open-source-bubble-about-to-burst/


Elasticsearch is Open Source, Again

Tags: tech, foss, licensing

It’s about time… I wish they had gone for the AGPL + proprietary dual-license scheme instead of their odd licenses last time.

https://www.elastic.co/blog/elasticsearch-is-open-source-again


Top Programming Languages 2024 - IEEE Spectrum

Tags: tech, language

Interesting to see TypeScript and Rust slowly picking up pace. Otherwise Python, Java, JavaScript and C++ are still the big four overall. For jobs, C# and SQL are good to have in your tool belt.

https://spectrum.ieee.org/top-programming-languages-2024


What RSS Needs

Tags: tech, rss

Very good list of the challenges ahead for RSS as a popular protocol. It’d be great to see some of them tackled.

https://www.mnot.net/blog/2024/08/25/feeds


OpenSSH Backdoors

Tags: tech, supply-chain, security, linux

Interesting comparison between old attempts at backdooring OpenSSH and the latest xz attempt. There are lessons to be learned from this. It makes a good case for starting to sandbox everything.

https://blog.isosceles.com/openssh-backdoors/


Bypassing airport security via SQL injection

Tags: tech, security, airline

Whoops, this was clearly a very bad security issue, one that allowed completely bypassing airport security screening in the US.

https://ian.sh/tsa


SQL Has Problems. We Can Fix Them: Pipe Syntax In SQL

Tags: tech, databases, sql

Looks like an interesting way to improve SQL. This feels like a nice extension; it’s much better than throwing out the baby with the bathwater.

https://research.google/pubs/sql-has-problems-we-can-fix-them-pipe-syntax-in-sql/


Python’s Preprocessor

Tags: tech, programming, python, metaprogramming

Interesting to see how far you can go preprocessing Python.

https://pydong.org/posts/PythonsPreprocessor/


Don’t use complex expressions in if conditions | Volodymyr Gubarkov

Tags: tech, programming, maintenance

Definitely good advice; I see very complex expressions in if (or while, for that matter) conditions way too often. They tend to accumulate over time.

https://maximullaris.com/if_condition.html


Why Playwright is less flaky than Selenium

Tags: tech, web, frontend, tests, performance, reliability

An interesting reason which would explain Selenium’s flakiness: it’s just harder to write tests with race conditions using Playwright.

https://justin.searls.co/links/2024-08-29-why-playwright-is-less-flaky-than-selenium/


Putting a meaningful dent in your error backlog

Tags: tech, monitoring, reliability, debugging

How to avoid drowning in errors when getting serious about monitoring. Finding classes of errors and treating them one by one will definitely help.

https://blog.danslimmon.com/2024/08/15/putting-a-meaningful-dent-in-your-error-backlog/


There can’t be only one

Tags: tech, design, databases

A weird detour via obscure baseball rules to justify why we should pay attention to the “Highlander problem”. This should be kept in mind, especially when designing databases.

https://www.b-list.org/weblog/2024/aug/27/highlander-problem/


Building with purpose

Tags: tech, business, architecture

Definitely something architects should do more. Understanding the business needs should be the input to the technical decisions. Otherwise you might just happily build the wrong thing.

https://cremich.cloud/building-with-purpose



Bye for now!

Tuesday, 27 August 2024

A while ago a colleague of mine asked about our crash infrastructure in Plasma and whether I could give some overview of it. I thought this might be useful to others as well. So here I am, telling you all about it!

Our crash infrastructure comprises a number of different components.

  • KCrash: a KDE Framework performing crash interception and preparation for handover to…
  • coredumpd: a systemd component performing process core collection and handover to…
  • DrKonqi: a GUI for crashes sending data to…
  • Sentry: a web service and UI for tracing and presenting crashes for developers

We’ve looked at KCrash previously. This time we look at coredumpd.

Coredumpd

coredumpd collects all crashes happening on the system, through the core_pattern system. It is shipped as part of systemd and as such mostly available out of the box.
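
You can verify that registration yourself; on a typical systemd distribution the kernel’s core_pattern points at systemd-coredump (the exact path and arguments may vary per distro):

cat /proc/sys/kernel/core_pattern
|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h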

It is fairly sophisticated and can manage the backlog of crashes, so old crashes get cleaned out from time to time. It also tightly integrates with journald, giving us a well-defined interface to access crash metadata.
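
If you want to poke at that backlog yourself, coredumpctl is the tool of choice; without arguments, info and gdb act on the most recent crash:

coredumpctl list
coredumpctl info
coredumpctl gdb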

coredumpctl screenshot

But before we dive into the inner workings of coredumpd, let’s talk about cores.

What are cores?

A core, or more precisely: a core dump file, is a copy of the memory image of a process and its process status (registers, mappings, etc.) in a file. Simply put, it’s like we took a copy of the running process from RAM and stored it in a file. The purpose of such a core is that it allows us to look at a snapshot of the process at that point in time without having the process still running. Using this data, we can perform analysis of the process to figure out what exactly went wrong and how we ended up in that situation.

The advantage is that since the process doesn’t need to be running anymore, we can investigate crashes even hours or days after they happened. That is of particular use when things crash while we are not able to deal with them immediately. For example if Plasma were to crash on logout there’d be no way to deal with it besides stopping the logout, which may not even be possible anymore. Instead we let the crash drop into coredumpd, let it collect a core file, and on next login we can tell the user about the crash.

With that out of the way, it’s time to dump a core!

Core Dumps

We already talked about KCrash and how it intercepts crashes to write some metadata to disk. Once it is done it calls raise() to generate one of those core dumps we just discussed. This actually very briefly turns over control to the kernel which will more or less simply invoke the defined core_pattern process. In our case, coredumpd.

coredumpd will immediately systemd-socket-activate itself and forward the data received from the kernel. In other words: it will start an instance of systemd-coredump@.service and the actual processing will happen in there. The advantage of this is that regular systemd security configuration can be applied as well as cgroup resource control and all that jazz — the core dumping happens in a regular systemd service.
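
You can observe this machinery with the regular systemd tooling, assuming the default unit names:

systemctl status systemd-coredump.socket
systemctl list-units 'systemd-coredump@*'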

The primary task here is to actually write the dump to a file. In addition, coredumpd will also collect lots of additional metadata besides what is in the core already. Most notably, various bits and pieces of /proc information such as cgroup information, mount information, the auxiliary vector (auxv), etc.

Once all the data is collected a journald entry is written and the systemd-coredump@.service instance quits again.

The journal entry will contain the metadata as entry fields as well as the path of the core dump on disk, so we can later access it. It essentially serves as a key-value store for the crash data. A severely shortened version looks like this:

Tue 2024-08-27 17:52:27.593233 CEST […]
    COREDUMP_UID=60106
    COREDUMP_GID=60106
    COREDUMP_SIGNAL_NAME=SIGSYS
    COREDUMP_SIGNAL=31
    COREDUMP_TIMESTAMP=1724773947000000
    COREDUMP_COMM=wine64
    COREDUMP_FILENAME=/var/lib/systemd/coredump/core.wine64.….zst
    …

Example

Since this is all rather abstract, we can look at a trivial example to illustrate things a bit better.

Let’s open two terminals. In the first we can watch the journal for the crash to appear.

journalctl -xef SYSLOG_IDENTIFIER=systemd-coredump

In the second terminal we run an instance of sleep in the background, and then trigger a segmentation fault crash.

sleep 99999999999&
kill -SEGV $!

In the first terminal you’ll see the crash happening:

Aug 27 15:01:49 ajax systemd-coredump[35535]: Process 35533 (sleep) of user 60106 terminated abnormally with signal 11/SEGV, processing...
Aug 27 15:01:49 ajax systemd-coredump[35549]: [🡕] Process 35533 (sleep) of user 60106 dumped core.

                                              Stack trace of thread 35533:
                                              #0  0x0000729f1b961dc0 n/a (/lib/ld-linux-x86-64.so.2 + 0x1cdc0)
                                              ELF object binary architecture: AMD x86-64

So far so interesting. “But where is the additional data from /proc hiding?” you might wonder. We need to look at the verbose entry to see all data.

journalctl -o verbose SYSLOG_IDENTIFIER=systemd-coredump
journalctl -o verbose of a crash

This actually already concludes coredumpd’s work. In the next post DrKonqi will step onto the stage.

Here at KDE neon tower we have been busy rebasing our KDE software builds from Ubuntu 22.04 (jammy) to Ubuntu 24.04 (noble). This always takes longer than you’d think, mostly because it’s a moving target, so we also have to keep updating the incoming releases from Plasma, Frameworks, Gear and even Calligra. We had a couple of delays when Jonathan caught Covid (4 years after it was worth sympathy to do so) and then the build server had issues and needed rebuilding itself. But the package archives are there, the Docker images are there, and today the first ISO got built which boots successfully. Next steps are making sure all the software is up to date and getting the upgrade solid. Be with you soon!

First KDE neon ISO based on Noble

Calligra is the office and graphics suite developed by KDE and is the successor to KOffice. With some traditional parts like Kexi and Plan having an independent release schedule, this release only contains the four following components:

  • Words
  • Sheets
  • Stage
  • Karbon

The most significant updates are that Calligra has been fully transitioned to Qt6 and KF6, along with a major overhaul of its user interface.

General

Words, Sheets, and Stage now feature a new sidebar design. Currently, this is implemented using a proxy style, which will no longer be necessary once the related merge request in Breeze is merged.

Sidebar with the new immutable tab design

I revamped the content of each sidebar tab, addressing various visual glitches and making the spacing much more consistent.

The “Custom Shape” docker has been removed, and custom shapes are now accessible through a popup menu in the toolbar across all Calligra applications.

Custom shapes popup

Regarding the toolbar, I streamlined the default layout by removing basic actions like copy, cut, and paste.

The settings dialogs were also cleaned up and are now using the new FlatList style also used by System Settings and most Kirigami applications.

Settings Dialog

Words

Words now features the new sidebar design, and the main view uses a shadow to define the document borders.

Calligra Words

The Style Manager and Page Layout dialog were also updated.

Style Manager

Page Layout

Stage

Stage didn’t really change aside from the sidebar redesign. But I am using it to work on my slides for Akademy, and it is a pretty solid choice.

Calligra Stage

The tooltips for the slides are now compatible with Wayland.

Tooltip showing a slide

Calligra Sheets

As part of the Qt6 port, Sheets lost its scripting system, which was based on the unmaintained Kross framework. In the future, it would be possible to add Python scripting, thanks to the work of Manuel Alcaraz Zambrano on getting Python bindings for the KDE Frameworks.

Visually, a noticeable change is that the cell editor moved from a docker positioned to the left of the spreadsheet view by default to a normal widget at the top. This takes up a lot less space, leaving more room for the spreadsheet.

Calligra Sheets

Karbon

Karbon didn’t receive many changes outside of the ones affecting the whole platform.

Karbon

Launcher

The initial window shown when opening one of the Calligra applications was redesigned and adopted the new “frameless style”.

Custom Document tab of the launcher page

Template tab of the launcher page

Other

  • Braindump can now be compiled again, but since it lacks an active maintainer, the component is disabled in release builds.

  • The webshape plugin has been ported from the outdated QtWebKit module to QtWebEngine and is no longer exclusive to Braindump. This means you can now embed websites directly into your word documents, slides, and spreadsheets.

Webshape

  • The AppStream id of every component is prefixed with org.kde.calligra. This allows Flatpak to expose every Calligra application in your application launcher.

Get Involved

Calligra needs your support! You can contribute by getting involved in development, providing new or updated templates, or making a donation to KDE e.V. Join the discussion in our Matrix channel.

Credits

This release would not have been possible without the high quality mockups provided by Manuel Jesús de la Fuente. Also big thanks to everyone who contributed to this Calligra release: Evgeniy Harchenko, Dmitrii Fomchenkov and bob sayshilol.

Packager Section

You can find the package on download.kde.org and it has been signed with my GPG key.

Monday, 26 August 2024

Project Recap

As my Google Summer of Code 2024 journey concludes, I'm excited to share the updates on my project: Porting Arianna to Foliate-js. The main goal was to replace the outdated epub.js with actively maintained Foliate-js. In my previous blog post, I discussed the initial progress on integrating Foliate-js into Arianna, including the implementation of Table of Contents (TOC) and metadata handling.

My work done so far

Overcoming Rendering Challenges

  • Rendering Issues: One of the major hurdles was fixing the rendering issues that caused the book not to be visible on the screen. This was a complex problem, but with the guidance of my mentor we were able to resolve it successfully, and the book now shows up on the screen.

  • Text Color in Light Theme: I also addressed the text color issues in light theme mode, ensuring the text stays readable and maintaining visual consistency across different themes.

  • Navigation Buttons: Enabled the navigation buttons by setting backend.locationsReady to true when the book is ready. This was a key fix to enhance navigation within the ebook, making it possible to move from one page to another other than through the arrow keys.

  • Theme Color Handling: Lastly, I worked on the handling of theme colors to provide a consistent visual experience across different themes.

light: {
  // Swap foreground and background when the "invert" setting is enabled
  fg: Config.invert ? Kirigami.Theme.backgroundColor.toString() : Kirigami.Theme.textColor.toString(),
  bg: Config.invert ? Kirigami.Theme.textColor.toString() : Kirigami.Theme.backgroundColor.toString()
}

Navigation Buttons

User Experience and Functionality Refinements

  • Slider and Progress Percentage: I fixed the slider functionality, making sure it accurately reflects the reading progress.

commit: ee6cae02

I added the line backend.progress = action.payload.fraction; in the relocate case. The relocate event is triggered when the location changes. The event’s detail contains properties such as range, index, and fraction, where fraction is a number between 0 and 1 indicating the reading progress within the section. By using fraction, the slider can now accurately reflect the user's position in the ebook.
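
For illustration, the relevant branch of the message handling then looks roughly like this (a sketch; the surrounding dispatch code is abbreviated):

case 'relocate':
    // fraction is a number between 0 and 1, as reported by the relocate event
    backend.progress = action.payload.fraction;
    break;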

  • Reading Position Accuracy: I also ensured that the slider accurately reflects the reading position when users interact with it, improving the overall usability. The issue previously was that moving the slider didn't change the position in the book.

To solve this, I implemented the following changes:

QQC2.Slider {
    onValueChanged: {
        // While dragging, only update the displayed progress
        if (pressed) {
            backend.progress = value
        }
    }
    onPressedChanged: {
        // On release, actually jump to the chosen position in the book
        if (!pressed) {
            backend.progress = value
            view.runJavaScript(`reader.view.goToFraction(${value})`)
        }
    }
}
  • Book Progress Display: I resolved issues with the book progress display, the time-left calculation, and the popup behavior, refining the user interface for a smoother reading experience.

Reflections and Takeaways

The project has been a significant learning experience. The most challenging part for me was making things work and realizing that not everything is as straightforward as I initially thought. It was daunting to dive into a large codebase and try to understand how everything fits together, but this experience taught me the importance of patience and of breaking problems down so they become simple to solve.

Looking Ahead

While significant progress has been made, many things are left to do:

  • Fixing right-click copy and search functionality
  • Implementing Ctrl+ shortcut for increasing font size
  • Addressing link color and redirect issues
  • Features like bookmarking and annotations

What’s next?

While my GSoC journey is coming to an end, my contributions to Arianna and the open-source community will continue.

I'd like to thank my mentor, Carl Schwan, for the guidance and support throughout the project, the KDE community, and the Google Summer of Code program for this opportunity. This experience has not only improved Arianna but has also been a transformative journey for me as a developer.

Thank you for following along with my progress, see you in my next blog with more progress.

With the Qt 6.7 release, Qt introduced a wide range of improvements to text rendering and font shaping.

One element of this is that you can now configure OpenType font features.

Many of the 'new cool' programming fonts have such features integrated. That includes both free fonts like Cascadia Code and paid fonts like MonoLisa.

Let's use the features of Cascadia Code as an example; this is the stuff they promote on their GitHub page:

Cascadia Code OpenType font features

For example, if you set the ss01 feature, you get some alternative italics. The same holds for MonoLisa; there it is the feature ss02. That already shows: these features are often not named very usefully and are very font-specific.
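
For the curious, this is exposed in plain Qt through QFont’s new feature API. A minimal sketch, assuming Qt 6.7 (the font and feature tag are just the Cascadia Code example from above):

#include <QGuiApplication>
#include <QFont>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    QFont font(QStringLiteral("Cascadia Code"));
    font.setItalic(true);
    // Enable the "ss01" stylistic set; for Cascadia Code this selects
    // the alternative ("cursive") italic glyphs.
    font.setFeature(QFont::Tag("ss01"), 1);
    // ... apply the font to a widget or text layout as usual
    return 0;
}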

Thanks to Waqar and me, with the upcoming KF 6.6 release one will be able to configure that in Kate and other KTextEditor-based applications.

The generic KDE Frameworks font chooser now allows configuring these features, and KTextEditor will keep the settings around.

Here you can see the alternative italics enabled in Kate, with the enhanced font chooser still open (look at the SPDX markers in the code):

Kate font features demo

A remaining issue is how to best handle saving the configuration in a more generic way. Ideas for how to add that to KConfig without breaking compatibility of the configuration files we write with older applications would be welcome. For KTextEditor we just add an extra key for the features, which will be ignored by old versions.

FrOScon 2024

This year, I attended FrOScon for the first time. FrOScon is the biggest conference about free and open-source software in Germany. It takes place every year in Bonn/Siegburg (Germany) over a weekend and is free to attend.

For the first time, I was not at a conference to staff a KDE stand. My employer had a stand there, and it was a great occasion for me to meet some colleagues and fellow KDE and Matrix contributors.

GnuPG Stand

So I spent the majority of my time at the GnuPG stand, discussing many things with Volker, including KDE PIM and the future of KWallet.

I also met many Matrix community members and am excited to attend the Matrix Conference next month.

Matrix Stand

All in all, it was a great conference, and I hope to see more KDE people there next year, maybe even with our own KDE stand.

Haruna version 1.2.0 is out with a new footer style.


flathub logo

Availability of other package formats depends on your distro and the people who package Haruna.

Windows version:

If you like Haruna then support its development: GitHub Sponsors | Liberapay | PayPal

Feature requests and bugs should be posted on bugs.kde.org, but for bugs make sure to fill in the template and provide as much information as possible.


Changelog:

1.2.1

  • Fixed thumbnail preview popup not resizing after disabling the previews
  • Use the same volume component for both footer styles

1.2.0

  • Added floating footer/bottom toolbar style with 2 ways to trigger it:
    • on every mouse movement of the video area
    • only when the mouse is in the lower part of the video area
  • Removed the docbook and moved its content to tooltips
  • Middle clicking the playlist scrolls to the playing item

Every two years, the KDE community selects three goals that serve as focal points for the entire community's efforts in the coming years. This cyclical process of goal-setting and community-wide focus is a great example of KDE's Cumulative Culture in action.

This concept, typically observed in human societies, refers to the ability to build upon previous knowledge and innovations to create increasingly complex and effective solutions. In KDE's case, each cycle of goals represents a new layer of accumulated wisdom, i.e. new features and more stability.

The First Cycle (2018-2020)

The first cycle of goals laid the groundwork with its focus on community growth, privacy, and usability.

  • Streamlined Onboarding: Focused on attracting and retaining new contributors by making the onboarding process smoother and more engaging.
  • Privacy Software: Prioritized user privacy and security, ensuring KDE software respects user data and complies with security standards.
  • Usability & Productivity: Aimed to enhance the usability and productivity of KDE software, making it powerful yet easy to use.

The Second Cycle (2020-2022)

The second cycle tackled more complex challenges: improving Wayland support (which laid the foundation for the Plasma 6 release), improving the app ecosystem, and ensuring consistency in design and functionality.

  • Wayland: This task aimed at stabilizing Wayland support across KDE apps.
  • All About the Apps: Improved KDE's app infrastructure, enabling more efficient app delivery and better support services.
  • Improve Consistency across the Board: Ensured uniformity in design and functionality across KDE software, improving usability and reducing redundancy.

The Third Cycle (2022-2024)

The third cycle, which is currently coming to an end, was about progress and adaptation, with a focus on environmental responsibility, operational efficiency, and inclusive design.

  • Sustainable Software: Focused on making KDE software more energy-efficient and environmentally friendly by implementing practices that reduce resource consumption and ensure long-term sustainability.
  • Automate and Systematize Internal Processes: Aimed to streamline KDE’s internal workflows by automating repetitive tasks, adding code tests across projects, and creating a Quality Assurance team, to name a few.
  • KDE For All: Sought to make KDE software accessible and inclusive for all users.

A New Cycle A Comin' (2024-2026)

Now, as we enter the fourth cycle of the KDE Goals, we see the full power of this cumulative process. Each goal, whether fully achieved or not, contributes to the collective knowledge and capability of the KDE community. Ideas and partial solutions from past cycles become a solid foundation of knowledge and experience that support future efforts.

The community is currently voting on the following proposals for the next KDE Goals cycle, which will guide our efforts and shape our focus for the coming years:

KDE Goals at Akademy

The three most voted goals will be announced at Akademy, where there will also be a wrap-up talk about the achievements of the current goals. Also, there will be Birds-of-a-feather (BoF) sessions with the new goal champions.

Join the Matrix room and keep an eye on the website for the latest KDE Goals updates.

KDE's Okular is the first software product that was awarded the Blue Angel label for resource- and energy-efficient software products. The certification was based on the first version of the criteria for this product category, which were introduced in 2020. Now the criteria have been updated. What has changed, and what does that mean for KDE?

The revised criteria are available as version 4 on the Blue Angel web site. Only the German version is currently available; the English version will follow shortly.

New software categories

The biggest change is the scope of the label. In the past it was limited to desktop software. With the updated version, the criteria also include software on mobile devices and server software or a combination of these categories, such as a web service with mobile and desktop clients.

The biggest challenge is the measurement of energy and resource efficiency for these new categories. This requires a more flexible approach and must accommodate scenarios where the measurement cannot be done by inserting a meter in front of the power supply of a single device. The new criteria address this by defining applicable methods for measuring mobile and server applications.

The extended scope covers a much broader range of software. For KDE the desktop category is most relevant, but of course a lot of software also interacts with a server component, for example an email client like KMail, which could now be treated and assessed as a combined client-server system to give more realistic and relevant results.

More flexible measurement procedure

The expansion in scope requires an expanded view on the measurement of energy and resource efficiency as well. The first version of the criteria was quite strict and prescribed a very specific measurement procedure on specified reference systems. It was based on a comparison of measurements in a representative usage scenario and in idle mode. This gave a realistic impression of what the usage of a computer program meant in terms of energy consumption.

The new criteria allow for more variation in how the measurements are carried out. The original method is still there, but variations which lead to comparable results are possible as well. This change means that a new criterion was introduced to document the way measurements are done.

In addition to the measurement of the usage scenario, a new type of measurement was introduced, which measures the total energy consumption of a production system over a longer period of time. This is particularly useful for server applications, where this method can lead to more realistic numbers by averaging resource consumption over real-world usage by multiple users.

For mobile applications, the measurement also has to include the data volume transmitted during a standard usage scenario and the list of URLs it has accessed. This is based on the assumption that large volumes of data transfer imply a higher energy usage. It can also be used to assess if the application is using advertisements or is collecting tracking information. Both are forbidden under the revised Blue Angel criteria.

Ongoing assessment of energy and resource efficiency

The original criteria demanded that updates of the software still run on old reference systems and that energy consumption not increase by more than 10%. They were not very clear about how exactly this should be proven and documented. Especially for software which is released very often, testing every individual update is impractical. For mobile, and even more so for server software, update cycles can be very short, up to multiple updates a day.

In the updated criteria there is a more precise way of handling updates. The general idea is still that updated software must run on old hardware and energy consumption must not increase too much. But it's not tied to individual updates anymore. The required procedure is to do a measurement at least once a year and publish the results as part of the documentation of the software product. This includes documenting the measurement setups and any changes to them, as well as preserving the history of measurements, so that users can judge for themselves how much energy and resource usage is increasing over time.

This procedure clarifies the requirements and opens a pragmatic way of measuring updates. It implies a certain burden of keeping the documentation up to date.

Consequences for KDE and Okular

KDE holds the Blue Angel label for its PDF viewer Okular. This is desktop software and the standard usage scenario doesn't include any network access. That means that the expanded scope does not change anything for the existing certification. The revised criteria open up the opportunity to apply for the Blue Angel label for mobile software, such as KDE Connect, and mixed scenarios which also include server components, but the eco-certification for Okular is covered as it was before.

The more flexible measurement criteria give us more leeway in how we do the measurements. We have set up KEcoLab to be able to do measurements regularly. This setup follows the procedure prescribed in the original criteria. As this is still valid, it also means no change for us, and our measurements still fulfill the criteria. However, it gives us more opportunities to improve the lab and doesn't strictly tie us to the original list of reference systems anymore. We might want to take advantage of that.

The documentation of the measurement system is something we have always done in a transparent way, so this also doesn't require any big changes on our side. We have to consider how to best convey this in the documentation of Okular, but this is mostly a question of how we communicate the existing content.

The ongoing assessment of energy and resource efficiency ties very well into how we handle software updates. We have a continuous release stream with frequent updates and incremental changes. This fits the model of the new criteria. We have to review how we include regular updates of the documentation and measurement data in releases, but this again is mostly a question of how we communicate the existing content.

Conclusion

The revised criteria provide a welcome expansion of the Blue Angel to more categories of software and a more flexible way to do energy and resource efficiency measurements. They continue to align well with how KDE develops software in general and Okular in particular, so we do not see any issues with continuing the Blue Angel certification for Okular.

We would be happy if the new version of the criteria increases adoption of the Blue Angel ecolabel for resource- and energy-efficient software. Sustainable software is an important topic, and the Blue Angel can be one way of making progress in this area more visible to a broad audience.