
Friday, 20 September 2024

Akademy 2024 group photo. This year Akademy was in Würzburg (shock! horror!). I think it's not too far-fetched to say that we pulled it off successfully.

How it came to be

During last year's Akademy in Thessaloniki, given the high density of KDE people in the area, the idea came up to hold Akademy in Würzburg. On top of that we had the perfect venue in mind: two lecture halls for talks, BoF rooms and an ample common area for people to hack and socialize. I had had such thoughts in the back of my mind for a while but did not share or go through with them.

However, what Akademy does is make you talk to people, and Tobias convinced me that we should do it, and more people even agreed that Akademy in Würzburg would be a good idea! (Of course, for some of these encouragers this does not mean work organizing Akademy like it would for us…) So on the second-to-last day of Akademy I sat down and wrote an email to my former university professor to enquire about the possibility of doing Akademy in the building that we envisioned.

And just like that, after some informal talks during Akademy, a meeting at the university, and sending in a proposal after the Call for Hosts, you end up having weekly meetings to talk about and plan the next installment of Akademy.

How it did go

There were some problems, but I think all in all this year's Akademy went very well. I heard so much praise and positive feedback it felt a bit surreal at first (as did Sunday evening when all the talks were done). The most stressful situation for me happened on Sunday afternoon, when I retrieved my charging phone from the team room only to discover that the social event could not go ahead as planned, and I had to manage that with the rest of the team. I think the resulting evening was very nice and chill, and you could feel my relief. I heard there was even an afterparty at the bar where we had held the welcome event.

Of course during the event there are always minor issues that need dealing with, and looking out for those, dealing with them and trying to make sure everything is running smoothly (and the stress that comes with these things) meant that I couldn't focus much on the talks (and was not in the right state of mind to do so). When the BoF days started this got a bit better, but only on Thursday after the day trip did I feel in a 'conference mood', able to focus fully on the BoFs I was attending and to sit down and hack a bit. If you want to learn more about the actual conference, many people have blogged about it on the Planet, or you can read the report on the Akademy website.

In the end I think it's fair to say that Akademy was a success. Made possible by KDE e.V. (go donate!), the sponsors, all the people of the Akademy team, local and non-local, all the volunteers on short and on long notice, and last but not least all the awesome attendees - there would be no Akademy without all these people. Thank you to you all! I am excited to learn where next year's Akademy will be (you can do it as well - it's not hard and you get an awesome award on top) and I'm looking forward to attending it as a 'normal attendee' and meeting everyone again there (if not earlier).

Thursday, 19 September 2024

GCompris 4.2

Today we are releasing GCompris version 4.2.

It contains bug fixes and graphics improvements on multiple activities.

This version adds translation for Latvian.

It is fully translated in the following languages:

  • Arabic
  • Bulgarian
  • Breton
  • Catalan
  • Catalan (Valencian)
  • Greek
  • UK English
  • Esperanto
  • Spanish
  • Basque
  • French
  • Galician
  • Croatian
  • Hungarian
  • Indonesian
  • Italian
  • Lithuanian
  • Latvian
  • Malayalam
  • Dutch
  • Norwegian Nynorsk
  • Polish
  • Brazilian Portuguese
  • Romanian
  • Russian
  • Slovenian
  • Albanian
  • Swedish
  • Swahili
  • Turkish
  • Ukrainian

It is also partially translated in the following languages:

  • Azerbaijani (97%)
  • Belarusian (87%)
  • Czech (97%)
  • German (96%)
  • Estonian (96%)
  • Finnish (95%)
  • Hebrew (96%)
  • Macedonian (90%)
  • Portuguese (96%)
  • Slovak (84%)
  • Chinese Traditional (96%)

You can find packages of this new version for GNU/Linux, Windows, Android, Raspberry Pi and macOS on the download page. This update will also be available soon in the Android Play store, the F-Droid repository and the Windows store.

Thank you all,
Timothée & Johnny

The jury of this year’s KDE Akademy Awards, being by tradition representatives of last year’s winners, has selected the hex editor Okteta in the category “Best Application”. Thanks to them for this appreciation, even more for a niche application 🙂

Though, appreciation for what, as there are no details? The last new feature was added in 2019, with the 17th patch release since then just done. So, for a reliable program with no need to relearn the UI every year and proudly close to zero open actual bug reports? Then again, the port to Qt6/KF6, while started in 2022, might only be completed in 2025… if ever. So rather, is this an end-of-life award for an aged, 16-year-old program?

Looking Back

Triggered by the event, some reflection on the past development follows, if only for the author himself, to refresh the memories of how one got here and what it brings for the future. Which turned into a longer text than anticipated 🙂

2003-2006: Years of Initial Need for a Widget

The Okteta project was born in 2003; the first known code traces date back to May 13th, 2003. The first related code was imported into KDE's code repository on August 15th, 2003, with the commit message:

Initial import of KHexEdit2, featuring a widget, a kpart and an app.
Most important is the widget...
Hopefully it will be usable enough ready for KDE 3.2...

The name “KHexEdit2” was chosen as the project was a re-implementation of KHexEdit, the hex editor part of KDE since KDE 1.1. At that time I was trying to create a viewer for executables and libraries (project name Binspekt, soon stalled), for which I wanted a widget for displaying bytes. KHexEdit seemed to have no code that could be cleanly ripped out and reused, so work started on coding such a widget from scratch, and also on consumers of it, to add more reason & motivation.

The formal request on September 29th that year to then move the project’s code from the “kdenonbeta” area into release-covered areas had this optimistic line in it:

Finally there will be an app, build around the ReadWritePart. In 2004.

Turned out, life did not agree to that plan, thus 2004 passed without any such app. And so did 2005.

Still, back in February 2004 the first elements of the still-so-named KHexEdit2 project saw a first release as part of KDE 3.2. Though with a bit of complexity. (Note that at this time KDE was still also the name of the released bundle product, composed of the so-called modules kdelibs, kdebase, kdeutils, etc., where kdelibs held all the public libraries, kdebase the basic desktop components, and so on.) As adding a complete implementation of a hex editor widget to the official kdelibs for just a few potential users was declared unbalanced, instead just some header files with KHE-namespaced abstract interface classes were added, with an inline utility method to dynamically load any plugin implementing them, thus not increasing the actual runtime and installation size of kdelibs. And the kdeutils module got to provide such a plugin by the name KBytesEdit. This in turn was implemented by the hex editor widget library from the KHexEdit2 project, also in the kdeutils module, whose own API and headers were kept private. To confuse everyone, this library was still named libkhexedit, even though the actual KHexEdit program did not use it: the spirit of the naming was on the level of widgets and classes, not program names, and there the 2 postfix made no sense. Consumers of this construct became at least KPilot, the utility app for Palm Pilot handhelds, and the debugger plugin of the IDE KDevelop.
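
Schematically, that construct looked roughly like this; the names here are simplified and hypothetical, not the actual kdelibs headers:

// In kdelibs: only an abstract interface plus an inline loader, no implementation.
namespace KHE {

class BytesEditInterface
{
public:
    virtual ~BytesEditInterface() {}
    virtual void setData(char *data, int size) = 0;
    virtual void setReadOnly(bool readOnly) = 0;
};

// Inline, so kdelibs itself carries no extra compiled code: the widget comes
// from whatever plugin provides it, here the KBytesEdit plugin from kdeutils.
inline QWidget *createBytesEditWidget(QWidget *parent)
{
    KLibFactory *factory = KLibLoader::self()->factory("libkbyteseditwidget");
    return factory ? static_cast<QWidget *>(factory->create(parent, "KBytesEdit")) : 0;
}

} // namespace KHE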

In March 2005 KDE 3.4 then included the KHexEdit2 KPart (read-only) as well, also located in the kdeutils module and also implemented using the private libkhexedit library. This made some people unhappy, as it registered (like its Okteta successor still does today) as handler for the MIME type application/octet-stream, so popping up as fallback KPart where no other handler was found. And seeing raw bytes instead of nothing has been partially perceived as “broken” 🙂

2006-2008: Second Try on a Program, as Sample and with new Dedicated Name

2006 arrived, and with the author there was still some ever-developing curiosity about the feasibility of writing viewer & editor programs using higher-level reusable & exchangeable components. Now, a byte array is a pretty simple data structure to use for a sample implementation of such a concept. And here we had a widget implementation for byte array viewing and editing fully under our control. And a certain level of C++ skills acquired by that time. This was just too tempting not to give it an own try, for fun and experience. So a June blog post “Fun with KHexEdit” also mentioned a program again and introduced the name designed meanwhile:

I am tackling the construction of a successor to KHexEdit again, projectname “Okteta”.

Later that year a first visual snippet was shared, showing how KHexEdit’s UI served as initial template for Okteta, also to potentially help the transfer of existing users:

November 2006: first published screenshot of Okteta in some pre-Alpha state

To avoid duplicated efforts and to increase pressure to deliver, two weeks later on November 27th an optimistic email was sent to KDE’s great eternal to-newer-API porting worker Laurent Montel, as it was the time to port KDE software to Qt4/KDELibs4:

Hi Laurent,

please don't spend too much effort at the old program KHexEdit, I am quite far
on the way to write a successor, called Okteta. Concerning feature
compatibility, so far I implemented around 60 % of the features of KHexEdit,
and hope to do the last 40 % until at least January. Yes, no code yet in SVN
(besides the library), but that will change in three weeks, promised.
[...]

This time life mostly agreed to the plan, though the promise about the code in SVN was delivered on only with almost a year's delay. A first dump of the program code was committed into KDE's code repository on October 23rd, 2007, with the commit message:

Uploading the Okteta program into KDE's playground,
so the code isn't lost, after growing slowly only on my hdd for more than a year.

Okteta is a planned successor to KHexEdit, yet misses all of it's functionality.
With it's modular architecture, based on the co-developed lib kakao, it should
soon offer more than would could be done with the monolithic KHexEdit. I hope.

As can be read, this first copy also featured a first draft of the own before-mentioned higher-level component system, initially named “Kakao”, later in 2009 renamed to “Kasten”. That first name was made up ad hoc, inspired by a drink on the table (at a learned safe distance from the keyboard), only to soon find it used similarly by a certain bigger IT company, even for a somehow related subject; thus the new name was that time designed less ad hoc.

And so some months later, in April 2008, Okteta entered the “kdereview” phase, proceeding after two weeks into the kdeutils module, in time for KDE 4.1, so premiering its release as part of that in July 2008. Okteta here also took over providing the KBytesEdit plugin for the kdelibs KHE interfaces as well as the KPart, both of which before had resided in subdirectories of the KHexEdit program sources. KHexEdit itself stayed unported to Qt4/KDELibs4, so Okteta as planned did not run into duplicated efforts and rivalry (or, it avoided competition, for good and bad).

July 2008: Okteta’s first release, as version 0.1, part of KDE 4.1

2008-2012: Years of Features Flow

With the foundations laid and releases established as part of KDE releases, the next years saw a number of features added, initially even with each KDE version:

January 2009: new features in Okteta 0.2, part of KDE 4.2
August 2009: new features in Okteta 0.3, part of KDE 4.3
February 2010: new features in Okteta 0.4, part of KDE SC 4.4
July 2011: new features in Okteta 0.7, part of KDE Applications 4.7
August 2012: new features in Okteta 0.9, part of KDE Applications 4.9

2010-Present: Sharing Functionality in Rich Public Libraries

From the very beginning of the project on, the byte array viewing & editing feature was embeddable by 3rd-party software, using the abstract KHE interfaces in kdelibs or, at least for viewing, the KPart. Though this allowed only little control & few features due to a limited API.

Starting with Okteta 0.4 in February 2010, the two sets of underlying libraries, the Kasten and the Okteta ones, used to implement the Okteta program, the Okteta KPart and the KHE KBytesEdit plugin, have been provided with stable public API.

The lower-Qt-level Okteta GUI library also started to be accompanied by a Qt Designer plugin, to allow easy use of the two provided classes of widgets also in Qt’s UI files.

February 2010: new Okteta widgets plugin for Qt Designer, part of Okteta 0.4

In February 2010, during a week-long Kate-KDevelop development meeting in Berlin, the intended flexibility of the new public libraries proved itself by enabling the creation of a plugin for KDevelop that integrates hex viewing & editing and all the Okteta tools in just those few days, for some nice satisfaction. The plugin was officially released first with KDevelop 4.1 in October 2010 and later also ported to the Qt5/KF5 version of KDevelop. For the current Qt6/KF6-based version of KDevelop the plugin is excluded from the build for now, given the current lack of a released Qt6/KF6-based version of the Okteta and Kasten libraries.

October 2010: Okteta plugin for KDevelop, first released with KDevelop 4.1

2012-Present: Switching from Features to Architecture, from Bundled to Stand-Alone

The port to Qt5/KF5 happened without any issues and was completed for version 0.15, released as part of KDE Applications 14.12. During the transformation of KDELibs4 to KDE Frameworks 5 the KHE interfaces also got dropped there, due to Okteta meanwhile directly providing public libraries. So this ported version of Okteta also no longer provided the KBytesEdit plugin, but otherwise as before the public libraries and the KPart, next to the program itself.

After KDE Applications 17.12, as for a while there was no feature development and only occasional work on the design of the Okteta & Kasten libraries happened, the Okteta project switched to a stand-alone release schedule. A 0.25 version branch was created and patch version releases only done when there were user-relevant changes like bug fixes or bigger translation improvements.

Then 2019 brought the first, and for now also latest, version to provide at least one new feature to users, for which a 0.26 version branch was created. This version has meanwhile received 17 patch releases, with bug fixes, translation improvements and other adjustments. And after 5 years of such polishing it is the one which now received the “Best Application” 2024 Akademy Award 🙂

March 2019: new features in Okteta 0.26, released on own schedule

2022-Present: Preparing for Qt6 & KF6

Okteta's code base has mostly been updated quickly after any API deprecations, also as part of a “zero build warnings” strategy. So the approach taken by both the Qt & KF libraries to strive for source-backward-compatible C++ API in both their new major versions 6 made the initial port of Okteta to Qt6 & KF6(-Alpha) a matter of less than a day in May 2022. That is, only if one ignores one of the tools.

May 2022: preview of Okteta port to Qt6/KF6

The Structures tool, first developed by Alexander Richardson in 2010 for Okteta 0.4, was extended by him in 2011 for Okteta 0.7 to also support dynamic structure definitions, using JavaScript expressions. QtScript was used as the engine for this. Four years later though, in July 2015, Qt 5.5 declared QtScript deprecated. The officially recommended substitute QJSEngine turned out not to allow the dynamic resolution of JavaScript properties and methods that the Structures tool relies on for the copy-avoiding mapping of the data blob interpretation into the JavaScript scene (beware: this is only what the author understands so far). So it could not be used as a drop-in replacement.
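
To illustrate the kind of dynamic resolution meant here: QtScript lets a QScriptClass subclass answer property lookups on demand, so script-visible values are produced only when a script actually accesses them. A minimal, illustrative sketch follows (hypothetical names, not the Structures tool's actual code):

#include <QtScript/QScriptClass>
#include <QtScript/QScriptEngine>
#include <QtScript/QScriptString>
#include <QtScript/QScriptValue>
#include <QByteArray>

// Expose a byte array to scripts without copying it up front: indexed
// properties are resolved lazily, only when a script accesses them.
class ByteArrayAdaptor : public QScriptClass
{
public:
    ByteArrayAdaptor(QScriptEngine *engine, const QByteArray &data)
        : QScriptClass(engine), mData(data) {}

    QueryFlags queryProperty(const QScriptValue &object, const QScriptString &name,
                             QueryFlags flags, uint *id) override
    {
        Q_UNUSED(object);
        bool isArrayIndex = false;
        const quint32 index = name.toArrayIndex(&isArrayIndex);
        if (isArrayIndex && index < quint32(mData.size())) {
            *id = index; // remember which byte was asked for
            return flags & HandlesReadAccess;
        }
        return QueryFlags(); // not ours; let the engine handle it
    }

    QScriptValue property(const QScriptValue &object, const QScriptString &name,
                          uint id) override
    {
        Q_UNUSED(object);
        Q_UNUSED(name);
        // The value is produced on demand from the underlying data.
        return QScriptValue(int(uchar(mData.at(int(id)))));
    }

private:
    QByteArray mData;
};

An instance of such a class would then back an object created with QScriptEngine::newObject(). QJSEngine offers no equivalent hook, so all values would have to be converted and copied into the engine up front.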

As finding a suitable and more future-proof JavaScript engine, or exploring a possible reimplementation using QJSEngine, is a complex task and also needs bigger chunks of time & focus, it had been postponed. Year after year. And thus now, nine years later in 2024, there is still not even a plan. And Qt6 now no longer provides QtScript.

Just dropping the Structures tool is not a real option. It is a great feature, which also got some users. So a plan is needed, and work to be done by someone. As of now my own, surely radical idea is to rewrite the whole Structures tool from scratch, still for the Qt5/KF5 version of Okteta. This should lead to fully wrapping the brain around this complex feature, instead of trying to explore it indirectly by understanding all the details of the current elaborate implementation, with the risk of misinterpreting some intentions. Starting from scratch might also finally allow sharing all the code used for the data formats with the separate Decoding Table tool, and perhaps even introducing a more generic approach to the data formats supported in the main mass display, besides the current byte values and 8-bit charset mapping. Idea, Should, Trying, Starting, Might, Perhaps… any words of confidence, please? 🙂

For now the initial Qt6/KF6 port is maintained as a single commit containing the complete dump of the “it builds, starts and does not crash on simple usage” changes, in a work branch continuously rebased onto the latest Qt5/KF5-based development branch. At the time of the real Qt6/KF6 port this commit would then be properly split into the different aspects of porting. For now it serves to hold the door open while still standing on the other side.

Looking Forward, by Looking Back Some More

For sure the initial goal with the Okteta program to do something for fun and experience has been largely achieved 🙂 The current challenge with the needed replacement of the used JavaScript engine promises more experience, though initially no fun, to me at least.

And some feedback, as well as now even a KDE Akademy Award, hints that the created and publicly shared program also served other people for their serious and less serious needs. Possibly even some desperate Faust-like persons, “So that they may perceive the bytes which hold their doc[ument] together in its inmost fold.” (and even tweak them for the better as needed, owning their world or document). Though no contracts were done, and thus no souls here owned.

But as before, Okteta actually is just a sample implementation of the actual interest pursued here: exploring the feasibility of writing programs from higher-level reusable & exchangeable components, ideally also allowing random end users to mesh those components themselves into tailored solutions for certain needs. So if development has stalled as it has, both on the components concept and on the hex editing features, how to increase motivation again to set resources aside, and for which part?

Position in the Hex Editor Solution Space

When it comes to the Free Software solution space for hex editor needs, next to Okteta there is currently coverage ranging from simple ones like GHex, over wxHexEditor, which serves needs beyond Okteta with support for paged loading of big files and also of the working memory, though it is sadly unmaintained currently, to the newer, most impressive and very powerful ImHex (by what the web pages show; never tried).

So would people suffer if Okteta is gone for current platforms, at least for a while?

Open Component Systems vs. Closed Monolithic Blobs

Now the author, while being curious, never got around to actually studying existing solutions for the concept of higher-level component systems, or even deploying them in projects; he only ever saw some theoretic surfaces. And he is fully aware that the own experiment turning into something serious is rather a pipe dream. Even more so when, after soon two decades, the initially created TODO list is not even 10 % done; this won't work out within this single human's life 🙂 So the following is more like the wish-wash of a hobbyist bird watcher, who also has some chickens in the backyard to which things are compared. Or alike.

It seems composable systems with complex interfaces are not the dominating species in the Free Software ecosystems. The Linux kernel outpaced any microkernel systems; e.g. GNU Hurd is yet to be spotted outside OS zoos. The Eclipse Rich Client Platform, whose concepts were one of the original by-headlines inspirations for this project, seems to have maxed out some time ago as well, at least in the mainstream as seen through the author's bubble. StarOffice^WOpenOffice^WLibreOffice has the UNO component system, but how many add-ons flourish on it? Then GNUstep would have enabled spreading the component concepts of OpenStep, but little has been seen? The later GNUstep-related, indeed striving-for-the-stars impressive project Étoilé seemed to be overloaded with related ideas, but sadly never lifted off. Then the GNOME project even had a reference to component systems in its initial full name “GNU Network Object Model Environment”, but its respective Bonobo framework based on CORBA faded away rather soon.

Also KDE started initially with implementations around the idea of components. To quote the KDE 1.0 announcement:

In view of these circumstances the KDE Project has developed a first rate compound document application framework, implementing the latest advances in framework technology and thus positioning itself in direct competition to such popular development frameworks as for example Mircrosoft’s MFC/COM/ActiveX technology. KDE’s KOM/OpenParts compound document technology enables developers to quickly create first rate applications implementing cutting edge technology. KDE’s KOM/OpenParts technology is build upon the open industry standards such as the object request broker standard CORBA 2.0.

KOM/OpenParts was then replaced in KDE 2.0 by KParts. Actually, the presence of such technology development was the deciding factor to go for KDE when the author got into “Linux” in those days and had to choose between GNUstep, Enlightenment, GNOME and KDE. These days though, KDE is run with claims like “All About Apps”. The generic KServices system got dismantled for KF6. The possibly latest KPart (a Markdown viewer) was written years ago by this author, and the once KDE-central, KPart-driven program Konqueror is only a shadow of its former self. KOffice & Calligra as component-oriented office suites also died or stalled close to extinction. Generic plugins like the KParts are not even listed on apps.kde.org or elsewhere anymore, and are also no longer mentioned as a concept in KDE Gear release announcements. Similarly, specific plugins like the Plasma applets are not listed separately either, but only as part of the respective products, in this example Plasma.

Additionally, packaging formats like Flatpak or Snap are embraced and promoted without discussion, and they push in the direction of isolated and frozen software programs. Even today Flatpak's metadata system AppStream, also otherwise embraced by KDE without discussion, has no concept of generic plugins, so KParts cannot be described properly.

In such an environment a component system would be limited to predefined fixed component sets in libraries, from which applications would then provide a setup and offer that to users. A bit like being able, as a consumer at the kitchen equipment store, to shop only preset exhibition rooms, instead of meshing up items from different providers into one's own tailored meal preparation “app”. Surely it is in the interest of the dominating providers, who then will see to bundle only their items, and then add bloat as well as make only half the items good. As a consumer I desire to have the choice between pre-made bundles and self-assembled ones. Like there are times for all-inclusive vacations and times for self-organized ones. So with current KDE, but also the larger current Free Software “desktop” scene, as the real-world development environment, the author feels at odds working on and thinking about component systems.

So maybe the experiment with Kasten as a higher-level component system could also stop here. Perhaps some research could instead be done into why such systems failed in comparison. Like, was it due to the inflexibility presented by fixed published interfaces, where on new feature needs implementations cannot simply do temporary shortcuts and adaptations where needed to get back to the market quickly? Could it be due to the possible need for more abstract and generic thinking with component systems, where the majority of developers working for the market might prefer to think more concretely and case-by-case? Then, might there still be a middle ground, where the advantages of high-level component systems are the deciding factor in the competition?

Next Release Scheduled: October 10th, version 0.26.18

As described already for the early stages, there always have been ideas and plans… and delays… and also doubts… and then things happened. Locally there are lots of notes with ideas, and a number of code drafts and sketches stacked up over the years. And at least in the near future it seems there are still time windows, electricity, a laptop, and enough human capabilities to carry on tinkering with this stack.

The Okteta (& Kasten) project for now is alive, just lurking around in front of the next evolution step to take. Which might find it new ground. Or extinction anyway. And while it is lurking, it gets a tad more feathers polished, by another bug fix release already scheduled for next month 🙂

KD Reports 2.3.0

We’re pleased to announce the release of KD Reports 2.3.0, the latest version of our reporting tool for Qt applications. This marks our first major update in two years, bringing several bug fixes and new features that further improve the experience of generating reports.

What is KD Reports?

KD Reports is a versatile tool for generating reports directly from Qt applications. It supports creating printable and exportable reports using code or XML, featuring elements like text, tables, charts, headers, and footers. Whether for visualizing database content, generating invoices, or producing formatted printouts, KD Reports makes it easy to create structured reports within your Qt projects.

What’s New in KD Reports 2.3.0?

The new release includes essential bug fixes and feature enhancements that make KD Reports even more robust and user-friendly.


Bug Fixes

The 2.3.0 release addresses several important issues to improve stability and compatibility. One major fix resolves an infinite loop and other problems caused by changes in QTextFormat behavior in Qt 6.7. Right-aligned tabs, which previously didn’t work when paragraph margins were set, have also been corrected. High-DPI rendering has been improved to eliminate blurriness in displays where the device pixel ratio (DPR) is not equal to 1. Furthermore, an issue with result codes being overwritten in the KDReportsPreviewDialog has been fixed. Finally, table borders, which were lost after upgrading to Qt 6.8, now behave as expected, maintaining their cell borders throughout.

New Features

KD Reports 2.3.0 introduces several new features aimed at providing more customization and flexibility in report generation. For instance, the AutoTableElement now supports customization of header styling via the new setHorizontalHeaderFormatFunction and setVerticalHeaderFormatFunction, which are demonstrated in the PriceList example. Additionally, individual table cell formatting has been enhanced with the setCellFormatFunction, allowing for customization of borders and padding. Text alignment within table cells has also been improved with the new setVerticalAlignment feature, making it easy to vertically center or top-align text when using different font sizes within the same row.

The AbstractTableElement now allows setting column constraints while leaving some columns without constraints—just pass {} for unconstrained columns. This feature is particularly useful when setting constraints for columns further to the right. Also, the TableElement has gained rowCount() and columnCount() methods, which can be used in dynamic scenarios, such as applying alternate background colors to rows.

Lastly, you can now disable the progress dialog during printing or PDF export using setProgressDialogEnabled(false). This is useful for applications that generate multiple documents or handle progress tracking internally, offering more control over the user interface during these operations.
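
As an illustration, here is a rough sketch of how some of these additions might fit together. The include paths and the callback signature below are assumptions based on the function names mentioned above, not the verified KD Reports API; the PriceList example shipped with the release shows the real usage:

// Hypothetical sketch: include paths and the callback signature are assumed,
// only the function names come from the release notes above.
#include <KDReportsReport.h>
#include <KDReportsAutoTableElement.h>

#include <QAbstractItemModel>
#include <QTextCharFormat>

void buildReport(KDReports::Report &report, QAbstractItemModel *model)
{
    KDReports::AutoTableElement table(model);

    // Assumed signature: the callback receives the header section index
    // and a mutable character format to style it.
    table.setHorizontalHeaderFormatFunction([](int section, QTextCharFormat &format) {
        Q_UNUSED(section)
        format.setFontWeight(QFont::Bold); // e.g. bold column headers
    });

    report.addElement(table);

    // Skip the progress dialog, e.g. when generating documents in a batch.
    report.setProgressDialogEnabled(false);
}

The rowCount()/columnCount() getters and the {} column constraints would slot into the same flow.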

You can explore all the new features and improvements in KD Reports 2.3.0 on its GitHub page. Download the latest release and check out the detailed changes to see how they can enhance your reporting tasks. Feel free to share your feedback or report any issues you encounter along the way.

The post KD Reports 2.3.0 appeared first on KDAB.

Implementing an Audio Mixer, Part 2

Recap

In Part 1, we covered PCM audio and superimposing waveforms, and developed an algorithm to combine an arbitrary number of audio streams into one.

Now we need to use these ideas to finish a full implementation using Qt Multimedia.

Using Qt Multimedia for Audio Device Access

So what do we need? Well, we want to use a single QAudioOutput, to which we pass an audio device and a supported audio format.

We can get those like this:

const QAudioDeviceInfo &device = QAudioDeviceInfo::defaultOutputDevice();
const QAudioFormat &format = device.preferredFormat();

Let’s construct our QAudioOutput object using the device and format:

static QAudioOutput audioOutput(device, format);

Now, to use it to write data, we have to call start on the audio output, passing in a QIODevice *.
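
For example, with the audioOutput from above and some stream object standing in for any QIODevice subclass opened for reading (a hypothetical placeholder at this point):

// The audio output will pull data from the device by internally calling readData().
audioOutput.start(&stream);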

Normally we would use the QIODevice subclass QBuffer for a single audio buffer. But in this case, we want our own subclass of QIODevice, so we can combine multiple buffers into one IO device.

We'll call our subclass MixerStream. This is where we will do our sample combining, and keep our member list of streams, mStreams.

We will also need another stream type for mStreams. For now let's just call it DecodeStream, forward declare it, and worry about its implementation later.

One thing that's good to know at this point is that DecodeStream objects will get the data buffers we need by decoding audio data from a file. Because of this, we'll need to keep our audio format from above as a data member mFormat. Then we can pass it to decoders when they need it.

Implementing MixerStream

Since we are subclassing QIODevice, we need to provide reimplementations for these two protected virtual functions:

virtual qint64 QIODevice::readData(char *data, qint64 maxSize);
virtual qint64 QIODevice::writeData(const char *data, qint64 maxSize);

We also want to provide a way to open new streams that we’ll add to mStreams, given a filename. We’ll call this function openStream. We can also allow looping a stream multiple times, so let’s add a parameter for that and give it a default value of 1.

Additionally, we’ll need a user-defined destructor to delete any pointers in the list that might remain if the MixerStream is abruptly destructed.

// mixerstream.h

#pragma once

#include <QAudioFormat>
#include <QAudioOutput>
#include <QIODevice>

class DecodeStream;

class MixerStream : public QIODevice
{
    Q_OBJECT

public:
    explicit MixerStream(const QAudioFormat &format);
    ~MixerStream();

    void openStream(const QString &fileName, int loops = 1);

protected:
    qint64 readData(char *data, qint64 maxSize) override;
    qint64 writeData(const char *data, qint64 maxSize) override;

private:
    QAudioFormat mFormat;
    QList<DecodeStream *> mStreams;
};

Notice that combineSamples isn’t in the header. It’s a pretty basic function that doesn’t require any members, so we can just implement it as a free function.

Let’s put it in a header mixer.h and wrap it in a namespace:

// mixer.h

#pragma once

#include <QtGlobal>

#include <limits>

namespace Mixer
{
inline qint16 combineSamples(qint32 samp1, qint32 samp2)
{
    const auto sum = samp1 + samp2;
    if (std::numeric_limits<qint16>::max() < sum)
        return std::numeric_limits<qint16>::max();

    if (std::numeric_limits<qint16>::min() > sum)
        return std::numeric_limits<qint16>::min();

    return sum;
}
} // namespace Mixer

There are some very basic things we can get out of the way quickly in the MixerStream cpp file. Recall that we must implement these member functions:

explicit MixerStream(const QAudioFormat &format);
~MixerStream();

void openStream(const QString &fileName, int loops = 1);

qint64 readData(char *data, qint64 maxSize) override;
qint64 writeData(const char *data, qint64 maxSize) override;

The constructor is very simple:

MixerStream::MixerStream(const QAudioFormat &format)
    : mFormat(format)
{
    setOpenMode(QIODevice::ReadOnly);
}

Here we use setOpenMode to automatically open our device in read-only mode, so we don’t have to call open() directly from outside the class.

Also, since it’s going to be read-only, our reimplementation of QIODevice::writeData will do nothing:

qint64 MixerStream::writeData([[maybe_unused]] const char *data,
                              [[maybe_unused]] qint64 maxSize)
{
    Q_ASSERT_X(false, "writeData", "not implemented");
    return 0;
}

The custom destructor we need is also quite simple:

MixerStream::~MixerStream()
{
    while (!mStreams.empty())
        delete mStreams.takeLast();
}

readData will be almost exactly the same as the implementation we did earlier, but returning qint64. The return value is meant to be the amount of data written into the output buffer, which in our case is just the maxSize argument given to it, as we write fixed-size buffers.

Additionally, we should call qAsConst (or std::as_const) on mStreams in the range-for to avoid detaching the Qt container. For more on qAsConst and range-based for loops, see Jesper Pedersen's blog post on the topic.

qint64 MixerStream::readData(char *data, qint64 maxSize)
{
    memset(data, 0, maxSize);

    // Use wide types so large buffers can't overflow the sample count.
    constexpr int bitDepth = sizeof(qint16);
    const qint64 numSamples = maxSize / bitDepth;

    for (auto *stream : qAsConst(mStreams))
    {
        auto *cursor = reinterpret_cast<qint16 *>(data);
        qint16 sample;

        for (int i = 0; i < numSamples; ++i, ++cursor)
            if (stream->read(reinterpret_cast<char *>(&sample), bitDepth))
                *cursor = Mixer::combineSamples(sample, *cursor);
    }

    return maxSize;
}

That only leaves us with openStream. This one will require us to discuss DecodeStream and its interface.

The function should construct a new DecodeStream on the heap, which will need a file name and format. DecodeStream, as implied by its name, needs to decode audio files to PCM data. We’ll use a QAudioDecoder within DecodeStream to accomplish this, and for that, we need to pass mFormat to the constructor. We also need to pass loops to the constructor, as each stream can have a different number of loops.

Now our constructor call will look like this:

DecodeStream(fileName, mFormat, loops);

We can then use operator<< to add it to mStreams.

Finally, we need to remove it from the list when it’s done. We’ll give it a Qt signal, finished, and connect it to a lambda expression that will remove the stream from the list and delete the pointer.

Our completed openStream function now looks like this:

void MixerStream::openStream(const QString &fileName, int loops)
{
    auto *decodeStream = new DecodeStream(fileName, mFormat, loops);
    mStreams << decodeStream;

    connect(decodeStream, &DecodeStream::finished, this, [this, decodeStream]() {
        mStreams.removeAll(decodeStream);
        decodeStream->deleteLater();
    });
}

Recall from earlier that we call read on a stream, which takes a char * to which the read data will be copied and a qint64 representing the size of the data.

This is a QIODevice function, which will internally call readData. Thus, DecodeStream also needs to be a QIODevice.

Getting PCM Data for DecodeStream

In DecodeStream, we need readData to spit out PCM data, so we need to decode our audio file to get its contents in PCM format. In Qt Multimedia, we use a QAudioDecoder for this. We pass it an audio format to decode to, and a source device, in this case a QFile file handle for our audio file.

When a QAudioDecoder's start method is called, it will begin decoding the source file in a non-blocking manner, emitting a signal bufferReady when a full buffer of decoded PCM data is available.

On that signal, we can call the decoder's read method, which gives us a QAudioBuffer. To store it in a data member in DecodeStream, we use a QByteArray, which we can interact with using QBuffers to get a QIODevice interface for reading and writing. This is the ideal way to work with buffers of bytes to read or write in Qt.

We'll make two QBuffers: one for writing data to the QByteArray (we'll call it mInputBuf), and one for reading from the QByteArray (we'll call it mOutputBuf). The reason for using two buffers rather than one read/write buffer is so the read and write positions can be independent. Otherwise, we would encounter more stuttering.
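
A standalone illustration of those independent positions (not code from the mixer itself):

QByteArray data;
QBuffer inputBuf(&data);  // write end
QBuffer outputBuf(&data); // read end
inputBuf.open(QIODevice::WriteOnly);
outputBuf.open(QIODevice::ReadOnly);

inputBuf.write("abcd", 4); // write position is now 4
char c;
outputBuf.read(&c, 1);     // reads 'a'; the read position advances to 1, independently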

So when we get the bufferReady signal, we’ll want to do something like this:

const QAudioBuffer buffer = mDecoder.read();
mInputBuf.write(buffer.data<char>(), buffer.byteCount());

We’ll also need to have some sort of state enum. The reason for this is that when we are finished with the stream and emit finished(), we remove and delete the stream from a connected lambda expression, but read might still be called before that has completed. Thus, we want to only read from the buffer when the state is Playing.

Let’s update mixer.h to put the enum in namespace Mixer:

#pragma once

#include <QtGlobal>

#include <limits>

namespace Mixer
{
enum State
{
    Playing,
    Stopped
};

inline qint16 combineSamples(qint32 samp1, qint32 samp2)
{
    const auto sum = samp1 + samp2;

    if (std::numeric_limits<qint16>::max() < sum)
        return std::numeric_limits<qint16>::max();

    if (std::numeric_limits<qint16>::min() > sum)
        return std::numeric_limits<qint16>::min();

    return sum;
}
} // namespace Mixer

Implementing DecodeStream

Now that we understand all the data members we need to use, let’s see what our header for DecodeStream looks like:

// decodestream.h

#pragma once

#include "mixer.h"

#include <QAudioDecoder>
#include <QBuffer>
#include <QFile>

class DecodeStream : public QIODevice
{
    Q_OBJECT

public:
    explicit DecodeStream(const QString &fileName, const QAudioFormat &format, int loops);

protected:
    qint64 readData(char *data, qint64 maxSize) override;
    qint64 writeData(const char *data, qint64 maxSize) override;

signals:
    void finished();

private:
    QFile mSourceFile;
    QByteArray mData;
    QBuffer mInputBuf;
    QBuffer mOutputBuf;
    QAudioDecoder mDecoder;
    QAudioFormat mFormat;
    Mixer::State mState;
    int mLoopsLeft;
};

In the constructor, we’ll initialize our private members, open the DecodeStream in read-only (like we did earlier), make sure we open the QFile and QBuffers successfully, and finally set up our QAudioDecoder.

DecodeStream::DecodeStream(const QString &fileName, const QAudioFormat &format, int loops)
    : mSourceFile(fileName)
    , mInputBuf(&mData)
    , mOutputBuf(&mData)
    , mFormat(format)
    , mState(Mixer::Playing)
    , mLoopsLeft(loops)
{
    setOpenMode(QIODevice::ReadOnly);

    const bool valid = mSourceFile.open(QIODevice::ReadOnly) &&
                       mOutputBuf.open(QIODevice::ReadOnly) &&
                       mInputBuf.open(QIODevice::WriteOnly);

    Q_ASSERT(valid);

    mDecoder.setAudioFormat(mFormat);
    mDecoder.setSourceDevice(&mSourceFile);
    mDecoder.start();

    connect(&mDecoder, &QAudioDecoder::bufferReady, this, [this]() {
        const QAudioBuffer buffer = mDecoder.read();
        mInputBuf.write(buffer.data<char>(), buffer.byteCount());
    });
}

Once again, our QIODevice subclass is read-only, so our writeData method looks like this:

qint64 DecodeStream::writeData([[maybe_unused]] const char *data,
                               [[maybe_unused]] qint64 maxSize)
{
    Q_ASSERT_X(false, "writeData", "not implemented");
    return 0;
}

Which leaves us with the last part of the implementation, DecodeStream's readData function.

We zero out the char * with memset to avoid any noise if there are areas that are not overwritten. Then we simply read from the QByteArray into the char * if mState is Mixer::Playing.

We check whether we have finished reading the file with QBuffer::atEnd(), and if we have, we decrement the loops remaining. If it's zero now, that was the last (or only) loop, so we set mState to Mixer::Stopped and emit finished(). Either way we seek back to position 0, so if there are loops left, reading starts from the beginning again.

qint64 DecodeStream::readData(char *data, qint64 maxSize)
{
    memset(data, 0, maxSize);

    if (mState == Mixer::Playing)
    {
        mOutputBuf.read(data, maxSize);
        if (mOutputBuf.size() && mOutputBuf.atEnd())
        {
            if (--mLoopsLeft == 0)
            {
                mState = Mixer::Stopped;
                emit finished();
            }

            mOutputBuf.seek(0);
        }
    }

    return maxSize;
}

Now that we’ve implemented DecodeStream, we can actually use MixerStream to play two audio files at the same time!

Using MixerStream

Here’s an example snippet that shows how MixerStream can be used to route two simultaneous audio streams into one system mixer channel:

const auto &device = QAudioDeviceInfo::defaultOutputDevice();
const auto &format = device.preferredFormat();

auto mixerStream = std::make_unique<MixerStream>(format);

auto *audioOutput = new QAudioOutput(device, format);
audioOutput->setVolume(0.5);
audioOutput->start(mixerStream.get());

mixerStream->openStream(QStringLiteral("/path/to/some/sound.wav"));
mixerStream->openStream(QStringLiteral("/path/to/something/else.mp3"), 3);

Final Remarks

The code in this series of posts is largely a reimplementation of Lova Widmark’s project QtMixer. Huge thanks to her for a great and lightweight implementation. Check the project out if you want to use something like this for a GPL-compliant project (and don’t mind that it uses qmake).

The post Implementing an Audio Mixer, Part 2 appeared first on KDAB.

Ruqola 2.3.0 is a feature and bugfix release of the Rocket.Chat client.

New features:

  • Implement Rocket.Chat Marketplace.
  • Allow cleaning room history.
  • Allow checking for a new version.
  • Implement moderation (administrator mode).
  • Add welcome page.
  • Implement pending users info (administrator mode).
  • Use cmark-rc (https://github.com/dfaure/cmark-rc) for markdown support.
  • Delete oldest files from some cache directories (file-upload and media) so they don't grow forever.

Fixed bugs:

  • Clean marketplace application model after 30 minutes (reduces memory footprint).
  • Fix showing the discussion name in completion.
  • Fix duplicated messages in search message dialog.
  • Add delegate in search rooms in team dialog.

URL: https://download.kde.org/stable/ruqola/
Source: ruqola-2.3.0.tar.xz
SHA256: 051186793b7edc4fb2151c80ceab3bcfd65acb27d38305568fda54553660fdd0
Signed by: E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Riddell jr@jriddell.org
https://jriddell.org/jriddell.pgp

Wednesday, 18 September 2024

This year was my first Akademy, and I was thrilled to be able to attend in person. Even better, some veterans said it was the best Akademy so far. It was great to see some people I had met in April and to meet new people. I arrived on Friday, 6th Sept and left the following Friday. I very much enjoyed staying in the lovely town of Würzburg and doing a day tour of Rothenburg. Now that I've caught up on sleep (the jet lag, it is real), it's time to write about it.

As described in the Akademy 2024 report, the focus this year was resetting priorities, refocusing goals and reorganizing projects. Since I had recently become a more active contributor to KDE, I was keen to learn about the direction things will take over the next year. It was also exciting to see the new design direction in Union and Plasma Next!

A Personal Note

Speaking of goals, a personal one I've striven toward in my career is to contribute to something that improves the world, even if indirectly. It's not something I've always been able to do. It feels good to be able to work with a project that is open source, and is working to make the computing world more sustainable.

I'd also like to recognize all the wonderful, welcoming folks that make Akademy such a great conference. I've been to a few other tech conferences and events, with varying atmospheres and attitudes. I can say that people at Akademy are so helpful, and so nice - it made being among a lot of new faces in a foreign country a truly great experience.

The Conference

The keynote had powerful information about the impacts of improper tech disposal and what we can do to improve the situation. This highlighted the importance of the KDE Eco project, which aims to help reduce e-waste and make our projects more sustainable. Their new Opt Green initiative is going to take concrete steps toward this.

Some of the talks I attended:

  • KDE Goals - one talk about the last 2 years of goals and another revealing the new goals.
  • "Adapt or Die" - how containerized packaging affects KDE projects.
  • Union and styling in KDE's future.
  • Banana OS KDE Linux - why it's being developed and some technical planning.
  • Getting Them Early: Teaching Pupils About The Environmental Benefits Of FOSS - this strategy has been powerful for other projects (like Microsoft Windows, Google Chromebooks, Java), so I'm glad to see it for KDE.
  • Why and how to use KDE frameworks in non-KDE apps
  • KDE Apps Initiative
  • Cutting Gordian's "End-User Focus" vs. "Privacy" Knot - collecting useful user data while respecting privacy and security.
  • Plasma Next - Visual Design Evolution for Plasma
  • The Road to KDE Neon Core

BoF Sessions

Some of the BoF sessions I attended:

  • Decentralizing KUserFeedback
  • Streamlined Application Development Experience
  • Organizing the Plasma team, Plasma goals
  • Plasma "Next" initiative
  • Union: The future of styling in KDE
  • KWallet modern replacement
  • Video tutorial for BoF best practice (Kieryn roped me into this one)
  • Security Team BoF Notes

Thanks to everyone who made this year's Akademy such a wonderful experience. If anyone out there is thinking of attending next year, and can make it, I really recommend it. I'm hoping to be at Akademy 2025!

Contrary to popular belief, Akademy 2024 was not my first Akademy. KDE seems to keep tagged faces from Akademy group photos around, so I stumbled over some vintage pictures of me in 2006 (Glasgow) and 2007 (Dublin). At the time, I was an utter greenhorn with big dreams, vague plans, and a fair bit of social anxiety.

Dublin is where Aaron Seigo taught me how to eat with chopsticks. I continue to draw from that knowledge to this day.

And then I disappeared for 15 years, but now it's time for a new shot. This time around, I'm a little less green (rather starting to grey a little) and had a surprising amount of stuff to discuss with various KDE collaborators. Boy, is there no end of interesting people and discussion topics to be had at Akademy.

"Oh, you're the PowerDevil guy"

You're not wrong, I've been contributing to that for the past year. As such, one highlight for me was to meet KDE's hardware integration contractor Natalie Clarius in person and sync up on all things power-related.

Akademy's no-photo badge makes its wearer disappear from selfies. AI magic, perhaps?

Natalie presented a short talk and hosted a BoF session ("Birds of a Feather", a.k.a. workshop) about power management topics. We had a good crowd of developers in attendance, clearing up the direction of several outstanding items.

Power management in Plasma desktops is in decent shape overall. One of the bigger remaining topics is (re)storing battery charge limits across reboots, for laptops whose firmware doesn't remember those settings. There is a way forward that involves making use of the cross-desktop UPower service and its new charge limit extensions. This will give us the restoring feature for free, but we have to add some extra functionality to make sure that charge threshold customization remains possible for Plasma users after switching over.

We also looked at ways to put systems back to sleep that weren't supposed to wake up yet. Unintended wake-ups can happen e.g. when the laptop in your backpack catches a key press from the screen when it's squeezed against the keyboard. Or when one of those (conceptually neat) "Modern Standby" implementations on recent laptops are buggy. This will need a little more investigation, but we've got some ideas.

I talked to Bhushan Shah about power saving optimizations in Plasma Mobile. He is investigating a Linux kernel feature designed for mobile devices that saves power more aggressively, but needs support from KDE's power management infrastructure to make sure the phone will indeed wake up when it's meant to. If this can be integrated with KDE's power management service, we could improve battery runtimes for mobile devices and perhaps even for some laptops.

The friendly people from Slimbook dropped by to show off their range of Linux laptops, and unveiled their new Slimbook VI with KDE neon right there at the conference. Compared to some of their older laptops, build quality has improved leaps and bounds. Natalie and I grilled their BIOS engineer on topics such as power profiles, power consumption, and how to get each of their function keys to show the corresponding OSD popup.

KDE Slimbook VI shortly after the big reveal

"I'm excited that your input goal was chosen"

Every two years, the KDE community picks three "Goals" to rally around until the next vote happens. This time, contributors were asked to form teams of "goal champions" so that the work of educating and rallying people does not fall on the shoulders of a single poor soul per goal.

So now we have eight poor souls who pledge to advance a total of three focus areas over the next two years. Supported by KDE e.V.'s new Goals Coordinator, Farid. There's a common thread around attracting developers, with Nicolas Fella and Nate Graham pushing for a "Streamlined Application Development Experience" and the KDE Promo team with a systematic recruitment initiative titled "KDE Needs You". And then there's this other thing, with a strict end user focus, briefly introduced on stage by guess who?

Yup. Hi! I'm the input guy now.

Turns out a lot of people in KDE are passionate about support for input devices, virtual keyboards and input methods. Gernot Schiller (a.k.a. duha) realized this and assembled a team consisting of himself, Joshua Goins (a.k.a. redstrate) as well as Yours Truly to apply as champions. The goal proposed that "We care about your Input" and the community's response is Yes, Yes We Do.

As soon as the new goals were announced, Akademy 2024 turned into an Input Goal Akademy for me. In addition to presenting the new goal on stage briefly, we also gathered in a BoF session to discuss the current state, future plans and enthusiastic volunteering assignments related to all things input. I also sat down with a number of input experts to learn more about everything. There is still much more I need to learn.

It's a sprawling topic with numerous tasks that we want to get done, ranging from multi-month projects to fixing lots of low-hanging fruit. This calls for a series of dedicated blog posts, so I'll go into more detail later.

Join us at #kde-input:kde.org on Matrix or watch this space (and Planet KDE in general) for further posts on what's going on with input handling in KDE.

Look at the brightness side

KWin hacker extraordinaire Xaver Hugl (a.k.a. zamundaaa) demoed some of his color magic on a standard SDR laptop display. Future KWin can play bright HDR videos in front of regular SDR desktop content. Accurate color transformations for both parts without special HDR hardware; that's pretty impressive. I thought that HDR needed dedicated hardware support; turns out I was wrong, although better contrast and more brightness can still improve the experience.

I also got to talk to Xaver about touchpad gestures, learned about stalled attempts to support DDC/CI in the Linux kernel directly, and pestered him for a review to improve Plasma's D-Bus API for the new per-monitor brightness features. Also, the XDC conference in Montreal is happening in less than a month, featuring more of Xaver as well as loads of low-level graphics topics. Perhaps even input-related stuff. It's only a train ride from Toronto, maybe I'll drop by. Maybe not. Here's a medieval German town selfie.

Towering over the rooftops of Rothenburg ob der Tauber with Xaver, Jonathan Riddell, and two suspect KWin developers in the back

Thanks to the entire KWin gang for letting me crash their late-night hacking session and only throwing the last of us out at 2 AM after my D-Bus change got merged. Just in time for the Plasma 6.2 beta. I was dead tired on Thursday, totally worth it though.

Atomic KDE for users & developers

Plasma undoubtedly has some challenges ahead in order to bring all of its power and flexibility to an image-based, atomic OS with sandboxed apps (i.e. Flatpak/Snap). David Edmundson's talk emphasized that traditional plugins are not compatible with this new reality. We'll need to look into other ways of extending apps.

David Edmundson wildly speculating about the future

The good news is that lots of work is indeed happening to prepare KDE for this future. Baloo making use of thumbnailer binaries in addition to thumbnailer plugins. KRunner allowing D-Bus plugins in addition to shared libraries. Arjen Hiemstra's work-in-progress Union style being customizable through configuration rather than code. Heck, we even learned about a project called KDE Neon Core trying to make a Snap out of each and every piece of KDE software.

Going forward, it seems that there will be a more distinct line between Plasma as a desktop platform and KDE apps, communicating with each other through standardized extension points.

All of this infrastructure will come in handy if Harald Sitter's experimental atomic OS, KDE Linux (working title), is to become a success. Personally, I've long been hoping for a KDE-based system that I can recommend to my less technical family members. KDE Linux could eventually be that. Yes, Fedora Kinoite is also great.

KDE Linux: Useful to users, hardware partners, and... developers?

What took me by surprise about Harald's presentation was that it could be great even as a development platform for contributing to the Plasma desktop itself.

As a desktop developer, I simply can't run my Plasma development build in a container. Many functions interact with actual hardware so it needs to run right on the metal. On my current Arch system, I use a secondary user account with Plasma installed into that user's home directory. That way the system packages aren't getting modified - one does not want to mess with system packages.

But KDE Linux images contain the same system-wide build that I would make for myself. I can build an exact replacement with standard KDE tooling, perhaps a slight bit newer, and temporarily use it as system-wide replacement using systemd-sysext. I can revert whenever. KDE Linux includes all the development header files too, making it possible to build and replace just a single system component without building all the rest of KDE.

Different editions make it suitable for users anywhere between tested/stable (for family members) and bleeding edge (for myself as Plasma developer). Heck, perhaps we'll even be able to switch back and forth between different editions with little effort.

Needless to say, I'm really excited about the potential of KDE Linux. Even without considering how much work it can save for distro maintainers that won't have to combine outdated Ubuntu LTS packages with the latest KDE desktop.

Conclusion

There's so much else I've barely even touched on, like NLnet funding opportunities, quizzing Neal Gompa about KDE for Enterprise, Rust and Python binding efforts, Nicolas Fella being literally everywhere, Qt Contributor Summit, finding myself in a hostel room together with fellow KDE devs Carl & Kåre. But this blog post is already long enough. Read some of the other KDE blogs for more Akademy reports.

German bus stops have the nicest sunsets. Also rainbows!

Getting home took all day and jet lag isn't fun, but I've recovered enough to give another shot at bringing KDE software to the masses. You can too! Get involved, donate to KDE, or simply enjoy the ride and discuss this post on KDE Discuss.

Or don't. It's all good :)

Tuesday, 17 September 2024

Introduction

License information in source code is best stored in each file of the source code as a comment, if at all possible. That way the license metadata travels with the file even if it was copied from its original package/repository into a different one.

Client-side JavaScript, CSS and similar languages that make up a large chunk of the web are often concatenated, minified and even uglified in an attempt to make websites faster to load. In this process, comments are usually the first thing to get culled, since they are characters that serve no function in the program code itself.

Problem

The problem therefore is that typically when JavaScript, CSS (or similar client-side[1] code) is being built, it tends to lose not just comments that describe the code’s functionality, but also comments that carry licensing and copyright information. And since licenses (FOSS or not) typically require the text of the license and copyright notices[2] to be kept with the code, such removal can be problematic.

Proposal

The goal is to preserve copyright and licensing information of web front-end code even after minification in such a way that it makes FOSS license compliance[3] of web applications easier.

In addition, my proposal is intended to:

  • keep things as simple as possible;
  • keep things as short as possible;
  • not introduce any new specifications, but rely on well-established standards; and
  • not require any additional tooling, but rely on what is already in common use.

Essentially, my suggestion is literally as simple as wrapping every .js, .css and similar (e.g. .ts, .scss, …) file with SPDX snippet tags, following the REUSE specification, as follows:

At the very beginning of the file introduce an “important” comment block that starts the (SPDX) snippet and includes all the REUSE/SPDX tags that apply to this file, e.g.:

/*!
 * SPDX-SnippetBegin
 * SPDX-SnippetCopyrightText: © 2024 Hacke R. McRandom <hacker@mcrandom.example>
 * SPDX-License-Identifier: MIT
 */

And at the very end of the file introduce another “important” comment block that simply closes the (SPDX) snippet:

/*! SPDX-SnippetEnd */

… and that is basically it!

How and why this works (in theory)

This results in e.g. a .js file that would look something like this:

/*!
 * SPDX-SnippetBegin
 * SPDX-SnippetCopyrightText: © 2024 Hacke R. McRandom <hacker@mcrandom.example>
 * SPDX-License-Identifier: MIT OR Unlicense
 */

import half_of_npm

code_goes_here();

/*! SPDX-SnippetEnd */

and a .css file as follows:

/*!
 * SPDX-SnippetBegin
 * SPDX-SnippetCopyrightText: © 2020 Talha Mansoor <talha131@gmail.com>
 * SPDX-License-Identifier: MIT
 */

pre {
  overflow: auto;
  white-space: pre;
  word-break: normal;
  word-wrap: normal;
  color: #ebdbb2; /* This is needed due to a bug in Pygments: it does not wrap some parts of the code in some languages, like reST. This is a fallback. */
}

/*! SPDX-SnippetEnd */

All JavaScript, CSS, TypeScript, Sass, etc. files would look like that.

Then on npm run build (or whatever build system you use) the minifier keeps those tags where they are, because a leading ! is a widely recognized trigger for keeping a comment when minifying.
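
As a quick sanity check, here is a minimal sketch of that behaviour, assuming Node.js and the terser package (the input code is made up for illustration):

const { minify } = require("terser");

// Terser's "some" comment filter (its default) keeps comments that start
// with "!" or contain @license/@preserve, so the SPDX snippet tags survive.
const source = `
/*!
 * SPDX-SnippetBegin
 * SPDX-SnippetCopyrightText: © 2024 Hacke R. McRandom <hacker@mcrandom.example>
 * SPDX-License-Identifier: MIT
 */
function add(a, b) { return a + b; }
/*! SPDX-SnippetEnd */
`;

minify(source, { format: { comments: "some" } }).then(({ code }) => {
  console.log(code); // the /*! ... */ blocks are still present in the output
});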

So if all the files are tagged as such[4], the minified barf[5] you get should include all the SPDX tags, in order and in the right place, so you can see where each license/copyright starts and stops applying in the gibberish.

And if the build pulls in stuff that does not use REUSE (snippets) yet, you will still be able to tell it apart[6], since it will be the barf between the SPDX-SnippetEnd of the previous and the SPDX-SnippetBegin of the next properly marked barf.

Is this really enough?

OK, so now we know the start and end of a source code file that ended up in the minified barf. But are the SPDX-SnippetCopyrightText and SPDX-License-Identifier enough?

I think so, yes.

If I chose to express my copyright notice using an SPDX tag – especially if I followed the format that pretty much all copyright laws prescribe – that should be no problem[7].

The bigger question is whether communicating the license solely through the SPDX IDs[8] is enough, since you would technically not be including the whole license text(s). Honestly, I think it is fine.[9] At this stage SPDX is not just long-established in the industry and community, but is also a formal international standard (ISO/IEC 5962:2021). Practically from its beginning – and probably the most known part of the spec – unique names/IDs of licenses and the equivalent canonical texts of those licenses have been part of SPDX. Which means that if I see SPDX-License-Identifier: MIT I know it specifically means the text that is on https://spdx.org/licenses/MIT.html. Ergo, as long as you are using licenses from the SPDX License List in these tags, all the relevant info is present.
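
To illustrate that last point, the canonical text is always derivable from the ID. The helper below is hypothetical and deliberately naive (real SPDX license expressions can be nested):

// Hypothetical helper: resolve the IDs in a simple SPDX license expression
// to their canonical text URLs on spdx.org. Deliberately naive: it only
// splits on the AND/OR/WITH operators and ignores nested parentheses.
function spdxLicenseUrls(expression) {
  return expression
    .split(/\s+(?:AND|OR|WITH)\s+/)
    .map((id) => `https://spdx.org/licenses/${id.replace(/[()]/g, "")}.html`);
}

console.log(spdxLicenseUrls("MIT OR Unlicense"));
// [ "https://spdx.org/licenses/MIT.html", "https://spdx.org/licenses/Unlicense.html" ]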

How to keep the tags – Overview of popular minifiers

As mentioned, most minifiers tend to remove comments by default to conserve space. But there is a way to retain license comments (or other important comments), and this method has existed for over a decade now!

I have done some research into how different minifiers deal with this. Admittedly, mostly by reading through their documentation. Due to my lack of skills, I did not manage to test out all of them in practice.

But at least theoretically the vast majority of the minifiers that I was told are common (plus a few more I found) seem to support at least one way of keeping important – or even explicitly copyright/license-relevant – comments.

minifier                  JSDoc-style / @license   YUI-style / /*!
Google Closure Compiler   ✔️                       ✔️ [11]
Esbuild                   ✔️                       ✔️
JShrink                   ✔️                       ✔️
SWC [10]                  ✔️                       ✔️
Terser                    ✔️                       ✔️
UglifyJS                  ✔️                       ✔️
YUI Compressor            –                        ✔️
Bun                       ❌ (🛠️?)                 ❌ (🛠️?)

From what I can tell it is only Bun that does not support any way to (selectively) preserve comments. There is a ticket open to implement (at least) the ! method though.

While not themselves minifiers, module bundlers do call and drive minifiers, so how they configure them matters just as much.

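To give one example on the bundler side, here is a sketch using esbuild's JS API (the file names are made up); esbuild calls such comments “legal comments” and can be told to keep them inline in the minified output:

const esbuild = require("esbuild");

// esbuild treats /*! ... */ and @license/@preserve comments as "legal
// comments"; legalComments: "inline" keeps them where they appear in the
// minified output. Entry point and output file names are made up.
esbuild.build({
  entryPoints: ["src/main.js"],
  bundle: true,
  minify: true,
  legalComments: "inline",
  outfile: "public/build/bundle.js",
});
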
From this overview it seems like using the /*! comment method is our best option – it is short, the most widely supported, and not loaded with other meaning.

More details on both styles below.

Using @license / JSDoc-style

JSDoc is a markup language used to annotate JavaScript source code files with in-code documentation, from which documentation is then generated.

Looking at the JSDoc specification, a few keywords seem relevant, but the best choice would clearly be @license. To quote from the spec itself:

The @license tag identifies the software license that applies to any portion of your code.

You can use any text to identify the license you are using. If your code uses a standard open-source license, consider using the appropriate identifier from the Software Package Data Exchange (SPDX) License List.

Some JavaScript processing tools, such as Google's Closure Compiler, will automatically preserve any JSDoc comment that includes a @license tag. If you are using one of these tools, you may wish to add a standalone JSDoc comment that includes the @license tag, along with the entire text of the license, so that the license text will be included in generated JavaScript files.
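
So a JSDoc-style variant of the snippet header from above might look like this (same made-up copyright holder as before):

/**
 * @license
 * SPDX-SnippetBegin
 * SPDX-SnippetCopyrightText: © 2024 Hacke R. McRandom <hacker@mcrandom.example>
 * SPDX-License-Identifier: MIT
 */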

Using /*! / YUI-style

This other style seems to originate from Yahoo! UI Compressor 2.4.8 (also from 2013), to quote its README:

C-style comments starting with /*! are preserved. This is useful with comments containing copyright/license information. As of 2.4.8, the '!' is no longer dropped by YUICompressor. For example:

/*!
* TERMS OF USE - EASING EQUATIONS
* Open source under the BSD License.
* Copyright 2001 Robert Penner All rights reserved.
*/

remains in the output, untouched by YUICompressor.

Many other projects adopted this; some extended it to also cover the single-line //!, but others have not.

Also note that YUI itself does not use the double-asterisk /** tag (if it did, it would be /**!), whereas in JSDoc (and JavaDoc) /** is typically the starting tag of a documentation-relevant comment block.

So among the YUI-style tags, (multi-line) C-style comments that start with /*! seem the most consistently used.

And as YUI-style seems to be the most commonly implemented way to tag and preserve (licensing-relevant) comments in JS, it would seem prudent to adopt it for our purposes – to preserve REUSE-standardised tags to mark license and copyright information in files and snippets.

A few PoCs I tried

So far the theory …

But when it comes to testing it in practice, things get a bit messy, and I very quickly reach the limits of my JS skills.

I have tried a few PoCs and ran into mixed, yet promising, results so far.

Most of the issues, I assume, are fixable simply by changing the settings of the build tools accordingly.

It is entirely possible that a lot of the issues are PEBKAC as well.

Svelte + Rollup + Terser

The simplest PoC I tried is a Svelte app that uses Rollup as a build tool and Terser as the minifier – kindly offered by Oliver “oliwerix” Wagner as a guinea pig. I left the settings as they were, and the results are mostly fine.

First we pull the 29c9881 commit and build it with npm install; npm run build.

In public/build/ we have three files: bundle.css, bundle.js, bundle.js.map.

The bundle.css file does not have any SPDX-* tags, and I suspect this is because it consists solely of 3rd party components, which do not use these tags yet. The public/global.css is still referred to separately in public/index.html and retains the SPDX-* tags in its non-minified form. So that is fine, but the minified CSS would need further testing.
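
That further testing could be as simple as the following sketch, assuming the clean-css package (which, as far as I can tell, preserves special /*! comments by default):

const fs = require("fs");
const CleanCSS = require("clean-css");

// clean-css keeps "special" /*! ... */ comments by default; the option is
// spelled out here for clarity. The input file name matches the PoC above.
const input = fs.readFileSync("public/global.css", "utf8");
const output = new CleanCSS({
  level: { 1: { specialComments: "all" } },
}).minify(input);

console.log(output.styles); // minified CSS, SPDX tags intact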

The bundle.js file contains the minified JS and the SPDX-* tags remain there, but with one SPDX-SnippetEnd being misplaced.

If we compare e.g. rg SPDX- src/*.js [12]:

src/store.js
2: * SPDX-SnippetBegin
3: * SPDX-SnippetCopyrightText: 1984 Winston Smith <win@smith.example>
4: * SPDX-License-Identifier: Unlicense
18:/*! SPDX-SnippetEnd */

src/main.js
2: * SPDX-SnippetBegin
3: * SPDX-SnippetCopyrightText: © 2021 Test Dummy <dummy@test.example>
4: * SPDX-License-Identifier: BSD-2-Clause
17:/*! SPDX-SnippetEnd */

… and rg SPDX- public/build/bundle.js

3:     * SPDX-SnippetBegin
4:     * SPDX-SnippetCopyrightText: 1984 Winston Smith <win@smith.example>
5:     * SPDX-License-Identifier: Unlicense
8:/*! SPDX-SnippetEnd */
10:/*! SPDX-SnippetEnd */
13:     * SPDX-SnippetBegin
14:     * SPDX-SnippetCopyrightText: © 2021 Test Dummy <dummy@test.example>
15:     * SPDX-License-Identifier: BSD-2-Clause

… it is clear that something is amiss. A snippet cannot end before it begins.

But when checking the public/build/bundle.js.map SourceMap, we again see the SPDX-* tags in order just fine.

I would really like to know what went wrong here.

React Scripts (+ WebPack + Terser)

Before that I tried to set up a “simple” React Scripts app with the help of my work colleague, Carlos “roclas” Hernandez.

Here, again, I am getting mixed results out of the box.

First we pull the af54954 commit and build it with npm install; npm run build.

On the CSS side, we see that there are two files in build/static/css/, namely: main.05c219f8.css and main.05c219f8.css.map.

Both the main.05c219f8.css and its SourceMap retain the SPDX-* tags where we want them, so that is great!

On the JS side it gets more complicated though. In build/static/js/ we have several files now, and if we pair them up:

  • 453.2a77899f.chunk.js and 453.2a77899f.chunk.js.map
  • main.608edf8e.js, main.608edf8e.js.LICENSE.txt and main.608edf8e.js.map

The 453.2a77899f.chunk.js* files contain no SPDX-* tags. Honestly, I do not know where they came from, but I assume it is again 3rd party components, which do not use these tags yet. So we can ignore them.

But it is the main.608edf8e.js* files that we are interested in.

Unfortunately, it is here that it gets a bit annoying.

It seems React Scripts is quite opinionated and hard-codes its preferences when it comes to minification etc. So even though it is easy to set up WebPack and Terser to preserve (important) comments in the code itself, React Scripts forces its own settings.

What this results in then is the following:

  • main.608edf8e.js is cleaned of all comments – no SPDX-* tags here;
  • but now main.608edf8e.js.LICENSE.txt has all the SPDX-* tags as well as other important comments (e.g. @license blocks from 3rd party components);
  • and as for the main.608edf8e.js.map SourceMap, it includes SPDX-* tags, as expected.

The really annoying bit is that main.608edf8e.js.LICENSE.txt is not mapped; it is just a dump of all the license-related comments. So that does not help us here.

There is a workaround by injecting code and settings using Rewire, but so far I have not managed to set it up correctly. I am sure it is possible, but I gave up after it took way too much of my time already.
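
For reference, this is a sketch of the webpack minimizer settings one would want to inject – assuming terser-webpack-plugin, which React Scripts uses internally as far as I can tell; actually getting Rewire to apply them is the part I could not crack:

const TerserPlugin = require("terser-webpack-plugin");

module.exports = {
  optimization: {
    minimize: true,
    minimizer: [
      new TerserPlugin({
        // Do not move license comments into a separate *.js.LICENSE.txt file ...
        extractComments: false,
        terserOptions: {
          format: {
            // ... and keep /*! ... */ comments (and thus the SPDX tags) in place.
            comments: /^\s*!|SPDX-/,
          },
        },
      }),
    ],
  },
};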

Some early thoughts on *.js.map and *.js.LICENSE.txt

If the license and copyright info is missing from the minified source code, but it is there in the *.js.map SourceMap (spec), I think that is better than nothing, but I am leaning towards it not being enough for the goal we are trying to reach here.
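
For what it is worth, mapping a position in the minified file back to its source is mechanical; here is a sketch using Mozilla's source-map library (the line/column values are made up):

const fs = require("fs");
const { SourceMapConsumer } = require("source-map");

// Resolve where a given position in the minified bundle originated.
const rawMap = JSON.parse(fs.readFileSync("public/build/bundle.js.map", "utf8"));

SourceMapConsumer.with(rawMap, null, (consumer) => {
  // Made-up position of an SPDX-SnippetBegin comment in the minified file.
  const pos = consumer.originalPositionFor({ line: 3, column: 0 });
  console.log(pos); // e.g. { source: "src/store.js", line: 2, column: 0, ... }
});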

Similarly, when the minifier simply shoves all the license comments into a separate *.js.LICENSE.txt file, removing them from the *.js file and without any way to map the license and copyright comments back to the source code, I do not see how this is much more useful than the *.js.map itself.

So far, it seems to me like this is a problem caused by some frameworks (e.g. React Scripts) hard-coding their preferences when it comes to minification, without an easy way to override it.

But if there was a *.js.LICENSE.txt (or equivalent) that was mapped[13] via SourceMaps, so one could figure out which license comment matches which source code block in the minified code, I would be inclined to take that as potentially good enough.

Future ideas

Once the base issue of preserving SPDX tags in minified (web front-end) code in proper places is solved, we can expand it to make it even more useful.

Here is a short list of ideas that have already popped up. I am keeping them brief, to not detract too much from the base problem.

Nothing stops us from adding more relevant information in these tags – in fact, as long as it is an SPDX tag, that would be in line with both the SPDX standard and the REUSE spec. A perfect candidate to include would be something to designate the origin or provenance of the package the file came from – e.g. using PackageURL (see the hypothetical sketch after the list below).

To make this even more useful in practice, it is entirely imaginable that build tools could help generate or populate these tags and thereby inject information themselves. Some early ideas:

  • for (external) packages that do not use REUSE/SPDX Snippet Tags, the build tool could be ordered to generate them from REUSE/SPDX File Tags;
  • same, but to pull the information from a different source (e.g. LICENSE file) – that might be a bit dubious when it comes to exactness though;
  • the above-mentioned PackageURL (or other origin identifier) could be added by the build tool.
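
Purely as a hypothetical illustration of the first and third idea – neither the tag name nor its use is standardised yet – an injected header could look like:

/*!
 * SPDX-SnippetBegin
 * SPDX-SnippetCopyrightText: © 2024 Hacke R. McRandom <hacker@mcrandom.example>
 * SPDX-License-Identifier: MIT
 * (hypothetical tag, not standardised:)
 * SPDX-SnippetOrigin: pkg:npm/some-component@1.2.3
 */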

All of the above are very early ideas, and some could well turn out to be too error-prone to be useful, but they should be kept in mind and discussed once the base issue is solved.

Open questions & Call for help and testers

As stated, I am not very savvy when it comes to writing web front-ends, so at this point this project would really benefit from people with experience in building them taking a look.

If anyone knows how to get React and any other similarly opinionated frameworks to not create the *.js.LICENSE.txt file, but keep the important comments in code, that would be really useful.

If you are a legal or license compliance expert: while I am quite confident in the reasoning behind this, feedback is very welcome, so do try to poke holes in my logic.

If you are a technical person (especially a front-end developer), please try applying this to code in different build environments and let me know what works and what breaks. We need more and better PoCs.

If you have proposed fixes, even better!

Comments, ideas, suggestions, … welcome.

Ultimately, my goal is to come up with a solution that works and requires (ideally) no changes in tooling and specifications.

If that means abandoning this proposal and finding a better one, so be it. But it has to start somewhere that is half-way doable.

Credits

While I did spend quite a bit of time on this, this would not exist without prior work and a lot of help from others.

First of all, the basic idea of treating each file as a snippet that just happens to span a whole file was originally proposed to me by José Maria “chema” Balsas Falaguera in 2020, and we worked together on an early PoC on how to apply REUSE to a large-ish JS + CSS code-base … before REUSE snippets (and later SPDX snippets) came to be.

In fact, it was Chema’s idea that sparked my quest to bring snippet support first to REUSE, and later to the SPDX specifications.

At REUSE I would specifically like to thank Carmen Bianca Bakker, Max Mehl, and Nico Rikken for not refusing this idea upfront and also for being great intellectual sparring partners on this adventure.

And at SPDX it was Alexios “zvr” Zavras who saw the potential and helped me draft SPDX tags for snippets to the point where it got accepted.

I would also like to thank Henrik “hesa” Sandklef, Maximilian Huber and Philippe Ombredanne for their feedback and some proposals on how to expand this further later on.

Of course, none of this would be possible without everyone behind SPDX, REUSE, YUI, and JSDoc.

I am sure I forgot to mention a few other people, and if that was you, I humbly apologise and ask you to let me know, so I can correct this issue.

hook out → wow, this took longer than expected … so many rabbit holes!


  1. Also called browser-side code. 

  2. In droit d’auteur / civil law jurisdictions, the removal of a copyright statement could also qualify as a violation of civil law or even a criminal offence … when the copyright holder is an author (a physical person), and not a legal entity. 

  3. Both from the point-of-view of a license compliance expert and their tooling; and from (at least) the spirit of FOSS licenses. 

  4. Barring any magical weirdness that happens during the minifying/uglifying that might mess things up in things I did not predict. Here I need front-end experts’ help and more testing. 

  5. If unreadable binary is a blob, “barf” sounded like an appropriate equivalent for nigh-unreadable minified code. 

  6. Here is where the added functionality of treating every file as a (very-long) snippet pays off. If we did not mark the start and end of each file, we would not be able to tell that in the minified code. In that case, a consequence would be that we would falsely assume a piece of minified code would fall under a certain license, whereas it would simply be part of a file that was unmarked (e.g. pulled in from a 3rd party component). 

  7. Probably Apache-2.0’s requirement to include the NOTICE file would not be satisfied, but that is not a new issue. And there are three ways to make Apache-2.0 happy, so it is solvable if you are diligent. 

  8. The value of an SPDX-License-Identifier tag is an SPDX License Expression, which can either be just a single license or a logical statement of several licenses. (Hat-tip to Peter Monks for pointing this out.) 

  9. I do accept that there is a tiny bit of handwaving involved here, but honestly, I think this should be good enough, given the limitations of minified code. If the consensus ends up that full license texts need to be shipped as well, this can be done via hyperlinks, and those can be generated from the SPDX IDs. Or if you really want to be pedantic about it, upload and hyperlink the (usually missing) license texts from each JS/CSS package itself. 

  10. SWC is primarily a compiler and module bundler, but I keep it here because it does its own minification instead of relying on an external tool. 

  11. It is not documented, but Google Closure Compiler does indeed support ! and has treated it explicitly as a license block (without any additional JSDoc parsing) since 2016. 

  12. I use RipGrep in this example, but normal grep should work too. 

  13. And I do not know of any that does that. 

From Fri, Sep 6th to Tue, Sep 10th I attended the 2024 edition of KDE Akademy in Würzburg, Germany. I booked a room in a hotel downtown, the same place where CoLa, a fellow KDE developer, stayed. Since parking is rather expensive in downtown areas, I left the car in front of the university building where the event was about to start on Saturday morning and took the bus into the city to the hotel. We all used the bus in the coming days, and one would always meet some KDE folks, easy to spot wearing their lanyards.

On Friday night the KDE crowd gathered at a pub in the city, and it was great to see old friends and also meet new people. At some point I was talking to Carlos; it turned out that he had already made some contributions to KMyMoney - the git log says it was in 2022. As more and more fellow KDE developers arrived, the place became louder and louder and conversations were not easy anymore. Too bad that some of us got stranded at various places on their way to Würzburg and did not make it until Saturday.

Conference

On Saturday, the conference program started with a keynote by Joanna Murzyn, who took us on a journey from crucial mineral mining hubs to electronic waste dumpsters, uncovering the intricate connections between code, hardware, open source principles, and social and environmental justice. We discovered how the KDE community’s work is shaping a more resilient, regenerative future, and explored ways to extend those principles to create positive impact beyond the tech world.

On the first day, I took the opportunity to see the following talks:

  • Current Developments in KDE Hardware Integration
  • KDE to Make Wines — Using KDE Software on Enterprise Desktops a Return on Experience
  • KWin Effects: The Next Generation
  • Adapt or Die: How new Linux packaging approaches affect wider KDE
  • An Operating System of Our Own
  • What’s a Maintainer anyway?

The last talk of the day complemented the keynote in a nice way. In her talk entitled “Getting Them Early: Teaching Pupils About The Environmental Benefits Of FOSS”, KDE newcomer Nicole Teale presented the work she is doing introducing KDE/FOSS to pupils, with a focus on its environmental benefits. She shared ideas on how to get schools involved in teaching pupils about reusing old hardware with FOSS and presented some of the projects that have already been implemented in schools in Germany. This work is part of a project funded by the Umweltbundesamt (UBA) called “Sustainable Software For Sustainable Hardware”. The goal of this project is to reduce e-waste by promoting the adoption of KDE / Free & Open Source Software (FOSS) and raising awareness about the critical role software plays in the long-term, efficient use of hardware.

This becomes important in 2025, when Windows 10 runs out of support and Windows 11 requires new hardware, even though the existing hardware is still perfectly adequate for the needs of the majority of people. Linux and KDE to the rescue.

Saturday ended with pizza and beer at the university, as the booked beer garden cancelled the reservation due to approaching thunderstorms.

On Sunday, I saw the following talks:

  • Openwashing – How do we handle (and enforce?) OSS policies in products?
  • Opt In? Opt Out? Opt Green! KDE Eco’s New Sustainability Initiative
  • KDE’s CI and CD infrastructure
  • The Road to KDE Neon Core — Gosh! We’re surrounded by Snaps everywhere!

and of course the KDE Akademy award ceremony. In between those talks I had a chance to meet Julius Künzel and take a look at the problems we have in the KMyMoney project with the macOS CD builds. He spotted a few things, but I have not had the time to take care of them yet.

As is tradition, Sunday is also when the group picture is taken. Here’s this year’s edition:

CC-BY-SA 4.0 by Andy Betts

Birds of a feather sessions

On Monday and Tuesday I went to various BoFs and took the opportunity to join the git/GitLab presentation by Natalie Clarius. I learned a few subtleties of GitLab that I didn’t know before, so it was worth it. In the meantime I talked with a lot of people and did a small bit of hacking (one bug fixed).

Good-bye Akademy 2024 / Thank you volunteers

Tuesday afternoon it was time to wave good-bye to the fellow KDE people and drive back home, which I reached after an hour and a half, without delay (no traffic on the road). Hopefully, I will be able to join next time. The next stop will be the auditing of KDE accounting coming up in Berlin in a few weeks.

A big thank you goes out to the numerous volunteers who made this event happen. The team around seaLne just did a marvelous job.