
Friday, 23 August 2024

Since Plasma 5.18, nearly five years ago, Plasma has shipped with a "telemetry" system. It’s opt-in, allowing users to send a small amount of data back to us.

Was it useful or worth it? It's a question that comes up occasionally, and the answer is mixed. I believe it showed real potential, though the reality of our implementation was somewhat underwhelming and didn't really deliver. There are many lessons learned that are worth sharing with other projects that might undertake similar endeavours.

The good bits

Where we had data available for the topics being discussed, it worked. To give two concrete examples from memory:

  • A developer claimed, "No one is using a screen smaller than 1024x768," while bumping the minimum size of a window. This was proved wrong; the number of users at 800x600 or even 640x480 is surprisingly high. Still low as an overall percentage, but higher than you would ever intuitively think. Presumably, it's the default for a lot of virtual machines.

  • Four years ago, a developer claimed, "No one still uses only OpenGL2; we can change the code to do XYZ." A check of our user base showed it would have affected nearly 5% of our users, so the change was abandoned.

Interestingly, this last topic came up again very recently, as it held back colour management improvements, but in a narrower Wayland-only path and with a fallback. After checking metrics again, the usage was below 1%, so we went ahead with that merge request.

So, are metrics worth it just to stop developers and designers from making nonsense claims out of thin air? Absolutely! 90% of stats are just made up on the spot. Metrics are just as much about preventing changes as they are about sparking changes.

Indirect impact

The other important part is having a more general sense of the landscape. Currently, we have a lot of hard conversations about how quickly we push the move to Wayland. We have voices wanting to maintain support, and we have voices wanting to push quicker. These decisions shouldn't be made just by who can be the loudest. For every individual topic that came up in those discussions, I would always have in mind our current adoption value at the time.

Should we care about Nvidia? Knowing they make up about 25% of our user base makes the decision for us. I ran with an Nvidia card in one machine because of this, implementing Nvidia context loss handling and doing what we could during the Wayland transition.
We don't test BSD while developing Plasma, yet we let it hold us back. Should we care about it more or less? My opinion matches exactly what the metrics say.

Some stats and graphs:

Another role of metrics is being a conversation starter—people will fawn over a graph. More topics on Reddit will be about our Wayland usage than about the topic I'm trying to discuss. I'll focus on Wayland examples because that's a topic close to me.

Wayland adoption over time

I keep tabs on what our metrics show here. We can see the slow increase from under 20% to around 45% over time, showing the progress as both we and the Wayland ecosystem evolved. At Plasma 6, we switched the defaults, and a small bump can be seen in the graph, but 45% still seemed rather disappointing.

Filtering on just Plasma 6 reveals the true story:

There's still 20% of users switching away, or using a distro with a different default, or having carried over presets, but it's more promising. Interestingly, we can see that the GPU vendor distribution differs between X11 and Wayland.

Problems and lessons learned

Ultimately, despite the positive parts, it would be hard to call our telemetry a staggering success. For the handful of examples above, there are hundreds of cases where we had no data to back anything up. The range of data points was pitiful, and it wasn't often used.

The viewing tool is really, really important!

Data collection without viewing it is meaningless. As shown above, we often need to drill down and cross-reference filters to extract conclusions.

The original plan was to use the existing UI provided by kuserfeedback, which did not scale at all and quickly fell over. It was designed for high-fidelity data for a small number of users, not what we had.

In a rush, we pivoted to using Grafana because there was already a setup hosted.

It worked—ish, but it’s not designed for this, especially combined with our data structure, which was a manual NoSQL in normal SQL. Every graph needed to be written by hand, and it felt very much like fighting the system rather than working with it. Combined with the limited access permissions granted, it wasn't used by many people.

It being used is the number one indicator of its usefulness!

We need to find a tool specifically designed for visualization and querying datasets (maybe Apache Superset?).

Time-based data just makes noise

Our system sent updates every N days with basically the same data every time. This made writing queries way messier than it should have been. It never added any value; I would always be interested in what the current stats are. As described in the Wayland usage graphs above, if I'm making a Wayland decision, it doesn't matter what most people are using; it matters what people on the latest release are using. We always ended up having to add filters to focus on just the latest version.

The upgrade story needs planning in advance

The amount of data we collected was tiny—some GPU information, screen information, language, and a few other fragments. The plan was to slowly add more and more stats over time, but we hit a wall. Our UX involved the user selecting to enable metrics and it being a fire-and-forget operation.

What do we do when we want to add more data? For example, whether you use an analog or digital clock. We would need to prompt the user and reset their settings in the meantime, which is at odds with it being a setting. The whole thing became such an ordeal that it wasn't worthwhile.

Wrap up

The project didn't fail: when we had data and it was used, it worked. But overall our implementation fell short. I would like to open a discussion at Akademy on how we move forward with our current system, potentially starting from scratch and treating it more like a survey that we prompt users to auto-populate and submit each year.

Thursday, 22 August 2024

Hello everyone! Time flies and now we’re already in the final week of GSoC. In this blog post I’ll be sharing the progress I’ve made since my last update, focusing primarily on subtitle styling.

Subtitle Editor

The first thing I did was to enhance the existing subtitle editor. The updated editor now serves as an interface for editing ASS events, which include various components. With the new subtitle editor, we can easily modify elements such as the event’s layer, style, margins, and more. I’ve also simplified the effects section, allowing us to control subtitle scrolling by simply adjusting checkboxes and combo boxes for speed, direction, and range.

However, the most significant change is the text field and the buttons above it. To better understand these changes, it’s important to first introduce the relationship between ASS styles and events. In ASS files, each event must be assigned a valid style that applies to the entire event text. Additionally, ASS override tags are special text blocks within events that allow precise control over the styles of different parts of the text, rather than the entire text. (There are some exceptions, like “Set Position.”)

The text field has been enhanced to assist users in inputting ASS override tags using the provided buttons. For instance, when a user clicks the “Toggle Bold” button, tags are automatically inserted or adjusted to toggle the bold style for either the selected text or the text following the cursor if nothing is selected. Additionally, the text field features a highlighter that renders different parts of the tags in distinct styles, making them more distinguishable, and an auto-completer that lists all valid presets as we start typing a tag name.
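To make this concrete, here is a hypothetical ASS event line (not taken from Kdenlive's code or test data) showing what such an override block looks like; the {\b1}…{\b0} pair toggles bold for just the words it encloses:

Dialogue: 0,0:00:01.00,0:00:03.50,Default,,0,0,0,,This word is {\b1}bold{\b0} and the rest is not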

For those who prefer the previous subtitle editor, which only displays the rendered text, a “Simple Editor” is also available. This editor syncs with the normal editor but displays only the text without tags while rendering some basic tag effects. However, due to the complexities of ASS tag rules, style editing in the Simple Editor can sometimes behave unpredictably. So it’s best suited for simpler use cases before or after editing styles.

Subtitle Manager

Continuing from the previous improvements, the Subtitle Manager is now integrated with style management and has been divided into four sections: File, Event, Style, and Info, which correspond to the four main components of ASS subtitles. Each section, except for the File section, includes a sidebar for switching between different subtitle files. Additionally, when in the Style section, we can drag and drop a style item onto a subtitle file name in the sidebar to efficiently move or copy styles between files. The same functionality is available in the Event section, where we can move or copy an entire layer to another file.

Misc

Style Editor

A new widget, the Style Editor, was created to edit styles and provide a preview.

Convert Old Global Style

Old styles will now be automatically converted to the “Default” style in the new project. Font size, outline, and shadow will be scaled to maintain the original effects.

Different Default Styles for Layers

Now, we can assign different default styles to each layer, which will automatically be applied to a subtitle event when it’s created on the corresponding layer. This feature is especially useful for quickly building a subtitle file with multiple speakers, allowing each speaker to have a distinct style.

Summary

It has been a wonderful summer getting involved in the KDE community and contributing to Kdenlive! I may not be the best at coding, but I've learned a lot throughout this journey. Thanks to everyone who has given me guidance — Eugen Mohr, Farid Abdelnour, and especially my mentor, Jean-Baptiste Mardelle. While GSoC is coming to an end, my journey with KDE is just beginning. After these updates, I plan to continue improving subtitle functions, including making it easier for users to input more ASS override tags and refining the UI and user experience. See you in my next blog :)

Implementing an Audio Mixer, Part 1

Motivation

When using Qt Multimedia to play audio files, it’s common to use QMediaPlayer, as it supports a larger variety of formats than QSound and QSoundEffect. Consider a Qt application with several audio sources; for example, different notification sounds that may play simultaneously. We want to avoid cutting notification sounds off when a new one is triggered, and we don’t want to construct a queue for notification sounds, as sounds will play at the incorrect time. We instead want these sounds to overlap and play simultaneously.

Ideally, an application with audio has one output stream to the system mixer. This way in the mixer control, different applications can be set to different volume levels. However, a QMediaPlayer instance can only play one audio source at a time, so each notification would have to construct a new QMediaPlayer. Each player in turn opens its own stream to the system.

The result is a huge number of streams to the system mixer being opened and closed all the time, as well as QMediaPlayers constantly being constructed and destructed.

To resolve this, the application needs a mixer of its own. It will open a single stream to the system and combine all the audio into the one stream.

Before we can implement this, we first need to understand how PCM audio works.

PCM

As defined by Wikipedia:

Pulse-code modulation (PCM) is a method used to digitally represent sampled analog signals. It is the standard form of digital audio in computers, compact discs, digital telephony and other digital audio applications. In a PCM stream, the amplitude of the analog signal is sampled at uniform intervals, and each sample is quantized to the nearest value within a range of digital steps.

Here you can see how points are sampled in uniform intervals and quantized to the closest number that can be represented.

Pcm.svg

Image Source: Wikipedia

Description from Wikipedia: Sampling and quantization of a signal (red) for 4-bit LPCM over a time domain at specific frequency.

Think of a PCM stream as a humongous array of bytes. More specifically, it's an array of samples, which are either integer or float values and a certain number of bytes in size. The samples are these discrete amplitude values from a waveform, organized contiguously. Think of each element as being a y-value of a point along the wave, with the index representing a time offset from the start of the stream, advancing by a uniform time interval per sample.

Here is a graph of discretely sampled points along a sinusoidal waveform similar to the one above:

Discrete_cosine

Image Source: Wikimedia Commons

Description from Wikimedia Commons: Image of a discrete time sinusoid

Let’s say we have an audio waveform that is a simple sine wave, like the above examples. Each point taken at discrete intervals along the curve here is a sample, and together they approximate a continuous waveform. The distance between the samples along the x-axis is a time delta; this is the sample period. The sample rate is the inverse of this, the number of samples that are played in one second. The typical standard sample rate for audio on CDs is 44100 Hz - we can’t really hear that this data is discrete (plus, the resultant sound wave from air movement is in fact a continuous waveform).

We also have to consider the y-axis here, which represents the amplitude of the waveform at each sampled point. In the image above, the amplitude a is normalized such that −1 ≤ a ≤ 1. In digital audio, there are a few different ways to represent amplitude. We can't represent all real numbers on a computer, so the representation of the range of values varies in precision.

For example, let's say we have two different representations of the wave above: 8-bit signed integer and 16-bit signed integer. The maximum normalized value of 1 from the image above maps to 127 with 8-bit representation and to 32767 with 16-bit. Therefore, with 16-bit representation we have 256 times as many possible values to represent the same range; it is more precise, but the required size to store each 16-bit sample is double that of 8-bit samples.

We call the chosen representation, and thus the size of each sample, the bitdepth. Some common bitdepths are 16-bit int, 24-bit int, and 32-bit float, but there are many others in existence.

Let’s consider a huge stream of 16-bit samples and a sample rate of 44100 Hz. We write samples to the audio device periodically with a fixed-size buffer; let’s say it is 4096 bytes. The device will play each sample in the buffer at the aforementioned rate. Since each sample is a contiguous 2-byte short, we can fit 2048 samples into the buffer at once. We need to write 44100 samples in one second, so the whole buffer will be written around 21.5 times per second.
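As a quick sanity check of that arithmetic, here is a small, self-contained snippet (just for illustration, not part of the mixer) that derives the same numbers:

#include <cstdint>
#include <cstdio>

int main()
{
    constexpr int sampleRate = 44100;                    // samples per second
    constexpr int bufferSizeBytes = 4096;                // fixed-size device buffer
    constexpr int bytesPerSample = sizeof(std::int16_t); // 16-bit samples -> 2 bytes

    constexpr int samplesPerBuffer = bufferSizeBytes / bytesPerSample;            // 2048
    constexpr double buffersPerSecond = double(sampleRate) / samplesPerBuffer;    // ~21.5

    std::printf("%d samples per buffer, %.1f buffer writes per second\n",
                samplesPerBuffer, buffersPerSecond);
    return 0;
}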

What if we have two different waveforms though, and what if one starts halfway through the other one? How do we mix them so that this buffer contains the data from both sources?

Waveform Superimposition

In the study of waves, you can superimpose two waves by adding them together. Let’s say we have two different discrete wave approximations, each represented by 20 signed 8-bit integer values. To superimpose them, for each index, add the values at that index. Some of these sums will exceed the limits of 8-bit representation, so we clamp them at the end to avoid signed integer overflow. This is known as hard clipping and is the phenomenon responsible for digital overdrive distortion.

 x | Wave 1 (y1) | Wave 2 (y2) | Sum (y1 + y2) | Clamped Sum
 0 |         +60 |        −100 |           −40 |         −40
 1 |        −120 |         +80 |           −40 |         −40
 2 |         +40 |         +70 |          +110 |        +110
 3 |        −110 |        −100 |          −210 |        −128
 4 |         +50 |        −110 |           −60 |         −60
 5 |        −100 |         +60 |           −40 |         −40
 6 |         +70 |         +50 |          +120 |        +120
 7 |        −120 |        −120 |          −240 |        −128
 8 |         +80 |        −100 |           −20 |         −20
 9 |         −80 |         +40 |           −40 |         −40
10 |         +90 |         +80 |          +170 |        +127
11 |        −100 |         −90 |          −190 |        −128
12 |         +60 |        −120 |           −60 |         −60
13 |        −120 |         +70 |           −50 |         −50
14 |         +80 |        −120 |           −40 |         −40
15 |        −110 |         +80 |           −30 |         −30
16 |         +90 |        −100 |           −10 |         −10
17 |        −110 |         +90 |           −20 |         −20
18 |        +100 |        −110 |           −10 |         −10
19 |        −120 |        −120 |          −240 |        −128

Now let’s implement this in C++. We’ll start small, and just combine two samples.

Note: we will use qint types here, but qint16 will be the same as int16_t and short on most systems, and similarly qint32 will correspond to int32_t and int.

qint16 combineSamples(qint32 samp1, qint32 samp2)
{
    const auto sum = samp1 + samp2;

    if (std::numeric_limits<qint16>::max() < sum)
        return std::numeric_limits<qint16>::max();

    if (std::numeric_limits<qint16>::min() > sum)
        return std::numeric_limits<qint16>::min();

    return sum;
}

This is quite a simple implementation. We use a function combineSamples and pass in two 16-bit values, but they will be converted to 32-bit as arguments and summed. This sum is clamped to the limits of 16-bit integer representation using std::numeric_limits in the <limits> header of the standard library. We then return the sum, at which point it is re-converted to a 16-bit value.
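As a minimal usage sketch (assuming the combineSamples function above is in scope), the clamping behaviour looks like this:

// No clipping: the sum stays within the 16-bit range.
const qint16 quiet = combineSamples(1000, 2000);       // 3000

// Positive overflow: 30000 + 10000 = 40000 is clamped to 32767.
const qint16 loud = combineSamples(30000, 10000);      // 32767

// Negative overflow: -30000 + -10000 = -40000 is clamped to -32768.
const qint16 deep = combineSamples(-30000, -10000);    // -32768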

Combining Samples for an Arbitrary Number of Audio Streams

Now consider an arbitrary number of audio streams n. For each sample position, we must sum the samples of all n streams.

Let’s assume we have some sort of audio stream type (we’ll implement it later), and a list called mStreams containing pointers to instances of this stream type. We need to implement a function that loops through mStreams and makes calls to our combineSamples function, accumulating a sum into a new buffer.

Assume each stream in mStreams has a member function read(char *, qint64). We can copy one sample into a char * by passing it to read, along with a qint64 representing the size of a sample (bitdepth). Remember that our bitdepth is 16-bit integer, so this size is just sizeof(qint16).

Using read on all the streams in mStreams and calling combineSamples to accumulate a sum might look something like this:

qint16 accumulatedSum = 0;

for (auto *stream : mStreams)
{
    // call stream->read(char *, qint64)
    // to read a sample from the stream into streamSample
    qint16 streamSample;
    stream->read(reinterpret_cast<char *>(&streamSample), sizeof(qint16));

    // accumulate
    accumulatedSum = combineSamples(streamSample, accumulatedSum);
}

The first pass will add samples from the first stream to zero, effectively copying them into accumulatedSum. When we move to another stream, the samples from the second stream will be added to those copied values from the first stream. This continues, so the call to combineSamples for a third stream would combine the third stream's sample with the sum of the first two. We continue accumulating this way until we have combined samples from all the streams.

Combining All Samples for a Buffer

Now let’s use this concept to add all the samples for a buffer. We’ll make a function that takes a buffer char *data and its size qint64 maxSize. We’ll write our accumulated samples into this buffer, reading all samples from the streams and adding them using the method above.

The function signature looks like this:

void readData(char *data, qint64 maxSize);

Let’s achieve more efficiency by using a constexpr variable for the bitdepth:

constexpr qint16 bitDepth = sizeof(qint16);

There’s no reason to call sizeof multiple times, especially considering sizeof(qint16) can be evaluated as a literal at compile-time.

With the size of each sample and the size of the buffer, we can get the total number of samples to write:

const qint16 numSamples = maxSize / bitDepth;

For each stream in mStreams we need to read each sample up to numSamples. As the sample index increments, a pointer to the buffer data needs to also be incremented, so we can write our results at the correct location in the buffer.

That looks like this:

void readData(char *data, qint64 maxSize)
{
    // start with 0 in the buffer
    memset(data, 0, maxSize);

    constexpr qint16 bitDepth = sizeof(qint16);
    const qint16 numSamples = maxSize / bitDepth;

    for (auto *stream : mStreams)
    {
        // this pointer will be incremented across the buffer
        auto *cursor = reinterpret_cast<qint16 *>(data);
        qint16 sample;

        for (int i = 0; i < numSamples; ++i, ++cursor)
            if (stream->read(reinterpret_cast<char *>(&sample), bitDepth))
                *cursor = combineSamples(sample, *cursor);
    }
}

The idea here is that we can start playing new audio sources by adding new streams to mStreams. If we add a second stream halfway through a first stream playing, the next buffer for the first stream will be combined with the first buffer of this new stream. When we’re done playing a stream, we just drop it from the list.
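To sketch that idea in code (the class and stream type names here are placeholders, and the container choice is an assumption; the real stream type is implemented in Part 2), adding and dropping streams could look like this:

#include <QList>

class AudioStream; // placeholder for the stream type implemented in Part 2

class Mixer
{
public:
    void addStream(AudioStream *stream)
    {
        // From the next readData() call onwards, this stream is mixed in.
        mStreams.append(stream);
    }

    void removeStream(AudioStream *stream)
    {
        // A finished stream is simply dropped from the mix.
        mStreams.removeAll(stream);
    }

private:
    QList<AudioStream *> mStreams;
};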

Next Steps

In Part 2, we'll use Qt Multimedia to fully implement our mixer, connect to our audio device, and test it on some audio files.

The post Implementing an Audio Mixer, Part 1 appeared first on KDAB.

Tuesday, 20 August 2024

A while ago a colleague of mine asked about our crash infrastructure in Plasma and whether I could give some overview on it. This seems very useful to others as well, I thought. Here I am, telling you all about it!

Our crash infrastructure is comprised of a number of different components.

  • KCrash: a KDE Framework performing crash interception and preparation for handover to…
  • coredumpd: a systemd component performing process core collection and handover to…
  • DrKonqi: a GUI for crashes sending data to…
  • Sentry: a web service and UI for tracing and presenting crashes for developers

We will look at them in turn. This post introduces KCrash.

KCrash

KCrash, as the name suggests, is our KDE framework for crash handling. While it is a mid-tier framework and could be used by outside projects, it mostly doesn't make sense to do so, because some behavior is very KDE-specific.

It installs POSIX signal handlers to intercept crash signals and then prepares the crashed process for handover to coredumpd and DrKonqi. More on these two in another post. Once prepared, it sends the crash signal on to the next higher-level crash handler until the signal eventually reaches the default handler and causes the kernel to invoke the core pattern.

Before that can happen, a bunch of work needs doing inside KCrash. Most of it is quite boring, but also somewhat challenging.

You see, when handling a signal you need to only use signal-safe functions. The manpage explains very well why. This proves quite challenging at the level we usually are at (i.e. Qt) because it is entirely unclear what is and isn’t ultimately signal-safe under the hood. Additionally, since we are dealing with crash scenarios, we must not trigger new memory allocation, because the heap management may have had an accident.

To that end, KCrash has to use fairly low-level APIs. To make that easier to work with, there are actually two parts to KCrash:

  • The Initialization Stage
  • The Crash Stage

The Initialization Stage

Initialization is generally triggered by calling KCrash::initialize. You may already wonder what kind of initialization KCrash could possibly need. Well, the obvious one is setting up the signal handling. But beyond that, the init stage is also used to prepare us for the crash stage. I've already mentioned the serious constraints we will encounter once the signal hits, so we had best be prepared for that. In particular, we'll do as much of the work as possible during initialization. Most importantly, this includes copying QString content into pre-allocated char * instances so that we later only need to read existing memory. The second most important aspect is the metadata file preparation for use in…
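To illustrate the string-copying idea, here is a simplified sketch (not KCrash's actual code, and the names are made up): during initialization we are still allowed to allocate, so the QString content is copied into a plain C string that the signal handler can later read without touching the heap.

#include <QByteArray>
#include <QString>

#include <cstdlib>
#include <cstring>

// Written once during initialization, only ever read from the signal handler.
static char *s_appName = nullptr;

static void rememberAppName(const QString &name)
{
    const QByteArray utf8 = name.toUtf8();
    char *copy = static_cast<char *>(std::malloc(utf8.size() + 1));
    if (copy) {
        std::memcpy(copy, utf8.constData(), utf8.size() + 1); // includes the trailing '\0'
        s_appName = copy;
    }
}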

The Crash Stage

Once initialization has happened, we are ready for crashes. Ideally the application doesn’t crash, of course. 😉

But if it does the biggest task is rescuing our data!

Metadata

Inside KCrash we have the concept of Metadata: everything we know about the crashed application: the signal, process ID, executable, used graphics device… and so on and so forth. All this data is collected into a metadata file on-disk in ~/.cache/kcrash-metadata at the time of crash.

Here’s an example file:

[KCrash]
exe=/usr/bin/kwin_wayland
glrenderer=
platform=wayland
appname=kwin_wayland
apppath=/usr/bin
signal=11
pid=1353
appversion=6.1.80
programname=KWin
bugaddress=submit@bugs.kde.org

The actual fields vary depending on what is available for any given application, but it’s generally more or less what is shown in the example.

This metadata file will later be consumed by DrKonqi in an effort to obtain information that only existed at runtime inside the application - such as the version that was running, or whether it was running in legacy X11 mode.

Handoff

Once the metadata is safely saved to disk, KCrash simply calls raise(). This re-raises the signal into the default handler, and through that causes a core dump.
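In its simplest form, that handoff idea looks roughly like this (a sketch of the general pattern, not KCrash's actual implementation):

#include <csignal>

void handOffToDefaultHandler(int signum)
{
    // Restore the default disposition for this signal...
    std::signal(signum, SIG_DFL);
    // ...and re-raise it, so the kernel's core_pattern machinery takes over.
    std::raise(signum);
}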

What happens next is up to the system configuration as per the core manpage.

The recommended setup for distributions is that a crash handler be configured as core_pattern and that this handler consumes the crash. We recommend an implementation of the coredumpd and journald interfaces as that will then allow our crash handler to come in and log the crash with KDE.

So that was KCrash, the first in our four-step crash-handling pipeline. In the next blog post I’ll tell you all about the next one: coredumpd.

Konqi chasing after bugs

Sunday, 18 August 2024

A new Craft cache has just been published. The update is already available for KDE's CD; CI will follow in the next hours or days.

Please note that this only applies to the Qt6 cache. The Qt5 cache has been in LTS mode since April 2024 and does not receive major updates anymore.

Changes (highlights)

  • Qt 6.7.2
  • FFmpeg 7.0.1
  • llvm 18.1.8
  • boost 1.86.0
  • OpenSSL 3.3.1 (for Android too, which was still on 1.1.1v until recently)
  • CMake 3.30.0
  • Ninja 1.12.1
  • Removed qt-installer-framework (Windows)

About KDE Craft

KDE Craft is an open source meta-build system and package manager. It manages dependencies and builds libraries and applications from source on Windows, macOS, Linux, FreeBSD and Android.

Learn more on https://community.kde.org/Craft or join the Matrix room #kde-craft:kde.org

Saturday, 17 August 2024

It's been more than three weeks since the midterm summary, and the project is now nearing completion.

Currently, all the original features of Blinken have been fully implemented in the QML version. The remaining tasks involve UI adjustments, testing, and fixing potential bugs.

Over the past few weeks, I’ve been working on the following:

Integrating Blinken's Logic

The game logic of Blinken is handled by the BlinkenGame class from the original Blinken. The original code design is quite good, with most of the game logic encapsulated in this one class. The separation between the logic and the UI rendering is well done, so all I needed to do was connect the signals from this class in QML.

As for the audio playback in Blinken, the original code used the Phonon library, which is also open-source but does not support Android. Therefore, I replaced it with the QtMultimedia library, which provides cross-platform audio playback functionality.
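For reference, playing a short sound with QtMultimedia can be as simple as the following sketch (this is not Blinken's actual code, and the resource path is made up):

#include <QSoundEffect>
#include <QString>
#include <QUrl>

void playTone(QSoundEffect &effect)
{
    effect.setSource(QUrl(QStringLiteral("qrc:/sounds/tone1.wav"))); // hypothetical resource
    effect.setVolume(1.0);
    effect.play();
}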

Android Build for KF6 Applications

Some features of Blinken rely on libraries provided by the KF6 framework, such as KF6I18n and KF6Config. When cross-compiling to the Android platform, it's necessary to use the aarch64-linux versions of these libraries. If these libraries are not available on your system, you will encounter the following errors during compilation:

ld.lld: error: /usr/lib64/libKF6XmlGui.so.6.4.0 is incompatible with aarch64linux
ld.lld: error: /usr/lib64/libKF6ConfigWidgets.so.6.4.0 is incompatible with aarch64linux
……

However, the package manager on my Fedora distribution does not provide these versions. Compiling and installing them one by one from source is too cumbersome. On the advice of the community, I used Craft to handle cross-compilation.

For reference, here is the Craft tutorial: Craft - KDE Community Wiki

Important note: If you encounter installation failures, make sure to clear all contents under craft-kde-android before trying again, as leftover files may cause the installation to fail.

When installing, choose the Arm64 target architecture. If there are remnants from a previous failed installation, it may prevent the option to select the ARM64 architecture.

If you encounter issues like "Permission denied," you’ll need to disable SELinux:

sudo apt-get install selinux-utils 
sudo setenforce 0

Additionally, note that in the virtual machine invent-registry.kde.org/sysadmin/ci-images/android-qt67 provided by the community, the Java version is outdated, preventing the use of Gradle 8.6. You can either manually update the Java version in the Docker container or use an older version of Gradle.

To use Craft for building applications, you need to write a script called a Blueprint, which describes the libraries your application depends on. These scripts are relatively easy to write, and you can quickly get started by following the community documentation: Craft/Blueprints - KDE Community Wiki.

Using KF6 Framework Libraries

Some of the libraries originally used by Blinken are compatible with the Android platform, while others are not. By referring to the API Documentation, you can check which libraries are supported on Android. In Blinken, the following libraries are Android-compatible:

  • CoreAddons
  • GuiAddons
  • I18n
  • XmlGui

I needed to use these libraries in the QML version of Blinken.

The KF6 framework provides a convenient internationalization API, and the usage in QML is almost the same as in QWidget, which allowed me to directly reuse Blinken’s original multi-language support, saving a lot of time.

KConfig is used in Blinken to store high score information and settings. For the high scores, I needed to extract the HighScoreManager class from the original HighScoreDialog file, make some modifications, and then create a new high score interface in QML that connects the signals and slots of HighScoreManager. For the settings functionality, it's as simple as registering a KConfig singleton in QML:

qmlRegisterSingletonInstance<blinkenSettings>("org.kde.blinken", 1, 0, "BlinkenSettings", blinkenSettings::self());

KF6XmlGui was used in the original Blinken to create the About Blinken Page, About KDE Page, and Handbook Page. Although this library is Android-compatible, it is based on QWidget, while the main interface of Blinken is built with QML. Bringing in QWidget just for these pages didn't seem like a good idea. Luckily, for the Android platform, kirigami-addons provides this functionality. By incorporating it, I also brought in Kirigami, which helps optimize the UI.

After adding new dependencies, it’s important to modify the .kde-ci.yml file to support CI/CD. For more information: Infrastructure/Continuous Integration System - KDE Community Wiki.

Friday, 16 August 2024

Let’s go for my web review for the week 2024-33.


Why We Picked AGPL - ParadeDB

Tags: tech, foss, business

I wish more product companies would pick this license. Going for AGPL with a support and/or dual-license offering is a strong model in my opinion.

https://blog.paradedb.com/pages/agpl


Fellowship for Maintainers | Sovereign Tech Fund

Tags: tech, foss, economics

Interesting initiative. I’m looking forward to the results of this first pilot.

https://www.sovereigntechfund.de/programs/fellowship


Breaking Up Google

Tags: tech, google, monopoly, law, economics

Of course it sounds complicated to break Google up… but that's not the point. It's about avoiding its monopolistic position; the fact that it's complicated is just another symptom.

https://micro.webology.dev/2024/08/14/breaking-up-google.html


The Dying Web

Tags: tech, google, browser, web, standard

Yes, please let’s increase the market share of non-Chromium based browsers.

https://endler.dev/2024/the-dying-web/


Hacking the Scammers

Tags: tech, security, hacking

Someone set out to get revenge on the scammers; this makes for an interesting exploration.

https://blog.smithsecurity.biz/hacking-the-scammers


‘Sinkclose’ Flaw in Hundreds of Millions of AMD Chips Allows Deep, Virtually Unfixable Infections | WIRED

Tags: tech, cpu, amd, security

Luckily this kind of very low-level vulnerability is not too common and is difficult to exploit. But when one does get exploited, all bets are off and you can't trust your hardware anymore.

https://www.wired.com/story/amd-chip-sinkclose-flaw/


Why exploits prefer memory corruption

Tags: tech, security, memory

Interesting take, those bugs are more convenient to exploit. Logic bugs are too specific to easily exploit at scale.

https://pacibsp.github.io/2024/why-exploits-prefer-memory-corruption.html


Some thoughts on OpenSSH 9.8’s PerSourcePenalties feature

Tags: tech, security, ssh

Clearly a new OpenSSH feature to keep an eye on. This should improve the security of servers by default. That said, it needs to be in the wild a bit longer before we know how best to tune it.

https://utcc.utoronto.ca/~cks/space/blog/sysadmin/OpenSSHPerSourcePenaltiesThings


slow TCP connect on Windows

Tags: tech, windows, networking

This is indeed surprising behavior and specific to Windows. If you're wondering why TCP connect is slow and you have IPv6 support active, this might be why.

https://daniel.haxx.se/blog/2024/08/14/slow-tcp-connect-on-windows/


Recent Performance Improvements in Function Calls in CPython

Tags: tech, python, performance

Interesting dive into some of the performance improvements introduced into recent CPython releases.

https://blog.codingconfessions.com/p/are-function-calls-still-slow-in-python


Approximating sum types in Python with Pydantic

Tags: tech, python, type-systems

Here is an interesting use of Pydantic to properly model inputs.

https://blog.yossarian.net/2024/08/12/Approximating-sum-types-in-Python-with-Pydantic


Reflection-based JSON in C++ at Gigabytes per Second – Daniel Lemire’s blog

Tags: tech, c++, reflection, type-systems, performance

Compile time reflection in C++ will indeed be a big deal.

https://lemire.me/blog/2024/08/13/reflection-based-json-in-c-at-gigabytes-per-second/


High-precision date/time in SQLite

Tags: tech, time, databases, sqlite

Looks like a neat extension which can come in handy.

https://antonz.org/sqlean-time/


PostgreSQL masking and obfuscation tool

Tags: tech, databases, postgresql, tools

Looks like an interesting tool for creating anonymized pre-production environments.

https://greenmask.io/latest/


The fastest way to copy data between Postgres tables

Tags: tech, databases, postgresql

Need to duplicate data in Postgres? Several options are on the table.

https://ongres.com/blog/fastest_way_copy_data_between_postgres_tables/


blocking=render: Why would you do that?!

Tags: tech, web, browser, frontend, html

A new HTML attribute to keep an eye on. I can expect people to abuse it with hard to debug problems in the frontend if you don’t know it is there.

https://csswizardry.com/2024/08/blocking-render-why-whould-you-do-that/


Garbage Collection and Metastability - Marc’s Blog

Tags: tech, garbage-collector, performance, safety, memory

Interesting, it confirms garbage collectors can be the source of unrecoverable performance degradation in request based systems.

https://brooker.co.za/blog/2024/08/14/gc-metastable.html


Good Retry, Bad Retry: An Incident Story

Tags: tech, distributed, failure, recovery

Retries are becoming commonplace for dealing with transient errors. That said, they can be a problem for recovery from longer failures due to amplification. There are options on the table to solve this though.

https://medium.com/yandex/good-retry-bad-retry-an-incident-story-648072d3cee6


The Perils of Future-Coding

Tags: tech, design, complexity, performance

Or why anticipating too much is merely a gamble. You can be lucky, but how often will you be? Also, I agree that in such cases performance will be impacted in the longer term, leading to death by a thousand paper cuts.

https://www.sebastiansylvan.com/post/the-perils-of-future-coding/


How we deleted 4195 code files in 9 hours - by Anton Zaides

Tags: tech, technical-debt, organization, leadership, funny

I'm not sure the incentives are right… it's better to clean up as you go. Still, some places would benefit from such an event from time to time, and even if you clean up as you go, missed opportunities happen.

https://zaidesanton.substack.com/p/organizing-the-best-cleanathon-your


Stop Team Topologies. Reevaluating Team Topologies

Tags: tech, organization

Surprisingly, I bumped into this article as I'm wrapping up reading the Team Topologies book. It highlights fairly well some of the concerns I have with it, as well as where it shines. I think it's right to turn to the principles it's built on rather than use the model it proposes as a blueprint.

https://martyoo.medium.com/stop-team-topologies-fd954ea26eca


How to build a strategy

Tags: business, organization, management, strategy

It's bloody hard to build a strategy. This article is full of good wisdom for making one. It won't make things much easier, but at least you won't start in the wrong direction and will be able to tell whether what you produce is any good.

https://www.cultivatedmanagement.com/how-to-build-a-strategy/


Anime girl breaking the fourth wall

Tags: tech, blender, 3d, funny

Funny short video; I guess it also has some tutorial value for what you can do with Blender? (and no, you can't break the fourth wall with it)

https://www.youtube.com/watch?v=gTi_-HGtsDY



Bye for now!

Thursday, 15 August 2024

This will be a guide on how to generate Python bindings for your C++ library using Shiboken. Shiboken is a tool specifically created to build PySide, so it supports Qt code perfectly fine.

The steps described here require at least KDE Frameworks 6.8. I’ll use KUnitConversion as example, because it’s a small library.

We'll start by adding the build instructions. This part is mostly boilerplate code, as it's the same for any library (except for the obvious need to change the library name):

set(bindings_library "KUnitConversion")

set(wrapped_header ${CMAKE_SOURCE_DIR}/python/bindings.h)
set(typesystem_file ${CMAKE_SOURCE_DIR}/python/bindings.xml)

set(generated_sources
    ${CMAKE_CURRENT_BINARY_DIR}/KUnitConversion/kunitconversion_module_wrapper.cpp
    ${CMAKE_CURRENT_BINARY_DIR}/KUnitConversion/kunitconversion_wrapper.cpp
    ${CMAKE_CURRENT_BINARY_DIR}/KUnitConversion/kunitconversion_converter_wrapper.cpp
    ${CMAKE_CURRENT_BINARY_DIR}/KUnitConversion/kunitconversion_unit_wrapper.cpp
    ${CMAKE_CURRENT_BINARY_DIR}/KUnitConversion/kunitconversion_unitcategory_wrapper.cpp
    ${CMAKE_CURRENT_BINARY_DIR}/KUnitConversion/kunitconversion_updatejob_wrapper.cpp
    ${CMAKE_CURRENT_BINARY_DIR}/KUnitConversion/kunitconversion_value_wrapper.cpp)

ecm_generate_python_bindings(
    PACKAGE_NAME ${bindings_library}
    VERSION ${KF_VERSION}
    WRAPPED_HEADER ${wrapped_header}
    TYPESYSTEM ${typesystem_file}
    GENERATED_SOURCES ${generated_sources}
    DEPENDENCIES KF6::UnitConversion
    QT_VERSION ${REQUIRED_QT_VERSION}
    HOMEPAGE_URL "https://invent.kde.org/frameworks/kunitconversion"
    ISSUES_URL "https://bugs.kde.org/describecomponents.cgi?product=frameworks-kunitconversion"
    AUTHOR "The KDE Community"
    README ../README.md
)

target_link_libraries(${bindings_library} PRIVATE KF6UnitConversion)
install(TARGETS ${bindings_library} LIBRARY DESTINATION "${KDE_INSTALL_LIBDIR}/python-kf6")

Let’s see what each part does.

set(bindings_library "KUnitConversion")

set(wrapped_header ${CMAKE_SOURCE_DIR}/python/bindings.h)
set(typesystem_file ${CMAKE_SOURCE_DIR}/python/bindings.xml)

set(generated_sources
    ${CMAKE_CURRENT_BINARY_DIR}/KUnitConversion/kunitconversion_module_wrapper.cpp
    ${CMAKE_CURRENT_BINARY_DIR}/KUnitConversion/kunitconversion_wrapper.cpp
    ${CMAKE_CURRENT_BINARY_DIR}/KUnitConversion/kunitconversion_converter_wrapper.cpp
    ${CMAKE_CURRENT_BINARY_DIR}/KUnitConversion/kunitconversion_unit_wrapper.cpp
    ${CMAKE_CURRENT_BINARY_DIR}/KUnitConversion/kunitconversion_unitcategory_wrapper.cpp
    ${CMAKE_CURRENT_BINARY_DIR}/KUnitConversion/kunitconversion_updatejob_wrapper.cpp
    ${CMAKE_CURRENT_BINARY_DIR}/KUnitConversion/kunitconversion_value_wrapper.cpp)

The first line just defines the name of the Python library we’ll build later. Then we set the header file that includes all the necessary headers of the library. We’ll see that file later. The generated sources list is a bit more complicated, as you need to guess the names of the files generated by Shiboken. Fortunately, you can probably guess the pattern from the example above: the first two files are always the same (the name of the library + _module_wrapper or _wrapper) and the rest is the list of classes defined in your XML file (more about it later). The qt_libs variable just contains a list of the Qt modules that our library requires. You should list all of them, even if one depends on another, because otherwise Shiboken won’t be able to find the include directories.

ecm_generate_python_bindings(
    PACKAGE_NAME ${bindings_library}
    VERSION ${KF_VERSION}
    WRAPPED_HEADER ${wrapped_header}
    TYPESYSTEM ${typesystem_file}
    GENERATED_SOURCES ${generated_sources}
    DEPENDENCIES KF6::UnitConversion
    QT_VERSION ${REQUIRED_QT_VERSION}
    HOMEPAGE_URL "https://invent.kde.org/frameworks/kunitconversion"
    ISSUES_URL "https://bugs.kde.org/describecomponents.cgi?product=frameworks-kunitconversion"
    AUTHOR "The KDE Community"
    README ../README.md
)

This is the magic part. The ecm_generate_python_bindings function takes care of running Shiboken with all the required arguments, building the Python library and a wheel file to publish it on the Python Package Index (pypi.org). It has the following arguments:

  • PACKAGE_NAME: Name of the Python library.
  • VERSION: Version of the resulting library.
  • WRAPPED_HEADER: The header file we talked about above.
  • TYPESYSTEM: XML file with the type system information.
  • GENERATED_SOURCES: The list of files that Shiboken will generate.
  • DEPENDENCIES: A list of libraries that the bindings require, typically the library we’re building the bindings of.
  • QT_VERSION: The minimum required Qt version.
  • HOMEPAGE_URL: A URL to the homepage of the project.
  • ISSUES_URL: A URL where users can report bugs.
  • AUTHOR: The author of the library.
  • README: The README file for the Python library.
target_link_libraries(${bindings_library} PRIVATE KF6UnitConversion)
install(TARGETS ${bindings_library} LIBRARY DESTINATION "${KDE_INSTALL_LIBDIR}/python-kf6")

The last part links the C++ library with the Python bindings and installs it. Now let’s take a look at the header file:

#pragma once

// Make "signals:", "slots:" visible as access specifiers
#define QT_ANNOTATE_ACCESS_SPECIFIER(a) __attribute__((annotate(#a)))

#include <KUnitConversion/Converter>
#include <KUnitConversion/Unit>
#include <KUnitConversion/UnitCategory>
#include <KUnitConversion/Value>

Nothing exciting there, just the list of includes (and some Qt thing that I don’t understand).

The last file you need is the typesystem definition, where you tell Shiboken which things (classes, structs, enums, namespaces…) you want to include in your bindings and how it should interpret them. You can delete or rename functions, change the return type, modify the input parameters and many other things. You may want to take a look at the documentation because the list of possible options is very large.

<?xml version="1.0"?>
<typesystem package="KUnitConversion">
    <load-typesystem name="typesystem_core.xml" generate="no" />

    <namespace-type name="KUnitConversion">
        <enum-type name="CategoryId" />
        <object-type name="Converter" />
        <object-type name="Unit" />
        <object-type name="UnitCategory" />
        <enum-type name="UnitId" />
        <object-type name="UpdateJob" />
        <object-type name="Value" />
    </namespace-type>
</typesystem>

You need to load the typesystems of the Qt libraries that you are using so Shiboken can understand what your code is referring to. They come included with PySide.

That’s all you need to generate the Python bindings for your library. The last step is building the project as you usually do.

Update 2024-11-16: Updated the post with the released version of the module.

Wednesday, 14 August 2024

After a long wait, Plasma Dialer 24.08 is finally out. This release is based on Qt6 and contains 17 months of bug fixing as well as small improvements all over the place.

Screenshot of the dialer page

Screenshot of the empty call history

Packager Section

You can find the package on download.kde.org, and it has been signed with my (Carl's) GPG key.

Tuesday, 13 August 2024

This post is written on behalf of the LabPlot team. It’s different compared to what we usually publish on our homepage but we feel we need to share this story with our community.

Introduction

You might already know this, but finalizing a release for a project with the complexity and scope of LabPlot can be hard and exhausting. After our recent 2.11 release, we decided to take a short break, distance ourselves from coding, and take care of other, non-coding tasks, like discussions around the NLnet grant for LabPlot, our ongoing GSoC projects, the roadmap for the next release, improving our documentation, the gallery on the homepage and the article about LabPlot on Wikipedia. Don't worry, we're already back to coding and working on new features for the next release 🙂

The article about LabPlot on Wikipedia (we are talking about the ‘EN’ version here, but the situation is similar for other languages) was completely outdated and still contained information about LabPlot1 from Qt3/KDE3 times. The article became largely wrong with the introduction of LabPlot2 and with further developments in recent years. Among other things, the feature set described on Wikipedia was very far from being correct and complete in comparison to the descriptions for other applications of its type.

The current situation was clear for us and it was also evident what needed to be done. Let’s go ahead and improve the article, we thought. Hey! Being able to contribute and to share your knowledge with everybody is the advantage of Wikipedia, right? Easier said than done…

Key Takeaways

But before we begin:

  • Wikipedia itself points out that the purpose of Wikipedia is to benefit readers by acting as a comprehensive compendium that contains information on all branches of knowledge. For this purpose, as it is clearly stated on Wikipedia, “Wikipedia has many policies or what many consider “rules”. Instead of following every rule, it is acceptable to use common sense as you go about editing. Being too wrapped up in rules can cause a loss of perspective, so there are times when it is better to ignore a rule. Even if a contribution “violates” the precise wording of a rule, it might still be a good contribution.” Link: Use common sense.
  • According to Wikipedia there is no need to read any policy or guideline pages to start editing. The five pillars of Wikipedia are a popular summary of the most important principles. And the three of the pillars are formulated as follows: 1. Wikipedia is free content that anyone can use, edit, and distribute. 2. Wikipedia’s editors should treat each other with respect and civility. 3. Wikipedia has no firm rules. And If a rule prevents you from improving or maintaining Wikipedia, ignore it.
  • In Wikipedia's article on https://en.wikipedia.org/wiki/Wikipedia:Dispute_resolution it is stated that once sustained discussion begins, productively participating in it is a priority. Editors should focus on article content during discussions; comment on content, not the contributor. And when an editor finds a passage in an article that is biased, inaccurate, or unsourced, the best practice is to improve it rather than delete salvageable text.
  • I fully acknowledge these common-sense principles. I accept the fact that some phrases of the original version of the new content added by Dariusz, another core member of the LabPlot team, might have possibly infringed a less general rule on Wikipedia, and that's why he asked for constructive assistance, to no effect.
  • I can also accept the reality and the existence of different users with varying amounts of expertise, goodwill and power. The worst case is people contributing in a subversive manner over a long time to such an open project in order to gain more power and authority and to pursue completely different, evil goals later; this can also apply to users with granted power. See the recent XZ Utils backdoor. I also accept the fact that the amount of work behind the scenes on Wikipedia requires the usage of automated mechanisms and bots (“Meet the ‘bots’ that edit Wikipedia”).
  • However, I cannot accept the fact that the quality of knowledge on Wikipedia can be seriously undermined by power users heavily using algorithms and blindly enforcing some subjectively selected, narrow rules against the general principles outlined above, and at the same time not being open to any constructive discussion. The fact that complete content and comments are censored and removed by users with granted power or by their (semi-)automated tools, which deceives the reader and distorts the history of the discussion, is definitely not acceptable. And this is apparently not an exception, see the links here, here and here and many other similar discussions on the internet.

Keep the above in mind while you read what happened.

The incident I want to share with you is certainly not about LabPlot and its team. It's about the negative impact on the overall quality of the information stored on Wikipedia when users with granted power blindly invoke algorithms or quote a single rule, subjectively selected and narrow, against the general principles outlined above. As Dariusz noticed, in economics there is the observation that “bad money” drives out the “good money” from the market (Gresham's Law: “bad money drives out good”). We wonder whether the actions of entities like MrOllie, some of which are described in the next parts of the article, are enough to justify the introduction of a new law for Wikipedia: “bad information drives out good”?

Chain of events

In order to make the content correct and to provide an up-to-date description of the project, similar to the articles for other projects mentioned e.g. on https://en.wikipedia.org/wiki/List_of_information_graphics_software, Dariusz did multiple edits of the article over the course of two days using his Wikipedia account ‘Dlaska’. Very soon after that, the entity MrOllie became aware of his changes and reverted them completely with the suggestion that it was a promotional rewrite. Then, a “user talk” with Dariusz was initiated by MrOllie:

We are all volunteers, with no benefit other than the satisfaction of developing LabPlot. But sticking to the principle of intellectual honesty, Dariusz himself fully disclosed to MrOllie that he is a LabPlot team member who felt obliged to step in to correct misleading information in the article and to make the content more complete and up-to-date, because no one else had done it for a long time. Unable to get any suggestions from MrOllie despite his requests, Dariusz removed any phrases that could even potentially have promotional qualities (e.g. changing “strongly support” to “support”). Unfortunately, even this had no effect on the actions of MrOllie, resulting in the revert of the new content.

In parallel, I joined these activities and reverted the revert done by MrOllie and provided some explanations for this step. Another “user talk” with me (I don’t have any account, you see my IP address here) was initiated by MrOllie:

After multiple back-and-forth reverts, my IP was blocked and a “Conflict of Interest on the Noticeboard” was raised by MrOllie where he quickly got the support from his peers on Wikipedia. Dariusz’ comment didn’t change anything in the overall situation:

In parallel, more seasoned Wikipedia users jumped on the bandwagon and started ‘editing’ the article by first blindly reverting it to the version containing potentially promotional content and then removing more and more content and references, with arguments that, in our perception, it no longer made sense to argue with. Any discussion seemed completely ineffective. After most of the content had been removed from the article, to the point that the new version had less content than the old version, the user Smartse added a notability tag, which was later turned into a notification box on the article stating that it “may not meet Wikipedia's general notability guideline.” Notability is a test used by editors to decide whether a given topic warrants its own article, so in our perception this could be interpreted as a threat to remove the article completely. The size and severity of the problem we were confronted with was already obvious at this point.

After my IP was unblocked (or maybe because I just got a new IP from my ISP), I was able to reply on this noticeboard. Since I was already foreseeing it’s going to be deleted, I took a screenshot (this is also the reason why I did screenshots for all other events):

Practically immediately, my reply, red-highlighted above, was deleted without any comment or note, and this is how the thread looks afterwards:

Fortunately, Dariusz, who has an account in Wikipedia, got the notification about my added reply via email:

and after clicking within seconds on the link in the email he was informed that the comment might have been deleted, and it sure was, right after it had been added.

Immediately after this, another notification box with “A major contributor to this article appears to have a close connection with its subject.” was added to the article:

and my new IP was blocked for “abusing multiple accounts” and using them for “illegitimate reasons”:

After all these deletions, see the full history of changes

This is how the article looks in its “final version”:

In retrospect

What seems to have happened here looks like a well-coordinated or even (semi-)automated chain of events with pre-defined replies, arguments and actions. MrOllie stands out for the incredible diligence and regularity of his activity. The chart below shows the number of edits he has made by day of the week and hour (in local time), from 2008 to the present (source of the chart: https://xtools.wmcloud.org):

Also, over 75% of MrOllie's edits are done in a semi-automated way with the help of tools on Wikipedia like Twinkle. So, this account functions like a programmed algorithm, or somebody who is heavily relying on such tools.

Seeing no reasonable chance of correcting this situation in the context of being deprived of the right to effectively discuss the matters with entities like MrOllie, we gave up on our initial idea to improve the article.

What’s next?

After reading more on this subject we realized that this problem is not new, but apparently it is not common knowledge either. Completely independent of who or what censored us – AI bots (is AI already winning over us?), good or bad editors etc. – trusting Wikipedia now is much harder than before. Still, the question remains about what to do next.

We can completely give up the idea of contributing to this platform and rather focus on other channels like our homepage and other online resources in the KDE and free world (Mastodon, etc) and provide more and more useful information.

Alternatively, we can ask for support from other people with more experience in editing and maybe even with more authority on Wikipedia to help us to get a reasonable description of the project on Wikipedia to the benefit of Wikipedia’s readers and LabPlot’s users.

Thoughts?

Links

For the sake of completeness and of easier usage, here are the links mentioned in my reply that were deleted:

Note, for the first two links above, the original posts in the Wikipedia related channels on Reddit about the same MrOllie account on Wikipedia were deleted, shame on those who think evil of this. The comments are still available, though, and the reader can get at least an idea about the original content of those posts.

Edit

We added a short post about the article to r/wikipedia on Reddit, but it was removed by the moderators just 2 hours later, without any comment.

If, after reading this article, you think MrOllie and his counterparts are capable of correcting their own actions, take a look at the exchange below. It’s also an indication of what the future may hold for the LabPlot article on Wikipedia… Source: https://en.wikipedia.org/w/index.php?title=User_talk:MrOllie&oldid=1240636433


Update (20.08.2024)

We've been informed that a Wikipedia editor (Smartse) has today nominated the article about LabPlot for deletion, so before making any hasty, not to say retaliatory, decisions, we encourage Wikipedia editors to reach a sensible “consensus” in the context of the information published on these websites:

The last three links refer to Wikipedia’s articles about other applications similar to LabPlot. As far as we know, none of them have been nominated for deletion. In the meantime the article has been given a “protected” status.

Update (21.08.2024)

So far we have provided a list of nearly 50 research papers from various scientific fields that show that LabPlot has been used for research and teaching purposes. It would be unreasonable to expect researchers dealing with domain problems to devote a significant part of their research papers to describing LabPlot. In contrast, the following article is devoted entirely to LabPlot. Professors Williams Morales González and Jesús Eleuterio Hernández-Ruíz described the program's functionality and usage in detail [1]. This fits Wikipedia's general notability guideline:

  • “A topic is presumed to be suitable for a stand-alone article or list when it has received significant coverage in reliable sources that are independent of the subject.”
  • “Sources do not have to be written in English” and “there is no fixed number of sources required.”
  • “Notability is based on the existence of suitable sources, not on the state of sourcing in an article.”

Below is a (semi-)automatic translation of the authors’ conclusions:

The experiences of the professors of the Experimental Physics group of the UCLV Physics program in the use of LabPlot as a tool for the analytical and graphical processing of experimental data were presented.

LabPlot is a free, open source, multiplatform software with KDE desktop design and similar characteristics to Origin. It is intended for interactive analysis of experimental data and includes a wide variety of operations for analytical and graphical data processing, including linear and nonlinear fitting and data extraction from external plots, without the need for licensed software.

LabPlot can be used as a tool for experimental data processing, not only in Experimental Physics, but also in the scientific work developed by students from the second year of the course and culminating with the diploma work.

Does this information have any real relevance to Wikipedia editors? Time will tell.

[1] Williams Morales González. Jesús Eleuterio Hernández-Ruíz. 2022. Experiencias en el uso del software LabPlot en el procesamiento analítico y gráfico de datos experimentales. Conference: VII Taller de Enseñanza de la Física At: Universidad de Oriente, Santiago de Cuba. https://www.researchgate.net/publication/361586279_Experiencias_en_el_uso_del_software_LabPlot_en_el_procesamiento_analitico_y_grafico_de_datos_experimentales.

I want to thank Dariusz for his contributions to this article.