
Sunday, 21 January 2024

When Linux software crashes and you make a bug report about it (thank you!), you may be asked for backtraces and debug symbols.

And if you're not a developer, you may wonder what in the heck those are.

I wanted to open up this topic a bit, but if you want a more technical, in-depth look into these things, the internet is full of info. :)

This is more of a guide for any common user who encounters this situation, about what they can do to get these mystical backtraces and symbols and magic to the devs.

Backtrace

When developers ask for a backtrace, they're basically asking "what are the steps that caused this crash to happen?" Debugger software can show this really nicely, line by line. However, without the correct debug symbols, the backtrace can be meaningless.

But first, how do you get a backtrace of something?

On systems with systemd installed, you often have a terminal tool called coredumpctl. This tool can list the crashes you have had with software. When you see something say "segmentation fault (core dumped)", this is the tool that can show you those core dumps.

So, here are a few ways to use it!

How to see all my crashes (coredumps)

Just type coredumpctl in a terminal and a list opens. It shows you a lot of information, with the app name last.

How to open a specific coredump in a debugger

First, find the coredump you want to check out in the plain coredumpctl list. The easiest way to pick it out is to check the date and time. Each entry also has a PID number, for example 12345. You can close the list by pressing q and then type coredumpctl debug 12345.

This will usually open GDB, where you can type bt to start printing the backtrace. You can then copy that backtrace. But in my opinion there's an easier way.

Can I just get the backtrace automatically in a file...?

If you just want to take the latest coredump of the app that crashed on you and print its backtrace into a text file that you can send to the devs, here's a one-liner to run in a terminal:

coredumpctl debug APP_NAME_HERE -A "-ex bt -ex quit" |& tee backtrace.txt

You can also use the PID shown earlier in place of the app name, if you want some specific coredump.

The above command will open the coredump in a debugger, run the bt command, then quit, and it will write it all down in a file called backtrace.txt that you can share with developers.

As always when using debugging and logging features, check the file for possible personal data! It's very unlikely to contain any personal data, BUT it's still good practice to check!

Here's a small snippet from a backtrace I have for the Kate text editor:

#0  __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=6, no_tid=no_tid@entry=0)
    at pthread_kill.c:44
...
#18 0x00007f5653fbcdb9 in parse_file
    (table=table@entry=0x19d5a60, file=file@entry=0x19c8590, file_name=file_name@entry=0x7f5618001590 "/usr/share/X11/locale/en_US.UTF-8/Compose") at ../src/compose/parser.c:749
#19 0x00007f5653fc5ce0 in xkb_compose_table_new_from_locale
    (ctx=0x1b0cc80, locale=0x18773d0 "en_IE.UTF-8", flags=<optimized out>) at ../src/compose/table.c:217
#20 0x00007f565138a506 in QtWaylandClient::QWaylandInputContext::ensureInitialized (this=0x36e63c0)
    at /usr/src/debug/qt6-qtwayland-6.6.0-1.fc39.x86_64/src/client/qwaylandinputcontext.cpp:228
#21 QtWaylandClient::QWaylandInputContext::ensureInitialized (this=0x36e63c0)
    at /usr/src/debug/qt6-qtwayland-6.6.0-1.fc39.x86_64/src/client/qwaylandinputcontext.cpp:214
#22 QtWaylandClient::QWaylandInputContext::filterEvent (this=0x36e63c0, event=0x7ffd27940c50)
    at /usr/src/debug/qt6-qtwayland-6.6.0-1.fc39.x86_64/src/client/qwaylandinputcontext.cpp:252
...

The first number is the step (stack frame) we are in. Step #0 is where the app crashed. The last step is where the application started running. Keep in mind though that even if the app crashes at #0, that step may just be the system handling the crash rather than the actual culprit. The culprit can be anywhere in the backtrace, so you have to do some detective work if you want to figure it out. Crashes often happen when some code execution path takes an unexpected route that the program is not prepared for.
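
To make that concrete, here's a tiny, hypothetical C++ program (my own example, not related to the Kate backtrace above) where the crash happens in one function but the actual mistake is made in its caller:

#include <cstdio>
#include <cstring>

// Prints the length of a C string. If 'text' is a null pointer,
// std::strlen dereferences it and the program segfaults.
void print_length(const char *text)
{
    // The crash is triggered here; in the backtrace this shows up as step #0
    // (or #1, if #0 ends up inside the C library's strlen itself).
    std::printf("length: %zu\n", std::strlen(text));
}

int main()
{
    const char *name = nullptr; // the actual mistake: a valid string is never assigned
    print_length(name);         // this caller step is where the real culprit lives
    return 0;
}

If you build something like this with debug info (for example with g++ -g) and run it on a systemd system with core dumps enabled, the crash shows up in coredumpctl, and the backtrace will point at these file names and line numbers.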

Remember that you will, however, need proper debug symbols for this to be useful! We'll check that out in the next chapter.

Debug symbols

Debug symbols are what tell a developer using debugger software, like GDB, what is going on and where. Without debug symbols the debugger can only show the developer more obfuscated data.

I find this easier to show with an example:

Without debug symbols, this is what the developer sees when reading the backtrace:

0x00007f7e9e29d4e8 in QCoreApplication::notifyInternal2(QObject*, QEvent*) () from /lib64/libQt5Core.so.5

Or, in an even worse scenario where the debugger can't read what's going on and can only see the "mangled" names, it can look like this:

_ZN20QEventDispatcherGlib13processEventsE6QFlagsIN10QEventLoop17ProcessEventsFlagEE

Now, those are not very helpful. At least the first example tells which file the error is happening in, but it doesn't really tell where. And with the second example it's just very difficult to understand what's going on. You don't even see what file it is in.

With correct debug symbols installed however, this is what the developer sees:

QCoreApplication::notifyInternal2(QObject*, QEvent*) (receiver=0x7fe88c001620, event=0x7fe888002c20) at kernel/qcoreapplication.cpp:1064

As you can see, it shows the file and line. This is super helpful, since developers can just open the file at that location and start mulling it over. No need to guess which line it may have happened on; it's right there!

So, where to get the debug symbols?

Every distro has its own way, but the KDE wiki has an excellent list of the most common operating systems and how to get debug symbols for them: https://community.kde.org/Guidelines_and_HOWTOs/Debugging/How_to_create_useful_crash_reports

As always, double-check with your distro's official documentation how to proceed. But the above link is a good starting point!

But basically, your package manager should have them. If not, you will have to build the app yourself with debug symbols enabled, which is definitely not ideal. If the above list does not have your distro/OS, you may have to ask its maintainers for help with getting the debug symbols installed.

Wait, which ones do I download?!

Usually the ones for the app that is crashing. Sometimes you may also need to include the libraries the app is using.

There is no real direct answer to this, but at the very least, get the debug symbols for the app. If the developers need more, they will ask you to install the other ones too.

You can uninstall the debug symbols after you're done, but that's up to you.

Thanks for reading!

I hope this has been useful! I especially hope the terminal one-liner mentioned above, for quickly printing backtraces into a file, is useful for you!

Happy backtracing! :)

Saturday, 20 January 2024

Why KMines?

Minesweeper is a tragically underrated puzzle game. While I recall examining the mysterious array of gray squares as a child, it wasn’t until adulthood that I took the time to learn the rules of the game. Despite my late start, however, I still count minesweeper as a classic. These days, good minesweeper clones are hard to come by. I settled on GNOME’s Mines for a while, but as the look of GTK applications on my Qt-based KDE Plasma Desktop sets my teeth on edge, I ditched it for KMines in short order. While I enjoyed the game, I found the themes shipped with KMines a bit dated, so I thought I’d make my own.

A screenshot of the KMines game window showing a new dark theme.
The Clean Blue Dark KMines theme

The Drama

I didn’t quite know what I was getting into when I started working on my themes. I had expected there’d be a simple way to add themes through the KMines settings menu, or by dropping an SVG somewhere in your file-system. If only it were so simple. Adding a theme to KMines requires setting up a full-on KDE development environment, re-compiling KMines from source each time you want to test it, and then, of course, submitting a merge request to the git repository. Thanks to the help of some very patient souls in various KDE Matrix channels, I was able to work through all of this, but I found the process so tricky that I submitted a second merge request, this time to the repository for develop.kde.org, for a page documenting the process. Now I realise that some developers out there are going to read through this and wonder if I was dropped on my head as an infant, but in my defense, when it comes to software development, I’m a humble designer and Jamstack web developer. This is all very new to me, and I was expecting a much more streamlined process for what I saw as simple visual tweaks.

A screenshot of the KMines game window showing a new light theme.
The Clean Blue Light KMines theme

Why High Contrast

When submitting my initial design, a KDE contributor who had been helping me via Matrix pointed out that they found the theme difficult to parse visually as the contrast was quite low. I hadn’t considered contrast ratios here, my thinking being that I didn’t need to; I was just making one theme among many, after all, and users could choose any theme that worked best for them. After some consideration, however, it dawned on me that my theme would likely be the only modern-looking theme in the release, so it would be ideal if it were as accessible as possible. With this in mind, thanks to the feedback of one helpful individual, instead of one pretty but low contrast theme, I decided to make two modern high contrast themes, one light and one dark, targeting the WCAG AAA standard for contrast.

Coming in Plasma 6!

With thanks to those KDE contributors who helped make it happen, the merge request containing these two themes scraped past the closing door of a feature freeze by the skin of its teeth and will be available with Plasma 6 this February!

Improving the qcolor-from-literal Clazy check

For all of you who don’t know, Clazy is a Clang compiler plugin that adds checks for Qt semantics. I have it as my default compiler, because it gives me useful hints when writing or working with pre-existing code. Recently, I decided to give working on the project a try! One bigger contribution of mine was to the qcolor-from-literal check, which is a performance optimization. A QColor object has different constructors; this check is about the string constructor. It accepts standardized colors like “lightblue”, but also color patterns like “#112233”. Those can have different formats, but all provide an RGB value and optionally transparency. Having Qt parse this as a string causes performance overhead compared to the alternatives.
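
To make that concrete, here is a minimal sketch of my own (not taken from Clazy’s documentation or tests) showing the kind of replacement the check suggests:

#include <QColor>

int main()
{
    // Flagged by clazy-qcolor-from-literal: the "#RRGGBB" string has to be
    // parsed at runtime.
    const QColor fromString("#112233");

    // Cheaper alternative suggested by the check: the constructor taking an
    // RGB integer value, which needs no parsing.
    const QColor fromInt(0x112233);

    return fromString == fromInt ? 0 : 1; // both describe the same opaque color
}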

Fixits for RGB/RGBA patterns

When using a color pattern like “#123” or “#112233”, you may simply replace the string parameter with an integer providing the same value. Rather than a generic warning about using the other constructor, a more specific warning with a replacement text (called a fixit) is emitted.

testfile.cpp:92:16: warning: The QColor ctor taking RGB int value is cheaper than one taking string literals [-Wclazy-qcolor-from-literal]
        QColor("#123");
               ^~~~~~
               0x112233
testfile.cpp:93:16: warning: The QColor ctor taking RGB int value is cheaper than one taking string literals [-Wclazy-qcolor-from-literal]
        QColor("#112233");
               ^~~~~~~~~
               0x112233

In case a transparency parameter is specified, the fixit and message are adjusted:

testfile.cpp:92:16: warning: The QColor ctor taking ints is cheaper than one taking string literals [-Wclazy-qcolor-from-literal]
        QColor("#9931363b");
               ^~~~~~~~~~~
               0x31, 0x36, 0x3b, 0x99
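
Applied by hand, that fixit corresponds to a change roughly like the following sketch of mine (the messageColor() helper is hypothetical, just for illustration); note how the alpha component moves from the front of the string to the last int argument:

#include <QColor>

QColor messageColor()
{
    // Before: an "#AARRGGBB" string literal, parsed at runtime:
    //     return QColor("#9931363b");
    // After: the constructor taking ints in r, g, b, a order, so the
    // alpha value 0x99 goes last:
    return QColor(0x31, 0x36, 0x3b, 0x99);
}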

Warnings for invalid color patterns

In addition to providing fixits for more optimized code, the check now verifies that the provided pattern is valid regarding its length and the characters it contains. Without this addition, an invalid pattern would be silently ignored, or an improper fixit would be suggested.

.../qcolor-from-literal/main.cpp:21:28: warning: Pattern length does not match any supported one by QColor, check the documentation [-Wclazy-qcolor-from-literal]
    QColor invalidPattern1("#0000011112222");
                           ^
.../qcolor-from-literal/main.cpp:22:28: warning: QColor pattern may only contain hexadecimal digits [-Wclazy-qcolor-from-literal]
    QColor invalidPattern2("#G00011112222");

Fixing a misleading warning for more precise patterns

If a “#RRRGGGBBB” or “#RRRRGGGGBBBB” pattern is used, the message would previously suggest using the constructor taking ints. This would, however, result in an invalid QColor, because the 0-255 range is exceeded. QRgba64 should be used instead, which provides higher precision.
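
As a rough sketch of what that could look like (my own example with made-up values; QColor::fromRgba64() takes 16-bit channel values):

#include <QColor>

QColor highPrecisionColor()
{
    // A "#RRRRGGGGBBBB" pattern carries 16 bits per channel, which the int
    // constructor limited to 0-255 cannot represent without losing precision:
    //     return QColor("#112233445566");
    // QColor::fromRgba64() keeps the full 16-bit precision instead.
    return QColor::fromRgba64(0x1122, 0x3344, 0x5566);
}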


I hope you find this new or rather improved feature of Clazy useful! I utilized the fixits in Kirigami, see https://invent.kde.org/frameworks/kirigami/-/commit/8e4a5fb30cc014cfc7abd9c58bf3b5f27f468168. Doing the change manually in Kirigami would have been way faster, but less fun. Also, we wouldn’t have ended up with better tooling :)

Wednesday, 17 January 2024

Ruqola 2.1 Beta (2.0.81) is available for packaging and testing.

Ruqola is a chat app for Rocket.chat. This beta release will build with the current release candidate of KDE Frameworks 6 and KTextAddons, allowing distros to start moving away from Qt 5.

URL: https://download.kde.org/unstable/ruqola/
SHA256: 2c4135c08acc31f846561b488aa24f1558d7533b502f9ba305be579d43f81b73

Signed by E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Esk-Riddell jr@jriddell.org
https://jriddell.org/esk-riddell.gpg

Tuesday, 16 January 2024

Disclaimer: I am not one of KDE's masterminds or spokespersons. I am a mere bystander with a few unimportant commits. I follow KDE's ecosystem and other developments in the free software world. In the following, I share some thoughts and my personal opinion.

Talks about new programming languages

After 30 years of C code, the Linux kernel is opening itself up to a second high-level language: Rust. Since the fall of 2022 the kernel has mainly gained infrastructure work. Some experiments show promising results, like a Rust-based network driver or a scheduler.
Recently, Git developers started to discuss how to allow Rust code in our beloved version control system. While far from having reached a consensus, the media coverage and heated discussions in forums show how interested the public is in this topic.
Other projects try to replace established software with from-scratch Rust rewrites: uutils coreutils, sudo-rs, librsvg, Rustls. Heck, Rewrite it in Rust (RiiR) has become a meme.

We already have a new programming language!

KDE is close to its 6th Megarelease, with one major change being the move to Qt 6. Qt 6 requires C++17, which, as of today, is perceived as modern C++ and is a leap compared to C++11. It is possible to write modern software with C++17 (see the short example below). Still, additional tools like the C++ Core Guidelines or Cppcheck are advised to keep the number of preventable bugs low.
Most of the projects mentioned in the introduction are using C. This inflicts more pain on the developers, and thus using Rust is more attractive. For sure, a fair portion of the RiiR arguments do not apply to KDE's C++ code base.
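
As a small illustration of that C++17 point (my own throwaway snippet, not from any KDE code base): features like structured bindings, if-with-initializer and std::optional already remove a lot of the boilerplate and error-prone patterns of older C++.

#include <iostream>
#include <map>
#include <optional>
#include <string>

// std::optional makes the "not found" case explicit instead of relying on
// sentinel values or output parameters.
std::optional<std::string> lookup(const std::map<std::string, std::string> &config,
                                  const std::string &key)
{
    if (auto it = config.find(key); it != config.end()) { // if-with-initializer
        return it->second;
    }
    return std::nullopt;
}

int main()
{
    const std::map<std::string, std::string> config{{"theme", "breeze"}};

    for (const auto &[key, value] : config) { // structured bindings
        std::cout << key << " = " << value << '\n';
    }

    std::cout << lookup(config, "icons").value_or("default") << '\n';
}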

Problems with C++ remain

C++ cannot adapt to modern ways like including a borrow checker or a less complicated syntax, as this would break compatibility. As much as C++ has improved as a language, along with its compilers and its ecosystem, it is not enough to be considered a good choice for new projects. NIST and the NSA advise moving away from C++.
Other problems cannot be solved: complicated tooling that varies across platforms (build systems, compiler, linker, debugger, dependency management), mixed-in C-style code, and C++ code that is difficult to parse.
I fear that in a not too distant future, C++ might be perceived as an outdated choice to learn, and people might be less likely to consider joining KDE as contributors.

What can be done?

In the past, GNOME adopted Vala as a new language to solve the shortcomings of C. Vala seems to be dead. Going with Rust did not lead to project-wide adoption either.
Some people are working on Qt bindings for Rust, e.g. CXX-Qt from KDAB. I am not sure whether Qt itself is working on something similar. At least there is no go-to binding yet.
Besides the hot topic of Rust, two big players are investing in ways to have good interoperability with existing code bases and a modern language: Cpp2 / cppfront and Carbon.
Cpp2 is a new language from Herb Sutter, who chairs the C++ working group. The idea is to have a transpiler, cppfront, that produces modern C++ code. Cpp2 is not backward compatible with C++ and is thus not limited when introducing new ways of doing things or removing existing parts. Cpp2 promises to integrate seamlessly into existing C++ code bases, as it is compiled into C++ code.
Carbon is a project by Google developers and follows a different approach. It aims to provide a new language that can use all C++ features in interfaces, even templates with all the bells and whistles.

Discuss our future

I do not want to whine about C++. I want to start a discussion on what KDE's future might look like. KDE has always been driving innovation. We helped CMake become one of the most important build systems for C++. KDE 4.0 introduced the semantic desktop. KHTML's code base was the nucleus for today's big browsers.
We should probably have this discussion as a BoF at Akademy 2024 or in other places where KDE's masterminds and people with a feel for future trends come together and formulate future directions. In the meantime, I am starting a Discourse thread.
Personally, I would like to see some push for Cpp2. More importantly, I want to see that we are actively shaping KDE's future.

It’s a pleasure to be on the OpenUK New Year’s Honours list for 2024. There are some impressive names on there, such as Richard Hughes of PackageKit and other projects at Red Hat; Colin Watson, who was at Ubuntu with me and I see is now freelance; and Mike McQuaid, who was previously of KDE but is now trying a startup, Workbrew, built around the Mac packager Homebrew.

OpenUK runs various activities for open tech in the countries of the UK, and KDE currently needs some more helpers for a stall at their State of Open Con in London on 6 and 7 February; if you can help, do get in touch.

KDE’s 6th releases will happen next month, bringing with them the refresh of code and people that a new major version number can bring. I think KDE’s software will continue to impress in the coming year.

My life fell apart after some family loss last year, so I’ve run away to the end of the world at Finisterre in Galicia, Spain for now; let me know if you’re in the area.

Sunday, 14 January 2024

Lithuania and Latvia

Prompted by a small discussion about how difficult it is to get from Berlin to Riga by train, and, as a direct consequence, a quick look at how the official app for trains in Latvia finds its connections, I added support for it to KDE Itinerary. KDE Itinerary is KDE’s travel planning app.

After I understood how it works, adding support for new data sources seemed pretty doable, so I moved straight on to doing the same for trains in Lithuania as well.

As a result of this, it is now possible to travel from Berlin to Riga with Itinerary and continue further with the local trains there:

Screenshot of the first part of the journey, from Berlin Hauptbahnhof to Warszawa Gdanska using EC 249, continuing the next day with IC 144 to Vilnius.
Screenshot of the second part, from Vilnius to Riga on the following day, followed by a local train to Sloka.

The connection is still far from good, but I fear I can’t fix that in software.

What still does not work is directly searching from Berlin to Riga, as that depends on a single data source having data for the entire route. So it is necessary to split the route and search for the parts yourself.

Why you can’t always find a route even though there is one

The main data source for Itinerary in Europe is the API of “Deutsche Bahn”, the main railway operator in Germany. Its API also has data for neighbouring countries, and even beyond that. According to Jon Worth, their data comes from UIC Merits, a common system that railway operators can submit their routes to. However, that probably comes with high costs, so many smaller operators, like the ones in Latvia and Lithuania, don’t do that. For that reason there is no single data source that can route, for example, from Berlin to Riga.

What most of the operators in Europe do, however, is publish schedule data in a common format (GTFS). What is missing so far is a single service that can route on all of the available data and has an API that we can use. Setting something like this up would require a bit of community and hosting resources, but I am hopeful that we can have something like this in the future.

In the meantime, it already helps to fill in the missing countries one by one, so at least local users can already find their routes in Itinerary, and for Interrail and other cross border travel, people can at least patch routes together.

More countries

The next country I worked on was Montenegro. The reason for that is that it is close to the area the DB API can still give results for, and it also still has useful train services. Getting their API to work well was a bit more difficult though, as it doesn’t provide some of the information that Itinerary usually depends on, for example coordinates for stations. Those are needed to select where to search for trains departing from a station. Luckily, exporting the list of stations and their coordinates from OpenStreetMap was relatively easy and provided me with all the data I needed.

A route from Belgrade Center to Podgorica, shown on a map by KDE Itinerary

Thanks to that, Itinerary can now even show the route properly on a map.

Now only an API for Serbia is missing to actually connect to the part of the network that DB knows about.

The new backends are not yet included in any release, but you can already find them in the nightly builds. Be aware that the nightly builds switched to Qt 6 and KF6 fairly recently, which means there are still a few rough edges and small bugs in the UI.

On Linux, you can use the nightly flatpak:

flatpak install https://cdn.kde.org/flatpak/itinerary-nightly/org.kde.itinerary.flatpakref

On Android, the usual KDE Nightly F-Droid repository has up to date builds.

Saturday, 13 January 2024

After a few days of work, the Fedora KDE SIG is proud to announce the availability of KDE 6th Megarelease Release Candidate 1 on Fedora Rawhide!

For those who like the bleeding edge, feel free to try it!

We are very excited and looking forward to Fedora 40 + KDE 6 + Wayland only

Note: right now the update is sitting in testing. If you don’t want to wait a few hours until it reaches stable, you can access it via a dnf repository like:

[main]
cachedir=/var/cache/yum
debuglevel=1
logfile=/var/log/yum.log
reposdir=/dev/null
retries=20
obsoletes=1
gpgcheck=0
assumeyes=1
keepcache=1
install_weak_deps=0
strict=1

# repos

[build]
name=build
baseurl=https://kojipkgs.fedoraproject.org/repos/f40-build-side-81132/5738561/x86_64

Thursday, 11 January 2024

Make sure you commit anything you want to end up in the KDE Gear 24.02 releases to them

Next Dates:

  •    January 31: 24.02 RC 2 (24.01.95) Tagging and Release
  •   February 21: 24.02 Tagging
  •   February 28: 24.02 Release


https://community.kde.org/Schedules/February_2024_MegaRelease

This post is in part a response to an aspect of Nate’s post “Does Wayland really break everything?“, but also my reflection on discussing Wayland protocol additions, a unique pleasure that I have been involved with for the past months [1].

Some facts

Before I start I want to make a few things clear: The Linux desktop will be moving to Wayland [2] – this is a fact at this point (and has been for a while); sticking to X11 makes no sense for future projects. From reading Wayland protocols and working with it at a much lower level than I ever wanted to, it is also very clear to me that Wayland is an exceptionally well-designed core protocol, and so are the additional extension protocols (xdg-shell & Co.). The modularity of Wayland is great; it gives it incredible flexibility and will for sure turn out to be good for the long-term viability of this project (and also provides a path to correct protocol issues in the future, if one is found). In other words: Wayland is an amazing foundation to build on, and a lot of its design decisions make a lot of sense!

The shift towards people seeing “Linux” more as an application developer platform, and taking PipeWire and XDG Portals into account when designing for Wayland, is also an amazing development and I love to see it – this holistic approach is something I have always wanted!

Furthermore, I think Wayland removes a lot of functionality that shouldn’t exist in a modern compositor – and that’s a good thing too! Some of X11’s features and design decisions had clear drawbacks that we shouldn’t replicate. I highly recommend reading Nate’s blog post; it’s very good and goes into more detail. And due to all of this, I firmly believe that any advancement in the Wayland space must come from within the project.

But!

But! Of course there was a “but” coming 😉 – I think that while developing Wayland-as-an-ecosystem we have become entrenched in narrow concepts of how a desktop should work. While discussing Wayland protocol additions, a lot of concepts clash: people from different desktops with different design philosophies debate the merits of those concepts over and over again, never reaching any conclusion (just as you will never get an answer out of humans as to whether sushi or pizza is the clearly superior food, or whether CSD or SSD is better). Some people want to use Wayland as a vehicle to force applications to submit to their desktop’s design philosophy, others prefer the smallest and leanest protocol possible, and other developers want the most elegant behavior possible. To be clear, I think those are all very valid approaches.

But this also creates problems: By switching to Wayland compositors, we are already forcing a lot of porting work onto toolkit developers and application developers. This is annoying, but it is just work that has to be done. It becomes frustrating, though, if Wayland provides toolkits with absolutely no reasonable way to reach their goal. Take Nate’s Photoshop analogy: of course Linux does not break Photoshop; it is Adobe’s responsibility to port it. But what if Linux were missing a crucial syscall that Photoshop needed for proper functionality, and Adobe couldn’t port it without that? In that case it becomes much less clear who is to blame for Photoshop not being available.

A lot of Wayland protocol work is focused on the environment and design, while applications and the work to port them are often given less consideration. I think this happens because the overlap between application developers and developers of the desktop environments is not necessarily large, and the overlap with people willing to engage with Wayland upstream is even smaller. The combination of Windows developers who port apps to Linux and are also involved with toolkits or Wayland is pretty much nonexistent, so they have less of a voice.

A quick detour through the neuroscience research lab

I have been involved with Freedesktop, GNOME and KDE for an incredibly long time now (more than a decade), but my actual job (besides consulting for Purism) is that of a PhD candidate in a neuroscience research lab (working on the morphology of biological neurons and its relation to behavior). I am mostly involved with three research groups in our institute, which is about 35 people. Most of us do all our data analysis on powerful servers which we connect to using RDP (with KDE Plasma as desktop). Since I joined, I have been pushing the envelope a bit to extend Linux usage to data acquisition and regular clients, and to have our data acquisition hardware interface well with it. Linux brings some unique advantages for use in research, besides the obvious one of having every step of your data management platform introspectable with no black boxes left, a goal I value very highly in research (but this would be its own blogpost).

In terms of operating system usage though, most systems are still Windows-based. Windows is what companies develop for, and what people use by default and are familiar with. The choice of operating system is very strongly driven by application availability, and WSL being really good makes this somewhat worse, as it removes the need for people to switch to a real Linux system entirely if there is the occasional software requiring it. Yet, we have a lot more Linux users than before, and use it in many places where it makes sense. I also developed a novel data acquisition software that even runs on Linux-only and uses the abilities of the platform to its fullest extent. All of this resulted in me asking existing software and hardware vendors for Linux support a lot more often. Vendor-customer relationship in science is usually pretty good, and vendors do usually want to help out. Same for open source projects, especially if you offer to do Linux porting work for them… But overall, the ease of use and availability of required applications and their usability rules supreme. Most people are not technically knowledgeable and just want to get their research done in the best way possible, getting the best results with the least amount of friction.

KDE/Linux usage at a control station for a particle accelerator at Adlershof Technology Park, Germany, for reference (by 25 years of KDE) [3]

Back to the point

The point of that story is this: GNOME, KDE, RHEL, Debian or Ubuntu: none of them matter if the necessary applications are not available for them. And as soon as they are, the easiest-to-use solution wins. There are many facets of “easiest”: In many cases this is RHEL due to Red Hat support contracts being available, in many other cases it is Ubuntu due to its mindshare and ease of use. KDE Plasma is also frequently seen, as it is perceived as a bit easier to onboard Windows users with (among other benefits). Ultimately, it comes down to applications and 3rd-party support though.

Here’s a dirty secret: In many cases, porting an application to Linux is not that difficult. The thing that companies (and FLOSS projects too!) struggle with and will calculate the merits of carefully in advance is whether it is worth the support cost as well as continuous QA/testing. Their staff will have to do all of that work, and they could spend that time on other tasks after all.

So if they learn that “porting to Linux” not only means added testing and support, but also means choosing between the legacy X11 display server that allows for 1:1 porting from Windows and the “new” Wayland compositors that do not support the same features they need, they will quickly consider it not worth the effort at all. I have seen this happen.

Of course many apps use a cross-platform toolkit like Qt, which greatly simplifies porting. But this just moves the issue one layer down, as now the toolkit needs to abstract Windows, macOS and Wayland. And Wayland does not contain features to do certain things, or does them very differently from e.g. Windows, so toolkits have no way to actually implement the existing functionality in a way that works on all platforms. So in Qt’s documentation you will often find texts like “works everywhere except for on Wayland compositors or mobile” [4].

Many missing bits or altered behaviors are just papercuts, but those add up. And if users have a worse experience, this translates into more support work, or into people not wanting to use the software on the respective platform.

What’s missing?

Window positioning

SDI applications with multiple windows are very popular in the scientific world. For data acquisition (for example with microscopes) we often have one monitor with control elements and one larger one with the recorded image. There are also other configurations where multiple signal modalities are acquired, and the experimenter aligns the windows exactly the way they want and expects the layout to be stored and loaded again upon reopening the application. Even in the image from Adlershof Technology Park above you can see this style of UI design, at mega-scale. Being able to pop out elements of a single-window application as separate windows, to move them around freely, is another frequently used paradigm, and it is immensely useful with these complex apps.

It is important to note that this is not a legacy design, but in many cases an intentional choice – these kinds of apps work incredibly well on larger screens or many screens and are very flexible (you can have any window configuration you want, and switch between them using the (usually) great window management abilities of your desktop).

Of course, these apps will work terribly on tablets and small form factors, but that is not the purpose they were designed for and nobody would use them that way.

I assumed for sure that these features would be implemented at some point, but when it became clear that this would not happen, I created the ext-placement protocol, which had some good discussion but was ultimately rejected from the xdg namespace. I then tried another solution based on feedback, which turned out not to work for most apps, and have now proposed xdg-placement (v2) in an attempt to maybe still get some protocol done that we can agree on, exploring more options before pushing the existing protocol for inclusion into the ext Wayland protocol namespace. Meanwhile, though, we cannot port any application that needs this feature, while at the same time we are switching desktops and distributions to Wayland by default.

Window position restoration

Similarly, a protocol to save & restore window positions was already proposed in 2018, six years ago now, but it has still not been agreed upon, and it may not even help multiwindow apps in its current form. The absence of this protocol means that applications cannot restore their former window positions, and the user has to move them back to their previous place again and again.

Meanwhile, toolkits cannot adopt these protocols, applications cannot use them, and those applications cannot be ported to Wayland without introducing papercuts.

Window icons

Similarly, individual windows cannot set their own icons, and not-installed applications cannot have an icon at all, because there is no desktop-entry file to load the icon from and no icon in the theme for them. You would think this is a niche issue, but for applications that create many windows, providing icons for them so the user can find them is fairly important. Of course it’s not the end of the world if every window has the same icon, but it’s one of those papercuts that make the software slightly less user-friendly. Even applications with fewer windows, like LibrePCB, are affected, so much so that they would rather run their app through Xwayland for now.

I decided to address this after working on analysis of image data in a Python virtualenv, where my code and the Python libraries I used created lots of windows, all with the default yellow “W” icon, making it impossible to distinguish them at a glance. This is xdg-toplevel-icon now, but of course it is an uphill battle where the very premise of needing this is questioned. So applications cannot use it yet.

Limited window abilities requiring specialized protocols

Firefox has a picture-in-picture feature, allowing it to pop out media from a media player as a separate floating window so the user can watch it while doing other things. On X11 this is easily realized, but on Wayland the restrictions placed on windows necessitate a different solution. The xdg-pip protocol was proposed for this specialized use case, but it has not been merged yet either. So this feature does not work as well on Wayland.

Automated GUI testing / accessibility / automation

Automation of GUI tasks is a powerful feature, as is the ability to auto-test GUIs. This is being worked on, with libei and wlheadless-run (and stuff like ydotool exists too), but we’re not fully there yet.

Wayland is frustrating for (some) application authors

As you can see, there are valid applications and valid use cases that cannot yet be ported to Wayland with the same feature range they enjoyed on X11, Windows or macOS. So, from an application author’s perspective, Wayland does break things quite significantly, because things that worked before can no longer work and Wayland (the whole stack) does not provide any avenue to achieve the same result.

Wayland does “break” screen sharing, global hotkeys, gaming latency (via “no tearing”), etc.; however, for all of these there are solutions available that application authors can port to. And most developers will gladly do that work, especially since the newer APIs are usually a lot better and more robust. But if you give application authors no path forward except “use Xwayland and be on emulation as a second-class citizen forever”, it just results in very frustrated application developers.

For some application developers, switching to a Wayland compositor is like buying a canvas from the Linux shop that forces your brush to only draw triangles. But maybe for your avant-garde art, you need to draw a circle. You can approximate one with triangles, but it will never be as good as the artwork of your friends who got their canvases from the Windows or macOS art supply shop and have more freedom to create their art.

Triangles are proven to be the best shape! If you are drawing circles you are creating bad art!

Wayland, via its protocol limitations, forces a certain way to build application UX – often for the better, but also sometimes to the detriment of users and applications. The protocols are often fairly opinionated, a result of the lessons learned from X11. In any case though, it is the odd one out – Windows and macOS do not pose the same limitations (for better or worse!), and the effort to port to Wayland is orders of magnitude bigger, or sometimes, as in the case of the multiwindow UI paradigm, impossible to achieve with the same level of polish. Desktop environments of course have a design philosophy that they want to push, and want applications to integrate as much as possible (same as macOS and Windows!). However, there are many applications out there, and pushing a design via protocol limitations will likely just result in fewer apps.

The porting dilemma

I have probably spent way too much time looking into how to get applications cross-platform and running on Linux, often talking to vendors (FLOSS and proprietary) as well. Wayland limitations aren’t the biggest issue by far, but they do start to come up now, especially in the scientific space with Ubuntu having switched to Wayland by default. For application authors there is often no way to address these issues. Many scientists do not even understand why their Python script that creates some GUIs suddenly behaves weirdly because Qt is now using the Wayland backend on Ubuntu instead of X11. They do not know the difference and also do not want to deal with these details – even though they may be programmers as well, the real goal is not to fiddle with the display server, but to get to a scientific result somehow.

Another issue is portability layers like Wine, which need to run Windows applications as-is on Wayland. Apparently Wine’s Wayland driver has some heuristics to make window positioning work (and I am amazed by the work done on this!), but that can only go so far.

A way out?

So, how would we actually solve this? Fundamentally, this excessively long blog post boils down to just one essential question:

Do we want to force applications to submit to a UX paradigm unconditionally, potentially losing out on application ports or keeping apps on X11 eternally, or do we want to throw them some rope to get as many applications as possible ported over to Wayland, even though we might sacrifice some protocol purity?

I think we really have to answer that to make the discussions on wayland-protocols a lot less grueling. This question can be answered at the wayland-protocols level, but even more so it must be answered by the individual desktops and compositors.

If the answer for your environment turns out to be “Yes, we want the Wayland protocol to be more opinionated and will not make any compromises for application portability”, then your desktop/compositor should just immediately NACK protocols that add something like this, and you simply shouldn’t engage in the discussion, as you reject the very premise of the new protocol: that it has any merit and is needed in the first place. In this case, contributors to Wayland and application authors also know where you stand, and a lot of debate is skipped. Of course, if application authors want to support your environment, you are now basically asking them to rewrite their UI, which they may or may not do. But at least they know what to expect and how to target your environment.

If the answer turns out to be “We do want some portability”, the next question obviously becomes where the line should be drawn and which changes are acceptable and which aren’t. We can’t blindly copy all X11 behavior; some porting work to Wayland is simply inevitable. Some written rules for that might be nice, but probably more importantly, if you agree fundamentally that there is an issue to be fixed, please engage in the discussions for the respective MRs! We for sure do not want to repeat X11 mistakes, and I am certain that we can implement protocols which provide the required functionality in a way that is a nice compromise, allowing applications a path forward into the Wayland future while also being as good as possible and improving upon X11. For example, the toplevel-icon proposal is already a lot better than anything X11 ever had. Relaxing ACK requirements for the ext namespace is also a good proposed administrative change, as it allows some compositors to add features they want to support to the shared repository more easily, while also not mandating them for others. In my opinion, it would allow for a lot less friction between the two different ideas of how Wayland protocol development should work. Some compositors could move forward and support more protocol extensions, while more restrictive compositors could support fewer things. Applications can detect supported protocols at launch and change their behavior accordingly (ideally even abstracted by toolkits).

You may now say that a lot of apps have been ported, so surely this issue cannot be that bad. And yes, what Wayland provides today may be enough for 80-90% of all apps. But what I hope the detour into the research lab has done is convince you that this smaller percentage of apps matters. A lot. And that it may be worthwhile to support them.

To end on a positive note: when it came to porting concrete apps over to Wayland, the only real showstoppers so far [5] were the missing window-positioning and window-position-restore features. I encountered them when porting my own software, and I got the issue as feedback from colleagues and fellow engineers. In second place was UI testing and automation support; the window-icon issue was mentioned twice, but being a cosmetic issue it likely simply hurts people less and they can ignore it more easily.

What this means is that the majority of apps are already fine, and many others are very, very close! A Wayland future for everyone is within our grasp! 😄

I will also bring my two protocol MRs to their conclusion for sure, because as application developers we need clarity on what the platform (either all desktops or even just a few) supports and will or will not support in the future. And the only way to get something good done is through contribution and friendly discussion.

Footnotes

  1. Apologies for the clickbait-y title – it comes with the subject 😉
  2. When I talk about “Wayland” I mean the combined set of display server protocols and accepted protocol extensions, unless otherwise clarified.
  3. I would have picked a picture from our lab, but that would have needed permission first.
  4. Qt has awesome “platform issues” pages, like for macOS and Linux/X11, which help with porting efforts, but Qt doesn’t even list Linux/Wayland as a supported platform. There is some information though, like window geometry peculiarities, which isn’t particularly helpful when porting (but still essential to know).
  5. Besides issues with Nvidia hardware – CUDA for simulations and machine learning is pretty much everywhere, so Nvidia cards are common, which still causes trouble on Wayland. It is improving though.