For projects incorporating Rust and Qt, a minor but important problem can arise: too many sources of log messages. In the usual case, one of these is from Rust (and crate dependencies) and another is from Qt. Our Rust and Qt solution, CXX-Qt, exposes a new API to help with this in the upcoming 0.8.0 release.
There are several benefits to ensuring your log messages go through one logger. It makes it easier to redirect said messages into a file, there's less configuration required for end users, and it makes consistent formatting easier.
Qt has its own logging infrastructure, which is the suggested way to print messages in Qt applications. It has a category system, rules for filtering which messages you want to see, custom formatting, and so on. Applications like GammaRay can also display messages from Qt and configure which categories are emitted.
One of the more popular logging libraries in Rust is called tracing, and we're going to show how you can redirect messages from it to the Qt logger. The integration can work for any other logging crate too, including your own custom solution.
Getting started
Before we start splicing the two log streams, let's quickly go over sending a Qt log message in Rust. Assuming you already have a CXX-Qt application (and if you don't, you can follow our book), construct a message log context with QMessageLogContext:
let file = CString::new("main.rs").unwrap();
let function = CString::new("main").unwrap();
let category = CString::new("lib").unwrap();
let context = QMessageLogContext::new(&file, 0, &function, &category);
You can specify the filename, the line number, the function name and even a category. We have to use CString here because QMessageLogContext in C++ uses const char*, not QString.
Note that there isn't a way to get the name of the currently calling function out of the box in Rust, but there are alternative solutions if you want that information.
Now that we have a context, we can send a message to the Qt logger. We can do this using the unfortunately undocumented qt_message_output function. This sends a message to the currently installed message handler.
Note that CXX-Qt currently doesn't have Rust bindings to install a custom one.
The default message handler goes to the standard output and error streams. This function takes a QtMsgType, the context we just created, and a QString message:
qt_message_output(
    QtMsgType::QtInfoMsg,
    &context,
    &QString::from("This is an informational message..."),
);
And voilà:
lib: This is an informational message...
But that looks pretty plain, and we can't see any of our context! Most information available in the Qt logger isn't shown by default, but we can modify it by using the QT_MESSAGE_PATTERN environment variable.
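Judging by the output format shown below, the pattern in question is presumably the comprehensive example from Qt's qSetMessagePattern documentation, exported before launching the application (treat the exact pattern as an assumption):

# Presumably the documented example pattern; an assumption based on the output below.
export QT_MESSAGE_PATTERN="[%{time yyyyMMdd h:mm:ss.zzz t} %{if-debug}D%{endif}%{if-info}I%{endif}%{if-warning}W%{endif}%{if-critical}C%{endif}%{if-fatal}F%{endif}] %{file}:%{line} - %{message}"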
I used that comprehensive example from the linked documentation, as it's useful for showcasing the context:
[20250314 8:56:45.033 EDT I] main.rs:0 - This is an informational message...
Now that we have sent our first Qt log message from Rust, let's up the ante and integrate the tracing crate.
Integrating with Tracing
To redirect events from tracing, we have to add a subscriber that forwards events to Qt. This basically means anything that uses tracing (e.g. your application, another crate) will be picked up.
As a side effect of tracing's flexibility, we also have to create our own Visitor. This is necessary because tracing records structured data, and we need to flatten said data to a string.
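The visitor can be quite small. Here's a minimal sketch of one (the name matches what the layer below expects, but this isn't necessarily the post's exact implementation): it flattens every recorded field into "name = value" pairs on one string.

use std::fmt::Write;

// A minimal sketch of such a visitor; the actual implementation may differ.
struct StringVisitor<'a> {
    string: &'a mut String,
}

impl tracing::field::Visit for StringVisitor<'_> {
    fn record_debug(&mut self, field: &tracing::field::Field, value: &dyn std::fmt::Debug) {
        // Separate multiple fields with a comma.
        if !self.string.is_empty() {
            self.string.push_str(", ");
        }
        // The default "message" field arrives here as fmt::Arguments,
        // whose Debug output is the plain message text.
        write!(self.string, "{} = {:?}", field.name(), value).unwrap();
    }
}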
And now for the custom layer that will catch the events, and send them to Qt:
pub struct QtSubscriber {}

impl<S> tracing_subscriber::Layer<S> for QtSubscriber
where
    S: tracing::Subscriber,
{
    fn on_event(
        &self,
        event: &tracing::Event<'_>,
        _ctx: tracing_subscriber::layer::Context<'_, S>,
    ) {
        // Flatten the event's structured fields into a single string.
        let mut buffer: String = String::new();
        let mut visitor = StringVisitor {
            string: &mut buffer,
        };
        event.record(&mut visitor);

        // Map tracing levels to Qt message types.
        let msg_type = match *event.metadata().level() {
            tracing::Level::ERROR => QtMsgType::QtCriticalMsg,
            tracing::Level::WARN => QtMsgType::QtWarningMsg,
            tracing::Level::INFO => QtMsgType::QtInfoMsg,
            tracing::Level::DEBUG => QtMsgType::QtDebugMsg,
            tracing::Level::TRACE => QtMsgType::QtDebugMsg,
        };

        // Rebuild the Qt log context from the event's metadata.
        let file = if let Some(file) = event.metadata().file() {
            CString::new(file).unwrap()
        } else {
            CString::default()
        };
        let line = if let Some(line) = event.metadata().line() {
            line as i32
        } else {
            0
        };
        let function = CString::default();
        let category = CString::new("lib").unwrap();
        let context = QMessageLogContext::new(&file, line, &function, &category);

        qt_message_output(msg_type, &context, &QString::from(buffer));
    }
}
And finally, we have to register our new layer with the registry:
use tracing_subscriber::layer::SubscriberExt; // needed for with
use tracing_subscriber::util::SubscriberInitExt; // needed for init
tracing_subscriber::registry().with(QtSubscriber{}).init();
Afterwards, try sending a message using tracing::info! and see if it works:
tracing::info!("This is an informational message... from tracing!");
Using a more comprehensive QT_MESSAGE_PATTERN, you should get a log message from Qt. Note that "message" appears in the output, as that's the default field name in tracing:
[20250320 10:41:40.617 EDT I] examples/cargo_without_cmake/src/main.rs:104 - message = This is an informational message...
These new bindings in CXX-Qt should be useful to application developers who want to adopt the best logging crates in Rust, while ensuring their messages are forwarded to Qt. If you want to redirect Qt log messages to Rust, CXX-Qt doesn't have a way to install custom message handlers yet.
See our KDABLabs repository for complete examples of the snippets shown here.
The other day I finally replaced my trusty Thinkpad T480s I bought 6½ years ago. Overall, I was still pretty happy with it and even gave it a little refresh early last year (RAM upgrade, bigger SSD, new keyboard) but the CPU was really starting to show its age when compiling. I’m almost as picky as Nate when it comes to laptops but the P14s Gen 5 AMD (what a mouthful) checked more boxes than most laptops I looked at in recent years.
Breeze Twilight, for the OLED’s sake
The device shipped with Windows 11 and whenever I touch a Windows machine I'm baffled that people put up with this. I connected it to Wifi (beginner's mistake, apparently) since I wanted to install all firmware updates and salvage a couple of things from it before formatting the SSD (ICC profiles, Dolby audio presets, etc). The first-run wizard asked me for my choice of locale, then went looking for updates. Once done, the system rebooted. After that it asked me to give the computer a name. Guess what? Another reboot, and more updates. And then the dreaded compulsory Microsoft account I had no intention of creating. You can open a terminal by pressing Shift+F10 but the old bypassnro trick just led to a reboot and it asking the same questions again. Just when I was about to give up, Bhushan showed me another trick to create a local account. Indeed, after yet another reboot and clicking away like 10 individual nag screens about privacy and cloud stuff, I was able to log into the system.
This is the sort of usability nightmare and command line tinkering bullshit that people were mocking Linux users for back in the day! Compare that to my KDE neon “unstable” installation where I plugged in a USB stick (Secure Boot of course rejected “3rd party keys” by default), booted into the live system, had the entire drive formatted, and within 10 minutes or so ended up with a working sexy KDE Plasma setup. I’m still sometimes amazed how beautiful our default setup looks nowadays with the floating panel, frosted glass, and all. I don’t like dark mode but since that laptop has an OLED screen I opted for “Breeze Twilight” which combines light applications with a dark panel and wallpaper.
I salvaged the color profiles from the factory Windows install
As with any new device, there are a few surprises waiting for you. Like most recent laptops it has a stupid Copilot (“AI”) key. Unfortunately, rather than inventing a new key code, it emulates an actual key combination of Meta+Shift+Touchpad Off (F23 I heard, and yes, there can be more than just F12). This makes it difficult to just remap the key to something useful (like a right Meta key). However, at least you should be able to use it as a global shortcut, right? Unfortunately, you couldn’t assign it from the GUI. I have now fixed the Copilot key by allowing Shift to be used in a shortcut in conjunction with “Touchpad Off”. It’s a kludge but at least you can now make it bring up KRunner or something.
Speaking of proprietary keys, it also has a “Phone Link” function key. It is recognized as such starting from Linux kernel 6.14 but there’s no support in Xkb or Qt yet. I just sent a pull request to libxkbcommon to add it and once that lands, I’ll look into adding it to Qt. How cool would it be if under Plasma the Phone Link button would instead open KDE Connect?!
The suspend-to-idle stuff is both a great opportunity and a little scary. Modern laptops don’t do “proper” S3 suspend anymore but only S2 which works more like on a smartphone where it can wake up anytime and fast. Right now even plugging in the charger or touching the touchpad causes the machine to wake up. Luckily, if KWin detects the lid is shut and no monitor is connected, it sleeps again after a short time to prevent a hot backpack situation. Work is ongoing to make better use of this capability, to detect what caused the system to wake up. For example, when plugging in the charger, we might want to wake up, play a short animation and the “plugged in” sound and go back to sleep.
I can still return the device within the 14 day period if something major crops up but I already fell in love with its 120 Hz OLED 16:10 display (I luckily don’t seem to be sensitive to the 240 Hz PWM it uses), so I don’t think I’ll be returning it :-)
During all my time with KDE projects, I've never made an app from scratch.. Except now.
Today my first KDE app ever, KomoDo, was released on Flathub!
It's a simple todo application for the todo.txt format; it parses todo.txt
files into a list of cards. One task is one card.
The application has help sections for all of the rules the todo.txt format follows, but you can
use as many or as few of them as you want.
In this blogpost I wanted to go through the process, or what I remember of it anyway.
I've always just fixed bugs in Plasma or Frameworks or even in apps like Dolphin, but never made anything from scratch.
So I thought that writing this down might be helpful for someone in a similar position.
As with any KDE project, it's open source and the repository can be explored right here: utilities/komodo.
Starting up
Starting a Qt project using KDE libraries, especially QtQuick, is not that difficult in the end.
KDE has really good documentation for starting up.
I mostly followed this guide to get started: https://develop.kde.org/docs/getting-started/kirigami/setup-cpp/
There are also guides for Python and Rust if you wish to use those instead of C++.
I also highly recommend using kde-builder for building
and running the applications. It makes things so much easier.
Other than that, there's not much to tell about the setup.
The project itself
Working on the application was not that bad either. I've worked on various C++ and QML files
during my work on KDE software, so I had an easy time getting into the flow.
I think the most difficult part for me was the CMake files: I couldn't really understand them or
what they do, so I mostly followed what other projects did and replicated them.
This of course caused me to do unnecessary things, like installing libraries I made for the app:
TodoModel, which the QML code uses to parse the todo.txt file and generate the view, was accidentally installed
among the system libraries. That was not ideal, since I'm not building a framework.
Luckily with tons of help and reviews from my friends in the KDE fam, we got CMake to build things nicely.
Then with kde-builder the feedback loop of code->build->test was fast enough, especially in a small app like this,
that I could iterate on the application at a good pace.
I also had a lot of help from a friend with the CI and sysadmin stuff. That side of development is
very confusing to me, so I'm glad for the help. I dunno if you want to be named, so I didn't name you, but you know
who you are: Big thanks. :)
TodoModel
Since Qt relies on the model-view system, I had to make a "model" that parses
the todo.txt file and spits out Todo objects, which the QML code, the frontend, can then display.
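For anyone new to the pattern, a minimal QML-facing list model looks roughly like this. This is only a sketch (assuming a simple Todo struct), not KomoDo's actual TodoModel:

// Sketch only, not KomoDo's actual code: a minimal list model
// exposing parsed tasks to QML.
#include <QAbstractListModel>

struct Todo {
    QString description;
    bool done = false;
};

class TodoModel : public QAbstractListModel
{
    Q_OBJECT
public:
    enum Roles { DescriptionRole = Qt::UserRole + 1, DoneRole };

    int rowCount(const QModelIndex &parent = QModelIndex()) const override
    {
        // A flat list: no rows below any valid parent.
        return parent.isValid() ? 0 : int(m_todos.size());
    }

    QVariant data(const QModelIndex &index, int role) const override
    {
        if (!index.isValid() || index.row() >= int(m_todos.size()))
            return {};
        const Todo &todo = m_todos.at(index.row());
        switch (role) {
        case DescriptionRole:
            return todo.description;
        case DoneRole:
            return todo.done;
        }
        return {};
    }

    // These names become the properties a QML delegate reads.
    QHash<int, QByteArray> roleNames() const override
    {
        return { { DescriptionRole, "description" }, { DoneRole, "done" } };
    }

private:
    QList<Todo> m_todos;
};

A ListView in QML then instantiates one delegate (one card) per row, reading "description" and "done" like ordinary properties.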
If you have never worked on model-view systems, it can take some time to understand.
To me, model-view as a term was always a bit confusing: I would have understood it better
if it was worded something like data-ui. But tech jargon has always been my weakest point.
The parsing is done by a RegExp nightmare I concocted with the help of Stack Overflow and tears.
Here it is, in its nightmarish glory, enjoy the nightmare fuel:
Due to the nightmare fuel that RegExp can be, I decided that unit testing would be a great addition.
So I fought with CMake a bit and then made myself a proper parser testing file.
Thankfully Qt has the nice QTest library that can be used to create tests.
They can be hard to parse at first, but once you understand how they work, they're really
quick and easy to work with.
Testing saved my rear a few times during this project, especially when modifying anything parser related.
Look and feel
When the model worked fine, I started concentrating more on the look and feel of the app.
The application went through various phases: at first, everything was done through a dialog, except
viewing the cards. It was kind of distracting that every time you wanted to modify or make a new task,
the application would pop things up.
Over time though, I think I got it into a good spot: most actions can be done in the task list, but when
the user wants to delete a task, the app first asks if they are sure about it.
Then there is a help menu and quick info for syntax-related things.
Otherwise there are really no settings: I did not want to add any settings if I could avoid it.
The only setting the app saves to its config file is the file you opened last time.
I also consulted the KDE Human Interface Guidelines (HIG) a few times, but
it was never "in the way." It just helped me make decisions sometimes when I wasn't sure what to do with
the design.
Also my lovely wife @tecsiederp made the most adorable application icon ever.
Look at it. Just a lil guy.
Also, if you don't get your tasks done, he will get sad. So you better do those tasks!
It just makes me so happy and motivated to work on my tasks when I see this lil guy in my taskbar. :D
The icons however have to follow our guidelines, also mentioned in the HIG.
Luckily my wife is good with SVG files, and if she wasn't, I already had two people wanting to help me with it, which
I very much appreciated. If I had to make an icon myself, it would.. Not be good. :P
Something I struggled with the most, though, was getting nice syntax highlighting for the task text.
I wanted to use our KSyntaxHighlighting library, but it would not match all color schemes: users
may use their own or a third-party color scheme for the app, and the syntax highlighter does not have a color scheme
that matches it.. So it would look a bit odd.
So I made my own simple one that appends some color tags to the string, and the QML engine does the rest.
The text can have HTML tags like <span> in it, which QML can parse automagically.
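As a sketch of that idea (assumed code, not the app's actual highlighter), one can wrap matches in a color tag and let a Text item with textFormat: Text.RichText render the markup:

// Sketch only: wrap todo.txt projects (+project) in a color tag.
// The color and regular expression here are made up for illustration.
#include <QRegularExpression>
#include <QString>

QString highlightProjects(QString text)
{
    static const QRegularExpression projectRe(QStringLiteral("(\\+\\w+)"));
    text.replace(projectRe, QStringLiteral("<span style=\"color:#2980b9;\">\\1</span>"));
    return text;
}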
I think the app turned out to look pretty good. It also supports keyboard navigation
for getting around the app. (Though there might be some bugs with it still.)
Releasing the app
KDE has documentation for the release process, so I just followed it to the letter.
During the release process, I had to wait at least 2 weeks for people to give me reviews,
and so they did. And then I fixed things. And some more.
Eventually the app was good to go (though I was never directly told so; it was just implicitly agreed),
and I followed the last bits of the release process, setting up signing keys and such.
With Flathub, the submission process was rather painless in my opinion. I followed their
requirements and submission documentation and it all just
clicked together. Eventually someone was reviewing my PR and then.. The app is in Flathub.
I had to make a patch for Flathub specifically, but it was also really painless so I didn't mind.
What it all mostly took from me is time. Otherwise it was just nice and easy.
More apps?
So at least in my experience, the whole process was rather easy. I didn't want to go into
nitty-gritty details too much.. Because I don't remember everything + you can see the source code + the process itself
happens in the open.
I'm definitely interested in making more KDE apps, but also I have my game project(s).
And I really want to get back into gamedev. Like, spend much more time on it.
At least now I have a todo app which fits my needs to keep track of the gamedev projects. :)
Thanks for reading! And if you're looking for new todo application, give KomoDo a try! :)
The 46th annual TeX Users Group conference (TUG2025) took place in Kerala during July 18–20, 2025. I’ve attended and presented at a few of the past TUG conferences; but this time was different, as I was to present a paper and help organize the conference. This is a personal, incomplete (and potentially hazy around the edges) reflection on my experience organizing the event, which had participants from many parts of Europe, the US and India.
Preparations
The Indian TeX Users Group, led by CVR, has conducted TUG conferences in 2002 and 2011. We, a group of about 18 volunteers led by him, convened as soon as the conference plan was announced in September 2024, and started creating todo lists and schedules and assigning responsible persons for each.
STMDocs campus has excellent conference facilities including a large conference hall, audio/video systems, high-speed internet with fallback, redundant power supply etc., making it an ideal choice, as in 2011. Yet we prioritized the convenience of the speakers and delegates, to avoid travel to and from a hotel in the city — prior experience showed it is best to locate the conference facility close to where people stay. We scouted a few hotels with good conference facilities in Thiruvananthapuram city, and finalized the Hyatt Regency; even though we had to take on greater responsibility and coordination, as they had no prior experience organizing a conference with requirements similar to TUG’s. Travel and visit advisories were published on the conference web site as soon as details were available.
Projector, UPS, display connectors, microphones, WiFi access points and a lot of related hardware were procured. Conference materials such as t-shirt, mug, notepad, pen, tote bag etc. were arranged. Noted political cartoonist E.P. Unny graciously drew the beloved lion sketches for the conference.
Karl Berry, from the US, orchestrated mailing lists for coordination and communication. CVR, Shan and I assumed the responsibility of answering speaker & delegate emails. At the end of the extended deadline for submitting presentations and prerecorded talks, Karl handed over to us the archive of those to use with the audio/video system.
Audio/video and live streaming setup
I traveled to Thiruvananthapuram a week ahead of the conference to be present in person for the final preparations. One of the important tasks for me was to set up the audio/video and live streaming for the workshop and conference. The audio/video team and volunteers in charge did a commendable job of setting up all the hardware and connectivity on the 16th evening, and we tested presentations, video playback, the projector, audio in/out, the prompt, the clicker, microphones and live streaming. There was no prompt at the hotel, so we split the screen-out to two monitors placed on both sides of the podium — this was much appreciated by the speakers later. In addition to the A/V team’s hardware and (primary) laptop, two laptops (running Fedora 42) were used: a hefty one to run the presentation & backup OBS setup; another for video conferencing the remote speakers’ Q&A sessions. The laptop used for presentation had a 4K screen resolution. Thanks to Wayland (specifically, KWin), the connected HDMI out can be independently configured for 1080p resolution; but it failed to drive the monitors split further for the prompt. Changing the laptop’s built-in display resolution to 1080p as well fixed the issue (maybe changing from a 120 Hz refresh rate to 60 Hz might also have helped, but we didn’t fiddle any further).
I also met Erik Nijenhuis, who was hand-rolling a cigarette (which turned out to be quite in demand during and after the conference), in front of the hotel, to receive a copy of the book ‘The Stroke’ by Gerrit Noordzij that he kindly bought for me — many thanks!
Workshop
The ‘Tagging PDF for accessibility’ workshop was conducted on 17th July at the STMDocs campus — the A/V systems & WiFi were set up and tested a couple of days prior. Delegates were picked up at the hotel in the morning and dropped off after the workshop. Registration of workshop attendees was done on the spot, and we collected speaker introductions to share with session chairs. I had interesting discussions with Frank Mittelbach and Boris Veytsman during lunch.
Reception & Registration
There was a reception at the Hyatt on the 17th evening, where almost everyone got registered and collected the conference material with the program pre-print, t-shirt, mug, notepad & pen, a handwritten (by N. Bhattathiri) copy of Daiva Daśakam, and a copy of the LaTeX tutorial. All delegates introduced themselves — but I had to step out at that exact moment to get into a video call to prepare for the live Q&A with Norman Gray from the UK, who was presenting remotely on Saturday. There were two more remote speakers — Ross Moore from Australia and Martin J. Osborne from Canada — with whom I conducted the same exercise, albeit at inconvenient times for them. Frank Mittelbach needed to use his own laptop for his presentation, so we tested the A/V & streaming setup with that too. Doris Behrendt had a presentation with videos; its setup was also tested & arranged.
An ode to libre software & PipeWire
I tried to use a recent MacBook for the live video conferencing of remote speakers, but it failed miserably to detect the A/V splitter connected via USB to pick up the audio in and out. Resorting to my old laptop running Fedora 42, the devices were detected automagically, and PipeWire (plus WirePlumber) made them instantly available for use.
With everything organized and tested for A/V & live streaming, I went back to get some sleep to wake early on the next day.
Day 1 — Friday
Woke up at 05:30, reached the hotel by 07:00, and met with some attendees during breakfast. By 08:45, the live stream for day 1 started. Boris Veytsman, the outgoing vice-president of TUG, opened TUG2025 and handed over to the incoming vice-president and session chair Erik Nijenhuis, who then introduced Rob Schrauwen to deliver the keynote titled ‘True but Irrelevant’, reflecting on the design of the Elsevier XML DTD for archiving scientific articles. It was quite enlightening, especially when one of the designers of a system looks back at the strengths, shortcomings, and impact of their design decisions; approached with humility and openness. Rob and I had a chat later about the motto of validating documents and its parallel with the IETF’s robustness principle.
You may see a second stream for day 1; this is entirely my fault, as I accidentally stopped streaming during the tea break and started a new one. The group photo was taken after a few exercises in cat-herding.
All the talks on day 1 were very interesting: many were about the PDF tagging project (those of Mittelbach, Fischer, & Moore); the state of CTAN by Braun — to which I had a suggestion that the inactive package maintainer process consider some Linux distributions’ procedures; Vrajarāja explained their use of XeTeX to typeset in multiple scripts; Hufflen spoke of his experience teaching LaTeX to students; Behrendt & Busse talked about the use of LaTeX in CrypTool; and CVR presented the long-running project of archiving Malayalam literary works in TEI XML format using TeX and friends. The session chairs, speakers and audience were all punctual and kept their allotted time in check, with many followup discussions happening during the coffee breaks, which had ample time so the sessions didn’t feel rushed.
Ross Moore’s talk was prerecorded. As the video played out, he joined via a video conference link. The audio in/out & video out (for projecting on screen and for live streaming) were connected to my laptop, and we could hear him through the audio system while the audience’s questions were relayed to him via microphone with no lag — this worked seamlessly (thanks to PipeWire). We had a small problem with pausing a video, which locked up the computer running the presentation; but we quickly recovered — after the conference, I diagnosed it to be a nouveau driver issue (a GPU hang).
By the end of the day, Rahul & Abhilash were accustomed to driving the presentations and live streams, so I could hand over the reins and enjoy the talks. I decided to stay back at the hotel to avoid travel, and went to bed by 22:00, but sleep descended on this poor soul only by 04:30 or so; thanks to that cup of ristretto for breakfast!
Day 2 — Saturday
Judging by the ensuing laughs and questions, it appears not everyone was asleep during my talk. Frank & Ulrike suggested not colouring the underscore glyph in math, and instead properly colouring LaTeX3 macro names (which can have underscores and colons in addition to letters) in the font.
The sessions on the second day were also varied and interesting, in particular Novotný’s talk about static analysis of LaTeX3 macros; Vaishnavi’s fifteen-year-long project of researching and encoding the Tulu-Tigalari script in Unicode; the bibliography processing talks, separately by Gray and Osborne (both appeared via video conference for live Q&A, which worked like a charm), etc.
I had interesting discussions with many participants during lunch and coffee breaks. I mentioned to Ben Davies from Overleaf that many résumés I get nowadays are done in LaTeX, even when the person has no knowledge of it — a sign of TeX going mainstream, in some sense. Ben agreed that it would make sense to set the first/default project in Overleaf to a résumé template. I did rehash my concern, shared at TUG2023, that the no-error-stop mode in Overleaf leaves much to be desired, as I often encounter documents that do not compile — corroborated by Linas Stonys from VTeX.
In the evening, all of us walked (the monsoon rain was in respite) to the music and dance concerts, both of which were a fantastic cultural & audio-visual experience.
Veena music, and fusion dance concerts.
Day 3 — Sunday
The morning session of the final day had a few talks: Rishi lamented the eroding typographic beauty in publishing (which Rob concurred with, and which Vrajarāja had earlier pointed out as a reason for choosing TeX, …); Doris spoke on the LaTeX village at CCC — and about ‘tuwat’ (taking action). The TeX Users Group annual general body meeting, presided over by Boris, was the first session post lunch; then came his talk on an approach to solving the editorial review process of documents in TeX; and a couple more talks: Rahul’s presentation about PDF tagging used our opentype font for syntax highlighting (yay!); and the lexer developed by the Overleaf team was interesting. On Veeraraghavan’s presentation about challenges faced by publishers, I had a comment about the recurrent statement that “LaTeX is complex” — LaTeX is not complex, but the scientific content is complex, and LaTeX is still the best tool to capture and represent such complex information.
Two Hermann Zapf fans listening to one who collaborated with Zapf [published with permission].
Calligraphy
For the final session, Narayana Bhattathiri gave us a calligraphy demonstration in four scripts — Latin, Malayalam, Devanagari and Tamil — which was very well received, judging by the applause. I was deputed to explain what he does, and also to translate for the Q&A session. For the next half hour, he obliged the audience’s requests to write names: their own, a spouse’s or children’s, even a bär, or, as Hàn Thế Thành wanted, Nhà khủng lồ (the house of dinosaurs, the name for the family group).
Bhattathiri signing his calligraphy work for TUG2025.
Nijenhuis was also giving away swag from Xerdi, and I made the difficult choice between a pen and a pendrive, opting for the latter.
The banquet followed, where in between enjoying delicious food I found time to meet and speak with even more people and say goodbyes and ‘tot ziens’.
Later, I had some discussions with Frank about generating MathML using TeX.
Many thanks
A number of people during the conference shared their appreciation of how well it was organized; this was heartwarming. I would like to express thanks to the many people involved, including the TeX Users Group; the sponsors (who made it fiscally possible to run the event and supported many travels via bursary); the STMDocs volunteers who handled many other responsibilities of organizing; the audio-video team (who were very thoughtful to place the headshot of speakers away from the presentation text); the unobtrusive hotel staff; and all the attendees, especially the speakers.
Thanks particularly to those who stayed at and/or visited the campus, for enjoying the spicy food and delicious fruits from the garden, and for surviving the long techno-socio-eco-political discussions. Boris seems to have taken to heart my request for a copy of The TeXbook signed by Don Knuth — I cannot express the joy & thanks in words!
The TeXbook signed by Don Knuth.
The recorded videos were handed over to Norbert Preining, who graciously agreed to make the individual lectures available after processing. The total file size was ~720 GB, so I connected the external SSD to one of the servers and made it available to a virtual machine via USB passthrough; then mounted it and made it securely available for copying remotely.
Special note of thanks to CVR, and to Karl Berry — who I suspect is actually a Kubernetes cluster running hundreds of containers each doing a separate task (with apologies to a thousand gnomes), but there are reported sightings of him, so I sent personal thanks via people who have seen him in the flesh — for leading and coordinating the conference organizing. Barbara Beeton and Karl copy-edited our article for the TUGboat conference proceedings, which is gratefully acknowledged. I had a lot of fun and a lot less stress participating in the TUG2025 conference!
I write this in the wake of a personal attack against my work and a project that is near and dear to me. Instead of spreading vile rumors and hearsay, talk to me. I am not known to be ‘hard to talk to’ and am wide open for productive communication. I am disheartened and would like to share some thoughts of the importance of communication. Thanks for listening.
Open source development thrives on collaboration, shared knowledge, and mutual respect. Yet sometimes, the very passion that drives us to contribute can lead to misunderstandings and conflicts that harm both individuals and the projects we care about. As contributors, maintainers, and community members, we have a responsibility to foster environments where constructive dialogue flourishes.
The Foundation of Healthy Open Source Communities
At its core, open source is about people coming together to build something greater than what any individual could create alone. This collaborative spirit requires more than just technical skills—it demands emotional intelligence, empathy, and a commitment to treating one another with dignity and respect.
When disagreements arise—and they inevitably will—the manner in which we handle them defines the character of our community. Technical debates should focus on the merits of ideas, implementations, and approaches, not on personal attacks or character assassinations conducted behind closed doors.
The Importance of Direct Communication
One of the most damaging patterns in any community is when criticism travels through indirect channels while bypassing the person who could actually address the concerns. When we have legitimate technical disagreements or concerns about someone’s work, the constructive path forward is always direct, respectful communication.
Consider these approaches:
Address concerns directly: If you have technical objections to someone’s work, engage with them directly through appropriate channels
Focus on specifics: Critique implementations, documentation, or processes—not the person behind them
Assume good intentions: Most contributors are doing their best with the time and resources available to them
Offer solutions: Instead of just pointing out problems, suggest constructive alternatives
Supporting Contributors Through Challenges
Open source contributors often juggle their community involvement with work, family, and personal challenges. Many are volunteers giving their time freely, while others may be going through difficult periods in their lives—job searching, dealing with health issues, or facing other personal struggles.
During these times, our response as a community matters enormously. A word of encouragement can sustain someone through tough periods, while harsh criticism delivered thoughtlessly can drive away valuable contributors permanently.
Building Resilient Communities
Strong open source communities are built on several key principles:
Transparency in Communication: Discussions about technical decisions should happen in public forums where all stakeholders can participate and learn from the discourse.
Constructive Feedback Culture: Criticism should be specific, actionable, and delivered with the intent to improve rather than to tear down.
Recognition of Contribution: Every contribution, whether it’s code, documentation, bug reports, or community support, has value and deserves acknowledgment.
Conflict Resolution Processes: Clear, fair procedures for handling disputes help prevent minor disagreements from escalating into community-damaging conflicts.
The Long View
Many successful open source projects span decades, with contributors coming and going as their life circumstances change. The relationships we build and the culture we create today will determine whether these projects continue to attract and retain the diverse talent they need to thrive.
When we invest in treating each other well—even during disagreements—we’re investing in the long-term health of our projects and communities. We’re creating spaces where innovation can flourish because people feel safe to experiment, learn from mistakes, and grow together.
Moving Forward Constructively
If you find yourself in conflict with another community member, consider these steps:
Take a breath: Strong emotions rarely lead to productive outcomes
Seek to understand: What are the underlying concerns or motivations?
Communicate directly: Reach out privately first, then publicly if necessary
Focus on solutions: How can the situation be improved for everyone involved?
Know when to step back: Sometimes the healthiest choice is to disengage from unproductive conflicts
A Call for Better
Open source has given us incredible tools, technologies, and opportunities. The least we can do in return is treat each other with the respect and kindness that makes these collaborative achievements possible.
Every contributor—whether they’re packaging software, writing documentation, fixing bugs, or supporting users—is helping to build something remarkable. Let’s make sure our communities are places where that work can continue to flourish, supported by constructive communication and mutual respect.
The next time you encounter work you disagree with, ask yourself: How can I make this better? How can I help this contributor grow? How can I model the kind of community interaction I want to see?
Our projects are only as strong as the communities that support them. Let’s build communities worthy of the amazing software we create together.
You click a link in your chat app, your browser with a hundred tabs comes to the front and opens that page. How hard can it be? Well, you probably know by now that Wayland, unlike X, doesn’t let one application force its idiot wishes on everyone else. In order for an application to bring its window to the front, it needs to make use of the XDG Activation protocol.
A KWrite window that failed to activate and instead is weeping bitterly for attention in the task bar
In essence, an application cannot take focus, it can only receive focus. In the example above, your chat app would request an XDG Activation token from the compositor. It then asks the system to open the given URL (typically launching the web browser) and sends along the token. The browser can then use this token to activate its window.
This token is just a magic string; it doesn’t matter how it gets from one application to another. Typically, a new application is launched with the XDG_ACTIVATION_TOKEN variable in its environment. When activating an existing one, an activation-token property is added to the platform_data dict sent via DBus. There are also older protocols that weren’t designed with this in mind, such as Notifications, StatusNotifierItem (tray icons), or PolKit requests, where we cannot change the existing method signatures. Here we instead added some way to set a token just before the actual call.
However, just because you have a token doesn’t mean you can raise your window! The compositor can invalidate your token at any time and reject your activation request. The idea is that the compositor gets enough information to decide whether the request is genuine or some application popping up a dialog in the middle of you typing something. A token request can include the surface that requests the activation, the input serial from the focus or mouse event that resulted in this request, and/or the application ID of the application that should be activated. While all of this is optional (and there can be valid reasons why you don’t have a particular piece of information at this time), the compositor is more likely to decline activation if the information is incomplete or doesn’t match what the requesting application provided.
A lot of places in Qt, KDE Frameworks, and other toolkits and applications have already been adjusted to this workflow and work seamlessly. For example, calling requestActivate on a QWindow will check if there is an XDG_ACTIVATION_TOKEN in the environment and use it, otherwise request one. Qt also does this automatically when the window opens to match the behavior of other platforms. Likewise, things like ApplicationLauncherJob and OpenUrlJob will automatically request a token before proceeding. On the other hand, KDBusService (for implementing single instance applications) automatically sets the corresponding environment variable when it received a token via DBus. Together this makes sure that most KDE applications just work out of the box.
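As a rough illustration (not from the post), the launcher side can look something like this using KWindowSystem’s XDG Activation helpers; the app id used here is hypothetical, and the exact signatures are worth verifying against the current KWindowSystem documentation:

// Sketch: ask the compositor for an activation token and hand it to
// the process we are about to launch. Verify signatures against the
// KWindowSystem documentation for your version.
#include <KWindowSystem>
#include <QObject>
#include <QWindow>

void launchWithToken(QWindow *sourceWindow, quint32 inputSerial)
{
    QObject::connect(KWindowSystem::self(), &KWindowSystem::xdgActivationTokenArrived,
                     sourceWindow, [](int /*serial*/, const QString &token) {
        // Hand the token over, e.g. via the environment of the child
        // process, or as the "activation-token" entry in platform_data
        // when activating an existing application over DBus.
        qputenv("XDG_ACTIVATION_TOKEN", token.toUtf8());
        // ... now spawn the target application ...
    });
    // Pass the requesting surface, the input serial from the triggering
    // event, and the (hypothetical) app id that should be activated.
    KWindowSystem::requestXdgActivationToken(sourceWindow, inputSerial,
                                             QStringLiteral("org.example.app"));
}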
You might be wondering: didn’t KWin-X11 have “focus stealing prevention”? It sure does. There’s a complicated set of heuristics based on _NET_WM_USER_TIME to judge whether a new window appeared as a result of explicit user interaction or is unsolicited. Remember how back in ye olde days, KWin’s focus stealing prevention would keep the Adobe Flash Player fullscreen window from showing on top of the YouTube video you were watching? Yeah, it’s not perfect. KWin can also only react to things that have already happened. For instance, when an application uses XSetInputFocus on a window from a different application, KWin will detect that, consider it a malicious request, and restore the previous focus, but for a split second the focus did change. If you want to know more, there’s a 200+ line comment in activation.cpp in KWin’s git repo that explains it all. But then again, the application could just do whatever it wants and bypass all of this.
Xtreme Focus Stealing Prevention
Unfortunately, there are still a few places that don’t do XDG Activation correctly. It didn’t matter much under X (when in doubt we could just forceActiveWindow), but now we have to fix those scenarios properly! In order to test whether your application is well-behaved, use the latest git master branch of KWin and set “Focus Stealing Prevention” in the Window Management settings to “Extreme”. This will make KWin activate a window if and only if it requests activation with a valid token.
Using this, over the past couple of days Xaver Hugl of KWin fame and I fixed a bunch of issues, including but not limited to:
Dolphin threw away its token before activating its main window when launching a new instance (activating an existing one worked fine)
KRunner, Kickoff, and other Plasmoid popups did not request activation at all
LayerShell-Qt now requests activation on show (to match Qt behavior)
LayerShell-Qt didn’t read the XDG_ACTIVATION_TOKEN from the environment when provided
Privileged clients, like Plasma and KGlobalAccel, were unable to request tokens in some situations
Modifier key presses no longer count towards focus stealing prevention: they’re often used as part of a global keyboard shortcut and don’t necessarily mean the user is interacting with the active window
Furthermore, the DBusRunner specification gained a SetActivationToken method which is called just before Run. The Baloo (desktop search) runner now uses this to ensure opening files in an existing application window works. Likewise for the KClock runner, bringing KClock to the front properly. I further improved the recent documents runner and places runner to send the file type to the OpenUrlJob so it doesn’t have to determine it again. This makes the job much quicker and avoids KRunner closing before the activation token is requested by the job. However, we have yet to find a proper solution for this in KRunner.
With all of this in place, we’ll likely switch on KWin’s focus stealing prevention on Wayland at a low level and make it gradually stricter as applications are being fixed.
After adding the ability to drag and reposition the Selection Action Bar on the canvas last week, I spent this week improving that interaction. I tackled the issue where users could drag the toolbar completely off-screen, making it inaccessible. This week was all about keeping the toolbar within the canvas boundaries.
Side note: this week I also updated the UI to resemble the mockup concept provided by the community.
Obstacles to Implementation
During testing I noticed that without boundaries, the draggable toolbar could end up hidden and inaccessible, especially after window or sidebar resizing. This would be frustrating for users, because they would have to hunt for the missing toolbar or toggle the feature off and back on just to relocate it. I wanted to restrict the toolbar's positioning to the canvas boundaries to keep the toolbar within the user's vision.
Find Canvas Boundaries
Looking into the code, I found that the canvasWidget has a QRect rect() accessor which holds the dimensions of the canvas. We can use these values as the boundaries.
Add Boundaries to Drag Event
With canvasBounds defined, we can limit the drag operation to stay within the canvas dimensions. During a drag event, we update the horizontal and vertical limits of the toolbar position based on the current canvas dimensions, using qBound, which clamps a value between a lower and upper boundary. So if the coordinates of the toolbar were to go past the boundaries, qBound would return the defined upper or lower boundary value. On top of the canvas boundaries, it is important to include the dimensions of the toolbar and some buffer room in the calculation, to make sure the UI elements remain accessible.
// updated drag event with canvas boundaries
QPoint newPos = mouseEvent->pos() - d->dragStartOffset;
QRect canvasBounds = canvasWidget->rect();
int actionBarWidth = 125;
int actionBarHeight = 25;
int bufferSpace = 5;
newPos.setX(qBound(canvasBounds.left() + bufferSpace,
                   newPos.x(),
                   canvasBounds.right() - actionBarWidth - bufferSpace));
newPos.setY(qBound(canvasBounds.top() + bufferSpace,
                   newPos.y(),
                   canvasBounds.bottom() - actionBarHeight - bufferSpace));
d->dragRectPosition = newPos;
canvasWidget->update();
return true;
Conclusion
This week was focused on improving the user experience and accessibility of the Selection Action Bar. I learned about some great Qt tools, like the properties of QWidget and Qt helper functions like qBound to help solve my problem! In the coming final weeks, I'll continue adding action buttons based on the list provided by the community.
Contact
To anyone reading this, please feel free to reach out to me. I'm always open to suggestions and thoughts on how to improve as a developer and as a person. Email: ross.erosales@gmail.com Matrix: @rossr:matrix.org
Since the last update two months ago, KDE Itinerary got support for manually added train and bus trips, a more flexible
alternative connection search, a new departure details view and a better location search, among many other improvements.
New Features
Manual trip entry
Being able to manually enter train or bus trips, rather than selecting them from timetable
data, has often been requested and is now finally available.
New menu for adding manual train, bus, ferry or flight connections.
For all modes of transportation there are now two modes: manually entered data,
where you can freely change departure and arrival locations and times, and schedule-backed
data for trips added from public transport searches, where you can change the departure
and arrival stops only based on what's actually in the schedule.
The latter is now also available for ferry trips.
Intermediate stops for a ferry.
The various add and import actions have also been consolidated
in a single menu on the trip page. Importing backups remains available on the My Data page.
New combined add/import context menu.
Generalized alternative connection search
The alternative connection search is no longer limited to just trains and buses either;
it can now also be applied to ferry trips and flights. Likewise, all modes of transportation
in the results can be added that way, including ferries and flights.
Additionally, the alternative connection search now allows you to select any transfer stop
as the destination, rather than being limited to just the first or the last one as before.
Destination selector in alternative connection search.
New public transport departures view
There’s also an entirely new and much more detailed public transport departures view.
You’ll find that behind the new context menu on locations in the details pages.
New location context menu.
The departure list now automatically updates when looking at current departures. Also, you can select an
individual entry to get a whole set of additional information, where available:
Service notes and alerts.
Occupancy levels.
The full trip run in a schedule view and as a map.
The exact departure location on an indoor station map.
New departure details page.
Events
Earlier this month we had the first Transitous Hack Weekend,
covering many topics relevant for Itinerary as well, ranging from making Transitous
long-term sustainable, to improving the data coverage and quality, to expanding the routing capabilities.
Everything mentioned in this section also benefits KTrip.
Geocoding
Geocoding is the process behind the location search for journey planning.
It produced some confusing and undesired results in the past, and should noticeably
improve with the 25.08 release.
MOTIS, Transitous and
KPublicTransport gained support for geographic bias areas
for geocoding queries. This is used in the location search based on the selected country.
With this you should now always have results from the right country at the top, while still not
having a hard filter, which can be inconvenient in border areas.
Automatic backend selection for geocoding has been improved. This should fix the issue that in some
countries locations were only searched in a few small regions rather than the entire area.
Italy and the US, for example, were affected by this.
The city, region and country a result is in are now displayed in more cases in a second line under
the address or station name. This should further help to disambiguate results.
Online updates for public transport backend configurations
Access to public transport data still depends on over 70 operator services, besides Transitous.
Many of those aren't really meant for 3rd-party use and thus have no proper change management,
which means they can randomly change their URL, need a new custom certificate, need new or changed
parameters, or just disappear entirely.
Adapting to a lot of that is merely a matter of changing configuration files that are part of KPublicTransport,
but so far it could take up to a month for those changes to reach users of release packages.
As this can severely impact functionality, we now have the infrastructure to update the KPublicTransport
backend configuration files, as well as supporting data such as the coverage information, without
waiting for the next software update.
This is meant to happen automatically eventually; while we are still testing it, it's available as
a context action on the public transport backend configuration page.
Pickup/dropoff constraints
Another new feature in MOTIS and Transitous is pickup and dropoff constraints, that is, a way to describe
that you cannot board or cannot alight from a vehicle at a given stop. While that isn't all that common,
it can be quite important to know.
KPublicTransport also gained support for this, not only for MOTIS but also for OpenTripPlanner
and HAFAS backends. In the UI you can then see this in a few places:
Changing the departure or arrival stop when editing only allows selecting those stops that
actually allow boarding/alighting, respectively.
Stops that allow neither boarding nor alighting are shown as such in the journey overview.
This sometimes occurs for border crossings or Switzerland advertising significant rail infrastructure.
Intermediate stops not allowing boarding or alighting.
Fixes & Improvements
Travel document extractor
Added or improved travel document extractors for Amadeus, Arriva, BlablaBus, booking.com, DB, Deutscher Alpenverein, DJH, DSB, Easyjet, Eurostar, Finnair, Flixbus, Globtour, Gopass, hostelworld, Leo Express, LeShuttle, MÁV, Opodo, Ouigo ES, Reenio, SNCB, SNCF, tickets.ua and Tito.
Improved matching and merging of bound and unbound train/bus trips.
Improved matching and merging of seat reservations with seats listed in different orders.
All of this has been made possible thanks to your travel document donations!
Public transport data
New or fixed backend configurations for KVB, South Tyrol, ZKS and the Varsinais-Suomen OpenTripPlanner instance.
Onboard API support for XiamenAir.
Fix parsing of bike carriage and wheelchair accessibility capabilities from OpenTripPlanner.
Fix parsing of flight numbers from Entur.
Support for more vehicle features from DB.
Correctly update country coverage list in the stop picker when enabling/disabling backends.
Filter out journey results with implausibly long detours. This gets rid of the “creative” connections
between the major stations in Paris by backends that can’t do local transport routing there.
Fix parsing of operator names from EFA.
Train station maps
Support for more OSM tagging schemes for elevators, tram infrastructure and platform sections.
Improved matching of platform names when faced with language-specific leading abbreviations.
Cancel ongoing tile downloads before starting a new map load.
Better recovery from OSM level range tagging mistakes.
Better recovery from failed tile downloads.
Itinerary app
Added several more export options: Wallet passes, program memberships and standalone passes
can now also be exported, and single reservations can be exported as a GPX file.
Reworked journey progress display, which is now active in all journey views and should be
much more reliable, especially in the timeline view.
Standalone Wallet passes can now also be updated explicitly when they contain an online update URL.
Improved default zoom in the journey and trip map views.
Fixed display of addresses with unresolved ISO 3166-2 region codes.
Determine a correct trip end time in more cases.
Fixed price display when there’s no home country configured.
Also show notes/service alerts in the stop info dialog on the journey map.
Fixed handling of Wallet passes with a case-sensitive identifier.
How you can help
Feedback and travel document samples are very much welcome, as are all other forms of contributions.
Feel free to join us in the KDE Itinerary Matrix channel.