Back in my second year of college, I had just started exploring functional programming. I was picking up Haskell out of curiosity - it felt different, abstract, and honestly a bit intimidating at first. Around the same time, I was also diving into topics like context-free grammars, automata theory, parse trees, and the Chomsky hierarchy - all the foundational concepts that explain how programming languages are parsed, interpreted, and understood by machines.
Somewhere along the way, it hit me: what if I could build something with both? What could be more fun than writing an interpreter for an imperative programming language using a functional one? That idea stuck - and over the next few weeks, I set out to build a purely functional monadic interpreter in Haskell.
I designed the grammar for the language myself, mostly inspired by Python. I wanted it to support loops, conditionals, variable assignments, print statements, and basic arithmetic, boolean, and string operations. It even has a “++” operator for string concatenation. Writing the grammar rules involved figuring out how to model nested blocks, expressions with precedence, and side-effect-free evaluation. I built the entire thing using monadic parser combinators—no parser generators or external libraries, just Haskell’s type system and some stubbornness.
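The combinator idea itself is language-agnostic: a parser is just a function from input to a list of (result, remaining input) pairs, and "monadic" means parsers compose through a bind operation. Here is a minimal illustration of that style in Python (all names here are illustrative, not taken from the actual Haskell code):

```python
# A parser is a function: str -> list of (result, remaining input) pairs.
# An empty list means the parse failed. This mirrors the classic monadic
# parser combinator style; names are illustrative only.

def char(c):
    """Parse a single expected character."""
    return lambda s: [(c, s[1:])] if s[:1] == c else []

def pure(v):
    """Succeed without consuming input (the monad's 'return')."""
    return lambda s: [(v, s)]

def bind(p, f):
    """Monadic bind: run p, then feed its result to f to get the next parser."""
    return lambda s: [pair for (v, rest) in p(s) for pair in f(v)(rest)]

def many(p):
    """Apply p zero or more times, collecting the results into a list."""
    def go(s):
        out = p(s)
        if not out:
            return [([], s)]
        v, rest = out[0]
        vs, rest2 = go(rest)[0]
        return [([v] + vs, rest2)]
    return go

# Example: recognize the "++" string-concatenation operator.
concat_op = bind(char('+'), lambda _: bind(char('+'), lambda _: pure('++')))
print(concat_op("++rest"))  # -> [('++', 'rest')]
```

Larger grammars fall out of composing these pieces: alternation, sequencing, and repetition are all ordinary functions over parsers.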
Here’s a rough look at the grammar that powers the interpreter:
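A sketch of what such a grammar can look like, in EBNF-style notation (this is a reconstruction inferred from the features described above and the sample program below; the rule names and precedence split are illustrative, not the author's exact rules):

```
program    ::= block
block      ::= "{" { statement } "}"
statement  ::= assignment ";" | "print" "(" expr ")" ";" | while | block
assignment ::= identifier "=" expr
while      ::= "while" "(" expr ")" block
expr       ::= bexpr { ("&&" | "||") bexpr }
bexpr      ::= aexpr [ ("==" | "!=" | "<" | ">") aexpr ]
aexpr      ::= term { ("+" | "-" | "++") term }
term       ::= factor { ("*" | "/") factor }
factor     ::= number | string | identifier | "(" expr ")"
comment    ::= "#" text "#"
```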
The interpreter parses the source code using this grammar, builds an abstract syntax tree, and evaluates it by simulating an environment. There’s no mutation—it just returns a new environment every time a variable is assigned or a block is executed.
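The environment-threading technique can be shown in a few lines of Python (the interpreter itself is Haskell; this sketch only illustrates the idea, with illustrative names):

```python
# Purely functional evaluation: instead of mutating a variable table,
# each statement takes an environment and returns a *new* one.
# Illustrative sketch only; the real interpreter does this in Haskell.

def assign(env, name, value):
    """Return a new environment with the binding added; env is untouched."""
    new_env = dict(env)
    new_env[name] = value
    return new_env

def eval_while(env, cond, body):
    """Run a loop by repeatedly threading the environment through the body."""
    while cond(env):
        env = body(env)
    return env

# Roughly:  i = 5; while (i != 0) { i = i - 1; }
env0 = assign({}, "i", 5)
env1 = eval_while(env0,
                  cond=lambda e: e["i"] != 0,
                  body=lambda e: assign(e, "i", e["i"] - 1))
print(env0["i"], env1["i"])  # -> 5 0
```

Note that `env0` is unchanged after the loop runs: every step produced a fresh environment rather than updating the old one in place.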
Running it is simple enough. After compiling with GHC, it reads the program from stdin and prints the resulting variable bindings and any output generated by print() statements.
ghc -o interpreter interpreter.hs
./interpreter
Here’s a sample program to show how it works:
{
    i = 5;
    a = (4 < 3) || 6 != 7;
    print(a);
    # First While! #
    while(i != 0 && a)
    {
        print(i);
        i = i - 1;
    }
}
Output:
a      True
i      0
print  True 5 4 3 2 1
Once I had the interpreter working, I wanted to make it a bit more fun to interact with. So I built a small GUI in Python using tkinter. It’s nothing fancy—just a textbox to enter code, a button to run it, and an output area to display the result. When you click “Run,” the Python script sends the code to the Haskell interpreter and prints whatever comes back.
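A minimal sketch of that bridge might look like the following (the widget layout and names are illustrative, and the interpreter binary is assumed to be the ./interpreter built above):

```python
import subprocess

def run_code(source, cmd=("./interpreter",)):
    """Send source code to the interpreter over stdin and return its output."""
    result = subprocess.run(cmd, input=source, capture_output=True, text=True)
    return result.stdout + result.stderr

def main():
    # tkinter is imported here so the pipe helper above stays usable
    # in environments without a GUI toolkit.
    import tkinter as tk

    root = tk.Tk()
    root.title("Interpreter GUI")
    code_box = tk.Text(root, height=15)
    output_box = tk.Text(root, height=8, state="disabled")

    def on_run():
        output = run_code(code_box.get("1.0", "end"))
        output_box.configure(state="normal")
        output_box.delete("1.0", "end")
        output_box.insert("1.0", output)
        output_box.configure(state="disabled")

    code_box.pack()
    tk.Button(root, text="Run", command=on_run).pack()
    output_box.pack()
    root.mainloop()

if __name__ == "__main__":
    main()
```

The GUI never interprets anything itself; it only shuttles text to the Haskell process and displays whatever comes back.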
The entire thing—from parsing to evaluation—is written in a purely functional style. No mutable state, no IO hacks, no shortcuts. Just expressions flowing through types and functions. It’s probably not the fastest interpreter out there, but writing it did teach me a lot about how languages work under the hood.
🎉 New Clazy Release: Stability Boost & New Checks!
We’re excited to roll out a new Clazy release packed with bug fixes, a new check, and improvements to existing checks.
This release includes 34 commits from 5 contributors.
🔍 New Features & Improvements
New Check: readlock-detaching
Detects unsafe and likely unwanted detachment of member-containers while holding a read lock,
for example when calling .first() on the mutable member instead of .constFirst().
Expanded Support for Detaching Checks
Additional methods are now covered when checking for detaching temporary or member lists/maps.
This includes reverse iterators on many Qt containers and keyValueBegin/keyValueEnd on QMap.
All those methods have const counterparts that allow you to avoid detaching.
Internal Changes
With this release, Clang 19 or later is a required dependency. All older versions needed compatibility logic
and were not thoroughly tested on CI. If you are on an older version of a Debian-based distro, consider
using https://apt.llvm.org/ and compiling Clazy from source ;)
🐞 Bug Fixes
install-event-filter: Fixed crash when no child exists at the given depth. BUG: 464372
fully-qualified-moc-types: Now properly evaluates enum and enum class types. BUG: 423780
qstring-comparison-to-implicit-char: Fixed an edge case where assumptions about the function definition were fragile. BUG: 502458
fully-qualified-moc-types: Now evaluates complex signal expressions like std::bitset<int(8)> without crashing. #28
qvariant-template-instantiation: Crash fixed for certain template patterns when using pointer types.
Also, thanks to Christoph Grüninger, Johnny Jazeix, Marcel Schneider and Andrey Rodionov for contributing to this release!
Hi again! Week two was all about turning last week’s refactored EteSync resource and newly separated configuration plugin into a fully working, stable component. While the initial plugin structure was in place, this week focused on making the pieces actually work together — and debugging some tricky issues that emerged during testing.
Removing QtWidgets Dependencies with KNotification
While testing, I discovered that the original EteSync resource code used QDialog and KMessageBox directly for showing error messages or status updates. These widget-based UI elements are too heavy for a background resource and conflict with the goal of keeping the resource lightweight and GUI-free.
To address this, I replaced them with a much cleaner approach: creating KNotification instances directly. This allows the resource to send system notifications (like “EteSync sync successful” or error messages) through the desktop’s notification system, without relying on any QtWidgets. As a result, the resource is now fully compatible with non-GUI environments and no longer needs to link against the QtWidgets library.
Refactoring Settings Management for Plugin Compatibility
Another major change this week involved how the resource handles its settings.
Previously, the configuration was implemented as a singleton, meaning both the resource and its configuration plugin were sharing a single instance of the settings object. This worked in the old, tightly-coupled model, but caused conflicts in the new plugin-based architecture.
To fix this, I updated the settings.kcfgc file to set singleton=false. This change allows the resource and the configuration plugin to maintain separate instances of the settings object, avoiding interference. I also updated both etesyncclientstate.cpp and etesyncresource.cpp to properly manage their respective configurations.
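Roughly, the kcfgc change is a single line; the surrounding entries below are typical KConfigXT options shown for context, not necessarily the exact ones used by the EteSync resource:

```
# settings.kcfgc
File=settings.kcfg
ClassName=Settings
Mutators=true
Singleton=false
```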
Solving the “Zombie Window” Issue
One final issue emerged after separating the UI: the configuration wizard now appears in a separate window from the main Akonadi configuration dialog. When the wizard is completed and closes, the original configuration window — now empty and disconnected — remains open.
Clicking buttons on this leftover window causes terminal errors, since it no longer communicates with a valid process. This results in a confusing and potentially buggy experience for users.
What’s Next?
My next task is to figure out a clean way to close the original parent window when the wizard completes, ensuring a smooth and error-free configuration flow. In addition to that, I’ll begin testing the full integration between the EteSync resource and its configuration plugin to ensure everything works correctly — from saving and applying settings to triggering synchronization. This will help verify that the decoupling is both functionally solid and user-friendly.
I had mentioned a number of new Transitous features in a
previous post.
As those largely depend on the corresponding data being available, here’s an overview of how you can
help to find, add and improve that data.
Transitous
Transitous is a community-run public transport routing service built on top of the MOTIS
routing engine and thousands of datasets from all over the world. Transitous backs
public transport related features in applications like GNOME Maps,
KDE Itinerary or Träwelling.
Just like OpenStreetMap this needs people on the ground identifying issues or gaps
in the data, figuring out where things go wrong and who to talk to at the local operators to
get things fixed.
The first step to help is just comparing data you get from Transitous with the reality around
you, i.e. does the public transport schedule match what’s actually happening, and are all
relevant services included?
If there are things missing or outdated, a list of the types of datasets consumed by Transitous,
and how to inspect and add those, follows below.
The central piece here is a bunch of JSON files in the Transitous Git repository,
which define all the datasets to be used as well as a few parameters and metadata for those.
Once a day those are then retrieved, validated, filtered and post-processed for importing into
MOTIS by Transitous’ import pipeline.
Public transport schedules
The backbone of public transport routing is static GTFS schedule data;
that’s the bare minimum for Transitous to work in a region.
GTFS feeds are essentially zip files containing a set of CSV tables, making them relatively
easy to inspect, although especially nationwide aggregated feeds can get rather large.
GTFS feeds ideally contain data for several months into the future, but can nevertheless receive
regular updates. Transitous checks for updates daily, so for this to work practically we also need
a stable URL for them (that might seem obvious to you, but apparently not to all feed providers…).
We currently have more than 1800 of those,
from 55 countries. The Transitous map view gives you an impression of how well an area is covered,
each of the colored markers there is an (estimated) current position of a public transport vehicle.
Recently added coverage in South Korea.
If your area is incomplete or not covered at all, the hardest part of changing that is probably finding
the corresponding GTFS feeds. There are a few places worth looking at:
The public transport operators themselves; they might just publish data on their website.
Regional or national open data portals, especially in countries with regulation requiring public transport data to be published.
In the EU, those are called “National Access Point” (NAP).
Google Maps having public transport data in your region is a strong indicator of whether GTFS feeds even exist,
as they use those as well.
Adding a GTFS feed to Transitous is then usually just a matter of a few lines of JSON pointing
to the feed. In rare cases it might require a bit more automation work, such as in France, where there are
hundreds of small feeds to manage.
And every feed is welcome, no matter whether it’s a nationwide railway operator or a single community-run bus
to help people in a rural area, as long as it’s for a service open to the general public.
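To give a sense of how little is needed, a feed entry is on the order of the following (field names and values here are illustrative placeholders; check the existing region files in the Transitous repository for the exact schema):

```
{
  "sources": [
    {
      "name": "example-city-bus",
      "type": "http",
      "url": "https://example.org/gtfs.zip"
    }
  ]
}
```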
Realtime data
So far this is all static though. For properly dealing with delays, disruptions and all kinds of other unplanned
and short-notice service changes, we also need GTFS Realtime (RT) feeds.
Those are polled once a minute for updates.
GTFS-RT feeds come in three different flavors:
Trip updates, that is realtime schedule changes like delays, cancellations, etc.
Service alerts, that is textual descriptions of disruptions beyond a specific connection, such as upcoming construction work.
Vehicle positions, that is geographic coordinates of the current position of trains or buses.
MOTIS can handle the first two so far. Support for vehicle positions is also on the wishlist, and not just
for showing current positions on a map; vehicle positions could also be used to interpolate trip updates
when those are not available.
Adding GTFS-RT feeds to Transitous is very similar to adding static GTFS feeds, however GTFS-RT feeds usually only work
in combination with their respective static equivalent.
Combining a smaller realtime feed of a single operator with a nationwide aggregated static feed will thus usually
not work out of the box. There’s ways to exclude certain operators from a larger static feed though, so with a bit
of puzzle work this can usually be made to work as well.
GTFS-RT feeds use Protocol Buffers, but there’s nevertheless a simple way to look at their content:
curl https://the.feed.url | protoc gtfs-realtime.proto --decode=transit_realtime.FeedMessage | less
The Protocol Buffers schema file needed for this can be downloaded here.
To see the realtime coverage available in Transitous, you can toggle the color coding of vehicles
on its map view in the upper right corner. A green/yellow/red gradient shows the amount
of delay for the corresponding trip, while gray vehicles have no realtime information.
Color-coded realtime data in Amsterdam, Netherlands.
Shared mobility data
Transitous doesn’t just handle scheduled public transport though, but also vehicle sharing, which
can be particularly interesting for the first and last mile of a trip.
The data for this is provided by GBFS feeds. This includes information about the type of vehicles (bikes,
cargo bikes, kickscooters, mopeds, cars, etc) and their method of propulsion (human powered, electric, etc),
where to pick them up and where to return them (same location as pickup, designated docks of the provider, free floating
within a specific area, etc) and most importantly where vehicles are currently available.
Adding GBFS feeds to Transitous is also just a matter of a few lines of JSON. We currently
don’t have a built-in UI to see the results; showing all available vehicles on the map
is certainly on the wishlist though. GBFS is relatively easy to inspect manually; the entry
point is a small JSON manifest that contains links to JSON files with the actual information, generally
split up by how often certain aspects are expected to change.
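Roughly, such a manifest is a versioned index of per-topic endpoints, along the lines of the following (URLs and values are placeholders; the exact set of feeds varies by provider and GBFS version):

```
{
  "last_updated": 1718900000,
  "ttl": 60,
  "version": "2.3",
  "data": {
    "en": {
      "feeds": [
        { "name": "system_information",  "url": "https://example.org/gbfs/en/system_information.json" },
        { "name": "station_information", "url": "https://example.org/gbfs/en/station_information.json" },
        { "name": "free_bike_status",    "url": "https://example.org/gbfs/en/free_bike_status.json" }
      ]
    }
  }
}
```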
Same as for GTFS feeds, any service accessible to the general public is welcome here, whether it’s
a small community-run OpenBike instance or a provider with hundreds of vehicles.
On-demand services
Somewhere between scheduled transport and shared mobility are on-demand services. That is, services that require
some form of booking beforehand and might be anything from an on-demand bus that still follows a somewhat fixed
route with pre-defined stops to something closer to a taxi with a more flexible route that picks up or drops
off passengers anywhere in a given area.
These services often run at times and/or in areas with less demand, often making them the only mobility
option then and there. That makes it all the more important to have those covered as well.
Modeling on-demand services is challenging, given the variety in how those services work and their inherently very dynamic nature.
There’s the relatively new GTFS-Flex standard covering this, which
MOTIS supports since v2.0.66.
GTFS-Flex feeds might be included in static GTFS data or provided separately, and adding them to Transitous
again works with just a few lines of JSON.
There’s one caveat though: the validator we use in pre-processing, gtfsclean,
doesn’t support GTFS-Flex yet, so those feeds are currently imported without any sanity checking
or validation. Therefore we need to be extra careful when adding such feeds until that is fixed.
If you know a bit of Go and want to help with that, get in touch!
For GTFS-Flex data there’s some diagnostic visualization in the map view in debug mode,
when zooming in far enough.
Diagnostic view of on-demand service areas in Switzerland.
OSM
A crucial dataset for all road-based and in-building routing is OpenStreetMap. While that
is generally very comprehensive and up-to-date, there’s one aspect that more often needs fixes: the floor level
separation. That’s not visible in most OSM-based maps and is thus easy to miss while mapping. For Transitous this is
particularly important for in-building routing in train stations.
When zoomed in enough the map view of Transitous will offer you a floor level selector at the lower right.
That can give you a first indication if elements are misplaced (showing up on the wrong level) or not assigned to a floor level
at all (showing up on all levels). For reviewing smaller elements indoor= can also be useful,
and for fixing things JOSM has a built-in level selector on the top left.
Railway tracks running through the passenger level of Bremen central station (upper right).
In most cases adding or fixing the level tag is all that’s needed.
Elements allowing to move between levels (stairs, ramps, elevators, escalators, etc) are especially important for routing.
And more
All of the above is just the current state; there’s much more to look at, such as:
Making use of unused information in the existing datasets, such as fare information
or the remaining range of sharing vehicles.
Expanding the GTFS standard to cover things currently not modeled, ranging from relatively simple things like
car ferries or car transport trains to highly detailed information
about a bus or train interior with regard to accessibility.
Converting other data formats to GTFS, such as NeTEx,
SIRI
or SSIM.
Generating GTFS-RT trip updates from realtime vehicle position data.
Extending the import pipeline to augment and normalize GTFS feeds, e.g. by injecting line colors and logos from Wikidata or
accessibility information and missing paths from OSM, or to normalize train names.
Considering elevation data for street routing. MOTIS meanwhile has initial support for this, but even the relatively coarse global
30m SRTM grid data would require an extra 50-100GB of (fast) storage, with quadratic growth for smaller grid sizes (1m or 2m grids are available
in a number of regions).
In other words, plenty of rabbit holes to explore, no matter whether you are into code, data, math, trains, buses, IT operations or lobby work :)
Welcome to a new issue of This Week in Plasma! Every week we cover the highlights of what’s happening in the world of KDE Plasma and its associated apps like Discover, System Monitor, and more.
This week we finished polishing up Plasma 6.4 for release, and started to turn our heads to bigger topics — notably including Wayland protocols and accessibility!
Notable New Features
Plasma 6.5.0
Implemented support for an experimental version of the Wayland picture-in-picture protocol that allows apps also implementing it (such as Firefox) to finally display proper PiP windows in advance of the upstream version of the protocol eventually being merged. (Vlad Zahorodnii, link)
Notable UI Improvements
Plasma 6.3.6
Reduced the rate at which the “visual bell” accessibility feature can flash the screen so there’s no way it can cause seizures. (Nicolas Fella, link)
Plasma 6.4.0
Made the Kicker Application Menu widget able to horizontally scroll for searches that return results from many KRunner plugins, so you have the opportunity to see them all. (Christoph Wolk, link)
Plasma 6.4.1
Improved text contrast for labels used in subtitles or other secondary roles throughout Plasma. (Nate Graham, link 1, link 2, link 3, link 4, link 5, link 6, and link 7)
Discover’s search field now trims all whitespace, to prevent errors when copy-pasting text that ends in a space or something. (Nate Graham, link)
Plasma 6.5.0
In System Settings, moved the Invert and Zoom settings into the Accessibility page, which is a more sensible place for them than the Desktop Effects page was. (Oliver Beard, link 1, link 2, and link 3)
Merged KWin’s Background Contrast effect into the Blur effect, since neither makes sense to turn on or off without the other. (Marco Martin, link 1 and link 2)
On Wayland, virtual desktops can now be re-ordered from the Pager widget, and re-ordering them in the Overview effect’s grid view now re-orders them in the Pager widget too. (Marco Martin and Vlad Zahorodnii, link 1, link 2, and link 3)
Spectacle now makes it clearer that you can end a screen recording by pressing the same keyboard shortcut you used to start it, by telling you this in the notification and also by using clearer names for the global shortcuts. (Noah Davis, link)
The Breeze application style’s animated effects for clicking checkboxes and radio buttons now work in QtQuick-based apps and System Settings pages as well. (Kai Uwe Broulik, link)
The Disks & Devices, Networks, and Bluetooth widgets now use standard-style section headers. (Nate Graham, link 1, link 2, and link 3)
Improved the searching UX in the Emoji Selector app: now the search field is always visible, and doing a search will always search through the full set of all emojis if there aren’t any matches on the current page. (Nate Graham, link 1, link 2)
The Display Configuration widget and OSD no longer think your primary screen is always connected to a laptop; now they use more generic terminology. (Nate Graham, link)
Notable Bug Fixes
Plasma 6.3.6
Using a non-default font or font size no longer causes the selection rectangles for files or folders on the desktop to be displayed at the wrong size and cause subtle layout and positioning glitches. (Nate Graham, link)
Plasma 6.4.0
Fixed several more cases where putting a widget on a huge panel could cause Plasma to freeze. (Christoph Wolk, link 1, link 2, link 3, link 4, link 5, link 6, and link 7)
Fixed a case where Discover could crash while offering you the replacement for an end-of-support Flatpak app. (Akseli Lahtinen, link)
Fixed a bug that caused the Open/Save dialogs invoked from Flatpak-based browsers (or when forcing the use of portal-based dialogs) to sometimes not allow the preview pane to be opened. (David Redondo, link)
Fixed a weird bug that could cause a standalone Folder View widget on the desktop to become visually glitchy when you drag files or folders to it from Dolphin. (Akseli Lahtinen, link)
Fixed a bug that broke printing at the correct sizes in Flatpak-packaged GTK apps. (David Redondo, link)
Installing or uninstalling an app no longer unexpectedly clears the search field and results view in Kicker or Kickoff if they happened to be visible at the moment the transaction completed. (Christoph Wolk, link)
Frameworks 6.15
Fixed a cause of crashes in apps and Plasma system services using System Monitor charts. (Arjen Hiemstra, link)
Frameworks 6.16
Fixed an intermittent source of crashes in System Monitor when switching process views. (Arjen Hiemstra, link)
Fixed a weird issue that could cause Open/Save dialogs to close when hovering over certain files. (David Redondo, link)
KDE Gear 25.04.3
Fixed an issue that could cause the thumbnailer to crash on X11 when using certain widget styles. (Nicolas Fella, link)
Improved startup speed for System Monitor by loading the column configuration dialog’s content on-demand, rather than at launch. (David Edmundson, link)
Made sure that the Environment Canada data source for weather reports keeps working, since the provider is changing their data format soon and we needed to adapt. (Ismael Asensio, link)
Frameworks 6.15
Improved startup speed for System Monitor by loading the tree view indicator arrows on demand, rather than at launch. (David Edmundson, link)
How You Can Help
KDE has become important in the world, and your time and contributions have helped us get there. As we grow, we need your support to keep KDE sustainable.
You can help KDE by becoming an active community member and getting involved somehow. Each contributor makes a huge difference in KDE — you are not a number or a cog in a machine!
You don’t have to be a programmer, either; many other opportunities exist.
You can also help us by making a donation! Any monetary contribution — however small — will help us cover operational costs, salaries, travel expenses for contributors, and in general just keep KDE bringing Free Software to the world.
One of the largest hurdles in any job or activity is getting your resources set up. Luckily for you, Krita has some of the most detailed and straightforward setup documentation around. In this blog I will go over my experience setting up Krita and provide quick links to answer the questions you may have during setup.
The goal is to get Krita running on your machine. For my setup, and for simplicity of instructions, I use Oracle's VirtualBox to run a virtual machine (VM) with Ubuntu on my Windows machine. You can use any VM host for setup. The linked build instructions should be straightforward to follow. The great thing about these instructions is that you don't need to know a lot of detail about Docker or C++ yet, but you will need to understand some basic Linux and Git commands.
In the above links, follow the instructions in the hyperlink titles.
My Experience
When I set up Krita for the first time, I felt a sense of accomplishment. Not only was I able to set up Krita, but I was able to deepen my understanding of Git and learn about Docker, VMs, and Qt.
I think the biggest takeaway from setting up Krita is to never give up: ask questions in chat, and ask yourself "What do I not understand?" before moving to the next instruction.
Conclusion
Setting up Krita is as simple as you make it. The hardest part is finding the resources to be successful. I hope this blog post can simplify setup for newcomers and experienced users alike.
Contact
To anyone reading this, please feel free to reach out to me. I’m always open to suggestions and thoughts on how to improve as a developer and as a person. Email: ross.erosales@gmail.com Matrix: @rossr:matrix.org
To briefly recap, Natalie Clarius and I applied for an NLnet grant to improve gesture support in Plasma, and they accepted our project proposal. We thought it would be a good idea to meet in person and workshop this topic from morning to evening for three days in a row. Props to Natalie for taking the trip from far away in Germany to my parents' place, where we were kindly hosted and deliciously fed.
Our project plan starts with me adding stroke gesture support to KWin in the first place, while Natalie works on making multi-touch gestures customizable. Divvying up the work along these lines allows us to make progress independently without being blocked on each other's work too often. But of course there is quite a bit of overlap, which is why we applied to NLnet together as a single project.
The common thread is that both kinds of gestures can result in similar actions being triggered, for example:
Showing Plasma's Window Overview
Starting an app / running a custom command
Invoking an action inside a running app
So if we want to avoid duplicating lots of code, we'll want a common way to assign actions to a gesture. We need to know what to store in a config file, how Plasma code will make use of it, and how System Settings can provide a user interface that makes sense to most people. These are the topics we focused on. Time always runs out faster than you'd like, ya gotta make it count.
Three days in a nutshell
Getting to results is an iterative process. You start with some ideas for a good user experience (UX) and make your way to the required config data, or you start with config data and make your way to actual code, or you hit a wall and start from the other end going from code to UX until you hit another wall again. Rinse and repeat until you like it well enough to ship it.
On day 1, we:
Collected a comprehensive list of gestures (and gesture variants) to support.
On day 2, we:
Collected a broad list of actions (and action types) to invoke when a gesture is triggered.
Sketched out UI concepts for configuring gestures.
Weren't quite satisfied, came up with a different design which we like better.
Discussed how we can automatically use one-to-one gesture tracking when an assigned action supports it.
Drafted a config file format to associate (gesture) triggers with actions.
On day 3, we:
Drafted a competing config file format which adds the same data to the existing kglobalshortcutsrc file instead.
Reviewed existing gesture assignments and proposals.
Created a table with proposed default gesture assignments (to be used once gestures are configurable).
Collected remaining questions that we didn't get to.
What I just wrote is a lie, of course. I needed to break up the long bullet point list into smaller sections. In reality we jumped back and forth across all of these topics in order to reach some sort of conclusion at the end. Fortunately, we make for a pretty good team and managed to answer a good amount of questions together. We even managed to make time for ice cream and owl spottings along the way.
Since you asked for it, here's a picture of Natalie and me drawing multi-touch gestures in the air.
Next up in gestures
So there are some good ideas; now we need to make them real. Since the sprint, I've been trying my hand at more detailed mockups for our rough design sketches. This always raises a few more issues, which we want to tackle before asking for opinions from KWin maintainers and Plasma's design community. There isn't much to share with the community yet, but we'll involve other contributors before too long.
Likewise, my first KWin MR for stroke gesture infrastructure is not quite there yet, but it's getting closer. The first milestone will be to make it possible for someone to provide stroke gesture actions. The second milestone will be for Plasma/KWin to provide stroke gesture actions by itself and offer a nice user interface for it.
Baby steps. Keep chiseling away at it and trust that you'll create something decent eventually. This is not even among the largest efforts in KDE, and yet there are numerous pieces to fit and tasks to tackle. Sometimes I'm frankly in awe of communities like KDE that manage to maintain a massive codebase together, with very little overhead, through sheer dedication and skill. Those donations don't go to waste.
At this point I would also like to apologize to anyone who was looking for reviews or other support from me elsewhere in Plasma (notably, PowerDevil) which I haven't helped with. I get stressed when having to divide my time and focus between different tasks, so I tend to avoid it, in the knowledge that someone or something will be left wanting. I greatly admire people who wear lots of different hats simultaneously, and it would surely be so nice to have the aptitude for that, but it kills me so I have to pick one battle at a time.
Right now, that's gestures. Soon, a little bit of travel. Then gestures again. Once that's done, we'll see what needs work most urgently or importantly.
Developing an application for desktop or embedded platforms often means choosing between Qt Widgets and Qt Quick to develop the UI. There are pros and cons to each. Qt, being the flexible framework that it is, lets you combine these in various ways. How you should integrate these APIs depends on what you're trying to achieve. In this entry I will show you how to display Qt Widget windows in an application written primarily using Qt Quick.
Why Show a Qt Widget Window in a Qt Quick App
Qt Quick is great for software that puts emphasis on visual language. A graphics pipeline, based around the Qt Quick Scene Graph, will efficiently render your UI using the GPU. This means UI elements can be drawn, decorated, and animated efficiently as long as you pick the right tools (e.g. Shaders, Animators, and Qt's Shapes API instead of its implementation of HTML's Canvas).
From the Scene Graph also stem some of Quick's weaknesses. UI elements that in other applications would extend outside of the application's window, such as tool tips and the ComboBox control, can only be rendered inside of Qt Quick windows. When you see other apps' tooltips and dropdowns extend beyond the window, those items are being rendered onto a separate window: one without window decorations (a.k.a. a borderless window). Rendering everything in the same window helps ensure your app will be compatible with systems that can only display a single window at a time, such as Android and iOS, but it can result in wasted space if your app targets PC desktop environments.
An animation shows a small window with QML's and Widget's ComboBoxes opening for comparison purposes
QML ComboBox is confined to the Qt Quick window while the Widgets ComboBox extends beyond the window
Qt lets us combine Widgets and Quick in a few ways. The most common approach is to embed a Qt Quick view into your Widgets app, using QQuickWidget. That approach is fitting for applications that primarily use Widgets. Another option is to render Widgets inside a Qt Quick component, by rendering it through a QQuickPaintedItem. However, this component will be limited to the same window confines as the rest of the items in your Quick window and it won't benefit from Scene Graph rendering optimizations, meaning you get the worst of both worlds.
A third solution is to open widget windows from your Qt Quick app. This has none of the aforementioned drawbacks; however, the approach has a couple of drawbacks of its own. First, the app needs to run in an environment capable of displaying multiple windows per screen. Second, widget windows cannot be parented to Qt Quick windows, meaning certain window-stacking features, such as setting window modality to Qt::WindowModal, won't have an effect on the triggering window when a Widget is opened from Qt Quick. You can work around that by setting modality to Qt::ApplicationModal instead, if you're okay with blocking all other windows for modality.
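As a minimal sketch of that workaround, assuming the widget window is held in a member named m_window (the name used in the code later in this post), the modality change is a one-liner:

```cpp
// Block interaction with all of the application's windows while the
// widget window is open. Qt::WindowModal would have no effect here,
// because the widget window has no Qt Quick parent window.
m_window->setWindowModality(Qt::ApplicationModal);
```

Call this before showing the window; changing modality while a window is visible has no effect until it is hidden and shown again.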
Displaying Widget windows in Qt Quick applications has been useful to me in the past, and is something I haven't seen documented anywhere, hence this tutorial.
How to Show a Qt Widget Window in a Qt Quick App
Architecture Big Picture
Displaying a Qt Widget window from Qt Quick is simpler than it seems. You'll need two classes:
The class that represents the widget window.
A class that interfaces with QML, hosting and instantiating the window.
You might be tempted to forgo the interface class and instantiate the widget directly from QML. However, that would result in a crash. Instead, we'll display the widget window by calling QWidget::show from the interface class.
Update CMakeLists.txt
In addition to those classes, you'll also need to make sure that your app links to both the Qt::Quick and Qt::Widgets libraries. Here's what that looks like for a CMake project:
# Locate the Qt libraries
find_package(Qt6 6.5 REQUIRED COMPONENTS
    Quick
    Widgets)

# Link the build target to the libraries
# (replace ${TARGET_NAME} with the name of your target executable)
target_link_libraries(${TARGET_NAME} PRIVATE
    Qt6::Quick
    Qt6::Widgets)
Prepare the interface layer as you would any C++ based Quick component. By this I mean: derive from QObject, and use the Q_OBJECT and QML_ELEMENT macros to make your class available from QML.
// widgetFormHandler.h
#pragma once

#include <QObject>
#include <QtQml/qqmlregistration.h> // provides QML_ELEMENT

class WidgetFormHandler : public QObject
{
    Q_OBJECT
    QML_ELEMENT
public:
    explicit WidgetFormHandler(QObject *parent = nullptr);
};
// widgetFormHandler.h
#pragma once

#include <memory> // std::unique_ptr

#include <QObject>
#include <QtQml/qqmlregistration.h>

class WidgetsForm;

class WidgetFormHandler : public QObject
{
    Q_OBJECT
    QML_ELEMENT
public:
    explicit WidgetFormHandler(QObject *parent = nullptr);
    ~WidgetFormHandler();
private:
    std::unique_ptr<WidgetsForm> m_window;
};
Use std::make_unique in the constructor to initialize m_window.
Define the instantiating class's destructor to ensure the pointer is deallocated, thus preventing memory leaks. If you stick to using smart pointers, C++ will do all the work for you; simply use the default destructor, like I do here. Make sure to define it in the implementation file rather than the header: WidgetsForm is only forward-declared in the header, and std::unique_ptr needs the complete type wherever the destructor is defined.
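Assuming the class names above, the implementation file might look like this minimal sketch:

```cpp
// widgetFormHandler.cpp
#include "widgetFormHandler.h"
#include "widgetsForm.h"

WidgetFormHandler::WidgetFormHandler(QObject *parent)
    : QObject(parent)
    , m_window(std::make_unique<WidgetsForm>()) // create the widget window
{
}

// Defaulted here in the .cpp, where WidgetsForm is a complete type,
// so std::unique_ptr can generate a valid deleter
WidgetFormHandler::~WidgetFormHandler() = default;
```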
Now we want to make properties from the widget available in QML. How we do this depends on the property, and on whether we will manipulate its value from both directions or only from one side, with the other side updating to match.
Let's look at a bi-directional example in which we add the ability to control the widget window's visible state from QML. We'll add a property called "visible" to the C++ interface so that it matches the visible property we get from Qt Quick windows in QML. Declare the property using Q_PROPERTY, with READ and WRITE functions to query and control the window's state.
To make this bi-directional, set NOTIFY to a signal and emit that signal whenever the value changes, so that QML bindings stay up to date. We emit it from setVisible in this class. If QWidget emitted a signal when its visible state changed, I would also connect that signal to our handler's visibleChanged; since that isn't the case, we have to make sure to emit it ourselves.
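Here is a sketch of how that declaration and setter might look, using the class and member names from earlier:

```cpp
// widgetFormHandler.h (excerpt)
class WidgetFormHandler : public QObject
{
    Q_OBJECT
    QML_ELEMENT
    // Bi-directional property mirroring the widget window's visibility
    Q_PROPERTY(bool visible READ visible WRITE setVisible NOTIFY visibleChanged)
public:
    bool visible() const;
    void setVisible(bool visible);
signals:
    void visibleChanged();
    // ...
};

// widgetFormHandler.cpp (excerpt)
bool WidgetFormHandler::visible() const
{
    return m_window->isVisible();
}

void WidgetFormHandler::setVisible(bool visible)
{
    if (m_window->isVisible() == visible)
        return; // skip redundant updates and signal emissions
    m_window->setVisible(visible);
    // QWidget has no visibleChanged signal of its own,
    // so we emit ours manually
    emit visibleChanged();
}
```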
Make Signals Available to QML
Develop the widget window as you would any other widget. If you use UI forms, go to the header file and create a signal for each action that you wish to relay over to QML.
In this example we'll relay a button press from the UI file, so we'll create a button named pushButton in our ui file:
Qt Designer shows UI file with a button named pushButton, in camel case.
Now add a buttonClicked signal to our header:
// widgetsForm.h
#pragma once

#include <memory>

#include <QWidget>

namespace Ui
{
class WidgetsForm;
}

class WidgetsForm : public QWidget
{
    Q_OBJECT
public:
    explicit WidgetsForm(QWidget *parent = nullptr);
    ~WidgetsForm();
signals:
    // Signal to expose the button click from the Widgets window
    void buttonClicked();
private:
    std::unique_ptr<Ui::WidgetsForm> ui;
};
Once again we use a unique pointer, this time to hold the ui object. This is an improvement over what Qt Creator's templates give us: C++ handles the memory management, and we avoid the need for a delete statement in the destructor.
In the window's constructor, we make a connection between the UI's button's signal and the one that we've created to relay the signal for exposure.
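A sketch of that connection, assuming the .ui form exposes the button as ui->pushButton:

```cpp
// widgetsForm.cpp (excerpt)
WidgetsForm::WidgetsForm(QWidget *parent)
    : QWidget(parent)
    , ui(std::make_unique<Ui::WidgetsForm>())
{
    ui->setupUi(this);
    // Relay the button's clicked signal through our own buttonClicked signal
    connect(ui->pushButton, &QPushButton::clicked,
            this, &WidgetsForm::buttonClicked);
}
```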
Before we connect the exposed signal to the QML interface, we need another signal on the interface to relay our event over to QML. Here I add a qmlSignalEmitter signal for that purpose:
// widgetFormHandler.h
[..]
signals:
void visibleChanged();
void qmlSignalEmitter(); // Signal to relay button press to QML
[..]
To complete the chain of connections, go to the interface layer's constructor and connect your window class's signal to the interface layer's signal.
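Given the names used so far, that connection might look like this:

```cpp
// widgetFormHandler.cpp, in the constructor
// Forward the widget window's button press to QML
connect(m_window.get(), &WidgetsForm::buttonClicked,
        this, &WidgetFormHandler::qmlSignalEmitter);
```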
By connecting one emitter to another emitter we keep each class's concerns separate and reduce the amount of boilerplate code, making our code easier to maintain.
Over in QML, we connect to qmlSignalEmitter using a handler with the on prefix. It looks like this:
import NameOfAppQmlModule // Should match qt_add_qml_module's URI in CMake

WidgetFormHandler {
    id: fontWidgetsForm
    visible: true // Make the Widgets window visible from QML
    onQmlSignalEmitter: () => {
        console.log("Button pressed in widgets") // Log QPushButton's click event from QML
    }
}
Final Product
I've prepared a demo app where you can see this technique in action. The demo displays text that bounces around the screen like an old DVD player's logo. You can change the text and font through two identical forms, one implemented in QML and the other in Widgets. The code presented in this tutorial comes from that demo app.
The moving text should work on all desktop systems except Wayland sessions on Linux. That is because I'm animating the window's absolute position (which Wayland restricts for security reasons) rather than the contents inside a fixed window. Moving the window itself has the benefit of not obstructing other applications: only the small window that contains the text captures mouse input, whereas a large stationary window covering the animation area would swallow clicks and prevent them from reaching the applications behind it.
Real World Use Case
The first time I employed this technique was in my FOSS project, QPrompt. I use it there to provide a custom font dialog that doubles as a text preview. Having a custom dialog gives me full control over formatting options presented to users, and for this app we only needed a preview for large text and a combo box to choose among system fonts. QPrompt is also open source, you can find the source code relevant to this technique here: https://github.com/Cuperino/QPrompt-Teleprompter/blob/main/src/systemfontchooserdialog.h
Thank you for reading. I hope you’ll find this useful. A big thank you to David Faure for suggesting the use of C++ unique pointers as well as reviewing the code along with Renato and my team.
If there are other techniques that you’d like for us to try or showcase, let us know.