Sunday, 15 June 2025
This is the release schedule the release team agreed on
https://community.kde.org/Schedules/KDE_Gear_25.08_Schedule
Dependency freeze is in around 2 weeks (July 3) and feature freeze one
after that. Get your stuff ready!
🎉 New Clazy Release: Stability Boost & New Checks!
We’re excited to roll out a new Clazy release packed with bug fixes, a new check, and improvements to existing checks. This release includes 34 commits from 5 contributors.
New Check: readlock-detaching
Detects unsafe and likely unwanted detachment of member containers while holding a read lock. For example, calling .first() on a mutable member instead of .constFirst().
Expanded Support for Detaching Checks
Additional methods now covered when checking for detaching temporary or member lists/maps.
This includes reverse iterators on many Qt containers and keyValueBegin/keyValueEnd on QMap.
All those methods have const counterparts that allow you to avoid detaching.
Internal Changes
With this release, Clang 19 or later is a required dependency. All older versions needed compatibility logic and were not thoroughly tested on CI. If you are on an older version of a Debian-based distro, consider using https://apt.llvm.org/ and compiling Clazy from source ;)
install-event-filter: Fixed crash when no child exists at the given depth.
BUG: 464372
fully-qualified-moc-types: Now properly evaluates enum and enum class types.
BUG: 423780
qstring-comparison-to-implicit-char: Fixed an edge case where assumptions about the function definition were fragile.
BUG: 502458
fully-qualified-moc-types: Now evaluates complex signal expressions like std::bitset<int(8)> without crashing.
#28
qvariant-template-instantiation: Crash fixed for certain template patterns when using pointer types.
Also, thanks to Christoph Grüninger, Johnny Jazeix, Marcel Schneider and Andrey Rodionov for contributing to this release!
Hi again! Week two was all about turning last week’s refactored EteSync resource and newly separated configuration plugin into a fully working, stable component. While the initial plugin structure was in place, this week focused on making the pieces actually work together — and debugging some tricky issues that emerged during testing.
While testing, I discovered that the original EteSync resource code used QDialog and KMessageBox directly for showing error messages or status updates. These widget-based UI elements are too heavy for a background resource and conflict with the goal of keeping the resource lightweight and GUI-free.
To address this, I replaced them with a much cleaner approach: creating KNotification instances directly. This allows the resource to send system notifications (like “EteSync sync successful” or error messages) through the desktop’s notification system, without relying on any QtWidgets. As a result, the resource is now fully compatible with non-GUI environments and no longer needs to link against the QtWidgets library.
Another major change this week involved how the resource handles its settings.
Previously, the configuration was implemented as a singleton, meaning both the resource and its configuration plugin were sharing a single instance of the settings object. This worked in the old, tightly-coupled model, but caused conflicts in the new plugin-based architecture.
To fix this, I updated the settings.kcfgc file to set singleton=false. This change allows the resource and the configuration plugin to maintain separate instances of the settings object, avoiding interference. I also updated both etesyncclientstate.cpp and etesyncresource.cpp to properly manage their respective configurations.
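For reference, the switch lives in the kcfgc code-generation options. The excerpt below is a minimal illustration, not the full EteSync settings.kcfgc:

```ini
# settings.kcfgc (excerpt, illustrative)
File=settings.kcfg
ClassName=Settings
Mutators=true
# Generate a plain class instead of a shared singleton, so the
# resource and the config plugin each own their settings instance:
Singleton=false
```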
One final issue emerged after separating the UI: the configuration wizard now appears in a separate window from the main Akonadi configuration dialog. When the wizard is completed and closes, the original configuration window — now empty and disconnected — remains open.
Clicking buttons on this leftover window causes terminal errors, since it no longer communicates with a valid process. This results in a confusing and potentially buggy experience for users.
My next task is to figure out a clean way to close the original parent window when the wizard completes, ensuring a smooth and error-free configuration flow. In addition to that, I’ll begin testing the full integration between the EteSync resource and its configuration plugin to ensure everything works correctly — from saving and applying settings to triggering synchronization. This will help verify that the decoupling is both functionally solid and user-friendly.
Learn how to use Cucumber-CPP and Gherkin to implement better black box tests for a C++ library. We developed a case study based on Qt OPC UA.
Continue reading Improved Black Box Testing with Cucumber-CPP at basysKom GmbH.
One of the largest hurdles in any job or activity is getting your resources set up. Luckily for you, Krita has some of the most detailed and straightforward setup documentation out there. In this blog I will go over my experience setting up Krita and provide quick links to answer the questions you may have during setup.
One Stop Shop for Links
Download and Install Ubuntu
Create KDE account
Fork Krita Repository
Follow build instructions
If you use Qt Creator to build and run Krita, follow this video
Krita Chat - Create account, join chat room, introduce yourself and ask questions
The goal is to get Krita running on your machine. For my setup, and for simplicity of instructions, I use Oracle's VirtualBox to run a virtual machine (VM) with Ubuntu on my Windows machine, but you can use any VM host. The "Follow build instructions" link should be straightforward to follow. The great thing about these instructions is that you don't need to know much about Docker or C++ yet, but you will need to understand some basic Linux and Git commands.
In the links above, follow the instructions in each hyperlink's title.
When I set up Krita for the first time, I felt a sense of accomplishment. Not only was I able to set up Krita, but I was able to deepen my understanding of Git and learn about Docker, VMs, and Qt.
I think the biggest takeaway from setting up Krita is to never give up: ask questions in chat, and ask yourself "What do I not understand?" before moving on to the next instruction.
Setting up Krita is as simple as you make it. The hardest part is finding the resources to be successful. I hope this blog post can simplify setup for newcomers and experienced users alike.
To anyone reading this, please feel free to reach out to me. I’m always open to suggestions and thoughts on how to improve as a developer and as a person.
Email: ross.erosales@gmail.com
Matrix: @rossr:matrix.org
To briefly recap, Natalie Clarius and I applied for an NLnet grant to improve gesture support in Plasma, and they accepted our project proposal. We thought it would be a good idea to meet in person and workshop this topic from morning to evening for three days in a row. Props to Natalie for taking the trip from far away in Germany to my parents' place, where we were kindly hosted and deliciously fed.
Our project plan starts with me adding stroke gesture support to KWin in the first place, while Natalie works on making multi-touch gestures customizable. Divvying up the work along these lines allows us to make progress independently without being blocked on each other's work too often. But of course there is quite a bit of overlap, which is why we applied to NLnet together as a single project.
The common thread is that both kinds of gestures can result in similar actions being triggered, for example:
So if we want to avoid duplicating lots of code, we'll want a common way to assign actions to a gesture. We need to know what to store in a config file, how Plasma code will make use of it, and how System Settings can provide a user interface that makes sense to most people. These are the topics we focused on. Time always runs out faster than you'd like, ya gotta make it count.
Getting to results is an iterative process. You start with some ideas for a good user experience (UX) and make your way to the required config data, or you start with config data and make your way to actual code, or you hit a wall and start from the other end going from code to UX until you hit another wall again. Rinse and repeat until you like it well enough to ship it.
On day 1, we:
kcm_keys.
On day 2, we:
On day 3, we:
kglobalshortcutsrc file instead.
What I just wrote is a lie, of course. I needed to break up the long bullet point list into smaller sections. In reality we jumped back and forth across all of these topics in order to reach some sort of conclusion at the end. Fortunately, we make for a pretty good team and managed to answer a good number of questions together. We even managed to make time for ice cream and owl spottings along the way.
Since you asked for it, here's a picture of Natalie and me drawing multi-touch gestures in the air.

So there are some good ideas; now we need to make them real. Since the sprint, I've been trying my hand at more detailed mockups for our rough design sketches. This always raises a few more issues, which we want to tackle before asking for opinions from KWin maintainers and Plasma's design community. There isn't much to share with the community yet, but we'll involve other contributors before too long.
Likewise, my first KWin MR for stroke gesture infrastructure is not quite there yet, but it's getting closer. The first milestone will be to make it possible for someone to provide stroke gesture actions. The second milestone will be for Plasma/KWin to provide stroke gesture actions by itself and offer a nice user interface for it.
Baby steps. Keep chiseling away at it and trust that you'll create something decent eventually. This is not even among the largest efforts in KDE, and yet there are numerous pieces to fit and tasks to tackle. Sometimes I'm frankly in awe of communities like KDE that manage to maintain a massive codebase together, with very little overhead, through sheer dedication and skill. Those donations don't go to waste.
At this point I would also like to apologize to anyone who was looking for reviews or other support from me elsewhere in Plasma (notably, PowerDevil) which I haven't helped with. I get stressed when having to divide my time and focus between different tasks, so I tend to avoid it, in the knowledge that someone or something will be left wanting. I greatly admire people who wear lots of different hats simultaneously, and it would surely be so nice to have the aptitude for that, but it kills me so I have to pick one battle at a time.
Right now, that's gestures. Soon, a little bit of travel. Then gestures again. Once that's done, we'll see what needs work most urgently or importantly.
Take care & till next time!
Developing an application for desktop or embedded platforms often means choosing between Qt Widgets and Qt Quick to develop the UI. There are pros and cons to each. Qt, being the flexible framework that it is, lets you combine these in various ways. How you should integrate these APIs will depend on what you're trying to achieve. In this entry I will show you how to display Qt Widget windows on an application written primarily using Qt Quick.
Qt Quick is great for software that puts emphasis on visual language. A graphics pipeline, based around the Qt Quick Scene Graph, will efficiently render your UI using the GPU. This means UI elements can be drawn, decorated, and animated efficiently as long as you pick the right tools (e.g. Shaders, Animators, and Qt's Shapes API instead of its implementation of HTML's Canvas).
From the Scene Graph also stem some of Quick's weaknesses. UI elements that in other applications would extend outside the application's window, such as tool tips and the ComboBox control, can only be rendered inside Qt Quick windows. When you see other apps' tooltips and dropdowns extend beyond the window, those items are being rendered onto a separate window, one without window decorations (a.k.a. a borderless window). Rendering everything in the same window helps ensure your app will be compatible with systems that can only display a single window at a time, such as Android and iOS, but it can result in wasted space if your app targets PC desktop environments.

An animation shows a small window with QML's and Widget's ComboBoxes opening for comparison purposes
QML ComboBox is confined to the Qt Quick window while the Widgets ComboBox extends beyond the window
Qt lets us combine Widgets and Quick in a few ways. The most common approach is to embed a Qt Quick view into your Widgets app using QQuickWidget. That approach is fitting for applications that primarily use Widgets. Another option is to render Widgets inside a Qt Quick component through a QQuickPaintedItem. However, such a component is limited to the same window confines as the rest of the items in your Quick window, and it won't benefit from Scene Graph rendering optimizations, meaning you get the worst of both worlds.
A third solution is to open widget windows from your Qt Quick apps. This has none of the aforementioned drawbacks; however, the approach has a couple of drawbacks of its own. First, the app needs to run in an environment capable of showing multiple windows per screen. Second, widget windows cannot be parented to Qt Quick windows, meaning certain window z-stack related features, such as setting window modality to Qt::WindowModal, won't have an effect on the triggering window when a Widget is opened from Qt Quick. You can work around that by setting modality to Qt::ApplicationModal instead, if you're okay with blocking all other windows for modality.
Displaying Widget windows in Qt Quick applications has been useful to me in the past, and is something I haven't seen documented anywhere, hence this tutorial.
Displaying a Qt Widget window from Qt Quick is simpler than it seems. You'll need two classes: the QWidget-based window itself, and a QObject-based interface class that exposes it to QML.
You might be tempted to forgo the interface class and instantiate the widget directly. However, this would result in a crash. We'll display the widget window by calling QWidget::show from the interface class.
CMakeLists.txt
In addition to those classes, you'll also need to make sure that your app links to both the Qt::Quick and Qt::Widgets libraries. Here's what that looks like for a CMake project:
# Locate libraries
find_package(Qt6 6.5 REQUIRED COMPONENTS
    Quick
    Widgets)
# Link build target to libraries
# Replace ${TARGET_NAME} with the name of your target executable
target_link_libraries(${TARGET_NAME} PRIVATE
    Qt6::Quick
    Qt6::Widgets)
main.cpp
In addition to that, in main.cpp you'll need to use QApplication in place of QGuiApplication.
QApplication app(argc, argv);
Prepare the interface layer as you would any C++ based Quick component. By this I mean: derive from QObject, and use the Q_OBJECT and QML_ELEMENT macros to make your class available from QML.
// widgetFormHandler.h
#pragma once
#include <QObject>
#include <QtQml/qqmlintegration.h>

class WidgetFormHandler : public QObject
{
    Q_OBJECT
    QML_ELEMENT
public:
    explicit WidgetFormHandler(QObject *parent = nullptr);
};
// widgetFormHandler.cpp
#include "widgetFormHandler.h"
WidgetFormHandler::WidgetFormHandler(QObject *parent)
: QObject(parent)
{
}
// widgetFormHandler.h
#pragma once
#include <QObject>
#include <QtQml/qqmlintegration.h>
#include <memory>

class WidgetsForm;

class WidgetFormHandler : public QObject
{
    Q_OBJECT
    QML_ELEMENT
public:
    explicit WidgetFormHandler(QObject *parent = nullptr);
    ~WidgetFormHandler();
private:
    std::unique_ptr<WidgetsForm> m_window;
};
Use std::make_unique in the constructor to initialize m_window.
Define the instantiating class's destructor to ensure the pointer is deallocated, thus preventing memory leaks. If you stick to using smart pointers, C++ will do all the work for you; simply use the default destructor, like I do here. Make sure to define it outside of the class's header: WidgetsForm is only forward-declared there, and std::unique_ptr needs the complete type in order to delete it, so compilers will reject a destructor defined in the header.
// widgetFormHandler.cpp
#include "widgetFormHandler.h"
WidgetFormHandler::WidgetFormHandler(QObject *parent)
: QObject(parent)
, m_window(std::make_unique<WidgetsForm>())
{
// ...
}
WidgetFormHandler::~WidgetFormHandler() = default;
Now we want to make properties from the widget available in QML. How we do this depends on the property and on whether we will manipulate the property's value from both directions or only from one side, updating the other.
Let's look at a bi-directional example in which we add the ability to control the visible state of the widget window from QML. We'll add a property called "visible" to the C++ interface so that it matches the visible property we get from Qt Quick windows in QML. Declare the property using Q_PROPERTY. Use READ and WRITE functions to control the window's state.
Here's what that would look like:
// widgetFormHandler.h
#pragma once
class WidgetsForm;
class WidgetFormHandler : public QObject
{
Q_OBJECT
QML_ELEMENT
Q_PROPERTY(bool visible READ isVisible WRITE setVisible NOTIFY visibleChanged)
public:
explicit WidgetFormHandler(QObject *parent = nullptr);
~WidgetFormHandler();
bool isVisible();
void setVisible(bool);
signals:
void visibleChanged();
private:
std::unique_ptr<WidgetsForm> m_window;
};
// widgetFormHandler.cpp
#include "widgetFormHandler.h"
#include "widgetsForm.h"
WidgetFormHandler::WidgetFormHandler(QObject *parent)
: QObject(parent)
, m_window(std::make_unique<WidgetsForm>())
{
// Hide window by default
m_window->setVisible(false);
}
WidgetFormHandler::~WidgetFormHandler() = default;
bool WidgetFormHandler::isVisible()
{
return m_window->isVisible();
}
void WidgetFormHandler::setVisible(bool visible)
{
m_window->setVisible(visible);
emit visibleChanged();
}
To make this bi-directional, set NOTIFY to a signal and emit that signal wherever the property's value changes, so QML bindings pick up the new value. We emit it from setVisible in this class. If QWidget had a signal that fired when its visible state changed, I would also connect that signal to our handler's visibleChanged; since it doesn't, we have to make sure to emit it ourselves.
Develop the widget window as you would any other widget. If you use UI forms, go to the header file and create a signal for each action that you wish to relay over to QML.
In this example we'll relay a button press from the UI file, so we'll create a button named pushButton in our ui file:

Qt Designer shows UI file with a button named pushButton, in camel case.
Now add a buttonClicked signal to our header:
// widgetsForm.h
#pragma once
#include <QWidget>
namespace Ui
{
class WidgetsForm;
}
class WidgetsForm : public QWidget
{
Q_OBJECT
public:
explicit WidgetsForm(QWidget *parent = nullptr);
~WidgetsForm();
signals:
void buttonClicked();
// Signal to expose button click from Widgets window
private:
std::unique_ptr<Ui::WidgetsForm> ui;
};
Once again, we use a unique pointer, this time to hold the ui object. This is better than what the Qt Creator templates give us: C++ handles the memory management for us, and we can avoid the need for a delete statement in the destructor.
In the window's constructor, we make a connection between the UI's button's signal and the one that we've created to relay the signal for exposure.
// widgetsForm.cpp
#include "widgetsform.h"
#include "ui_widgetsform.h"
WidgetsForm::WidgetsForm(QWidget *parent)
: QWidget(parent)
, ui(std::make_unique<Ui::WidgetsForm>())
{
ui->setupUi(this);
// Expose click
connect(ui->pushButton, &QPushButton::clicked, this, &WidgetsForm::buttonClicked);
}
WidgetsForm::~WidgetsForm() = default;
Before we connect the exposed signal to the QML interface, we need another signal on the interface to relay our event over to QML. Here I add a qmlSignalEmitter signal for that purpose:
// widgetFormHandler.h
[..]
signals:
void visibleChanged();
void qmlSignalEmitter(); // Signal to relay button press to QML
[..]
To complete all the connections, go to the interface layer's constructor and make a connection between your window class's signal and that of the interface layer. This would look as follows:
// widgetFormHandler.cpp
[..]
WidgetFormHandler::WidgetFormHandler(QObject *parent)
: QObject(parent)
, m_window(std::make_unique<WidgetsForm>())
{
QObject::connect(m_window.get(), &WidgetsForm::buttonClicked, this,
&WidgetFormHandler::qmlSignalEmitter);
}
[..]
By connecting one signal to another, we keep each class's concerns separate and reduce the amount of boilerplate code, making our code easier to maintain.
Over at the QML, we connect to qmlSignalEmitter using the on prefix. It would look like this:
import NameOfAppQmlModule // Should match qt_add_qml_module's URI on CMake
WidgetFormHandler {
id: fontWidgetsForm
visible: true // Make the Widgets window visible from QML
onQmlSignalEmitter: () => {
console.log("Button pressed in widgets") // Log QPushButton's click event from QML
}
}
I've prepared a demo app where you can see this technique in action. The demo displays text that bounces around the screen like an old DVD player's logo. You can change the text and font through two identical forms, one implemented in QML and the other in Widgets. The code presented in this tutorial comes from that demo app.
Example code: https://github.com/KDABLabs/kdabtv/tree/master/Blog-projects/Widget-window-in-Qt-Quick-app
The moving text should work on all desktop systems except for Wayland sessions on Linux. That is because I'm animating the window's absolute position (which is restricted in Wayland for security reasons) rather than the contents inside a window. This has the benefit of not obstructing other applications, since the moving window that contains the text would capture mouse inputs if clicked, preventing those from reaching the application behind it.
The first time I employed this technique was in my FOSS project, QPrompt. I use it there to provide a custom font dialog that doubles as a text preview. Having a custom dialog gives me full control over formatting options presented to users, and for this app we only needed a preview for large text and a combo box to choose among system fonts. QPrompt is also open source, you can find the source code relevant to this technique here: https://github.com/Cuperino/QPrompt-Teleprompter/blob/main/src/systemfontchooserdialog.h

Thank you for reading. I hope you’ll find this useful. A big thank you to David Faure for suggesting the use of C++ unique pointers as well as reviewing the code along with Renato and my team.
If there are other techniques that you’d like for us to try or showcase, let us know.
The post Display Widget Windows in Qt Quick Applications appeared first on KDAB.

Release notes: https://kde.org/announcements/gear/25.04.2/
Now available in the snap store!
Along with that, I have fixed some outstanding bugs:
Ark: can now open/save files in removable media
Kasts: Once again has sound
WIP: Updating Qt6 to 6.9 and frameworks to 6.14
Enjoy everyone!
Unlike our software, life is not free. Please consider a donation, thanks!
Kdenlive 25.04.2 is now available, containing several fixes and small workflow improvements. Some highlights include:

Some last-minute fixes were also included in the Windows/Mac/AppImage versions:
See the full changelog below.
For the full changelog continue reading on kdenlive.org.
I'm Ajay Chauhan (Matrix: hisir:matrix.org), currently in my third year of undergraduate studies in Computer Science & Engineering. I'll be working on improving Kdenlive timeline markers for my Google Summer of Code project. I have previously worked on Kdenlive as part of the Season of KDE '24.
Kdenlive currently supports single-point timeline markers, which limits efficiency for workflows that require marking time ranges, such as highlight editing or collaborative annotations. This project proposes enhancing Kdenlive's marker system by introducing duration-based markers that define a clear start and end time.
The project will extend the marker data model to support a duration attribute while maintaining backward compatibility. The UI will be updated to visualize range markers as colored regions on the timeline, with interactive handles for resizing and editing.
These markers will be integrated with key features like zone-to-marker conversion, search and navigation, rendering specific ranges, and import/export capabilities.
The problem that this project aims to solve is the need for efficient range-based marking functionality in Kdenlive's timeline (see issue #614). By implementing duration-based markers, the project will ensure that video editors can work more efficiently with time ranges for various workflows like highlight editing, collaborative annotations, and section-based organization.
My mentor for the project is Jean-Baptiste Mardelle, and I appreciate the opportunity to collaborate with and learn from him during this process.
CommentedTime Class
The CommentedTime class, which represents individual markers, has been extended to support duration information. This change enables the range marker functionality throughout the application.
I added several new methods and properties to the CommentedTime class:
duration(): Returns the marker's duration as a GenTime object
setDuration(): Sets the marker's duration
hasRange(): Boolean check to determine if a marker is a range marker (duration > 0)
endTime(): Calculates and returns the end position (start + duration)
The class now includes a new constructor that accepts duration as a parameter, while maintaining backward compatibility with existing point marker creation.
// New constructor with duration support
CommentedTime(const GenTime &time, QString comment, int markerType, const GenTime &duration);
// New member variable
GenTime m_duration{GenTime(0)}; // Defaults to 0 for point markers
🔗 Commit: Add duration handling to CommentedTime class
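To make the range semantics concrete, here is a minimal, dependency-free sketch of the logic described above. It uses a plain double in place of Kdenlive's GenTime, and the class is a stand-in whose names mirror the real CommentedTime; this is not the actual implementation:

```cpp
// Minimal stand-in for the range logic added to CommentedTime.
// Kdenlive uses GenTime; a double stands in for it here.
class CommentedTimeSketch
{
public:
    explicit CommentedTimeSketch(double time, double duration = 0.0)
        : m_time(time), m_duration(duration) {}

    double duration() const { return m_duration; }
    void setDuration(double d) { m_duration = d; }
    // A point marker is simply a range marker with zero duration.
    bool hasRange() const { return m_duration > 0.0; }
    // The end position is the start plus the duration.
    double endTime() const { return m_time + m_duration; }

private:
    double m_time;
    double m_duration; // defaults to 0 for point markers
};
```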
MarkerListModel
Previously, Kdenlive only supported point markers: simple markers that existed at a specific timestamp without any duration. I've now implemented range marker support, allowing users to create markers that span time intervals.
The core changes involved extending the MarkerListModel class with several new methods:
addRangeMarker(): A new public method that creates markers with both position and duration
addOrUpdateRangeMarker_lambda(): An internal helper function that handles both creating new range markers and updating existing ones
editMarker(): Added an overloaded version that preserves duration when editing markers
The implementation uses a lambda-based approach for undo/redo functionality, ensuring that range marker operations integrate seamlessly with Kdenlive's existing command pattern. When updating existing markers, the system intelligently determines whether to preserve the current duration or apply a new one.
// New method signature for range markers
bool addRangeMarker(GenTime pos, GenTime duration, const QString &comment, int type = -1);
// Extended edit method with duration support
bool editMarker(GenTime oldPos, GenTime pos, QString comment, int type, GenTime duration);
The model now emits appropriate data change signals for duration-related roles (DurationRole, EndPosRole, HasRangeRole) when range markers are modified, ensuring the UI stays synchronized.
🔗 Commit: Implement range marker support in MarkerListModel
All existing marker functionality continues to work exactly as before. Point markers are simply range markers with zero duration, ensuring a smooth transition for existing projects and workflows.
In the upcoming weeks, with the core range marker backend in place, the next phase will focus on: