
Saturday, 19 July 2025

KDE’s GitLab setup has a branch-naming rule that I always forget about – branch names should start with work/ if you want the server to let you rebase and push rebased commits (that is, only work branches can be --force pushed to).

I have had to abandon PRs and open new ones a few times now because of this.

Something like this is easy to check on the client side with pre-commit hooks. (A pre-push hook could also be used, but I like the check to happen as early as possible.)

A hook script that checks that your branch name starts with work/YOUR_USER_NAME (I like to have the username in the branch name) is rather simple to write:

#!/bin/bash

REPO_URL=$(git remote get-url origin)
KDE_REPO_HOST="invent.kde.org"

if [[ "${REPO_URL}" == *"${KDE_REPO_HOST}"* ]]; then

    BRANCH=$(git rev-parse --abbrev-ref HEAD)
    BRANCH_REGEX="^work/$USER/.*$"

    if ! [[ $BRANCH =~ $BRANCH_REGEX ]]; then
      echo "Your commit was rejected because of the branch name '$BRANCH'; it should start with 'work/$USER/'"
      exit 1
    fi

fi

It checks that the Git repository is on invent.kde.org, and if it is, it checks if the current branch follows the desired naming scheme.

KDEGitCommitHooks

But the question is where to put this script?

Saving it as .git/hooks/pre-commit in the cloned source directory would work in general, but there are two problems:

  • Manually putting it into every single cloned KDE source directory on your system would be a pain;
  • KDEGitCommitHooks, which is used by many KDE projects, will overwrite the custom pre-commit hook script you define.

The second issue stopped being a problem a few hours ago. KDEGitCommitHooks (a part of the extra-cmake-modules framework) now generates a pre-commit hook that, in addition to what it did before, executes all the custom scripts you place in the .git/hooks/pre-commit.d/ directory.

So, if a project uses KDEGitCommitHooks you can save the aforementioned script as .git/hooks/pre-commit.d/kde-branches-should-start-with-work.sh and it should be automatically executed any time you create a new commit (after KDEGitCommitHooks updates the main pre-commit hook in your project).

For projects that do not use KDEGitCommitHooks, you will need to add a pre-commit hook that executes scripts in pre-commit.d, but more on that in a moment.

Git templates

The first problem remains – putting this into a few hundred local source directories is a pain and error-prone.

Fortunately, Git allows creating a template directory structure which will be reproduced for any repository you init or clone.

I placed my template files into ~/.git_templates_global and added these two lines to ~/.gitconfig:

[init]
    templatedir = ~/.git_templates_global

I have two KDE-related hook scripts there.

The above one is saved as ~/.git_templates_global/hooks/pre-commit.d/kde-branches-should-start-with-work.sh.

And the second file is the default main pre-commit (~/.git_templates_global/hooks/pre-commit) script:

#!/usr/bin/env bash

# If the user has custom commit hooks defined in the pre-commit.d directory,
# execute them
PRE_COMMIT_D_DIR="$(dirname "$0")/pre-commit.d"

if [ -d "$PRE_COMMIT_D_DIR" ]; then
    for PRE_COMMIT_D_HOOK in "$PRE_COMMIT_D_DIR"/*; do
        # Skip if the glob matched nothing
        [ -f "$PRE_COMMIT_D_HOOK" ] || continue
        "$PRE_COMMIT_D_HOOK"
        RESULT=$?
        if [ $RESULT -ne 0 ]; then
            echo "$PRE_COMMIT_D_HOOK returned non-zero: $RESULT, commit aborted"
            exit $RESULT
        fi
    done
fi

exit 0

It tries to run all the scripts in pre-commit.d and reports if any of them fail.

This default main pre-commit script will be used in projects that do not use KDEGitCommitHooks. In the projects that do, KDEGitCommitHooks will replace it with a script that executes everything in pre-commit.d the same way this one does, but with a few extra steps.
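The template only applies to repositories you init or clone from now on. For clones that already exist, re-running git init inside them is documented to be safe: it only copies template files that are missing and never overwrites hooks that are already there. So a loop like this, assuming your checkouts live under a hypothetical ~/kde/src directory (adjust the glob to wherever yours actually are), can retrofit the hooks:

```shell
# Retrofit the git template into repositories cloned before the
# template was configured. ~/kde/src is an assumed location.
for gitdir in ~/kde/src/*/.git; do
    [ -d "$gitdir" ] || continue           # glob may match nothing
    git -C "$(dirname "$gitdir")" init -q  # picks up newly added template files
done
```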

I’m currently backpacking in the Balkans and, considering that it’s been such a long time since my last blog post, I figured it was a good idea to write about the trip.

As I travel, I am also field-testing KDE Itinerary and sending patches as I buy my tickets and reserve my hostels.

Ljubljana, Slovenia

My first stop was in the capital of Slovenia, Ljubljana. I went there by train from Berlin. I first took the night train to Graz and then an intercity train to Ljubljana. And while the connection was perfect and there was no delay, the whole journey took almost 19 hours.

Night Jet
Sleeping car in the night jet

I stayed only one night in Ljubljana in the party hostel Zzz and I enjoyed my time there quite a bit. Thanks to the Hostelworld app, it was super easy to find a group of fellow solo travelers to enjoy the evening with. We went to a food market and I had some delicious local pasta.

Street in Ljubljana
Castle
Food market
Another view of the food market
The food at the food market
The food at the food market

Rijeka, Croatia

My second stop was Rijeka, the third-biggest city in Croatia. I also took the train to get there. The city itself is very beautiful, and so are the beaches. But I didn't really like that a massive sea port splits the city center from the beaches. The hostel experience was the most disappointing of the whole trip so far.

Small port of Rijeka
Small stairs going to a swimming spot in Rijeka
Sunset from the swimming spot
The sea

Split, Croatia

My next stop was Split, the second-largest city in Croatia. While there was a train connection from Rijeka to Split, I decided to take the bus, as the train was significantly slower. I stayed at the Hostel Old Town.

I really had a blast in Split. The city is very old, with Greek, Roman, Byzantine, and Venetian influence. I met a cool group of American/Norwegian travelers in a hostel bar and got dragged to the Ultra festival, which was an amazing experience.

Small alleyway in the old town of Split
Old fortification
Small town square
Another old building
Picture of the festival
Group picture of the people I met at the festival
Chilling out the day after the festival in the shade of a tree

The day after, I went to a boat party which was equally a blast.

Boat party
Boat party with the sunset
Drinking a reasonable amount of Aperol
Dancing with a stick
More dancing
Group photo at the end

Last weekend I attended the Transitous Hack Weekend in Berlin. This was the first time we ran such an event for Transitous, and with quite a few more people attending than expected, it most probably won’t be the last one either.

Transitous logo

DELFI Family & Friends Day

Immediately prior to the Transitous Hack Weekend there was the 2nd DELFI “Family & Friends Day”, for which a number of participants had been in Berlin anyway. DELFI is the entity providing the aggregated national public transport static and realtime schedule data for Germany, which is important input for Transitous.

Extent and quality of that data have room for improvement, so having many members of the community there to lobby for changes helps. And while there’s certainly awareness and willingness among the people doing the work, the complex processes and structures with many different public and private stakeholders don’t exactly yield great agility.

Transitous Hack Weekend

For the Transitous Hack Weekend Wikimedia Deutschland had kindly allowed us to use their WikiBär venue. Special thanks also to Jannis, Theo and Felix for cooking for the entire group during the weekend, which not only kept us all well fed but also made the event particularly efficient and allowed us to cover a wide range of topics in the short time, as you can see below.

A bunch of people sitting around desks with laptops, several small groups in active discussions.
Transitous Hack Weekend in progress (6 more participants not pictured). (CC0-1.0)

Topics

Transitous isn’t attached to any legal entity so far, which is a challenge when it comes to handling money, signing contracts or providing official data protection contacts. Eventually this needs to change.

Setting up our own foundation is of course an option, but that implies (duplicated) work and continuous costs that similar organizations have already covered. Therefore our preferred approach would be to attach Transitous to an existing like-minded foundation.

Usage policy

So far Transitous had no official rules on who can use it, and how. As long as it was primarily used by FOSS applications that wasn’t much of a problem, as that’s exactly what it is intended for. Even better, most of those were actively contributing to Transitous in some form.

However, we recently also got requests from non-FOSS and/or commercial users, which is not what Transitous is intended for. This is now documented here.

For everyone else nothing should really change; just please make sure you send proper client identification, e.g. in a User-Agent header.

Data normalization and augmentation

There are various reasons why we might want to modify the schedule data that goes into Transitous: it can be incomplete, different sources can name or label things inconsistently, or things can be outright wrong. Ideally all of that gets resolved upstream, but that’s slow at best in most cases, and sometimes just not possible.

We were therefore discussing ideas for a declarative pattern matching/transformation rule system to define such modifications in a way that remains maintainable when facing thousands of continuously changing datasets.

That still needs a bit more design work I think, but it would allow solving a number of current issues:

  • Unify route/trip names. How bad this can get can currently be observed e.g. with long distance trains in Germany and Eurostar services.
  • Normalize the mode of transport across different sources. This would, for example, let us fix Flixtrains being classified as regional train services.
  • Augment agency/operator contact information, for use in integrated upstream issue reporting.
  • Add or normalize line colors, and eventually add line, mode and operator logos.

Computing missing route shapes also fits into this, although that’s a bit special as it requires a much more elaborate computation than just modifying textual CSV cells.

Collaboration with other data aggregators

When Transitous started it was entirely unclear whether it would ever be able to scale beyond pure station-to-station public transport routing. Meanwhile we are way past that point, with more and more things being added for full intermodal door-to-door routing. Many of those imply building dataset catalogs similar to the one we have for the public transport schedule data.

While we can do that, there are often overlapping projects in those areas already, and our preferred solution is to join forces instead: collect and aggregate all input data there and consume it from that single source.

This includes:

  • Availability of sharing vehicles (from GBFS feeds or converted from other sources). Looking at Citybikes for that.
  • Elevator status data, which can be crucial for wheelchair routing. Looking at accessibility.cloud for that, the same source KDE Itinerary already uses.
  • Availability of parking spaces, which then can be considered when routing with a bike or car. Looking at ParkAPI for that.
  • Realtime road traffic data and dynamic roadsign data, which is useful for road routing. There seem to be some recent developments on this from CoMaps.

We also discovered a few data sets I had no idea were even available anywhere, like live positions of (free) taxis, which could allow new routing options. (Also, lots of Kegelrobben (grey seal) data, the use of which in a transportation context eludes me so far.)

External communication

Transitous now has a Mastodon account! We’ll use that for service alerts, project updates and to share posts about related applications or events.

If you want to stay up to date on Transitous’ growing coverage and are up for a small Python coding task: a script to generate a list of coverage additions within the last week would help a lot with providing regular updates there.

We’d also like to have a blog feed on the website; that would need to be set up, but more importantly it requires a few people committing to actually producing regular long-form content.

Documentation and onboarding

With a few people attending who weren’t neck-deep in Transitous or MOTIS code already, we had valuable fresh perspectives on the documentation and onboarding experience.

Both the documentation and the usability and error handling of the tools in the import pipeline have already benefited from this, and there are more improvements yet to be integrated.

Realtime data from vehicle positions

As realtime delay/disruption data still isn’t as widely available as basic schedule data, there’s high interest in computing it from vehicle positions. That’s essentially what the “official” sources do too, just that those can also take higher-level information about network operations, track closures, etc., as well as human intervention/planning into account.

Vehicle position data tends to be more available, and can be obtained in various more or less creative ways:

  • Some operators publish those as GTFS-RT feeds or at least in some proprietary form for displaying on their own website.
  • Some systems send openly readable radio messages containing vehicle positions, such as AIS on ferries and ADS-B on aircraft, as well as various radio protocols for trams and buses.
  • Crowd-sourcing from within traveler apps such as Träwelling or Itinerary, which already know which train you are on anyway and have access to GPS.
  • Dedicated driver apps for e.g. community-operated services.

Lots of opportunity for fun projects here.

And more…

Other topics people worked on included:

  • Automating the collection of the hundreds of French GTFS feeds.
  • Bringing new servers into operation.
  • Improving the quality of the German national aggregated GTFS(-RT) feeds by generating them from NeTEx and Siri data ourselves.
  • MOTIS and GTFS-RT diagnostic tooling.
  • Better QA and monitoring.

And there’s also the meeting notes in the wiki.

More plans and ideas

With the foundation that Transitous meanwhile provides, there are also ideas and wishes that are now coming into reach:

  • Integrate elevation data for walk/bike/wheelchair routing.
  • Support for ride sharing as an additional mode of transportation.
  • Support “temporary POIs” such as events in geocoding.
  • Showing interactive network line maps, as e.g. done by LOOM.
  • Support for localized stop names and service alerts.
  • Easily embeddable JS components for departure boards or fixed-destination routing for our events.

This is probably the most exciting part for me personally; I’m very happy to see people pushing beyond just a replacement for proprietary routing APIs :)

If you are interested in any of this, join the Transitous Matrix channel and consider joining the Open Transport Community Conference in October!

Earlier this year I got the opportunity to attend, and also present a talk at, KDE Conference India 2025, which was held in Gandhinagar, Gujarat from April 4th to 6th. This was my first KDE conference, and I got to learn a lot about KDE by attending the various talks and presenting my own. In this blog post I will share my experience of attending the conference, my learnings, and a bit about the talk I presented.

What is KDE?

For those who are not familiar with KDE: it initially started as a desktop environment for Linux, known as the K Desktop Environment, but over the years it has grown so much that it is now an open source organization with a large community of developers who use, develop, and promote KDE software.

You may have heard of some popular KDE projects like:

  • Plasma, KDE's most popular and customizable software: a Linux desktop environment.

  • Kdenlive, the video editing software from KDE, which is even used in Hollywood.

  • Okular, a versatile document viewer.

  • Krita, one of the best digital painting and illustration tools available.

But KDE is not just limited to its awesome software tools; it is also a vibrant community of developers who create and use these tools and make an impact in the world of open source software.

My talk focused on sustainability and greener software, and on how KDE Eco, the KDE community project that focuses on sustainability in software and hardware and believes in a greener future for technology, is making an impact with projects like KEcoLab, KDE Eco Test, Opt Green, and others, building greener, more sustainable software.

I then talked about how KEcoLab helps developers measure the energy consumption of their software, which is very helpful when creating and optimizing software so that the code consumes less energy. There are other projects too, like BE4FOSS (Blauer Engel for Free and Open Source Software), a certification given to software with low energy consumption and a high impact on sustainability and greener software. So the whole talk was about empowering sustainability and how KDE Eco is shaping the future of software.

Community: Learning, Networking, and Inspiration

Apart from all these things, attending the KDE conference was also a great experience for me, where I got to listen to talks from various speakers. One talk that I particularly enjoyed was on 'Embracing FOSS in EdTech,' where I learned how free and open-source software should be used in the education system and how it's already happening on a small scale. I found this concept really interesting.

At the conference, I had the opportunity to meet someone who was very passionate about KDE and open source software. They had come all the way from Australia, specifically to attend this conference and give a talk. I gained a lot of knowledge and was really impressed when I learned how free and open source software is used in Australia, even at the elementary school level.

That was my overall experience of attending and presenting at KDE Conference India. I hope you liked this blog post and learned something from my experience. If you are also looking to get into open source communities and be a part of one, just get involved; you can do that any time. KDE is very welcoming to new contributors, so get started, contribute, learn, and have fun.

Thank you for reading!

Welcome to a new issue of This Week in Plasma!

Every week we cover the highlights of what’s happening in the world of KDE Plasma and its associated apps like Discover, System Monitor, and more.

This week we continue the feature work for Plasma 6.5, landing a major visual change that has been years in the wanting: rounded bottom corners for windows! Check it out below, along with other goodies:

Notable New Features

Plasma 6.5.0

Breeze-decorated windows now have their bottom corners rounded by KWin automatically! This feature is on by default, but can be turned off if you prefer the older style. (Vlad Zahorodnii, link 1, link 2, and link 3)

Rounded bottom corners in Dolphin

Notable UI Improvements

Plasma 6.5.0

The sidebars in Discover and System Monitor are now resizable. And they also remember the size you choose in the state config file, not the settings config file. (Marco Martin, link 1, link 2 and link 3)

The Disks & Devices widget now lets you mount a disk without checking for errors, or manually check for errors without mounting. This can be useful for huge disks full of stuff that you know is fine, which would otherwise take a while to check for errors while mounting. (Bohdan Onofriichuk, link)

Work has started on a project to improve the ordering of search results in KRunner. Changes made so far include no longer boosting the priority of KDE apps and of items marked as favorites, both of which tended to make the results feel more random. More changes will be coming soon; this is not the end! (Harald Sitter, link 1 and link 2)

Plasma’s Weather Report widget now fetches weather data immediately upon waking from sleep when the computer was sleeping for more than 30 minutes. (Bohdan Onofriichuk, link)

When you start creating a user on System Settings’ Users page, there’s now a “Cancel” button you can click on to easily stop the process and go back. (Rémi Piau, link)

Gear 25.08.0

In Plasma’s Disks & Devices widget, optical discs no longer display the option to open them in Partition Manager, because this makes no sense. (Joshua Goins, link)

Frameworks 6.17

The NavigationTabButton component — which is used on various System Settings pages as well as a bunch of QtQuick-based apps — now uses a more obvious style to communicate keyboard focus, improving accessibility. (Devin Lin, link)

Notable Bug Fixes

Plasma 6.4.4

Fixed a bug that caused the Volume Controls page (accessed from System Settings’ Sound page) to never enter narrow mode when displayed with longer text than would normally fit on the page, which can happen when using various languages other than English. (Méven Car, link)

Plasma 6.5.0

With HDR mode turned on, the cursor on the lock screen is now dimmed in the expected way when the rest of the screen dims. (Xaver Hugl, link)

Frameworks 6.17

Worked around some Qt issues in Kirigami that could cause apps to crash when using software rendering. (Vlad Zahorodnii, link)

The very common Kirigami.FormLayout component used throughout System Settings and other KDE apps no longer flickers a tiny bit the first time it’s shown on a page. (Niccolò Venerandi, link)


Notable in Performance & Technical

Plasma 6.5.0

Added some more autotests to verify Plasma’s ability to load various parts of itself. (Nicolas Fella, link 1, link 2)

Made KWin less trusting of colorimetry coming from screens’ EDID data, because it’s wrong enough of the time that it doesn’t make sense to use by default. (Xaver Hugl, link)

Data for the size of the file dialog window is now stored in the state config file, not the settings config file. (Nicolas Fella, link)

How You Can Help

KDE has become important in the world, and your time and contributions have helped us get there. As we grow, we need your support to keep KDE sustainable.

You can help KDE by becoming an active community member and getting involved somehow. Each contributor makes a huge difference in KDE — you are not a number or a cog in a machine! You don’t have to be a programmer, either; many other opportunities exist!

You can also help us by making a donation! A monetary contribution of any size will help us cover operational costs, salaries, travel expenses for contributors, and in general just keep KDE bringing Free Software to the world.

To get a new Plasma feature or a bugfix mentioned here, feel free to push a commit to the relevant merge request on invent.kde.org.

Thursday, 17 July 2025

The Akademy 2025 Program is now live!

This year’s Akademy will take place in Berlin, hosted at the Technische Universität Berlin, both in person and online.

Akademy starts with a welcome event on Friday, 5 September, followed by two full days of talks on Saturday, 6 and Sunday, 7 September, then four days of dedicated BoFs, workshops, meetings, and trainings from Monday, 8 through Thursday, 11 September. Expect a community day trip midweek.

The schedule highlights:

  • Talks covering KDE Frameworks, Plasma, Applications and how KDE can empower digital sovereignty, robust communities, and open development.
  • In-depth sessions on embedded Linux, Plasma innovations, KDE’s eco efforts, backends & frontends, and beyond.
  • Community-driven workshops and BoFs cultivating collaboration and project momentum throughout the week.

This hybrid event model continues to grow, embracing both onsite attendance and remote participation, allowing contributors from around the globe to connect and engage.

Venue & Registration Details:

  • Venue: Technische Universität Berlin
  • In-person + Online: 6–11 September (with the welcome event on 5 September).
  • Registration is open and free!
  • You can explore the full program on Akademy’s website. Stay tuned for our keynotes announcement!

Understanding the bridging role of QGraphicsProxyWidget

Qt is a "super toolbox" for building graphical user interfaces (GUIs) and cross-platform applications. Qt provides a set of standardized "parts": assemble them once, and your application runs on all major operating systems.

Before we dive into today's topic, it is essential for readers new to Qt to understand the "three cornerstones" of its design.

The three cornerstones of Qt

QObject

In Qt, almost every meaningful object (windows, buttons, timers, and so on) inherits from the ancestor class QObject. What makes QObject so special is that it is the gateway to Qt's Meta-Object System.

What is the meta-object system? Think of it as a detailed "ID card" that every QObject carries around. This card is not something native C++ provides; Qt generates it for your code before compilation through a preprocessor tool called MOC (the Meta-Object Compiler). The "ID card" records:

  • The object's class name (className()).
  • Which signals it can emit.
  • Which slot functions it offers for invocation.

It is precisely this pre-generated "ID card" that lets Qt query an object's information dynamically at runtime and implement the extremely flexible signal and slot communication mechanism described next.

Signals & Slots

This is the soul of the Qt framework, an unmatched mechanism for communication between objects. An everyday analogy helps:

  • The traditional way (function calls): you press a light switch; the switch must know exactly where the bulb is and call the bulb's turnOn() method. Switch and bulb are tightly coupled; replacing the bulb may mean rewiring the switch.
  • The Qt way (signals and slots):
    • When a QPushButton is clicked, it does not go looking for the object that should do the work; it simply shouts: "I was clicked!" (that is the clicked() signal).
    • Another object, say a window, has an "ear" listening; that ear is a slot function (for example handleButtonClick()).
    • A single line of code, connect(button, &QPushButton::clicked, window, &MyWindow::handleButtonClick);, establishes the connection.

From then on, the button only emits the signal and the window only receives and handles it. They are completely independent: one button's clicked signal can be connected to several slots, and one slot can receive several signals. This decoupled design is the cornerstone of complex, maintainable UIs.

graph TD
subgraph "Objects"
Button["QPushButton<br/>(signal source)"]
Label["QLabel<br/>(receiver)"]
end

subgraph "Mechanism"
Signal["clicked() signal"]
Slot["setText() slot"]
Connect["connect(...) function<br/><b>(establishes the connection)</b>"]
end

UserClick["User click"] --> Button
Button -- emits --> Signal

subgraph "Connection setup (setup time)"
Connect -.-> Signal
Connect -.-> Slot
end

subgraph "Signal firing (runtime)"
Signal -- triggers --> Slot
end

Slot -- acts on --> Label

style Connect fill:#d5e8d4,stroke:#82b366

The Event Loop

A GUI program is unlike a command-line program: it does not run from start to finish and then exit. It has to keep running, waiting for the user's actions. That is the job of the event loop.

Think of it as a diligent receptionist (QApplication::exec()):

  1. Wait: the receptionist sits there, looping and waiting. When nothing happens, the program "dozes off" and consumes no CPU.
  2. Event: an "event" occurs (say, a visitor arrives, representing a mouse click from the user).
  3. Dispatch: the receptionist reads the visitor's appointment slip, which says "see the button manager on the third floor", and guides the visitor (the event) to the correct object (the button that was clicked).
  4. Handling: the "button manager" deals with the visitor's business (for example, by emitting the clicked() signal) and tells the receptionist "I'm done."
  5. The receptionist returns to the desk and waits for the next event.

This wait-dispatch-handle loop is the core mechanism that lets every GUI application respond promptly to our actions.

"Component Set" vs. "Canvas"

In Qt, the two systems for building user interfaces are the QWidget (widget) system and the QGraphicsView (scene) system. To see why QGraphicsProxyWidget exists, we first need to understand both.

The QWidget system

QWidget is the first way we build UIs, and the most common. Its core idea is the "component set"; picture a strictly structured, hierarchical "city".

  • Core units and parent-child relationships: the basic building blocks of this "city" are QWidget and its subclasses (QPushButton, QLineEdit, and so on). They form strict parent-child relationships. A child widget is visually confined to its parent widget's rectangle and cannot escape it. When the parent widget is moved, hidden, or destroyed, all of its child widgets move, hide, or are destroyed with it. This hierarchy forms the skeleton of the UI.
  • Layout management (Layouts): to keep the city's buildings tidy, the QWidget system introduces layout managers (subclasses of QLayout such as QVBoxLayout, QHBoxLayout, and QGridLayout). A layout manager is the city's "planning bureau": it takes over the size and position of every widget inside it. You only tell the "planning bureau" which widgets to place and a few basic rules (spacing, alignment); it computes the best size and position for each widget and intelligently re-lays everything out when the window is resized, keeping the UI responsive and good-looking.
  • Event handling (Event Handling): QWidget event handling is "direct dispatch". On Linux, every top-level QWidget is a native window managed by the window manager, with its own window handle. When the operating system captures a mouse click or keyboard input, it uses the screen coordinates of the event to determine exactly which native window it belongs to. The event is then received by Qt's event loop and delivered almost directly to the target QWidget. Widgets respond by overriding virtual functions such as mousePressEvent(), keyPressEvent(), and paintEvent().
  • Painting model (Painting Model): the painting model is "independent and rasterized". Every QWidget owns a separate block of memory called the backing store. When a repaint is needed (triggered by paintEvent), each widget paints only the pixels inside its own rectangle. Finally, the window system or the parent widget composites these independent results together into the complete window we see.

graph TD
subgraph "Operating system screen"
A["QMainWindow<br/>(top-level widget, native window handle: XID/HWND)"]
end

subgraph A [QMainWindow]
B[QMenuBar]
C[QToolBar]
D[CentralWidget]
E[QStatusBar]
end

subgraph D [CentralWidget]
F{"QGridLayout<br/>(layout manager)"}
G[QPushButton]
H[QLineEdit]
end

A --> B
A --> C
A --> D
A --> E

D --> F
F --> G
F --> H

style A fill:#dae8fc,stroke:#6c8ebf,stroke-width:2px
style F fill:#e1d5e7,stroke:#9673a6,stroke-width:1px,stroke-dasharray: 5 5

The QGraphicsView system

Unlike the rigor of QWidget, QGraphicsView offers an extremely free and dynamic creative environment. Its core ideas are the "canvas" and the "scene graph". Picture a digital creative space seen from a god's-eye view.

This world consists of three core components:

  1. QGraphicsScene (the scene, the logical world):
    • This is the "infinite universe" or "data model". It is a logical container holding up to many thousands of graphics items (QGraphicsItem).
    • The scene itself does not care how it is displayed; it only manages the logical coordinates, state, and hierarchy of all its items. Using an efficient spatial index (such as a BSP tree), it can very quickly find which items lie in a given region.
  2. QGraphicsView (the view, the observation window):
    • This is the "camera" or "viewfinder" through which we observe the scene. It is a standard QWidget, and the only native window in the whole system that interacts directly with the operating system.
    • Its core job is to map the vector coordinates of the QGraphicsScene onto its own pixel coordinates, and to translate the user's mouse and keyboard events before handing them to the scene.
    • Most powerfully, the same scene can be observed by multiple views. You can create two views, one showing the whole scene (like a map) and one zoomed in on a local detail (like a street), and a change to an item in one view is instantly reflected in the other. The view itself can also be rotated and zoomed, like adjusting a camera's angle and focal length.
  3. QGraphicsItem (graphics items, the world's inhabitants):
    • These are the "actors" or "residents" of the scene, the basic units of visual content. They are lightweight, because they have no window handle and no event loop of their own.
    • Every QGraphicsItem knows how to paint itself (via its paint() method), its bounding rectangle (boundingRect()), and its precise shape (shape(), used for collision detection).
    • They can be freely moved, scaled, rotated, and sheared, can be combined into complex composites, and carry a Z value that controls their stacking order.
  • Event handling (Event Handling): QGraphicsView event handling is "indirect and scene-driven". All native events are first received by the QGraphicsView. The view then asks the scene: "the mouse is at pixel (x, y); which coordinate is that in your logical world, and which item is topmost there?" Once the scene has used its efficient index to find the target item, the QGraphicsView dispatches the event to that item. The whole process happens inside the application; the operating system knows nothing about it.
  • Rendering model (Rendering Model): rendering is "centralized". The QGraphicsView is the one and only "rendering engine". When a repaint is needed, it asks the scene for all visible items in the current viewport, walks them in Z order, calls their paint() methods, and draws them all onto its own viewport. This process can switch seamlessly to an OpenGL/Vulkan backend and use the GPU for hardware acceleration, staying very fluid even when rendering large numbers of dynamic objects.

graph TD
subgraph "Operating system screen"
A["QGraphicsView<br/>(top-level widget, native window handle: XID/HWND)"]
end

subgraph A ["QGraphicsView - the viewfinder"]
subgraph B ["QGraphicsScene - the infinite canvas"]
C["QGraphicsRectItem<br/>(x:10, y:20, rotation:15deg)"]
D["QGraphicsTextItem<br/>(x:150, y:100, scale:1.5)"]
E["QGraphicsPixmapItem<br/>(x:80, y:180, z-value:1)"]
F["QGraphicsEllipseItem<br/>(x:-50, y:90, z-value:2)"]
end
end

D --- C
E --- D
F --- E

linkStyle 0,1,2 stroke-width:0px;

style A fill:#dae8fc,stroke:#6c8ebf,stroke-width:2px
style B fill:#d5e8d4,stroke:#82b366,stroke-width:2px,stroke-dasharray: 5 5
style C fill:#f8cecc,stroke:#b85450
style D fill:#f8cecc,stroke:#b85450
style E fill:#f8cecc,stroke:#b85450
style F fill:#f8cecc,stroke:#b85450

The bridge: QGraphicsProxyWidget

By now we understand just how fundamentally different the two worlds are:

  • The QWidget system: a society of "citizens" who each hold an official "residence permit" (a window handle); orderly, but inflexible.
  • The QGraphicsView system: a magical world of "virtual spirits" without permits; free and unconstrained, but short on ready-made, feature-rich "citizens".

This makes the core tension obvious: how do we invite a fully featured QWidget "citizen" into the magical world of a QGraphicsScene, keep it able to act and interact there, and let it enjoy the "magic" of scaling and rotation?

This is why QGraphicsProxyWidget exists. It is the "interdimensional bridge" built to connect these two parallel universes.

The core translation mechanisms

The bridging done by QGraphicsProxyWidget is not magic but a precise "compositor" implemented in user space, resting on two translation mechanisms.

Capturing and "re-injecting" the event stream

A widget must be usable, not just visible. QGraphicsProxyWidget guarantees interactivity through a clever event-redirection flow.

sequenceDiagram
participant U as User
participant OS as Operating system (X11/Wayland)
participant V as QGraphicsView
participant P as QGraphicsProxyWidget
participant W as QPushButton (proxied)

U->>OS: Mouse click
OS->>V: Forward the native mouse event
V->>P: Deliver a QGraphicsSceneMouseEvent (scene coordinates)
P->>P: **Internal translation**: <br/>1. convert the coordinate system<br/>2. convert the event type
P->>W: **"Re-inject"** a forged QMouseEvent (widget coordinates)
W->>W: Handle the click as usual, emit clicked()
As the diagram shows, QGraphicsProxyWidget acts like a customs office: it "intercepts" events coming from the QGraphicsScene, "translates" and "disguises" them, and then "injects" them into the QWidget it proxies. The proxied widget never notices and believes it is living in an ordinary window environment.

Redirecting the painting machinery

QGraphicsProxyWidget uses an "off-screen rendering" technique that makes the QWidget unknowingly paint itself onto a "shadow canvas".

graph TD
A["QGraphicsScene requests a repaint of the proxy item"] --> B{"QGraphicsProxyWidget::paint() is called"};
B --> C["<b>Step 1:</b> prepare a 'shadow canvas' in memory<br/>(a QPixmap)"];
C --> D["<b>Step 2:</b> call the inner widget->render(...)<br/>telling the widget to paint itself onto that canvas"];
D --> E["<b>Step 3:</b> QPushButton finishes painting<br/>the 'shadow canvas' now holds the button's image"];
E --> F["<b>Step 4:</b> the proxy draws the finished canvas content<br/>into the scene as an ordinary image<br/>(rotation, scaling, and other transforms can be applied here)"];
F --> G["Painting done"];

style E fill:#d5e8d4,stroke:#82b366

Throughout this process, QGraphicsProxyWidget behaves like a masterful painter: it first has the QWidget draw a self-portrait on a sheet of paper that is never shown directly, then picks up that painting, applies some artistic touches (the transformations), and finally pins it onto the big display board that is the QGraphicsScene.

Building Powerful UIs

With QGraphicsProxyWidget mastered, we can build complex UIs that traditional QWidget layouts struggle to achieve. The classic example is a node editor.

graph TD
subgraph QGraphicsScene ["Canvas: node editor"]
subgraph NodeA ["QGraphicsItem: 'Vector Math' node"]
P1["QGraphicsProxyWidget"] --> W1["QLabel: 'Input A'"]
P2["QGraphicsProxyWidget"] --> W2["QLineEdit"]
P3["QGraphicsProxyWidget"] --> W3["QLabel: 'Input B'"]
P4["QGraphicsProxyWidget"] --> W4["QLineEdit"]
P5["QGraphicsProxyWidget"] --> W5["QLabel: 'Operation'"]
P6["QGraphicsProxyWidget"] --> W6["QComboBox: [Add, Multiply, ...]"]
end
subgraph NodeB ["QGraphicsItem: 'Output' node"]
P7["QGraphicsProxyWidget"] --> W7["QLabel: 'Result'"]
P8["QGraphicsProxyWidget"] --> W8["QLineEdit (read-only)"]
end
NodeA -- connection --> NodeB
end

The "Toll"

This elaborate machinery does not come for free.

  • Memory overhead: every QGraphicsProxyWidget needs a QPixmap as its "shadow canvas". With thousands of complex proxied widgets in a scene, that memory footprint becomes considerable.
  • CPU overhead: compared to a QGraphicsItem that issues a few simple drawing commands, a proxy item's QWidget::render() call (essentially software rasterization) and the constant event translation both cost extra CPU time.

QGraphicsProxyWidget is therefore a heavy weapon, not a standard-issue one. If a feature can be implemented with a lightweight custom QGraphicsItem, prefer that. Only when you need to embed an existing QWidget so complex that rewriting it as a QGraphicsItem would be prohibitively expensive is QGraphicsProxyWidget the right choice.

Summary

From start to finish, QGraphicsProxyWidget's core mission has been to act as a high-performance, real-time translator and user-space compositor between two different graphics and event systems:

  1. Painting redirection: QGraphicsProxyWidget takes over the proxied QWidget's paint target, redirecting it from the screen's backing store to an in-memory off-screen canvas (a QPixmap). It uses QWidget::render() to have the widget do all of its painting on this "shadow canvas", then draws the finished, rasterized bitmap as an ordinary texture onto the QGraphicsScene's final render target. At the application level, this amounts to a self-contained off-screen rendering and blitting pipeline.
  2. Event stream remodeling: it sets up a "checkpoint" on the event delivery path. Every QGraphicsScene event aimed at its position in the scene is captured. Using the scene graph's transform information, it translates the event's coordinate system, type, and parameters from the QGraphicsScene context into the QWidget's local context, constructing a brand-new, standard QWidget event. Through Qt's event system it then re-injects this synthesized event into the proxied widget's event queue, "tricking" the QWidget into believing it is interacting directly with the operating system.
  3. Geometry state synchronization: QGraphicsProxyWidget continuously monitors the inner QWidget's size policy and geometry. When the QWidget wants a new size because its internal layout changed, the proxy item updates its bounding rectangle in the QGraphicsScene accordingly. Conversely, when the proxy item is scaled or otherwise transformed in the scene by the user or an animation, it passes that geometric change on to the inner QWidget, triggering a relayout. This two-way synchronization keeps the two worlds visually and structurally consistent.

In short, QGraphicsProxyWidget is not a simple container but a precision runtime adapter. At the cost of some memory and CPU, it emulates a QWidget's painting environment and event sources in user space, making it possible to embed and reuse fully featured rasterized components inside a vectorized, transformable scene graph. It is a prime example of "composition over inheritance" and the adapter pattern within the Qt framework.

Wednesday, 16 July 2025

The shell history can quickly become polluted with commands that are only relevant for specific projects. Running specific unit tests from project A, starting docker with services needed for project B, etc.

There are some Bash and Zsh scripts that give you a separate history for each directory you are in, which can be useful in situations such as these. The problem is that the history is per directory rather than per project: you get one history in a project's root directory and a different one in each of its subdirectories.

For this reason, I’ve created a small Zsh plugin based on jimhester/per-directory-history.

Instead of creating a separate history for each directory you change to, it creates separate histories only for directories that are ‘tagged’ with some custom file, be it .git, .envrc or something else (it is customizable).

For any directory you change to, it checks whether that directory or any of its parents contains the 'tag' file (the closest matching parent wins) and uses that directory as the project root, creating a separate history for it.

Installation and configuration

You just create an array named PER_PROJECT_HISTORY_TAGS that contains all the file names you want to be used for detecting the project roots:

declare -a PER_PROJECT_HISTORY_TAGS
PER_PROJECT_HISTORY_TAGS=(.envrc .should_have_per_project_history)
declare -r PER_PROJECT_HISTORY_TAGS

In the example above, any directory for which I have defined custom environment variables using direnv's .envrc will be treated as a project root, along with any directory explicitly tagged with .should_have_per_project_history.

This is useful when you have several source repositories inside of a single project that should all have a common history, so you can’t use .git as a tag to detect the project root.

If you don’t define your own tags, the default ones will be used (.git .hg .jj .stack-work .cabal .cargo .envrc .per_project_history).

Then you just source the per-project-history.zsh file from the plugin’s repository.

Or, if you use a plugin manager, add ivan-cukic/zsh-per-project-history to the list of plugins. For Zinit, it would look like this:

zinit light ivan-cukic/zsh-per-project-history

Welcome to the June 2025 development and community update.

Development Report

Krita 5.2.11 Released

A new bugfix release, Krita 5.2.11, is out. Check out the release notes for 5.2.10 and the release post for 5.2.11 and stay up to date.

Qt6 Port Progress

Dmitry spent time improving framerate of the canvas on Qt6, and implemented support for reading modifier keys while Krita is out of focus on Wayland (MR!2409, MR!2406).

Community Report

June 2025 Monthly Art Challenge Results

23 forum members took on the challenge of the "Wrath of the Sun" theme. And the winner is… Summer Battle by @Katamaheen

Summer Battle by @Katamaheen

The July Art Challenge is Open Now

For the July Art Challenge, winner @Katamaheen has chosen "Cool Rides" as the theme. Design a vehicle, optionally add fictional sponsorship logos as suggested by @Mythmaker, and let's race!

Best of Krita-Artists - May/June 2025

This month's Best of Krita-Artists Nominations thread received 22 nominations of forum members' artwork. When the poll closed, these five wonderful works made their way onto the Krita-Artists featured artwork banner:

Color sketch practice by @JayWong

Color sketch practice by @JayWong

My Love by @MauFlores

My Love by @MauFlores

Solstice concept art by @MauFlores

Solstice concept art by @MauFlores

Asian by @yartydesign

Asian by @yartydesign

Painting Animation Background by @Mahmoud_Jalaliye

Painting Animation Background by @Mahmoud_Jalaliye

Best of Krita-Artists - June/July 2025

Take a look at the nominations for next month.

Ways to Help Krita

Krita is Free and Open Source Software developed by an international team of sponsored developers and volunteer contributors. That means anyone can help make Krita better!

Support Krita financially by making a one-time or monthly monetary donation. Or donate your time and Get Involved with testing, development, translation, documentation, and more. Last but not least, you can spread the word! Share your Krita artworks, resources, and tips with others, and show the world what Krita can do.

Other Notable Changes

Other notable changes in Krita's development builds from June 6, 2025 - July 16, 2025.

Stable branch (5.2.10):

  • Animation: Static opacity changes now properly clear animation cache. (bug report) (Change, by Emmet O'Neill)
  • Animation: Fix incorrect scaling of animated transform mask values. (bug report, CCbug report) (Change, by Emmet O'Neill)
  • Keyboard Input: Implement an option to ignore the F13-F24 keys on Windows, to avoid problems with certain apps (such as WeeChat) sending unbalanced fake F22 keypresses that confuse the input system. (bug report) (Change, by Dmitry Kazakov)

Unstable branch (5.3.0-prealpha):

  • File Formats: WebP: Add options to force conversion to sRGB and to embed the ICC profile. (wish bug report) (Change, by Rasyuqa A H)
  • File Formats: JPEG-XL: Improve image mode export options, import multi-page images as layers. (Change, by Rasyuqa A H)

Nightly Builds

Pre-release versions of Krita are built every day for testing new changes.

Get the latest bugfixes in Stable "Krita Plus" (5.2.12-prealpha): Linux - Windows - macOS (unsigned) - Android arm64-v8a - Android arm32-v7a - Android x86_64

Or test out the latest Experimental features in "Krita Next" (5.3.0-prealpha). Feedback and bug reports are appreciated!: Linux - Windows - macOS (unsigned) - Android arm64-v8a - Android arm32-v7a - Android x86_64

Tuesday, 15 July 2025

Integrate KTextEditor into Cantor

Project Introduction

Cantor is a powerful scientific computing front-end in the KDE ecosystem, providing users with a unified and friendly interface for mathematical and statistical analysis.

Currently, Cantor’s worksheet cells are based on a custom implementation using QTextDocument. While this approach meets basic needs, it has revealed its limitations in terms of feature expansion and long-term maintenance. To fundamentally enhance the editing experience, simplify the codebase, and embrace the advanced technology of the KDE Frameworks, this project plans to deeply integrate the feature-rich KTextEditor component into Cantor, completely replacing the existing cell implementation.

This core upgrade will bring a suite of long-awaited, powerful features to Cantor, including:

  • Enhanced Multi-line Editing: Significantly improves the editing experience for complex, multi-line code blocks. This includes more robust syntax highlighting, accurate bracket matching, and a more stable editing environment, resolving key issues present in the current implementation.
  • Vi Mode: Provides native Vi-style text editing for users accustomed to Vim’s efficient workflow, significantly improving editing speed.
  • Improved Syntax Highlighting: Supports more comprehensive and precise code coloring rules, making complex mathematical and programming expressions clearer and easier to read.
  • Smart Auto-indent: Enhances code formatting capabilities, making it exceptionally convenient to write structured scripts and multi-line formulas.
  • Code Completion: Intelligently suggests variables, functions, and keywords, thereby speeding up input and reducing syntax errors.
  • Spell Check: Ensures the accuracy of text comments and documentation, which is crucial for writing rigorous formula explanations and reports.

By integrating KTextEditor, Cantor will not only optimize the user’s workflow but also reduce code redundancy and improve the project’s overall maintainability. This move will further strengthen the Cantor and KDE ecosystems, providing users with a smoother and more unified experience across different KDE applications.

Why this is needed

As scientific computing demands become increasingly complex, a modern, full-featured editor is essential for boosting productivity. The current custom implementation has become a bottleneck hindering Cantor’s future development:

  1. High Maintenance Cost: Maintaining a custom editor component consumes significant development resources and struggles to keep pace with the advancements in modern editors.
  2. Difficult to Extend: Implementing complex features like Vi mode or advanced code completion on the existing architecture is akin to “reinventing the wheel”—it’s inefficient and prone to introducing new bugs.
  3. Failure to Leverage the Ecosystem: The KDE Frameworks already provide the very mature and powerful KTextEditor component. Not utilizing it is a waste of available resources. Integrating KTextEditor means Cantor can directly benefit from the years of effort and refinement the entire KDE community has invested in this component.

Current Status: Phase 1 Complete

The first phase of the project has been completed. We have successfully introduced KTextEditor into the core of Cantor and achieved the following key results:

  • Core replacement completed: We created a new WorksheetTextEditorItem class, which acts as a proxy for KTextEditor::View, successfully replacing the old QTextDocument-based cell implementation.

  • Basic functions available: Users can already perform basic operations such as text input and code execution in the new cells, proving the feasibility of the integration solution.

    • short text

    • multi-line display

  • Integration of KTextEditor features: Basic syntax highlighting and text editing features are now available in the new cells.

    • Python

    • Maxima

    • Lua

  • Event handling and UI interaction: Core user interactions such as mouse events and drag-and-drop functionality for worksheet entries are working as expected within the new framework.

  • drag

  • shortcut key

The Road Ahead: The Plan for Phase 2

With the foundation now firmly in place, the second phase of the project will focus on unlocking the full potential of KTextEditor and refining the user experience. The planned work includes:

  • Activating Advanced Editor Features: We will progressively enable and configure KTextEditor’s sophisticated features, including Vi Mode, integrated Spell Check, and smart indentation rules, ensuring they are seamlessly integrated into the Cantor workflow.
  • Connecting Backend-driven Code Completion: A major goal is to connect Cantor’s various computation backends (e.g., Python, R, Octave) to the KTextEditor code completion framework. This will provide users with context-aware, intelligent suggestions.
  • Bug Fixing and UX Polishing: We will systematically address any remaining issues from the initial integration, focusing on perfecting UI/UX details like focus management, text selection, and context menu consistency.
  • Comprehensive Testing: Rigorous regression testing will be conducted across all features—new and existing—to guarantee the stability and reliability of Cantor following this major architectural upgrade.