Thursday, 28 August 2025
We are happy to announce the release of Qt Creator 17.0.1!

Hello again everyone!
I’m Derek Lin also known as kenoi, a second-year Math student at the University of Waterloo.
Through Google Summer of Code 2025 (GSoC), mentored by Harald Sitter, Tobias Fella, and Nicolas Fella, I have been developing Karton, a virtual machine manager for KDE.
As the program wraps up, I thought it would be a good idea to put together what I’ve been able to accomplish as well as my plans going forward.
A final look at Karton after the GSoC period.
The main motivation behind Karton is to provide KDE users with a more Qt-native alternative to GTK-based virtual machine managers, as well as an easy-to-use experience.
I had first expressed interest in working on Karton in early February, when I made the initial full rewrite (see MR #4) using libvirt and a new UI, wrapping the virt-install and virt-viewer CLIs. During this time, I had been doing research, writing a proposal, and trying out different virtual machine managers like GNOME Boxes, virt-manager, and UTM.
You can read more about it in my project introduction blog!
A screenshot of my rewrite from March 8, 2025.
One of my goals for the project was to develop a custom libvirt domain XML generator using Qt libraries and the libosinfo GLib API. I started working on the feature in advance in April and was able to have it ready for review before the official GSoC coding period.
I created a dialog to accept a VM name, installation media, storage, allocated RAM, and CPUs. libosinfo will attempt to identify the ISO file and return an OS short-ID (e.g. fedora40, ubuntu24.04); otherwise, users will need to select one from the displayed list.
Through the OS ID, libosinfo can provide certain specifications needed in the libvirt domain XML. Karton then fills in the rest: it generates a UUID and a MAC address to configure a virtual network, and sets up display, audio, and storage devices. The XML file is assembled through QDomDocument and passed into a libvirt call that verifies it before adding the VM.
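For readers unfamiliar with libvirt, the rough shape of this flow, sketched here with the libvirt Python bindings rather than Karton's actual C++/QDomDocument code, looks something like the following; the domain name, memory/CPU values, and disk path are made-up placeholders, and error handling is omitted:

```python
# Minimal sketch of "assemble domain XML, let libvirt validate and define it".
# Not Karton's code; placeholder names and paths throughout.
import uuid
import libvirt

xml = f"""
<domain type='kvm'>
  <name>demo-vm</name>
  <uuid>{uuid.uuid4()}</uuid>
  <memory unit='MiB'>4096</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/path/to/demo-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <graphics type='spice' autoport='yes'/>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///session")  # user-session libvirt, consistent with the paths mentioned below
dom = conn.defineXML(xml)               # libvirt validates the XML and persists the domain definition
print(dom.name(), "defined")
conn.close()
```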
VM information (id, name, state, paths, etc.) in Karton is parsed explicitly from the saved libvirt XML file found in the libvirt QEMU folder, ~/.config/libvirt/qemu/{domain_name}.xml.
All in all, this addition (see MR #8), while still barebones, completely removed the virt-install dependency.
A screenshot of the VM installation dialog.
The easy VM installation process of GNOME Boxes had been an inspiration for me, and I'd like to improve Karton's installer later on by adding a media installer and better error handling.
A few weeks into the official coding period, I had been addressing feedback and polishing my VM installer merge request. This introduced much cleaner class interface separation with regard to storing individual VM data.
My previous use of virt-viewer for interacting with virtual machines was meant as a temporary addition: it is a separate application, is poorly integrated into Qt/Kirigami, and lacks the needed customizability.
Previously, clicking the view button would open a virt-viewer window.
As such, the bulk of my time was spent working with SPICE directly, using the spice-client-glib library, in order to create a custom Qt SPICE client and viewer (see MR #15). This needed to manage the state of the connection to VM displays and render them to KDE (Kirigami) windows. Other features, such as input forwarding and audio playback, also needed to be implemented.
I had configured all Karton-created VMs to use autoport for graphics, which dynamically assigns a port at runtime. Consequently, I needed to use a CLI tool, virsh domdisplay, to fetch the SPICE URI and establish the initial connection.
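For context, fetching that URI from the CLI is a small shell-out; a minimal sketch (hypothetical helper name, assuming virsh is installed, a session-mode connection, and a running VM) might look like this:

```python
# Hypothetical helper: ask virsh for the display URI of a running domain.
import subprocess

def spice_uri(domain_name: str) -> str:
    result = subprocess.run(
        ["virsh", "--connect", "qemu:///session", "domdisplay", domain_name],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()  # e.g. "spice://127.0.0.1:5900"

print(spice_uri("demo-vm"))
```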
The viewer display works through a frame buffer. The approach I took was rendering the pixel array I received to a QImage which could be drawn onto a QQuickItem to be displayed on the window. To know when to update, it listens to the SPICE primary display callback.
You can read more about it in my Qt SPICE client blog. As noted, this approach is quite inefficient as it needs to create a new QImage for every frame. I plan on improving this in the future.
Screenshots of my struggles getting the display to work properly.
I had to manage receiving and forwarding Qt input. Forwarding QMouseEvents (mouse button clicks) was straightforward, as they can be mapped directly to SPICE protocol mouse messages when activated. Keystrokes are taken in as QKeyEvents, and the received scancodes, in evdev, are converted to PC XT for SPICE through a map generated by QEMU. Implementing scroll and drag followed similarly.
I also needed to manage receiving audio streams from the SPICE playback callback, writing them to a QAudioSink. One thing I liked is how this approach supported multiple SPICE connections quite nicely. For example, opening multiple VMs will create separate audio sources for each, so users can adjust volume levels accordingly.
Later on, I added display frame resizing when the user resizes the Karton window, as well as a fullscreen button. I noticed that the resolution still looks quite poor after resizing, so proper resizing through the guest machine will have to be implemented in the future.
Now, we can watch Pepper and Carrot somewhat! (no hardware acceleration yet)
My final major MR was to rework my UI to make better use of screen space (see MR #25). I moved the existing VM ListView into a sidebar displaying only the name, state, and OS ID. The right side then shows the detailed information of the selected VM. One of my inspirations was macOS UTM's screenshot of the last active frame.
When a user closes the Karton viewer window, the last frame is saved to $HOME/.local/state/KDE/Karton/previews. Implementing cool features like these is much easier now that we have our own viewer! I also added some opacity effects and a hover animation to make it look nice.
Finally, I worked on media disc ejection (see MR #26). This uses a libvirt call to simulate the installation media being removed from the VM, so users can boot into their virtual hard drive after installing.
As a final test of the project, I decided to create, configure and use a Fedora KDE VM using Karton. After setting specifications, I installed it to the virtual disk, ejected the installation media, and properly booted into it. Then, I tried playing some games. Overall, it worked pretty well!
My biggest regret was having a study term over this period. I had to manage my time really well, balancing studying, searching for job positions, and contributing. There was a week where I had 2 midterms, 2 interviews, and a final project, and I found myself pulling some late nights writing code at the school library. Though it's been an exhausting school term, I am still super glad to have been able to contribute to a really cool project and get something working!
I was also new to both C++ and Qt development. Funnily enough, I had been taking, and struggling with, my first course in C++ while working on Karton. I also spent a lot of time reading documentation to familiarize myself with the different APIs (libspice, libvirt, and libosinfo).
Left: Karton freezes my computer because I had too many running VMs.
Right: 434.1 GiB of virtual disks; my reminder to implement disk management.
There is still so much to do! Currently, I am on vacation, and I will be attending Akademy in Berlin in September, so I won't be able to work much until then. In the fall, I will finally be off school for a 4-month internship (yay!!). I'm hoping I will have more time to contribute again.
There’s still a lot left especially with regards to the viewer.
Here’s a bit of an unorganized list:
gl-scanout
In its current state, Karton is not feature complete and not ready for official packaging and release. In addition to the missing features listed before, there have been a lot of new and moving parts throughout this coding period, and I'd like to have the chance to thoroughly test the code to prevent any major issues.
However, I do encourage you to try it out (at your own risk!) by cloning the repo. Let me know what you think and when you find any issues!
In other news, there are some discussions of packaging Karton as a Flatpak eventually and I will be requesting to add it to the KDE namespace in the coming months, so stay tuned!
Overall, it has been an amazing experience completing GSoC under KDE, and I really recommend it for anyone who is looking to contribute to open source. I'm quite satisfied with what I've been able to accomplish in this short period of time, and I hope to continue working and learning with the community.
Working through MRs has given me a lot of valuable and relevant industry experience going forward. A big thank you to my mentor, Harald Sitter, who has been reviewing and providing feedback along the way!
As mentioned earlier, Karton still has a lot to work on, and I plan on continuing my work after GSoC as well. If you'd like to read more about my work on the project in the future, please check out my personal blog and the development Matrix room, karton:kde.org.
Thanks for reading!
Website: https://kenoi.dev/
Mastodon: https://mastodon.social/@kenoi
GitLab: https://invent.kde.org/kenoi
GitHub: https://github.com/kenoi1
Matrix: @kenoi:matrix.org
Discord: kenyoy
These last few weeks have been pretty hectic due to me moving countries, so I have not had the time to write a blog post detailing my weekly progress. Because of this, I have decided to compress it all into a single blog post covering all the changes I have been working on and what I plan on doing in the future.
In my last blog post I talked about the progress that had been made in the newmailnotifier agent, and said that in the following weeks I would finish implementing the changes and testing its functionality. Well, it ended up taking quite a bit longer, as I found that several other files also had to be moved to KMail from KDE-PIM Runtime, and these were still being used in the runtime repo. The files I have found so far and have been looking into are:
The MR for the singleshot capability in the Akonadi repo was given the green light and just recently got merged. On the other hand, the MR with the changes for the agent received feedback and several improvements were requested.
Most importantly, Carl brought to my attention how recent MRs by Nicolas Fella removed the job tracker from the migration agent, thus making it unnecessary to add it as a temporary folder. Both the requested changes and the removal of the folder have been carried out. While doing so, I even realized that my singleshot MR was missing the addition of the new finished() signal in the agentbase header file, which I have now also added.
After doing this, though, I once again focused on the problem that persisted: the singleshot capability not working properly. The migration agent would initialize without issue when running the Akonadi server, but would then not shut down after completing its tasks. I knew that the isPluginOpen() method worked in sending the finished signal, as when I opened and closed the plugin the agent would shut down correctly.
With the help of my mentor Claudio, we found that the migrations were in fact not even running: the agent would start, but the jobs would fail to run. Because of this, the logic implemented to signal the finalization of a job never had the chance to run, and thus isPluginOpen() remained untouched.
Furthermore, the way I had designed the plugin letting the agent know that it was open had proven to be insufficient, as the migrations (once we get them to run as intended) would emit the jobFinished() signal after concluding, thus triggering the isPluginOpen() method with the default value of false and shutting down the agent, even if the plugin was still open.
On the occasions the singleshot capability did work (when opening and closing the plugin), we also found that the status would show as “Broken” and the statusMessage as “Unable to start”, which may need changing. Most troubling, though, was that opening the plugin would not restart the agent, therefore only showing an empty config window. I need to find a way to either restart from the agent itself or notify Akonadi so that it restarts the agent when the plugin runs.
GSoC concludes next week, and these last few weeks have not seen any MRs from my part, so my plan is to continue with the refactoring beyond the end of the programme, working on completing the NewMailNotifier and Migration agents, as well as dealing with a few of the agents in KMail, namely MailFilter, MailMerge, and the UnifiedMailBox.
As of now, the identified issues to solve regarding the Migration agent are:
finished() signal.

In the case of the NewMailNotifier:
While there’s still work ahead, I feel that these weeks have been invaluable in terms of learning, debugging, and understanding the bigger picture of how the different Akonadi agents fit together. The experience has been both challenging and rewarding, and I’m looking forward to tackling the remaining issues with a clearer path forward.
Although GSoC is officially ending, this is just a milestone rather than a finish line, and I’m excited to continue contributing to Merkuro and the KDE ecosystem as a whole.
Qt for MCUs 2.8.3 LTS (Long Term Support) has been released and is available for download. This patch release provides bug fixes and other improvements while maintaining source compatibility with Qt for MCUs 2.8 (see Qt for MCUs 2.8 LTS released). This release does not add any new functionality.
For more information about fixed bugs, please check the Qt for MCUs 2.8.3 change log.
As with other Qt products, Qt for MCUs 2.8.3 LTS can be added to an existing installation by using the maintenance tool or via a clean installation using Qt Online Installer.
The standard support period for Qt for MCUs 2.8 LTS ends in December 2025.
Qt's model/view framework was one of the big additions to Qt 4 in 2005, replacing the previous item-based list, table, and tree widgets with a more general abstraction. QAbstractItemModel sits at the heart of this framework and provides a virtual interface that allows implementers to make data available to the UI components. QAbstractItemModel is part of the Qt Core module, and it is also the interface through which Qt Quick's item views read and write data.
Today marks both a milestone and a turning point in my journey with open source software. I’m proud to announce the release of KDE Gear 25.08.0 as my final snap package release. You can find all the details about this exciting update at the official KDE announcement.
After much reflection and with a heavy heart, I’ve made the difficult decision to retire from most of my open source software work, including snap packaging. This wasn’t a choice I made lightly – it comes after months of rejections and silence in an industry I’ve loved and called home for over 20 years.
While I’m stepping back, I’m thrilled to share that the future of KDE snaps is in excellent hands. Carlos from the Neon team has been working tirelessly to set up snaps on the new infrastructure that KDE has made available. This means building snaps in KDE CI is now possible – a significant leap forward for the ecosystem. I’ll be helping Carlos get the pipelines properly configured to ensure a smooth transition.
Though I’m stepping away from most development work, I won’t be disappearing entirely from the communities that have meant so much to me:
This transition isn’t just about career fatigue – it’s about financial reality. I’ve spent too many years working for free while struggling to pay my bills. The recent changes in the industry, particularly with AI transforming the web development landscape, have made things even more challenging. Getting traffic to websites now requires extensive social media work and marketing – all expected to be done without compensation.
My stint at webwork was good while it lasted, but the changing landscape has made it unsustainable. I’ve reached a point where I can’t continue doing free work when my family and I are struggling financially. It shouldn’t take breaking a limb to receive the donations needed to survive.
These 20+ years in open source have been the defining chapter of my professional life. I’ve watched communities grow, technologies evolve, and witnessed firsthand the incredible things that happen when passionate people work together. The relationships I’ve built, the problems we’ve solved together, and the software we’ve created have been deeply meaningful.
But I also have to be honest about where I stand today: I cannot compete in the current job market. The industry has changed, and despite my experience and passion, the opportunities just aren’t there for someone in my situation.
Making a career change after two decades is terrifying, but it’s also necessary. I need to find a path that can provide financial stability for my family while still allowing me to contribute meaningfully to the world.
If you’ve benefited from my work over the years and are in a position to help during this transition, I would be forever grateful for any support. Every contribution, no matter the size, helps ease this difficult period: https://gofund.me/a9c55d8f
To everyone who has collaborated with me, tested my packages, filed bug reports, offered encouragement, or simply used the software I’ve helped maintain – thank you. You’ve made these 20+ years worthwhile, and you’ve been part of something bigger than any individual contribution.
The open source world will continue to thrive because it’s built on the collective passion of thousands of people like Carlos, Rik, and countless others who are carrying the torch forward. While my active development days are ending, the impact of this community will continue long into the future.
With sincere gratitude and fond farewells,
Scarlett Moore
In my final week of GSoC with KDE's Krita this summer, I am excited to share this week's progress and reflect on my journey so far. From the initial setup to building the Selection Action Bar, this project has been a meaningful learning experience and a stepping stone toward connecting with Krita's community and open source development.
This week I finalized the Selection Action Bar with my mentor Emmet and made adjustments based on my merge request feedback.
Some key areas of feedback and fixes included:
These improvements taught me that writing good code is not just about features, but also about clarity, consistency, and collaboration.
Alongside updating my feature merge request, I also worked on documentation to explain how the Selection Action Bar works and how users can use it.
Looking back over the past 12 weeks, I realize how much this project has shaped both my technical and personal growth as a developer.
Technical Growth
When I started, navigating Krita's large C++/Qt codebase felt overwhelming. Through persistence, code reviews, and mentorship, I've grown confident in reading unfamiliar code, handling ambiguity, and contributing in a way that fits the standards of a large open source project. Following Krita's style guidelines showed me how important naming conventions and standardized code styling are for long-term maintainability.

Personal Growth
One of the most important lessons I learned is that open source development isn't about rushing to get the next feature in. It's about patience, clarity, and iteration. Code reviews taught me to embrace feedback, ask better questions, and view them as opportunities for growth rather than blockers.

Community Lessons
The most valuable part of this experience was connecting with the Krita and KDE community. I experienced first-hand how collaborative and thoughtful the process of open source development is. Every suggestion, from small style tweaks to broader design decisions, carried the goal of improving the project for everyone. That sense of shared ownership and responsibility is something I want to carry with me in all my future contributions.
These final weeks have been very rewarding. I have grown from starting out by simply reading Krita's large codebase to implementing a feature that enhances users' workflow.
While this marks the end of GSoC for me, it is not the end of my open source journey. My plan moving forward is to:
Finally, I would like to thank my mentor Emmet, the Krita Developers Dmitry, Halla, Tiar, Wolthera, everyone I interacted with in Krita Chat, and the Krita community for their guidance, patience, and encouragement throughout this project.
I also want to thank Google Summer of Code for making this journey possible and giving me the chance to grow as a developer while contributing to open source.
To anyone reading this, please feel free to reach out to me. I'm always open to suggestions and thoughts on how to improve as a developer and as a person.
Email: ross.erosales@gmail.com
Matrix: @rossr:matrix.org
When we create a Python virtual environment, the system automatically generates a complete directory structure to isolate project dependencies. As shown in the image, the venv1 virtual environment contains several core directories, each serving specific functions.
The virtual environment's root directory contains four main directories: bin, include, lib, and lib64, along with an important configuration file, pyvenv.cfg. This structure mimics the layout of system-level Python installations, ensuring environment integrity and independence.
The bin directory is the execution center of the virtual environment, containing all executable files and scripts. The most important among these are the various activation scripts, such as activate.csh, activate.fish, and activate.ps1, which correspond to different shell environments. When you execute source bin/activate, you're actually running these scripts to modify environment variables.
Additionally, this directory contains symbolic links to the Python interpreter, such as python, python3, and python3.12, all pointing to the same Python interpreter instance. The package management tools pip, pip3, and pip3.12 are also located here, ensuring that packages installed in the virtual environment don't affect the system-level Python environment.
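You can verify which interpreter and prefix are in effect from inside Python itself; a small standard-library-only check (nothing project-specific assumed) looks like this:

```python
# After activation, sys.prefix points at the venv directory while
# sys.base_prefix still points at the base Python installation.
import sys

def in_virtualenv() -> bool:
    return sys.prefix != sys.base_prefix

print("interpreter:", sys.executable)    # e.g. .../venv1/bin/python when the venv is active
print("prefix:     ", sys.prefix)
print("base prefix:", sys.base_prefix)
print("in venv:    ", in_virtualenv())
```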
The include directory primarily stores Python header files, particularly the C API header files in the python3.12 subdirectory. These files are crucial when compiling Python packages containing C extensions, such as numpy, scipy, and other scientific computing libraries. The virtual environment provides copies of these header files to ensure compilation process consistency.
The lib directory is the core of the virtual environment, containing the actual files of the Python standard library and third-party packages. The site-packages folder in the python3.12 subdirectory is where all packages installed via pip are stored. This directory's isolation ensures that dependencies between different projects don't conflict with each other.
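If you want to confirm where pip installs land for the interpreter you are running, the standard library can tell you; inside the venv this should report the lib/python3.12/site-packages path described above:

```python
# Report the site-packages directories for the current interpreter.
import site
import sysconfig

print(site.getsitepackages())         # list of site-packages directories for this interpreter
print(sysconfig.get_path("purelib"))  # the "purelib" install location used for pure-Python packages
```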
lib64 is typically a symbolic link pointing to lib. This design primarily supports the library file lookup mechanism on 64-bit systems. In some Linux distributions, the system searches both lib and lib64 directories, and the symbolic link ensures compatibility.
The pyvenv.cfg file is the configuration core of the virtual environment, recording the environment's basic information, including the Python interpreter path, version information, and whether system site-packages are included. This file determines the virtual environment's behavior.
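Because pyvenv.cfg is a flat "key = value" file rather than an INI file with sections, a few lines of parsing are enough to inspect it; the path below is the example environment used later in this post:

```python
# Read pyvenv.cfg and print the fields that control the venv's behavior.
from pathlib import Path

def read_pyvenv_cfg(venv_dir: str) -> dict[str, str]:
    cfg = {}
    for line in Path(venv_dir, "pyvenv.cfg").read_text().splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            cfg[key.strip()] = value.strip()
    return cfg

cfg = read_pyvenv_cfg("/home/zjh/test_venv/venv1")  # path from the post's example environment
print(cfg.get("home"), cfg.get("version"), cfg.get("include-system-site-packages"))
```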
According to the image, this is a Python 3.12.3 Linux environment, showing the Python interpreter's module search paths through sys.path.
First is the /usr/lib/python312.zip path, which represents Python's standard library compressed package. This is an optimization strategy where Python packages core standard library modules into a zip file to improve loading speed and save disk space. When importing standard library modules like os, sys, and json, the Python interpreter first searches in this compressed file.
The next entry, the /usr/lib/python3.12 directory, is the main installation location for Python's standard library. It contains all standard library modules written in pure Python, as well as some configuration files and auxiliary scripts. This directory's structure reflects Python's module organization approach, containing complete implementations of packages such as collections, concurrent, and email.
The /usr/lib/python3.12/lib-dynload directory specifically stores dynamically loaded extension modules, which are typically written in C or C++ and compiled into shared libraries. These extension modules provide Python with the ability to interact with the underlying system, including file system operations, network communication, mathematical calculations, and other performance-critical functions.
In package management, the /usr/local/lib/python3.12/dist-packages directory plays an important role. This is the storage location for system-level installed third-party packages, typically installed through system package managers or via pip with administrator privileges.
Finally, the /usr/lib/python3/dist-packages directory is another storage location for third-party packages, usually containing Python packages installed through Linux distribution package management systems. This design allows system package managers and Python package managers to coexist harmoniously, avoiding dependency conflicts.
This directory structure reflects several important principles of the Python ecosystem. First is modular and layered management, where different types of modules are clearly separated into different directories. Second is the priority mechanism, where Python searches these directories in the order given by sys.path, ensuring correct module loading behavior. Finally, there is package management flexibility, supporting multiple installation methods and management strategies.
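You can observe both the search order and where a given module actually resolves from directly in the interpreter; this small snippet uses only the standard library:

```python
# Print the search path in priority order, then ask importlib which
# entry a couple of standard-library modules resolve from.
import sys
import importlib.util

for entry in sys.path:
    print(entry)

for name in ("json", "collections"):
    spec = importlib.util.find_spec(name)
    print(name, "->", spec.origin)  # e.g. /usr/lib/python3.12/json/__init__.py on this system
```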
Comparing sys.path before and after virtual environment activation reveals the sophisticated design of the virtual environment isolation mechanism. When the virtual environment is not activated, sys.path follows the standard system-level path structure, with the Python interpreter searching for modules according to the established priority order. However, once the source bin/activate command is executed to activate the virtual environment, the system cleverly inserts virtual environment-specific paths at the beginning of the sys.path list.
The most significant change is the addition of /home/zjh/test_venv/venv1/lib/python3.12/site-packages at the first position of the search path. This seemingly simple adjustment is actually the core of the virtual environment isolation mechanism. Python's module search follows the "first found, first used" principle, so when the virtual environment's site-packages directory is at the front of the search path, any packages installed in the virtual environment will be loaded preferentially.
This path priority reordering creates an elegant hierarchical overlay system. If you install a specific version of a package in the virtual environment, such as Django 4.2, while the system-level environment has Django 3.2 installed, then with the virtual environment activated, the Python interpreter will prioritize using Django 4.2 from the virtual environment. This mechanism ensures dependency precision and predictability.
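The "first found, first used" behaviour is easy to demonstrate without installing anything: the self-contained snippet below creates two throwaway directories that each provide a module with the same name, and whichever sits earlier on sys.path wins, exactly as the venv's site-packages does.

```python
# Two directories each provide a module called "greet"; the one earlier
# on sys.path shadows the other, mirroring how a venv's site-packages
# shadows system-wide packages.
import sys
import tempfile
from pathlib import Path

base = Path(tempfile.mkdtemp())
for name, message in (("site_a", "hello from A"), ("site_b", "hello from B")):
    d = base / name
    d.mkdir()
    (d / "greet.py").write_text(f"MESSAGE = {message!r}\n")

sys.path.insert(0, str(base / "site_b"))  # site_b ends up first on sys.path
sys.path.insert(1, str(base / "site_a"))

import greet
print(greet.MESSAGE)  # "hello from B", because site_b precedes site_a on sys.path
```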
It's worth noting that the virtual environment doesn't completely isolate system-level Python paths but adopts a more pragmatic approach. System-level paths, such as /usr/lib/python312.zip and /usr/lib/python3.12, remain in the search path, but with reduced priority. This means projects in the virtual environment can still access the Python standard library and system-level installed packages, but will prioritize versions from the virtual environment.
This design philosophy reflects the inclusiveness and practicality of the Python ecosystem. The standard library, as Python's core component, should remain accessible in all environments, while third-party packages achieve project-level isolation through virtual environments. Developers don't need to worry about reinstalling standard library modules like os and sys in virtual environments, while being able to precisely control a project's third-party dependencies.
From a technical implementation perspective, this path management strategy brings another important advantage: highly efficient environment switching. Activating and deactivating virtual environments is essentially just a dynamic modification of sys.path, without requiring copying or moving large amounts of files. This allows developers to quickly switch between different project environments without significant performance overhead.
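This lightness also shows up when creating environments programmatically. The sketch below uses the standard venv module to build an environment and then asks that environment's interpreter for its own sys.path; the directory name is arbitrary, and the bin/python path assumes Linux as in the rest of this post (on Windows the interpreter lives under Scripts\).

```python
# Create a venv with the standard library and inspect its search path.
import subprocess
import venv
from pathlib import Path

target = Path("demo-venv")
venv.EnvBuilder(with_pip=False, clear=True).create(target)

# Run the venv's own interpreter and print its sys.path, one entry per line.
out = subprocess.run(
    [str(target / "bin" / "python"), "-c", "import sys; print('\\n'.join(sys.path))"],
    capture_output=True, text=True, check=True,
)
print(out.stdout)
```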
Based on the significant architectural changes in Python 3.8, virtual environment switching shows distinct watershed characteristics in technical implementation:
The PyConfig system introduced in Python 3.8 completely changed the interpreter initialization paradigm. Before 3.8, virtual environment switching relied on relatively simple but crude global variable setting methods, configured through functions like Py_SetProgramName and Py_SetPythonHome. While this approach was intuitive, it lacked fine-grained control capabilities and was prone to configuration conflicts and inconsistent states.
The post-3.8 PyConfig system provides a structured configuration management approach, allowing developers to precisely control every initialization parameter of the interpreter. The new system implements type-safe configuration setting through functions like PyConfig_SetBytesString, significantly reducing the possibility of configuration errors. However, this fine-grained control also brings a significant increase in complexity, requiring developers to understand and manage more configuration options.
Pre-3.8 versions mainly relied on environment variables and runtime Python code to manage module search paths. The advantage of this approach is high flexibility, allowing dynamic modification of sys.path through Python code execution. The disadvantage is the difficulty of controlling the timing of path settings, which easily leads to path priority confusion.
Post-3.8 versions allow precise setting of module search paths during the initialization phase: through mechanisms like config.module_search_paths_set and PyWideStringList_Append, they achieve stricter path control. While this approach improves the determinism of path management, it also increases implementation complexity, particularly in string encoding conversion and memory management.
Early versions' configurations were mainly managed through global variables and environment variables, with relatively weak configuration isolation between different virtual environments. Environment variable modifications could affect the entire process's behavior, easily causing unexpected side effects. The post-3.8 PyConfig system implements better configuration isolation, with each interpreter instance having independent configuration state. This design reduces mutual influence between different virtual environments, but also requires developers to more carefully manage configuration object lifecycles.