
Tuesday, 26 August 2025

Qt's model/view framework was one of the big additions to Qt 4 in 2005, replacing the previous item-based list, table, and tree widgets with a more general abstraction. QAbstractItemModel sits at the heart of this framework, and provides a virtual interface that allows implementers to make data available to the UI components. QAbstractItemModel is part of the Qt Core module, and it is also the interface through which Qt Quick's item views read and write data.
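To make that interface concrete, here is a minimal read-only model, sketched against the QAbstractListModel convenience subclass (an illustration, not code from the post):

```cpp
#include <QAbstractListModel>
#include <QStringList>
#include <QVariant>

// Minimal read-only model: QAbstractListModel is a convenience
// subclass of QAbstractItemModel for one-dimensional data.
class StringListModel : public QAbstractListModel
{
public:
    explicit StringListModel(const QStringList &items, QObject *parent = nullptr)
        : QAbstractListModel(parent), m_items(items) {}

    int rowCount(const QModelIndex &parent = QModelIndex()) const override
    {
        // A flat list has no children; only the root index has rows.
        return parent.isValid() ? 0 : m_items.count();
    }

    QVariant data(const QModelIndex &index, int role = Qt::DisplayRole) const override
    {
        if (!index.isValid() || role != Qt::DisplayRole)
            return QVariant();
        return m_items.at(index.row());
    }

private:
    QStringList m_items;
};
```

The same model instance can be handed to a QListView in widgets or to a ListView in Qt Quick, which is exactly the generality the framework was designed for.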

Monday, 25 August 2025

Today marks both a milestone and a turning point in my journey with open source software. I’m proud to announce the release of KDE Gear 25.08.0 as my final snap package release. You can find all the details about this exciting update at the official KDE announcement.

After much reflection and with a heavy heart, I’ve made the difficult decision to retire from most of my open source software work, including snap packaging. This wasn’t a choice I made lightly – it comes after months of rejections and silence in an industry I’ve loved and called home for over 20 years.

Passing the Torch

While I’m stepping back, I’m thrilled to share that the future of KDE snaps is in excellent hands. Carlos from the Neon team has been working tirelessly to set up snaps on the new infrastructure that KDE has made available. This means building snaps in KDE CI is now possible – a significant leap forward for the ecosystem. I’ll be helping Carlos get the pipelines properly configured to ensure a smooth transition.

Staying Connected (But Differently)

Though I’m stepping away from most development work, I won’t be disappearing entirely from the communities that have meant so much to me:

  • Kubuntu: I’ll remain available as a backup, though Rik is doing an absolutely fabulous job getting the latest and greatest KDE packages uploaded. The distribution is in capable hands.
  • Ubuntu Community Council: I’m continuing my involvement here because I’ve found myself genuinely enjoying the community side of things. There’s something deeply fulfilling about focusing on the human connections that make these projects possible.
  • Debian: I’ll likely be submitting for emeritus status, as I haven’t had the time to contribute meaningfully and want to be honest about my current capacity.

The Reality Behind the Decision

This transition isn’t just about career fatigue – it’s about financial reality. I’ve spent too many years working for free while struggling to pay my bills. The recent changes in the industry, particularly with AI transforming the web development landscape, have made things even more challenging. Getting traffic to websites now requires extensive social media work and marketing – all expected to be done without compensation.

My stint at web work was good while it lasted, but the changing landscape has made it unsustainable. I’ve reached a point where I can’t continue doing free work when my family and I are struggling financially. It shouldn’t take breaking a limb to receive the donations needed to survive.

A Career That Meant Everything

These 20+ years in open source have been the defining chapter of my professional life. I’ve watched communities grow, technologies evolve, and witnessed firsthand the incredible things that happen when passionate people work together. The relationships I’ve built, the problems we’ve solved together, and the software we’ve created have been deeply meaningful.

But I also have to be honest about where I stand today: I cannot compete in the current job market. The industry has changed, and despite my experience and passion, the opportunities just aren’t there for someone in my situation.

Looking Forward

Making a career change after two decades is terrifying, but it’s also necessary. I need to find a path that can provide financial stability for my family while still allowing me to contribute meaningfully to the world.

If you’ve benefited from my work over the years and are in a position to help during this transition, I would be forever grateful for any support. Every contribution, no matter the size, helps ease this difficult period: https://gofund.me/a9c55d8f

Thank You

To everyone who has collaborated with me, tested my packages, filed bug reports, offered encouragement, or simply used the software I’ve helped maintain – thank you. You’ve made these 20+ years worthwhile, and you’ve been part of something bigger than any individual contribution.

The open source world will continue to thrive because it’s built on the collective passion of thousands of people like Carlos, Rik, and countless others who are carrying the torch forward. While my active development days are ending, the impact of this community will continue long into the future.

With sincere gratitude and fond farewells,

Scarlett Moore

Intro

In my final week of GSoC with KDE's Krita this summer, I am excited to share this week's progress and reflect on my journey so far. From the initial setup to building the Selection Action Bar, this project has been a meaningful learning experience and a stepping stone toward connecting with Krita's community and open source development.

Final Report

Progress

This week I finalized the Selection Action Bar with my mentor Emmet and made adjustments based on my merge request feedback.

Some key areas of feedback and fixes included:

  • Localization of user-facing strings
  • Removing unused parameters
  • Refactoring naming conventions and standardizing styling

These improvements taught me that writing good code is not just about features, but also about clarity, consistency, and collaboration.

Alongside updating my feature merge request, I also worked on documentation explaining how the Selection Action Bar works and how to use it.

Reflection

Looking back over the past 12 weeks, I realize how much this project has shaped both my technical and personal growth as a developer.

Technical Growth

When I started, navigating Krita's large C++/Qt codebase felt overwhelming. Through persistence, code reviews, and mentorship, I've grown confident in reading unfamiliar code, handling ambiguity, and contributing in a way that fits the standards of a large open source project. Following Krita's style guidelines showed me how important naming conventions and standardized code styling are for long-term maintainability.

Personal Growth

One of the most important lessons I learned is that open source development isn't about rushing to get the next feature in. It's about patience, clarity, and iteration. Code reviews taught me to embrace feedback, ask better questions, and view reviews as opportunities for growth rather than blockers.

Community Lessons

The most valuable part of this experience was connecting with the Krita and KDE community. I experienced first-hand how collaborative and thoughtful the process of open source development is. Every suggestion, from small style tweaks to broader design decisions, carried the goal of improving the project for everyone. That sense of shared ownership and responsibility is something I want to carry with me in all my future contributions.

Conclusion

These final weeks have been very rewarding. I have grown from simply reading Krita's large codebase to implementing a feature that enhances users' workflows.

While this marks the end of GSoC for me, it is not the end of my open source journey. My plan moving forward is to:

  • Continue refining the Selection Action Bar based on user feedback
  • Add customization options to the Selection Action Bar
  • Stay involved with the Krita and KDE community through feature creation, bug fixes, feature proposals, and community participation

Finally, I would like to thank my mentor Emmet, the Krita developers Dmitry, Halla, Tiar, and Wolthera, everyone I interacted with in Krita Chat, and the Krita community for their guidance, patience, and encouragement throughout this project.

I also want to thank Google Summer of Code for making this journey possible and giving me the chance to grow as a developer while contributing to open source.

Contact

To anyone reading this, please feel free to reach out to me. I'm always open to suggestions and thoughts on how to improve as a developer and as a person.
Email: ross.erosales@gmail.com
Matrix: @rossr:matrix.org

Sunday, 24 August 2025


Implementation of Python virtual environment runtime switching

A long‑running backend that evaluates Python code must solve one problem well: switching the active interpreter or virtual environment at runtime without restarting the host process. A reliable solution depends on five pillars: unambiguous input semantics, reproducible version discovery, version‑aware initialization, disciplined management of process environment and sys.path, and transactional switching that can roll back safely on failure. 


The switching workflow begins with a single resolver that accepts either an interpreter executable path or a virtual environment directory. If the input is a file whose basename looks like a Python executable, the resolver treats it as such, and when the path sits under bin or Scripts it walks one directory up to infer the venv root. If the input is a directory, the resolver confirms a venv by checking for pyvenv.cfg or conda‑meta. Inputs that do not meet either criterion are interpreted as requests to use the system Python. One subtle but important detail is to avoid canonicalizing paths during this phase. Symlinked venvs frequently point into system trees; resolving them prematurely would collapse a virtual environment back into “system Python,” undermining the caller’s intent.
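A sketch of such a resolver in C++ (the helper and type names are illustrative, not the actual backend code):

```cpp
#include <QDir>
#include <QFileInfo>
#include <QString>

enum class TargetKind { Venv, SystemPython };

struct ResolvedTarget {
    TargetKind kind;
    QString venvRoot;     // empty for system Python
    QString interpreter;  // may be empty until probed
};

// Hypothetical resolver mirroring the rules described above.
// Note: no canonicalFilePath() here; resolving symlinks too early
// could collapse a symlinked venv back into the system tree.
ResolvedTarget resolveTarget(const QString &input)
{
    QFileInfo info(input);
    if (info.isFile()) {
        if (info.fileName().startsWith(QLatin1String("python"))) {
            QDir dir = info.dir();
            const QString dirName = dir.dirName();
            if (dirName == QLatin1String("bin") || dirName == QLatin1String("Scripts")) {
                dir.cdUp(); // infer the venv root one level up
                return {TargetKind::Venv, dir.absolutePath(), info.absoluteFilePath()};
            }
            return {TargetKind::SystemPython, QString(), info.absoluteFilePath()};
        }
    } else if (info.isDir()) {
        QDir dir(input);
        if (dir.exists(QStringLiteral("pyvenv.cfg")) || dir.exists(QStringLiteral("conda-meta")))
            return {TargetKind::Venv, dir.absolutePath(), QString()};
    }
    // Anything else is treated as a request for the system Python.
    return {TargetKind::SystemPython, QString(), QString()};
}
```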

Pic 1. Project structure created by venv/virtualenv
Pic 2. Project structure in {virtual_env_path}/bin
Pic 3. Project structure created by conda


Once a target has been identified, the backend determines the interpreter’s major.minor version and applies a session‑level version policy. Virtual environments often publish their version and preferred executable in pyvenv.cfg; the backend reads version, executable and base‑executable if present, falling back to executing the interpreter with a small snippet to print its major and minor components when necessary. For system Python, a small set of common candidates is probed until one responds. At first login, the backend records the initialized major.minor pair and considers subsequent switches compatible only if they match that normalized value. This deliberately conservative choice prevents ABI mismatches inside a single process.

Pic 5.  Content in pyvenv.cfg
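Version discovery from pyvenv.cfg, with an interpreter probe as fallback, might look roughly like this (a sketch; the three-second timeout is an arbitrary choice):

```cpp
#include <QFile>
#include <QProcess>
#include <QTextStream>

// Read "version = 3.12.3" (or virtualenv's "version_info") from
// pyvenv.cfg, returning the normalized major.minor pair, e.g. "3.12".
QString versionFromPyvenvCfg(const QString &venvRoot)
{
    QFile cfg(venvRoot + QStringLiteral("/pyvenv.cfg"));
    if (!cfg.open(QIODevice::ReadOnly | QIODevice::Text))
        return QString();
    QTextStream in(&cfg);
    while (!in.atEnd()) {
        const QString line = in.readLine();
        const int eq = line.indexOf(QLatin1Char('='));
        if (eq < 0)
            continue;
        const QString key = line.left(eq).trimmed();
        if (key == QLatin1String("version") || key == QLatin1String("version_info")) {
            const QStringList parts = line.mid(eq + 1).trimmed().split(QLatin1Char('.'));
            if (parts.size() >= 2)
                return parts.at(0) + QLatin1Char('.') + parts.at(1);
        }
    }
    return QString();
}

// Fallback: ask the interpreter itself.
QString versionFromInterpreter(const QString &python)
{
    QProcess proc;
    proc.start(python, {QStringLiteral("-c"),
        QStringLiteral("import sys;print('%d.%d' % sys.version_info[:2])")});
    if (!proc.waitForFinished(3000))
        return QString();
    return QString::fromUtf8(proc.readAllStandardOutput()).trimmed();
}
```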

Initialization deliberately follows two distinct paths because Python’s embedding APIs changed significantly in 3.8. For older runtimes, the legacy sequence sets the program name and Python home using Py_SetProgramName and Py_SetPythonHome and then calls Py_Initialize. To keep the embedded interpreter’s view of the world coherent, the backend then runs a short configuration script that clears and rebuilds sys.path, sets sys.prefix and sys.exec_prefix, and establishes VIRTUAL_ENV in os.environ. This legacy path also relies on process‑level environment manipulation, which is described below. For modern runtimes, the backend uses the PyConfig API. It constructs an isolated configuration, sets program_name, home, executable and base_executable explicitly, marks module_search_paths_set, and appends each desired search path through PyWideStringList_Append before calling Py_InitializeFromConfig. This approach minimizes dependence on ambient process environment and makes the search space explicit and predictable. It is worth emphasizing that even when switching to the system interpreter on Py≥3.8, module search paths should be set explicitly rather than relying on implicit heuristics.
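A condensed sketch of the modern path (illustrative; error handling and the pyvenv.cfg-derived values are simplified):

```cpp
#include <Python.h>
#include <string>
#include <vector>

// Python >= 3.8: isolated, explicit configuration via PyConfig.
bool initializeModern(const std::wstring &interpreter,
                      const std::wstring &baseExecutable, // from pyvenv.cfg
                      const std::wstring &home,
                      const std::vector<std::wstring> &searchPaths)
{
    PyConfig config;
    PyConfig_InitIsolatedConfig(&config);

    PyConfig_SetString(&config, &config.program_name, interpreter.c_str());
    PyConfig_SetString(&config, &config.executable, interpreter.c_str());
    PyConfig_SetString(&config, &config.base_executable, baseExecutable.c_str());
    PyConfig_SetString(&config, &config.home, home.c_str());

    // Make the module search space explicit instead of relying on
    // implicit platform heuristics.
    config.module_search_paths_set = 1;
    for (const std::wstring &path : searchPaths)
        PyWideStringList_Append(&config.module_search_paths, path.c_str());

    const PyStatus status = Py_InitializeFromConfig(&config);
    PyConfig_Clear(&config);
    return !PyStatus_Exception(status);
}
```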


The legacy initialization path leans on controlled modification of the host process environment. Before entering a venv, the backend saves the current PATH and PYTHONHOME, prepends the venv’s bin or Scripts directory to PATH, unsets PYTHONHOME and clears PYTHONPATH, and sets VIRTUAL_ENV. On restore, PATH and PYTHONHOME are put back, VIRTUAL_ENV and PYTHONPATH are cleared, and a guard bit records that the environment is no longer modified. A frequent source of instability in ad‑hoc implementations is PATH inflation during rapid switching. The fix is straightforward: always rebuild PATH from the original value captured before the first switch rather than stacking new prefixes on top of already mutated values.
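The baseline-rebuild idea in code (a sketch using Qt's environment helpers):

```cpp
#include <QByteArray>
#include <QtGlobal>

// Capture PATH and PYTHONHOME once, before the first switch, and
// always rebuild from that baseline to avoid PATH inflation.
static QByteArray s_originalPath;
static QByteArray s_originalPythonHome;
static bool s_hadPythonHome = false;
static bool s_envModified = false;

void enterVenvEnvironment(const QByteArray &venvBinDir, const QByteArray &venvRoot)
{
    if (!s_envModified) {
        s_originalPath = qgetenv("PATH");
        s_hadPythonHome = qEnvironmentVariableIsSet("PYTHONHOME");
        s_originalPythonHome = qgetenv("PYTHONHOME");
        s_envModified = true;
    }
    // Prepend to the *original* PATH, never to an already-mutated one.
#ifdef Q_OS_WIN
    qputenv("PATH", venvBinDir + ';' + s_originalPath);
#else
    qputenv("PATH", venvBinDir + ':' + s_originalPath);
#endif
    qputenv("VIRTUAL_ENV", venvRoot);
    qunsetenv("PYTHONHOME");
    qunsetenv("PYTHONPATH");
}

void restoreSystemEnvironment()
{
    if (!s_envModified)
        return;
    qputenv("PATH", s_originalPath);
    if (s_hadPythonHome)
        qputenv("PYTHONHOME", s_originalPythonHome);
    qunsetenv("VIRTUAL_ENV");
    qunsetenv("PYTHONPATH");
    s_envModified = false; // guard bit: environment no longer modified
}
```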


Search path construction is handled in two places. On the C++ side, we can expand the venv’s library layout into a concrete list of directories—lib/pythonX.Y/site‑packages, lib/pythonX.Y, and lib64 variants—and, if desired, append a fallback set of system paths. On the Python side, a short configuration fragment clears sys.path and appends the new list in order, then sets sys.prefix and sys.exec_prefix to the venv root and publishes VIRTUAL_ENV in the environment. Projects that require strict isolation can omit the system fallback entirely or tie the decision to pyvenv.cfg’s include‑system‑site‑packages.
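The C++ side of that expansion might look like this (a sketch; the lib64 variant and the fallback list would be adjusted per platform):

```cpp
#include <QString>
#include <QStringList>

// Expand a venv's library layout into a concrete search path list.
// "majorMinor" (e.g. "3.12") comes from the version discovery step.
QStringList buildSearchPaths(const QString &venvRoot, const QString &majorMinor,
                             bool includeSystemFallback)
{
    QStringList paths;
    const QString libDir = venvRoot + QStringLiteral("/lib/python") + majorMinor;
    paths << libDir + QStringLiteral("/site-packages")
          << libDir
          << venvRoot + QStringLiteral("/lib64/python") + majorMinor
                      + QStringLiteral("/site-packages");
    if (includeSystemFallback) {
        // Optionally mirror include-system-site-packages from pyvenv.cfg.
        paths << QStringLiteral("/usr/lib/python") + majorMinor
              << QStringLiteral("/usr/lib/python") + majorMinor
                     + QStringLiteral("/lib-dynload");
    }
    return paths;
}
```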


Switching itself is transactional. Before attempting a change, the backend captures a compact description of the current state—the venv directory and detected version. It then finalizes the current interpreter, applies the new target and logs in. If initialization fails for any reason, the backend finalizes again and restores the previous state, re‑logging in and restoring the prior version record on success. This simple but strict “switch‑or‑rollback” contract prevents half‑initialized sessions and ensures the host remains usable regardless of individual switch failures.
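In pseudo-C++, the contract reads as follows; every function name here is an illustrative stand-in for the backend's real helpers:

```cpp
#include <QString>

// Illustrative stand-ins (declarations only; the bodies live
// elsewhere in this sketch's imaginary backend).
QString currentVenvDir();
QString currentVersion();
void    finalizeInterpreter();
bool    applyTarget(const QString &target);
bool    login();
void    setVersionRecord(const QString &version);

// The "switch-or-rollback" contract described above.
bool switchEnvironment(const QString &target)
{
    // Capture a compact description of the current state first.
    const QString previousVenv = currentVenvDir();
    const QString previousVersion = currentVersion();

    finalizeInterpreter();
    if (applyTarget(target) && login())
        return true;  // the new environment is live

    // Initialization failed: finalize again and restore the old state.
    finalizeInterpreter();
    applyTarget(previousVenv);
    if (login())
        setVersionRecord(previousVersion);
    return false;
}
```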


Operational visibility matters both for diagnostics and for UI integration. The backend publishes getters for the current venv directory, the detected Python version, and the chosen interpreter path. It can also discover virtual environments by scanning starting directories for pyvenv.cfg and recognizable layout patterns, returning a list of environment paths with associated versions. For consumption by other components, structured formats such as JSON simplify parsing and future evolution; even when initial implementations return human‑readable strings, migrating to a structured schema pays off quickly.
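A structured status payload for those getters could be as simple as the following (a hypothetical schema, using the venv1 environment from the walkthrough below):

```json
{
  "venv": "/home/zjh/test_venv/venv1",
  "interpreter": "/home/zjh/test_venv/venv1/bin/python3",
  "version": "3.12"
}
```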


Several pitfalls recur in real deployments. Symlinked venvs must be treated carefully to avoid collapsing into system paths during resolution. PATH must be rebuilt from an original baseline to avoid unbounded growth during rapid switching. On Py≥3.8, the system interpreter should be initialized with explicit module search paths rather than relying on implicit platform logic. On Windows, hard‑coded “C:/Python” roots are fragile; build paths from CMake‑injected PYTHON_STDLIB/PYTHON_SITELIB or query sysconfig from a known interpreter. Finally, enforcing a stable major.minor within a process, while conservative, prevents obscure ABI issues that are otherwise difficult to reproduce.


A typical backend sequence for switching to a new venv reads cleanly: accept a target path, resolve it to either a venv or the system interpreter, finalize the current interpreter, set the new Python home and program name or PyConfig fields as appropriate, initialize, publish paths, and report success. If any step fails, finalize immediately and restore the previous environment. Switching to the system interpreter follows the same template, with the additional recommendation to populate module_search_paths explicitly for Py≥3.8. Querying the active environment simply returns the cached directory, version, and executable path.


A robust runtime venv switcher is primarily a matter of careful engineering rather than novel algorithms. By unifying input semantics, discovering versions reliably, choosing the correct embedding API for the runtime, treating the host environment and sys.path as controlled resources, and insisting on transactional switching with rollback, the backend achieves predictable, production‑grade behavior without sacrificing flexibility.


Implementation of Python interpreter hot switching in Cantor backend architecture

In Cantor’s backend architecture, the Python interpreter is embedded in a long‑running service process, and the frontend communicates with it via a lightweight protocol over standard input and output. The essence of runtime virtual‑environment switching is not to replace this service process but to terminate the current interpreter and reinitialize a new interpreter context within the same process, thereby avoiding any rebuild of the frontend‑backend communication channel. This approach requires a stable message protocol, controllable interpreter lifecycle management, consistent cross‑platform path and environment injection, and compatibility constraints combined with transactional rollback at the version level to ensure safety and observability during switching.

The message protocol adopts a framed “command–response” model with explicit separators and covers environment switching, environment query, and environment discovery. When a switch is initiated, the frontend issues the switching command and immediately follows with an environment‑information query to validate the state and synchronize the UI. Upon receiving the command, the service process first resolves the target environment, accepting either a virtual‑environment root directory or an interpreter executable, normalizing both into an environment root and interpreter path, while avoiding misclassification of system directories as virtual environments. Environment detection adheres to cross‑platform structural conventions: pyvenv.cfg and bin/python[3] on Unix‑like systems, Scripts/python.exe and conda‑meta on Windows.
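The command names and separators below are hypothetical, since the post does not spell out the wire format, but a framed switch-then-query exchange might look like:

```
>>>SWITCH_ENV<<< /home/zjh/test_venv/venv1
>>>OK<<< root=/home/zjh/test_venv/venv1
>>>GET_ENV<<<
>>>ENV<<< root=/home/zjh/test_venv/venv1 python=bin/python3 version=3.12
```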

The interpreter “hot‑switch” follows an explicit lifecycle sequence: finalize the current interpreter, then initialize a new one. For Python 3.8 and later, the PyConfig isolated‑initialization path is used with explicit settings for the executable, base_executable, home, and module_search_paths to minimize external interference; for earlier versions, traditional APIs are used in conjunction with environment variable and sys.path injection. To ensure semantic equivalence with terminal‑based environment activation, sys.prefix and sys.exec_prefix are rebuilt, module search paths are reconstructed, and key variables such as VIRTUAL_ENV, PATH, PYTHONHOME, and PYTHONPATH are injected when entering the new environment and cleaned when reverting to the system environment.

The compatibility policy enforces equality on the major.minor version. After the first successful initialization, the initialized interpreter version is recorded; subsequent switches are permitted only to environments with the same major.minor, mitigating uncertainty introduced by cross‑version ABI or interpreter‑state differences. The switching operation is transactional: prior to finalization, the current environment and version are cached; if initializing the new environment fails, the system automatically rolls back to the previous environment and restores version information, ensuring the server remains available under exceptional conditions. Observability is provided by returning key details—environment root, interpreter path, and version—through the query command, enabling UI presentation and traceability at interpreter granularity; diagnostic outputs are produced on critical paths such as version mismatch, initialization failure, and environment restoration to facilitate investigation of cross‑platform and resolution issues.

The Settings page’s interpreter selector uses a “lazy‑load plus runtime cache” strategy. On first entry, it recursively scans the user directory and conventional locations, deduplicating and classifying environments based on structural markers and version probing; immediately after rendering, it asynchronously requests the backend’s current environment, and if no response arrives within a bounded timeframe, it falls back to locally detecting the active interpreter to ensure sensible defaults in both the drop‑down and input field. To avoid UI jitter, switching is triggered by an explicit confirm/apply action; once applied, an environment‑change signal is emitted, the session layer issues a combined “switch plus query” command to complete the closed loop, and the results are fed back to the UI. Both success and failure are reported in a uniform response format; on failure, the Settings page raises a one‑time warning for the dialog session and automatically realigns to the last known‑good environment to preserve a stable user experience.

In typical usage, providing an absolute interpreter path is recommended for its determinism and cross‑platform clarity; supplying a virtual‑environment root is also supported, and the system will resolve the corresponding interpreter automatically. Returning to the system interpreter can be achieved via an empty path or a dedicated “system interpreter” option in the UI; the backend will clear injected variables and restore system path semantics. When switching across minor versions is required, a more robust practice is to manage backend instances at the major.minor granularity—or to separate them explicitly in the UI—to reduce the frequency of rollbacks and perceived interruptions.

The end‑to‑end interaction sequence and the Settings page “discover–compare–align–apply” workflow are illustrated by the two diagrams below. The former depicts message exchange and lifecycle management across the Settings page, session layer, service process, and embedded interpreter; the latter details environment enumeration, validation, backend alignment, and user confirmation. Together they constitute an engineering‑grade runtime virtual‑environment switching loop that balances stability, cross‑platform consistency, and observability, meeting both interaction and maintainability requirements.


Pic 6. End-to-end timing of runtime switching


Pic 7. The "Discover-Compare-Align-Apply" workflow on the Settings page



How to switch Python virtual environments in Cantor

1. When you open Cantor, if you do not select a virtual environment in the General Tab of Configure Cantor, the system Python is used by default. You can inspect the search paths of the current Python interpreter by entering "sys.path"

2. Open the General Tab of Configure Cantor. You can use the top two options to import a virtual environment either by selecting its folder (by default a 5-level recursive search is performed) or by selecting the Python interpreter manually.

3. Select the virtual environment you want to switch to and click "Apply" to switch to the new environment


4. Enter "sys.path" again to verify


5. If you select a virtual environment with an incompatible version, the system will report an error
                                          

6. If the environment switch fails, the program will fall back to the last successfully switched environment, which is "venv1" in this test



 

Project Structure of Virtual Environments Created with virtualenv or venv

When we create a Python virtual environment, the system automatically generates a complete directory structure to isolate project dependencies. As shown in the image, the venv1 virtual environment contains several core directories, each serving specific functions.

Core Directory Overview

The virtual environment's root directory contains four main directories: bin, include, lib, and lib64, along with an important configuration file pyvenv.cfg. This structure design mimics the layout of system-level Python installations, ensuring environment integrity and independence.

bin Directory: Executable File Hub

The bin directory is the execution center of the virtual environment, containing all executable files and scripts. The most important among these are the various activation scripts, such as activate.csh, activate.fish, and activate.ps1, which correspond to different shell environments. When you execute source bin/activate (or the variant for your shell), you're actually running one of these scripts to modify environment variables.

Additionally, this directory contains symbolic links to the Python interpreter, such as python, python3, and python3.12, all pointing to the same Python interpreter instance. Package management tools pip, pip3, and pip3.12 are also located here, ensuring that packages installed in the virtual environment don't affect the system-level Python environment.

include Directory: Header File Repository

The include directory primarily stores Python header files, particularly the C API header files in the python3.12 subdirectory. These files are crucial when compiling Python packages containing C extensions, such as numpy, scipy, and other scientific computing libraries. The virtual environment provides copies of these header files to ensure compilation process consistency.

lib Directory: Core Library Repository

The lib directory is the core of the virtual environment, containing the actual files of Python standard library and third-party packages. The site-packages folder in the python3.12 subdirectory is where all packages installed via pip are stored. This directory's isolation ensures that dependencies between different projects don't conflict with each other.

lib64 Directory: Architecture Compatibility Support

lib64 is typically a symbolic link pointing to lib. This design is primarily to support the library file lookup mechanism for 64-bit systems. In some Linux distributions, the system searches both lib and lib64 directories, and the symbolic link ensures compatibility.

pyvenv.cfg: Environment Configuration Core

The pyvenv.cfg file is the configuration core of the virtual environment, recording the environment's basic information, including the Python interpreter path, version information, and whether system site-packages are included. This file determines the virtual environment's behavior mode.
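For reference, a pyvenv.cfg for the venv1 environment discussed here would contain entries along these lines (values illustrative; the executable and command keys appear in newer Python versions):

```ini
home = /usr/bin
include-system-site-packages = false
version = 3.12.3
executable = /usr/bin/python3.12
command = /usr/bin/python3 -m venv /home/zjh/test_venv/venv1
```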

Python Interpreter's System Environment

According to the image, this is a Python 3.12.3 Linux environment, showing the Python interpreter's module search paths through sys.path.

First is the /usr/lib/python312.zip path, which represents Python's standard library compressed package. This is an optimization strategy where Python packages core standard library modules into a zip file to improve loading speed and save disk space. When importing standard library modules like os, sys, and json, the Python interpreter first searches in this compressed file.

The next /usr/lib/python3.12 directory is the main installation location for Python's standard library. This contains all standard library modules written in pure Python, as well as some configuration files and auxiliary scripts. This directory's structure reflects Python's module organization approach, containing complete implementations of packages such as collections, concurrent, and email.

The /usr/lib/python3.12/lib-dynload directory specifically stores dynamically loaded extension modules, which are typically modules written in C or C++ and compiled into shared libraries. These extension modules provide Python with the ability to interact with the underlying system, including file system operations, network communication, mathematical calculations, and other performance-critical functions.

In package management, the /usr/local/lib/python3.12/dist-packages directory plays an important role. This is the storage location for system-level installed third-party packages, typically installed through system package managers or pip packages installed with administrator privileges.

Finally, the /usr/lib/python3/dist-packages directory is another storage location for third-party packages, usually containing Python packages installed through Linux distribution package management systems. This design allows system package managers and Python package managers to coexist harmoniously, avoiding dependency conflicts.

This directory structure design reflects several important principles of the Python ecosystem. First is modular and layered management, where different types of modules are clearly separated into different directories. Second is the priority mechanism, where Python searches these directories in the order of sys.path, ensuring correct module loading behavior. Finally is package management flexibility, supporting multiple installation methods and management strategies.

System Environment Changes After Switching to Virtual Environment


When we activate a virtual environment, the Python interpreter's module search paths undergo fundamental changes. The comparison in the image clearly shows the differences in sys.path before and after virtual environment activation, revealing the sophisticated design of the virtual environment isolation mechanism.

When the virtual environment is not activated, sys.path follows the standard system-level path structure, with the Python interpreter searching for modules according to established priority order. However, once the source bin/activate command is executed to activate the virtual environment, the system cleverly inserts virtual environment-specific paths at the beginning of the sys.path list.

The most significant change is the addition of /home/zjh/test_venv/venv1/lib/python3.12/site-packages to the first position of the search path. This seemingly simple adjustment is actually the core of the virtual environment isolation mechanism. Python's module search follows the "first found, first used" principle, so when the virtual environment's site-packages directory is at the front of the search path, any packages installed in the virtual environment will be loaded preferentially.
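Putting the paths discussed in this article side by side makes the change concrete:

```
# sys.path before activation (system Python 3.12):
/usr/lib/python312.zip
/usr/lib/python3.12
/usr/lib/python3.12/lib-dynload
/usr/local/lib/python3.12/dist-packages
/usr/lib/python3/dist-packages

# sys.path after `source bin/activate`:
/home/zjh/test_venv/venv1/lib/python3.12/site-packages   # searched first
/usr/lib/python312.zip
/usr/lib/python3.12
/usr/lib/python3.12/lib-dynload
# (system dist-packages reappear only if include-system-site-packages is true)
```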

This path priority reordering creates an elegant hierarchical overlay system. If you install a specific version of a package in the virtual environment, such as Django 4.2, while the system-level environment has Django 3.2 installed, then with the virtual environment activated, the Python interpreter will prioritize using Django 4.2 from the virtual environment. This mechanism ensures dependency precision and predictability.

It's worth noting that the virtual environment doesn't completely isolate system-level Python paths but adopts a more pragmatic approach. System-level paths, such as /usr/lib/python312.zip and /usr/lib/python3.12, remain in the search path, but with reduced priority. This means projects in the virtual environment can still access the Python standard library and system-level installed packages, but will prioritize versions from the virtual environment.

This design philosophy reflects the inclusiveness and practicality of the Python ecosystem. The standard library, as Python's core component, should remain accessible in all environments, while third-party packages achieve project-level isolation through virtual environments. Developers don't need to worry about reinstalling standard library modules like os and sys in virtual environments, while still being able to precisely control a project's third-party dependencies.

From a technical implementation perspective, this path management strategy brings another important advantage: high efficiency of environment switching. Activating and deactivating virtual environments is essentially just dynamic modification of sys.path, without requiring copying or moving large amounts of files. This allows developers to quickly switch between different project environments without significant performance overhead.

Runtime Virtual Environment Switching Comparison for Different Python Versions

Based on the significant architectural changes in Python 3.8, virtual environment switching shows distinct watershed characteristics in technical implementation:

Fundamental Restructuring of Initialization API

The PyConfig system introduced in Python 3.8 completely changed the interpreter initialization paradigm. Before 3.8, virtual environment switching relied on relatively simple but crude global variable setting methods, configured through functions like Py_SetProgramName and Py_SetPythonHome. While this approach was intuitive, it lacked fine-grained control capabilities and was prone to configuration conflicts and inconsistent states.
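The legacy sequence, condensed (a sketch; these setters are deprecated in recent Python releases but were the standard mechanism before 3.8):

```cpp
#include <Python.h>

// Pre-3.8 initialization: coarse global setters, then cleanup from
// Python code because the legacy API offers no fine-grained control.
void initializeLegacy(wchar_t *programName, wchar_t *pythonHome)
{
    Py_SetProgramName(programName);  // e.g. L"python3"
    Py_SetPythonHome(pythonHome);    // venv root, or nullptr for system Python
    Py_Initialize();

    // Rebuild the interpreter's view of the world after the fact;
    // the path below is illustrative.
    PyRun_SimpleString(
        "import sys, os\n"
        "sys.prefix = sys.exec_prefix = '/home/zjh/test_venv/venv1'\n"
        "os.environ['VIRTUAL_ENV'] = sys.prefix\n");
}
```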

The post-3.8 PyConfig system provides a structured configuration management approach, allowing developers to precisely control every initialization parameter of the interpreter. The new system implements type-safe configuration setting through functions like PyConfig_SetBytesString, significantly reducing the possibility of configuration errors. However, this fine-grained control also brings significant complexity increases, requiring developers to understand and manage more configuration options.

Evolution of Path Management Mechanisms

Pre-3.8 versions mainly relied on environment variables and runtime Python code to manage module search paths. The advantage of this approach is high flexibility, allowing dynamic modification of sys.path through Python code execution. The disadvantage is the difficulty in controlling the timing of path settings, easily leading to path priority confusion issues.

Post-3.8 versions allow precise setting of module search paths during the initialization phase, through mechanisms like config.module_search_paths_set and PyWideStringList_Append, achieving more strict path control. While this approach improves the determinism of path management, it also increases implementation complexity, particularly in string encoding conversion and memory management.

Enhanced Configuration Isolation

Early versions' configurations were mainly managed through global variables and environment variables, with relatively weak configuration isolation between different virtual environments. Environment variable modifications could affect the entire process's behavior, easily causing unexpected side effects. The post-3.8 PyConfig system implements better configuration isolation, with each interpreter instance having independent configuration state. This design reduces mutual influence between different virtual environments, but also requires developers to more carefully manage configuration object lifecycles.

Did you know there are three different writing systems for English? That’s right - you can write English using letters that aren’t Latin characters. If you’re using KDE Plasma version 6.4 or earlier, go to the “Region & Language” system settings page and search “en_US.” You’ll see two locales in the results, and they both say they’re “en_US.” The latter en_US doesn’t seem to be English at all. It turns out that the “weird” English is the “America English Deseret” locale, and the “normal” en_US is called “America English Latin”.

Saturday, 23 August 2025

Welcome to a new issue of This Week in Plasma!

Every week we cover the highlights of what’s happening in the world of KDE Plasma and its associated apps like Discover, System Monitor, and more.


This week Plasma gained an initial system setup wizard! For a few years now, we’ve had Welcome Center, which runs after you log in for the first time. But what creates the user account you log into?

If you’re the person who installed the OS, the installer did it after you told it what username and password you wanted. But what if someone else ran the installer? Say, the company you bought the computer from. Or the last person who wiped the machine before giving or selling it to you. In this case, no user accounts have been set up, so something needs to do that.

KDE Initial System Setup now takes care of it! Kristen McWilliam has brought KISS from an internal skunkworks project to a production-ready part of the OEM setup story. KISS lands in Plasma 6.5.0.

KDE Initial System Setup wizard
KDE Initial System Setup wizard — third page

Notable UI Improvements

Plasma 6.5.0

Plasma panels now become scrollable when they contain far too much to see at once (usually due to opening lots of apps or entering Touch Mode). This scrollability doesn’t emerge immediately; first the Task Manager widget’s icons compress a little bit, but after a certain point they stop compressing and instead the panel becomes scrollable. (Niccolò Venerandi, link)

Improved the tone mapping curve used by KWin when displaying HDR content. Hopefully it should look even better now! (Xaver Hugl, link)

By default, System Settings no longer shows you the Drawing Tablet page if you don’t have any drawing tablets connected. Of course, us being KDE, there’s an option to show such filtered-out pages anyway, if you need them for troubleshooting purposes, for example. (Kai Uwe Broulik, link)

The output stream for volume feedback sounds no longer shows up briefly on the Audio Volume widget and System Settings page. (Ismael Asensio, link)

Improved the accessibility of System Settings’ Shortcuts page. (Christoph Wolk, link)

Added more relevant information about your game controllers to System Settings’ Game Controller page. (Jeremy Whiting, link)

The notification saying “you missed some notifications” after you leave Do Not Disturb mode no longer becomes visible in the history view after it expires, because if you can see it there, you’re already in the place it wanted to tell you about. (Nate Graham, link)

Notable Bug Fixes

Plasma 6.4.5

Fixed several related issues with Plasma panel customization: one issue that prevented the Escape key from closing the configuration dialog, and another that caused widgets to get stuck if you pressed Escape while dragging them. (Niccolò Venerandi, link 1 and link 2)

Cloning a panel now also clones the settings of its System Tray widget, if it has one. (Niccolò Venerandi, link)

Fixed a layout issue with the Audio Volume widget that could cause an app’s recording stream to be visually indented more than it should have been. (Christoph Wolk, link)

Plasma 6.5.0

Applied several more bug fixes for desktop icons to make sure they don’t shift around so much. One of them fixes a related issue whereby the icons would reset their positions after you switched between the Folder layout and the Desktop layout, and then back again. (Akseli Lahtinen, link)

Made a few reliability fixes for the built-in RDP server to make sure that on every distro, it can be manually enabled, and also doesn’t auto-start unless specifically told to. (Arnav Rawat and Nate Graham, link 1 and link 2)

Fixed an issue causing a standalone Audio Volume widget on a panel to sometimes take up too much space. (Niccolò Venerandi, link)


Notable in Performance & Technical

Plasma 6.4.5

Worked around a nasty issue in the AMD GPU graphics drivers. (Xaver Hugl, link)

Fixed a case where Plasma could hang after copying files from a slow (or later inaccessible) network location and opening the clipboard popup. (Fushan Wen, link)

Plasma 6.5.0

Switched to a more lightweight timer for KWin’s render loop, slightly reducing resource usage everywhere. (Aleix Pol Gonzalez, link)

Implemented support for version 2 of the global shortcuts portal. (David Redondo, link)

Frameworks 6.18

Further improved the speed of thumbnail generation throughout all KDE software. (David Edmundson, link)

How You Can Help

KDE has become important in the world, and your time and contributions have helped us get there. As we grow, we need your support to keep KDE sustainable.

You can help KDE by becoming an active community member and getting involved somehow. Each contributor makes a huge difference in KDE — you are not a number or a cog in a machine! You don’t have to be a programmer, either; many other opportunities exist!

You can also help us by making a donation! A monetary contribution of any size will help us cover operational costs, salaries, travel expenses for contributors, and in general just keep KDE bringing Free Software to the world.

To get a new Plasma feature or a bugfix mentioned here, feel free to push a commit to the relevant merge request on invent.kde.org.

Friday, 22 August 2025

Let’s go for my web review for the week 2025-34.


Google is killing the open web

Tags: tech, google, web, xml, xslt, html, history, vendor-lockin

Or why the XML roots of the web are important to keep in shape. I’m not necessarily in love with how verbose XML is, but it’s been a great enabler for interoperability. And it’s precisely that interoperability which pushed Google to try to get rid of it as much as possible.

https://wok.oblomov.eu/tecnologia/google-killing-open-web/


Is Germany on the Brink of Banning Ad Blockers?

Tags: tech, advertisement, attention-economy, law

This latest ruling from the German supreme court is rather worrying…

https://blog.mozilla.org/netpolicy/2025/08/14/is-germany-on-the-brink-of-banning-ad-blockers-user-freedom-privacy-and-security-is-at-risk/


The Lawnmower IRC Server

Tags: tech, hardware, irc

OK, this is completely useless but definitely a fun project.

https://jotunheimr.idlerpg.net/users/jotun/lawnmower/


The future of large files in Git is Git

Tags: tech, version-control, git, storage, tools

Looking forward to Git LFS going away indeed.

https://tylercipriani.com/blog/2025/08/15/git-lfs/


Introduce git-history command for easy history editing

Tags: tech, tools, version-control, git

Let’s see if this gets merged. This could be an interesting convenience.

https://lore.kernel.org/git/20250819-b4-pks-history-builtin-v1-0-9b77c32688fe@pks.im/


Cheap tricks for high-performance Rust

Tags: tech, performance, memory, rust, tools

Not so much tricks to optimize your code itself, but knowing the tooling knobs sometimes helps.

https://deterministic.space/high-performance-rust.html


The issue of anti-cheat on Linux

Tags: tech, gaming, windows, linux, kernel, system

Or why competitive multiplayer games with anti-cheat will probably never make it to Linux. I’m not into this kind of game, but this is an interesting piece comparing the differences between the Linux and Windows kernels. It also shows that with some care from the game developers, those anti-cheats might not be necessary in the first place.

https://tulach.cc/the-issue-of-anti-cheat-on-linux/


Predictable memory accesses are much faster

Tags: tech, cpu, hardware, memory, performance

Indeed, CPU prefetchers are really good nowadays. Now you know what to do to keep your code fast.

https://lemire.me/blog/2025/08/15/predictable-memory-accesses-are-much-faster/


Fun and weirdness with SSDs

Tags: tech, databases, ssd, performance

Interesting, it looks like index scans in your databases can have surprising performance results with SSDs.

https://vondra.me/posts/fun-and-weirdness-with-ssds/


How to Think About GPUs

Tags: tech, ai, machine-learning, gpu, tpu, hardware

Long but interesting chapter which shows how GPU architecture works and how it differs from TPUs. This is unsurprisingly written in the context of large-model training.

https://jax-ml.github.io/scaling-book/gpus/


Tag-based logging

Tags: tech, logging

The idea is interesting; I wouldn’t throw away level-based logging, but this could complement it nicely.

https://mmapped.blog/posts/44-tag-based-logging


A programmer’s field guide to assertions

Tags: tech, safety, programming, organization

A bit of a long read, but does a good job explaining the use of assertions and how to introduce them in your organization.

https://typesanitizer.com/blog/assertions.html


A Better Vocabulary for Testing

Tags: tech, tests

There’s a need for clearer vocabulary about testing indeed. The write-up is a bit dry here, but that’s a start.

https://alperenkeles.com/posts/vocab-for-testing/


Everything I know about good system design

Tags: tech, system, design, complexity

A good list of things to consider when designing systems. And indeed, in case of success, the result probably looks boring.

https://www.seangoedecke.com/good-system-design/


Why do software developers love complexity?

Tags: tech, complexity, architecture, programming

Indeed, let’s not fall for the marketing. It’s better to write less code if it’s enough to solve actual problems.

https://kyrylo.org/software/2025/08/21/why-do-software-developers-love-complexity.html


Are Your Programmers Working Hard, Or Are They Lazy?

Tags: tech, organization, team, productivity, quality, management

A good reminder that long hours are not a sign of success with your project… on the contrary.

https://mikehadlow.blogspot.com/2013/12/are-your-programmers-working-hard-or.html


Hordes Of Novices

Tags: tech, craftsmanship, learning, quality

Easy to misunderstand as an elitist stance… But that’s not the way I read it. Churning out more code faster isn’t going to help us; you need to take the time for people to grow and improve. That’s not possible if you’re drowning in eager beginners.

https://blog.cleancoder.com/uncle-bob/2013/11/19/HoardsOfNovices.html


The 10 models of remote and hybrid work

Tags: tech, gitlab, remote-working, management, culture, organization

A good way to frame the possible models for your organization regarding remote work. The GitLab Handbook remains a very good resource on remote work; they really thought about it and documented their findings.

https://handbook.gitlab.com/handbook/company/culture/all-remote/stages/


Agile Product Ownership in a nutshell

Tags: tech, agile, product-management

I think this is still one of the best distilled explanations of product ownership. It’s also interesting for the other parties on an agile project.

https://blog.crisp.se/2012/10/25/henrikkniberg/agile-product-ownership-in-a-nutshell


Managing in Mayberry: An examination of three distinct leadership styles

Tags: tech, management, leadership

Interesting parable, it’s indeed a good way to illustrate different leadership styles. Being more strategic is clearly what one should try to do.

https://www.donaldegray.com/managing-in-mayberry-an-examination-of-three-distinct-leadership-styles/


The importance of stupidity in scientific research

Tags: science, research

An important essay in my opinion. It reminds us quite well what the core drive of scientific research is about.

https://journals.biologists.com/jcs/article/121/11/1771/30038/The-importance-of-stupidity-in-scientific-research



Bye for now!

Integrating KTextEditor into Cantor (2)

Over the past few months, I’ve been working on an important refactor in Cantor: migrating the editor for command entries from our in-house QTextDocument implementation to the powerful KTextEditor framework. In my previous update, I described how Phase 1 laid the foundation—command cells were migrated to use KTextEditor::View, enabling basic syntax highlighting and a modern editing experience.

Today, I’m excited to share that Phase 2 is now complete! With this milestone, the migration of command entries to KTextEditor is fully in place, ensuring that all existing functionality works smoothly without regressions. This achievement provides a solid foundation for future enhancements while keeping Cantor stable and reliable for everyday use.

What’s New in Phase 2

With Phase 2 now complete, command entries are fully integrated into KTextEditor. Along the way, we introduced three major upgrades to Cantor’s core architecture, paving the way for a more consistent, powerful, and future-ready user experience.

🔹 Unified Highlighting Framework

All syntax highlighting in Cantor is now powered by KSyntaxHighlighting, the same robust engine behind Kate and KWrite. This change ensures that every backend (such as Python, Maxima, R, Octave, etc.) benefits from a consistent, accurate, and highly reliable highlighting system.

Previously, each backend shipped with its own ad-hoc rules that were difficult to maintain and often inconsistent in style. With the new centralized approach, Cantor handles highlighting uniformly, not only providing a smoother user experience but also laying the groundwork for future support of custom themes and user-defined keywords.
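For illustration, wiring a text document to the shared engine takes only a few lines. This is a generic KSyntaxHighlighting sketch, not Cantor's actual integration code:

```cpp
#include <KSyntaxHighlighting/Definition>
#include <KSyntaxHighlighting/Repository>
#include <KSyntaxHighlighting/SyntaxHighlighter>
#include <KSyntaxHighlighting/Theme>
#include <QGuiApplication>
#include <QPalette>
#include <QTextDocument>

void attachHighlighter(QTextDocument *document, const QString &language)
{
    // One shared definition database can serve every backend language.
    static KSyntaxHighlighting::Repository repository;

    auto *highlighter = new KSyntaxHighlighting::SyntaxHighlighter(document);
    highlighter->setDefinition(repository.definitionForName(language));
    highlighter->setTheme(repository.themeForPalette(qGuiApp->palette()));
}
```

Because every backend goes through the same Repository, a definition or theme fix benefits all of them at once.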


🔹 Unified Completion Infrastructure

Code completion has likewise been consolidated into a common framework coordinated through KTextEditor. In the past, each backend had its own incomplete and sometimes inconsistent completion logic. Now, all completion requests are handled in a unified, predictable manner, with backend-specific intelligent suggestions seamlessly integrated.

The result is less duplicated code, easier maintenance, and—most importantly—a more cohesive user experience. Whether you are writing Python scripts or Maxima formulas, code completion now behaves consistently, making Cantor feel smarter and more reliable.


🔹 Reduced Code Redundancy

By adopting KTextEditor as the core for command entry editing, we eliminated a significant amount of custom code that had been written in Cantor over the years to handle code completion and highlighting for the different supported languages.

This streamlining improves maintainability, reduces potential bug risks, and makes Cantor’s codebase more approachable for new contributors. Developers no longer need to reimplement low-level editor features, allowing them to focus on advancing high-level functionality. In short: less boilerplate, more room for innovation.


Functional demonstration: new vs. old comparison, take a look!

Thanks to the new KSyntaxHighlighting backend, we can now temporarily change the theme of command entries, demonstrating future possibilities.

Please note that this is currently a preview feature; global “sheet themes” (applying themes uniformly to the entire sheet) are our next step.

  • Breeze Dark

  • GitHub Dark

  • Breeze Light

By integrating KTextEditor, Cantor now provides a unified and reliable code completion experience for all backends (such as Python, R, and Maxima).

Cantor also supports consistent multi-cell handling, with themes and syntax highlighting applied uniformly.

Why This Matters

This migration is not just a technical change under the hood—it directly impacts how Cantor evolves:

  • Stability first: by ensuring no regressions during the migration, users can continue to rely on Cantor for daily work without disruption.
  • Consistency across backends: highlighting and completion now feel the same, no matter which computational engine you choose.
  • Future-proof foundation: less redundant code and more reliance on KDE Frameworks means Cantor can keep pace with new features in KTextEditor and the broader KDE ecosystem.

What’s Next

With command entries now fully migrated, the door is open for exciting new improvements:

  • Theming support: enabling custom color schemes and styles, giving users the ability to tailor Cantor’s appearance to their preferences.
  • Vi mode integration: bringing modal editing from Kate into Cantor.
  • Spell checking: powered by Sonnet, useful for Markdown and explanatory text inside worksheets.
  • Smarter backend completions: richer suggestions, function signatures, and inline documentation.
  • Performance work: optimizing for very large worksheets and heavy computations.

Theming support (planned)

For now, Cantor will keep the Default theme, which uses the desktop palette. This preserves the familiar look and behavior.

Next, we plan to introduce a Worksheet Theme setting. Users will be able to:

  • Stay with Default (desktop palette, as before), or
  • Choose a theme from KTextEditor/KSyntaxHighlighting, just like in Kate.

The selected theme will apply consistently across the worksheet—including command entries and results—for a unified appearance. Instead of relying on hardcoded colors or the system palette, Cantor will use the color roles provided by KTextEditor and KSyntaxHighlighting.

This approach avoids performance overhead from repeatedly reading theme files, ensures instant updates when switching themes, and lays the foundation for richer customization in the future—such as clearer distinctions between prompts, results, and errors, all within a consistent global style.


Wednesday, 20 August 2025

AI Coding with Qt: Qt AI Assistant for Qt Creator

The integration of artificial intelligence into software development environments has rapidly evolved, and Qt Creator is no exception. With the introduction of the Qt AI Assistant by Qt Company, developers working with Qt Creator now have access to AI models through the IDE. This post provides an introduction to the Qt Creator plugin.

This is part 1 of an ongoing series about AI coding with Qt.

What is Qt AI Assistant?

Qt AI Assistant is a commercial plugin for Qt Creator that brings current AI models to the IDE. Features provided by the plugin include:

  • Code completion for multiple languages (QML, C++, Python, Bash, etc.)
  • Contextual chat with your codebase, enabling explanations, code generation and code review
  • Automated test case generation, particularly tailored for QML and Qt-specific workflows
  • Model choice based on languages (QML vs other languages) and task (chat vs code completion)

This is a step up from the existing GitHub Copilot support in Qt Creator that was focused on code completion only.

Complementing Qt AI Assistant is a publicly available set of models from Qt Group. The models are based on CodeLlama and are fine-tuned for usage with Qt 6 QML. They are not included with the plugin but need to be set up manually using Ollama.

Setting Up Qt AI Assistant

The setup process for Qt AI Assistant is more involved than some other AI coding tools. The plugin is currently available as part of the commercial Qt distribution. Installation requires enabling the appropriate extension repository within Qt Creator and activating the plugin. Once installed, configuration is necessary to connect the plugin to a large language model (LLM) provider.

Supported LLMs include OpenAI (ChatGPT), Anthropic (Claude), Google (Gemini), and self-hosted models via Ollama. For OpenAI integration, developers must use the OpenAI developer platform to generate an API key, which is different from having an account for ChatGPT. This API key is then entered into the plugin’s settings within Qt Creator. Other models require similar setup using URLs and credentials, depending on the provider or the self-hosting method.

More information is in the video linked at the bottom of this blog post.

Features in Practice

Code Completion and Chat

The plugin distinguishes between code completion suggestions as you type and prompt-based interactions, such as asking for code explanations or generating new code. For QML, a specialized Code Llama 13B QML model can be used. For other languages, general-purpose models are employed.

The chat interface allows developers to highlight code and request explanations or modifications. For example, selecting a block of QML or C++ and asking the assistant to "explain the selected code" yields a detailed, context-aware explanation.

Test Case Generation

A notable feature is the ability to generate test cases from selected QML code. While the generated tests may require manual refinement, this automation can accelerate the initial setup of unit tests and reduce repetitive work. The plugin’s approach is to copy relevant code into the test, which may not always result in optimal reuse, but provides a useful starting point.

Model Choice

Developers can choose different LLMs for the chat-and-review scenario versus the code completion scenario. For QML, the model choice is separate, with options including the fine-tuned models provided by Qt Company. This flexibility extends to hosting options, supporting both cloud and local deployments, depending on organizational needs and privacy considerations.

Further Resources

For a detailed walkthrough and live demonstrations, watch the following episodes of "The Curious Developer" series:

Additionally, the official Qt AI Assistant product page provides up-to-date information on features and availability: https://www.qt.io/product/ai-assistant.

Outlook

Future posts in this series will consider alternative coding tools useful for Qt and will cover the newest developments of the tools mentioned here.
