Wednesday, 12 February 2025
Welcome to the @Krita-promo team's January 2025 development and community update.
Development Report
Krita 5.2.9 Released
A new bugfix release is out! Check out the Krita 5.2.9 release post and keep up-to-date with the latest version.
Qt6 Port Progress
Krita can now be compiled (MR!2306) and run (Mastodon post) with Qt6 on Linux, a major milestone on the long road of porting from the outdated Qt5 framework. However, there's still a long way to go to get things working correctly, and it will be some time before any pre-alpha builds are available for the far-off Krita 6.0.
Community Report
January 2025 Monthly Art Challenge Results
For the "Magical Adventure" theme, 14 members submitted 20 original artworks. And the winner is… Magical Adventure by @Mythmaker

The February Art Challenge is Open Now
For the February Art Challenge, @Mythmaker has chosen "Fabulous Flora" as the theme, with the optional challenge of using natural texture. See the full brief for more details, and bring some color into bloom.
Featured Artwork
Best of Krita-Artists - December 2024/January 2025
Nine images were submitted to the Best of Krita-Artists Nominations thread, which was open from December 14th to January 11th. When the poll closed on January 14th, these five wonderful works made their way onto the Krita-Artists featured artwork banner:
Coven Camille | League Of Legends Fan Art by @Dehaf

Still Evening by @MangooSalade


Flying Pig Squadron by @Yaroslavus_Artem

Oniwakamaru and the giant carp by @GioArtworks

Best of Krita-Artists - January/February 2025
Voting is open until February 15th!
Ways to Help Krita
Krita is Free and Open Source Software developed by an international team of sponsored developers and volunteer contributors.
Visit Krita's funding page to see how user donations keep development going, and explore a one-time or monthly contribution. Or check out more ways to Get Involved, from testing, coding, translating, and documentation writing, to just sharing your artwork made with Krita.
The Krita-promo team has put out a call for volunteers. Come join us and help keep these monthly updates going.
Notable Changes
Notable changes in Krita's development builds from Jan. 16 - Feb. 12, 2025.
Unstable branch (5.3.0-prealpha):
Bug fixes:
- Blending Modes: Rewrite blending modes to properly support float and HDR colorspaces. (bug report) (Change, by Dmitry Kazakov)
- Brush Engines: Fix Filter Brush engine to work with per- and cross-channel filters. (Change, by Dmitry Kazakov)
- Filters: Screentone: Change default screentone interpolation type to Linear. (Change, by Emmet O'Neill)
- Scripting: Fix Node.paint script functions to use the given node instead of active node. (Change, by Freya Lupen)
Features:
- Text: Load font families as resources and display a preview in the font chooser. (Change, by Wolthera van Hövell)
- Filters: Random Noise: Add grayscale noise option and improve performance. (Change, by Maciej Jesionowski)
- Blending Modes: Add a new HSY blending mode, "Tint", which colorizes and slightly lightens. It's suggested to be used with the Fast Color Overlay filter. (Change 1, Change 2 by Maciej Jesionowski)
Nightly Builds
Pre-release versions of Krita are built every day for testing new changes.
Get the latest bugfixes in Stable "Krita Plus" (5.2.10-prealpha): Linux - Windows - macOS (unsigned) - Android arm64-v8a - Android arm32-v7a - Android x86_64
Or test out the latest Experimental features in "Krita Next" (5.3.0-prealpha). Feedback and bug reports are appreciated! Linux - Windows - macOS (unsigned) - Android arm64-v8a - Android arm32-v7a - Android x86_64
Tuesday, 11 February 2025
This will be a boring one, sorry. Basically a “JFYI” for people interested in Plasma’s release administration.
A while back, there was a discussion about moving Plasma to a twice-yearly release model, to better align with discrete release distros that also have two yearly releases — mostly Fedora KDE and Kubuntu.
We discussed this at last year’s Akademy and decided not to do it yet, on the grounds that a faster release schedule would help get important Wayland improvements to users more quickly, given that we’re trying to polish up our Wayland session at high speed. We agreed that once the Wayland Known Significant Issues wiki page is empty, we can re-evaluate.
Until that re-evaluation happens, we decided to instead lengthen our beta release periods from four weeks to six weeks, with new beta releases every two.
Plasma 6.3 will be released very soon, yay! For this release, we remembered to push out the extra new beta releases, but forgot about adding on an extra two weeks to its duration. Oops.
Interestingly, we haven’t gotten many bug reports from users of the Plasma 6.3 beta, which is historically unusual. I’m not totally sure what to make of this. The optimistic assessment is that the release was already great even by the time we shipped the beta! The pessimistic one is that few people showed up to QA it, so lots of bugs got missed (I’m less sure it’s this, but anything’s possible). Either way, it seems like an extra two weeks of beta time wouldn’t have made much difference, so we’re not considering it a great loss.
As a result, we’ve decided to try this same approach for 6.4: a four-week beta period, with new releases every two weeks. We’ll see how it goes! If it’s fine, we can keep it, and if it’s not, we can return to the original plan of six-week beta periods. Until then, enjoy Plasma 6.3!
Tellico 4.1.1 is available, with a few fixes.
Improvements
- Updated Filmaffinity and ComicVine data sources.
- Added configurable image size to Discogs and Moviemeter data sources.
- Fixed ISBN searching with Colnect.
Monday, 10 February 2025
This is the release schedule the release team agreed on
https://community.kde.org/Schedules/KDE_Gear_25.04_Schedule
Dependency freeze is in around 3 weeks (March 6) and feature freeze one week after that. Get your stuff ready!
This week, I focused on integrating the Monte Carlo Tree Search (MCTS) algorithm into the MankalaEngine. The primary goal was to test the performance of the MCTS-based agent against various existing algorithms in the engine. Let's dive into what MCTS is, how it works, and what I discovered during the testing phase.
What is Monte Carlo Tree Search (MCTS)?
Monte Carlo Tree Search (MCTS) is a heuristic search algorithm used for decision-making in sequential decision problems. It incrementally builds a search tree and runs many random simulations at each step to evaluate potential outcomes. These simulations help the algorithm determine the most promising move to make.
How Does MCTS Work?
MCTS operates through four key steps:
1. Selection
The algorithm starts at the root node (representing the current game state) and traverses down the tree to a leaf node (an unexplored state). During this process, the algorithm selects child nodes using a specific strategy.
A popular strategy for node selection is Upper Confidence Bounds for Trees (UCT). The UCT formula helps balance exploration and exploitation by selecting nodes based on the following equation:
UCT = mean + C × sqrt(ln(N) / n)
Where:
- mean is the average reward (or outcome) of the child node.
- N is the total number of simulations performed for the parent node.
- n is the number of simulations performed for the current child node.
- C is a constant that controls the level of exploration.
2. Expansion
Once the algorithm reaches a leaf node, it expands the tree by adding one or more child nodes representing potential moves or decisions that can be made from the current state.
3. Simulation
The algorithm then performs a simulation or rollout from the newly added child node. During this phase, the algorithm randomly plays out a series of moves (typically following a simple strategy) until the game reaches a terminal state (i.e., win, loss, or draw).
This is where the Monte Carlo aspect of MCTS shines. By simulating many random games, the algorithm gains insights into the likely outcomes of different actions.
4. Backpropagation
After the simulation ends, the results are propagated back up the tree, updating the nodes with the outcome of the simulation. This allows the algorithm to adjust the expected rewards of the parent nodes based on the result of the child node’s simulation.
Read more here: MCTS
Implementing MCTS in C++ for MankalaEngine
With a solid understanding of the algorithm, I began implementing MCTS in C++. The initial step involved integrating the MCTS logic into the benchmark utility of the MankalaEngine. After resolving a series of issues and running multiple tests, the code was functioning as expected.
Testing Results
I compared the performance of the MCTS agent against other existing agents in the MankalaEngine, such as Minimax, MTDF, and Random agents. Here’s how the MCTS agent performed:
Random Agent (Player 1) vs. MCTS (Player 2)
- MCTS won 80% of the time
MCTS (Player 1) vs. Random Agent (Player 2)
- MCTS won 60% of the time
MCTS vs. Minimax & MTDF
- Unfortunately, MCTS consistently lost against both Minimax and MTDF agents. 😞
Key Improvements for MCTS
While MCTS performed well against the Random Agent, there is still room for improvement, especially in its simulation phase. Currently, the algorithm uses a random policy for simulations, which can be inefficient. To improve performance, we can:
- Use more efficient simulation policies that simulate only promising moves, rather than randomly selecting moves.
- At the start of the Selection step, focus on moves that have historically been good opening strategies (this requires further research to identify these moves, especially in Pallanguli).
- Fine-tune the exploration-exploitation balance to improve decision-making.
Upcoming Tasks
In the upcoming week, I plan to:
- Write test cases for the Pallanguli implementation.
- Review the implementation of Pallanguli.
Last year during Akademy I gave a talk called Union: The Future of Styling in KDE?!. In this talk I presented a problem: We currently have four ways of styling our applications. Not only that, but some of these approaches are quite hard to work with, especially for designers who lack programming skills. This all leads to it being incredibly hard to make changes to our application styling currently, which is not only a problem for something like the Plasma Next Initiative, but even smaller changes take a lot of effort.
This problem is not new; we already identified it several years ago. Unfortunately, it also is not easy to solve. Some of the reasons it got to this state are simply inertia. Some things like Plasma's SVG styling were developed as a way to improve styling in an era where a lot of the technologies we currently use did not exist yet. The solutions developed in those days have now existed for a pretty long time so we cannot suddenly drop them. Other reasons are more technical in nature, such as completely different rendering stacks.
Introducing Union
Those different rendering stacks are actually one of the core issues that makes this hard to solve. It means that we cannot simply use the same rendering code for everything, but have to come up with a tricky compatibility layer to make that work. This is what we currently do, and while it works, it means we need to maintain said compatibility layer. It also means we are not utilizing the rendering stack to its full potential.
However, there is another option, which is to take a step back and realise that we actually may not even want to share the rendering code, given that they are quite different. Instead, we need a description of what the element should look like, and then we can have specific rendering code that implements how to render that in the best way for a certain technology stack.
This idea is at the core of a project I called Union, which is a styling system intended to unify all our separate approaches into a single unified styling engine that can support all the different technologies we use for styling our applications.
Image: The three separate parts of Union

Union consists of three parts: an input layer, an intermediate layer and an output layer. The input layer consists of plugins that read and interpret some input file format containing a style description and turn it into a more abstract description of what to render. How to do that is defined by the middle intermediate layer, which is a library containing the description of the data model and a method of defining which elements to apply things to. Finally, the output layer consists of plugins that take the data from the intermediate layer and turn it into actual rendering commands, as needed for a specific rendering stack.
Implementing Things
This sounds nice on paper, but implementing it is easier said than done. For starters, everything depends on the intermediate layer being both flexible enough to handle varying use cases but at the same time rigid enough that it becomes hard to - intentionally or unintentionally - create dependencies between the input and output layers. Apart from that, replacing the entire styling stack is simply going to be a lot of work.
Image: Plasma's SVG styling uses specially-marked SVG items for styling.

To allow us to focus more on the core, we needed to break things down into more manageable parts. We chose to focus on the intermediate layer first, by using Plasma's SVG themes as an input format and a QtQuick Style as output. This means we are working with an input format that we already know how to deal with. It also means we have a clear picture of what the output should look like, as it should ultimately look just like how Plasma looks.
At this point, a lot of this work has now been done. While Union does not yet implement a full QtQuick style, it implements most of the basic controls, enough to allow something such as Discover to run without looking completely alien. Focusing on the intermediate layer proved very useful: we encountered and managed to solve several pretty tricky technical issues that would have been even trickier if we did not know what things should look like.
Image: Plasma Discover running using Union.

Union Needs You!
All that said, there is still a lot to be done. For starters, to be an actual unified styling system for KDE we need a QtWidgets implementation. Some work on that has started, but it is going to be a lot harder than the QtQuick implementation. We also need a different input format. While Plasma's SVG styling works, it is not ideal for developing new styles with. I would personally like to investigate using CSS as input format as it has most of what we need while also being familiar to a lot of people. Unfortunately, finding a good CSS parser library turns out to be quite hard.
However, at this stage we are at a point where we have multiple tasks that can be done in parallel. This means it is now at a point where it would be great if we had more people developing code, as well as some initial testing and feedback on the system. If you are interested in helping out, the code can be found at invent.kde.org/plasma/union. There is also a Matrix channel for more real-time discussions.
Discuss this article on KDE Discuss.
ahiemstra Mon, 02/10/2025 - 12:32

Kasts polishing, progress on Krita Qt6 port and Kdenlive fundraising report
Welcome to a new issue of "This Week in KDE Apps"! Every week we cover as much as possible of what's happening in the world of KDE apps. This issue contains changes from the last two weeks.
Much happened over the past two weeks: we had a successful KDE presence at FOSDEM, we now have a location and date for this year's edition of Linux App Summit (April 25-26, 2025 in Tirana, Albania), and we also continued to improve our apps. Let's dive in!
Releases
- KDE Gear 24.12.2 is out with some bugfixes.
- Glaxnimate 0.6.0 beta is out. Glaxnimate is a 2D animation software, and 0.6.0 is the first version of Glaxnimate released as part of KDE. Check out Glaxnimate's website.
- KStars 3.7.5 is out with mostly bugfixes and performance improvements.
- GCompris 25.0 is out. This is a big release containing 5 new activities.
- Krita 5.2.9 is out. This is a bugfix release, containing all the fixes from our bug hunt efforts back in November. Major fixes include fixes to clone layers, fixes to opacity handling (in particular for file formats like EXR), a number of crash fixes, and much more!
Akonadi Background service for KDE PIM apps
We fixed an issue where loading tags was broken and would result in a constant 100% CPU usage. (Carl Schwan, 24.12.3. Link)
Elisa Play local music and listen to online radio
We now correctly re-open Elisa when it was minimized to the system tray. (Pedro Nishiyama, 24.12.2. Link)
Dolphin Manage your files
We made it possible to rename tabs in Dolphin. This action is available in each tab's context menu. This is useful for very long tab names or when it is difficult to identify a tab by a folder's name alone. (ambar chakravartty, 25.04.0. Link)
We also improved the keyboard-based selection of items. Typing a letter on the keyboard usually selects the item in the view which starts with that letter. Diacritics are now ignored here, so you will, for example, be able to press the "U" key to select a file starting with an "Ü". (Thomas Moerschell, 24.12.3. Link)
We changed the three view buttons to a single menu button. (Akseli Lahtinen, 25.04.0. Link)

We made the "Empty Trash" icon red, in conformance with our HIG, as it is a destructive operation. (Nate Graham, 25.04.0. Link)
We improved getting the information from supported version control systems (e.g. Git). It is now faster and happens earlier. (Méven Car, 25.04.0. Link)
Falkon Web Browser
We added input methods hints to input fields. This is mostly helpful when using different input methods than a traditional keyboard (e.g. a virtual keyboard). (Juraj Oravec. Link)
KDE Itinerary Digital travel assistant
We continued to improve the coverage of Itinerary in Poland. This week we added support for the train operator Polregio, fixed and refactored the extractor for Koleo and rewrote the extractor for PKP-app to support the ticket layouts. (Grzegorz Mu, 24.12.3. Link 1, link 2, and link 3)
We also added support for CitizenM hotel bookings. (Joshua Goins, 24.12.3. Link)
We also started working on an online version of the ticket extractor. A preview is available on Carl's website.
Volker also published a recap of the past two months in Itinerary. It also covers some orthogonal topics, like the free software routing service Transitous.
Kasts Podcast application
We fixed the vertical alignment of the queue header. (Joshua Goins, 25.04.0. Link)
We are now using Kirigami.UrlButton for links and Kirigami.SelectableLabel for the text description in the podcast details page to improve visual and behavior consistency with other Kirigami applications. (Joshua Goins, 25.04.0. Link)
We also improved the look of the search bar in the discovery page. It's now properly separated from the rest of the content. (Joshua Goins, 25.04.0. Link)
We added the ability to force the app to mobile/desktop mode. (Bart De Vries, 25.04.0. Link)

We fixed the sort order of the podcasts episodes. (Bart De Vries, 24.12.3. Link)
Finally, we made various improvements to our usage of QML in Kasts to use newer QML constructs. This should slightly improve performance while reducing technical debt. (Tobias Fella, 25.04.0. Link 1, link 2, link 3, link 4, link 5, and link 6)
Kate Advanced text editor
We fixed some issues with the list of commits displayed in Kate. The highlight color is now correct and the margins consistent. (Leo Ruggeri, 25.04.0. Link)
We improved the diff widget of Kate. The toolbar icon sizes are now the same as other toolbars in Kate. (Leo Ruggeri, 25.04.0. Link)
Kdenlive Video editor
The Kdenlive team published a report about the result of their last fundraising. It contains a huge amount of great improvements, so go read it!
We added a checkerboard option for the clip monitor background. (Julius Künzel, 25.04.0. Link)

Konqueror KDE File Manager & Web Browser
We fixed the handling of the cookie policy when no policy has been explicitly set. (Stefano Crocco, 24.12.3. Link)
Krita Digital Painting, Creative Freedom
The Krita team continued porting Krita to Qt6/KF6. The application now compiles and runs with Qt6, but there are still some unit tests not working. Link to Mastodon thread

Ramon published a video about "Memileo Brushes" on YouTube.
KRDC Connect with RDP or VNC to another computer
We implemented the dynamic resolution mode from the remote desktop protocol (RDP). This means we now resize the remote desktop to fit the current KRDC window. This works for Windows >= 8.1. (Fabio Bas, 25.04.0. Link)
We added support for the domain field in the authentication process. (Fabio Bas, 25.04.0. Link)
We adapted the code to work with FreeRDP 3.11. (Fabio Bas, 25.04.0. Link)
Marknote Write down your thoughts
We fixed the "Sort Notes List" option not being set by default. (Joshua Goins. Link)
We now properly capitalize the "undo" and "redo" actions. (Joshua Goins. Link)
We removed internal copies of some Kirigami Addons components in Marknote. (Joshua Goins. Link)
Okular View and annotate documents
We added a way to filter the list of certificates to only show certificates for "Qualified Signatures" in the certificate selection. (Sune Vuorela, 25.04.0. Link)
PlasmaTube Watch YouTube videos
We improved the placeholder messages for empty views. (Joshua Goins, 25.04.0. Link 1 and link 2)
We fixed displaying thumbnails and avatars when using the Peertube backend. (Joshua Goins, 24.12.3. Link, link 2, and link 3)
Barcode Scanner Scan and create QR-Codes
Qrca can now scan a QR code directly from an image instead of just from the camera. (Onuralp Sezer, 25.04.0. Link)
Tokodon Browse the Fediverse
We are now using more fitting icons for the "Embed" and "Open in Browser" actions in Tokodon's context menu. We also removed the duplicated "Copy to Clipboard" action from that context menu. (Joshua Goins, 24.12.3. Link and link 2)

Following the improvements from two weeks ago, we made even more accessibility/screen reader improvements to Tokodon. (Joshua Goins, 24.12.3. Link)
…And Everything Else
This blog only covers the tip of the iceberg! If you’re hungry for more, check out Nate's blog about Plasma and be sure not to miss his This Week in Plasma series, where every Saturday he covers all the work being put into KDE's Plasma desktop environment.
For a complete overview of what's going on, visit KDE's Planet, where you can find all KDE news unfiltered directly from our contributors.
Get Involved
The KDE organization has become important in the world, and your time and contributions have helped us get there. As we grow, we're going to need your support for KDE to become sustainable.
You can help KDE by becoming an active community member and getting involved. Each contributor makes a huge difference in KDE — you are not a number or a cog in a machine! You don’t have to be a programmer either. There are many things you can do: you can help hunt and confirm bugs, even maybe solve them; contribute designs for wallpapers, web pages, icons and app interfaces; translate messages and menu items into your own language; promote KDE in your local community; and a ton more things.
You can also help us by donating. Any monetary contribution, however small, will help us cover operational costs, salaries, travel expenses for contributors and in general just keep KDE bringing Free Software to the world.
To get your application mentioned here, please ping us on Invent or in Matrix.
If you don’t know what Fast Sketch Cleanup plugin is, here’s a blog post describing it in detail: https://krita.org/en/posts/2024/fast_sketch_background/. In short, it’s a neural network-based filter similar to Edge Detection or Engrave that is supposed to clean up a sketch and create lines that can be used as a base for a lineart or help with coloring.
Download
Windows
- Plugin: FastSketchPlugin1.1.0.zip
- Portable zip file: krita-x64-5.3.0-prealpha-68346790.zip
Linux
- 64 bits Linux: krita-5.3.0-prealpha-68346790dc-x86_64.AppImage
New GUI
The old GUI was relatively difficult to use and quite limited. For example, there was no way to use a custom model outside of the main directory; you’d have to manually put the model files into the main directory of the plugin. There was also no pre- or post-processing, and the resolution of the input image was fixed, which didn’t allow for fine-tuning the result.
The new GUI looks like this:
Model
In this section you can select the model in the File combobox, or you can switch to another folder using either the button with the folder icon (for a custom folder) or the “Reset to default” button (which resets the path to the default path for the plugin). The combobox with models gets updated to show the models from the currently selected folder.
“Note about the model” presents some notes or hints about usage that were saved into the model information file.
Device to use
Here you can choose whether to use CPU, GPU or NPU. NPU is a new type of device that is only available on some computers. On Windows you should have all the drivers already installed, but if you’re on Linux, you will need to install them manually. CPU is typically the slowest, so if any other device is available, use that one instead. Unavailable devices should be greyed out.
Preview images
Those are small cutouts of the image at different stages of the processing. The first image shows the original sample; the second one shows the result of the pre-processing; the third shows the result of the inference (processing through the model) applied to the pre-processed image; and the last one shows the final result.
Preview size determines how big the preview is. The sample is cut out of the center of the image, with the width and height being equal to the Preview Size * Model Input Size (usually 256) / Scale. That means that a Preview Size of 8 would update roughly 64x slower than a Preview Size of 1 (the cutout is 8x wider and 8x taller), no matter the Scale and assuming the same model. That might make the dialog less responsive, so be careful with higher values. Sometimes it is useful, though, to see a bigger preview.
If you click on one of the images, it will bring out a dialog showing the same image in a bigger size, and you can click on the buttons or use arrows to navigate to the other images. You can resize that dialog to see even more detail if needed.
Pre-processing
Defines the pre-processing. It’s performed in the order of the widgets in the dialog.
Levels widget: it’s a quick way to increase contrast in the input image.
Scale: every model has a specific size of its context window, which means it’s sensitive to resolution. Using Scale you can quickly decrease or increase the resolution of the input image, changing the result in a very significant way. Be careful: the scale applies to each dimension, so the pixel count, and with it the processing time, grows or shrinks with the square of the Scale value.
Post-processing
Scale: it’s just a widget showing the reversal of the scaling in pre-processing. You can’t change it. It ensures that the result has the same size as the input image.
Levels widget: it works just like in the pre-processing.
Sharpen filter: it sharpens the result, with the strength equal to the number from the slider. Zero means input = output; every higher value sharpens the result more. One means the exact result you’d get from Krita’s normal Sharpen filter.
Advanced options
Invert: usually you don’t need to change this option, because whether it needs to be checked is embedded in the model information file (the same one that contains the note). Most models require this checkbox to be checked.
Run
Press the button to start processing. It takes the projection (think: “New Layer From Visible”) of the canvas, puts it through all the processing, and then creates a new layer with the result.
The Run button changes into a Progress Bar to show you progress. When the image is processed, the dialog closes automatically.
Note that it’s not possible to cancel the processing, unfortunately.
Best workflow
The best workflow I found is to first use SketchyModel.xml with a low scale (1.0 or often even below that), then either decrease the opacity of the result or put a semi-opaque white layer on top, and then use InkModel.xml. The first model removes unnecessary lines and smooths them out, and the second creates nice, crisp lines. The only problem with using them one after the other is that SketchyModel produces pretty dark lines, while InkModel is sensitive to values and requires light grey input, otherwise it doesn’t work properly; hence the additional white layer.
You can also use InkModel.xml directly, if the sketch is clean enough already.
Example 1.
The following examples are derivatives of David Revoy’s sketch “Pepper Sketch”, with the only editing being the FSC plugin or Engrave G’MIC filter (used for comparison).
Workflow:
- Use SketchyModel, Levels: (0.3, 1.0)
- Add a white layer, opacity = 40%
- Use the mentioned model or G’MIC filter.
Results:
- Original sketch:

- Result of SketchyModel, with Preprocessing: Levels (0.30, 1.00), and then with a white 40% transparent layer on top:


- Results of the workflow with, in order of appearance: a) SoftInkModel, scale 4.0, b) InkModel, scale 4.0, c) InkModel, scale 6.0:



- Result of G’MIC’s filter Engrave, in order of appearance: a) over the original sketch, b) over the version smoothed out by SketchyModel:


Example 2.
The following example is a derivative of “Pepper and Carrot in traditional clothing” by David Revoy.
Workflow:
- Use SketchyModel, with standard options, Scale = 1.0.
- Add a white layer with 40% opacity.
- Use InkModel, Scale = 4.0.


Example 3.
The following example is a derivative of “Huge machine to maintain” by David Revoy.
Workflow:
- Use SketchyModel, Levels in preprocessing: (0.0, 0.82) (to whiten the background), Scale either 1.0 or 2.0.
- Add a white layer with 40% opacity.
- Use InkModel, Scale = 4.0.
Original:

Using SketchyModel at Scale 1.0 (resulting in fewer details):

Using SketchyModel at Scale 2.0 (more details):

Workflow 2.
- Just using InkModel, with Levels (0.0, 0.9) and Scale = 4.0.
Result:

Sunday, 9 February 2025
by Alexander Bokovoy and Andreas Schneider
FOSDEM 2025 is just behind us and it was a great event as always. Alexander and I had a chance to talk about the local authentication hub project. Our FOSDEM talk was “localkdc – a general local authentication hub”. You can watch it and come back here for more details.
But before going into details, let us provide a bit of background. It is 2025 now, and we need to go back almost three decades (ugh!).
History dive
Authentication on Linux systems is interwoven with the identity of the users. Once a user has logged in, their processes run under a certain POSIX account identity. Many applications validate the presence of the account prior to the authentication itself. For example, the OpenSSH server checks the POSIX account and its properties and, if the user is not found, will intentionally corrupt the password passed to the PAM authentication stack. The authentication request will fail, but the attempt will be recorded in the system journal.
This joint operation between authentication and identification sources in Linux makes it important to maintain a coherent information state. No wonder that in corporate environments it is often handled centrally: user and group identities are stored on a central server and sourced from it by local software, such as SSSD. In order to consume these POSIX users and groups, SSSD needs to be registered with the centralized authority or, in other words, enrolled into the domain. Domain enrollment enables not only identification and authentication of users: both the central server and the enrolled client machine can mutually authenticate each other and be sure they talk to the right authority when authenticating the user.
FreeIPA provides a stable mechanism for building a centralized domain management system. Each user account has POSIX attributes associated with it, and each user account is represented by a Kerberos principal. Kerberos authentication can be used to transfer the authentication state across multiple services and gives services a chance to discover user identity information beyond POSIX. It also makes strong linking between the POSIX-level identity and the authentication structure possible: for example, a Kerberos service may introspect a Kerberos ticket presented by a user’s client application to see how this user was originally authenticated, with a password or with some specific passwordless mechanism. Or, perhaps, to see that a client application performs operations on behalf of the user after claiming the user was authenticated through a different (non-Kerberos) mechanism.
Local user accounts lack this experience. Each individual service needs to reauthenticate the user again and again. Local system login: authenticate. Elevating privileges through SUDO? Authenticate again, if not explicitly configured otherwise. Details of the user session state, like how long this particular session has been active, are not checked by applications, making it harder to limit access. There is no information on how the user was authenticated. Finally, the overall user experience differs between local (standalone) and domain-enrolled authentication, making it harder to adjust and educate users.
Local authentication is also typically password-based. This is not a bad thing in itself, but depending on applications and protocols, worse choices could be made, security-wise. For example, the contemporary SMB 3.11 protocol is quite secure when authenticated with Kerberos. For non-Kerberos usage, however, it has to rely on the NTLM authentication protocol, which requires use of the RC4 stream cipher. There are multiple known attacks that break RC4-based encryption, yet it is still used in the majority of non-domain-joined SMB communications, simply because so far there was no practical alternative. To be precise, there has always been an alternative, the Kerberos protocol, but setting it up for individual isolated systems wasn’t practical.
The Kerberos protocol assumes three different parties: a client, a service, and a key distribution center (KDC). In corporate environments the KDC is part of the domain controller, while the client and the service are both domain members; their computers are enrolled in the domain. The client authenticates to the KDC and obtains a Kerberos ticket-granting ticket (TGT). It then requests a service ticket from the KDC by presenting its TGT, and finally presents this service ticket to the service. The service application, on its side, is able to decrypt the service ticket presented by the client and authenticate the request.
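On a typical domain-enrolled Linux client, this three-step exchange can be observed with the standard MIT Kerberos command-line tools (the realm, user, and service names below are placeholders):

```
# 1. Authenticate to the KDC and obtain a ticket-granting ticket (TGT)
kinit alice@EXAMPLE.COM

# 2. Present the TGT to the KDC and request a service ticket
#    (kvno fetches and caches a ticket for the named service principal)
kvno cifs/fileserver.example.com@EXAMPLE.COM

# 3. Inspect the credential cache: it now holds both the TGT and the
#    service ticket that a client application would present to the service
klist
```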
In the late 2000s Apple realised that individual computers typically have a small number of user accounts, so a KDC can run as a service on the computer itself. When both the client and the server are on the same computer, this works beautifully. The only problem is that when a user needs to authenticate to a different computer’s service, the client cannot reach the KDC hosted on the other computer, because it is not exposed to the network directly. Luckily, the MIT Kerberos folks had already thought about this problem a decade earlier: in 1997 the first idea was published for a Kerberos extension that allowed tunneling Kerberos requests over a different application protocol. This specification later became known as “Initial and Pass Through Authentication Using Kerberos V5 and the GSS-API” (IAKerb). An initial implementation for MIT Kerberos was done in 2009/2010, while Apple introduced it in 2007 to enable remote access to your own Mac across the internet. It shipped in Mac OS X 10.5 as the “Back to My Mac” feature and even got specified in RFC 6281, only to be retired from macOS in 2019.
Modern days
In the 2020s Microsoft continued its work on NTLM removal. In 2023 they announced that all Windows systems will have a local KDC as their local authentication source, accessible externally via selected applications through the IAKerb mechanism. By the end of 2024 we had only seen demos published by Microsoft engineers at various events, but this is a promising path forward. The presence of a local KDC in Windows raises an interoperability requirement: Linux systems will have to handle access to Windows machines in a standalone environment over the SMB protocol. Authentication is currently done with NTLM; since it will eventually be removed, we need to support the IAKerb protocol extension.
The NTLM removal for Linux systems requires several changes. First, the Samba server needs to learn how to accept authentication with the IAKerb protocol extension. Then, the Samba client code needs to be able to establish a client connection and advertise the IAKerb protocol extension. For kernel-level access, the SMB filesystem driver needs to learn how to use IAKerb as well; this will also need to be implemented in the user-space cifs-utils package. Finally, to be able to use the same feature in a pure Linux environment, we need to be able to deploy a Kerberos KDC locally, and to do it in an easy manner on each machine.
This is where we had an idea. If we are going to have a local KDC running on each system, maybe we should use it to handle all authentication and not just for the NTLM removal? This way we can make both the local and domain-enrolled user experience the same and provide access locally to a whole set of authentication methods we support for FreeIPA: passwords, smartcards, one-time passwords and remote RADIUS server authentication, use of FIDO2 tokens, and authentication against an external OAuth2 Identity Provider using a device authorization grant flow.
How “local” should a local KDC be?
On standalone systems it is often not desirable to run daemons continuously. It is also not desirable to expose these services to the connected network if they don’t really need to be exposed. A common approach to this problem is to provide a local inter-process communication (IPC) mechanism for talking to the server components. We chose to expose the local KDC via UNIX domain sockets. A UNIX domain socket is a well-known mechanism with known security properties. With the help of a systemd feature called socket activation, we can also start the local KDC on demand, when a Kerberos client connects to the UNIX domain socket. Since actual authentication requests don’t happen often on local systems, this helps to reduce memory and CPU usage in the long run.
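As a rough sketch, the socket-activation side could look like the following pair of systemd units (the unit names, socket path, and daemon flags here are illustrative, not necessarily what the localkdc project ships):

```
# krb5kdc.socket -- systemd listens on the KDC's UNIX domain socket
[Socket]
ListenStream=/run/krb5kdc/kdc.socket

[Install]
WantedBy=sockets.target

# krb5kdc.service -- started by systemd on the first client connection
[Service]
ExecStart=/usr/sbin/krb5kdc -n
```

With this arrangement the KDC process does not run at all until a Kerberos client actually connects, and systemd hands the already-accepted socket to the daemon.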
If the local KDC is only accessible over a UNIX domain socket, remote applications cannot reach it directly. They need help from a server application that uses the IAKerb mechanism to pass the communication through between a client and the KDC. This enables us to authenticate as a local user remotely, from a different machine. Due to how the IAKerb mechanism is designed and integrated into GSS-API, this only allows password-based authentication: anything that requires passwordless methods cannot obtain initial Kerberos authentication over IAKerb, at least at this point.
Here is a small demo on Fedora, using our localkdc tool to start a local KDC and obtain a Kerberos ticket upon login. The tickets can then be used effortlessly to authenticate to local services such as SUDO or Samba. For remote access we rely on Samba’s support for IAKerb and authenticate with GSSAPI, but the local smbclient uses a password first to obtain the initial ticket over IAKerb. This is purely a limitation of the current patches we have for Samba.
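In outline, the demo boils down to a handful of commands. The localkdc invocation and the exact flags below are illustrative rather than the tool’s documented interface, and the SUDO step assumes a PAM stack wired up to accept the Kerberos ticket:

```
# Start the on-demand local KDC and obtain an initial ticket
localkdc start
kinit alice

# The credential cache now serves Kerberos-aware local services
klist
sudo -i                                   # no password prompt, assuming a
                                          # Kerberos-aware PAM configuration
smbclient //localhost/share --use-kerberos=required
```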
Pause here and think about the implications. We have an initial Kerberos ticket from the local system. The Kerberos ticket embeds details of how the authentication happened. We might have used a password to authenticate, or a smartcard, or any other supported pre-authentication method. We can reuse the same methods FreeIPA already provides in the centralized environment.
The Kerberos ticket can also contain details about the user session, including current group membership. It does not currently have that in the local KDC case, but we aim to fix that. This ticket can be used to authenticate to any GSS-API or Kerberos-aware service on this machine. If a remote machine accepts Kerberos, it could theoretically accept a ticket presented by a client application running on the local machine as well; only, to do that, it needs to be able to communicate with our local KDC, which it cannot access.
Trust management
Luckily, a local KDC deployment is a full-featured Kerberos realm and thus can establish cross-realm agreements with other Kerberos realms. If two “local” KDC realms have trust agreements between each other, they can issue cross-realm Kerberos tickets which applications can present over IAKerb to the remote “local” KDC. Then a Kerberos ticket to a service running on the target system can be requested and issued by the system’s local KDC.
Thus, we can achieve passwordless authentication locally on Linux systems and gain the ability to establish peer-to-peer agreements across multiple systems, allowing authentication requests to flow and operate on commonly agreed credentials. The problem now moves to the management area: how do we manage these peer-to-peer agreements and permissions in an easy way?
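In stock MIT Kerberos, such a peer-to-peer agreement is expressed as cross-realm krbtgt principals that share an identical key on both KDCs. Created by hand, a trust between two hypothetical local realms looks roughly like this (realm names are placeholders, and the same password must be entered on both sides):

```
# On BOTH KDCs: let principals from ALICE.TEST obtain tickets
# for services in BOB.TEST (keys must match on the two KDCs)
kadmin.local addprinc krbtgt/BOB.TEST@ALICE.TEST

# For a two-way trust, the mirrored principal is also needed on both sides
kadmin.local addprinc krbtgt/ALICE.TEST@BOB.TEST
```

Repeating this manual dance, and keeping the keys in sync, for every pair of machines is exactly what makes ad-hoc cross-realm trusts impractical with today’s tooling.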
Systemd User/Group API support
The MIT Kerberos KDC implementation provides a flexible way to handle Kerberos principals’ information. A database backend (KDB) implementation can be dynamically loaded and replaced. This is already used by both FreeIPA and Samba AD to integrate the MIT Kerberos KDC with their own database backends based on different LDAP server implementations. For the local KDC use case, running a full-featured LDAP server is neither required nor intended. However, it would be great if different applications could expose parts of the data needed by the KDB interfaces and cooperate together. Then a single KDB driver implementation could be used to streamline and provide a uniform implementation of Kerberos-specific details in a local KDC.
One of the promising interfaces to achieve this is the User/Group record lookup API via varlink from systemd. Varlink allows applications to register themselves and listen on UNIX domain sockets for communication, similar to D-Bus but with much less implementation overhead. The User/Group API technically also allows merging data coming from different sources when an application inquires about the information. “Technically”, because the io.systemd.Multiplexer API endpoint currently does not support merging non-overlapping data representing the same account from multiple sources. Once that becomes possible, we could combine the data dynamically and interact with users on demand when corresponding requests come in. Or we can implement our own blending service.
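For a sense of what this interface looks like, a recent systemd lets you query the multiplexer endpoint directly over varlink; this is the same User/Group record API a KDB driver could consume (the user name is a placeholder):

```
# Ask systemd-userdbd's multiplexer for a full JSON user record,
# aggregated from all registered userdb services
varlinkctl call /run/systemd/userdb/io.systemd.Multiplexer \
    io.systemd.UserDatabase.GetUserRecord \
    '{"userName": "alice", "service": "io.systemd.Multiplexer"}'
```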
Blending data requests from multiple sources within MIT KDC needs a specialized KDB driver. We certainly don’t want this driver to duplicate the code from other drivers, so making these drivers stackable would be a good option. Support for one level of stacking has been merged to MIT Kerberos through a quickly processed pull request and will be available in the next MIT Kerberos release. This allows us to have a single KDB driver that loads other drivers specialized in storing Kerberos principals and processing additional information like MS-PAC structure or applying additional authorization details.
Establishing trusts
If Alice and Bob are on the same network and want to exchange some files, they could do this using SMB and Samba. But for Alice to be able to authenticate on Bob’s machine, they would need to establish a Kerberos cross-realm trust. With the current tooling this is a complex task, and we need to make it more accessible for users. We want to allow users to request a trust on demand and validate these requests interactively. We also want trusts to exist for a limited timeframe, expiring automatically or removed manually.
If we have Kerberos principal lookup on demand through a curated varlink API endpoint, we can also have a user-facing service that initiates the trust between two machines on demand. Imagine a user trying to access an SMB share on one desktop system, which triggers a pop-up to establish a trust relationship with the corresponding local KDC on the remote desktop system. Both owners of the systems would be able to confirm out of band that the provided information is correct and can be trusted. Once that is done, we can return the details of the specific Kerberos principal that represents this trust relationship. We can limit the lifetime of this agreement so that it disappears automatically in an hour, a day, or a week.
Current state of local authentication hub
We started with two individual implementation paths early in 2024:
- Support IAKerb in MIT Kerberos and Samba
- Enable MIT Kerberos to be used locally without network exposure
MIT Kerberos has had support for the IAKerb protocol extension for more than a decade, but since Microsoft introduced some changes to the protocol, those changes needed to be integrated as well. This was completed during summer 2024, though no upstream release with it is available yet. MIT Kerberos typically releases new versions yearly in January, so we hope to see some updates in early 2025.
Samba integration with IAKerb is currently under implementation. Originally, Microsoft was planning to release Windows 11 and Windows Server 2025 with IAKerb support enabled during autumn 2024. However, the Windows engineering team faced some issues, and IAKerb is still not enabled in the Windows Server 2025 and Windows 11 releases. We are looking forward to getting access to Windows builds that enable IAKerb support, to ensure interoperability before merging the Samba changes upstream. We also need to complete the Samba implementation to properly support locally-issued Kerberos tickets, and not only password-based acquisition of the ticket.
Meanwhile, our cooperation with the MIT Kerberos development team led to advancements in the local KDC support. The MIT Kerberos KDC can now run over a UNIX domain socket. On systemd-enabled systems we also support socket activation, transforming the local KDC into an on-demand service. We will continue our work on a dynamic database for the local KDC, to allow on-demand combination of resources from multiple authoritative local sources (Samba, FreeIPA, SSSD, local KDC, and a future dynamic trust application).
For experiments and ease of deployment, a new configuration tool was developed: localkdc. The tool is available at localkdc, and a COPR repository can be used to try the whole solution on Fedora.
If you want to try this in a simple setup, you might be interested in a tool that we initially developed for FreeIPA: FreeIPA local tests. It allows provisioning and running a complex test environment in podman containers. The video of the local KDC usage was actually generated automatically by the scripts from here.