
Saturday, 21 June 2025

It took a year for me to actually make a release, but KTimeTracker 6.0.0 is now out!

Major changes

  • The headline change is that KTimeTracker has been ported to Qt6. For end users, this means that up-to-date Linux distributions which had orphaned KTimeTracker will get the package back once a package maintainer steps up.

  • KTimeTracker has long had a (currently) X11-exclusive feature where it detects the virtual desktop you’re on and uses that to start/stop tracking a task. This does not work on Wayland or Windows, and now the feature won’t show up on either platform, so you don’t attempt to use something that doesn’t work!

Tuesday, 17 June 2025

Car Game

This project began as a casual college game I developed in my second year, using Pygame. The idea was simple. You’re driving a car, and your job is to survive enemy attacks, collect energy to stay alive, and shoot down as many opponents as you can. The more you destroy, the higher your score.

The core gameplay loop was designed in Pygame and includes:

  • A player car that moves left and right.
  • Opponent cars that spawn and rush toward the player.
  • Energy pickups that keep your car alive.
  • Bullets you fire to take down enemy cars.

Each component is managed by its respective class: MyCar, Opponent, Fire, and Explosion.

The original version used keyboard input for movement and shooting. The objective was to survive as long as possible while scoring points by destroying opponents.
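
For context, the manual-control loop looked roughly like the sketch below. This is a simplified reconstruction rather than the actual code; the mycar and Fire names follow the classes above, and the fires list is an assumed bookkeeping detail.

import pygame

# Simplified sketch of the keyboard-driven loop (not the real code).
clock = pygame.time.Clock()
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    keys = pygame.key.get_pressed()
    if keys[pygame.K_LEFT]:
        mycar.move("left")
    elif keys[pygame.K_RIGHT]:
        mycar.move("right")
    if keys[pygame.K_SPACE]:
        fires.append(Fire(mycar.rect.centerx, mycar.rect.top))  # shoot

    clock.tick(60)  # cap the frame rate at 60 FPS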

While building the game, I found myself knee-deep in things I hadn’t anticipated—like why a car would randomly vanish mid-frame, or why every collision either did nothing or ended in total chaos. I spent hours tweaking bounding rectangles, trying to get explosions to appear in the right place, and making sure enemy cars didn’t spawn on top of each other. Most of my time went into figuring out how to reset things properly after a crash or making sure the game didn’t freeze when too many things happened at once. It was messy, confusing, and at times exhausting, but weirdly satisfying when everything finally came together.

Recently, I revisited this project with the idea of automating it. I wanted to see if the car could make its own decisions—to dodge, shoot, or stay put—all without human input. That’s where Monte Carlo Tree Search (MCTS) came in. It’s a decision-making algorithm that works particularly well in strategic games where the search space is large and rewards are sparse or delayed, which makes it a good fit for a chaotic survival game like mine.

Implementation Details

The first step was to abstract the game state into a simplified object. I created a GameState class in mcts_car_shooter.py that captures:

  • My car’s x position.
  • Remaining energy and current score.
  • Positions and energy levels of alive opponents.
  • Fire coordinates (optional) and energy pickup position.

This allowed the MCTS algorithm to run without needing to interact with the actual rendering or physics code.
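
A minimal sketch of what that state object can look like (field names here are illustrative, not necessarily the exact ones in mcts_car_shooter.py):

from dataclasses import dataclass, field

@dataclass
class GameState:
    car_x: float                                    # my car's x position
    energy: int                                     # remaining energy
    score: int                                      # current score
    opponents: list = field(default_factory=list)   # (x, y, energy) per alive opponent
    fires: list = field(default_factory=list)       # fire coordinates (optional)
    pickup: tuple | None = None                     # energy pickup position, if any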

In the main game loop, every 5 frames, I pass the current game state to the MCTS engine:

# Re-plan only every 5th frame so the AI doesn't stall rendering
if frame_counter % 5 == 0:
    state = get_game_state_from_main(mycar, energy, score, list(opponent))
    action = mcts_search(state, computation_time=0.05)  # ~50 ms budget per decision

The result is one of four possible actions: "left", "right", "shoot", or "none".

Once the decision is made, the game responds accordingly:

if action == "left":
    mycar.move("left")
elif action == "right":
    mycar.move("right")
elif action == "shoot":
    fire_sound.play()

So here’s what’s actually going on behind the scenes every time the AI makes a move. The MCTS algorithm starts by traversing the existing tree of game states to find the most promising node to explore—this is the selection step. Once it lands on that node, it simulates one new possible action from there, which is the expansion phase. From that new state, it plays out a few random steps of the game using a basic policy (like “shoot if you see enemies” or “don’t move if energy is low”)—this is the simulation part. And then finally, based on how well or badly that rollout went, it backpropagates the reward back up the tree so that decisions that led to good outcomes get reinforced and are more likely to be chosen in the future. Each loop tries to balance exploration (trying out new stuff) and exploitation (doing what’s already known to work), and this constant balance somehow ends up producing surprisingly smart behavior out of nothing but random simulations and reward math.
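
In code, those four phases map onto a loop like the one below. This is a condensed illustration rather than the exact contents of mcts_car_shooter.py: it assumes a GameState.step(action) method that returns the successor state, and a rollout(state) helper that plays a few random steps with the basic policy and returns a reward.

import math
import random
import time

ACTIONS = ["left", "right", "shoot", "none"]

class Node:
    def __init__(self, state, parent=None, action=None):
        self.state, self.parent, self.action = state, parent, action
        self.children, self.visits, self.reward = [], 0, 0.0

    def ucb1(self, c=1.4):
        # Balance exploitation (average reward) against exploration
        if self.visits == 0:
            return float("inf")
        return (self.reward / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts_search(state, computation_time=0.05):
    root = Node(state)
    deadline = time.time() + computation_time
    while time.time() < deadline:
        # 1. Selection: walk down to the most promising leaf
        node = root
        while node.children:
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: once a leaf has been visited, grow one level of children
        if node.visits > 0:
            node.children = [Node(node.state.step(a), node, a) for a in ACTIONS]
            node = random.choice(node.children)
        # 3. Simulation: random rollout from the new state
        reward = rollout(node.state)
        # 4. Backpropagation: credit every node on the path to the root
        while node is not None:
            node.visits += 1
            node.reward += reward
            node = node.parent
    # Return the most-visited action at the root
    return max(root.children, key=lambda n: n.visits).action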

After integrating MCTS, the game now plays itself. The car intelligently avoids enemy fire, conserves energy, and shoots at the right moments. It’s not perfect—but it’s good enough to survive for a few minutes and rack up a decent score.

However, one limitation of the current setup is that the AI doesn’t retain any memory of past games—it starts from scratch every time the game restarts. The MCTS algorithm only simulates forward from the current state and doesn’t learn or adapt across episodes. So while it can make fairly smart decisions in the moment, it has no long-term strategy or evolving understanding of what works best over time. There’s no persistence of experience, which means it can’t build on previous runs to improve future performance. This makes it efficient for one-off decisions but not ideal for learning patterns or refining behavior over multiple plays.

Next, I’m planning to take things a bit further. I want to train a policy network on the trajectories generated by MCTS so the model can learn from past simulations and make better long-term decisions without needing to simulate every time. I’m also thinking of adding a simple GUI to visualize how the MCTS tree grows and changes in real time—because watching the AI think would honestly be super fun. And eventually, I’d like to give players the option to toggle between AI-controlled and manual play, so they can either sit back and watch the car do its thing or take control themselves. You can find the full implementation on my GitHub. Thanks for reading!

Monadic Interpreter in Haskell

Back in my second year of college, I had just started exploring functional programming. I was picking up Haskell out of curiosity - it felt different, abstract, and honestly a bit intimidating at first. Around the same time, I was also diving into topics like context-free grammars, automata theory, parse trees, and the Chomsky hierarchy - all the foundational concepts that explain how programming languages are parsed, interpreted, and understood by machines.

Somewhere along the way, it hit me: what if I could build something with both? What could be more fun than writing an interpreter for an imperative programming language using a functional one? That idea stuck - and over the next few weeks, I set out to build a purely functional monadic interpreter in Haskell.

I designed the grammar for the language myself, mostly inspired by Python. I wanted it to support loops, conditionals, variable assignments, print statements, and basic arithmetic, boolean, and string operations. It even has a “++” operator for string concatenation. Writing the grammar rules involved figuring out how to model nested blocks, expressions with precedence, and side-effect-free evaluation. I built the entire thing using monadic parser combinators—no parser generators or external libraries, just Haskell’s type system and some stubbornness.

Here’s a rough look at the grammar that powers the interpreter:

Block 
    : { Part }

Part 
    : Statement Part
    | IfStatement Part
    | WhileLoop Part
    | Comment String Part
    | epsilon

Statement 
    : var = AllExpr;
    | print( AllExpr );

AllExpr 
    : Sentences ++ AllExpr
    | Sentences

Sentences
    : string
    | LogicExpr

IfStatement
    : if ( LogicExpr ) Block else Block

WhileLoop
    : while ( LogicExpr ) Block 

LogicExpr
    : BoolExpr && LogicExpr
    | BoolExpr || LogicExpr
    | BoolExpr

BoolExpr 
    : True
    | False
    | ArithBoolExpr

ArithBoolExpr
    : Expr > Expr
    | Expr < Expr
    | Expr == Expr
    | Expr != Expr
    | Expr

Expr 
    : HiExpr + Expr
    | HiExpr - Expr
    | HiExpr

HiExpr 
    : SignExpr * HiExpr
    | SignExpr / HiExpr
    | SignExpr % HiExpr
    | SignExpr 

SignExpr
    : int
    | ( AllExpr )
    | var

The interpreter parses the source code using this grammar, builds an abstract syntax tree, and evaluates it by simulating an environment. There’s no mutation—it just returns a new environment every time a variable is assigned or a block is executed.
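
The same idea can be sketched in a few lines of Python (purely a concept illustration; the real interpreter does this in Haskell): assignment produces a new environment instead of mutating the old one.

# Concept sketch: evaluation threads an immutable environment through.
def eval_assign(env, name, value):
    new_env = dict(env)   # copy the old bindings, never mutate them
    new_env[name] = value
    return new_env

env0 = {}
env1 = eval_assign(env0, "i", 5)
env2 = eval_assign(env1, "i", env1["i"] - 1)
print(env0, env1, env2)  # {} {'i': 5} {'i': 4}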

Running it is simple enough: after compiling with GHC, the interpreter reads the program from stdin and prints the resulting variable bindings and any output generated by print() statements.

ghc -o interpreter interpreter.hs
./interpreter

Here’s a sample program to show how it works:

  
    { 
        i = 5;
        a = (4 < 3) || 6 != 7;
        print(a);

        # First While! #
        while(i != 0 && a) 
        { 
            print(i); 
            i = i - 1; 
        }

    }

    Output : a True
             i 0
             print True 5 4 3 2 1 

Once I had the interpreter working, I wanted to make it a bit more fun to interact with. So I built a small GUI in Python using tkinter. It’s nothing fancy—just a textbox to enter code, a button to run it, and an output area to display the result. When you click “Run,” the Python script sends the code to the Haskell interpreter and prints whatever comes back.

The entire thing—from parsing to evaluation—is written in a purely functional style. No mutable state, no IO hacks, no shortcuts. Just expressions flowing through types and functions. It’s probably not the fastest interpreter out there, but writing it did teach me a lot about how languages work under the hood.

Sunday, 15 June 2025

This is the release schedule the release team agreed on:

https://community.kde.org/Schedules/KDE_Gear_25.08_Schedule

Dependency freeze is in around 2 weeks (July 3) and feature freeze one week after that. Get your stuff ready!

🎉 New Clazy Release: Stability Boost & New Checks!

We’re excited to roll out a new Clazy release packed with bug fixes, a new check, and improvements to existing checks. This release includes 34 commits from 5 contributors.


🔍 New Features & Improvements

  • New Check: readlock-detaching
    Detects unsafe and likely unwanted detachment of member containers while holding a read lock, for example when calling .first() on the mutable member instead of .constFirst().

  • Expanded Support for Detaching Checks
    Additional methods now covered when checking for detaching temporary or member lists/maps. This includes reverse iterators on many Qt containers and keyValueBegin/keyValueEnd on QMap. All those methods have const counterparts that allow you to avoid detaching.

  • Internal Changes
    With this release, Clang 19 or later is a required dependency. Older versions needed compatibility logic and were not thoroughly tested on CI. If you are on an older version of a Debian-based distro, consider using https://apt.llvm.org/ and compiling Clazy from source ;)


🐞 Bug Fixes

  • install-event-filter: Fixed crash when no child exists at the given depth.
    BUG: 464372

  • fully-qualified-moc-types: Now properly evaluates enum and enum class types.
    BUG: 423780

  • qstring-comparison-to-implicit-char: Fixed an edge case where fragile assumptions were made about the function definition.
    BUG: 502458

  • fully-qualified-moc-types: Now evaluates complex signal expressions like std::bitset<int(8)> without crashing. #28

  • qvariant-template-instantiation: Crash fixed for certain template patterns when using pointer types.


Also, thanks to Christoph Grüninger, Johnny Jazeix, Marcel Schneider and Andrey Rodionov for contributing to this release!

From Refactor to Functioning Plugin

Hi again! Week two was all about turning last week’s refactored EteSync resource and newly separated configuration plugin into a fully working, stable component. While the initial plugin structure was in place, this week focused on making the pieces actually work together — and debugging some tricky issues that emerged during testing.


Removing QtWidgets Dependencies with KNotification

While testing, I discovered that the original EteSync resource code used QDialog and KMessageBox directly for showing error messages or status updates. These widget-based UI elements are too heavy for a background resource and conflict with the goal of keeping the resource lightweight and GUI-free.

To address this, I replaced them with a much cleaner approach: creating KNotification instances directly. This allows the resource to send system notifications (like “EteSync sync successful” or error messages) through the desktop’s notification system, without relying on any QtWidgets. As a result, the resource is now fully compatible with non-GUI environments and no longer needs to link against the QtWidgets library.


Refactoring Settings Management for Plugin Compatibility

Another major change this week involved how the resource handles its settings.

Previously, the configuration was implemented as a singleton, meaning both the resource and its configuration plugin were sharing a single instance of the settings object. This worked in the old, tightly-coupled model, but caused conflicts in the new plugin-based architecture.

To fix this, I updated the settings.kcfgc file to set singleton=false. This change allows the resource and the configuration plugin to maintain separate instances of the settings object, avoiding interference. I also updated both etesyncclientstate.cpp and etesyncresource.cpp to properly manage their respective configurations.
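
The relevant entry looks something like this (a sketch following KConfigXT's .kcfgc conventions; the real file contains more options):

# settings.kcfgc (sketch; only the relevant option shown)
File=settings.kcfg
ClassName=Settings
Singleton=false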


Solving the “Zombie Window” Issue

One final issue emerged after separating the UI: the configuration wizard now appears in a separate window from the main Akonadi configuration dialog. When the wizard is completed and closes, the original configuration window — now empty and disconnected — remains open.

Clicking buttons on this leftover window causes terminal errors, since it no longer communicates with a valid process. This results in a confusing and potentially buggy experience for users.


What’s Next?

My next task is to figure out a clean way to close the original parent window when the wizard completes, ensuring a smooth and error-free configuration flow. In addition to that, I’ll begin testing the full integration between the EteSync resource and its configuration plugin to ensure everything works correctly — from saving and applying settings to triggering synchronization. This will help verify that the decoupling is both functionally solid and user-friendly.

Friday, 13 June 2025

Improved Black Box Testing with Cucumber-CPP

Learn how to use Cucumber-CPP and Gherkin to implement better black box tests for a C++ library. We developed a case study based on Qt OPC UA.

Continue reading Improved Black Box Testing with Cucumber-CPP at basysKom GmbH.

Intro

One of the largest hurdles in any job or activity is getting your resources set up. Luckily for you, Krita has some of the most detailed and straightforward setup documentation around. In this blog I will go over my experience setting up Krita and provide quick links to answer the questions you may have during setup.

Setup

One Stop Shop for Links

  • Download and Install Ubuntu
  • Create KDE account
  • Fork Krita Repository
  • Follow build instructions
  • If you use Qt Creator to build and run Krita, follow this video
  • Krita Chat - create an account, join the chat room, introduce yourself, and ask questions

The goal is to get Krita running on your machine. For my setup, and for simplicity of instructions, I use Oracle's VirtualBox to run a virtual machine (VM) with Ubuntu on my Windows machine. You can use any VM host for setup. The Follow build instructions link should be straightforward to follow. The great thing about these instructions is that you don't need to know a lot about Docker or C++ yet, but you will need to understand some basic Linux and Git commands.

In the list of links above, each hyperlink title tells you what to do; follow them in order.

My Experience

When I set up Krita for the first time, I felt a sense of accomplishment. Not only was I able to set up Krita, but I also deepened my understanding of Git and learned about Docker, VMs, and Qt.

I think the biggest takeaway from setting up Krita is to never give up: ask questions in chat, and ask yourself "What do I not understand?" before moving on to the next instruction.

Conclusion

Setting up Krita is as simple as you make it. The hardest part is finding the resources to be successful. I hope this blog post simplifies setup for newcomers and experienced users alike.

Contact

To anyone reading this, please feel free to reach out to me. I’m always open to suggestions and thoughts on how to improve as a developer and as a person.
Email: ross.erosales@gmail.com
Matrix: @rossr:matrix.org

Thursday, 12 June 2025

To briefly recap, Natalie Clarius and I applied for an NLnet grant to improve gesture support in Plasma, and they accepted our project proposal. We thought it would be a good idea to meet in person and workshop this topic from morning to evening for three days in a row. Props to Natalie for taking the trip all the way from Germany to my parents' place, where we were kindly hosted and deliciously fed.

Our project plan starts with me adding stroke gesture support to KWin in the first place, while Natalie works on making multi-touch gestures customizable. Divvying up the work along these lines allows us to make progress independently without being blocked on each other's work too often. But of course there is quite a bit of overlap, which is why we applied to NLnet together as a single project.

The common thread is that both kinds of gestures can result in similar actions being triggered, for example:

  • Showing Plasma's Window Overview
  • Starting an app / running a custom command
  • Invoking an action inside a running app

So if we want to avoid duplicating lots of code, we'll want a common way to assign actions to a gesture. We need to know what to store in a config file, how Plasma code will make use of it, and how System Settings can provide a user interface that makes sense to most people. These are the topics we focused on. Time always runs out faster than you'd like, ya gotta make it count.

Three days in a nutshell

Getting to results is an iterative process. You start with some ideas for a good user experience (UX) and make your way to the required config data, or you start with config data and make your way to actual code, or you hit a wall and start from the other end going from code to UX until you hit another wall again. Rinse and repeat until you like it well enough to ship it.

On day 1, we:

  • Explored some code together, primarily in:
    • KWin, which recognizes gestures;
    • KGlobalAccelD, which manages global shortcut configurations;
    • the KGlobalAccel framework, which asks KGlobalAccelD to register a global shortcut;
    • and the Shortcuts page in System Settings, a.k.a. kcm_keys.
  • Figured out why Natalie's KWin session wouldn't produce systemd logs.
  • Collected a comprehensive list of gestures (and gesture variants) to support.

On day 2, we:

  • Collected a broad list of actions (and action types) to invoke when a gesture is triggered.
  • Sketched out UI concepts for configuring gestures.
  • Weren't quite satisfied, came up with a different design which we like better.
  • Discussed how we can automatically use one-to-one gesture tracking when an assigned action supports it.
  • Drafted a config file format to associate (gesture) triggers with actions.

On day 3, we:

  • Drafted a competing config file format which adds the same data to the existing kglobalshortcutsrc file instead.
  • Reviewed existing gesture assignments and proposals.
  • Created a table with proposed default gesture assignments (to be used once gestures are configurable).
  • Collected remaining questions that we didn't get to.

What I just wrote is a lie, of course. I needed to break up the long bullet point list into smaller sections. In reality we jumped back and forth across all of these topics in order to reach some sort of conclusion at the end. Fortunately, we make for a pretty good team and managed to answer a good amount of questions together. We even managed to make time for ice cream and owl spottings along the way.

Since you asked for it, here’s a picture of Natalie and me drawing multi-touch gestures in the air.

Photo of the two mini-sprint participants

Next up in gestures

So there are some good ideas; now we need to make them real. Since the sprint, I’ve been trying my hand at more detailed mockups for our rough design sketches. This always raises a few more issues, which we want to tackle before asking for opinions from KWin maintainers and Plasma’s design community. There isn’t much to share with the community yet, but we’ll involve other contributors before too long.

Likewise, my first KWin MR for stroke gesture infrastructure is not quite there yet, but it's getting closer. The first milestone will be to make it possible for someone to provide stroke gesture actions. The second milestone will be for Plasma/KWin to provide stroke gesture actions by itself and offer a nice user interface for it.

Baby steps. Keep chiseling away at it and trust that you'll create something decent eventually. This is not even among the largest efforts in KDE, and yet there are numerous pieces to fit and tasks to tackle. Sometimes I'm frankly in awe of communities like KDE that manage to maintain a massive codebase together, with very little overhead, through sheer dedication and skill. Those donations don't go to waste.

At this point I would also like to apologize to anyone who was looking for reviews or other support from me elsewhere in Plasma (notably, PowerDevil) which I haven't helped with. I get stressed when having to divide my time and focus between different tasks, so I tend to avoid it, in the knowledge that someone or something will be left wanting. I greatly admire people who wear lots of different hats simultaneously, and it would surely be so nice to have the aptitude for that, but it kills me so I have to pick one battle at a time.

Right now, that's gestures. Soon, a little bit of travel. Then gestures again. Once that's done, we'll see what needs work most urgently or importantly.

Take care & till next time!