Welcome to a new issue of "This Week in KDE Apps"! Every week we cover as much as possible of what's happening in the world of KDE apps. This issue is once again a bit delayed due to some personal travel.
Releases
Kaidan 0.11.0 is out. This new version of KDE's XMPP client brings Qt6 support as well as a few new features.
For a complete overview of what's going on, visit KDE's Planet, where you can find all KDE news unfiltered directly from our contributors.
Get Involved
The KDE organization has become important in the world, and your time and
contributions have helped us get there. As we grow, we're going to need
your support for KDE to become sustainable.
You can help KDE by becoming an active community member and getting involved.
Each contributor makes a huge difference in KDE — you are not a number or a cog
in a machine! You don’t have to be a programmer either. There are many things
you can do: you can help hunt and confirm bugs, even maybe solve them;
contribute designs for wallpapers, web pages, icons and app interfaces;
translate messages and menu items into your own language; promote KDE in your
local community; and a ton more things.
You can also help us by donating. Any monetary
contribution, however small, will help us cover operational costs, salaries,
travel expenses for contributors and in general just keep KDE bringing Free
Software to the world.
To get your application mentioned here, please ping us on Invent or on Matrix.
The Amarok Development Squad is happy to announce the immediate availability of Amarok 3.2.2, the second bugfix release for Amarok 3.2 "Punkadiddle"!
3.2.2 features some minor bugfixes and improvements for building Amarok on non-UNIX systems and without X11 support. Additionally, a 16-year-old feature request has been fulfilled.
Concluding years of Qt5 porting and polishing work, Amarok 3.2.2 is likely to be the last version with Qt5/KF5 support, and
it should provide a nice and stable music player experience for users on various systems and distributions.
The development in git, on the other hand, will soon switch the default configuration to Qt6/KF6, and focus for the next 3.3 series will be to ensure that everything functions nicely with the new Qt version.
Changes since 3.2.1
FEATURES:
Try to preserve collection browser order when adding tracks to playlist (BR 180404)
CHANGES:
Allow building without X11 support
Various build fixes for non-UNIX systems
BUGFIXES:
Fix DAAP collection connections, browsing and playing (BR 498654)
Fix first line of lyrics.ovh lyrics missing (BR 493882)
Getting Amarok
In addition to source code, Amarok is available for installation from many distributions' package
repositories, which are likely to get updated to 3.2.2 soon, as well as
from the Flatpak available on Flathub.
Welcome to a new issue of "This Week in Plasma"! Every week we cover as much as possible of what's happening in the world of KDE Plasma and its associated apps like Discover, System Monitor, and more.
Plasma 6.3 is out! So far the response has been very good, but of course a few issues were found once it was in the wild.
Maybe the worst issue is something that KWin devs have actually tracked down to a bug in the GCC compiler, of all things! It only manifests with the kind of release build configurations that many distros use, and also only with GCC 15 and an ICC profile set up. We've informed distros how to work around it until the root cause is understood and GCC gets patched, or KWin devs are able to guard against it internally.
Unfortunately this is a sign that we did not in fact get enough beta testers, since the issue should have been obvious to people in affected environments. Another sign is that most of the regressions are hardware-related. We've got them fixed now, but we need people to be testing the betas with their personal hardware setups! There's simply no way for a small pool of KDE developers to test all of these hardware setups themselves.
Anyway, with those caveats aside, it looks like it's been a pretty smooth release! Building on it, there have been a number of positive changes to the Media Player widget, Weather Report Widget, Info Center Energy page, and touchscreen support.
Notable New Features
Plasma 6.4.0
The Media Player widget now features a playback rate selector when the source media makes this feature available using its MPRIS implementation. (Kai Uwe Broulik, link)
Notable UI Improvements
Plasma 6.3.1
Improved the presentation of search results for the new DWD weather provider in the Weather Report widget. (Ismael Asensio, link 1 and link 2)
The BBC Weather provider has recently improved the quality of their forecast data, so we've changed the weather widget to no longer hide search results from it if there are results from other providers as well. (Ismael Asensio, link)
The updates list in Discover is now sorted case-insensitively. (Aleix Pol Gonzalez, link)
Welcome Center now remembers its window size (and on X11, position) across launches, like most of our other QML app windows these days. (Tracey Clark, link)
Plasma 6.4.0
Improved the graph view on Info Center's Energy page: Now it's in a card, like in System Monitor, and has more normal and visually pleasing margins. (Ismael Asensio, link 1 and link 2)
Spectacle has gained support for pinch-zooming in its screenshot viewer window, which can be especially useful when annotating using a touchscreen. (Noah Davis, link)
You can now actually scroll through the Widget Explorer with a single-finger touchscreen scroll gesture, because dragging widgets using touch now requires a tap-and-hold. (Niccolò Venerandi, link)
Notable Bug Fixes
Plasma 6.3.1
Fixed a regression that would cause KWin to crash in the X11 session when hotplugging or switching between HDMI screens. (Fushan Wen, link 1 and link 2). Consider it a reminder for everyone still on X11 to try the Wayland session again, because the X11 session receives almost no testing from developers anymore!
Fixed a regression that could cause KWin to sometimes crash hours after hotplugging a Thunderbolt dock. (Xaver Hugl, link)
Fixed a regression that would cause KWin to crash when you interact with the Alt+Tab task switcher while using software rendering. (Vlad Zahorodnii, link)
Fixed a regression that could cause certain Qt-based apps to crash on launch when using the Breeze style. (Antonio Rojas, link)
Fixed a case where Plasma might sometimes crash when clicking on the Networks icon in the System Tray, especially when built using GCC 15. (David Edmundson, link)
Fixed a regression that caused the new "Prefer efficiency" ICC color mode setting to not actually improve efficiency on certain hardware. (Xaver Hugl, link)
Panels in auto-hide mode no longer inappropriately hide again when you start dragging Task Manager tasks to re-order them. (Tino Lorenz, link)
The new bar separator between the date and time in the Digital Clock widget no longer appears inappropriately when the date has been intentionally suppressed. (Christoph Wolk, link)
Fixed an issue that broke the layout of the device tiles on Info Center's Energy page when using a larger-than-default font size or loads of devices with batteries. (Ismael Asensio, link)
Fixed two keyboard navigation issues in the Power and Battery widget. (Ismael Asensio, link 1 and link 2)
Fixed an older issue that prevented the keyboard brightness controls on certain laptops from being visible immediately. (Nicolas Fella, link)
Fixed an older issue that caused Info Center's Energy page to vibrate disturbingly at certain window sizes. It was, heh heh heh… very high energy! (Ismael Asensio, link)
Qt 6.8.3
Committed a better Qt fix for the issue whereby the first click after dragging Task Manager tasks got ignored. (David Redondo, link)
86 KDE bugs of all kinds fixed over the past week. Full list of bugs
How You Can Help
KDE has become important in the world, and your time and contributions have helped us get there. As we grow, we need your support to keep KDE sustainable.
You can help KDE by becoming an active community member and getting involved somehow. Each contributor makes a huge difference in KDE — you are not a number or a cog in a machine!
You don’t have to be a programmer, either. Many other opportunities exist: you can help triage and confirm bugs, contribute designs, translate the interface, promote KDE in your local community, and much more.
You can also help us by making a donation! Any monetary contribution — however small — will help us cover operational costs, salaries, travel expenses for contributors, and in general just keep KDE bringing Free Software to the world.
The second maintenance release of the 24.12 cycle is out with multiple bug fixes. Notable changes include fixes for crashes, UI resizing issues, effect stack behavior, proxy clip handling, and rendering progress display, along with improvements to Speech-to-text in Flatpak and macOS packages.
Don’t try to update monitor overlay if effect is disabled. Commit.
Fix crash setting empty name for folder. Commit. Fixes bug #499070.
Better fix for expand library clips broken with proxies. Commit. Fixes bug #499171.
Try to fix Whisper models folder on Flatpak. Commit. See bug #499012.
Don’t try to delete ui file elements on subtitlemanager close. Commit.
Fix effect stack widget not properly resizing. Commit.
Ensure built-in effects reset button is enabled. Commit.
Ensure vidstab external files are correctly listed and archived. Commit.
Added 2 decimals for the rotation parameter (addresses bug #498586). Commit.
Some powerful bullies want to make life impossible for editors. It looks like the foundation has the right tools in store to protect those contributors.
Alright, this piece is full of vitriol… And I like it. CES has clearly become a mirror of the absurdities our industry is going through. The vision proposed by a good chunk of these companies is unappealing and lazy.
Microsoft Study Finds AI Makes Human Cognition “Atrophied and Unprepared”
Tags: tech, ai, machine-learning, gpt, ux, cognition, research
This clearly points to UX challenges around LLM use. For some tasks, the user’s critical thinking must be fostered, otherwise bad decisions will ensue.
Of course, it would be less of a problem if explainability were better with such models. That’s not the case though, which means they can spew very subtle propaganda. This is bound to become even more of a political power tool.
This is an interesting way to frame the problem. We can’t rely too much on LLMs for computer science problems without losing important skills and hindering learning. This is to be kept in mind.
How Does Ada’s Memory Safety Compare Against Rust?
Tags: tech, system, rust, ada, memory
Interesting comparison. Ada doesn’t fare as well as I’d have expected as soon as pointers are in the mix… but there is a twist: you can go a very long way without pointers in Ada.
This is obviously all good news on the Wayland front. It took time to get there, and there were lots of justified (and even more unjustified) complaints, but now things are looking bright.
Operational and Denotational Strategies for Understanding Code
Tags: tech, programming, teaching
A good reminder that you should always bring several perspectives when teaching something. This is a simple framework that can be used widely in our field.
Modern distributed systems need to process massive amounts of data efficiently while maintaining strict ordering guarantees. This is especially challenging when scaling horizontally across multiple nodes. How do we ensure messages from specific sources are processed in order while still taking advantage of parallelism and fault tolerance?
Elixir, with its robust concurrency model and distributed computing capabilities, is well-suited for solving this problem. In this article, we’ll build a scalable, distributed message pipeline that:
Distributes the message pipelines evenly across the Elixir cluster.
Gracefully handles failures and network partitions.
Many modern applications require processing large volumes of data while preserving message order from individual sources. Consider, for example, IoT systems where sensor readings must be processed in sequence, or multi-tenant applications where each tenant’s data requires sequential processing.
The solution we’ll build addresses these requirements by treating each RabbitMQ queue as an ordered data source.
Let’s explore how to design this system using Elixir’s distributed computing libraries: Broadway, Horde, and libcluster.
Architecture overview
The system consists of multiple Elixir nodes forming a distributed cluster. Each node runs one or more Broadway pipelines that process messages from RabbitMQ queues and forward them to Google Cloud PubSub. To maintain message ordering, each queue has exactly one pipeline instance running across the cluster at any time. If a node fails, the system must redistribute its pipelines to other nodes automatically, and if a new node joins the cluster, the existing pipelines should be redistributed to ensure a balanced load.
Elixir natively supports clustering multiple nodes together so that processes and distributed components within the cluster can communicate seamlessly. We will employ the libcluster library, since it provides several strategies to automate cluster formation and healing.
For the data pipelines, the Broadway library provides a great framework for multi-stage data processing, with support for back-pressure, batching, fault tolerance, and more.
To correctly maintain the distribution of data pipelines across the Elixir nodes, the Horde library comes to the rescue by providing the building blocks we need: a distributed supervisor that we can use to distribute and maintain healthy pipelines on the nodes, and a distributed registry that we use to track which pipelines exist and which nodes they run on.
Finally, a PipelineManager component will take care of monitoring RabbitMQ for new queues and starting/stopping corresponding pipelines dynamically across the cluster.
Technical implementation
Let’s initiate a new Elixir app with a supervision tree.
mix new message_pipeline --sup
First, we’ll need to add our library dependencies in mix.exs and run mix deps.get:
defmodule MessagePipeline.MixProject do
  use Mix.Project

  def project do
    [
      app: :message_pipeline,
      version: "0.1.0",
      elixir: "~> 1.17",
      start_permanent: Mix.env() == :prod,
      deps: deps()
    ]
  end

  def application do
    [
      extra_applications: [:logger],
      mod: {MessagePipeline.Application, []}
    ]
  end

  # Dependency versions are indicative; check Hex for the latest releases.
  defp deps do
    [
      {:broadway, "~> 1.0"},
      {:broadway_rabbitmq, "~> 0.8"},
      {:horde, "~> 0.9"},
      {:libcluster, "~> 3.4"},
      {:google_api_pub_sub, "~> 0.36"},
      {:goth, "~> 1.4"}
    ]
  end
end
Clustering with libcluster
We’ll use libcluster to establish communication between our Elixir nodes. Here’s an example configuration that uses the Gossip strategy to form a cluster between nodes:
defmodule MessagePipeline.Application do
  use Application

  def start(_type, _args) do
    # Gossip strategy: nodes discover each other via UDP multicast.
    topologies = [
      gossip: [strategy: Cluster.Strategy.Gossip]
    ]

    children = [
      {Cluster.Supervisor, [topologies, [name: MessagePipeline.ClusterSupervisor]]}
      # Other children...
    ]

    Supervisor.start_link(children, strategy: :one_for_one)
  end
end
Distributed process management with Horde
We’ll use Horde to manage our Broadway pipelines across the cluster. Horde ensures that each pipeline runs on exactly one node and handles redistribution when nodes fail.
Let’s add Horde’s supervisor and registry to the application’s supervision tree.
The UniformQuorumDistribution distribution strategy distributes processes using a hash mechanism among all reachable nodes. In the event of a network partition, it enforces a quorum and will shut down all processes on a node if it is split from the rest of the cluster: the unreachable node is drained and the pipelines can be resumed on the other cluster nodes.
defmodule MessagePipeline.Application do
  use Application

  def start(_type, _args) do
    children = [
      {Horde.Registry, [name: MessagePipeline.PipelineRegistry, keys: :unique]},
      {Horde.DynamicSupervisor,
       [
         name: MessagePipeline.PipelineSupervisor,
         strategy: :one_for_one,
         distribution_strategy: Horde.UniformQuorumDistribution
       ]}
      # Other children...
    ]

    Supervisor.start_link(children, strategy: :one_for_one)
  end
end
Broadway pipeline implementation
Each pipeline uses Broadway to consume messages from RabbitMQ and publish them to Google PubSub.
A strict, per-queue ordering is guaranteed by setting a concurrency of 1.
defmodule MessagePipeline.Pipeline do
  use Broadway

  alias Broadway.Message

  def start_link(opts) do
    queue_name = Keyword.fetch!(opts, :queue_name)
    pipeline_name = pipeline_name(queue_name)

    # The producer options below are an assumption; adapt the RabbitMQ
    # connection settings to your environment. Registering the Broadway
    # process through Horde.Registry lets us look it up cluster-wide.
    pipeline_opts = [
      name: {:via, Horde.Registry, {MessagePipeline.PipelineRegistry, pipeline_name}},
      producer: [
        module: {BroadwayRabbitMQ.Producer, queue: queue_name, on_failure: :reject_and_requeue},
        concurrency: 1
      ],
      processors: [
        default: [concurrency: 1]
      ],
      batchers: [
        default: [concurrency: 1, batch_size: 100, batch_timeout: 1_000]
      ]
    ]

    case Broadway.start_link(__MODULE__, pipeline_opts) do
      {:ok, pid} ->
        {:ok, pid}

      {:error, {:already_started, _pid}} ->
        :ignore
    end
  end

  def pipeline_name(queue_name) do
    String.to_atom("pipeline_#{queue_name}")
  end

  @impl true
  def handle_message(_processor, message, _context) do
    message
    |> Message.update_data(&process_data/1)
  end

  @impl true
  def handle_batch(_batcher, messages, _batch_info, _context) do
    case publish_to_pubsub(messages) do
      {:ok, _message_ids} ->
        messages

      {:error, reason} ->
        # Mark messages as failed
        Enum.map(messages, &Message.failed(&1, reason))
    end
  end

  defp process_data(data) do
    # Transform message data as needed
    data
  end

  defp publish_to_pubsub(messages) do
    MessagePipeline.GooglePubsub.publish_messages(messages)
  end
end
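The publish_to_pubsub/1 helper delegates to a MessagePipeline.GooglePubsub module. Here is a minimal sketch of it, assuming the google_api_pub_sub and goth libraries, a Goth process started under the name MessagePipeline.Goth (e.g. as a child of the application supervisor), and placeholder project and topic names:

defmodule MessagePipeline.GooglePubsub do
  # Minimal sketch. @project and @topic are placeholders; in a real
  # application they would come from configuration.
  @project "my-gcp-project"
  @topic "my-topic"

  def publish_messages(messages) do
    with {:ok, token} <- generate_auth_token() do
      conn = GoogleApi.PubSub.V1.Connection.new(token)

      request = %GoogleApi.PubSub.V1.Model.PublishRequest{
        messages:
          Enum.map(messages, fn %Broadway.Message{data: data} ->
            # PubSub message payloads must be base64-encoded
            %GoogleApi.PubSub.V1.Model.PubsubMessage{data: Base.encode64(data)}
          end)
      }

      case GoogleApi.PubSub.V1.Api.Projects.pubsub_projects_topics_publish(
             conn,
             @project,
             @topic,
             body: request
           ) do
        {:ok, %GoogleApi.PubSub.V1.Model.PublishResponse{messageIds: ids}} -> {:ok, ids}
        {:error, reason} -> {:error, reason}
      end
    end
  end

  # Fetch an OAuth2 token via Goth (assumes a Goth child named
  # MessagePipeline.Goth in the supervision tree).
  defp generate_auth_token do
    with {:ok, %{token: token}} <- Goth.fetch(MessagePipeline.Goth) do
      {:ok, token}
    end
  end
end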
Queue discovery and pipeline management
Finally, we need a process to monitor RabbitMQ queues and ensure pipelines are running for each one.
The Pipeline Manager periodically queries RabbitMQ for existing queues. If a new queue appears, it starts a Broadway pipeline only if one does not already exist in the cluster. If a queue is removed, the corresponding pipeline is shut down.
defmodule MessagePipeline.PipelineManager do
  use GenServer

  @timeout :timer.minutes(1)

  def start_link(opts) do
    GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  end

  def init(_opts) do
    state = %{managed_queues: MapSet.new()}

    {:ok, state, {:continue, :start}}
  end

  def handle_continue(:start, state) do
    state = manage_queues(state)

    {:noreply, state, @timeout}
  end

  def handle_info(:timeout, state) do
    state = manage_queues(state)

    {:noreply, state, @timeout}
  end

  def manage_queues(state) do
    {:ok, new_queues} = discover_queues()
    current_queues = state.managed_queues

    # Start pipelines for queues that have appeared and stop pipelines for
    # queues that are gone.
    MapSet.difference(new_queues, current_queues) |> Enum.each(&start_pipeline/1)
    MapSet.difference(current_queues, new_queues) |> Enum.each(&stop_pipeline/1)

    %{state | managed_queues: new_queues}
  end

  defp discover_queues do
    # list_rabbitmq_queues/0 is a placeholder: query the RabbitMQ management
    # HTTP API to list the existing queues, returning {:ok, [%{name: name}, ...]}.
    with {:ok, queues} <- list_rabbitmq_queues() do
      queues =
        queues
        # Filter out system queues
        |> Enum.reject(fn %{name: name} ->
          String.starts_with?(name, "amq.") or String.starts_with?(name, "rabbit")
        end)
        |> Enum.map(& &1.name)
        |> MapSet.new()

      {:ok, queues}
    end
  end

  defp start_pipeline(queue_name) do
    pipeline_name = MessagePipeline.Pipeline.pipeline_name(queue_name)

    case Horde.Registry.lookup(MessagePipeline.PipelineRegistry, pipeline_name) do
      [{_pid, _}] ->
        {:error, :already_started}

      [] ->
        Horde.DynamicSupervisor.start_child(
          MessagePipeline.PipelineSupervisor,
          {MessagePipeline.Pipeline, queue_name: queue_name}
        )
    end
  end

  defp stop_pipeline(queue_name) do
    pipeline_name = MessagePipeline.Pipeline.pipeline_name(queue_name)

    case Horde.Registry.lookup(MessagePipeline.PipelineRegistry, pipeline_name) do
      [{pid, _}] ->
        Horde.DynamicSupervisor.terminate_child(MessagePipeline.PipelineSupervisor, pid)

      [] ->
        {:error, :not_found}
    end
  end
end
Let’s not forget to also add the pipeline manager to the application’s supervision tree.
defmodule MessagePipeline.Application do
  use Application

  def start(_type, _args) do
    children = [
      {MessagePipeline.PipelineManager, []}
      # Other children...
    ]

    Supervisor.start_link(children, strategy: :one_for_one)
  end
end
Test the system
We should now have a working and reliable system. To quickly test it out, we can configure a local RabbitMQ broker, a Google Cloud PubSub topic, and finally a couple of Elixir nodes to verify that distributed pipelines are effectively run to forward messages between RabbitMQ queues and PubSub.
Let’s start by running RabbitMQ with the management plugin. RabbitMQ will listen for connections on port 5672, while also exposing the management interface at http://localhost:15672. The default credentials are guest/guest.
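A quick way to get this locally is Docker's rabbitmq:3-management image; the test queues can then be created with the rabbitmqadmin CLI bundled with the management plugin (the image tag and queue settings below are just one possible setup):

# Run RabbitMQ with the management plugin enabled
docker run -d --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3-management

# Create the two test queues
./rabbitmqadmin declare queue name=test-queue-1 durable=true
./rabbitmqadmin declare queue name=test-queue-2 durable=true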
# Publish test messages
./rabbitmqadmin publish routing_key=test-queue-1 payload="Message 1 for queue 1"
./rabbitmqadmin publish routing_key=test-queue-1 payload="Message 2 for queue 1"
./rabbitmqadmin publish routing_key=test-queue-2 payload="Message 1 for queue 2"

# List queues and their message counts
./rabbitmqadmin list queues name messages_ready messages_unacknowledged

# Get messages (without consuming them)
./rabbitmqadmin get queue=test-queue-1 count=5 ackmode=reject_requeue_true
One can also use the RabbitMQ management interface at http://localhost:15672, authenticate with the guest/guest default credentials, go to the “Queues” tab, click “Add a new queue”, and create “test-queue-1” and “test-queue-2”.
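With RabbitMQ running and the queues created, we can start two clustered Elixir nodes from separate terminals. The short names below are arbitrary; with the Gossip strategy the two nodes should discover each other automatically on the same host:

# Terminal 1
iex --sname node1 -S mix

# Terminal 2
iex --sname node2 -S mix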
After a minute, the Elixir nodes should automatically start some pipelines corresponding to the RabbitMQ queues.
# List all registered pipelines
Horde.Registry.select(MessagePipeline.PipelineRegistry, [{{:"$1", :"$2", :"$3"}, [], [:"$2"]}])

# Check a specific pipeline
pipeline_name = :"pipeline_test-queue-1"
Horde.Registry.lookup(MessagePipeline.PipelineRegistry, pipeline_name)
Now, if we publish messages on the RabbitMQ queues, we should see them appear on the PubSub topic.
We can verify it from Google Cloud Console, or by creating a subscription, publishing some messages on RabbitMQ, and then pulling messages from the PubSub subscription.
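For example, with placeholder topic and subscription names, a quick gcloud session could look like this:

# Create a subscription on the target topic
gcloud pubsub subscriptions create my-sub --topic=my-topic

# After publishing messages on RabbitMQ, pull them from PubSub
gcloud pubsub subscriptions pull my-sub --auto-ack --limit=10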
If we stop one of the Elixir nodes (Ctrl+C twice in its IEx session) to simulate a failure, the pipelines should be redistributed to the remaining node:
# Check the updated node list
Node.list()

# Check the pipeline distribution
Horde.Registry.select(MessagePipeline.PipelineRegistry, [{{:"$1", :"$2", :"$3"}, [], [:"$2"]}])
Rebalancing pipelines on new nodes
With our current implementation, pipelines are automatically redistributed when a node fails, but they are not redistributed when a new node joins the cluster.
Fortunately, Horde supports precisely this functionality from v0.8 onwards, and we don’t have to manually stop and restart our pipelines to have them land on other nodes.
All we need to do is enable the process_redistribution: :active option on Horde’s supervisor to automatically rebalance processes when nodes join or leave. The option runs each child spec through the choose_node/2 function of the configured distribution strategy, detects which processes should be running on other nodes given the new cluster configuration, and restarts those particular processes so that they run on the correct node.
defmodule MessagePipeline.Application do
  use Application

  def start(_type, _args) do
    children = [
      {Horde.Registry, [name: MessagePipeline.PipelineRegistry, keys: :unique]},
      {Horde.DynamicSupervisor,
       [
         name: MessagePipeline.PipelineSupervisor,
         strategy: :one_for_one,
         distribution_strategy: Horde.UniformQuorumDistribution,
         process_redistribution: :active
       ]}
      # Other children...
    ]

    Supervisor.start_link(children, strategy: :one_for_one)
  end
end
Conclusion
This architecture provides a robust solution for processing ordered message streams at scale. The combination of Elixir’s distributed capabilities, Broadway’s message processing features, and careful coordination across nodes enables us to build a system that can handle high throughput while maintaining message ordering guarantees.
To extend this solution for your specific needs, consider these enhancements:
Adopt a libcluster strategy suitable for a production environment, such as Kubernetes.
Tune queue discovery latency, configuring the polling interval based on how frequently new queues are created. Better yet, instead of polling RabbitMQ, consider setting up RabbitMQ event notifications to detect queue changes in real-time.
Add monitoring, instrumenting Broadway and Horde with Telemetry metrics.
Enhance error handling and retry mechanisms.
Add unit and end-to-end tests. Consider that the gcloud CLI image (gcr.io/google.com/cloudsdktool/google-cloud-cli:emulators) contains a PubSub emulator that may come in handy, e.g. gcloud beta emulators pubsub start --project=test-project --host-port=0.0.0.0:8085.
Leverage a HorizontalPodAutoscaler for automated scaling in Kubernetes environments based on resource demand.
Evaluate the use of Workload Identities if possible. For instance, you can provide your workloads with access to Google Cloud resources by using federated identities instead of a service account key. This approach frees you from the security concerns of manually managing service account credentials.
KDE today announces the release of KDE Frameworks 6.11.0.
KDE Frameworks are 72 addon libraries to Qt that provide a wide variety of commonly needed functionality in mature, peer-reviewed and well-tested libraries with friendly licensing terms. For an introduction, see the KDE Frameworks release announcement.
This release is part of a series of planned monthly releases making improvements available to developers in a quick and predictable manner.
KBusyIndicatorWidget: Add setRunning() and isRunning() functions to control the spinning animation, update the API documentation and test file, and define a Q_PROPERTY. Commit.