Modern distributed systems need to process massive amounts of data efficiently while maintaining strict ordering guarantees. This is especially challenging when scaling horizontally across multiple nodes. How do we ensure messages from specific sources are processed in order while still taking advantage of parallelism and fault tolerance?
Elixir, with its robust concurrency model and distributed computing capabilities, is well-suited for solving this problem. In this article, we’ll build a scalable, distributed message pipeline that:
Distributes message pipelines evenly across the Elixir cluster.
Gracefully handles failures and network partitions.
Many modern applications require processing large volumes of data while preserving message order from individual sources. Consider, for example, IoT systems where sensor readings must be processed in sequence, or multi-tenant applications where each tenant’s data requires sequential processing.
The solution we’ll build addresses these requirements by treating each RabbitMQ queue as an ordered data source.
Let’s explore how to design this system using Elixir’s distributed computing libraries: Broadway, Horde, and libcluster.
Architecture overview
The system consists of multiple Elixir nodes forming a distributed cluster. Each node runs one or more Broadway pipelines that process messages from RabbitMQ queues and forward them to Google Cloud PubSub. To maintain message ordering, each queue has exactly one pipeline instance running across the cluster at any time. If a node fails, the system must automatically redistribute its pipelines to other nodes, and if a new node joins the cluster, the existing pipelines should be redistributed to keep the load balanced.
Elixir natively supports clustering multiple nodes together so that processes and distributed components within the cluster can communicate seamlessly. We will employ the libcluster library, since it provides several strategies to automate cluster formation and healing.
For the data pipelines, the Broadway library provides a great framework for multi-stage data processing, with built-in support for back-pressure, batching, fault tolerance, and more.
To maintain the correct distribution of data pipelines across the Elixir nodes, the Horde library comes to the rescue with the building blocks we need: a distributed supervisor to spread pipelines across nodes and keep them healthy, and a distributed registry to track which pipelines exist and which node each one runs on.
Finally, a PipelineManager component will take care of monitoring RabbitMQ for new queues and starting/stopping corresponding pipelines dynamically across the cluster.
Technical implementation
Let's generate a new Elixir app with a supervision tree:
mix new message_pipeline --sup
First, we’ll need to add our library dependencies in mix.exs and run mix deps.get:
defmodule MessagePipeline.MixProject do
  use Mix.Project

  def project do
    [
      app: :message_pipeline,
      version: "0.1.0",
      elixir: "~> 1.17",
      start_permanent: Mix.env() == :prod,
      deps: deps()
    ]
  end

  def application do
    [
      extra_applications: [:logger],
      mod: {MessagePipeline.Application, []}
    ]
  end

  # Version requirements are indicative; check Hex for the latest releases.
  defp deps do
    [
      {:broadway, "~> 1.0"},
      {:broadway_rabbitmq, "~> 0.8"},
      {:horde, "~> 0.9"},
      {:libcluster, "~> 3.3"},
      {:goth, "~> 1.4"},
      {:google_api_pub_sub, "~> 0.36"}
    ]
  end
end
Clustering with libcluster
We’ll use libcluster to establish communication between our Elixir nodes. Here’s an example configuration that uses the Gossip strategy to form a cluster between nodes:
defmodule MessagePipeline.Application do
  use Application

  def start(_type, _args) do
    topologies = [
      gossip: [strategy: Cluster.Strategy.Gossip]
    ]

    children = [
      {Cluster.Supervisor, [topologies, [name: MessagePipeline.ClusterSupervisor]]}
      # Other children...
    ]

    Supervisor.start_link(children, strategy: :one_for_one)
  end
end
Distributed process management with Horde
We’ll use Horde to manage our Broadway pipelines across the cluster. Horde ensures that each pipeline runs on exactly one node and handles redistribution when nodes fail.
Let’s add Horde’s supervisor and registry to the application’s supervision tree.
The UniformQuorumDistribution distribution strategy distributes processes using a hash mechanism among all reachable nodes. In the event of a network partition, it enforces a quorum and will shut down all processes on a node if it is split from the rest of the cluster: the unreachable node is drained and the pipelines can be resumed on the other cluster nodes.
defmodule MessagePipeline.Application do
  use Application

  def start(_type, _args) do
    children = [
      {Horde.Registry, [name: MessagePipeline.PipelineRegistry, keys: :unique, members: :auto]},
      {Horde.DynamicSupervisor,
       [
         name: MessagePipeline.PipelineSupervisor,
         strategy: :one_for_one,
         distribution_strategy: Horde.UniformQuorumDistribution,
         members: :auto
       ]}
      # Other children...
    ]

    Supervisor.start_link(children, strategy: :one_for_one)
  end
end
Next, let's define the Broadway pipeline module itself. Every stage runs with concurrency: 1, so each queue is consumed by a single process chain and per-queue ordering is preserved; the producer and batcher options shown here are indicative.

defmodule MessagePipeline.Pipeline do
  use Broadway

  alias Broadway.Message

  def start_link(opts) do
    queue_name = Keyword.fetch!(opts, :queue_name)

    pipeline_opts = [
      # Registering via Horde.Registry ensures a single instance per queue
      # across the whole cluster.
      name: {:via, Horde.Registry, {MessagePipeline.PipelineRegistry, pipeline_name(queue_name)}},
      producer: [
        module: {BroadwayRabbitMQ.Producer, queue: queue_name},
        concurrency: 1
      ],
      processors: [default: [concurrency: 1]],
      batchers: [default: [concurrency: 1, batch_size: 100, batch_timeout: 200]]
    ]

    case Broadway.start_link(__MODULE__, pipeline_opts) do
      {:ok, pid} ->
        {:ok, pid}

      {:error, {:already_started, _pid}} ->
        :ignore
    end
  end
  def pipeline_name(queue_name) do
    String.to_atom("pipeline_#{queue_name}")
  end

  @impl true
  def handle_message(_, message, _) do
    message
    |> Message.update_data(&process_data/1)
  end

  @impl true
  def handle_batch(_, messages, _, _) do
    case publish_to_pubsub(messages) do
      {:ok, _message_ids} ->
        messages

      {:error, reason} ->
        # Mark messages as failed
        Enum.map(messages, &Message.failed(&1, reason))
    end
  end

  defp process_data(data) do
    # Transform message data as needed
    data
  end

  defp publish_to_pubsub(messages) do
    MessagePipeline.GooglePubsub.publish_messages(messages)
  end
end
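For completeness, here is a minimal sketch of the PubSub publisher module used above. It assumes the goth and google_api_pub_sub packages, a Goth process started in the supervision tree under the name MessagePipeline.Goth, and the target project and topic stored in the application environment (hypothetical config keys):

defmodule MessagePipeline.GooglePubsub do
  alias GoogleApi.PubSub.V1.Api.Projects
  alias GoogleApi.PubSub.V1.Connection
  alias GoogleApi.PubSub.V1.Model.{PublishRequest, PubsubMessage}

  def publish_messages(messages) do
    with {:ok, token} <- generate_auth_token() do
      conn = Connection.new(token)

      # PubSub expects message payloads to be base64-encoded.
      request = %PublishRequest{
        messages: Enum.map(messages, &%PubsubMessage{data: Base.encode64(&1.data)})
      }

      # Hypothetical config keys; adapt to your own configuration.
      project = Application.fetch_env!(:message_pipeline, :pubsub_project)
      topic = Application.fetch_env!(:message_pipeline, :pubsub_topic)

      case Projects.pubsub_projects_topics_publish(conn, project, topic, body: request) do
        {:ok, %{messageIds: message_ids}} -> {:ok, message_ids}
        {:error, reason} -> {:error, reason}
      end
    end
  end

  defp generate_auth_token do
    with {:ok, %{token: token}} <- Goth.fetch(MessagePipeline.Goth) do
      {:ok, token}
    end
  end
end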
Queue discovery and pipeline management
Finally, we need a process to monitor RabbitMQ queues and ensure pipelines are running for each one.
The Pipeline Manager periodically queries RabbitMQ for existing queues. If a new queue appears, it starts a Broadway pipeline only if one does not already exist in the cluster. If a queue is removed, the corresponding pipeline is shut down.
defmodule MessagePipeline.PipelineManager do
  use GenServer

  @timeout :timer.minutes(1)

  def start_link(opts) do
    GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  end

  def init(_opts) do
    state = %{managed_queues: MapSet.new()}

    {:ok, state, {:continue, :start}}
  end

  def handle_continue(:start, state) do
    state = manage_queues(state)

    {:noreply, state, @timeout}
  end

  def handle_info(:timeout, state) do
    state = manage_queues(state)

    {:noreply, state, @timeout}
  end
  def manage_queues(state) do
    {:ok, new_queues} = discover_queues()
    current_queues = state.managed_queues

    # Start pipelines for newly discovered queues and stop pipelines
    # whose queues have disappeared.
    new_queues
    |> MapSet.difference(current_queues)
    |> Enum.each(&start_pipeline/1)

    current_queues
    |> MapSet.difference(new_queues)
    |> Enum.each(&stop_pipeline/1)

    %{state | managed_queues: new_queues}
  end

  defp discover_queues do
    # Fetch the queue list from the RabbitMQ management API.
    # The HTTP client call is elided here; list_queues/0 is a placeholder.
    with {:ok, queues} <- list_queues() do
      filtered =
        queues
        # Filter out system queues
        |> Enum.reject(fn %{name: name} ->
          String.starts_with?(name, "amq.") or String.starts_with?(name, "rabbit")
        end)
        |> Enum.map(& &1.name)
        |> MapSet.new()

      {:ok, filtered}
    end
  end
  defp start_pipeline(queue_name) do
    pipeline_name = MessagePipeline.Pipeline.pipeline_name(queue_name)

    case Horde.Registry.lookup(MessagePipeline.PipelineRegistry, pipeline_name) do
      [{_pid, _}] ->
        {:error, :already_started}

      [] ->
        Horde.DynamicSupervisor.start_child(
          MessagePipeline.PipelineSupervisor,
          {MessagePipeline.Pipeline, queue_name: queue_name}
        )
    end
  end

  defp stop_pipeline(queue_name) do
    pipeline_name = MessagePipeline.Pipeline.pipeline_name(queue_name)

    case Horde.Registry.lookup(MessagePipeline.PipelineRegistry, pipeline_name) do
      [{pid, _}] ->
        Horde.DynamicSupervisor.terminate_child(MessagePipeline.PipelineSupervisor, pid)

      [] ->
        {:error, :not_found}
    end
  end
end
Let’s not forget to also add the pipeline manager to the application’s supervision tree.
defmodule MessagePipeline.Application do
  use Application

  def start(_type, _args) do
    children = [
      {MessagePipeline.PipelineManager, []}
      # Other children...
    ]

    Supervisor.start_link(children, strategy: :one_for_one)
  end
end
Test the system
We should now have a working and reliable system. To quickly test it out, we can configure a local RabbitMQ broker, a Google Cloud PubSub topic, and finally a couple of Elixir nodes, verifying that distributed pipelines are actually spun up and forward messages from RabbitMQ queues to PubSub.
Let's start by running RabbitMQ with the management plugin. RabbitMQ will listen for connections on port 5672, while also exposing the management interface at http://localhost:15672. The default credentials are guest/guest.
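One way to run it locally is via Docker (the image tag is indicative), then declaring the test queues with rabbitmqadmin:

# Run RabbitMQ with the management plugin enabled
docker run -d --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3-management

# Declare the test queues
./rabbitmqadmin declare queue name=test-queue-1 durable=true
./rabbitmqadmin declare queue name=test-queue-2 durable=true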
# Publish test messages
./rabbitmqadmin publish routing_key=test-queue-1 payload="Message 1 for queue 1"
./rabbitmqadmin publish routing_key=test-queue-1 payload="Message 2 for queue 1"
./rabbitmqadmin publish routing_key=test-queue-2 payload="Message 1 for queue 2"

# List queues and their message counts
./rabbitmqadmin list queues name messages_ready messages_unacknowledged

# Get messages (without consuming them)
./rabbitmqadmin get queue=test-queue-1 count=5 ackmode=reject_requeue_true
One can also use the RabbitMQ management interface at http://localhost:15672, authenticate with the guest/guest default credentials, go to the “Queues” tab, click “Add a new queue”, and create “test-queue-1” and “test-queue-2”.
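With RabbitMQ running and the queues declared, we can start a couple of clustered Elixir nodes in separate terminals; with the Gossip strategy they should discover each other automatically. A minimal local-development sketch:

iex --sname node1 -S mix
iex --sname node2 -S mix

# From either node's IEx shell, verify that the cluster has formed
Node.list()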
After a minute, the Elixir nodes should automatically start some pipelines corresponding to the RabbitMQ queues.
# List all registered pipelines
Horde.Registry.select(MessagePipeline.PipelineRegistry, [{{:"$1", :"$2", :"$3"}, [], [:"$2"]}])

# Check specific pipeline
pipeline_name = :"pipeline_test-queue-1"
Horde.Registry.lookup(MessagePipeline.PipelineRegistry, pipeline_name)
Now, if we publish messages on the RabbitMQ queues, we should see them appear on the PubSub topic.
We can verify it from Google Cloud Console, or by creating a subscription, publishing some messages on RabbitMQ, and then pulling messages from the PubSub subscription.
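For example, with the gcloud CLI (the topic and subscription names below are placeholders for your own):

# Create a subscription on the target topic
gcloud pubsub subscriptions create test-subscription --topic=my-topic

# After publishing some messages to RabbitMQ, pull them from the subscription
gcloud pubsub subscriptions pull test-subscription --auto-ack --limit=10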
If we stop one of the Elixir nodes (Ctrl+C twice in its IEx session) to simulate a failure, the pipelines should be redistributed to the remaining node:
# Check updated node list
Node.list()

# Check pipeline distribution
Horde.Registry.select(MessagePipeline.PipelineRegistry, [{{:"$1", :"$2", :"$3"}, [], [:"$2"]}])
Rebalancing pipelines on new nodes
With our current implementation, pipelines are automatically redistributed when a node fails, but they are not redistributed when a new node joins the cluster.
Fortunately, Horde supports precisely this functionality from v0.8 onwards, and we don't have to manually stop and restart our pipelines to have them land on other nodes.
All we need to do is enable the process_redistribution: :active option on Horde's supervisor to automatically rebalance processes when nodes join or leave the cluster. The option runs each child spec through the choose_node/2 function of the configured distribution strategy, detects which processes should be running on other nodes given the new cluster configuration, and restarts exactly those processes so that they end up on the correct node.
defmodule MessagePipeline.Application do
  use Application

  def start(_type, _args) do
    children = [
      {Horde.Registry, [name: MessagePipeline.PipelineRegistry, keys: :unique, members: :auto]},
      {Horde.DynamicSupervisor,
       [
         name: MessagePipeline.PipelineSupervisor,
         strategy: :one_for_one,
         distribution_strategy: Horde.UniformQuorumDistribution,
         process_redistribution: :active,
         members: :auto
       ]}
      # Other children...
    ]

    Supervisor.start_link(children, strategy: :one_for_one)
  end
end
Conclusion
This architecture provides a robust solution for processing ordered message streams at scale. The combination of Elixir’s distributed capabilities, Broadway’s message processing features, and careful coordination across nodes enables us to build a system that can handle high throughput while maintaining message ordering guarantees.
To extend this solution for your specific needs, consider these enhancements:
Adopt a libcluster strategy suitable for a production environment, such as one of the Kubernetes strategies (see the configuration sketch after this list).
Tune queue discovery latency, configuring the polling interval based on how frequently new queues are created. Better yet, instead of polling RabbitMQ, consider setting up RabbitMQ event notifications to detect queue changes in real-time.
Declare AMQP queues as durable and make sure that publishers mark published messages as persisted, in order to survive broker restarts and improve delivery guarantees. Use publisher confirms to ensure messages are safely received by the broker. Deploy RabbitMQ in a cluster with queue mirroring or quorum queues for additional reliability.
Add monitoring, instrumenting Broadway and Horde with Telemetry metrics.
Enhance error handling and retry mechanisms. For example, retry message publication to PubSub N times before failing the messages, so that a transient publishing error does not throw away the (possibly costly) processing work.
Unit & e2e testing. Consider that the gcloud CLI image (gcr.io/google.com/cloudsdktool/google-cloud-cli:emulators) contains a PubSub emulator that may come in handy: e.g. gcloud beta emulators pubsub start --project=test-project --host-port=0.0.0.0:8085
Leverage a HorizontalPodAutoscaler for automated scaling in Kubernetes environments based on resource demand.
Evaluate the use of Workload Identities if possible. For instance, you can provide your workloads with access to Google Cloud resources by using federated identities instead of a service account key. This approach frees you from the security concerns of manually managing service account credentials.
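As a sketch of the first point, a production libcluster topology could use the Kubernetes.DNS strategy, configured via config.exs; the service and application names below are placeholders for your own deployment:

config :libcluster,
  topologies: [
    k8s: [
      strategy: Cluster.Strategy.Kubernetes.DNS,
      config: [
        service: "message-pipeline-headless",
        application_name: "message_pipeline"
      ]
    ]
  ]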
The second maintenance release of the 24.12 cycle is out with multiple bug fixes. Notable changes include fixes for crashes, UI resizing issues, effect stack behavior, proxy clip handling, and rendering progress display, along with improvements to Speech-to-text in Flatpak and macOS packages.
Krita can now be compiled (MR!2306) and run (Mastodon post) with Qt6 on Linux, a major milestone on the long road of porting from the outdated Qt5 framework. However, there is still a long way to go to get things working correctly, and it will be some time before any pre-alpha builds are available for the far-off Krita 6.0.
For the February Art Challenge, @Mythmaker has chosen "Fabulous Flora" as the theme, with the optional challenge of using natural texture. See the full brief for more details, and bring some color into bloom.
Featured Artwork
Best of Krita-Artists - December 2024/January 2025
Nine images were submitted to the Best of Krita-Artists Nominations thread, which was open from December 14th to January 11th. When the poll closed on January 14th, these five wonderful works made their way onto the Krita-Artists featured artwork banner:
Krita is Free and Open Source Software developed by an international team of sponsored developers and volunteer contributors.
Visit Krita's funding page to see how user donations keep development going, and explore a one-time or monthly contribution. Or check out more ways to Get Involved, from testing, coding, translating, and documentation writing, to just sharing your artwork made with Krita.
The Krita-promo team has put out a call for volunteers, come join us and help keep these monthly updates going.
Notable Changes
Notable changes in Krita's development builds from Jan. 16 - Feb. 12, 2025.
Unstable branch (5.3.0-prealpha):
Bug fixes:
Blending Modes: Rewrite blending modes to properly support float and HDR colorspaces. (bug report) (Change, by Dmitry Kazakov)
Brush Engines: Fix Filter Brush engine to work with per- and cross-channel filters. (Change, by Dmitry Kazakov)
Filters: Screentone: Change default screentone interpolation type to Linear. (Change, by Emmet O'Neill)
Scripting: Fix Node.paint script functions to use the given node instead of active node. (Change, by Freya Lupen)
Features:
Text: Load font families as resources and display a preview in the font chooser. (Change, by Wolthera van Hövell)
Filters: Random Noise: Add grayscale noise option and improve performance. (Change, by Maciej Jesionowski)
Blending Modes: Add a new HSY blending mode, "Tint", which colorizes and slightly lightens. It's suggested to be used with the Fast Color Overlay filter. (Change 1, Change 2 by Maciej Jesionowski)
Nightly Builds
Pre-release versions of Krita are built every day for testing new changes.
This week, I focused on integrating the Monte Carlo Tree Search (MCTS) algorithm into the MankalaEngine. The primary goal was to test the performance of the MCTS-based agent against various existing algorithms in the engine. Let's dive into what MCTS is, how it works, and what I discovered during the testing phase.
What is Monte Carlo Tree Search (MCTS)?
Monte Carlo Tree Search (MCTS) is a heuristic search algorithm used for decision-making in sequential decision problems. It incrementally builds a search tree, running many random simulations at each step to evaluate potential outcomes. These simulations help the algorithm determine the most promising move to make.
How Does MCTS Work?
MCTS operates through four key steps:
1. Selection
The algorithm starts at the root node (representing the current game state) and traverses down the tree to a leaf node (an unexplored state). During this process, the algorithm selects child nodes using a specific strategy.
A popular strategy for node selection is Upper Confidence Bounds for Trees (UCT). The UCT formula helps balance exploration and exploitation by selecting nodes based on the following equation:
UCT = mean + C × sqrt(ln(N) / n)
Where:
mean is the average reward (or outcome) of a node.
N is the total number of simulations performed for the parent node.
n is the number of simulations performed for the current child node.
C is a constant that controls the level of exploration.
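As a quick worked example (numbers chosen purely for illustration), take C = 1.41, a parent with N = 100 simulations, and a child visited n = 10 times with a mean reward of 0.5:

UCT = 0.5 + 1.41 × sqrt(ln(100) / 10) ≈ 0.5 + 1.41 × 0.68 ≈ 1.46

An unvisited sibling (n = 0) gets an effectively infinite UCT score, so every child is tried at least once before the search starts favoring the higher-scoring ones.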
2. Expansion
Once the algorithm reaches a leaf node, it expands the tree by adding one or more child nodes representing potential moves or decisions that can be made from the current state.
3. Simulation
The algorithm then performs a simulation or rollout from the newly added child node. During this phase, the algorithm randomly plays out a series of moves (typically following a simple strategy) until the game reaches a terminal state (i.e., win, loss, or draw).
This is where the Monte Carlo aspect of MCTS shines. By simulating many random games, the algorithm gains insights into the likely outcomes of different actions.
4. Backpropagation
After the simulation ends, the results are propagated back up the tree, updating the nodes with the outcome of the simulation. This allows the algorithm to adjust the expected rewards of the parent nodes based on the result of the child node’s simulation.
With a solid understanding of the algorithm, I began implementing MCTS in C++. The initial step involved integrating the MCTS logic into the benchmark utility of the MankalaEngine. After resolving a series of issues and running multiple tests, the code was functioning as expected.
Testing Results
I compared the performance of the MCTS agent against other existing agents in the MankalaEngine, such as Minimax, MTDF, and Random agents. Here’s how the MCTS agent performed:
Random Agent (Player 1) vs. MCTS (Player 2)
MCTS won 80% of the time
MCTS (Player 1) vs. Random Agent (Player 2)
MCTS won 60% of the time
MCTS vs. Minimax & MTDF
Unfortunately, MCTS consistently lost against both Minimax and MTDF agents. 😞
Key Improvements for MCTS
While MCTS performed well against the Random Agent, there is still room for improvement, especially in its simulation phase. Currently, the algorithm uses a random policy for simulations, which can be inefficient. To improve performance, we can:
Use more efficient simulation policies that simulate only promising moves, rather than randomly selecting moves.
At the start of the Selection step, focus on moves that have historically been good opening strategies (this requires further research to identify these moves, especially in Pallanguli).
Fine-tune the exploration-exploitation balance to improve decision-making.
Upcoming Tasks
In the upcoming week, I plan to:
Write test cases for the Pallanguli implementation.
Last year during Akademy I gave a talk called Union: The Future of Styling in KDE?!. In this talk I presented a problem: we currently have four ways of styling our applications. Not only that, but some of these approaches are quite hard to work with, especially for designers who lack programming skills. All of this makes it incredibly hard to change our application styling today, which is not only a problem for something like the Plasma Next Initiative; even smaller changes take a lot of effort.
This problem is not new; we already identified it several years ago. Unfortunately, it also is not easy to solve. Some of the reasons it got to this state are simply inertia. Some things like Plasma's SVG styling were developed as a way to improve styling in an era where a lot of the technologies we currently use did not exist yet. The solutions developed in those days have now existed for a pretty long time so we cannot suddenly drop them. Other reasons are more technical in nature, such as completely different rendering stacks.
Introducing Union
Those different rendering stacks are actually one of the core issues that makes this hard to solve. It means that we cannot simply use the same rendering code for everything, but have to come up with a tricky compatibility layer to make that work. This is what we currently do, and while it works, it means we need to maintain said compatibility layer. It also means we are not utilizing the rendering stack to its full potential.
However, there is another option, which is to take a step back and realise that we actually may not even want to share the rendering code, given that they are quite different. Instead, we need a description of what the element should look like, and then we can have specific rendering code that implements how to render that in the best way for a certain technology stack.
This idea is at the core of a project I called Union, which is a styling system intended to unify all our separate approaches into a single unified styling engine that can support all the different technologies we use for styling our applications.
Union consists of three parts: an input layer, an intermediate layer and an output layer. The input layer consists of plugins that read and interpret some input file format containing a style description and turn it into a more abstract description of what to render. How to do that is defined by the intermediate layer, which is a library containing the description of the data model and a method of defining which elements to apply things to. Finally, the output layer consists of plugins that use the data from the intermediate layer and turn it into actual rendering commands, as needed for a specific rendering stack.
Implementing Things
This sounds nice on paper, but implementing it is easier said than done. For starters, everything depends on the intermediate layer being both flexible enough to handle varying use cases but at the same time rigid enough that it becomes hard to - intentionally or unintentionally - create dependencies between the input and output layers. Apart from that, replacing the entire styling stack is simply going to be a lot of work.
To allow us to focus more on the core we needed to break things down into more manageable parts. We chose to focus on the intermediate layer first, by using Plasma's SVG themes as an input format and a QtQuick Style as output. This means we are working with an input format that we already know how to deal with. It also means we have a clear picture of what the output should look like, as it should ultimately look just like how Plasma looks.
At this point, a lot of this work has now been done. While Union does not yet implement a full QtQuick style, it implements most of the basic controls, allowing something such as Discover to run without looking completely alien. Focusing on the intermediate layer proved very useful: we encountered and managed to solve several pretty tricky technical issues that would have been even trickier if we did not know what things should look like.
Union Needs You!
All that said, there is still a lot to be done. For starters, to be an actual unified styling system for KDE we need a QtWidgets implementation. Some work on that has started, but it is going to be a lot harder than the QtQuick implementation. We also need a different input format. While Plasma's SVG styling works, it is not ideal for developing new styles with. I would personally like to investigate using CSS as input format as it has most of what we need while also being familiar to a lot of people. Unfortunately, finding a good CSS parser library turns out to be quite hard.
However, at this stage we have multiple tasks that can be done in parallel. This means it is now at a point where it would be great to have more people developing code, as well as doing some initial testing and giving feedback on the system. If you are interested in helping out, the code can be found at invent.kde.org/plasma/union. There is also a Matrix channel for more real-time discussions.
Kasts polishing, progress on Krita Qt6 port and Kdenlive fundraising report
Welcome to a new issue of "This Week in KDE Apps"! Every week we cover as much as possible of what's happening in the world of KDE apps. This issue contains changes from the last two weeks.
A lot happened over the past two weeks: we had a successful KDE presence at FOSDEM, we now have a location and date for this year's edition of the Linux App Summit (April 25-26, 2025 in Tirana, Albania), and we also continued to improve our apps. Let's dive in!
KStars 3.7.5 is out with mostly bugfixes and performance improvements.
GCompris 25.0 is out. This is a big release containing 5 new activities.
Krita 5.2.9 is out. This is a bug fix release containing all the fixes from our bug hunt effort back in November. Major bug fixes include fixes to clone layers, fixes to opacity handling (in particular for file formats like EXR), a number of crash fixes, and much more!
We made it possible to rename tabs in Dolphin. This action is available in each tab's context menu. This is useful for very long tab names or when it is difficult to identify a tab by a folder's name alone. (ambar chakravartty, 25.04.0. Link)
We also improved the keyboard based selection of items. Typing a letter on the keyboard usually selects the item in the view which starts with that letter. Diacritics are now ignored here, so you will for example be able to press the "U" key to select a file starting with an "Ü". (Thomas Moerschell, 24.12.3. Link)
We changed the three view buttons to a single menu button. (Akseli Lahtinen, 25.04.0. Link)
We made the "Empty Trash" icon red in conformance to our HIG as it is a destructive operation. (Nate Graham, 25.04.0. Link)
We improved getting the information from supported version control systems (e.g. Git). It is now faster and happens earlier. (Méven Car, 25.04.0. Link)
We added input method hints to input fields. This is mostly helpful when using an input method other than a traditional keyboard (e.g. a virtual keyboard). (Juraj Oravec. Link)
We continued to improve the coverage of Itinerary in Poland. This week we added support for the train operator Polregio, fixed and refactored the extractor for Koleo and rewrote the extractor for PKP-app to support the ticket layouts. (Grzegorz Mu, 24.12.3. Link 1, link 2, and link 3)
We also added support for CitizenM hotel bookings. (Joshua Goins, 24.12.3. Link)
We also started working on an online version of the ticket extractor. A preview is available on Carl's website.
Volker also published a recap of the past two months in Itinerary. It also covers some orthogonal topics, like the free software routing service Transitous.
We fixed the vertical alignment of the queue header. (Joshua Goins, 25.04.0. Link)
We are now using Kirigami.UrlButton for links and Kirigami.SelectableLabel for the text description in the podcast details page to improve visual and behavior consistency with other Kirigami applications. (Joshua Goins, 25.04.0. Link)
We also improved the look of the search bar in the discovery page. It's now properly separated from the rest of the content. (Joshua Goins, 25.04.0. Link)
We added the ability to force the app to mobile/desktop mode. (Bart De Vries, 25.04.0. Link)
We fixed the sort order of the podcasts episodes. (Bart De Vries, 24.12.3. Link)
Finally, we made various improvements to our usage of QML in Kasts, adopting newer QML constructs. This should slightly improve performance while reducing technical debt. (Tobias Fella, 25.04.0. Link 1, link 2, link 3, link 4, link 5, and link 6)
We fixed some issues with the list of commits displayed in Kate. The highlight color is now correct and the margins consistent. (Leo Ruggeri, 25.04.0. Link)
We improved the diff widget of Kate. The toolbar icon sizes are now the same as other toolbars in Kate. (Leo Ruggeri, 25.04.0. Link)
The Krita team continued porting Krita to Qt6/KF6. The application now compiles and runs with Qt6, but there are still some unit tests not working. Link to Mastodon thread
We implemented the dynamic resolution mode from the remote desktop protocol (RDP). This means we now resize the remote desktop to fit the current KRDC window. This works for Windows >= 8.1. (Fabio Bas, 25.04.0. Link)
We added support for the domain field in the authentication process. (Fabio Bas, 25.04.0. Link)
We adapted the code to work with FreeRDP 3.11. (Fabio Bas, 25.04.0. Link)
We added a way to filter the list of certificates to only show certificates for "Qualified Signatures" in the certificate selection. (Sune Vuorela, 25.04.0. Link)
We are now using more fitting icons for the "Embed" and "Open in Browser" actions in Tokodon's context menu. We also removed the duplicated "Copy to Clipboard" action from that context menu. (Joshua Goins, 24.12.3. Link and link 2)
Following the improvements from two weeks ago, we did even more accessibility/screen reader improvements to Tokodon. (Joshua Goins, 24.12.3. Link)
For a complete overview of what's going on, visit KDE's Planet, where you can find all KDE news unfiltered directly from our contributors.
Get Involved
The KDE organization has become important in the world, and your time and
contributions have helped us get there. As we grow, we're going to need
your support for KDE to become sustainable.
You can help KDE by becoming an active community member and getting involved.
Each contributor makes a huge difference in KDE — you are not a number or a cog
in a machine! You don’t have to be a programmer either. There are many things
you can do: you can help hunt and confirm bugs, even maybe solve them;
contribute designs for wallpapers, web pages, icons and app interfaces;
translate messages and menu items into your own language; promote KDE in your
local community; and a ton more things.
You can also help us by donating. Any monetary
contribution, however small, will help us cover operational costs, salaries,
travel expenses for contributors and in general just keep KDE bringing Free
Software to the world.
To get your application mentioned here, please ping us in invent or in Matrix.