
Tuesday, 5 January 2021

If I look back at the last post I made on this blog… let’s say quite a lot of time has passed. The reason? Well, first of all, one could call it a lack of motivation1, and afterwards, the emergence of a small yet quite annoying pathogen which caused a bit of a stir worldwide. But today I’m not going to talk about viruses: perhaps some other time, when you can avoid hearing about them for breakfast, lunch, and dinner.

KDE software from git and the OBS

Yes, today I’m going to talk about the OBS, that is the Open Build Service, not to be confused with another highly successful open source project.

As you know, the openSUSE KDE team has for ages provided a series of repositories which track the latest state of the KDE git repositories, be they Frameworks, Plasma, or the applications that are part of the Release Service. This also allows creating Live CDs which can be useful for testing out the software.

But a question I’ve seen every now and then is… how is it actually done? Is everything provided by the OBS, or does someone need to add some glue on top of that?

Using the OBS to provide updates to KDE packages from git

Source services

From the official documentation:

Source Services are tools to validate, generate or modify sources in a trustable way.

Ok, that doesn’t tell us much, does it? Luckily, the concept is simple: for packages built in the OBS, we can use tools (internal or external) to perform operations on the source(s) the packages are built from. These run before any actual building starts.

The openSUSE wiki has some examples of what a source service can do, one of which immediately catches the eye:

Checkout service - checks out from an SCM system (svn, git, …) and creates a tarball.

That means that we can, theoretically, ask the OBS to do a checkout from git, and automatically generate a tarball from there. In addition, a source service can add version information to the RPM spec file - the blueprint of an openSUSE package - including some data taken off git, for example the date and the revision hash of the checkout.

Source services are implemented as files named _service which live in the OBS for each package. Let’s take a look at the one for kcoreaddons in KDE:Unstable:Frameworks:

<services>
  <service name="obs_scm">
    <param name="url"></param>
    <param name="scm">git</param>
    <param name="versionformat">VERSIONgit.%ci~%h</param>
  </service>
  <service name="set_version_kf5" mode="buildtime"/>
  <service mode="buildtime" name="tar"/>
  <service mode="buildtime" name="recompress">
    <param name="file">*.tar</param>
    <param name="compression">xz</param>
  </service>
  <service mode="buildtime" name="set_version"/>
</services>


The first service element tells us that we’re using the obs_scm service, a built-in service in openSUSE’s OBS instance that checks out the sources from the url, using git. The versionformat parameter sets the version to the literal VERSION (more on that later), adds the git suffix, and then adds %ci, which is replaced by the checkout date (example: 20210102T122329), and %h, which puts the short git revision hash in the version (example: 5d069715).

The checked-out data is compressed into a cpio archive with the obscpio extension. At this point, we don’t have a tarball yet.

After the checkout

The next services run when the package is actually built, as you can see from the mode="buildtime" in their definitions. The first one (set_version_kf5) is actually a service made by Fabian Vogt (fellow member of the KDE team) which replaces VERSION with the current version present in KDE git (done manually: it is read from the OBS project configuration and bumped every time it changes upstream). The following lines set the version in the spec file and compress the whole thing into a tarball.
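Putting the two substitution steps together, their effect on the version string can be sketched like this (a minimal illustration with made-up function names and an example KF5 version, not the actual service code):

```python
# Illustration of the two substitutions performed by the services above.
def expand_versionformat(fmt: str, checkout_date: str, short_hash: str) -> str:
    """Mimic obs_scm expanding %ci and %h in the versionformat parameter."""
    return fmt.replace("%ci", checkout_date).replace("%h", short_hash)

def apply_kf5_version(version: str, kf5_version: str) -> str:
    """Mimic set_version_kf5 replacing the literal VERSION placeholder."""
    return version.replace("VERSION", kf5_version)

v = expand_versionformat("VERSIONgit.%ci~%h", "20210102T122329", "5d069715")
print(apply_kf5_version(v, "5.78.0"))  # 5.78.0git.20210102T122329~5d069715
```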

At this point, the build system starts building the actual source and creating the package.

Manual labor

Just when are these kinds of source services run? If left by themselves, never. You need to actually signal the OBS (trigger, in OBS-speak) to run the service. An example with the osc command-line utility:

osc service remoterun KDE:Unstable:Frameworks kcoreaddons

Or there’s a button in the OBS web interface which allows you to do just that:

There’s just a little problem. When there are more than 200 packages to handle, updating this way gets complicated. Also, while the OBS is smart enough to avoid updating a package that has not changed, it has no way to tell you before the service is actually run.


Luckily, the OBS has features which, used with some external tools, can solve the problem in a hopefully effective way.

Since 2013 the OBS can use authentication tokens to run source services, letting you, for example, trigger updates with pushes to GitHub repositories. To perform this task for the KDE:Unstable repositories, I built upon these resources to do mass updates in a reliable way.

First of all, I generated a token, and I wanted to make sure that it could only trigger updates. Nothing more, nothing less. This was fairly easy with osc:

osc token --create -o runservice

I did not specify a project, so the token works with all the repositories I have access to. This was a requirement, because the KDE Unstable repositories are actually three different OBS projects.

To trigger an update in the OBS, one needs to call the trigger endpoint with a POST request, including the project name (project parameter) and the package name (package parameter). An example (I know, I didn’t encode the URL, for simplicity; bear with me):

The token needs to be passed as an Authorization header, in the form Token <your token here>. You get a 200 response if the trigger is successful, and 404 in other cases (including an incorrect token).
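As a sketch, such a request can be put together like this in Python; the host and endpoint path here are my assumptions based on the OBS token documentation, not details taken from this post:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Assumed trigger endpoint of the openSUSE OBS instance (a sketch, not
# necessarily the exact URL used in the original post).
API = "https://api.opensuse.org/trigger/runservice"

def build_trigger_request(project: str, package: str, token: str) -> Request:
    """Build the POST request that triggers a service run for one package."""
    url = f"{API}?{urlencode({'project': project, 'package': package})}"
    return Request(url, method="POST",
                   headers={"Authorization": f"Token {token}"})

req = build_trigger_request("KDE:Unstable:Frameworks", "kcoreaddons", "secret")
print(req.full_url)  # parameters are URL-encoded here, unlike the prose example
```

Sending it with `urllib.request.urlopen(req)` then yields the 200 or 404 described above.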

Like this, I was able to reliably trigger the updates. In fact, it would probably be easy to conjure up a script to do just that. But I wanted to do something more.

A step further

I wanted to actually make sure to trigger the update only if there were any new commits. But at the same time I did not want to have a full checkout of the KDE git repositories: that would have been a massive waste of space.

Enter git ls-remote, which can give you the latest revision of a repository for all its branches (and tags), without having to clone it. For example:

git ls-remote --heads
22175dc433dad1b1411224d80d77f0f655219122        refs/heads/Applications/18.08
5a0a80e42eee138bda5855606cbdd998fffce6c0        refs/heads/Applications/18.12
2ca039e6d4a35f3ab00af9518f489e00ffb09638        refs/heads/Applications/19.04
2f96d829f28e85f3abe486f601007faa2cb8cde5        refs/heads/Applications/19.08
f12f2cb73e3229a9a9dafb347a2f5fe9bd6c7975        refs/heads/master
18f675d888dd467ebaeaafc3f7d26b639a97d142        refs/heads/release/19.12
90ba79572e428dd150183ba1eea23d3f95040469        refs/heads/release/20.04
763832e4f1ae1a3162670a93973e58259362a7ba        refs/heads/release/20.08
c16930a0b70f5735899826a66ad6746ffb665bce        refs/heads/release/20.12

In this case I limited the list to branches (--heads). As you can see, the latest revision in master for kitinerary at the time of writing is f12f2cb73e3229a9a9dafb347a2f5fe9bd6c7975. So, the idea I had in mind was:

  1. Get the state of the master branch in all repositories part of the KDE Unstable hierarchy;
  2. Save the latest revision on disk;
  3. On the following check (24 hours later) compare the revisions between what’s stored on disk and the remote;
  4. If the revisions differ, trigger an update;
  5. Save the new revision to disk;
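The steps above can be sketched as follows; parse_heads consumes `git ls-remote --heads` output, and the state saved “on disk” is a plain dict here (function names are made up for illustration):

```python
# A sketch of the update check: compare the remote revision with the one
# recorded at the previous run, and report whether a trigger is needed.
def parse_heads(ls_remote_output: str) -> dict:
    """Map branch name -> latest revision hash (steps 1-2)."""
    heads = {}
    for line in ls_remote_output.splitlines():
        sha, ref = line.split()
        heads[ref.removeprefix("refs/heads/")] = sha
    return heads

def check_and_update(stored: dict, repo: str, heads: dict,
                     branch: str = "master") -> bool:
    """Steps 3-5: compare with the stored revision and record it if it moved."""
    new = heads.get(branch)
    if new is None or stored.get((repo, branch)) == new:
        return False  # repository unchanged (or branch missing): no trigger
    stored[(repo, branch)] = new  # step 5: remember the revision we saw
    return True  # step 4: the caller should now trigger a service run

sample = ("f12f2cb73e3229a9a9dafb347a2f5fe9bd6c7975\trefs/heads/master\n"
          "c16930a0b70f5735899826a66ad6746ffb665bce\trefs/heads/release/20.12")
state = {}
print(check_and_update(state, "kitinerary", parse_heads(sample)))  # True
```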

This was slightly complicated by the fact that package names on the OBS and source repository names in KDE can differ (for example, plasma-workspace is plasma5-workspace, and akonadi is akonadi-server). My over-engineered idea was to create a JSON mapping of the whole thing (excerpt):

    "KDE:Unstable:Frameworks": [
        {
            "kde": "frameworks/attica",
            "obs": "attica-qt5",
            "branch": "master"
        },
        {
            "kde": "frameworks/kdav",
            "obs": "kdav",
            "branch": "master"
        }
    ]

(Full details on the actual repository)
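For illustration, walking such a mapping to pair each OBS package with its KDE repository could look like this (the dict mirrors the shape of the excerpt, abbreviated to two entries; the helper name is made up):

```python
# Walk the project -> packages mapping and yield one work item per package.
mapping = {
    "KDE:Unstable:Frameworks": [
        {"kde": "frameworks/attica", "obs": "attica-qt5", "branch": "master"},
        {"kde": "frameworks/kdav", "obs": "kdav", "branch": "master"},
    ],
}

def jobs(mapping: dict):
    """Yield (OBS project, OBS package, KDE repository, branch) to check."""
    for project, entries in mapping.items():
        for entry in entries:
            yield project, entry["obs"], entry["kde"], entry["branch"]

for project, obs_pkg, kde_repo, branch in jobs(mapping):
    print(project, obs_pkg, kde_repo, branch)
```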

It was painful to set up the first time, but it paid off afterwards. I just needed a few more adjustments:

  • Check whether the remote repository actually exists: I used GitLab’s project API to obtain a yes/no answer for each repository, and skip it if it does not exist.
  • Commit the changed files to git and push them to the remote repository holding the data.
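The existence check can be sketched against GitLab’s projects API, where a GET on /api/v4/projects/&lt;url-encoded path&gt; returns 200 if the repository exists and 404 otherwise (invent.kde.org being KDE’s GitLab instance; the helper name is my own):

```python
from urllib.parse import quote

# Build the GitLab API URL used for the yes/no existence check.
def project_api_url(repo_path: str, host: str = "https://invent.kde.org") -> str:
    """GitLab wants the 'namespace/project' path URL-encoded as one value."""
    return f"{host}/api/v4/projects/{quote(repo_path, safe='')}"

print(project_api_url("frameworks/kcoreaddons"))
# https://invent.kde.org/api/v4/projects/frameworks%2Fkcoreaddons
```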

As I am mostly a Python person, I used Python to write yet another over-engineered script to do all the steps outlined above.

Icing on the cake

Mmm… cake. Wait. We’re not talking about food here, just about how the whole idea was put into “production” (add several hundred quotes around that word). I created a separate user with minimal privileges on my server, put the token into a file with 0600 permissions, and set up a cron job which runs the script every day at 20:00 UTC+1.
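Creating the token file with the right permissions can be sketched like this (the path and function name are made up for illustration):

```python
import os
import stat
import tempfile

# Create the secret file with 0600 permissions *before* writing the token,
# so it is never readable by other users, even briefly.
def write_token(path: str, token: str) -> None:
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(token)

token_path = os.path.join(tempfile.mkdtemp(), "obs-token")
write_token(token_path, "secret")
print(oct(stat.S_IMODE(os.stat(token_path).st_mode)))  # 0o600
```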

There was just one extra step. Historically (but this is not the case anymore nowadays), the OBS would sometimes fail to perform a git checkout. This would leave a package in a broken state, and the only way to recover was to force an update again (and if that did not fix it, it was time to poke the friendly people in #opensuse-buildservice).

Inspired by what a former KDE team member (Raymond “tittiatcoke” Wooninck) did, I wrote a stupid script which checks for packages in the broken state (at the moment just for one repository and one architecture: a broken clone affects all of them equally) and forces an update. It still needs to be switched over to tokens rather than osc, but I’ll get to that soon, hopefully.
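The gist of that script can be sketched as a filter over per-package build states (a simplified stand-in for the real OBS result format, which I am not reproducing here):

```python
# Find packages whose build state is "broken" so an update can be forced
# for just those; the (package, state) pair format is a simplification.
def broken_packages(results) -> list:
    """results: (package, state) pairs for one repository/architecture."""
    return [pkg for pkg, state in results if state == "broken"]

results = [("kcoreaddons", "succeeded"), ("kdav", "broken")]
print(broken_packages(results))  # ['kdav']
```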


There you have it. This is how (at the moment) I handle updating all KDE packages from git in the OBS with minimal effort. It is probably an imperfect solution, but it does the job well. Shout out to Fabian, who mentioned tokens and prompted the idea.

  1. Also called laziness. ↩︎

Thursday, 31 December 2020

It’s a pleasure to announce the first stable release of KIO FUSE, just in time for 2021.

Compared to the last release candidate, the following changed:

  • Symlinks with an absolute target path are “rewritten” to point to the location inside the target instead of the host system
  • Mounting a URL which includes symlinks now works.
  • The DBus service can use systemd activation now.

Known issues:

Hopefully distro packagers will be quick to pick up this release. If you wish to test with your own compiled version, or wish to contribute a patch please check out our README.

Please file bug reports here.

If you’re eligible for Google Summer of Code, please contact us if you’d like to take up a project developing on KIO FUSE. Some ideas (of which you can suggest your own) and contact details can be found here.



I recently obtained a brand new Raspberry Pi 4 device and took the free days around Xmas to play around with it a little. And I must say, I am very pleased with this device!

Raspberry Pi 4 B

The important updates for me, compared to the older Pis, are:

  • Two displays can be connected! Either two HDMI or one DSI plus one HDMI.
  • It has a VideoCore VI GPU (very different from the VideoCore IV in the RPi3), which is driven by the Mesa V3D driver.

My goal was to get a Yocto-built multi-display Plasma Mobile environment on the device. Except for two magic lines in the /boot/config.txt configuration that enabled multi-display output for me, it nearly worked out of the box.

RPi4 Multi-Display Setup

The important configuration step, compared to the default configuration provided by meta-raspberrypi, is the following two lines that I had to add to the /boot/config.txt boot configuration:
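The snippet itself did not survive in this copy of the post; on a Raspberry Pi 4, enabling the KMS graphics stack with two framebuffers usually takes lines like these (my assumption, not necessarily the author’s exact configuration):

```ini
# /boot/config.txt -- assumed dual-display setup for the RPi4
dtoverlay=vc4-fkms-v3d
max_framebuffers=2
```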


Without these lines, the second screen always displayed just the Raspberry’s rainbow boot screen and was never detected. I tested both DSI+HDMI and HDMI+HDMI, and both screens were always correctly detected at boot with this configuration.

Running Qt on the Yocto-Built Image

Having the above configuration added, I was able to run a simple multi-screen QtWayland Compositor on the device. Note that I built Qt with

PACKAGECONFIG_append = " gbm kms eglfs"

and QtWayland with

PACKAGECONFIG_append_raspberrypi4 = " wayland-drm-egl-server-buffer"

With these options and having all requirements installed, the compositor runs via

export XDG_RUNTIME_DIR=/var/run/
export GALLIUM_HUD=fps # gives nice profiling information about fps
export QT_WAYLAND_CLIENT_BUFFER_INTEGRATION=linux-dmabuf-unstable-v1
qmlscene Compositor.qml -platform eglfs

It is important to note that qmlscene internally sets the Qt::AA_ShareOpenGLContexts attribute, which you have to do yourself when running a compositor with your own main file.

Having this compositor running, I could run a simple spinning rectangle application via

export XDG_RUNTIME_DIR=/var/run/
qmlscene SpinningRectangle.qml -platform wayland-egl

Plasma Mobile

The final step though was to get our KDE Demo Setup running. Since there were no live conferences this year, some parts were slightly outdated. So, this was a good opportunity to update our meta layers:

  • meta-kf5 is now updated to KDE Frameworks 5.77.0. Note that we also cleaned up the license statements a few months ago, which was only possible due to much better license information via the SPDX/REUSE conversion of frameworks.
  • meta-kde also gained an update to the latest Plasma release and to the latest KDE Applications release. The number of provided applications — additional to Plasma — is still small, but I also used the opportunity to add some more KDE Edu applications (Blinken, Marble, Kanagram, KHangman, GCompris).

Final Result

Plasma mobile running with two screens \o/

PS: My whole test configuration is available in my (quick and dirty) umbrella test repository, which has all the used meta layers integrated as submodules.

Wednesday, 30 December 2020

Over the last few years, and especially since the Wayland goal vote, we in the Plasma team have been focusing on making our Plasma Wayland experience work at least as well as our traditional Plasma X11 experience does. Today I’d like to wrap up my experience on this front for 2020.

Despite having been working on KDE and even Plasma projects for a long while, I’d never gotten very deep into KWin’s internals. I dipped my toes into it in 2019 when working on the key states feature (the plasmoid that says whether Caps Lock is pressed or not, which we needed because the KDE Slimbook didn’t have a Caps Lock LED). Here I’ll discuss a bit how it evolved over time.


Tablet support

It’s probably been my first big feature contribution to KWin. Since 5.19 you’ve been able to use your tablet to draw things; all features should work fine and life should be good. In the beginning, I was mostly interested in getting a 2-in-1 tablet with a pen to work, hence implementing that one first.
The rest of the spec implementation is pending review here and hopefully should make it into 5.21:

Screen casting support

This is something I’ve worked on mostly with my Blue Systems hat on, but it is still very important for our daily usage of Wayland, especially nowadays when working remotely is more important than ever and sharing our screens and windows is something we need to do on a daily basis.

KWin already supported sharing screens, or rather xdg-desktop-portal-kde did; now we centralised the implementation in KWin and made sure it’s as performant as possible. It was admittedly rather complex to put together, but it allowed me to understand how the system works and gave me better insight into the architecture.

Input Methods

Plasma Mobile is becoming a palpable reality, and input is very important on any touch device. In Plasma Mobile we’d been relying on Qt’s embedded virtual keyboard since the beginning, and while it worked reasonably well, we needed to make it possible to offer more and better solutions. This time, we implemented the unstable input-method protocol, which allowed us to use keyboards implemented in a separate process, hence making it possible to integrate the Maliit keyboard transparently, as well as the weston keyboard if minimalism is your thing.

Maliit keyboard

This, of course, opens the possibility of much more development on top, in terms of other and better virtual keyboards, improvements to the current ones, or the integration of more esoteric kinds of input methods (e.g. ibus, fcitx, emoji keyboards, or even spell checkers and the like).

Developing comfortably

Something that was clear to me as soon as the Wayland goal was shaping up was that we had to find ways to free ourselves a bit from ABI limitations. From the beginning, Wayland interfaces were developed in KWayland under KDE Frameworks 5. This meant that server-side implementations had to stay backwards compatible with certain versions of KF5 and that we couldn’t do certain things. We moved the KWayland Server components into a separate repository that is released with Plasma and that we can develop as we see fit. Note that the KWayland Client classes stay where they always were.

This has allowed us in turn to adopt the usage of qtwaylandscanner, which is a tool that generates certain boilerplate C++ code for us from the xml spec, allowing us to focus on the parts we care about in the implementation. This makes Wayland protocol implementation a bit more straightforward while removing some code. You can see the MRs Adrien and Vlad made doing this if you’re curious about the changes it conveys. Our first protocol implementation to use qtwaylandscanner was the keystate protocol I mentioned earlier.


As it’s important to explain what we do so people are aware of it, I decided to take some time this year to explain the different aspects of our Wayland initiative and, overall, why Wayland makes sense. You can see it explained here; I wouldn’t recommend watching them all, but it could be useful to look at the one that fits your perspective best.

KWin and Wayland

Wayland for Product creators

Wayland for App Developers

Plasma Wayland: donde estamos y como ayudar (in Spanish)


I know there’s a lot of really exciting stuff coming up from the team. If you’re interested, stay tuned. We will be having a sprint early January to discuss different topics from major roadblocks to a light and comfortable experience.

Consider joining us either at the sprint or at the Wayland goal forums and work with us towards a better experience using Free Software and Plasma.

Currently, Calindori works with calendar data provided by files that follow the iCalendar specification, without offering an out-of-the-box way to synchronize your calendars with external sources, e.g. Nextcloud. However, this will change in the future. Specifically, a plan for this feature has been devised, and the first step, a plugin interface that will enable Calindori to use calendar data from various data sources, is already in progress.

Although Calindori works on Linux mobile, desktop and even Android, it was created as the calendar of Plasma Mobile. From this point of view, as soon as a personal information management (PIM) system is available on Plasma Mobile, Calindori will make use of it. However, such a system has not been implemented yet. Various ideas have been discussed in the Plasma Mobile sprints and community meetings. Personally, I am in favor of a sustainable, KDE-community-driven solution that works well on the Plasma desktop while also taking into account the particularities of the mobile world, e.g. low energy consumption, “deep sleep” support, etc.

Calindori desktop Calindori on desktop

That being said, let me describe an online calendar synchronization approach that has worked well for me. My personal workflow involves a personal Nextcloud server (where my calendar is hosted) and various devices that make use of the Nextcloud calendar. As I said in the beginning, Calindori uses iCalendar files for calendar data. Thus, I looked for a mechanism that uses iCalendar files and synchronizes them with Nextcloud. During my research for such a solution, I stumbled upon Vdirsyncer.

As the home page of the project reads, Vdirsyncer is a “command-line tool for synchronizing calendars and address books between a variety of servers and the local file system”. Let me now describe how I managed to configure it and make it work with Calindori.

Certainly, the first necessary step is to install Vdirsyncer. Luckily, it is available in various Linux distributions’ repositories; e.g., on Ubuntu you just have to install the vdirsyncer package. Next, according to the project documentation, a configuration file should be created. So, I created this config file in the ~/.config/vdirsyncer directory:

status_path = "~/.local/share/vdirsyncer/status/"

[pair nc_myusername_caldav]
a = "nc_myusername_caldav_local"
b = "nc_myusername_caldav_remote"
collections = ["from a", "from b"]
metadata = ["color"]
conflict_resolution = "b wins"

[storage nc_myusername_caldav_local]
type = "singlefile"
path = "~/.local/share/vdirsyncer/caldav/myusername/%s.ics"

[storage nc_myusername_caldav_remote]
type = "caldav"
url = ""
username = "myusername"
password.fetch = ["command", "keyring", "-b", "keyring.backends.kwallet.DBusKeyring", "get", "Nextcloud", "myusername:"]

So, in the configuration file, I defined two storages:

  • nc_myusername_caldav_local, the local iCalendar file that Calindori will “consume” and the
  • nc_myusername_caldav_remote, the Nextcloud personal calendar.

The pair nc_myusername_caldav section makes Vdirsyncer synchronize the calendar storages bidirectionally. In case of a conflict (an event or task has changed on both sides since the last sync), I opted for the conflict to be resolved in favor of the remote storage.

With regards to authentication to Nextcloud for user myusername, keyring has been used in order to access the kwallet subsystem via D-Bus.

KDE Wallet KDE Wallet Nextcloud entry

This approach works perfectly for me, since the various passwords that I use daily are stored in the kwallet, which is opened just after I log in to my user session. If this approach does not fit your needs, there are various alternatives.

Then, after creating the ~/.local/share/vdirsyncer/caldav/myusername directory and running:

vdirsyncer discover
vdirsyncer sync

an iCalendar file will be created and populated with your Nextcloud tasks and events.

The next step is to synchronize automatically at certain intervals. So, if you

  • download vdirsyncer.service and vdirsyncer.timer
  • put them into ~/.local/share/systemd/user
  • activate and run the timer
    systemctl --user enable vdirsyncer.timer
    systemctl --user start vdirsyncer.timer

the Nextcloud calendar will be synchronized every 15 minutes.
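The two units plausibly look something like this (a sketch of their shape under the assumption of a oneshot service plus a 15-minute timer, not the verbatim files from the post):

```ini
# ~/.local/share/systemd/user/vdirsyncer.service (sketch)
[Unit]
Description=Synchronize calendars with vdirsyncer

[Service]
Type=oneshot
ExecStart=/usr/bin/vdirsyncer sync

# ~/.local/share/systemd/user/vdirsyncer.timer (sketch)
[Unit]
Description=Run vdirsyncer every 15 minutes

[Timer]
OnCalendar=*:00/15
Persistent=true

[Install]
WantedBy=timers.target
```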

Finally, we need to let Calindori know about the Nextcloud calendar. The process is straightforward: navigate to Settings > External > Add and add the Vdirsyncer calendar file. From now on, tasks and events created on either the Calindori or the Nextcloud side will be synchronized with each other.

Add external calendar Add an external calendar

Finally, let me clarify that this approach is not the way that Calindori and Plasma Mobile are going to offer online calendar synchronization in the future. Nevertheless, Vdirsyncer is a nice, simple utility that enables users to use Nextcloud calendars in Calindori right now. It has worked pretty well for me, and I think the Linux-on-mobile community will find it an interesting solution for calendar synchronization.

Tuesday, 29 December 2020

Update 15.03.2023: Thanks to a contribution, this code now handles replies in a much nicer way. You might want to check out her solution too.

One of the biggest disadvantages of static site generators is that they are static and can’t include dynamic content such as comments.

There are multiple solutions to this problem. You could add a third-party comment engine like Disqus, but this has the drawback of including a third-party tool with a bad privacy record in your website. Another solution would be to host an open-source alternative, but this comes at the cost of a higher maintenance burden, and having to host a database was something we wanted to avoid with a static site generator.

In my opinion, a better solution is to leverage Mastodon and the Fediverse. Mastodon is a decentralized social network that allows people to communicate with each other without being on the same server. It is inspired by Twitter, but instead of tweeting, you write toots.

When publishing an article, you now only need to also write a simple toot linking to it. Mastodon then has a simple API to fetch the replies to your toot. This is the code I wrote for my Hugo-powered blog, but it is easily adaptable to other static site generators. It creates a button to load the comments instead of loading them for every visitor, which decreases the load on your Mastodon server.

{{ with .Params.comments }}
<div class="article-content">
  <p>You can use your Mastodon account to reply to this <a class="link" href="https://{{ .host }}/@{{ .username }}/{{ .id }}">post</a>.</p>
  <p><button id="replyButton" href="https://{{ .host }}/@{{ .username }}/{{ .id }}">Reply</button></p>
  <dialog id="toot-reply" class="mastodon" data-component="dialog">
    <h3>Reply to {{ .username }}'s post</h3>
    <p>With an account on the Fediverse or Mastodon, you can respond to this post. Since Mastodon is decentralized, you can use your existing account hosted by another Mastodon server or compatible platform if you don't have an account on this one.</p>
    <p>Copy and paste this URL into the search field of your favourite Fediverse app or the web interface of your Mastodon server.</p>
    <div class="copypaste">
      <input type="text" readonly="" value="https://{{ .host }}/@{{ .username }}/{{ .id }}">
      <button class="button" id="copyButton">Copy</button>
      <button class="button" id="cancelButton">Close</button>
    </div>
  </dialog>
  <p id="mastodon-comments-list"><button id="load-comment">Load comments</button></p>
  <noscript><p>You need JavaScript to view the comments.</p></noscript>
  <script src="/assets/js/purify.min.js"></script>
  <script type="text/javascript">
    const dialog = document.querySelector('dialog');

    document.getElementById('replyButton').addEventListener('click', () => {
      dialog.showModal();
    });

    document.getElementById('copyButton').addEventListener('click', () => {
      navigator.clipboard.writeText('https://{{ .host }}/@{{ .username }}/{{ .id }}');
    });

    document.getElementById('cancelButton').addEventListener('click', () => {
      dialog.close();
    });

    dialog.addEventListener('keydown', e => {
      if (e.key === 'Escape') dialog.close();
    });

    const dateOptions = {
      year: "numeric",
      month: "numeric",
      day: "numeric",
      hour: "numeric",
      minute: "numeric"
    };

    function escapeHtml(unsafe) {
      return unsafe
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;")
        .replace(/'/g, "&#039;");
    }

    document.getElementById("load-comment").addEventListener("click", function() {
      document.getElementById("load-comment").innerHTML = "Loading";
      fetch('https://{{ .host }}/api/v1/statuses/{{ .id }}/context')
        .then(function(response) {
          return response.json();
        })
        .then(function(data) {
          if (data['descendants'] &&
              Array.isArray(data['descendants']) &&
              data['descendants'].length > 0) {
            document.getElementById('mastodon-comments-list').innerHTML = "";
            data['descendants'].forEach(function(reply) {
              reply.account.display_name = escapeHtml(reply.account.display_name);
              reply.account.reply_class = reply.in_reply_to_id == "{{ .id }}" ? "reply-original" : "reply-child";
              reply.created_date = new Date(reply.created_at);
              reply.account.emojis.forEach(emoji => {
                reply.account.display_name = reply.account.display_name.replace(`:${emoji.shortcode}:`,
                  `<img src="${escapeHtml(emoji.static_url)}" alt="Emoji ${emoji.shortcode}" height="20" width="20" />`);
              });
              const mastodonComment =
                `<div class="mastodon-wrapper">
                  <div class="comment-level ${reply.account.reply_class}"><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 512 512">
                    <path fill="currentColor" stroke="currentColor" d="m 307,477.17986 c -11.5,-5.1 -19,-16.6 -19,-29.2 v -64 H 176 C 78.8,383.97986 -4.6936293e-8,305.17986 -4.6936293e-8,207.97986 -4.6936293e-8,94.679854 81.5,44.079854 100.2,33.879854 c 2.5,-1.4 5.3,-1.9 8.1,-1.9 10.9,0 19.7,8.9 19.7,19.7 0,7.5 -4.3,14.4 -9.8,19.5 -9.4,8.8 -22.2,26.4 -22.2,56.700006 0,53 43,96 96,96 h 96 v -64 c 0,-12.6 7.4,-24.1 19,-29.2 11.6,-5.1 25,-3 34.4,5.4 l 160,144 c 6.7,6.2 10.6,14.8 10.6,23.9 0,9.1 -3.9,17.7 -10.6,23.8 l -160,144 c -9.4,8.5 -22.9,10.6 -34.4,5.4 z" />
                  </svg></div>
                  <div class="mastodon-comment">
                    <div class="comment">
                      <div class="comment-avatar"><img src="${escapeHtml(reply.account.avatar_static)}" alt=""></div>
                      <div class="comment-author">
                        <div class="comment-author-name"><a href="${reply.account.url}" rel="nofollow">${reply.account.display_name}</a></div>
                        <div class="comment-author-reply"><a href="${reply.account.url}" rel="nofollow">${escapeHtml(reply.account.acct)}</a></div>
                        <div class="comment-author-date">${reply.created_date.toLocaleString(navigator.language, dateOptions)}</div>
                      </div>
                      <div class="comment-content">${reply.content}</div>
                    </div>
                  </div>
                </div>`;
              document.getElementById('mastodon-comments-list').appendChild(DOMPurify.sanitize(mastodonComment, {'RETURN_DOM_FRAGMENT': true}));
            });
          } else {
            document.getElementById('mastodon-comments-list').innerHTML = "<p>No comments found</p>";
          }
        });
    });
  </script>
</div>
{{ end }}

You can also find some CSS rules on my GitLab.

This code uses DOMPurify to sanitize the input, since it is not a great idea to load data from third-party sources without sanitizing it first. Also, thanks to chrismorgan, the code was optimized and is more secure.

In a blog post, I can now add the following information to the frontmatter to make comments appear magically:

comments:
  username: carlschwan
  id: 109774012599031406

Update from the 29th Jan 2023: Adapted the code to work with Mastodon 4.0.

Dear digiKam fans and users, just a few words to inform the community that 7.2.0-beta2 is out and ready to test, four months after the 7.2.0-beta1 release. After integrating the students’ code on faces management over the summer, we have worked to stabilize the code and respond to the many pieces of user feedback about the usability and performance of face tagging, face detection, and face recognition, already presented in July.

Monday, 28 December 2020

Chess players in KDE community

Some of us have started a KDE community chess players team on Lichess, mostly as a place to find people who are interested in chess and occasionally playing various variants of it.

If you want to join: KDE Chess Players on Lichess.

Knights animations

Personally I rarely use Knights, and instead use online chess platforms like Lichess (which, by the way, is another great FOSS project). Recently I was made aware that the animations in Knights are quite distracting.

See the following example video:

I initially wanted to get rid of the animation completely, but then I realised that the animation speed is a configuration option instead of something hard-coded. I proposed a patch to change the default animation speed to instant.

My next idea is to change the animation code to animate only the movement of the single piece that is moving; Knights currently animates all of the pieces from the center of the board. That way, users who prefer animation get something sensible.

Wishlist for knights

It would be quite nice if Knights had support for Lichess, to be able to play online games. Knights already supports FICS for playing online. But it would be quite nice to also be able to use Knights as a client/front-end for Lichess, which offers an extensive API to interact with it.

I will probably try to look into this in the future, but in my opinion the Lichess web client already works great.

Day #2 of the #100DaysToOffload series.

It could already be read somewhere that 2021 will be the year of Linux on the desktop :-D

Fine with me. Just to support that, I did a little hackery over XMas to improve the support for ownCloud's virtual file system on the Linux desktop.

What are Virtual Files?

In professional use cases, users often have a huge amount of data stored in ownCloud. Syncing it all to the desktop computer or laptop would be too costly in bandwidth and hard disk space. That is why most mature file sync solutions came up with the concept of virtual files: users have the full structure of directories and files mirrored to their local machines, but only see placeholders for the real files in the local file manager.
The files themselves are not on the disk; they get downloaded on demand.

That way, users virtually see the full set of data, but save the time and space of files they will never need on the local system.

The ownCloud Experience

ownCloud innovated on that topic a while ago and meanwhile supports virtual files, mainly on Windows, because there is an elaborate system API to work with placeholder files. As we do not have this kind of API on Linux desktops yet, the ownCloud desktop developers implemented the following solution for Linux: the virtual files are 1-byte files with the name of the original file plus a suffix ".owncloud" to indicate that they are virtual.

That works, yet has one downside: most file managers do not display these placeholder files nicely, because they lose the MIME type information. Also, downloading files and freeing up the space of downloaded files is not integrated. To summarize, a building block is missing to make this useful on Linux.
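The naming scheme makes it easy to recover what stock file managers lose. A minimal sketch of the logic a patched file manager needs, assuming only the ".owncloud" suffix convention described above:

```python
import mimetypes

SUFFIX = ".owncloud"

def is_virtual(name: str) -> bool:
    """ownCloud marks its Linux placeholders with a trailing ".owncloud"."""
    return name.endswith(SUFFIX)

def display_info(name: str) -> tuple[str, str]:
    """Return the name and MIME type a file manager should display.

    For a placeholder like "holiday.jpg.owncloud" we strip the suffix
    first, so the MIME type is guessed from the real file name instead
    of being lost, which is exactly what stock file managers get wrong.
    """
    real = name[:-len(SUFFIX)] if is_virtual(name) else name
    mime, _ = mimetypes.guess_type(real)
    return real, mime or "application/octet-stream"
```

With that in place, a placeholder can be shown with its real name, the right icon, and a "virtual" badge, as in the patched Elokab-fm below.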

Elokab-files-manager with ownCloud

I was wondering whether it was possible to change an existing file manager to support the idea of virtual files. To start experimenting, I found Elokab-files-manager, which is a very compact yet feature-rich file manager built on Qt with very few other dependencies. Perfect to start playing around with.

In my GitHub fork you can see the patches I came up with to make Elokab-fm understand ownCloud virtual files.

New Functionality

The screenshot shows the changes in the icon view of Elokab-fm.

Screenshot Elokab-fm

Screenshot of patched Elokab-files-manager to support ownCloud Virtual Files.

To make that possible, Elokab-fm now pulls some information from the ownCloud sync client's config file and connects to the sync client via a local socket to exchange some information. That means the sync client needs to be running for this to work.
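For the curious, the sync client's local socket speaks a simple line-based protocol that shell integrations use to query per-file sync status. The exact socket path and command names below are my assumptions based on that shell-integration protocol, not taken from the patches; this sketch only builds and parses the protocol lines, without opening a real socket:

```python
# A file manager would connect to the sync client's local socket, send one
# query line per visible file, and read back status lines. The command and
# reply formats shown here are assumptions for illustration.

def status_query(path: str) -> bytes:
    """One request line asking the sync client for a file's sync state."""
    return f"RETRIEVE_FILE_STATUS:{path}\n".encode()

def parse_status(line: str) -> tuple[str, str]:
    """Split a "STATUS:<state>:<path>" reply into (state, path)."""
    tag, state, path = line.rstrip("\n").split(":", 2)
    assert tag == "STATUS"
    return state, path
```

The returned state is what drives the overlay icons: a synced file gets the plain icon, while a placeholder gets the little cloud badge.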

Directories that are synced with ownCloud now show a cloud overlay in the center (1).

The placeholder files (2), which are not present on the local hard drive, indicate that by showing a little cloud icon at the bottom right. However, unlike before, they are displayed with their correct name and MIME type, which already makes this much more useful.

Files that are on the local disk, such as the image (3), show their thumbnail as usual.

In the side panel (4) a few details are added: the blue box at the bottom indicates that the file manager is connected to the sync client. For the selected virtual file (2), it shows a button that downloads the file when clicked, turning it into a non-virtual, local file. There is also an entry in the context menu to achieve that.


This is just a proof of concept and a little XMas fun project. There are bugs, and the implementation is not complete. And maybe Jürgen's idea of a FUSE layer is the better approach, but anyway: it shows what is possible with virtual files on Linux, too.

If you like the idea, please let us know, or send a little PR if you like. We do not want to miss providing our share of the year of the Linux desktop, right? ;-)

Building it from my GitHub branch should be fairly easy, as it only depends on Qt.

For openSUSE users, I will provide some test packages in my home project on the Open Build Service.

Saturday, 26 December 2020

So today I did some housekeeping on my blog:

  • Moving it to a new server; the old server was based on Ubuntu 16.04 Xenial Xerus, which is quite old now (I still get security updates, but some packages are quite out of date and need quite a few PPAs to manage)
  • Updating it to the latest Minima theme and removing some of the custom templates I had
  • Removing the Disqus plugin, as I realized I ultimately get fewer and fewer comments there, and it can be invasive to the privacy of this blog's readers
  • Killing Google Analytics on my blog; this is something I had added back in 2015 without much thought, but in fairness I had not looked at the dashboards for this site for quite a while now. I will also take care to delete the data from the Google Analytics dashboard

I had planned to write a different blog post, but this housekeeping took most of my time, so I will finish that post tomorrow.

On a separate topic, I realize my blog has been dormant for a year now. The last blog post I made was Plasma Mobile as a daily driver, about when I was stuck in Europe with no working phone except a PinePhone. While this blog was mostly dormant, I have been writing and contributing to various posts on the Plasma Mobile blog.

I still have several blog posts stuck in the draft state (looking at the git status of my blog repo, 7 draft posts at the moment). I feel like making a habit of writing blog posts will help me finally finish them 🙃. That is why I am signing myself up for the #100DaysToOffload challenge by Kev Quirk. I know I will probably end up writing 25-30 posts rather than 100, but as the guidelines say, that is fine:

Publish 100 new posts in the space of a year. You don’t need to publish a post every 3 days - if you want a week off, that’s fine. If it comes to the end of the year and you have only published 60 posts, that’s also fine. Just. Write.

That will already be much better compared to 0 posts in a whole year 😜. I will also change my RSS feed URL to a tag-specific URL so that Planet KDE does not get spammed with off-topic blog posts, if I make any on this blog 🙂.

Day #1 of the #100DaysToOffload series.