29 June, 2020

This week, I spent most of my time testing the Rocs graph-layout-plugin. I needed to test the method that applies the force-based layout algorithm to a graph; its signature is the following.

static void applyForceBasedLayout(GraphDocumentPtr document, const qreal nodeRadius,
                                  const qreal margin, const qreal areaFactor,
                                  const qreal repellingForce, const qreal attractionForce,
                                  const bool randomizeInitialPositions, const quint32 seed);

Unfortunately, not much is guaranteed by this method. Basically, given a graph and some parameters, it tries to find a position for each node such that, if we draw the graph using these positions, it will look nice. What does it mean for the drawing of a graph to look nice? How can we test it? This is a subjective concept, and there is no clear way to test it directly. But there is still something that can be done: non-functional tests.

Before going to the non-functional part, I decided to deal with the easy and familiar functional tests. I deliberately left my description of the method imprecise: there is actually at least one guarantee it should provide. If we draw each node as a circle of radius nodeRadius centered at the position calculated by the method, these circles should respect a left margin and a top margin of length margin. This was a nice opportunity for me to try the QtTest framework. I wrote a data-driven unit test and everything went well.
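The margin guarantee can be checked with a small predicate. This is a plain C++ sketch of the idea (the real test is a QtTest data-driven test; the struct and function names here are illustrative, not the ones in Rocs):

```cpp
#include <vector>

struct Point { double x, y; };

// A node drawn as a circle of radius nodeRadius, centered at a
// calculated position, must stay at least `margin` away from the
// left and top borders of the drawing area.
bool respectsMargins(const std::vector<Point>& centers,
                     double nodeRadius, double margin) {
    for (const Point& c : centers) {
        if (c.x - nodeRadius < margin) return false; // left margin violated
        if (c.y - nodeRadius < margin) return false; // top margin violated
    }
    return true;
}
```

In the data-driven version, each row of test data would supply a graph, the layout parameters, and the seed, and the test would run the layout and assert this predicate on the resulting positions.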

Back to the non-functional part, I decided to write a quality benchmark. The idea is to measure some aesthetic criteria of the layouts generated for various classes of graphs. The metrics implemented so far are: the number of edge crossings, the number of edges that cross some other edge, the number of node intersections, and the number of nodes that intersect some other node. Although there is no formal definition of a nice layout, keeping the values of these metrics low seems desirable. Currently, I have implemented generators for paths, circles, trees, and complete graphs. For each of these classes, I generate a number of graphs, apply the layout algorithm a certain number of times to each of them, and calculate summary statistics for each of the considered aesthetic metrics.
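The edge-crossing metric boils down to a pairwise segment-intersection test. Here is a self-contained sketch of how such a metric could be computed (again with illustrative names, and ignoring degenerate collinear cases for brevity):

```cpp
#include <cstddef>
#include <vector>

struct Point { double x, y; };
struct Edge  { Point a, b; };

// Cross product of (b - a) x (c - a): its sign tells on which side
// of the line through a and b the point c lies.
static double cross(const Point& a, const Point& b, const Point& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Two segments properly cross when each one's endpoints lie on
// opposite sides of the other.
static bool segmentsCross(const Edge& e, const Edge& f) {
    const double d1 = cross(e.a, e.b, f.a);
    const double d2 = cross(e.a, e.b, f.b);
    const double d3 = cross(f.a, f.b, e.a);
    const double d4 = cross(f.a, f.b, e.b);
    return d1 * d2 < 0 && d3 * d4 < 0;
}

// Count crossings over all unordered pairs of edges -- O(m^2), which
// is fine for a benchmark on small generated graphs.
int countEdgeCrossings(const std::vector<Edge>& edges) {
    int crossings = 0;
    for (std::size_t i = 0; i < edges.size(); ++i)
        for (std::size_t j = i + 1; j < edges.size(); ++j)
            if (segmentsCross(edges[i], edges[j]))
                ++crossings;
    return crossings;
}
```

The node-intersection metric is even simpler: two nodes intersect when the distance between their centers is less than twice nodeRadius.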

For now, there is only one layout algorithm, which can be applied to any graph. The idea of the quality benchmark is to compare it to the other layout algorithms I will implement. But that does not mean the quality benchmark is currently useless. Actually, the results I got were quite revealing. The good part is that there were no intersections between nodes. The results for edge crossings are not so good, though. Despite my efforts in tuning the parameters, the algorithm can fail to eliminate all edge crossings even for very simple graphs such as paths and circles. Fortunately, choosing parameter values specifically for a graph can sometimes help, and the user can do that in the graph-layout-plugin user interface.

It has been over a month since the start of GSoC. Phase #1 evaluations start today. This post summarises all the work I did during phase #1.

Implementing the MyPaint Brush engine

This involved writing the core brush engine classes: KisMyPaintBrush, KisMyPaintSurface, MyPaintOpPlugin, KisMyPaintOp and KisMyPaintOpFactory. At a very high level, all I had to do was override the draw_dab and get_color methods of MyPaintBrushSurface with my own versions to draw over a Krita paint device (Krita's surface class). While working on this, I faced a problem with lagging that I discussed in previous posts. With the help of my mentors, I was able to solve it and bring the lag within acceptable limits. Then I went on to fix some bugs that occurred with some specific presets. The brush engine mostly seems to work fine now =]



Loading MyPaint Brushes

This was the second most important item in my TODO. The MyPaint brush engine would be of no use if it could not load the MyPaint brushes installed on the system. Also, testing the brush engine was a pain when all the testing had to be based on the default brush. This was much needed from the very start of the project.

Loaded MyPaint brushes

Hopefully, I will pass this evaluation. Don't know. 

Till then,
Good Bye :)

OpenUK Awards are nearly closed. Do you know of projects that deserve recognition?
Entries close at midnight UTC tomorrow.
Categories: individual, young person, or an open source software, open hardware, or open data project or company.
The awards are open to individuals resident in the UK in the last year, and to projects and organisations with notable open source contributions from individuals resident in the UK in the last year.

We are happy to announce the release of the Qt Visual Studio Tools version 2.5.2. Installation packages are now available at the Visual Studio Marketplace.

KDE repositories are switching over to SPDX identifiers, following the REUSE specification. This machine-readable form of licensing information pushes for more consistency in licensing and in how licensing information is recorded.

Long, long ago I wrote some kind of license-checker for KDE sources, as part of the English Breakfast Network. The world has moved on since then, and supply chains increasingly want to know licensing details: specifically, what exact license is in use (avoiding the variations in wording that have cropped up) and what license-performative actions are needed exactly (like, in the BSD license family, “reproduce the Copyright notice above”).

Andreas Cord-Landwehr has been chasing license information in KDE source code recently, and has redone the tooling and overall made things better. Changes are now showing up via merge requests on KDE Invent, our GitLab instance.

There is one minor thing of note which I’ve discussed with him, and which bears upon the Fiduciary License Agreement (FLA) that KDE e.V. has.

The FLA is a voluntary license agreement that individual contributors can enter into, which assigns such rights (remember, Free Software leverages Copyright!) as are assignable, to the fiduciary, and the fiduciary grants a broad license back. This leverages Copyright laws again, to ensure that the fiduciary can act as copyright holder, while leaving the original contributor with (almost) all the original possibilities for using and sharing the contribution.

I’ll be giving a short talk about the FLA at this year’s online KDE Akademy, so I’ll skip a bunch of general background information.

Since I signed the FLA quite some time ago, with the intent that KDE e.V. is the fiduciary – and therefore the holder of my Copyrights in a bunch of KDE code – Andreas has been converting my statements of copyright like this:

SPDX-FileCopyrightText: 2010 KDE e.V. <>
SPDX-FileContributor: 2010 Adriaan de Groot <>

I don’t hold this copyright: KDE e.V. does. But I’m still morally the author and contributor in this file, so my name is in the file. This is a combination of SPDX tags you’ll probably see more of in the (gradual) conversion of KDE sources to using SPDX tags.

Many other projects also use SPDX statements and follow the REUSE specification: Calamares (a non-KDE project where I’m the maintainer) is slowly switching over, and I have some other projects elsewhere that are following suit. In greenfield (new) code it’s easy to stick to REUSE from the start, but retro-fitting it to an existing codebase can lead to a lot of tedious busywork, so none of my other projects have gone over whole-hog – none of them are “REUSE compliant”, so to speak.

I admire, and salute, Andreas for his dedication to improving the quality of KDE’s codebase in this (tedious and busyworky) way.

Edit 2020-06-29: salute the right name


28 June, 2020

Hello KDE people. First-phase evaluations run from today until the 3rd of July. It has been a couple of weeks since I last posted about my project. I was quite busy writing the code implementing the documentation panel for the various backends supported by Cantor. In the last post I explained how I generated the help files, namely qhc (Qt Help Collection) and qch (Qt Compressed Help), from the documentation's source files. In today's post I will explain how I utilized Maxima's help files to actually display help inside the Cantor application itself. So here are the things done:

Things Done

1. Implementation of Documentation Panel
Cantor has dedicated panels, such as the Variable Manager and a general Help panel. So I have created a new panel dedicated to displaying the documentation of the backend which is currently active. To implement it, I borrowed some basic code from cantor/src/panelplugins and added widgets similar to those of any documentation browser: tabs for displaying the Contents, Index, and Search views of the documentation. These widgets are populated by loading the help files through a QHelpEngine instance. The Search functionality is yet to be implemented.

2. Display of documentation inside the Documentation Panel
I have kept the design as simple as possible. The display area is divided into two halves: one half is the QTabWidget which displays the widgets listed above, and the other half is a QWebEngineView, which I am using as a display area for the contents of the documentation. To make the display respond to events on the QTabWidget, I connected the signals of the index and content widgets to a slot which displays the contents in the view.

3. Context Sensitive Search Capabilities

I have successfully implemented context-sensitive search. While in the worksheet, the user can now select any keyword and press F2 to show the relevant documentation on that topic. The QWebEngineView updates in real time.

User selected 'sin' and then pressed F2 to forward to the related documentation
On pressing the F2 key while the 'sin' keyword was selected, the index was filtered and the corresponding topic's documentation was shown in the view area.

Those interested in trying it out and/or playing with the code can clone my repository and build it from source. Here is the link to the repository I am working on.

That's all folks for this time, until then, Good bye!!
Part 1 -

With the first month of the coding period almost over, I have been working on completing the first part of my GSoC project.

I have been porting the website to Hugo. The website is very old and has lots and lots of pages. It is even older than me! I have been working on porting these pages to markdown, removing the old PHP syntax, and adding improvements to the design, responsiveness, and accessibility of the website.

I have completed porting the announcements up to the year 2013. I ported the year 2014 as well, but I replaced the formatted links with normal ones without realising it would break the translations for those pages. So I may have to port these announcements again :( . KDE provides a pot file to its translators, and they provide translations in a po file in return. We use a custom extraction script to extract the strings to be translated from the markdown files. The translation tooling is smart enough to ignore some changes to the strings, but the changes I made to the links would break it. It also doesn't work well with HTML that isn't inline. I will keep these things in mind in the future.

I am also working on automating (RegEx is awesome!) much of the work involved in porting these files, which may make up for the time lost.

About my Project

The project involves improving KDE Web Infrastructure. KDE has a lot of websites and some of them like the main website could use an update.

The first part of the project involves porting the website to Hugo, a Go-based static site generator. The website is very old and thus contains a lot of pages. This part involves porting most of the pages to markdown so as to make the website faster and easier to develop.

The second part of the project involves updating the Season of KDE website. The goal is to use more modern tooling and add some new features. This is part of the transition of KDE websites from LDAP to OAuth-based authentication. OAuth is a much more modern approach to authentication and would solve some headaches with the current authentication system.

Current Working Repository: repo

If you would like to join in, say hi at #kde-www on IRC or Telegram.

Hi everyone! It’s been a while since my last post, and during this period I continued adding MMS support to the KDE Connect SMS app. After the addition of MMS support in the Android app, my next step was to enable the desktop SMS client to allow users to reply to multi-target messages. I had some discussions with my mentors about the structure of the network packets for sending multimedia files from Android to the desktop. The attachment field had to be optional, and replacing the current packet type entirely was not feasible, keeping in mind backward compatibility for the desktop app. Simon suggested the nice idea of converting the thumbnails into Base64-encoded strings and adding them to the network packet. This avoided replacing the entire mechanism for pushing messages to the desktop.
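Base64 is what makes this work: it turns arbitrary binary data into plain ASCII text that can travel inside a JSON packet field. A minimal encoder, just to show the mechanics (the actual app would of course use the platform's own Base64 APIs rather than hand-rolling one):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Encode raw bytes (e.g. a thumbnail image) as a Base64 string.
// Each group of 3 input bytes becomes 4 output characters; the
// final group is padded with '=' as needed.
std::string base64Encode(const std::vector<std::uint8_t>& data) {
    static const char table[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    std::size_t i = 0;
    for (; i + 2 < data.size(); i += 3) {
        const std::uint32_t n =
            (data[i] << 16) | (data[i + 1] << 8) | data[i + 2];
        out += table[(n >> 18) & 63];
        out += table[(n >> 12) & 63];
        out += table[(n >> 6) & 63];
        out += table[n & 63];
    }
    if (i + 1 == data.size()) {            // one trailing byte
        const std::uint32_t n = data[i] << 16;
        out += table[(n >> 18) & 63];
        out += table[(n >> 12) & 63];
        out += "==";
    } else if (i + 2 == data.size()) {     // two trailing bytes
        const std::uint32_t n = (data[i] << 16) | (data[i + 1] << 8);
        out += table[(n >> 18) & 63];
        out += table[(n >> 12) & 63];
        out += table[(n >> 6) & 63];
        out += '=';
    }
    return out;
}
```

The resulting string can simply be stored in an optional packet field; the receiving side decodes it back into image bytes.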

After successfully completing and testing the code in Android Studio, I added support for receiving and displaying the optional attachment object on the desktop side. The desktop side was mostly straightforward, except for transferring the QImage from C++ to QML, but in the end I figured it out.

This brings us to my last task of this period: requesting the original attachment file when the user clicks on the thumbnail. The click event generates an attachment request packet, which is sent to the remote device. On receiving the request packet, Android fetches the requested attachment file from the MMS database and sends it to the desktop. The desktop then downloads the file and stores it locally for future reference.

This would not have been possible without the guidance and support of my mentors Simon, Philip, Nicolas and Piyush. 🙂

Current Version (Only supports text)

NO Multimedia Support

New Version (Multimedia Support)

Next Step

Now my next task will be to work on the UI of the app. Mostly the chat elements and other big UI changes will be coming soon, with an improved look and feel!

Today is the day! — Nitrux 1.3.0 is available to download

We are pleased to announce the launch of Nitrux 1.3.0. This new version brings together the latest software updates, bug fixes, performance improvements, and ready-to-use hardware support.

Nitrux 1.3.0 is available for immediate download.

What’s new

  • We’ve upgraded the kernel to version 5.6.0-1017.

  • We’ve updated KDE Plasma to version 5.19.2, KDE Frameworks to version 5.71.0, KDE Applications to version 20.04.02.
  • We’ve updated the GTK theme to match the Kvantum theme and the Plasma color scheme more closely.
  • We’ve also updated the SDDM theme and the Plasma look-and-feel package (splash and lock screen) with the colors of the color scheme.
  • We’ve added more wallpapers to our default selection, including our new default wallpaper Opal.

  • Inkscape is updated to version 1.0, and Firefox to version 77.0.1.
  • We’ve updated the Nvidia driver and its libraries to version 440.100.
  • appimage-cli-tool is replaced with its successor, appimage-manager, which is rewritten in Go.
  • We’ve added a new AppImage to the system, Wine.

  • We’ve added a Day/Night wallpaper plugin, which allows users to simulate the color transition of the background to match the daylight. Thanks to dark-eye for the wallpaper plugin.

  • We’ve changed the default font from Chivo to Fira Sans for a more modern look and better readability.

Known issues

  • Resizing a window from the top-right corner doesn’t work; this is a problem with the Aurorae decoration. We will be using a native QStyle window decoration to replace it in future releases.
  • When using the tiling windows feature (Krohnkite), the system tray plasmoids are treated as regular windows.


  • OpenGL acceleration is used by default. If you use Nitrux in a VM, open System Settings > Monitor > Compositor and select XRender, in addition to disabling desktop effects like Blur.

The post Changelog: Nitrux 1.3.0 appeared first on Nitrux — #YourNextOS.

Tomorrow (29/06/2020) begins the first evaluation of Google Summer of Code 2020. Last GSoC, when I was participating as a student, I wrote in my final report a set of future proposals that could be done in the ROCS graph IDE (Section “What’s Next?”). This year, some students got interested in these ideas, but only one could enter the program (we didn’t have enough mentors for more than one project). Here is the list that I proposed:

  • Implementation of a better algorithm to position the nodes and edges on the plane. I can recommend the use of Force-directed graph drawing algorithms, because they are usually fast and are physics-based;
  • Creation of a better interface workflow for the program. I can recommend something like the Possible New Configuration image. This configuration considers that the user will spend most of the time programming, so it creates a better writing space, while the view has a more square shape, which is (in my opinion) better for visualization;
  • Remodeling of how each graph is represented in the JavaScript code. The type system is good for providing a global configuration, but I think it falls apart when dealing with individual edges and the dynamic creation of subgraphs and new edges/nodes (which is needed in some algorithms);
  • Rewrite of the view to deal with some problems: the space available for the graphs is really limited, mouse clicks do not always work correctly, and navigation is poor;
  • Change of how icons are used by ROCS, as some icons are not available across all systems.

From this list, Dilson decided to tackle the first one. Here is his proposal. Most of the best algorithms involve some type of heuristic inspired by physical motion in the graph, being really fast and good on most graph classes (although there are specialized algorithms for some classes). You can see more of his work here. He is doing a great job, showing a good understanding of the algorithms and methods while giving a great amount of thought to the testing process (as it is not trivial to test randomized algorithms).

For now, he has implemented a layout algorithm that is an adaptation of the Fruchterman-Reingold algorithm and works only on connected graphs, in a special plugin that controls each physical force inside the model. I will be giving occasional updates on his work in this blog. Please check his blog for more details if interested. :)
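For readers unfamiliar with Fruchterman-Reingold, here is a very condensed sketch of one iteration: every pair of nodes repels with strength k²/d, every edge attracts its endpoints with strength d²/k, and the total displacement of each node is capped by a "temperature" that cools down across iterations. Names and parameters are illustrative, not the ones used in Rocs:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

struct Vec { double x = 0, y = 0; };

void fruchtermanReingoldStep(std::vector<Vec>& pos,
                             const std::vector<std::pair<int, int>>& edges,
                             double k, double temperature) {
    std::vector<Vec> disp(pos.size());

    // Repulsive force k^2/d between every pair of nodes.
    for (std::size_t i = 0; i < pos.size(); ++i)
        for (std::size_t j = i + 1; j < pos.size(); ++j) {
            const double dx = pos[i].x - pos[j].x, dy = pos[i].y - pos[j].y;
            const double d = std::max(std::hypot(dx, dy), 1e-9);
            const double f = k * k / d;
            disp[i].x += dx / d * f; disp[i].y += dy / d * f;
            disp[j].x -= dx / d * f; disp[j].y -= dy / d * f;
        }

    // Attractive force d^2/k along every edge.
    for (const auto& [u, v] : edges) {
        const double dx = pos[u].x - pos[v].x, dy = pos[u].y - pos[v].y;
        const double d = std::max(std::hypot(dx, dy), 1e-9);
        const double f = d * d / k;
        disp[u].x -= dx / d * f; disp[u].y -= dy / d * f;
        disp[v].x += dx / d * f; disp[v].y += dy / d * f;
    }

    // Move each node, limiting the displacement by the temperature.
    for (std::size_t i = 0; i < pos.size(); ++i) {
        const double d = std::max(std::hypot(disp[i].x, disp[i].y), 1e-9);
        const double step = std::min(d, temperature);
        pos[i].x += disp[i].x / d * step;
        pos[i].y += disp[i].y / d * step;
    }
}
```

Running this for a few dozen iterations while lowering the temperature usually yields the pleasing, physics-like layouts the blog posts above describe.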