July 14, 2017


The first coding period is complete, and this is the first coding period report blog. I hope you’ll find it detailed enough!

Task 1: Review current whole database implementation including database schema hosted as XML.


Since I’ve worked with databases before, things seemed pretty clear this time. Across the three DBs, my main focus was on:

  • Understanding how to create a new DB from scratch
  • Understanding how to write the common code fragments for MySQL/SQLite
  • Reading the DB config file (dbconfig.xml.cmake.in) thoroughly to understand how the database schema is set up
  • Analyzing the implementation of the Thumbnails DB, in particular, to create a similar one for the similarity feature

Task 2: Isolate tables and schemas relevant to similarity fingerprints.


This task involved isolating existing tables and figuring out the new tables to be created, relevant to the similarity DB.

  • ImageHaarMatrix needs to be moved to the new database exactly as it is, without any change. I think it’s not required in the core DB. (The Haar algorithm is used to generate image fingerprints in digiKam.)
  • A new ImageSimilarity table is required to store the similarity value of each pair of images. The schema would be:

imageID1(int) | imageID2(int) | value(double)

No other tables are required currently to be moved from Core DB.
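As a sketch, the new table could be created with DDL like the following (SQLite flavour; the column names mirror the schema above, while the table constraints are my assumption, not the final digiKam schema):

```sql
-- Hypothetical DDL for the new similarity database (a sketch, not the
-- final digiKam schema). ImageHaarMatrix moves over unchanged; the new
-- ImageSimilarity table stores one similarity value per image pair.
CREATE TABLE ImageSimilarity (
    imageID1 INTEGER NOT NULL,
    imageID2 INTEGER NOT NULL,
    value    DOUBLE  NOT NULL,
    PRIMARY KEY (imageID1, imageID2)
);
```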

Thank you for reading. I’ll keep you updated with my next tasks in upcoming blogs!

Akademy 2017 is coming close. The schedule of talks (Saturday, Sunday) is now posted, and the community wiki for organizing things is slowly filling up.

The workshops and lightning talks and BoFs are being planned, too. I’m glad Anu Mittal has mentioned her QML + JS workshop; it’s a great topic for getting started with application development. QML is something I’ve never gotten into, but should, so I’ve penciled this workshop into my schedule as well.

July 13, 2017

The last point release of the 17.04 cycle is out with crash and compilation fixes and minor interface improvements:

In comparison to previous versions, this was the least exciting development cycle in terms of new features, since all focus has been on the code refactoring, which will bring more stability and enable new features. Don’t miss the next Café to keep track of the progress and share your thoughts if you like.


It’s summer — a bit rainy, but still summer! So it’s time for a summer sale — and we’ve reduced the price for the Made with Krita 2016 art books to just €7,95. That means that shipping (outside the Netherlands) is more expensive than the book itself, but it’s a great chance to get acquainted with forty great artists and their work with Krita! The book is professionally printed on 130 grams paper and softcover bound in signatures. The cover illustration is by Odysseas Stamoglou. Every artist is showcased with a great image, as well as a short bio.

Made with Krita 2016
On sale: €7,95
Forty artists from all over the world, working in all kinds of styles and on all kinds of subjects, show how Krita is used in the real world to create amazing and engaging art. The book also contains a biographical section with information about each individual artist. Made with Krita 2016 is now on sale: €7,95, excluding shipping. Shipping is €11,25 (€3,65 in the Netherlands).

Get Made with Krita 2016

I guess this is public now – Sergey invited me to give the keynote at C++ Siberia 2017 which will be held in Tomsk in August (25th and 26th).

This will require some serious mental and physical preparation. I’ve never had a talk two hours long. :)

If you are in the area and you speak Russian (my keynote will be in English since, despite having a Russian-sounding name, I don’t speak Russian), come and join us – tickets are available at the conference website.


Here’s a step-by-step guide to getting a machine with FreeBSD 11 in it, running X, and KDE Plasma 5 Desktop and KDE Applications. It’s the latest thing! (Except that 11-STABLE is in the middle of the pack of what’s supported .. but the KDE bits are fresh. I run 10.3 with KDE4 or Plasma 5 on my physical machines, myself, so the FreeBSD version isn’t that important except that packages are readily available for 11-STABLE, not for 10-STABLE.)

TL;DR: get FreeBSD + X running; switch to mouf.net packages; pkg install kde.

Image of FreeBSD boot screen

  • Download an 11.0 installation image (e.g. the bootonly ISO).
  • Build a machine with at least 32GB of disk, 4GB of RAM, and one or more processors. I had an AMD A10-5745M board lying around, which is an €85 board that just needs memory and a disk.
  • Put the installation CD in the drive, or plug in the memstick — whatever.
  • Boot it.

It boots to the extremely old-school FreeBSD installer (the irony is not lost on me, relative to what I do the rest of the week).

Installation selection

  • Go through the installer. Install lib32, ports and src, because you’ll need those later.
  • If you have the bootonly ISO, you’ll need to configure networking.
  • Which filesystem layout you pick doesn’t really matter either. I go for ZFS with 1-disk striping (i.e. no redundancy) because then I get ZFS snapshots later, which is convenient for messing around with the system state and possibly starting over.

Wait for the install to finish.

  • Set the system timezone.
  • Set a root password.
  • Disable services you don’t need (I generally disable remote syslog and sendmail; enable clear /tmp. The security-options are up to you.)
  • Create a user.
  • Reboot. While the machine reboots, eject the CD or unplug the memstick. (Have I mentioned I really like KDE Neon’s feature of auto-ejecting the disk?)

  • Log in as root, and do post-installation updates, to wit:
    portsnap fetch extract update
    freebsd-update fetch install
  • Install an initial-but-minimal working system, starting with pkg(8), the packaging system itself:
    pkg bootstrap
  • Developer’s basic toolkit (and I prefer bash for an interactive shell):
    pkg install git bash gmake cmake pkgconf gettext-tools binutils
    echo fdesc /dev/fd fdescfs rw 0 0 >> /etc/fstab
    echo proc /proc procfs rw 0 0 >> /etc/fstab
  • An X Server and a backup X11 environment (ancient):
    pkg install xorg xterm twm
  • Desktop technologies (modern):
    pkg install hal dbus
    echo hald_enable=YES >> /etc/rc.conf
    echo dbus_enable=YES >> /etc/rc.conf
  • Clean up
    pkg autoremove
    pkg clean
    rm /usr/ports/distfiles/*
  • Reboot again.

Log in as your regular user and run startx.

Image of TWM

Aren’t you glad you installed twm? Remember, exiting the top-left xterm will exit your X session.

  • If running with ZFS, it’s a good idea to snapshot now, just so you can easily roll back to the it-works-with-basic-X11 setup you have now.
    zfs snapshot -r zroot@x11
  • Now swap out the default FreeBSD package repository for the KDE-FreeBSD community one. This is also documented on the Area51 page.
    mkdir -p /usr/local/etc/pkg/repos
    cd /usr/local/etc/pkg/repos
    cat > FreeBSD.conf <<EOF
    FreeBSD: { enabled: no }
    EOF
    cat > Area51.conf <<EOF
    Area51: {
      url: "http://meatwad.mouf.net/rubick/poudriere/packages/110-amd64-kde/",
      priority: 2,
      enabled: yes
    }
    EOF
  • Tell pkg(8) to refresh itself (it may install a newer pkg, too), then install something nicer than xterm + twm, and then do some post-install configuration:
    pkg update
    pkg install konsole plasma5-plasma-desktop
    echo cuse_load=YES >> /boot/loader.conf
    echo webcamd_enable=YES >> /etc/rc.conf
  • Log in as your test user, and set up .xinitrc to start Plasma 5 (the KDE= line is my assumption for where startkde lives):
    cat > .xinitrc <<'EOF'
    #! /bin/sh
    KDE=/usr/local/bin/startkde
    /usr/local/bin/xterm -geometry +0+0 &
    test -x $KDE && exec /usr/local/bin/ck-launch-session $KDE
    exec /usr/local/bin/twm
    EOF
    chmod 755 .xinitrc

If you really want, you can run startx, but this isn’t the complete Plasma 5 desktop experience .. and KDE Applications are not installed, either. So you get a bare xterm (useful to kill X or start konsole) and kwin and not much else. Good thing that getting the rest of KDE Plasma 5 Desktop and KDE Applications is pretty easy (and we could have skipped the intermediate step with konsole and gone straight to the finish):

  • pkg install kde

This metaport will pull in another 2GiB of stuff, for all the KDE Applications and a complete Plasma desktop. There are intermediate metaports for slightly-less-heavy installations, but this one is easy to remember and will almost certainly get you what you want. So it really comes down to installing X, dbus, hal, and then the kde package. Voila!

Screenshot of Plasma 5 desktop

PS. The screenshot shows a machine with 10.3, not 11-STABLE. It comes down to the same process; I have the 10.3 packages built locally so it’s faster for me this way. The 11-STABLE screenshots were taken in VirtualBox, but I have not had any recent success with VirtualBox and OpenGL on FreeBSD. I can’t run any Qt applications: they all fall over when trying to load shared OpenGL libraries that simply aren’t there in VirtualBox.

A short reminder: the "Call for Presentations" deadline for the next OpenStack Summit in Sydney, Australia (November 6-8, 2017) is less than a day away.
If you would like to present in Sydney, submit your proposal by July 14, 2017 at 11:59pm PDT (July 15, 2017 at 6:59am UTC).

July 12, 2017

Sometimes, we need to create wrapper types. For example, types like unique_ptr, shared_ptr, optional and similar.

Usually, these types have an accessor member function called .get, but they also provide operator-> to support direct access to the contained value, similarly to what ordinary pointers do.

The problem is that sometimes we have a few of these types nested into each other. This means that we need to call .get multiple times, or to have a lot of dereference operators until we reach the value.

Something like this:

    wrap<wrap<std::string>> wp;
    wp.get().get().length(); // two .get() calls to reach the value

This can be a bit ugly. If we can replace one .get() with an arrow, it would be nice if we could replace the second .get() as well. For this, C++98 introduced the long arrow operator.

    wrap<wrap<std::string>> wp;
    wp--->length();

What if we have another layer of wrapping? Just make a longer arrow.

    wrap<wrap<wrap<std::string>>> wp;
    wp----->length();

With a special implementation of wrap, this compiles and works without many problems.


Now, before we continue, you should realize that this post is not a serious one. And that this should never be used in a serious project, just like the left arrow operator <-- [1] and the WTF operator ??!??! [2] (which no longer works in C++17 BTW).


From the very beginning, watercolor was conceived as a brush engine. I think that is the best place for it. But it brings some troubles. The most important problem is how to implement the undo engine.

During a new stroke, the system can still have previous strokes that continue to change. And it’s not clear how watercolor should interact with the rest of the engines. I would like to ask for your help: if you have any idea about it, please leave it in the comments =)

Another problem is speed. Right now watercolor works very slowly (and only during a stroke).

Nevertheless, watercolor is a paintop now. And I’d like to show you what it can do. I implemented all 5 strategies:



Wet on dry:


Wet on wet:







P. S.: Here you can look at funny color bug

when something went wrong.gif

The article published on the Codethink website about why we are participating in CIP has also been published on the CIP project website. I have recently added a new recommended book to the Reads section: The Innovator’s Dilemma, a classic.

Codethink has published on its website an article I wrote giving the main reasons for the company to join the CIP initiative from the Linux Foundation. A few days later, I published the article on my blog.

Check the Reads section for the latest recommendation together with the previous books.

New blog post in which I describe how I see software for the automotive supply chain evolving in the coming years, based on examples from other industries. Read more about it…

When choosing a specific Open Source technology, it might be relevant to find out what the business model behind it is. Please read my blog post to find out why.

I have added some information about the work I’ve been doing, together with my colleagues at Codethink Ltd, on the GENIVI Development Platform, the delivery project of the GENIVI Alliance. Read all about it.

In this new article I describe why automotive has become a great opportunity for KDE, and the requirements to take advantage of it, based on my experience in automotive Open Source consortia.

Originally published at GENIVI blogs, here is my latest post, published later on my personal blog, explaining the changes introduced into the GENIVI Development Platform by the delivery team.

I wrote an article that summarizes the transformation processes that an organization has to go through when adopting Open Source.

There is a growing misunderstanding, among many decision makers and developers approaching Open Source these days, about the relation between automated testing and quality. With this blog post I intend to summarize the best practices that Open Source projects have traditionally applied with a huge impact on software quality. Despite the effervescence of automated testing, they are still more alive than ever.

Check the blog post. I am looking forward to reading your opinions about it.

Heya fellow KDE people,

just a couple of weeks ago I got the opportunity to attend the Google Code-In Summit held in several locations scattered around the San Francisco Bay Area. I can tell you first-hand: it's been an awesome trip, and I encourage anyone else to participate if the opportunity arises.

What is Google Code-In?

Google Code-In (GCI) is a contest which introduces pre-university students (ages 13-17) to open source software development. The event features a wide range of small beginner tasks, which allows students to jump into contributing to open-source no matter what skills they have. Experienced Mentors of participating organizations (hey, KDE!) help out scheduling the work and give useful tips to the newcomers.

This year, GCI has seen 1,340 students from 62 countries, and they completed an impressive 6,418 tasks. 17 organizations participated, among them Wikimedia, Drupal, FOSSASIA, (...) and of course KDE!

The summit

Each year, after GCI concludes, Google invites one mentor plus two Grand Prize winners from each participating organization to San Francisco for the Code-In Summit. The Grand Prize winners of each org are selected by the organizations themselves: those students have completed a huge number of tasks and/or otherwise performed exceptionally well.

Picture of Grand Prize Winners of GCI The Grand Prize Winners of Google Code-In 2016

Each one of those Grand Prize winners, together with their parents, was flown to the US west coast for four days this summer to meet their mentors and connect with Google engineers.

The four days were quite packed; I'll try to summarize what happened throughout the event.

Day 1: Meetup

Day 1 started late in the afternoon, when all students and parents, as well as all mentors, were invited to the Google San Francisco office for a first get-together. Remember, most of the students hadn't met each other or their respective mentors before. After clarifying why we were here and what we were going to do over the next days, Stephanie et al. from Google organized a little icebreaker game so we actually got to know each other.

It was also the first time I met my great fellow KDE GCI Grand Prize winners, Ilya and Sergey. Congratulations on your award!

Oh, and of course we were all showered with tons of Google swag and food afterwards.

Day 2: Award ceremony and talks

The second day started (way too early, yawn) at 7:15 AM, sharp, at the lobby of our hotel. We were transferred to the main Google Campus at Mountain View afterwards, where the Award ceremony with lots of other Google engineers started.

Chris DiBona, Director of Open Source at Google, held a talk highlighting the importance of the Open Source programs Google is sponsoring each year, thanking all the participants -- both mentors and students. Both Google Code-In and Google Summer of Code are immensely successful programs which introduce more and more students to the Open Source world each year.

After this, Chris DiBona and his team went through all the organizations and awarded each Grand Prize winner a trophy. Chris made it very clear that each of those students did his or her job extremely well, which is the sole reason he or she could attend this event. The parents, of course, were super proud of their children and took the opportunity to take tons of photos during the whole ceremony.

Picture of KDE's Grand Prize Winners of GCI Sergey Popov (left) and Ilya Bizyaev (right), proudly presenting their GCI trophy.
I didn't get a trophy, d'oh!

Both Sergey and Ilya completed around 20 tasks during the GCI period and both showed some impressive enthusiasm, commitment and competence.

Again, congratulations to all the Grand Prize winners, but especially to "our" guys of course -- thanks for participating and helping out in KDE!

The rest of the day was fully packed with talks from Google engineers, a couple of them quite interesting even for the majority of mentors. We got an introduction to Waymo, Google's self-driving car, and an introduction to how to become a kernel developer by Grant Grundler (from the Chrome team). After that came an introduction to Open Source compliance by Max Sills (attorney at Google), which led to a little heated discussion, initiated by attendants, about the usefulness of the Affero General Public License (hint: Google doesn't like it at all), and a serious discussion about the WTFPL (Do What the Fuck You Want To Public License), which, despite its funny touch, is actually not recommended to use, especially in the US, since it doesn't clearly deny liability. It could get you in trouble.

Lunch time!

... and of course, more photos, and a short walk around the Google campus, checking out all the various cafeterias and other tourist attractions.

KDE Grand Prize Winners with Android mascot Ilya and Sergey again!
Picture the whole GCI summit attendants This time, a picture of all the GCI attendants: Students, parents and mentors together (Picture: Josh Simmons)

More talks after lunch: a couple of talks about what working at Google is like and how the recruitment process works. Next was a short overview of Oppia, Google's open interactive learning platform backed by artificial intelligence, by Sean Lip, and last but not least, an introduction to TensorFlow, an open-source library for machine learning, by Andrew Selle.

All in all, a couple of pretty insightful talks, surely impressive to the students. I'm happy those Googlers found the time to present their projects.

After that, of course more tasty food, a short visit to the Google visitor center + merchandise store and then we were on our way back to San Francisco downtown again.

Day 3: Activity

Day 3 was the activity day: one group could spend the day visiting the Exploratorium, the Museum of Science, Art and Human Perception, one of the biggest attractions in SF, while the other group could ride a Segway next to the piers.

All of KDE of course opted for the Segway ride, which was a fun experience. Especially for the students' parents, of course, who had a couple of initial difficulties (but did well afterwards). At least no-one crashed!

Segway introduction Segway introduction by a professional -- my take away: The "bootie stop", the quickest way to get your Segway halted.

Later that day, we got to see the good old Golden Gate Bridge. And a couple of students took the opportunity to walk the 2.7 km bridge on foot. Crazy youngsters.

Ilya, Sergey and me at the Golden Gate bridge Ilya, Sergey and me being strong at the selfie game

We were also lucky to see a couple of humpback whales popping up directly under the bridge this day, amazing!

Pictures of humpback whales under the Golden Gate bridge Humpback whales under the Golden Gate bridge (Picture courtesy: Jack Pan-Chen -- great shot!)

We concluded the day with a pretty scenic cruise from Sausalito (the town right on the other side of the Golden Gate Bridge) back to SF harbor.

More food and drinks included (yes, Google was trying hard to stuff us).

Day 4: More talks, end of the summit

Day 4 marked the end of the whole summit, with a couple more talks from Google engineers; mentors and students both got the opportunity to talk a bit about whatever they wanted, too.

View from Google SF office View from Google SF office at the San Francisco–Oakland Bay Bridge

The remaining talks were about the new Google Open Source web page and how Open Source software is managed at Google (hint: every repository is mirrored internally; otherwise it wouldn't be manageable). Next was a talk about Kubernetes, Google's container orchestration software, and an introduction to Code Jam, a competitive programming contest hosted by Google on a yearly basis.
The talk about Project Fi, a multi-carrier "wireless" service provider, was super insightful as well. It's fascinating to see the challenges phone operating systems face when trying to do clever real-time network/carrier switching on the go, and how Project Fi tries to do better by improving the SIM card design at the lowest levels.
Last but not least, one of the Googlers gave us an introduction to LLVM at Google: how they switched from GCC to Clang internally a while ago, and how they're actually working a lot on performance improvements in the LLVM stack these days.


I'd like to take the opportunity to thank KDE for letting me attend this summit in the first place. Lots of thanks go out to the hard-working GSoC/GCI admins in KDE, namely Valorie Zimmermann and Bhushan Shah, who keep that whole thing running and nag people when deadlines are close!

Also lots of thanks to Google for sponsoring these kinds of programs and for inviting people to their premises to gather together. Thanks a lot to Stephanie, Mary, Cat, Helen, Josh (all Googlers) for their hard work keeping the attendants happy during the whole four days!

As every year, also this year, I will be going to KDE’s yearly world summit, Akademy. This year, it will take place in Almería, Spain. In our presentation “Plasma: State of the Union“, Marco and I will talk about what’s going on in your favorite workspace, what we’ve been working on, what cool features are coming to you, and what our plans for the future are. Topics we will cover range from Wayland, web browser integration, UI design, and mobile to release and support planning. Our presentation will take place on Saturday at 11:05, right after the keynote held by Robert Kaye. If you can’t make it to Spain next week, there will likely be video recordings, which I will post here as soon as they’re widely available.

Hasta luego!

July 11, 2017

I’ve decided to go to Randa again this year.

There’s at least four reasons for me to go:

  • change of pace, trading coding-in-the-attic-office for coding-in-the-dining-room
  • change of pace, trading discuss-coding-on-IRC for discuss-coding-while-hiking-to-the-glacier
  • with a little planning, we can probably get further up the mountain than last year
  • someone needs to make brigadeiro.

More seriously, the theme this year (from the page for this year’s meeting) is

Accessibility is the big topic. But what does accessibility mean regarding KDE and what else do we want to make more accessible?

From that perspective, there are two kinds of accessibility I’m interested in: making KDE available on FreeBSD (which includes hammering PIM into shape) is one. That’s a bit of a cop-out, really. I mean, I could bring my BeagleBone (probably will, too) and claim I was making KDE accessible to armv6. So portability and platform accessibility is a small thing.

More important to me is actual accessibility in the sense of using-software-with-a-screenreader. When I worked for the Dutch Federation of Audiological Centres, I learned a lot about accessibility for the deaf and hard-of-hearing. A software application that is primarily visual in nature doesn’t need much accessibility work for that. But I was one floor above the Dutch Stichting Accessibility, which works for the vision-impaired. That’s been sort-of hanging around the back of my conscience. So, accessibility in the more generally-accepted sense: making Free Software usable by people of all sort and abilities.

So I’ve got two things to sort out (geez, why do I keep getting bogged down in case-distinctions):

  • Orca screen reader in KDE on FreeBSD,
  • Orca screen reader support in Calamares. This is actually the biggie — and only tangentially related to KDE. Calamares is not a KDE project, but is used as the system installer for a variety of smaller Linux distros. There’s an issue filed against Calamares that it is largely inaccessible. This is due in part to the way it needs root (e.g. it is run through sudo). That makes it difficult to install Linux — even if the eventual installed system is accessible, the installer isn’t. So I’m going to tackle that at Randa. Doing so will make some other issues go away as well — or maybe I need to make some other issues go away before tackling Orca, and this will make the whole codebase better.

Serendipity! One Randa meeting to inspire me to work on things I might otherwise put off, and where it turns out that working on accessibility improves things across the board.

July 10, 2017


Improving layout of GCompris’s Family activity

Everything was well planned for a great start to the second phase of GSoC, and I anticipated everything to go smoothly since I wouldn’t have any college exams to worry about. Soon enough, I was proven wrong almost immediately by a health issue, which took a lot of working hours out of my schedule. Anyway, I started out with the family activity, along with continuing the bug fixes and improvements on the submarine activity.

My goal for this week was to convert the current layout of the family activity into a much more intuitive tree-like representation, where the family members of the same generation would lie in the same vertical layer. As an example, I created a mockup of a currently present level, which looks like the following:


and planned to represent it in the following manner:


The Implementation

To start things off, I first moved the entire dataset from family.js into a separate Dataset.qml file. It can be viewed in this commit. Dataset.qml roughly looks like the following:

QtObject {
    property real nodeWidth: background.nodeWidthRatio
    property real nodeHeight: background.nodeHeightRatio
    property var levelElements: [
        // level 1
        {
            edgeList: [
                [0.37, 0.25, 0.64, 0.25],
                [0.51, 0.25, 0.51, 0.50]
            ],
            nodePositions: [
                [0.211, 0.20],
                [0.633, 0.20],
                [0.40, 0.50]
            ],
            captions: [
                [0.27, 0.57],
                [0.101, 0.25]
            ],
            nodeleave: ["man3.svg", "lady2.svg", "boy1.svg"],
            currentstate: ["activeTo", "deactive", "active"],
            answer: [qsTr("Father")],
            options: [qsTr("Father"), qsTr("Grandfather"), qsTr("Uncle")]
        },
        // level 2, ...
    ]
}
nodeWidth and nodeHeight are used to keep track of the width and height of a single node (all nodes are of the same dimension), which are useful for drawing edges accurately from/to the end of a specific node. Following this, we have an array called levelElements which stores the (x, y) values of each node, the (x1, y1) and (x2, y2) endpoint values of each edge, and the properties of each node and edge.

From this, I decided to improve on the 9th and 10th levels of the activity, which looked like the following: family_9

My goal was to connect the edges from the bottom of the node instead of connecting them horizontally. For that, instead of using hard-coded numbers for the edges, I used the x-y coordinates of the nodes along with the width and height values of the nodes to accurately find the start/end positions of the edges. This, however, leads to a problem: since the values of nodeWidth and nodeHeight change with the width and height of the activity, the dataset needs to be reloaded whenever the screen resolution changes, in order to avoid incorrect start/end positions of the edges.
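The idea can be sketched with a small helper (hypothetical, not the actual activity code): derive an edge endpoint from a node's normalized top-left position plus the nodeWidth/nodeHeight ratios, so edges stay glued to the node's bottom-center across resizes.

```javascript
// Hypothetical helper illustrating the computation: given a node's
// normalized [x, y] top-left position and the current nodeWidth /
// nodeHeight ratios, return the bottom-center point where an edge
// should attach.
function bottomAnchor(node, nodeWidth, nodeHeight) {
    return [node[0] + nodeWidth / 2, node[1] + nodeHeight];
}

// e.g. for the node at [0.211, 0.20] with nodeWidth 0.1, nodeHeight 0.2:
console.log(bottomAnchor([0.211, 0.20], 0.1, 0.2)); // ≈ [0.261, 0.4]
```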

    onWidthChanged: loadDatasetDelay.start()
    onHeightChanged: if (!loadDatasetDelay.running) {
        loadDatasetDelay.start()
    }

    /*
     * Adding a delay before reloading the datasets,
     * needed for fast width / height changes
     */
    Timer {
        id: loadDatasetDelay
        running: false
        repeat: false
        interval: 100
        onTriggered: Activity.loadDatasets()
    }

Using these, the 9th and 10th level turned out to be like this: family_9_final

Moving forward

For the upcoming weeks, I plan to continue improving the layouts of the activity. Along with that, I will look into moving the implementation of the activity to a grid-based layout: instead of hardcoding the x-y coordinate values of the nodes, we will create a Grid element, and the datasets will contain the (row, column) values of the nodes and the (r1, c1) to (r2, c2) values for the edges.

July 09, 2017

KDE’s CI system for FreeBSD (that is, what upstream runs to continuously test KDE git code on the FreeBSD platform) is missing some bits and failing some tests because of Wayland. Or rather, because FreeBSD now has Wayland, but not Qt5-Wayland, and no Weston either (the reference implementation of a Wayland compositor).

Today I went hunting for the bits and pieces needed to make that happen. Fortunately, all the heavy lifting has already been done: there is a Weston port prepared and there was a Qt5-Wayland port well-hidden in the Area51 plasma5/ branch.

I have taken the liberty of pulling them into the Area51 repository as branch qtwayland. That way we can nudge Weston forward, and/or push Qt5-Wayland in separately. Nicest from a testing perspective is probably doing both at the same time.

I picked a random “Hello World” Wayland tutorial and also built a minimal Qt program (using QMessageBox::question, my favorite function to hate right now, because of its i18n characteristics). Then, setting XDG_RUNTIME_DIR to /tmp/xdg, I could start Weston (as an X11 client), wayland-hello (as a Wayland client, displaying in Weston) and qt-hello (as either an X11 client, or as a Wayland client). The result is this:
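The setup described above boils down to a few shell commands (the weston invocation and the hello binaries are as used in this post; `-platform` is Qt's standard switch for selecting a QPA backend):

```shell
# Weston refuses to start without XDG_RUNTIME_DIR; any private,
# writable directory works (the post uses /tmp/xdg):
export XDG_RUNTIME_DIR=/tmp/xdg
mkdir -p "$XDG_RUNTIME_DIR"
chmod 700 "$XDG_RUNTIME_DIR"
# Then, from an X session:
#   weston &                      # Weston running as an X11 client
#   ./wayland-hello               # Wayland client, shows up inside Weston
#   ./qt-hello -platform wayland  # the Qt app using the Wayland backend
#   ./qt-hello -platform xcb      # ...or the same app as an X11 client
```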

Screenshot with Plasma 5 and Weston

Plasma 5 Desktop from Area51, with Weston running in X as a Wayland compositor, and two sample Wayland clients displaying in Weston.

So this gives users of Area51 (while shuffling branches, granted) a modern desktop and modern display capabilities. Oh my!

It will take a few days for this to trickle up and/or down so that the CI can benefit and we can make sure that KWin’s tests all work on FreeBSD, but it’s another good step towards tight CI and another small step towards KDE Plasma 5 on the desktop on FreeBSD.

This is the report for the time since the last deadline. I successfully passed my exams at the university, but I did not get a university stipend :( During the last period I began to study MongoDB and rewrote the backend to work with MongoDB. Now the backend can write information...

July 08, 2017

The cool part of sharing your knowledge is that often others pop up and improve what you shared. Some time ago, Quentin linked me his tool to manage Plasma Activities from the command line. It’s written in Python, so it’s named Pytivity.

With Pytivity you can create, edit, delete, start, stop and activate Activities. But the true power of this tool lies in a feature of Activities that is still not exposed in the graphical user interface: by placing scripts and *.desktop launchers in certain hidden folders, you can define which apps or scripts will be executed when you start, stop, activate or deactivate an Activity.
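The hook-folder mechanism described above can be sketched in a few lines of Python. This is only an illustration: the base path under which kactivitymanagerd looks for these folders, and the event-folder names used here, are assumptions for the sake of the example — check your own data directory (and what Pytivity actually creates) for the real layout.

```python
from pathlib import Path
import tempfile

def create_activity_hooks(root: Path, activity_id: str) -> dict:
    """Create per-event hook folders for one Activity.

    NOTE: the layout (root / "activities" / <id> / <event>) is an
    assumption for illustration, not a documented guarantee.
    """
    hooks = {}
    for event in ("started", "stopped", "activated", "deactivated"):
        d = root / "activities" / activity_id / event
        d.mkdir(parents=True, exist_ok=True)
        hooks[event] = d
    return hooks

# Demo in a throwaway directory: drop a .desktop launcher into
# "started" so the app would be launched when the Activity starts.
root = Path(tempfile.mkdtemp())
hooks = create_activity_hooks(root, "my-activity-uuid")
launcher = hooks["started"] / "dolphin.desktop"
launcher.write_text("[Desktop Entry]\nType=Application\nExec=dolphin\n")
```

Pytivity hides exactly this kind of bookkeeping behind its command line interface.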

Pytivity simply exposes this as a command line interface. The syntax to create an Activity and have Dolphin open when the Activity is started is the following:

pytivity create --started dolphin MyNewActivity

My main use case is to start/stop software and restart them if needed when I come back to the activity, like a browser, mail client or a project in pycharm. I also use it to open tabs in Yakuake and execute some shell commands (ssh, change directory, activate python env, …).

Quentin Dawans, developer of Pytivity

The use cases are not limited to starting apps; you could use it to start/stop services. If, like me, you use Docker, you will appreciate how easy it is to set an Activity to start and stop it:

pytivity create --started "systemctl start docker" --stopped "systemctl stop docker" ActivityName

You could also set an Activity to start Docker and run a container from KDE Neon, to have a Plasma desktop from the git-stable or git-unstable repos confined in an Activity.

Another use case is starting Kontact with an Activity and, when you stop the Activity, closing it and killing Akonadi (which uses a lot of my 2GB of RAM even when Kontact is no longer running).

I imagine it could also be used with kwriteconfig, a tool to edit application settings, to change an app’s preferences before starting it. I would appreciate it if it were possible to change the system color scheme from the command line, but at the moment that seems impossible with kwriteconfig. Also, some apps, especially command line apps, often let you define the path of the config file. With Pytivity you could specify a different config file according to the running Activity.
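The per-Activity config idea boils down to a simple lookup: map each Activity to a config file and fall back to a default. A minimal sketch (all names and paths here are hypothetical examples, not anything Pytivity or KDE provides):

```python
def config_for_activity(activity: str, config_map: dict, default: str) -> str:
    """Return the config file path to pass to a command line app,
    chosen by the currently running Activity's name.

    'activity', the mapping and the paths are illustrative only.
    """
    return config_map.get(activity, default)

# Hypothetical mapping from Activity names to app config files.
configs = {
    "Work": "~/.config/myapp/work.conf",
    "Gaming": "~/.config/myapp/gaming.conf",
}
chosen = config_for_activity("Work", configs, "~/.config/myapp/default.conf")
```

A Pytivity "started" script could then launch the app with `--config "$chosen"` (or whatever flag the app uses).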

Now you have another article you can link to those users who, every time they read about Plasma Activities, reply “Activities are useless”, “I can’t see the advantages of Activities” and “what’s the difference between Activities and virtual desktops?”. To the last question you can also reply: “virtual desktops are a way to manage windows; Activities are a way to manage tasks and related apps, services and configurations”.

Be sure not to miss the upcoming release of Plasma Vault by Ivan, the developer of Activities: it will let you encrypt and decrypt specific folders when you start/stop Activities. You can easily imagine it combined with an encrypted Firefox profile.

Meta: I updated my blog with some UX improvements, like category/tag filter options in the sidebar to let you quickly find the content you are interested in. I also added a “subscribe” button that appears when you are visiting a category, a tag or a combination of them; that button will generate the RSS feed for the category/tag you selected. This is because I will post on my blog things not strictly related to KDE. The posts displayed on Planet KDE are the ones from the “Blog posts” category (the category that gathers all the content in English) tagged “For Planet KDE”.

I’m also experimenting with a way to link blog articles to related Diaspora posts. You should see a button below to see comments on Diaspora. Sadly, because of the decentralized nature of Diaspora, you need to be registered on the same pod as me to comment using the link I provide with the article. Alternatively, you have to find me on Diaspora from your pod to add a comment.

Edit: I managed to develop a “comment on Diaspora” button that asks for your pod’s domain and then redirects to my post on your pod. It works with most Diaspora pods; if it doesn’t work, your pod probably doesn’t federate with mine (sechat.org).

July 07, 2017

A new release of KStars is out! v2.7.9 is now available for Linux, Mac, and Windows. At the same time, we released an update to KStars Lite for Android.

This is mostly a bugfix release where we concentrated on fixing bugs as reported by our users. But that didn't stop us from creating new tools long requested by the community, including the Mount Control tool, which facilitates motion and GOTO commands with a simple interface.

Meanwhile, the KStars GSoC 2017 project started, with lots of under-the-hood work taking place. Csaba went full throttle migrating the C++ code to modern C++14 standards while updating the build process to be stricter, more secure, and more efficient. The whole code base was reformatted and countless warnings resolved. Furthermore, several enhancements were made to decrease memory usage and increase the performance of tools such as the Observation Planner.

KStars Lite on Android now supports an Automatic Mode where you can identify stars, planets, and constellations simply by pointing the phone toward the sky and looking at the live Sky Map within the app. Significant performance gains were made for FITS files that contain WCS data. Since processing WCS is very CPU intensive, loading WCS data is now performed on demand. This is especially important for embedded devices where resources are limited.
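The on-demand pattern mentioned above is plain lazy loading: defer the expensive computation until it is first requested, then cache the result. A generic Python sketch of the idea (this is not the actual KStars C++ code; the class and field names are invented for illustration):

```python
class FitsImage:
    """Toy model of deferring an expensive step (like parsing WCS
    data from a FITS file) until something actually asks for it."""

    def __init__(self, path):
        self.path = path
        self._wcs = None
        self.loads = 0  # instrumentation for the demo

    @property
    def wcs(self):
        if self._wcs is None:           # compute only on first access
            self.loads += 1
            self._wcs = {"source": self.path}  # stand-in for real parsing
        return self._wcs

img = FitsImage("m42.fits")
_ = img.wcs   # first access triggers the (expensive) load
_ = img.wcs   # second access hits the cache: no extra work
```

On a constrained device, images whose WCS data is never inspected pay nothing for it.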

Robert Lancaster implemented automatic download of Astrometry.net index files directly from the GUI. No longer does the user have to go hunting for index files, calculate FOVs for different equipment, etc. With the updated system, all the required index files are highlighted automatically by Ekos, and all you have to do is click and download them!

Special thanks to all those who made this release possible. Outstanding work from KDE's Craft team, led by Hannah Von Reth, for making this great tool that enables KStars to reach users across all major platforms.

New Field Rotator Control

After the two-part series on the fundamentals of Xwayland, I want to briefly introduce the basic idea for my Google Summer of Code (GSoC) project for X.Org. This means I’ll talk about how Xwayland currently handles the graphic buffers of its applications, why this leads to tearing and how we plan to change that.

The project has its origin in my work on KWin. In fact there is some connection to my unsuccessful GSoC application from last year on atomic mode setting and layered compositing in KWin. You can read up on these notions and the previous application in some of my older posts, but the relevant part of it to this year’s project is in short the transfer of application graphic buffers directly onto the screen without the Wayland server compositing them into a global scene before that. This can be done by putting the buffers on some overlay planes and let the hardware do the compositing of these planes into a background provided by the compositor or in the simpler case by putting a single buffer of a full screen application directly onto the primary plane.

At the beginning of the year I was working on enabling this simpler case in KWin. In a first working prototype I was pretty sure I got the basic implementation right, but my test, a full screen video in VLC, showed massive tearing. Of course I suspected at first my own code to be the problem, but in this case it wasn’t. Only after I wrote a second test application — a simple QML application playing the same video in full screen and showing no tearing — did I suspect that the problem wasn’t my code but Xwayland, since VLC was running on Xwayland while my test application was Wayland native.

Indeed, the Wayland protocol should prevent tearing altogether, as long as the client respects the compositor’s messages. It works like this: after committing a newly drawn buffer to the server, the client is not allowed to touch it anymore; only after the compositor has sent the release event is the client again allowed to repaint or delete it. If the client needs to repaint in the meantime, it is supposed to allocate a different buffer object. But this is exactly what Xwayland-based applications are not doing, as Daniel Stone was quick to tell me after I asked him for help with the tearing issues I experienced.
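The commit/release handshake just described can be modeled as a tiny state machine. The Python below is purely illustrative — it is not the libwayland API, just the ownership rule: once committed, a buffer belongs to the compositor until the release event hands it back.

```python
class Buffer:
    def __init__(self, name):
        self.name = name
        self.busy = False  # currently owned by the compositor?

class Client:
    """Models a well-behaved Wayland client that never repaints
    into a buffer the compositor still owns."""

    def __init__(self, n_buffers=2):
        self.buffers = [Buffer(f"buf{i}") for i in range(n_buffers)]

    def acquire(self):
        # A repaint must go into a buffer the compositor has released.
        for b in self.buffers:
            if not b.busy:
                return b
        return None  # nothing free: allocate another buffer or wait

    def commit(self, buf):
        buf.busy = True   # compositor owns it now; hands off

    def on_release(self, buf):
        buf.busy = False  # safe to repaint or delete again

c = Client(n_buffers=2)
a = c.acquire(); c.commit(a)   # frame 1 committed
b = c.acquire(); c.commit(b)   # next repaint uses the *other* buffer
stalled = c.acquire()          # both busy -> must wait (None)
c.on_release(a)                # compositor sends the release event
again = c.acquire()            # 'a' is reusable again
```

An Xwayland app, by contrast, behaves as if `acquire()` always returned the same buffer, busy or not — which is precisely where the tearing comes from.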

Under Xwayland an app only ever uses one buffer at all, and repaints are always done into this one buffer. This means that the buffer is given to the compositor at some point, but the application doesn’t stop repainting into it. So in my case the buffer content changed while it was being presented to the user on the primary plane. The consequence is tearing. Other developers noticed that as well around the same time, as documented in this bug report.

The proposed solution is to bolster the Present extension support in Xwayland. In theory, with that extension an X based application should be able to paint into more than one Pixmap, each of which translates to a different Wayland buffer. On the other side, Xwayland notifies the app through the Present extension when it can reuse one of its Pixmaps, based on the associated Wayland buffer event. The Present extension is a relatively new extension to the X server, but it is already supported by most of the more interesting applications. It was written by Keith Packard, and you can read more about it on his blog. In theory it should only be necessary to add support for the extension to the Xwayland DDX, but there are some issues on the DIX side of the extension code which first need to be ironed out. I plan on writing more about the Present extension in general and the limitations we encounter in our Xwayland use case in the next articles.

One of the new features of the upcoming Qt 5.10 is the introduction of the shapes plugin to Qt Quick. This allows adding stroked and filled paths composed of lines, quadratic curves, cubic curves, and arcs into Qt Quick scenes. While this has always been possible to achieve via QQuickPaintedItem or the Canvas type, the Shape type provides a genuine first-class Qt Quick item that from the scene graph’s perspective is backed by either actual geometry or a vendor-specific GPU accelerated path rendering approach (namely, GL_NV_path_rendering).

The shapes example, running on an Android tablet

Why is This Great?

  • There is no rasterization involved (no QImage, no OpenGL framebuffer object), which is excellent news for those who are looking for shapes spanning a larger area of a possibly high resolution screen, or want to apply potentially animated transformations to the shapes in the scene.
  • The API is fully declarative and every property, including stroke and fill parameters, path element coordinates, control points, etc., can be bound to in QML expressions and can be animated using the usual tools of Qt Quick. Being declarative also means that changing a property leads to recalculating only the affected sets of the underlying data, something that has been traditionally problematic with imperative painting approaches (e.g. QPainter).
  • There are multiple implementations under the hood, with the front Qt Quick item API staying the same. The default, generic solution is to reuse the triangulator from QPainter’s OpenGL backend in QtGui. For NVIDIA GPUs there is an alternative path using the GL_NV_path_rendering OpenGL extension. When using the software renderer of Qt Quick, a simple backend falling back to QPainter will be used. This also leaves the door open to seamlessly adding other path rendering approaches in the future.

Status and Documentation

Right now the feature is merged to the dev branch of qtdeclarative and will be part of 5.10 once it branches off. The documentation snapshots are online as well:

(due to some minor issues with the documentation system some types in Particles get cross-linked in the Inherited By section and some other places, just ignore this for now)

The canonical example is called shapes and it lives under qtdeclarative/examples/quick as expected.

Let’s See Some Code

Without further ado, let’s look at some code snippets. The path specification reuses existing types from PathView, and should present no surprises. The rest is expected to be fairly self-explanatory. (check the docs above)

1. A simple triangle with animated stroke width and fill color.


Shape {
    id: tri
    anchors.fill: parent

    ShapePath {
        id: tri_sp
        strokeColor: "red"
        strokeWidth: 4
        SequentialAnimation on strokeWidth {
            running: tri.visible
            NumberAnimation { from: 1; to: 20; duration: 2000 }
            NumberAnimation { from: 20; to: 1; duration: 2000 }
        }
        ColorAnimation on fillColor {
            from: "blue"; to: "cyan"; duration: 2000; running: tri.visible
        }

        startX: 10; startY: 10
        PathLine { x: tri.width - 10; y: tri.height - 10 }
        PathLine { x: 10; y: tri.height - 10 }
        PathLine { x: 10; y: 10 }
    }
}

2. Let’s switch over to dashed strokes and disable filling. Unlike with image-backed approaches, applying transformations to shapes is no problem.


Shape {
    id: tri2
    anchors.fill: parent

    ShapePath {
        strokeColor: "red"
        strokeWidth: 4
        strokeStyle: ShapePath.DashLine
        dashPattern: [ 1, 4 ]
        fillColor: "transparent"

        startX: 10; startY: 10
        PathLine { x: tri2.width - 10; y: tri2.height - 10 }
        PathLine { x: 10; y: tri2.height - 10 }
        PathLine { x: 10; y: 10 }
    }

    SequentialAnimation on scale {
        running: tri2.visible
        NumberAnimation { from: 1; to: 4; duration: 2000; easing.type: Easing.InOutBounce }
        NumberAnimation { from: 4; to: 1; duration: 2000; easing.type: Easing.OutBack }
    }
}

3. Shape comes with full linear gradient support. This works exactly like QLinearGradient in the QPainter world.


Shape {
    id: tri3
    anchors.fill: parent

    ShapePath {
        strokeColor: "transparent"

        fillGradient: LinearGradient {
            x1: 20; y1: 20
            x2: 180; y2: 130
            GradientStop { position: 0; color: "blue" }
            GradientStop { position: 0.2; color: "green" }
            GradientStop { position: 0.4; color: "red" }
            GradientStop { position: 0.6; color: "yellow" }
            GradientStop { position: 1; color: "cyan" }
        }

        startX: 10; startY: 10
        PathLine { x: tri3.width - 10; y: tri3.height - 10 }
        PathLine { x: 10; y: tri3.height - 10 }
        PathLine { x: 10; y: 10 }
    }

    NumberAnimation on rotation {
        from: 0; to: 360; duration: 2000
        running: tri3.visible
    }
}

4. What about circles and ellipses? Just use two arcs. (note: one ShapePath with two PathArcs is sufficient for a typical circle or ellipse; here there are two ShapePaths due to the different fill parameters)


Shape {
    id: circle
    anchors.fill: parent
    property real r: 60

    ShapePath {
        strokeColor: "transparent"
        fillColor: "green"

        startX: circle.width / 2 - circle.r
        startY: circle.height / 2 - circle.r
        PathArc {
            x: circle.width / 2 + circle.r
            y: circle.height / 2 + circle.r
            radiusX: circle.r; radiusY: circle.r
            useLargeArc: true
        }
    }

    ShapePath {
        strokeColor: "transparent"
        fillColor: "red"

        startX: circle.width / 2 + circle.r
        startY: circle.height / 2 + circle.r
        PathArc {
            x: circle.width / 2 - circle.r
            y: circle.height / 2 - circle.r
            radiusX: circle.r; radiusY: circle.r
            useLargeArc: true
        }
    }
}

5. Speaking of arcs, PathArc is modeled after SVG elliptical arcs. Qt 5.10 introduces one missing property, xAxisRotation.


Repeater {
    model: 2
    Shape {
        anchors.fill: parent

        ShapePath {
            fillColor: "transparent"
            strokeColor: model.index === 0 ? "red" : "blue"
            strokeStyle: ShapePath.DashLine
            strokeWidth: 4

            startX: 50; startY: 100
            PathArc {
                x: 150; y: 100
                radiusX: 50; radiusY: 20
                xAxisRotation: model.index === 0 ? 0 : 45
            }
        }
    }
}

Repeater {
    model: 2
    Shape {
        anchors.fill: parent

        ShapePath {
            fillColor: "transparent"
            strokeColor: model.index === 0 ? "red" : "blue"

            startX: 50; startY: 100
            PathArc {
                x: 150; y: 100
                radiusX: 50; radiusY: 20
                xAxisRotation: model.index === 0 ? 0 : 45
                direction: PathArc.Counterclockwise
            }
        }
    }
}

6. Quadratic and cubic Bezier curves work as expected. Below is a quadratic curve with its control point animated.


Shape {
    id: quadCurve
    anchors.fill: parent

    ShapePath {
        strokeWidth: 4
        strokeColor: "black"
        fillGradient: LinearGradient {
            x1: 0; y1: 0; x2: 200; y2: 200
            GradientStop { position: 0; color: "blue" }
            GradientStop { position: 1; color: "green" }
        }

        startX: 50
        startY: 150
        PathQuad {
            x: 150; y: 150
            controlX: quadCurveControlPoint.x; controlY: quadCurveControlPoint.y
        }
    }
}

Rectangle {
    id: quadCurveControlPoint
    color: "red"
    width: 10
    height: 10
    y: 20
    SequentialAnimation on x {
        loops: Animation.Infinite
        NumberAnimation {
            from: 0
            to: quadCurve.width - quadCurveControlPoint.width
            duration: 5000
        }
        NumberAnimation {
            from: quadCurve.width - quadCurveControlPoint.width
            to: 0
            duration: 5000
        }
    }
}

7. The usual join and cap styles that are probably familiar from QPainter and QPen are available.


Shape {
    anchors.fill: parent

    ShapePath {
        strokeColor: "red"
        strokeWidth: 20
        fillColor: "transparent"
        joinStyle: ShapePath.RoundJoin

        startX: 20; startY: 20
        PathLine { x: 100; y: 100 }
        PathLine { x: 20; y: 150 }
        PathLine { x: 20; y: 20 }
    }

    ShapePath {
        strokeColor: "black"
        strokeWidth: 20
        capStyle: ShapePath.RoundCap

        startX: 150; startY: 20
        PathCubic {
            x: 150; y: 150; control1X: 120; control1Y: 50; control2X: 200
            SequentialAnimation on control2Y {
                loops: Animation.Infinite
                NumberAnimation { from: 0; to: 200; duration: 5000 }
                NumberAnimation { from: 200; to: 0; duration: 5000 }
            }
        }
    }
}

Any Downsides?

Does this mean the time has finally come to add hundreds of lines and curves and arcs to every Qt Quick scene out there?

Not necessarily.

Please do consider the potential performance implications before designing in a large number of shapes in a user interface. See the notes in the Shape documentation page.

In short, the most obvious gotchas are the following:

  • [generic backend] Shapes with a lot of ShapePath child objects will take longer to generate the geometry. The good news is that this can be mitigated by setting the asynchronous property to true which, as the name suggests, leads to spawning off worker threads without blocking the main UI. This comes at the cost of the shape appearing only after the non-asynchronous UI elements.
  • [GL_NV_path_rendering backend] Geometry generation is a non-issue here, however due to the way the “foreign” rendering commands are integrated with the Qt Quick scene graph, having a large number of Shape items in a scene may not scale very well since, unlike plain geometry-based scenegraph nodes, these involve a larger amount of logic and OpenGL state changes. Note that one Shape with several ShapePath children is not an issue here since that is really just one node in the scenegraph.
  • Antialiasing is currently covered only through multi or super sampling, either for the entire scene or for layers. Note that Qt 5.10 introduces a very handy property here: layer.samples can now be used to enable using multisample renderbuffers, when supported.
  • Shape is not a replacement for rectangles and rounded rectangles provided by Rectangle. Rectangle will always perform better and can provide some level of smoothing even without multisampling enabled.

Nevertheless we expect Shape to be highly useful to a large number of Qt Quick applications. The Qt 5.10 release is due H2 this year, so…stay tuned!

The post Let There Be Shapes! appeared first on Qt Blog.

Some time ago I published a couple of blog posts talking about Qt WebGL Streaming plugin. The time has come, and the plugin is finally merged into the Qt repository. In the meantime, I worked on stabilization, performance and reducing the calls sent over the network. It also changed a bit in the way the connections are handled.

New client approach

In previous implementations, the plugin accepted more than one concurrent connection. After the latest changes, the plugin behaves like a standard QPA plugin: only one user per process is allowed. If another user tries to connect to the web server, they will see a fancy loading screen until the previous client disconnects.
The previous approach caused problems with the way desktop applications and GUI frameworks are designed. Everyone can agree that desktop applications are not intended to work with concurrent physical users, even if the window instances were different for each user.

No more boilerplate code

Previously, the application had to be modified to support this platform plugin. The following code was needed to make an application work with it:

class EventFilter : public QObject
{
    virtual bool eventFilter(QObject *watched, QEvent *event) override
    {
        if (event->type() == QEvent::User + 100) {
            return true;
        } else if (event->type() == QEvent::User + 101) {
            return true;
        }

        return false;
    }
};

And install the event filter into the QGuiApplication.

No more modifications in applications are needed anymore.

How to try

So, if you want to give it a try before Qt 5.10 is released (~November 2017) do the following:


Since WebGL was modelled using OpenGL ES 2 as a reference, the first thing you will need is an OpenGL ES 2 build of Qt. To do that, pass the parameter -opengl es2 to configure before building.

./configure -opensource -confirm-license -opengl es2

Depending on your system, you may need some additional headers and libraries to be able to use ES2.

Testing the plugin

After building everything, you can try to run a Qt Quick example.

To try the photoviewer example, we need to build it and run it with the -platform webgl parameter:

./photoviewer -platform webgl

If you want to try the Qt Quick Controls 2 Text Editor:

./texteditor -platform webgl

Supported options

Currently, the plugin supports only one option, which configures the port used by the embedded HTTP server. If you want to listen on the default HTTP port, write -platform webgl:port=80.
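The `-platform webgl:port=80` form follows the usual QPA convention of appending colon-separated options to the plugin name. A rough sketch of how such a string decomposes — this is a simplified illustration, not Qt's actual parser, and it assumes plain key=value pairs:

```python
def parse_platform_arg(arg: str):
    """Split a QPA-style '-platform' value like 'webgl:port=80'
    into the plugin name and an options dict. Simplified model:
    keys without '=' become boolean flags."""
    plugin, _, rest = arg.partition(":")
    options = {}
    for part in filter(None, rest.split(":")):
        key, _, value = part.partition("=")
        options[key] = value if value else True
    return plugin, options

plugin, opts = parse_platform_arg("webgl:port=80")
```

So the plugin name selects the QPA backend and everything after the first colon is handed to that backend as its configuration.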

The future

The plugin will be part of the Qt 5.10 release as a technology preview (TP), as it needs to be improved. Currently, the plugin contains an HTTP server and a Web Sockets server to handle the browser connections. I’m planning to remove the servers from the plugin and start using a lightweight QtHttpServer we are working on right now. Once it’s ready, you will be able to create an application server to launch different processes, inheriting the web socket to communicate with the browsers. This will allow supporting more than one concurrent user instead of sharing applications among users.

Note: The plugin only works using the Threaded Render Loop. If you use Windows or a platform using a different render loop, ensure you set the QSG_RENDER_LOOP environment variable to threaded.

The post Qt WebGL Streaming merged appeared first on Qt Blog.

Older blog entries

Planet KDE is made from the blogs of KDE's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet KDE with RSS, FOAF or OPML.