December 27, 2017

At the end of 2012, I started a series of posts explaining the genesis of the KDE Manifesto. In that series, I pointed out that it took our community six years to go from the pains generated by our growth to creating the tool needed to solve them: namely, the KDE Manifesto.

It took us an awfully long time... but it looks like we're getting better at dealing with this kind of community change. Indeed, this time it took us only three years between Akademy 2014, where Paul Adams delivered his wake-up call about our loss of cohesion, and Akademy 2017, where a group of old timers (including yours truly) proposed a way to federate our community behind common goals again. It's too bad Paul decided to retire before he could witness that change!

It took a bit of time to put things in motion, but the community finally chose three goals for itself. This nicely concludes the Evolving KDE initiative pushed by Lydia.

I'm excited to see how those goals will progress in 2018 and beyond and how they will impact our community. Will they indeed bring more cohesion again? Will it be measurable?

Those are interesting questions that I'd like to explore... Paul retired, so maybe it's time I try myself at the community metrics work he was doing.

As the year 2017 is ending and the year 2018 is almost upon us, I'm looking forward to another year of work by all the busy bees forming the KDE Community.

We need some help to get there, though. If, like me, you like the direction set in the KDE goals, consider participating in the End of 2017 Fundraiser. Don't wait, only four days left! You can power KDE too!

December 25, 2017

It’s been an honor to have had the community select my KDE goal: focus on usability and productivity. This is a topic that’s quite dear to my heart, as I’ve always seen a computer as a vehicle for giving substance to your thoughts. Low-quality operating systems and software get in your way and knock you out of a state of flow, while high-quality versions let you create at the speed of thought. KDE Plasma is already pretty good in this department, but I think we can make it even better: we can turn it into the obvious choice for people who need to get things done.

To make this happen, I’m proposing to focus on the following broad approaches:

  • Make it easier to find, install, launch, update, and remove new software
  • Increase the speed and efficiency with which files and data can be transferred from one program or context to another
  • Write new features that make common operations easier, faster, more efficient, or less frustrating
  • Make common features look and behave identically to make use of users’ pre-existing muscle memory and knowledge of how things work
  • Fix bugs that make important features not work properly or at all
  • Improve the state of laptop touchpad input with libinput, which is rapidly becoming the new standard
  • Fix commonly-complained-about visual design and UX issues

In the past month, KDE contributors (myself included) have implemented the following important improvements–many of them commonly wished for:

  • Improved the default size of the Clock widget’s text (KDE Bug 375969)
  • Worked with the VLC devs to make VLC 3.0 work out of the box for streaming videos over Samba shares on KDE desktops (VLC Bugs 18993 and 18991)
  • Made sections in Dolphin’s Places panel hide-able (KDE Bug 300247)
  • Added a dedicated section for removable devices to open and save dialogs (Phabricator revision D8348)
  • Made Gwenview open Targa Image (.tga) files (KDE Bug 332485)
  • Gave Gwenview’s move/copy/link dialog a title that reflects the selected files (Phabricator revision D8409)
  • Gave Discover’s screenshot pop-ups navigation buttons that respond to keyboard arrow keys (KDE Bugs 387816, 387858, and 387879)
  • Implemented a Wayland-compatible Redshift-esque Night Color feature (KDE Bug 371494)
  • Restored advanced print options to the Qt print dialog (Qt Bug 54464)

And we’re just getting started! You can find a longer list of the improvements we’re targeting in the coming months and years on the proposal page, and the full list here on our Bugzilla.

How you can help

If you have programming skills, please feel free to work on any of the above bugs. Patches are submitted using Phabricator; additional developer documentation can be found here.

If you’re more visually inclined, please feel free to start giving feedback and input in the KDE Visual Design Group chatroom.

If you’re detail-oriented and like categorizing things, we’re always in dire need of bug screeners and triagers. Every day, KDE receives on average 15-25 new Bugzilla tickets. Many of these are duplicates of existing bug reports. Many have already been fixed. Many are caused by configuration issues on the reporter’s computer. And many are real bugs or legitimate feature requests. But without bug screeners to categorize them appropriately, they all just pile up into an intimidating mountain. It’s an important job; the bug lists above could not have been compiled if not for KDE’s bug triagers. Read more about it here!

And finally, financial contributions are very much appreciated, too. KDE’s annual fundraiser is going on right now, so hop on over and donate! Every little bit helps.

Could you tell us something about yourself?

My name is Rositsa (also known as Roz) and I’m somewhat of a late blooming artist. When I was a kid I was constantly drawing and even wanted to become an artist. Later on I chose a slightly different path for my education and career and as a result I now have decent experience as a web and graphic designer, front end developer and copywriter. I am now completely sure that I want to devote myself entirely to art and that’s what I’m working towards.

Do you paint professionally, as a hobby artist, or both?

I mainly work on personal projects. I have done some freelance paintings in the past, though. I’d love to paint professionally full time sometime soon, hopefully for a game or a fantasy book of some sort.

What genre(s) do you work in?

I prefer fantasy most of all and anything that’s not entirely realistic. It has to have some magic in it, something from another world. That’s when I feel most inspired.

Whose work inspires you most — who are your role models as an artist?

I’m a huge fan of Bill Tiller’s work for The Curse of Monkey Island, A Vampyre Story and Duke Grabowski, Mighty Swashbuckler! Other than him, I follow countless other artists on social networks and use their work to get inspired. Also, as a member of a bunch of art groups, I see great artworks from artists I’ve never heard of every single day, and that’s also a huge inspiration.

How and when did you get to try digital painting for the first time?

My first encounter with digital painting was in 2006-2007 on deviantART but it wasn’t until 2010-2011 when I finally got my precious Wacom Bamboo tablet (which I still use by the way!) that I could finally begin my own digital art journey for real.

What makes you choose digital over traditional painting?

Digital painting combines my two loves – computers and art. It only seems logical to me that I chose it over traditional art but back then I didn’t give it that much thought – I just thought how awesome all the paintings I was seeing at the time were and how I’d love to do that kind of art myself. I’ve since come to realize that one doesn’t really have to choose one or the other – I find doing traditional art every once in a while incredibly soothing, even though I’ve chosen to focus on digital art as my career path.

How did you find out about Krita?

I think I first got to know about Krita from David Revoy on Twitter some years ago, but it wasn’t until this year when I finally decided to give it a try.

What was your first impression?

My first impression was just WOW. I thought “OMG, it’s SO similar to Photoshop but has all these features in addition and it’s FREE!” I was really impressed that I could do all that I was used to doing in Photoshop, but in a native Linux application and free of charge.

What do you love about Krita?

Exactly what I mentioned above. I’m still kind of a newbie with Krita so there’s not so much to tell but I’m sure I’m yet to discover a lot more to love as time goes by.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I’d like to see an improved way of working with the Bezier Curve Selection Tool, as I use it a lot but have trouble making perfect selections in one go. I’d really like to be able to alternate between corner anchor points and curves on the fly, as I’m creating the selection, instead of creating a somewhat messy selection and then having to go back and clean it up by adding and subtracting parts of it until it looks the way I’d intended. That would certainly save me a lot of time.

What sets Krita apart from the other tools that you use?

That it’s free to use but not any less usable than the most popular paid applications of the sort! Also, the feeling I get whenever I’m involved with Krita in any way – be it by reading news about it, interacting on social media or painting with it. I’m just so excited that it exists and grows and is getting better and better. I feel somewhat proud that I’m contributing even in the tiniest way.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I love everything I’ve created in Krita so far. I don’t think it’s that much about the software you create a certain artwork with, but rather all the love you put into it as you’re creating it.

What techniques and brushes did you use in it?

I’m trying to use fewer brushstrokes and more colorful shapes as I paint. I mainly use the Bezier Curve Selection Tool, the Gradient Tool and a small set of Krita’s predefined brushes for my artworks. I have tried creating my own custom brushes, but with little luck so far (I think I have much more reading and experimenting to do before I succeed).

Where can people see more of your work?

I have a portfolio website (in Bulgarian), but you can also find me on Facebook, Twitter, Behance, ArtStation and a bunch of other places, either as ArtofRoz, Rositsa Zaharieva or some combo/derivative of both.

Anything else you’d like to share?

I’d like to tell everyone that’s been using other software for their digital paintings to definitely give Krita a try, too. Not that other software is bad in any way, but Krita is awesome!

December 24, 2017

A few years back, I shifted kdesrc-build to a release model where it was to be used essentially straight from the git-master version.  This was both beneficial from a time management perspective, and also reflected the ongoing reality of how it was actually being used, since most developers were using kdesrc-build directly from git by then to take advantage of ongoing updates to the sample KF5 configurations.

While this was helpful to free up more time for development, it also meant that release announcements stopped being published.  Since things that aren’t written down somewhere might as well have never happened, I figured I’d go ahead and collect some of the more notable changes over the past couple of years.

  • --include-dependencies (a flag to automatically pull in KDE-based repository dependencies into a build) was adapted to find any KDE-based repository, instead of only the ones already located somewhere within your configuration file.  The intent of this was to permit even simpler configuration files (set your source-dir, install dir, and go… at least in concept).
  • Updated the code to refer consistently to the kdesrc-build docs available at rather than  The former are always built (from the same doc/ sources in kdesrc-build.git) and are much better maintained.
  • Added a --query flag to the script.  This flag permits retrieving information about the various git repositories without having to dust off scripts in kde-dev-scripts or kde-build-metadata, or go through a full kdesrc-build run.  Note that you should probably include the --pretend flag to get output suitable for use by automated scripts, and that kdesrc-build is not very fast at this.
  • Fixed the ${option-name} syntax in the configuration file, which permits referring later in the configuration file to options previously set, including options not recognized by kdesrc-build.
  • Allowed options blocks in the configuration file to apply to entire (named) module sets, not just individual modules.  Of course, you’ll want to be careful not to have ambiguous names between module sets and modules…
  • Removed support for installing KDE git repositories from tarball snapshots.  This was far more useful back when we used Subversion; with git it actually seems to take more system resources to generate the tarballs than it saves on the whole, so the KDE sysadmins stopped generating them last year, and there are no other users supported by kdesrc-build.  Much of the code has been removed already, the rest will be removed as I get to it.
  • Some work to move the included kdesrc-build-setup script to belatedly point to Qt5-based repositories and branches.  The ideal option is probably still to have kdesrc-build itself either work with a built-in good default configuration, or generate a sample config directly… but that needs someone to do the work.
  • The normal set of code refactorings.  Five years ago, when I started trying to modularize the core script with commit 88b0657, kdesrc-build was a 9,304 line single monolithic Perl script.  I had started to get concerned at the time that if I ever left maintenance of the script, or if we needed to port it away from Perl, that it would be very difficult to make that transition.  While the script is not exactly straightforward even today, it is in much better shape and I hope to keep that going.
  • Also, I have finally committed some work I’d been pursuing off and on over the past year, to remove the use of the kde_projects.xml file that kdesrc-build used to gather metadata about the various KDE source repositories.  At one point it was looking like I’d be using a dedicated Web API that would expose this repository metadata, but then we realized that it would be easier for all involved to just go straight to the source: the sysadmin/repo-metadata.git repository.  That code landed earlier this week, and although I’ve already received a couple of bug reports on it, it will significantly ease the burden on the KDE build infrastructure both in terms of compute power and of sysadmin time in maintaining the script which generates the required XML as they evolve their own backend.
    • You’ll want to install the YAML::XS Perl module.  On Debian/Ubuntu based systems this seems to be libyaml-libyaml-perl. In particular the plain YAML Perl module seems to be buggy, at least for the YAML we are using in our repo-metadata.
  • Finally, while this support still remains ‘unofficial’, I have managed to get enough of the Qt5 build system working in kdesrc-build that I have been able to start locally keeping Qt5 up to date with kdesrc-build and only a minimal amount of manual updates (usually revolving around git submodules).

That’s quite a bit of work, but my personal effort represents less than two-thirds of the git commits made in the last 2 years.  The KDE community at large has been instrumental in keeping the default build configuration up to date and fixing bugs in the script itself.

Going forward my major goals are to resurrect a useful test suite (something broken in the modularization effort), and to use that to support making the script significantly less noisy (think one line of output per module, tops).

Both of these efforts have already started to some extent, and the test suite has its own branch (“testing-restructure”).

That’s what I want to do… what do you think I should do?  How should kdesrc-build continue to evolve to support the KDE Community?

The Notebookbar implementation Tabbed Compact is finished and can be tested. Download LibreOffice Master for

Configure Notebookbar:

  • Tools -> Options -> Advanced -> Enable experimental features
  • View -> Toolbar Layout -> Notebookbar
  • View -> Notebookbar -> Tabbed Compact

And this is what it will look like:

The idea of the toolbar is that you get all actions grouped into different tabs. The benefit is that there is enough space for most actions from LibreOffice, and you can show both action icons and labels. In addition, the UI is resizable.

Download and test it.

December 23, 2017

Following the recent Kdenlive 17.12 release, a few annoying issues were reported in the AppImage. So we decided to focus on these and you can now download an updated AppImage with these 2 nice changes:

  • Translations are now included in the AppImage, for easy editing in your native language!
  • Fixed a high CPU usage on idle due to the sound pipeline

The 17.12 updated AppImage can be downloaded here:
Edit, 25th of December: AppImage updated following CPU issues.

The 18.04 updated alpha (not ready for production) also includes a fix for saving/loading track state:

So I wish you all a happy end of year and successful editing… and if you love our work, please consider a donation to KDE’s end of year fundraiser!

I’ve got a chance to share a part of my upcoming book here.

The book is in production, I’m expecting it to come out in February.

There will be a discount on December 24th and 25th: Half off everything at Manning. You can use codes dotd122417 and dotd122517 – just go to Manning webpage.

The biggest problem in software development is handling complexity. Software systems tend to grow significantly over time and they quickly outgrow the original designs. When it turns out that the features that need to be implemented collide with the design, we must either re-implement significant portions of the system or introduce horrible quick-and-dirty hacks to make things work.

See the whole excerpt in this PDF


December 22, 2017

At the end of the year 2007 I sent my first patch to KWin. At that time 4.0 was about to be released, so that patch didn’t end up in the repository in 2007, but only at the beginning of 2008, and was released with 4.1.

Ten years is a long time and we have achieved much in KWin during that time. My personal highlight is of course that KWin is nowadays a Wayland compositor and no longer an X11 window manager. According to git shortlog I added about 5000 commits to KWin and another 500 commits to KWayland.

So up to the next ten years! Merry Christmas and a happy new year 2018.

Hello =D Today I'm here to talk with you about the KDE End of Year Fundraising. I've been part of the KDE community since the end of 2015, and my life is a LOT better because of it. I was able to grow a lot as a developer and as a person. On the developer part, I... Continue Reading →

December 21, 2017

So, again in time for Xmas: I have basically finished the base kdelibs 2.2.2 port. It is far from perfect, as noted below, but it can now be improved as I start porting kdebase.

If someone asks why I’m doing this (allegedly) useless work, it’s because I really want to restore KDE 2, and to improve my porting skills, since I think that’s a valuable skill for any programmer.

I think that in the future, companies and organizations will need to port or maintain legacy C/C++ software, as has already happened with COBOL software, and we need to be ready for this.

And I really love doing this; it’s part of my KDE history…

So, for kdelibs, most of the tests work, dcopserver works perfectly, and graphics work, so it’s a little beyond a proof of concept.

Autotools proved to be a worthy adversary, but I found my way around it, so CMake it is.

The super repo is still on GitHub, but once I decide at some point that kdebase is done, I will request a proper place in our home base, the KDE Git repository

KDE 2 Super Repo

Here follows a copy, for the lazy ones:

Merry Xmas and a Happy New Year


KDE Restoration Project – KDE 2.2.2

This is a Software Engineering Archeology work. The intention is to keep the original KDE 2 working as long as possible on modern architectures (Unix and Linux only for now). A few premises are taken to go with this port:

  • Keep the original code as original as possible
  • Replace the current build system with a modern one. The choice was CMake, since I know it better and current KDE uses it

The current status:

Qt2 is done with some remarks:

  • There’s an issue, as Qt2 didn’t recognize ARGB visuals in those times (of course!). Thanks to Gustavo Boiko, who found the issue. So, if you intend to run software like Qt Designer, export this on the command line: XLIB_SKIP_ARGB_VISUALS=1
  • Compilation depends on byacc. One of the sources is not ported to modern bison/flex. Thanks to @EXl for pointing this out

kdelibs is done with some remarks:

  • arts is not compiled yet. It has been my nemesis since I worked at Conectiva several years ago, and it is still a pain. Help welcome
  • Documentation is not generated. This is secondary and will be dealt with after kdebase
  • The install part is done, but it is not 100% proven that it was done properly
  • libtool porting “should” work, but it is not properly tested
  • Most software can be compiled directly from the super repo, but to test it, unfortunately, we need to run dcop and have the install directory properly set. This will be properly tested when the kdebase port starts (soon, I hope, crossing my fingers).


  • Clone the super repo: git clone --recursive
  • Enter the directory: cd kde2
  • Create a non-source build dir (I usually use build) and enter it: mkdir build && cd build
  • Run cmake: cmake ..

My default compiler is Clang, on Fedora Linux 27. At this moment I can’t remember all the required libraries, so for now you need to run cmake and see what is missing on your side.

I will be thankful for any help. To be clear, this is probably a useless project, but it has some meaning for me, at least.

Again thanks to:

Gustavo Boiko – @boiko


When coding on Kontact you normally don't have to care a lot about the dependencies between the different KDE Pim packages, because great tools are available already. kdesrc-build finds a solution to build all KDE Pim packages in the correct order. The KDE Pim docker image gives you an environment with all dependencies preinstalled, so you can start hacking on KDE Pim directly.

While hacking on master is nice, most users are not using master on their computers in daily life. To reach the users, distributions need to compile and ship KDE Pim. I am active within Debian and would like to make the newest version of KDE Pim available to Debian users. Because Qt deprecated Qt WebKit with Qt 5.5, KDE Pim had to switch from Qt WebKit to Qt WebEngine. Unfortunately Qt WebEngine wasn't available in Debian, so I had to package Qt WebEngine for Debian before packaging KDE Pim for Debian. Qt WebEngine itself is a beast to package. It was only possible to package Qt WebEngine in time for the last stable release, named "Stretch", with the help of others of the Debian Qt/KDE maintainers, especially Scarlett Clark, Dmitry Shachnev and Simon Quigley, and we could only upload it some hours before the deep freeze. So if you have asked yourself why Debian doesn't ship 16.08 within its last stable release, this is the answer: the missing dependency for KDE Pim, named Qt WebEngine.
There is a second consequence of the switch: Kontact will only be available for those architectures that are supported by Qt WebEngine. Of the 19 architectures supported for 16.04, we can only support five in the future.

Now, after Debian has woken up again from its slumber, we first had to update Qt and KDE Frameworks. After the first attempt at packaging KDE Pim 17.08.0, which was released to experimental, we are now finally reaching the point where we can package and deliver KDE Pim 17.08.3 to Debian unstable. Because Pino Toscano and I had time, we started packaging it and stumbled across the issue of having to package 58 source packages, all dependent on each other. Keep in mind that all this packaging work is not a one-man or two-man show; almost all of the Debian Qt/KDE maintainers are involved somehow, either by putting their name under an upload or by being available via IRC and mail, answering questions, making jokes or whatever else. Jonathan Riddell visualized the dependencies for KDE Pim 16.08 with graphviz. But KDE Pim is a fast moving target, and I wanted to make my own graphs and make them more useful for packaging.

Full dependency graph for KDE Pim 17.08

The dependencies you see on this graph are created out of the Build dependencies within Debian for KDE Pim 17.08. I stripped out every dependency that isn't part of KDE Pim. In contrast to Jonathan, I made the arrows from dependency to package. So the starting point of the arrow is the dependency and it is pointing to the packages that can be built from it. The green colour shows you packages that have no dependency inside KDE Pim. The blue indicates packages with nothing depending on them. But to be honest, neither Jonathan's nor my graph tells me any more than they do you. They are simply too convoluted. The only thing these graphs make apparent is that packaging KDE Pim is a very complex task :D

But fortunately we can simplify the graphs. For packaging, I'm not interested in "every" dependency, but only in "new" ones. That means, if a <package> depends on a, b and c, and b depends on a, then I know: okay, I need to package b and c before <package>, and a before b. I would call a an implicit dependency of <package>. Here it is again in a dot style syntax:

a -> <package>
b -> <package>
c -> <package>
a -> b

can be simplified to:

b -> <package>
c -> <package>
a -> b

With this quite simple rule to strip all implicit dependencies out of the graph we end up with a more useful one:

Simplified dependency graph for KDE Pim 17.08

(You can find the dot file and the code to create such a graph at
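The rule described above amounts to a transitive reduction of the build-dependency graph. Here is a minimal Python sketch of it; the edge list is the toy a/b/c example from the text, not the real Debian metadata:

```python
from collections import defaultdict

# Drop "implicit" dependencies: remove the edge dep -> pkg whenever pkg
# already reaches dep through one of its other direct dependencies.
def simplify(edges):
    deps = defaultdict(set)              # package -> direct dependencies
    for dep, pkg in edges:
        deps[pkg].add(dep)

    def reachable(node, seen=None):      # all transitive dependencies of node
        seen = set() if seen is None else seen
        for d in deps[node]:
            if d not in seen:
                seen.add(d)
                reachable(d, seen)
        return seen

    kept = []
    for dep, pkg in edges:
        others = deps[pkg] - {dep}
        if not any(dep in reachable(o) for o in others):
            kept.append((dep, pkg))
    return kept

# a, b and c -> <package>; a -> b  (the example from the text)
edges = [("a", "pkg"), ("b", "pkg"), ("c", "pkg"), ("a", "b")]
print(simplify(edges))  # [('b', 'pkg'), ('c', 'pkg'), ('a', 'b')]
```

The edge a -> pkg is dropped because pkg already reaches a through b, exactly as in the dot example above.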

At least this is a lot easier to consume and create a package ordering from. But still it looks scary. So I came up with the idea to define tiers, influenced by the tier model in KDE Frameworks. I defined one tier as the maximum set of packages that are independent from each other and only depend on lower tiers:

Build tiers for KDE Pim 17.08 (The dot file and the code to create such a graph you can find at

Additionally, I only show the dependencies from the previous tier to the current one. So a dependency from tier 0 -> tier 1 is shown, but not one from tier 0 -> tier 2. That's why it looks like nothing depends on kdav or ktnef. But the ellipse shape tells you that, in higher tiers, something depends on them. The light blue diamond-shaped ones, in contrast, indicate that nothing depends on them anymore. So here you can see the "hot path" for dependencies. This shows that the bottleneck is libkdepim -> pimcommon. Interestingly, this is also, more or less, the border of the former split between kdepimlibs and kdepim during the KDE SC 4 times.
I think this is a useful visualization of the dependencies, and it may be a starting point for defining a goal for what the dependencies should look like.
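A tier, as defined here, can be computed by repeatedly peeling off every package whose dependencies all sit in lower tiers. A small Python sketch; the package names are illustrative, not taken from the actual 17.08 graph:

```python
# Tier N = the set of packages whose dependencies all live in tiers < N.
def tiers(edges, packages):
    deps = {p: set() for p in packages}   # package -> direct dependencies
    for dep, pkg in edges:
        deps[pkg].add(dep)

    result, placed = [], set()
    while len(placed) < len(packages):
        tier = {p for p in packages
                if p not in placed and deps[p] <= placed}
        if not tier:                       # nothing buildable: cycle
            raise ValueError("dependency cycle in the graph")
        result.append(sorted(tier))
        placed |= tier
    return result

# Illustrative names only, not the real KDE Pim dependency data.
edges = [("kcoreaddons", "kcontacts"),
         ("kcontacts", "akonadi"),
         ("kcoreaddons", "akonadi")]
print(tiers(edges, ["kcoreaddons", "kcontacts", "akonadi"]))
# [['kcoreaddons'], ['kcontacts'], ['akonadi']]
```

Each iteration collects the maximum set of packages that are mutually independent, which matches the tier definition in the text.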

You may also ask yourself why an application needs so many more tiers than the complete KDE Frameworks. Well, the third tier of KDE Frameworks is more of a collection for leftovers that don't reach tier 1 or tier 2. The definition of tier 3 is: "Tier 3 Frameworks can depend only on other Tier 3 Frameworks, Tier 2 Frameworks, Tier 1 Frameworks, Qt official frameworks, or other system libraries." The relevant part is that a tier 3 framework can depend on other tier 3 frameworks. If you use my tier definition instead, then you end up with more than ten tiers for KDE Frameworks, too.

After building all of these nice graphs for Debian, I wanted to see if I could create such graphs for KDE Pim directly. As KDE is mostly using the kde-build-metadata.git for documenting dependencies I updated my scripts to create graphs from this data directly:

Simplified dependency graph for KDE Pim 17.12; build tiers for KDE Pim 17.12

(the code to build the graphs yourselves is available here: kde-dev-scripts.git/

In detail this graph looks different, and not just because of the version difference (17.08 vs. master). I think we need to update the dependency data. This may also explain why kdesrc-build sometimes doesn't manage to compile all of KDE Pim in the first run.

Some of you may know that KDAB employees enjoy flexibility in working hours as well as location, and some choose to work from home, with the opportunity to share childcare, do part-time study or simply enjoy an out-of-the-way location. All that's required is decent bandwidth for KDAB work.

About once a year, all of KDAB's employees come together so we can get to know each other in a different way. Among other things, this allows new members of the KDAB family to introduce themselves via a ten-minute "newbie talk" on anything they are interested in.

The year Daniel Vrátil joined KDAB, he gave us this talk. We thought we would show it again as a Christmas bonus to our readers. continue reading

The post An Unexpected C++ Journey appeared first on KDAB.


Through their partnership, Qi and KDAB have developed imaging software to help researchers gain a deeper understanding of the progression of cancer on a cellular level, using Qt.

Qi, KDAB and Qt have now taken their collaboration one step further by introducing nanoQuill, otherwise known as “The Coloring Book of Life,” which is a crowdsourced coloring book and mobile app that gives anybody the opportunity to color cancer images to help annotate organelles inside those electron microscopy images. Qi is then able to take the crowdsourced annotations to measure a cell’s detail, render 3D images from the colored 2D images, and ultimately train new deep learning algorithms, all in the name of advancing cancer research.

“With our user interface capabilities and KDAB’s unmatched technology expertise, Qi is able to advance its research without being impeded by technological restraints. Furthermore, with the nanoQuill coloring book project, Qi is able to leverage an entire global community to exponentially accelerate the world-changing contributions it’s making towards curing cancer.” —Michel Nederlof, CEO of Quantitative Imaging Systems

Just about everybody gets affected by cancer sooner or later, as a patient, a colleague, a relative, a caregiver, a neighbor, or a friend. Certainly, this applies to KDAB employees, too, which is why we partnered with Oregon Health & Science University (OHSU) and Quantitative Imaging (Qi) on several projects and we are very proud to be able to contribute, alongside with The Qt Company, to another great one — nanoQuill.

nanoQuill is a great opportunity for us to apply our world-class engineering skills in the area of Qt, 3D, OpenGL, C++, and platform-independent software development to an excellent cause. We are aware that many years of cancer research are still ahead of us before “cancer as we know it” can be considered a thing of the past, but with every colored image, we, you, and everybody who is contributing, advance that research a little bit.

Learn more and order your copy from nanoquill. continue reading

The post nanoQuill appeared first on KDAB.

So, last time we spoke, I left you this teaser...

[looping video teaser]

I mean, how hard could it be, right?

OK, so, hm, an easy enough exercise... for a path-based animation like we saw last time.

But how do I draw such a path? How do I time the inflections in speed for that path, since time of occurrence is dependent on speed and not position?

Hmm, a roller coaster seems like the ideal thing. I mean, it is a path, and the internal animation along that path depends on the path itself. In other words, the lower you are, the faster you go... right?

So let's start drawing, beginning with a plain svg path that I draw in my old Inkscape.... that is the basis of our roller coaster.

All good, right? Next I need to move this to 0,0 in Inkscape (not the page 0,0, but the real 0,0). Once the path is in place, copy the SVG path code and insert it into the PathInterpolator:

PathInterpolator {
    id: pathInterpolator

    path: Path {
        PathSvg { path: "M 0,147.09358 C.........." }
    }
}

This works great! Make a car item bound to the progress property and... this is easy! But it's not very lifelike: it's just a car going at a constant speed along a path.

Now the big question.... How do I make the animation believable?

I know, I will use my old friends, the spreadsheet and numerical math...
First of all, let's break this path into multiple segments so that I have an x,y coordinate system of sorts.

Inserting a console log that spews out the Y position with a timer did the trick for me.

OK, next we move this to a spreadsheet and say, for simplification, that the speed is Y.
All good: now we have a position-in-path vs. speed graph, an initial speed factor, and a final speed, so we can play with it... and we get this:


But what we need is a time vs speed graphic.

Now let's calculate, at each position, its time in a cumulative fashion. We have p = p0 + v·t, taking the average segment speed v = (v0 + v1)/2 (p is the position). Since p0 is the previous sample's position, each segment contributes t = (p − p0)/v, which we accumulate along the path:


This produces our new x axis... for the speed.

Next we just need to integrate the speeds over the time periods (position in the path at time) in order to create a curve shape that makes sense to easing.bezierCurve: 
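For the curious, the spreadsheet math above can be sketched in a few lines of Python (my own illustration with made-up sample data, not the actual spreadsheet from the post):

```python
# A rough sketch of the spreadsheet math: given sampled path positions and
# the speed at each sample (here simply taken from the Y value), accumulate
# the time spent in each segment using t_i = t_{i-1} + (p_i - p_{i-1}) / v_avg.

def cumulative_times(positions, speeds):
    """Return the time at which each path position is reached."""
    times = [0.0]
    for i in range(1, len(positions)):
        v_avg = (speeds[i - 1] + speeds[i]) / 2.0  # average speed in segment
        times.append(times[-1] + (positions[i] - positions[i - 1]) / v_avg)
    return times

def normalize(values):
    """Scale a monotonic series into the 0..1 range the easing curve expects."""
    lo, hi = values[0], values[-1]
    return [(v - lo) / (hi - lo) for v in values]

# Toy data: position along the path vs. speed (faster when "lower").
positions = [0.0, 0.25, 0.5, 0.75, 1.0]
speeds = [1.0, 4.0, 2.0, 4.0, 1.0]

times = cumulative_times(positions, speeds)
# (normalized time, position) pairs approximate the time-vs-position curve
curve = list(zip(normalize(times), positions))
```

Plotting those (time, position) pairs gives the curve shape that then gets traced back into Inkscape.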


Then we need to move this image (orange line) back into Inkscape in order to create a path out of it.

Almost there... OK, remember that the Y direction in Inkscape points down, so we have to mirror this path. Also, QML does not take real SVG paths for this but rather a normalized curve that starts at 0,0 and ends at 1,1, where all nodes contain control points on both ends. To make things even simpler, they need to be described in absolute coordinates (Inkscape likes to save as relative). After some fiddling, I found out I can force Inkscape to use absolute coordinates when saving its XML... and I managed to put the path at 0,0 to 1,1.

Now for the final step: just create a little script that transforms the SVG string into a bezier array of points and we are good to go... almost.
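As a rough idea of what such a script could do, here is a Python sketch (my own illustration; the post's actual script is not shown, and the sample path string is invented):

```python
# Turn an absolute-coordinate SVG cubic path like
# "M 0,0 C 0.2,0.1 0.4,0.9 0.5,0.5 C 0.6,0.2 0.8,0.9 1,1"
# into the flat [c1x, c1y, c2x, c2y, endX, endY, ...] array that QML's
# easing.bezierCurve expects (the 0,0 start is implicit; the end must be 1,1).

import re

def svg_to_bezier_array(path):
    numbers = [float(n) for n in re.findall(r'-?\d+(?:\.\d+)?', path)]
    # Drop the "M x,y" start point; everything after belongs to "C" segments,
    # each contributing two control points and an end point (6 numbers).
    points = numbers[2:]
    assert len(points) % 6 == 0, "expected only absolute C segments"
    return points

curve = svg_to_bezier_array(
    "M 0,0 C 0.2,0.1 0.4,0.9 0.5,0.5 C 0.6,0.2 0.8,0.9 1,1")
```

The resulting list can be pasted straight into a QML easing.bezierCurve property.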

Add 2 more carts connected to the same base animation, make them all randomly bounce a bit, make them rotate during the "fake" jump for fun, and we are done.

Download the rolercoster.ods file if you are interested.

Note: The jump phase implies that the path used for the car movements and the path used to acquire the speed are not exactly the same, so a bit of testing and fiddling was needed.


It is not the easiest of processes to achieve such an animation, but it's flexible enough to solve a lot of similar problems where you know the speed you want at a given position, but don't know how to get there. It's useful if you want to simulate a physics-driven animation but you think adding a real physics engine to do it is overkill.

Note: the size of the bezier curve is limited; for longer curves I would suggest breaking the curve into multiple ones.

The next blog will be about abusing the particle systems for profit, and fun... But that is something for next year.


The post Riding the curve appeared first on KDAB.

December 20, 2017

Tips for streamlining KDE applications for deployment on Microsoft Windows

In KDE we have a great group of developers hacking on a variety of applications. We usually have no problems getting our software out to end users on Linux, because distributions help us package and distribute it.

Where we do have problems, for the major applications, is in ensuring our software works on other platforms such as Windows and macOS. This short guide gives a few tips on how to make your KDE pet project work on Windows, by at least testing it once on said system and verifying a few basic things.

Build your application on Windows

Set up a Windows VM (Windows 7+), a compiler (MSVC2015+ (recommended) -- or let Craft auto-setup MinGW during bootstrap) and build your application using Craft. If set up properly, all you need to do now is run something along the lines of:

craft filelight  

Craft will take care of installing all dependencies required to build filelight (i.e. the dependencies of Qt5, Qt5 itself, the dependencies of KF5, KF5, ...) and only then build the filelight project.

The end result is a single install root which contains the images of every package you've just built. Now starting filelight is as easy as running this in the terminal:


Run the unit tests

Again, easy to do with Craft. If your pet project has some unit tests available, you can easily run them by invoking e.g.:

craft --test kcoreaddons  

This will invoke ctest in the correct build directory and run the project's test suite.

Package the application

Craft on Windows has functionality to create installers out of installed packages.

craft --package filelight  

The way this is implemented is pretty simple but powerful:

  • Craft collects all files from the image directories of the packages your application depends on
  • Craft collects all files from the application's image directory
  • Craft puts them into an intermediate install root
  • After that, Craft strips unneeded files according to blacklists (cf. 'blacklist.txt' and similar functionality in blueprints)
  • After that, custom scripts may be run
  • Finally, makensis is called (from the NSIS installer framework), which basically zips up the whole install root and generates a final installer executable

Things you usually need to fixup

Installer icon

Handled by: Craft

The very first impression counts, so why not make your installer binary as sexy as possible?

Again, let's take the installer generated for filelight. One version without an installer icon set, and one with the filelight logo set as the icon:

Before: No installer icon
After: With custom installer icon

This is very easy to do with Craft, which contains a few helpers to instruct the NSIS installer framework (the scriptable tool we use to generate Windows installers in the first place) to our liking.

An exemplary patch in craft-blueprints-kde.git (KDE's blueprint collection for Craft):

commit 1258a4450a1ee2f620856c150678dcaf5b5e7bad  
Author: Kevin Funk <>  
Date:   Mon Nov 20 14:13:04 2017 +0100

    filelight: Add application icon

    Created with:
      convert /usr/share/icons/breeze/apps/48/filelight.svg ./kde/kdeutils/filelight/filelight.ico

diff --git a/kde/kdeutils/filelight/filelight.ico b/kde/kdeutils/filelight/filelight.ico  
new file mode 100644  
index 0000000..1b6a71b  
Binary files /dev/null and b/kde/kdeutils/filelight/filelight.ico differ  
diff --git a/kde/kdeutils/filelight/ b/kde/kdeutils/filelight/  
index 0003f30..ac602c6 100644  
--- a/kde/kdeutils/filelight/
+++ b/kde/kdeutils/filelight/
@@ -30,6 +30,7 @@ class Package(CMakePackageBase):
         self.defines["productname"] = "Filelight"
         self.defines["website"] = ""
         self.defines["executable"] = "bin\\filelight.exe"
+        self.defines["icon"] = os.path.join(self.packageDir(), "filelight.ico")

         self.ignoredPackages.append("libs/qt5/qtdeclarative") # pulled in by solid

This patch adds an ICO file to the repository and references it in the filelight blueprint. Craft takes care of telling NSIS to use this ICO file as the installer icon internally while building your package with craft --package filelight.

Application icon

Handled by: CMake

Imagine starting your application via the Windows Start Menu:

Before: No application icon
After: With custom application icon

To get the custom application icon, use the Extra CMake Modules ECMAddAppIcon module, which provides the CMake function ecm_add_app_icon(...), allowing you to amend your executable with an application icon.

Here's an exemplary patch taken from filelight.git:

commit d7c7f1321547197e5bb9ceba6b8ccc51790bef8b  
Author: Kevin Funk <>  
Date:   Mon Nov 20 23:14:30 2017 +0100

    Add app icon

diff --git a/CMakeLists.txt b/CMakeLists.txt  
index 4c1e5dc..b8b6eda 100644  
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -25,6 +25,7 @@ cmake_minimum_required (VERSION 2.8.12 FATAL_ERROR)
 find_package(ECM 1.3.0 REQUIRED NO_MODULE)

diff --git a/src/CMakeLists.txt b/src/CMakeLists.txt  
index 622c9d8..06f5e8d 100644  
--- a/src/CMakeLists.txt
+++ b/src/CMakeLists.txt
@@ -34,8 +34,16 @@ set(filelight_SRCS
-    main.cpp)
+    main.cpp)
+
+set(filelight_ICONS
+    ${CMAKE_CURRENT_SOURCE_DIR}/../misc/16-apps-filelight.png
+    ${CMAKE_CURRENT_SOURCE_DIR}/../misc/32-apps-filelight.png
+    ${CMAKE_CURRENT_SOURCE_DIR}/../misc/48-apps-filelight.png
+    ${CMAKE_CURRENT_SOURCE_DIR}/../misc/64-apps-filelight.png)
+
+ecm_add_app_icon(filelight_SRCS ICONS
+    ${filelight_ICONS})
 ki18n_wrap_ui(filelight_SRCS dialog.ui)
 add_executable(filelight ${filelight_SRCS})

Use Breeze icon theme

Handled by: Craft -- you don't need to do anything.

Breeze-icons (KDE's default icon theme), when configured with the CMake option -DBINARY_ICONS_RESOURCE=ON, installs .rcc files (binary resources, loadable by Qt).

Craft by default passes -DBINARY_ICONS_RESOURCE=ON when the breeze-icons package is installed and thus the RCC file is available by default. When a Craft blueprint using breeze-icons is packaged, the RCC file is automatically included in the resulting artifact.

When the application starts up, KIconTheme scans some directories for RCC files, and if it finds them they're automatically opened and loaded, so the icons become available.

Further reading about the initial design of the feature by David Faure:

Check presentation of file paths in user interface

Make sure that file paths in your application are rendered consistently. One usual problem we face is that the user interface ends up with strings like C:/Program Files (x86)/KDevelop\, which mixes forward and backward slashes.

Bad: Mixed forward/backward slashes in file paths

Instead, decide on one form of slashes. While backward slashes are the usual form on Windows, the Qt framework makes it a little difficult to print path names with them: API such as QUrl::toDisplayString(...) will always return paths using forward slashes, and one would need to add little helpers using QDir::toNativeSeparators everywhere to do it properly.

At the very minimum, use either form consistently; don't mix forward and backward slashes in one file path. Applications on Windows these days handle paths containing forward slashes just fine, by the way.

commit 0de04d386e403ded74554951a8c4dcb9ee9bc1f9  
Author: Kevin Funk <>  
Date:   Mon Nov 20 15:32:54 2017 +0100

    File::fullPath: Nicer file path on Windows

diff --git a/src/fileTree.cpp b/src/fileTree.cpp  
index 9a5d06e..28ed689 100644  
--- a/src/fileTree.cpp
+++ b/src/fileTree.cpp
@@ -21,6 +21,8 @@

 #include "fileTree.h"

+#include <QUrl>
 File::fullPath(const Folder *root /*= 0*/) const
@@ -32,5 +34,6 @@ File::fullPath(const Folder *root /*= 0*/) const
     for (const Folder *d = (Folder*)this; d != root && d; d = d->parent())

-    return path;
+    const QUrl url = QUrl::fromLocalFile(path);
+    return url.toDisplayString(QUrl::PreferLocalFile | QUrl::StripTrailingSlash);

Install C/C++ runtime

Handled by: Craft -- you don't need to do anything.

When installing an application on Windows, the package author also needs to make sure the appropriate C/C++ runtime is injected into the system as part of the installation process. For instance, if your project was compiled using Microsoft Visual C++ 2015 and you want this project to run on another machine, you need to make sure the Microsoft Visual C++ 2015 Redistributable (which contains the C/C++ runtime components) is installed there.

Packages we need on Windows:

  • If project compiled with MSVC:
    • VCRedist installer (contains all necessary libraries)
  • If project compiled with MinGW:
    • Needs another set of libraries (e.g. libstdc++-6.dll, libgcc_s_sjlj-1.dll, ...)

But don't despair: Craft has all that covered and will automatically include the binaries for the appropriate C++ runtime in the package and make sure it is properly installed as part of the installation of your KDE application on the target machine.

More ideas

If you'd like to know anything else, I can probably add a few more paragraphs to this blog post for future reference. Just comment or mail me!

December 19, 2017

It's that time of the year again - the time for our End-of-Year fundraiser!

After an exciting and successful year, we give you all an opportunity to help us recharge our proverbial batteries.

You've always wanted to contribute to a Free and open source project, right? Maybe you wondered how you could do that.
Well, supporting our fundraiser is a perfect way to get started. Donations are also a great way to show gratitude to the developers of your favorite KDE applications, and to ensure the project will continue.

Besides, you know that this is a project worth backing, because we get things done. Since the proof of the pudding is in the eating, let's take a quick look at what we did this year.

2017 Software Landmarks

  • In 2017, we released 3 major versions of Plasma - 5.9, 5.10, and 5.11
  • KDE Applications also saw 3 major releases, with the last one released just recently
  • There were 2 big releases of KDevelop, with improved support for PHP, Analyzers mode with plugins like Heaptrack and cppcheck, and support for Open Computing Language
  • We kept pushing Kirigami forward with releases 2.0 and 2.1, and several applications newly ported to the framework. Thanks to the new Kirigami, even more apps can be ported to a wider range of desktops and mobile environments
  • There were 4 releases of digiKam, the image management software, which also got a new, prettier website design
  • Krita continued to amaze everyone with its high-quality features, and it just keeps getting better
  • We welcomed a new browser, Falkon (formerly known as Qupzilla), into our KDE family. We were also joined by several new applications, including Elisa, a simple and straightforward music player
  • Our developers focused on accessibility and made our applications more usable for everybody during the Randa Meetings developer sprint
Into 2018 with You

We look forward to the new year with all its challenges and excitements, and we don't plan on slowing down.

There will be new Plasma and KDE Applications releases, with a new Plasma LTS release (5.12) planned for the end of January. Season of KDE will bring a stream of fresh contributors. Konversation 2.0 will present a completely redesigned interface, and you can be sure it's not the only application that will positively surprise you in 2018.

We will spend a lot of time and effort on our long-term goals, which include improving software usability, protecting your privacy, and making it easier to become a KDE contributor. And as always, we'll be on the lookout for more Free Software projects that we can bring under our wing and help the developers bring their ideas to fruition.

But we cannot do all this without you. The community - that is, people just like you - is what drives KDE forward. Your energy motivates us. Your feedback helps us improve. Your financial support allows us to organize community events and developer sprints, maintain our websites and infrastructure, and so much more.

Help us build a bigger, better, more powerful KDE Community by donating to our End-of-Year fundraiser. We appreciate every contribution, no matter how modest.

You can also support us and power up our fundraiser by posting about it on social media. Share the link, tell others about it, or write a post on your blog and share that. Tweet us a link to your blog post, and we will share it on our social media.

Let's empower each other!

December 18, 2017

Obviously I still use FreeBSD on the desktop; with the packages from area51 I have a full and modern KDE Plasma environment. We (as in, the KDE-FreeBSD team) are still wrestling with getting the full Plasma 5 into the official ports tree (stalled, as so often it has been, on concerns of backwards compatibility), but things like CMake 3.10.1 and Qt 5.9 are sliding into place. Slowly, like brontosauruses driving a ’57 Cadillac.

In the meantime, I do most of my Calamares development work — it is a Linux installer, after all — in VMs with some Linux distro installed. Invariably — and especially when working on tools that do the most terrible things to the disks attached to a system — I totally break the system, the VM no longer starts at all, and my development environment is interrupted for a bit.

That’s always a good moment to switch distros... since I’m going to spend an hour or so re-invigorating the VM anyway, reminding myself that this time I’ll make a clone and keep snapshots and whatnot, I may as well see what others use Calamares for. I’ve left a dusty and broken KDE Neon and a Manjaro behind me, and today I’m starting on KaOS.

For the very reason that Calamares can break stuff, and because new Calamares versions need to be tested on (older) live ISO images, there’s a deploycala Python 3 script that helps install all the (development) dependencies needed. This is a bit of a drag, since it’s tied to a whole bunch of distro-specific package names, but it gets the job done. I update it whenever I hit a new distro.

KaOS Linux Logo

KaOS is a KDE-leading-edge distro, and daaannnggg is it ever slick in the first 10 minutes of use. The customization of Calamares is some of the nicest I’ve seen. The release-notes page could use some work (on my part)... if only because I could read that stuff during unsquashfs, instead of before it. First-time startup with Kaptan is a hardly-intrusive way of configuring a bunch of separate things that would otherwise require a careful trip through systemsettings.

Look for the next few Calamares releases (in particular 3.2) to emerge from the KaOS, then.

Lately I found myself working on an ARM64 (aka aarch64) based system which I don’t own, so I needed to get a system to build and test things on.


First of all, you need to have qemu static builds installed. For example, on Arch Linux you need to get them from the AUR:
$ yaourt -S qemu qemu-user-static binfmt-support

And then enable aarch64 emulation:
# update-binfmts --enable qemu-aarch64

$ docker run -ti --rm -v /usr/bin/qemu-aarch64-static:/usr/bin/qemu-aarch64-static apol/test bash


Once set up, it can be used like any regular docker image. For example, passing -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix will give us access to X11, allowing us to run apps:
$ docker run -ti --rm -v /usr/bin/qemu-aarch64-static:/usr/bin/qemu-aarch64-static -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix apol/test bash

You may want to use Xephyr to test some of this, to stay on the safer side.

Similarly, you get to use gdb or any tooling you need that is available for the architecture.


You get to build your project against the platform you need by passing the extra required arguments; then most features discussed here apply.


Granted, it will run rather slowly, as it’s emulation, but given that the alternative is not being able to work at all, I’m happy.

Note that most of this applies to any architecture: replace aarch64 with any architecture supported by qemu (and your distro of choice). You can check /usr/bin/qemu-*-static.

December 16, 2017

Two months ago I took part in the KDE Edu sprint in Berlin. It was the first KDE sprint I attended (that's right, I have been a KDE contributor since 2010 and had never been to a sprint!), so I was quite excited about what I would find.

KDE Edu is the specific umbrella for KDE educational software. The project has a lot of programs, and it is the main suite of educational software in the free software world. Despite that, KDE Edu has received little attention on the organizational side. The sprints themselves are an example: the last one took place many years ago, the project website has some bugs, among other problems.

So this sprint was not only an opportunity for development work (which is what you would expect from this kind of meeting), but also a good moment for a lot of work on the organizational side of the project.

On that front, we discussed the rebranding of some of the software more related to university work than to “education” per se, such as Cantor or Labplot. There is a desire to create something like KDE Research/Science in order to put all this software, plus others such as Kile and KBibTex, under the same umbrella. A discussion on this topic is under way.

Another topic discussed was a new website, aimed more at teaching how to use KDE software in an educational context than at merely presenting a list of programs. I believe we need to implement this idea, if only to get our own entry on the KDE products page.

Next, the developers at the sprint agreed on a multi-operating-system policy for KDE Edu. KDE software can be built and distributed to users of different operating systems, not only Linux. During the sprint, some developers worked on installers for Windows and Mac OS, on porting applications to Android, and even on creating distribution-independent installers for any Linux distribution using flatpak.

Still on the organizational side, I created a rule that sends an e-mail to the KDE Edu mailing list for each new Differential Revision of the project's software on Phabricator. Sorry devs, our inboxes are full because of me. :)

On the development side, I focused on working hard on Cantor. First, I did some task triaging on the workboard: closing, opening, and adding more information to some of them. Then I reviewed some of the work done by Rishabh Gupta, my student during GSoC 2017. He ported the Lua and R backends to QProcess; they will be available soon.

After that I worked on porting the Python 3 backend to the Python/C API. This is a work in progress and I hope to finish it in time for the 18.04 release.

And of course, besides all that work, we had fun with German beers and food (and some American, Chinese, Arab, and Italian food as well). A nice touch was that I turned 31 on the first day of the sprint, so thank you KDE for coming to my party full of source code, good beers, and plates of pork. :)

Finally, it is always a pleasure to meet other gearheads like my Spanish friends Albert and Aleix, Timothée (the only other Mageia user I have ever met in person), my GSoC student Rishabh, my comrade Sandro, and new friends Sanjiban and David.

Thanks to KDE e.V. for providing the resources needed to make the sprint happen, and thanks to Endocode for hosting the event.

Here comes the last KStars release for 2017! KStars v2.8.9 is available now for Windows, macOS, and Linux.

Robert Lancaster worked on improving PHD2 support in Ekos. This includes retrieving the guide star image, drift errors, and RMS values, among other minor improvements and refactoring of the Ekos PHD2 codebase to support future extensions.

Furthermore, Robert added drift plot support to the Ekos Guide module, which provides a visual indication of the accuracy of the guiding.

The Ekos Guide module received further improvements to make it more straightforward for end users. The calibration button is now removed, and calibration is performed automatically whenever guiding starts. The user can clear the calibration at any time to restart the process.

Meridian flip support was improved with various fixes to post-meridian-flip operations, including autofocus. The Filter Manager received several fixes to improve filter switching during various phases of the capture process. Users can also control when to run the in-sequence focus check. By default, the check is executed after each frame, but it can now be configured to run only after several frames are captured.

A minor but quite useful addition is the Meridian Line. It can be turned on so that users get a visual indication of how close the mount is to executing a meridian flip procedure.

Enjoy the new release, and do not forget to report any bugs or suggestions over at

December 15, 2017

We are happy to announce the latest Kdenlive version, part of the KDE Applications 17.12 release, making it the last major release based on the current code base. This is a maintenance release focused on stability, while feature development goes into next year’s 18.04 version. Proxy clips were given some attention and should give you a better seeking experience as well as reduced memory usage for images. Other fixes include improvements to the timeline preview, a fix for a crash when using a Library clip, and smoother seeking on rewind playback.

We have been pushing the AppImage packages lately because the format allows us to put all required dependencies inside one file that can easily be downloaded and run on all Linux distros. Today, we can also announce the immediate availability of the Kdenlive 17.12 AppImage, downloadable here:

AppImage related fixes:

  • Fix audio distortion affecting the 17.08.3 AppImage
  • Include Breeze style

Vincent Pinon is also continuing support for the Windows version, and you can get Kdenlive 17.12 for Windows here:

We are also making available the first usable “preview” AppImage of the refactoring branch, which will receive all development focus from now on and will be released as 18.04. It is not ready for production but allows you to have a look at Kdenlive’s future. You may follow the development progress here.

Kdenlive 18.04 alpha 2 release:

Meet us:
The next Kdenlive Café is tonight at 21:00 (CET) on #kdenlive, so feel free to join us for some feedback!


  • Packagers should take note that libsamplerate is now a dependency due to recent changes in FFmpeg.
  • Ubuntu (and derivative) users are recommended to use the AppImage version until further notice.


Full list of changes

  • Remove useless audio bitrate on pcm proxy encoding. Commit.
  • Update proxy profiles. Commit.
  • Make sure playlist proxies have an even height. Commit.
  • Fix crash on playlists concurrent jobs using library clips. Commit. See bug #386616
  • Timeline preview fixes: Don’t invalidate on expand/collapse effect, invalidate on master clip edit. Commit.
  • Don’t restart clip if trying to play backwards from clip monitor end. Commit.
  • Use smaller size for image proxies. Commit. Fixes bug #353577
  • Fix playing backwards forwards one second. Commit. Fixes bug #375634
  • Fix extension in transcode file dialog. Commit.
  • Sort clip zones by position instead of name. Commit.
  • Set a proper desktop file name to fix an icon under Wayland. Commit.
  • FreeBSD does not have sys/asm.h — for what is this include needed on linux?. Commit.
  • Doc: fix option (qwindowtitle instead of caption). Commit.
  • Fix terminology: mimetype(s) -> MIME type(s). Commit.
  • Fix UI string: Control Center -> System Settings. Commit.
  • Const’ify code. Commit.
  • Fix import image sequence. Commit.

KDE Partition Manager 3.3 is now ready. It includes some improvements for the Btrfs, F2FS, and NTFS file systems. I even landed the first bits of support for the new LUKS2 on-disk format: KDE Partition Manager can now display LUKS2 labels. More LUKS2 work will follow in KPM 3.4. There were changes in how LVM devices are detected, so now the Calamares installer should be able to see LVM logical volumes. Once my pull request lands, Calamares should also support partitioning operations on LVM logical volumes (although Calamares would need more work before installation and booting from a root file system on LVM works; I tested Calamares with KPMcore 3.3 and it successfully installed rootfs in an LVM volume and successfully booted). The KPMcore library now only depends on Tier 1 Frameworks instead of Tier 3 (although we will later require Tier 2).

Most of the work is now done in the sfdisk branch. Currently, the only functional KDE Partition Manager backend uses libparted, but the sfdisk backend is now fully working (I would say RC quality). I would have merged it already but it requires util-linux 2.32, which is not yet released.

Yet another branch on top of sfdisk is the KAuth branch, which allows KPM to run as an unprivileged user and use Polkit when necessary to gain root rights. Everything except SMART support is working. To get SMART working too, we would have to port away from the (unmaintained) libatasmart to calling smartctl. Feel free to help! It should be a fairly easy task but somebody has to do the work. Other than that, you can already perform all partitioning operations using KAuth, with one caveat. Right now KPM calls the KAuth helper many times while performing partitioning operations. It can happen that the KAuth authorization expires in the meantime (KAuth remembers it for about 5 minutes) and KAuth will ask the user to enter the root password. If the user enters the correct password, the operation finishes. However, if authorization is not granted we may end up with a half-completed operation. And of course we don’t want to leave a partition half-moved; the data would almost surely be lost (a half-resized partition is probably okay…). I suppose we can fix this by refactoring the KPM operation runner so that it calls the KAuth helper just once with a list of all commands that have to be run. Unfortunately, this refactoring might be bigger than I would like, as significant changes would be necessary in the partition data copying code. Maybe a GSoC project then… Or are there any better ideas on how to prevent the KAuth authorization dialog from appearing in the middle of partitioning operations?

You can grab tarballs from the standard locations on the server.

December 14, 2017

Elisa is a music player designed to be simple and nice to use. It allows you to browse music by album, artist, or all tracks. You can build and play your own playlist. We aim to build a fluid interface that is easy to use.

Alexander has made yet another round of improvements to the UI and especially improved the playlist a lot by making it more fluid. It really feels much better now (at least to me).

I have added the possibility to load and save playlists. It is currently restricted to tracks already known to Elisa. More work is needed to support arbitrary audio files.

Current state of the interface.

The following things have been integrated in the Elisa git repository:

  • delete outdated todo list, by Alexander Stippich;
  • fix player being stuck when the same track is played in random mode, by Matthieu Gallien;
  • enhance content view navigation, by Alexander Stippich:
    • the backspace key is now also accepted for back navigation in e.g. the artist album view;
    • a mouse area is added to MediaBrowser to allow back navigation with the mouse back button;
    • focus is forced on the stackview when an artist or album is opened. Without this, back key navigation only works when the view is actively focused by the user, e.g. by a mouse click;
  • deduplicate code for the content view and adjust visual appearance, by Alexander Stippich;
  • merge MediaPlayList and PlayListControler to reduce the size of the code, by Matthieu Gallien;
  • make margins consistent and remove the horizontal separator in the playlist, by Alexander Stippich;
  • show a passive notification when a track fails to play, by Matthieu Gallien;
  • remove context menus, by Alexander Stippich;
  • overhaul RatingStar, by Alexander Stippich;
  • add the possibility to load and save a playlist in m3u format from two buttons at the bottom of the playlist, by Matthieu Gallien;
  • fix problems with dependencies and make some optional ones required. Require all tier 1 dependencies if they are also supported on Windows and Android, by Matthieu Gallien;
  • more improvements to the playlist, by Alexander Stippich:
    • some more improvements to the playlist; first of all, adds more animations and fixes T6295;
    • fixes some scaling issues and makes Show Current Track also select the item to highlight it better;
    • converts actions to use signals;
  • shorten the play now and replace play list button text, by Alexander Stippich;
  • add a readme for packagers following some feedback in #kde-devel, by Matthieu Gallien.


    Two months ago I started finalizing the existing Elementary icon theme for LibreOffice. It covers about 2,000 icons, and they are now available in LibreOffice 6.0 beta. In addition, all icons are available as SVG files, so they can be used and edited easily.

    Please download and test LibreOffice 6.0 beta and give feedback. You can switch the icon theme with Tools -> Options -> View -> Icon Style. We are talking about a lot of icons, and not all of them are perfect yet. Feedback is always welcome.

    Test LibreOffice 6.0 beta

    Merry Christmas and a shiny new year with LibreOffice 6.0.



    December 13, 2017


    In the previous article we gave an overview of the process for creating a custom aspect and showed how to create (most of) the front end functionality. In this article we shall continue building our custom aspect by implementing the corresponding backend types, registering the types and setting up communication from the frontend to the backend objects. This will get us most of the way there. The next article will wrap up by showing how to implement jobs to process our aspect's components.

    As a reminder of what we are dealing with, here's the architecture diagram from part 1:

    Creating the Backend

    One of the nice things about Qt 3D is that it is capable of very high throughput. This is achieved by way of using jobs executed on a threadpool in the backend. To be able to do this without introducing a tangled web of synchronisation points (which would limit the parallelism), we make a classic computer science trade-off and sacrifice memory for the benefit of speed. By having each aspect work on its own copy of the data, it can schedule jobs safe in the knowledge that nothing else will be trampling all over its data.

    This is not as costly as it sounds. The backend nodes are not derived from QObject. The base class for backend nodes is Qt3DCore::QBackendNode, which is a pretty lightweight class. Also, note that aspects only store the data that they specifically care about in the backend. For example, the animation aspect does not care about which Material component an Entity has, so no need to store any data from it. Conversely, the render aspect doesn't care about Animation clips or Animator components.

    In our little custom aspect, we only have one type of frontend component, FpsMonitor. Logically, we will only have a single corresponding backend type, which we will imaginatively call FpsMonitorBackend:

    [sourcecode lang="cpp" title="fpsmonitorbackend.h"]
    class FpsMonitorBackend : public Qt3DCore::QBackendNode
    {
    public:
        FpsMonitorBackend()
            : Qt3DCore::QBackendNode(Qt3DCore::QBackendNode::ReadWrite)
            , m_rollingMeanFrameCount(5)
        {}

    private:
        void initializeFromPeer(const Qt3DCore::QNodeCreatedChangeBasePtr &change) override
        {
            // TODO: Implement me!
        }

        int m_rollingMeanFrameCount;
    };
    [/sourcecode]

    The class declaration is very simple. We subclass Qt3DCore::QBackendNode as you would expect; add a data member to mirror the information from the frontend FpsMonitor component; and override the initializeFromPeer() virtual function. This function will be called just after Qt 3D creates an instance of our backend type. The argument allows us to get at the data sent from the corresponding frontend object as we will see shortly.
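    As an aside, the quantity the backend will eventually compute from this mirrored data is a rolling mean of frame times over the last rollingMeanFrameCount frames. A Qt-free sketch of that calculation (illustrative only; the actual jobs that drive it arrive in part 3) could be:

```cpp
#include <cassert>
#include <cstddef>
#include <deque>

// Keep the last N frame times and report the mean frames-per-second
// over that window. N plays the role of rollingMeanFrameCount.
class RollingFps {
public:
    explicit RollingFps(std::size_t frameCount) : m_frameCount(frameCount) {}

    void addFrameTime(double seconds) {
        m_frameTimes.push_back(seconds);
        if (m_frameTimes.size() > m_frameCount)
            m_frameTimes.pop_front();    // drop the oldest sample
    }

    double meanFps() const {
        if (m_frameTimes.empty())
            return 0.0;
        double total = 0.0;
        for (double t : m_frameTimes)
            total += t;
        return m_frameTimes.size() / total;  // frames per second
    }

private:
    std::size_t m_frameCount;
    std::deque<double> m_frameTimes;
};
```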

    Registering the Types

    We now have simple implementations of the frontend and backend components. The next step is to register these with the aspect so that it knows to instantiate the backend node whenever a frontend node is created. Similarly for destruction. We do this by way of an intermediary helper known as a node mapper.

    To create a node mapper, just subclass Qt3DCore::QBackendNodeMapper and override the virtuals to create, look up and destroy the backend objects on demand. The manner in which you create, store, look up and destroy the objects is entirely up to you as a developer. Qt 3D does not impose any particular management scheme upon you. The render aspect does some fairly fancy things with bucketed memory managers and aligning memory for SIMD types, but here we can do something much simpler.

    We will store pointers to the backend nodes in a QHash within the CustomAspect and index them by the node's Qt3DCore::QNodeId. The node id is used to uniquely identify a given node, even between the frontend and all available aspect backends. On Qt3DCore::QNode the id is available via the id() function, whereas for QBackendNode you access it via the peerId() function. For the two corresponding objects representing the component, the id() and peerId() functions return the same QNodeId value.

    Let's get to it and add some storage for the backend nodes to the CustomAspect along with some helper functions:

    [sourcecode lang="cpp" title="customaspect.h"]
    class CustomAspect : public Qt3DCore::QAbstractAspect
    {
        Q_OBJECT
    public:
        void addFpsMonitor(Qt3DCore::QNodeId id, FpsMonitorBackend *fpsMonitor)
        {
            m_fpsMonitors.insert(id, fpsMonitor);
        }

        FpsMonitorBackend *fpsMonitor(Qt3DCore::QNodeId id)
        {
            return m_fpsMonitors.value(id, nullptr);
        }

        FpsMonitorBackend *takeFpsMonitor(Qt3DCore::QNodeId id)
        {
            return m_fpsMonitors.take(id);
        }

    private:
        QHash<Qt3DCore::QNodeId, FpsMonitorBackend *> m_fpsMonitors;
    };
    [/sourcecode]

    Now we can implement a simple node mapper as:

    [sourcecode lang="cpp" title="fpsmonitorbackend.h"]
    class FpsMonitorMapper : public Qt3DCore::QBackendNodeMapper
    {
    public:
        explicit FpsMonitorMapper(CustomAspect *aspect);

        Qt3DCore::QBackendNode *create(const Qt3DCore::QNodeCreatedChangeBasePtr &change) const override
        {
            auto fpsMonitor = new FpsMonitorBackend;
            m_aspect->addFpsMonitor(change->subjectId(), fpsMonitor);
            return fpsMonitor;
        }

        Qt3DCore::QBackendNode *get(Qt3DCore::QNodeId id) const override
        {
            return m_aspect->fpsMonitor(id);
        }

        void destroy(Qt3DCore::QNodeId id) const override
        {
            auto fpsMonitor = m_aspect->takeFpsMonitor(id);
            delete fpsMonitor;
        }

    private:
        CustomAspect *m_aspect;
    };
    [/sourcecode]

    To finish this piece of the puzzle, we now need to tell the aspect how these types and the mapper relate to each other. We do this by calling the QAbstractAspect::registerBackendType() template function, passing in a shared pointer to the mapper that will create, find, and destroy the corresponding backend nodes. The template argument is the type of the frontend node for which this mapper should be called. A convenient place to do this is in the constructor of the CustomAspect. In our case it looks like this:

    [sourcecode lang="cpp" title="customaspect.cpp"]
    CustomAspect::CustomAspect(QObject *parent)
        : Qt3DCore::QAbstractAspect(parent)
    {
        // Register the mapper to handle creation, lookup, and destruction of backend nodes
        auto mapper = QSharedPointer<FpsMonitorMapper>::create(this);
        registerBackendType<FpsMonitor>(mapper);
    }
    [/sourcecode]

    And that's it! With that registration in place, any time an FpsMonitor component is added to the frontend object tree (the scene), the aspect will lookup the node mapper for that type of object. Here, it will find our registered FpsMonitorMapper object and it will call its create() function to create the backend node and manage its storage. A similar story holds for the destruction (technically, it's the removal from the scene) of the frontend node. The mapper's get() function is used internally to be able to call virtuals on the backend node at appropriate points in time (e.g. when properties notify that they have been changed).

    The Frontend-Backend Communications

    Now that we are able to create, access and destroy the backend node for any frontend node, let's see how we can let them talk to each other. There are 3 main times the frontend and backend nodes communicate with each other:

    1. Initialisation — When our backend node is first created we get an opportunity to initialise it with data sent from the frontend node.
    2. Frontend to Backend — Typically when properties on the frontend node get changed we want to send the new property value to the backend node so that it is operating on up to date information.
    3. Backend to Frontend — When our jobs process the data stored in the backend nodes, sometimes this will result in updated values that should be sent to the frontend node.

    Here we will cover the first two cases. The third case will be deferred until the next article when we introduce jobs.

    Backend Node Initialisation

    All communication between frontend and backend objects operates by sending sub-classed Qt3DCore::QSceneChanges. These are similar in nature and concept to QEvent, but the change arbiter that processes the changes has the opportunity to manipulate them in the case of conflicts from multiple aspects, re-order them by priority, or perform any other manipulations that may be needed in the future.

    For the purpose of initialising the backend node upon creation, we use a Qt3DCore::QNodeCreatedChange which is a templated type that we can use to wrap up our type-specific data. When Qt 3D wants to notify the backend about your frontend node's initial state, it calls the private virtual function QNode::createNodeCreationChange(). This function returns a node created change containing any information that we wish to access in the backend node. We have to do it by copying the data rather than just dereferencing a pointer to the frontend object because by the time the backend processes the request, the frontend object may have been deleted - i.e. a classic data race. For our simple component our implementation looks like this:

    [sourcecode lang="cpp" title="fpsmonitor.h"]
    struct FpsMonitorData
    {
        int rollingMeanFrameCount;
    };
    [/sourcecode]

    [sourcecode lang="cpp" title="fpsmonitor.cpp"]
    Qt3DCore::QNodeCreatedChangeBasePtr FpsMonitor::createNodeCreationChange() const
    {
        auto creationChange = Qt3DCore::QNodeCreatedChangePtr<FpsMonitorData>::create(this);
        auto &data = creationChange->data;
        data.rollingMeanFrameCount = m_rollingMeanFrameCount;
        return creationChange;
    }
    [/sourcecode]

    The change created by our frontend node is passed to the backend node (via the change arbiter) and gets processed by the initializeFromPeer() virtual function:

    [sourcecode lang="cpp" title="fpsmonitorbackend.cpp"]
    void FpsMonitorBackend::initializeFromPeer(const Qt3DCore::QNodeCreatedChangeBasePtr &change)
    {
        const auto typedChange = qSharedPointerCast<Qt3DCore::QNodeCreatedChange<FpsMonitorData>>(change);
        const auto &data = typedChange->data;
        m_rollingMeanFrameCount = data.rollingMeanFrameCount;
    }
    [/sourcecode]

    Frontend to Backend Communication

    At this point, the backend node mirrors the initial state of the frontend node. But what if the user changes a property on the frontend node? When that happens, our backend node will hold stale data.

    The good news is that this is easy to handle. The implementation of Qt3DCore::QNode takes care of the first half of the problem for us. Internally it listens to the Q_PROPERTY notification signals and when it sees that a property has changed, it creates a QPropertyUpdatedChange for us and dispatches it to the change arbiter which in turn delivers it to the backend node's sceneChangeEvent() function.

    So all we need to do as authors of the backend node is to override this function, extract the data from the change object and update our internal state. Often you will then want to mark the backend node as dirty in some way so that the aspect knows it needs to be processed next frame. Here though,
    we will just update the state to reflect the latest value from the frontend:

    [sourcecode lang="cpp" title="fpsmonitorbackend.cpp"]
    void FpsMonitorBackend::sceneChangeEvent(const Qt3DCore::QSceneChangePtr &e)
    {
        if (e->type() == Qt3DCore::PropertyUpdated) {
            const auto change = qSharedPointerCast<Qt3DCore::QPropertyUpdatedChange>(e);
            if (change->propertyName() == QByteArrayLiteral("rollingMeanFrameCount")) {
                const auto newValue = change->value().toInt();
                if (newValue != m_rollingMeanFrameCount) {
                    m_rollingMeanFrameCount = newValue;
                    // TODO: Update fps calculations
                }
            }
        }
    }
    [/sourcecode]

    If you don't want to use the built in automatic property change dispatch of Qt3DCore::QNode then you can disable it by wrapping the property notification signal emission in a call to QNode::blockNotifications(). This works in exactly the same manner as QObject::blockSignals() except that it only blocks sending the notifications to the backend node, not the signal itself. This means that other connections or property bindings that rely upon your signals will still work.

    If you block the default notifications in this way, then you need to send your own to ensure that the backend node has up to date information. Feel free to subclass any class in the Qt3DCore::QSceneChange hierarchy and bend it to your needs. A common approach is to subclass Qt3DCore::QStaticPropertyUpdatedChangeBase, which handles the property name, and add a strongly typed member for the property value payload in the subclass. The advantage of this over the built-in mechanism is that it avoids using QVariant, which suffers a little performance-wise in highly threaded contexts. Usually though, the frontend properties don't change too frequently and the default is fine.
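    To illustrate the shape of such a subclass without pulling in Qt, here is a plain C++ model (the class names are illustrative stand-ins; the real base class is Qt3DCore::QStaticPropertyUpdatedChangeBase): the base carries the property name, and the subclass adds a strongly typed payload so no variant boxing is needed on the hot path.

```cpp
#include <cassert>
#include <string>

// Qt-free model of the typed-change pattern described above.
// The base class holds only the property name.
struct StaticPropertyUpdatedChangeBase {
    explicit StaticPropertyUpdatedChangeBase(std::string name)
        : propertyName(std::move(name)) {}
    virtual ~StaticPropertyUpdatedChangeBase() = default;
    std::string propertyName;
};

// The subclass adds a strongly typed member for the payload,
// avoiding a QVariant-style tagged union entirely.
struct RollingMeanFrameCountChange : StaticPropertyUpdatedChangeBase {
    explicit RollingMeanFrameCountChange(int frames)
        : StaticPropertyUpdatedChangeBase("rollingMeanFrameCount")
        , value(frames) {}
    int value;
};
```

    The receiving side can then cast to the concrete change type and read `value` directly, with no type dispatch on the payload.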


    In this article we have shown how to implement most of the backend node; how to register the node mapper with the aspect to create, lookup and destroy backend nodes; how to initialise the backend node from the frontend node in a safe way and also how to keep its data in sync with the frontend.

    In the next article we will finally make our custom aspect actually do some real (if simple) work, and learn how to get the backend node to send updates to the frontend node (the mean fps value). We will ensure that the heavy lifting parts get executed in the context of the Qt 3D threadpool so that you get an idea of how it can scale. Until next time.

    The post Writing a Custom Qt 3D Aspect – part 2 appeared first on KDAB.

    December 12, 2017

    This is a little blog post from India. I’ve been invited to give not one, but two talks at Swatantra 2017, the triennial conference organised by ICFOSS in Thiruvananthapuram (also known by its shorter old name, Trivandrum), Kerala.

    I’ll have the pleasure to give a talk about GCompris, and another one about Synfig Studio. It’s been a long time since I last talked about the latter, but since Konstantin Dmitriev and the Morevna team were not available, I’ll do my best to represent Synfig there.

    (little teaser animation of the event banner, done with Synfig studio)

    I’ll also meet some friends from Krita, David Revoy and Raghavendra Kamath, so even if there is no talk dedicated to Krita, it should be well represented.

    The event will take place on the 20th and 21st of December, and my talks will be on the second day. Until then, I’m spending a week visiting and enjoying the south of India.

    You can find more info on the official website of the event. Many thanks again to the nice organization team at ICFOSS for the invitation!

    December 11, 2017

    Two months ago I attended the KDE Edu Sprint 2017 in Berlin. It was my first KDE sprint (really, I have been contributing code to KDE software since 2010 and had never been to a sprint!), so I was really excited about the event.

    KDE Edu is the umbrella for KDE's educational software. There are a lot of applications under it, and it is the main educational software suite in the free software world. Despite this, KDE Edu has received little attention on the organizational side; for instance, the previous KDE Edu sprint took place several years ago, our website has some problems, and more.

    Therefore, this sprint was an opportunity not only for developers to work on software development, but also to work on the organizational side.

    On the organizational side, we discussed rebranding some software that is more related to university work than to “education” itself, like Cantor and LabPlot. There was a wish to create something like KDE Research/Science in order to put them and other software like Kile and KBibTeX under the same umbrella. There is an ongoing discussion about this theme.

    Another topic was the discussion about a new website, oriented more towards teaching how to use KDE software in an educational context than towards presenting a set of software. In fact, I think we need to do this and strengthen the “KDE Edu” brand in order to have a specific icon+link on the KDE products page.

    Next, the developers at the sprint agreed on a multi-operating-system policy for KDE Edu. KDE software can be built and distributed to users of several operating systems, not only Linux. During the sprint some developers worked on bringing installers to Windows and Mac OS, porting applications to Android, and creating distribution-independent Linux installers using Flatpak.

    Besides these discussions, I set up a rule to send an e-mail to the KDE Edu mailing list for each new Differential Revision of KDE Edu software in Phabricator. Sorry devs, our mailboxes are full of e-mails because of me.

    Now on the development side, my focus was working hard on Cantor. First, I did some task triage on our workboard: closing, opening, and adding more information to some tasks. Secondly, I reviewed some of the work done by Rishabh Gupta, my student during GSoC 2017. He ported the Lua and R backends to QProcess, and this will be available soon.

    After that, I worked on porting the Python 3 backend to the Python/C API. This work is in progress, and I expect to finish it in time for the 18.04 release.

    Of course, besides all this work we had fun with some beers and German food (and some American, Chinese, Arab, and Italian food as well)! I was happy because my 31st birthday fell on the first day of the sprint, so thank you KDE for coming to my birthday party, full of code, good beers, and pork dishes.

    To finish, it is always a pleasure to meet gearheads like my Spanish friends Albert and Aleix, Timothée (the only other Mageia user I have ever met in person), my GSoC student Rishabh, my irmão brasileiro Sandro, and my new friends Sanjiban and David.

    Thank you KDE e.V. for providing resources for the sprint, and thank you Endocode for hosting it.

    A software developer's life can get lonely. Developers spend a lot of time immersed in their code, and often don't get to see the direct impact of their work on the users. That is why events like Randa Meetings are necessary. They help build bonds within the community and show the developers that their work is appreciated.

    Randa Meetings are crucial to the KDE community for developer interaction, brainstorming, and bringing great new things to KDE. --- Scarlett Clark

    Randa Meetings are a yearly collection of KDE Community contributor sprints that take place in Randa, Switzerland. With origins dating back to a Plasma meeting in 2009, Randa is one of the most important developer-related events in the community.

    View from Randa. Photo by Gilles Caulier, CC BY 2.0

    This year, Randa Meetings were held from September 10 to September 16, and were centered around a meaningful theme - accessibility.

    Accessibility in Free and Open Source Software

    Accessibility is incredibly important, yet so often neglected in the process of software development, and implemented almost as an afterthought. Users with disabilities and impairments often find themselves excluded from using the software, and prevented from participating in activities that the rest of us take for granted. Essentially, the software makes them feel as if they don't matter.

    To remedy this, many governments enforce laws requiring software to be accessible. There are also accessibility standards, guidelines, and checklists to help developers make their software accessible to all.

    FOSS communities and projects have the potential to play a major role in driving software accessibility efforts because of their open nature. People with disabilities can communicate directly with developers, report issues, and request features that they need. Proprietary products are rarely this open to feedback, not to mention the fact they are often very expensive.

    Accessibility Settings module in Plasma 5

    Assistive technology covers a wide range of products and solutions: from screen magnifiers, screen readers, and text prediction methods to text-to-speech interfaces, speech recognition software, and simplified computer interfaces. There are also advanced solutions like 3D-printed prosthetic limbs, and those that allow controlling the mouse by moving the head or just the eyes.

    The best thing about all this technology is that it benefits everyone. Although people usually associate the word "accessibility" with hearing or visually impaired people, assistive technology can make life easier for many other groups: dyslexic or illiterate people, cognitively disabled people, the elderly, anyone with limited mobility or just bad eyesight.

    The analogy is clear with wheelchair-accessible spaces, which are useful to parents with baby strollers, people with bicycles and shopping carts, and delivery drivers. Likewise, improving keyboard navigation, image and color contrast, and text-to-speech tools results in satisfaction among non-disabled users. Making software accessible means making software better.

    Making KDE Software More Accessible

    Generally speaking, there are two ways to make software accessible: either by building special accessibility-focused tools from scratch, or by implementing accessibility features and improvements into existing applications. The latter is what the Randa Meetings 2017 were all about.

    digiKam with David Edmundson from Plasma in the background.
    Photo by Gilles Caulier, CC BY 2.0

    The developers created a useful KWin plugin that can simulate different types of color blindness. This will help in all future accessibility efforts, as it helps developers understand what their color schemes will look like to visually impaired users.

    KMyMoney was made more accessible via improvements to keyboard navigation. New keyboard shortcuts were added, and others simplified to make them easier to use.

    Randa Meetings 2017 were made special by Manuel, a visitor from Italy who stayed at the Randa house during the sprint. Manuel is a deaf user, and he took the time to explain the problems that hearing-impaired users encounter with KDE software, and software in general. His feedback was extremely valuable in the context of the sprint's theme, and helped developers come up with accessibility-oriented solutions.

    Meeting in Randa with other participants makes you aware of deficiencies, possibilities, and needs. For a newcomer, like myself, it was also a chance to meet some community members, see what kind of people build KDE software, and take a look behind the scenes. --- Lukasz Wojnilowicz

    Apart from fixing individual applications, a lot of work was done on the Plasma desktop environment itself. Accessibility-related improvements include the ability to navigate the Plasma panel using voice feedback and the keyboard. The following video demonstrates this feature in action:

    KRunner was made completely accessible, and this change is visible in Plasma 5.11. The Orca Screen Reader works well with KRunner, and can read the entered query, as well as let the user know which element of the interface is currently focused.

    There was also a round of discussions on best practices for accessibility in KDE software. When testing software, the developers should try to use it only with their keyboard, and then only with the mouse. Too much customization is not a good idea, so it should be avoided, especially when it comes to colors, fonts, and focus handling.

    Another good practice is to test the application with a screen reader. This experience should highlight potential issues for disabled users. In the end, it all comes down to empathy - being able to put yourself in the user's shoes, and the willingness to make your software available to as many people as possible, without excluding anyone.

    More Plans and Achievements from the Randa Meetings 2017

    Of course, the developers worked on so much more than just accessibility. The KDE PIM team discussed the results of their KMail User Survey, and tackled the most pressing issue reported by the users - broken and unreliable search. They also ported the entire Kontact codebase away from the obsolete KDateTime component.

    Lukasz Wojnilowicz from KMyMoney and Volker Krause
    from KDE PIM. Photo by Gilles Caulier, CC BY 2.0

    KMyMoney saw some important changes. All plugin KCM modules were ported to KF5. The backup feature is, well, back up and available to use, and database loading was improved so as to prevent the incompatibility between new and old KMyMoney files.

    After successfully participating in Google Summer of Code, the digiKam team gathered in Randa to further polish the new code contributed by students. They also worked on the media server functionality, which allows users to share their photo collections across different devices (smartphones, tablets, TVs...) using DLNA.

    Scarlett Clark (KDE CI), Emmanuel LePage (Ring), Simon Frei
    (digiKam) and David Edmundson discussing Plasma Accessibility.
    Photo by Gilles Caulier, CC BY 2.0

    Marble Maps now has a new splash screen, and the entire interface of the Bookmarks dialog is responsive to touch. There are plans to implement a better text-to-speech module for navigation.

    The Public Transport Plasma applet has been completely rewritten as a Kirigami application. The applet's UI is now much more flexible and easier to adapt to mobile devices.

    The developers of Kube worked on resolving an annoying issue with live queries which slows down email synchronization, especially when there are thousands of emails. They also discussed the possibility of implementing an HTML composer to edit emails in Kube, and made plans for GPG implementation. In collaboration with developers from other KDE projects, they explored the options for making Kube cross-platform, and looked for the best way to build Kube on macOS and Windows. Finally, they implemented a visualization in the configuration dialog which indicates when the user input is invalid in configuration fields.

    Last but not least, Kdenlive received color correction improvements, and the developers worked on bringing back the popular TypeWriter effect. They also fixed the import of image sequences, worked on porting Kdenlive to Windows and macOS, and removed the warning about missing DVD tools that used to appear when starting Kdenlive for the first time.

    Looking Forward to the Next Randa Meetings

    With another successful Randa Meetings behind us, we can start planning for the next session.

    If you like the work our developers are doing, you can directly support it by donating to KDE. You can also contribute to KDE and make an impact on the users by joining our mentored project called Season of KDE.

    Who knows, maybe in the future you too will attend the Randa Meetings!

    Could you tell us something about yourself?

    I’m Rytelier, a digital artist. I’ve had an interest in creating art for a few years, I mainly want to visualize my original world.

    Do you paint professionally, as a hobby artist, or both?

    Currently I do only personal work, but I will look for some freelance job in the future.

    What genre(s) do you work in?

    I work mainly in science fiction – I’m creating an original world. I like to try various things, from creatures to landscapes and architecture. There are so many things to design in this world.

    Whose work inspires you most — who are your role models as an artist?

    It’s hard to point out certain artists, there are so many. Mainly I get inspired by fantasy art from the internet, I explore various websites to find interesting art.

    I recommend looking at my favourites gallery, there are many works that inspire me.

    How and when did you get to try digital painting for the first time?

    It was years ago, I’ve got interested in the subject after I saw other people’s work. It was obviously confusing, how to place strokes, how to mix colors, and I had to get used to not looking at my hand when doing something on the tablet.

    What makes you choose digital over traditional painting?

    I like the freedom and flexibility that digital art gives. I can create a variety of textures, find colors more easily and fix mistakes.

    How did you find out about Krita?

    I saw a news item about Krita on some website related to digital art and decided to try it.

    What was your first impression?

    I liked how many interesting brushes there were. As time went on I discovered more useful features. It was surprising to find out that some functions aren’t available in Photoshop.

    What do you love about Krita?

    It has many useful functions and is very convenient to use. I love the brush editor – it’s clean and simple to understand, but powerful. The dynamics curve adjustment is useful; the size-dependent brush with a sunken curve allows me to paint fur and grass more easily.

    Also different functional brush engines. Color smudge is nice for more traditional work, like mixing wet paint. Shape brush is like a lasso, but better because it shows the shape instantly, without having to use the fill tool. Filter brush is nice too, I mainly use it as sharpen and customizable burn/dodge. There are also ways to color line art quickly. For a free program that functionality is amazing — it would be amazing even for a paid program! I like this software much more than Photoshop.

    What do you think needs improvement in Krita? Is there anything that really annoys you?

    Performance is the thing I most want to see improved for painting and filters. I’m happy to see multi-threaded brushes in the 4.0 version. I would also like a more dynamic preview when applying filters like the gradient map, where it updates instantly when moving the color on the color wheel. It annoys me that large brush files (brushes with big textures) don’t load; I have to optimize my textures by reducing their size so the brush can load.

    What sets Krita apart from the other tools that you use?

    The level of convenience is very high compared to other programs. The number of “this should be designed in a better way, it annoys me” things is the smallest of all the programs I use, and where something is broken, most of those issues are slated to be fixed in 4.0.

    If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

    It’s hard to pick a favourite. I think this one, because I challenged myself in this picture, and it shows my original characters, which I like a lot.

    What techniques and brushes did you use in it?

    I use brushes that I’ve created myself from resources found on the internet and pictures I scanned myself. I like to use slightly different ways of painting in every artwork, as I’m still looking for the techniques that suit me best. Generally I start from a sketch, then paint splatter all over the canvas, then add blurry forms, then add details. Starting from soft edges allows me to find good colors more easily.

    Where can people see more of your work?

    I will open galleries on other sites in the future.

    Anything else you’d like to share?

    I hope that Krita will get more exposure and more people, including professionals, will use it and will donate to its development team instead of buying expensive digital art programs. Open source software is having a great time, more and more tools are being created that replace these expensive ones in various categories.




    Planet KDE is made from the blogs of KDE's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet KDE with RSS, FOAF or OPML.