The KWinFT project with its two major open source offerings KWinFT and Wrapland was announced one month ago. It made quite some headlines at the time, but afterwards I decided to keep a low profile and quietly push the project forward on the technical side.
Now I am pleased to announce a beta for the next stable release, 5.19, which is due in two weeks. The highlights of this release are a complete redesign of Wrapland's server library and two more projects joining KWinFT.
One of the goals of KWinFT is to facilitate large, disruptive changes to the internal structure and technological base of its open source offerings. As mentioned one month ago in the project announcement, these changes include pushing back the usage of Qt in lower-level libraries and instead making use of modern C++ to its full potential.
We achieved the first milestone on this route in an impressively short timeframe: the redesign of Wrapland's server library, which improves the encapsulation of external libwayland types and provides template-enhanced meta-classes for easy extension with new functionality in the future.
This redesign work was organized on a separate branch and merged this weekend into master. In the end that included over 200 commits and 40,000 changed lines. Here I have to thank in particular Adrien Faveraux, who joined KWinFT shortly after its announcement and contributed several class refactors. Our combined work enabled us to deliver this major redesign with the upcoming release already.
Aside from the redesign I used this opportunity to add clang-based tools for static code analysis: clang-format and clang-tidy. Together with our autotests, which run with and without sanitizers, Wrapland's CI pipelines now provide efficient means for handling contributions via GitLab merge requests and checking back on the result after merge. You can see a full pipeline with linters, static code analysis, project build and autotests passing in the article picture above, or check it out directly in the project here.
With this release Disman and KDisplay join the KWinFT project. Disman is a fork of libkscreen, and KDisplay one of KScreen. KScreen is the main UI for display configuration in a KDE Plasma workspace, and I was its main contributor and maintainer in recent years.
Disman can be installed in parallel with libkscreen. For KDisplay, on the other hand, it is recommended to remove KScreen when KDisplay is installed, so that the two daemons do not both try to meddle with the display configuration at the same time. KDisplay can make use of plugins for KWinFT, KWin and wlroots, so you could also use KDisplay as a general replacement.
Forking libkscreen and KScreen to Disman and KDisplay was not a move I wanted to make, because I would have liked to keep maintaining them inside KDE. But my efforts to integrate KWinFT with them were not welcomed by some members of the Plasma team. Form your own opinion by reading the discussion in the patch review.
I am not happy about this development, but I decided to make the best out of a bad situation. After forking and rebranding I directly created CI pipelines for both projects, which now also run linters, project builds and autotests on all merge requests and branch pushes. And I defined some more courageous goals for the two projects now that I have more control.
One would think that after years of being the only person putting real work into KScreen I would have had this kind of freedom inside KDE as well, but that is not how KDE operates.
Does it need to be this way? What are arguments for and against it? That is a discussion for another article in the future.
There is an overall technical vision I am following with KWinFT: building a modern C++ framework for Wayland compositor creation. A framework that is built up from independent yet well interacting small libraries.
Take a look at this task for an overview. The first of these libraries that we have now put work into was Wrapland. I plan for the next one to be the backend library that provides interfacing capabilities with the kernel or a host window system the compositor runs on, which in most cases means talking to the Direct Rendering Manager.
The work in Wrapland is not finished though. After the basic representation of Wayland objects has been improved we can push further by layering the server library like this task describes. The end goal here is to get rid of the Qt dependency and make it an optional facade only.
You can try out KWinFT on Manjaro. At the moment you can install KWinFT and its dependencies on Manjaro's unstable images, but it is planned to make this possible in the stable images as well with the upcoming 5.19 stable release.
I explicitly recommend the Manjaro distribution nowadays to any user, from Linux newcomers to experts. I have Manjaro running on several devices and I am very pleased with Manjaro's technology, its development pace and its community.
If you are an advanced user you can also use Arch Linux directly and install a KWinFT AUR package that builds KWinFT and its dependencies directly from Git. I hope a package of KWinFT's stable release will also soon be available from Arch's official repositories.
If you want to contribute to one of the KWinFT projects take a look at the open tasks and come join us in our Gitter channel. I am very happy that already several people joined the project who provide QA feedback and patches. There are also opportunities to work on DevOps, documentation and translations.
I am hoping KWinFT will be a welcoming place for everyone interested in high-quality open source graphics technology. A place with low entry barriers and many opportunities to learn and grow as an engineer.
Hello, I am a 36-year-old freelance illustrator and concept artist based in Brittany, France. I have worked in the industry since 2007 and am now a freelancer.
I paint professionally but it is important to me to also make a lot of personal work in my spare time. It allows me to try new techniques and processes.
I love science fiction and fantasy and anything related to alternative worlds, legends and tales… I particularly love ghost stories and old castle ruins… I also love nature, and I spend a lot of time in the forest observing the wildlife.
I have a lot of influences and artists I admire: Thomas Scholes for his deep and colorful architectures, Piotr Jabłoński for his subtle shape language and textures with mysterious moods, Richard Wright for his mastery of composition and color, Andrey Surnov for his very original way of rendering contrasts, lights and surfaces, Karl Sisson for his refreshing way of creating absolutely original concepts, and lots and lots of others. There is also Dofresh, for his mastery of composition and colors and the way he creates organic shapes in his mech designs; there is often a subtle social dimension to his themes as well.
I actually painted on a computer for the first time in the 90s, with a program called Canvas on my very old Atari ST… I made absolutely ugly artworks back then.
I actually do both but for professional work it is much easier to go digital, because you can do retakes and deliver your work much more easily.
I wanted to try something different, and a friend of mine showed me Krita in 2017.
I loved how intuitive Krita is, and I got the hang of the program very fast. Moreover, my Wacom tablet worked perfectly with it, which was not the case with other applications at the time.
I love how fast I can paint with Krita. Also, the brush customisation is very nice and complete.
I want to emphasize the fact that Krita is much more stable in 2020 than in 2017, the crashes are very rare now, even with 8k files. (I love to work with very high definition files in order to print my work in the future.)
The main thing I would like to see improved is the fluidity of the brushes. It is actually decent, but can be improved.
It is a complete and reliable solution for digital painting, and on top of that it is very light.
My favorite work so far is probably “Refugees”. I learned a lot doing this one: I tried to paint a crowd with minimal details, to find the essence of what makes a crowd look like a crowd.
I made my own brush with the fantastic brush engine, one that mimics a kind of knife brush, and laid down the basis for the crowd with very loose brush strokes.
I tested a lot of brushes, and Wet_Bristle_Rough was used to refine details of the crowd. I like its oil paint feeling.
You can see my work mainly on Artstation, Twitter and Instagram:
I would like to encourage young artists not to give up. The road to learning how to draw is long; it is a long-term project. If you train regularly, even just 10 minutes of drawing every day, I guarantee you will be able to draw in the long term.
Kdenlive’s Titler Tool rewrite began with GSoC 2019, and now I am happy to announce that we have an MLT producer which can play .qml files with animations! The producer is now being integrated into Kdenlive.
These are the QML Animation types that are supported and tested so far:
Along with grouping (ParallelAnimation and SequentialAnimation)
This means that, theoretically, you can now create animated qml clips that can be added to your video!
(All examples are run at 25 fps, so you may see a slight jerkiness in some animations)
Carl Schwan is also helping us make a UI for the titler: https://invent.kde.org/carlschwan/libvectorgraphicsquick – right now, this allows some basic editing capabilities for a .qml file.
We are now neck-deep into testing and integrating the producer with Kdenlive. The animations seem to be playing fine in the Clip Monitor, but work still remains with adding it to the Timeline and rendering it. Also, there are some known issues when playing certain animations, for instance, the last frame of the PathAnimation isn’t rendered.
If you want to test out the QML MLT producer, you could do it with melt, MLT’s command-line player. You can find the source code for the qml producer here: https://github.com/akhilam512/mlt/tree/qmlproducer
Note you will need to use CMake to build mlt in order to have the qml producer built. Once that is done, run:
The rendering logic can be found here: https://invent.kde.org/akhilkgangadharan/QmlRenderer/-/tree/multithreaded-mlt
Some major changes were made to the rendering logic. We now use a separate render thread. This took a bit of time, but it is essential: rendering on the main thread led to a crash when a qml clip was dragged to the Timeline in Kdenlive, due to conflicts between the QOpenGLContext makeCurrent() calls of the Timeline and the Renderer. See this commit for details.
More information on the Kolab Now blog!
“Kube is a modern communication and collaboration client built with QtQuick on top of a high performance, low resource usage core. It provides online and offline access to all your mail, contacts, calendars, notes, todo’s and more. With a strong focus on usability, the team works with designers and UX experts from the ground up, to build a product that is not only visually appealing but also a joy to use.” For more info, head over to: kube-project.com
It has been a long time since I have written about Elisa. In the meantime, I have been busy working on Elisa and also some other personal side projects. I plan to write about them later.
One area where Elisa is not fulfilling my needs is support for UPnP/DLNA. I am actively working on that, but it is a lot of work, and my plan is to release a preview of it in the next release to get feedback.
UPnP/DLNA is a way to connect different devices on your home network to share, play and control multimedia content.
One typical use case is to have content stored on a NAS and use your smartphone to play it on smart speakers.
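Device discovery in UPnP works via SSDP, a simple HTTP-over-UDP multicast protocol. As a rough illustration of what happens on the wire (this is a standalone sketch, not code from the library discussed below; the search target string for media servers is the standard UPnP one):

```python
import socket

SSDP_ADDR = "239.255.255.250"  # standard SSDP multicast address
SSDP_PORT = 1900

def build_msearch(search_target="ssdp:all", mx=2):
    """Build an SSDP M-SEARCH request for discovering UPnP devices."""
    return (
        "M-SEARCH * HTTP/1.1\r\n"
        f"HOST: {SSDP_ADDR}:{SSDP_PORT}\r\n"
        'MAN: "ssdp:discover"\r\n'
        f"MX: {mx}\r\n"
        f"ST: {search_target}\r\n"
        "\r\n"
    )

def discover(timeout=3):
    """Send the request and collect raw responses from devices on the LAN."""
    msg = build_msearch("urn:schemas-upnp-org:device:MediaServer:1")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(msg.encode("ascii"), (SSDP_ADDR, SSDP_PORT))
    responses = []
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            responses.append((addr, data.decode("ascii", "replace")))
    except socket.timeout:
        pass
    return responses
```

Each response contains a LOCATION header pointing to the device description document, which is where the SOAP control endpoints come in.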
This specification is important to me for the following reasons:
Some years ago I started a library written with Qt and leveraging the excellent KDSOAP library from KDAB. It implements both parts of UPnP:
I plan to continue working on this library to tidy it up, improve the documentation and write automated tests.
The main reason I started writing my own code is that the existing libraries I tested would require writing C code to generate XML trees using malloc and free. I am not feeling confident doing that. The other reason is that the amount of code I had to write is minimal due to using an existing SOAP library. At the moment, the main challenge is writing automated tests to stabilize the current code.
In Elisa, there is code using that library that integrates discovery of music servers. One can browse them and enqueue individual tracks for local play in Elisa.
This integration work has required quite a bit of re-engineering of existing code. I have used this opportunity to apply the lessons learned during the last years of working on it. Quite clearly, this was sorely needed.
This led me to understand that re-engineering as soon as you understand what needs to be improved is much better than waiting and doing it later. Quite clearly, the features I have been working on during the past months were hard to integrate because the code was not ready for them. The situation will soon be much better.
The next step is to finish a first usable state. I hope to get this merged in the next stable release of Elisa. This first release will be done to get feedback on the stability of the code. I have seen existing UPnP media servers return broken answers, and I still need to find workarounds for those bugs. I suspect I will need to add a few of them.
I want to move the UPnP library through review by KDE developers to be able to make a proper release. Until that review, I will have to make beta releases. Without those beta releases, it will be hard for people to package this code and to distribute it to users.
This should also enable Elisa to interact with remote devices. That should help to integrate the support for more device types like ChromeCast for example.
Feedback on this work is very much welcome.
After years of using Phabricator, KDE has officially begun the migration to GitLab! So far we are using it for patch review, and developer task tracking will be migrated soon. We are still using Bugzilla for bugs and feature requests as migrating those functions to GitLab is a significant project in and of itself! Already the KDE community is enjoying GitLab’s smoother workflow; why not take advantage of this and submit a merge request?
But that’s not all: big changes for Plasma 5.20 have started to land too. It promises to be a very significant release! Check it out:
KDE Software is made by people just like you, often on their free time! If you know a KDE developer, send them a kind note. Developers like to put on a logical face but they need love and care too, especially during trying times like these.
More generally, have a look at https://community.kde.org/Get_Involved to discover ways to help be part of a project that really matters. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don’t have to already be a programmer, either. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!
Thanks to the hard work of Sharaf Zaman, Krita is now available in the Google Play Store for Android tablets and Chromebooks (not for Android phones).
This beta, based on Krita 4.2.9, is the full desktop version of Krita, so it doesn’t have a special touch user interface. But it’s there, and you can play with it.
Unlike in the Windows and Steam stores, we don't ask for money for Krita in the Play Store, since it's the only way people can install Krita on those devices, but you can buy a supporter badge from within Krita to support development.
For KDE Itinerary it’s crucial we know the correct timezone for each element in the timeline, precisely enough to also handle complex situations like daylight saving time changes during an international flight. How can we reliably determine the timezone though, e.g. given a geographic coordinate, offline and on a resource-constrained mobile device?
There are more than 400 timezones defined in the IANA timezone database, which is also what’s behind QTimeZone. The geographic shapes associated with them are available here, extracted from OSM. This is about 100 MB of high-precision vector data.
That’s a bit excessive for a desktop application, and out of the question for mobile or embedded use. Worse, doing hit detection on complex polygons of that size is also quite expensive at runtime.
As IANA timezones largely follow country or regional borders, determining those based on a geographic coordinate is a very similar problem. To illustrate just how complex this can get, see the map below of the village of Baarle, which is stuck in a quantum superposition of being in Belgium and the Netherlands at the same time.
So we want a more space-efficient encoding for this data, optimized for fast lookup, while trading in some of the spatial precision. What are acceptable trade-offs here depends largely on the application, but I’d guess few need sub-meter precision, or anything below the usual consumer GPS accuracy for that matter. For KDE Itinerary even two or three orders of magnitudes less are sufficient.
You could of course query an online service for this, but that comes at a high privacy cost, considering it requires sharing fairly high-precision location data, so we want something that works offline.
One possible way to implement this was developed by a former colleague of mine, and is actually surprisingly simple and effective. The idea is to store the timezone map as a color image, with a different color for each timezone. For lookup you just map the coordinate to the corresponding pixel, and translate the color back to the timezone with a simple lookup table.
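The lookup side of that idea can be sketched in a few lines. The colors, timezone names, and one-pixel-per-degree resolution below are toy assumptions for illustration, not the actual KDE Itinerary data:

```python
# Sketch of the image-based timezone lookup: the map is a 2D array of
# color values, and a lookup table translates colors back to timezones.
# WIDTH/HEIGHT and COLOR_TO_TZ are made-up illustration values.

WIDTH, HEIGHT = 360, 180  # one pixel per degree

COLOR_TO_TZ = {
    0xFF0000: "Europe/Berlin",
    0x00FF00: "Europe/Paris",
    0x0000FF: "Asia/Bangkok",
}

def coord_to_pixel(lat, lon):
    """Map a geographic coordinate to the corresponding pixel position."""
    x = int((lon + 180.0) / 360.0 * WIDTH)
    y = int((90.0 - lat) / 180.0 * HEIGHT)
    return min(x, WIDTH - 1), min(y, HEIGHT - 1)

def timezone_at(image, lat, lon):
    """Resolve the timezone by reading a single pixel's color."""
    x, y = coord_to_pixel(lat, lon)
    return COLOR_TO_TZ.get(image[y][x])
```

The per-pixel spatial precision obviously scales with the image resolution; the real map discussed below is far larger than this toy example.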
The key to making this efficient is the selection of the image encoding format. Ideal is a format that allows independent access to individual scanlines and that uses a run-length encoding for each scanline, such as TGA. With that you can access an individual pixel with constant memory cost, independent of the image size.
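The constant-memory pixel access can be sketched as follows, using a simplified list of (count, value) runs rather than the exact TGA packet format:

```python
def pixel_at(scanline_rle, x):
    """Fetch pixel x from a run-length encoded scanline without
    decoding the whole line: walk the (count, value) runs until
    the run containing x is reached.  Only the current run is held
    in memory, so the cost is constant regardless of line width."""
    pos = 0
    for count, value in scanline_rle:
        if x < pos + count:
            return value
        pos += count
    raise IndexError("x beyond scanline width")
```

Since a typical scanline of a timezone map contains only a handful of continuous regions, this walk terminates after very few iterations even at huge image resolutions.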
The use of an image format has the advantage that precision/cost trade-offs are pretty obvious, it’s very easy to create using the above mentioned timezone shapefiles and QGIS, and debugging can be done visually with an image viewer.
This approach has been in use for the offline preparation of KDE Itinerary’s extractor engine knowledge base so far. Not so much for its runtime efficiency though (as we are using a gigantic 27942 x 13968 map), but for its ease of use.
The efficiency of this comes from the run-length encoding of scanlines, which is very good at leveraging one-dimensional spatial proximity of the encoded features, i.e. a typical scanline only contains a few continuous regions, independent of the resolution. It does not exploit the same property in the second dimension at all, however. Image formats that do, such as PNG, achieve even better compression, but at the cost of losing constant-memory decoding.
After the good results with using Z-order curves for the work on public transport line metadata recently, I tried to apply this to the timezone lookup problem as well. Z-order curves provide a one-dimensional representation of a multi-dimensional space while preserving spatial proximity. Applying a run-length encoding to this is similarly efficient as with the image approach, but applies to both dimensions.
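A Z-order (Morton) index is computed by interleaving the bits of the two coordinates, so cells that are close in 2D tend to be close on the curve. A minimal sketch using the classic bit-twiddling approach for 16-bit coordinates:

```python
def interleave(v):
    """Spread the lower 16 bits of v so there is a zero bit between
    each original bit (the classic 'part1by1' bit trick)."""
    v &= 0xFFFF
    v = (v | (v << 8)) & 0x00FF00FF
    v = (v | (v << 4)) & 0x0F0F0F0F
    v = (v | (v << 2)) & 0x33333333
    v = (v | (v << 1)) & 0x55555555
    return v

def morton(x, y):
    """Map a 2D grid cell to its position on the Z-order curve:
    x contributes the even bits, y the odd bits."""
    return interleave(x) | (interleave(y) << 1)
```

Sorting grid cells by their Morton index and then run-length encoding consecutive cells with the same timezone is what gives the two-dimensional compression described above.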
This is easier to imagine as an efficient representation of a quadtree, where a tile is further subdivided if it covers multiple timezones, up to a certain depth limit. The depth limit defines the precision you can achieve.
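The subdivision step can be sketched like this, with a toy classifier function standing in for the real timezone geometry intersection (in the real generator, tiles at the depth limit that still cover multiple timezones go through the conflict handling mentioned below; here the sketch simply picks one):

```python
def subdivide(classify, x, y, size, max_depth, out):
    """Recursively split a square tile until it is covered by a single
    class or the depth limit is reached, emitting (x, y, size, class)
    leaf tiles.  `classify` returns the set of classes touching a tile;
    at the depth limit a still-ambiguous tile gets an arbitrary class."""
    classes = classify(x, y, size)
    if len(classes) == 1 or max_depth == 0:
        out.append((x, y, size, min(classes)))
        return
    half = size // 2
    for dx in (0, half):
        for dy in (0, half):
            subdivide(classify, x + dx, y + dy, half, max_depth - 1, out)
```

Homogeneous regions thus stay as large tiles, while only the borders between timezones get refined down to the depth limit, which is exactly where the precision budget is spent.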
As generating such a data structure requires computing intersections with the timezone geometry, this is best done within a tool made for such things, such as QGIS’ Python scripting. The generation script can be found here.
Compared to the image approach this brings in a bit of extra complexity, despite the actual z-order curve part being fairly straightforward:
Generation takes a lot of time. For the parameters picked for KDE Itinerary right now it’s 1-2 hours on 8 cores, so a parallel implementation is pretty much mandatory; for comparison, the image approach hardly ever needs more than a few seconds.
There are various parameters you can tweak (which is actually good), but unlike with the image approach their exact impact can be hard to predict. That is particularly annoying when experimental evaluation needs hours of computing time.
To be fair, part of the generation cost comes from the somewhat more advanced conflict handling stage for quadtree entries at the maximum depth that still cover multiple timezones.
If you consider that timezones are mostly hour-aligned (with a few notable exceptions), many of them have to be practically equivalent when looking at a reasonably small time window, say the next year, and in many use cases having an equivalent timezone is actually good enough. That is, for showing the right time in 2020 it often doesn’t matter which of two such equivalent timezones you are using.
In the end I think this is worth it though. With about 300 kB of static data we can correctly resolve the timezone from a given coordinate on 99.2% of the covered area, and for 99.6% we at least get a correct equivalent timezone. The maximum error distance is set to about 300 m (it varies with the latitude). The covered area isn’t the entire earth though, as we cut off the polar regions below 65°S and above 80°N. This gives us about 20% more “space” on the z-order curve to increase precision in areas more relevant for the vast majority of users.
Precision turned out good enough to entirely replace the timezone information we previously carried for all airports and train stations explicitly, which allows us to recoup about 74kB of disk space. This wasn’t entirely expected, as there are some tricky locations very close to a border in there (e.g. Geneva (GVA)).
One aspect motivating this work was its applicability to other discrete features distributed with a strong spatial proximity, such as countries or regions. However, halfway through this I realized that an IANA timezone actually implies a country. There is only one exception to this: since the 2019a update, northern Vietnam is assigned the Asia/Bangkok timezone otherwise only used in Thailand. That would be fixable though by assigning the northern Vietnam area a separate internal timezone identifier, and resolving it differently depending on whether we are looking for country or timezone information.
After adding a slightly different strategy for handling ambiguous areas, we got a geo coordinate to country lookup almost for free as well, without even needing an extra index.
It’s worth keeping in mind though that all of this is of course an approximation, in places like Baarle where the country changes several times within our error margin this will not get you far.
As this might be useful for other applications as well, is there interest in having such location-related API in KDE Frameworks? Thinking about lookup, conversion and localization of ISO 3166-1/2 codes, IANA timezones, etc.
How often have you scanned a letter, a certificate or whatever and looked for the right way to call $UTILITY to convert it to a PDF that can be shared via internet?
For this very common use case I could not find a tool that makes it really easy on the Linux desktop. Given my mission to help make the Linux desktop more common in the small business world (do you know Kraft?), I spent some time starting this little project.
Please welcome PDF Quirk, the desktop app to easily create PDFs out of images from storage or directly from the scanner!
It is just what this screenshot shows: a one-page app to pick images either from the hard disk or from a scanner, if one is configured, and save them right away to a multi-page PDF file. The only option is monochrome or color scanning. No further scan chi-chi, just nice PDFs within seconds.
Of course I did not want to spend too much time and reinvent the wheel. PDF Quirk uses ImageMagick's convert utility and the command line scan client scanimage of the SANE project in the background. Both are well-known standard commands on Linux and serve this purpose well.
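The two underlying steps look roughly like this; the command lines below are illustrative standard invocations of scanimage and convert, not necessarily what PDF Quirk itself runs:

```python
import subprocess

def scan_command(color=True):
    """Command line for acquiring one page via SANE's scanimage,
    writing PNM data to stdout."""
    mode = "Color" if color else "Gray"
    return ["scanimage", "--mode", mode, "--format", "pnm"]

def convert_command(images, pdf):
    """Command line for combining a list of image files into one
    multi-page PDF with ImageMagick's convert."""
    return ["convert", *images, pdf]

def make_pdf(images, pdf):
    """Run the conversion step (requires ImageMagick to be installed)."""
    subprocess.run(convert_command(images, pdf), check=True)
```

Wrapping these two commands in a small GUI is essentially what turns the "scan, then hunt for the right $UTILITY incantation" routine into a few clicks.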
Contributions and comments are very welcome. It is a fun little project!