February 06, 2020

Great things are happening in 2020 including the release of Qt 6 and a whole new decade of innovations to come.

We are thrilled to announce the Rock Star speakers at Qt World Summit 2020, who will share their vision of software development and how to create successful UI/UX in 2020 and beyond:

  • Lars Knoll, Chief Maintainer of Qt Project
  • Herb Sutter, Leading C++ authority and chair of the ISO C++ standards committee
  • Joe Nuxoll, Design Director of Digital Products & Experience, Polaris
  • Euan Cameron, CTO of Esri
  • Matthew Hungerford, UX Team Lead at Chargepoint
  • Patrick Lorton, CTO & SVP at Schrödinger
  • Kris Dickie, R&D Team Lead, Clarius Mobile

Stay tuned for more great speakers! 

Super early bird tickets are available now: only $470 for 2 conference days, or $695 for 3 days including the training day and 2 conference days. That is a fraction of what our typical training courses cost, so be sure to check out all the training courses listed further down below.

GET SUPER EARLY BIRD TICKETS TO #QtWS20 
Super early bird sales end on Tuesday, February 18, 2020.

Discover the future of software development, unifying 2D and 3D in Qt Quick, the new Qt graphics stack, performance optimization, developing for WebAssembly, microcontrollers and much much more. 

Advance your technical skills

Pre-Conference Training is a full day class offered on Tuesday, May 12, and includes lectures, hands-on training, and practical in-depth software development tactics by certified Qt training services. Classes range from introductory level to more advanced levels: 

  • User Experience Design for Embedded Devices 
  • Hands-on Qt on Raspberry Pi 
  • QML Programming — Fundamentals and Beyond  
  • Advanced QML 
  • Multi-threading 
  • Debugging & Profiling for Linux 
  • Introduction to Qt 3D
  • Microcontroller Programming with Qt for MCUs 
  • Get Started with Qt Design Studio  
  • Qt GUI Testing with Squish 

Find the full description of the technical training on the Pre-Conference Training page.

Relentless improvements have been made in Qt and we thank our partners for the value-added services, features and functionality that help our ecosystem thrive. This year's technical training will be provided by ICS, KDAB, froglogic, and Qt Professional Services. 

We invite designers, developers, technology managers and industry experts to learn how to advance the developer experience, user experience, and customer experience of their software technology projects.   

Need to convince your boss of why you need to attend? Use the justification letter to see what you and the team will take away from the conference.


This month sees a plethora of bugfix releases. While we all might like new features, we know that people love it when mistakes and crashes get tidied up and fixed too!

New releases

KDevelop 5.5

The big release this month comes from KDevelop 5.5, the IDE that makes writing programs in C++, Python and PHP easier. We have fixed a bunch of bugs and tidied up a lot of KDevelop’s code. In C++, lambda init captures are now supported, configurable predefined checkset selections are in for Clazy and Clang-Tidy, and look-ahead completion now tries harder. PHP gets PHP 7.4’s typed properties, plus support for “array of type” and class constant visibility. In Python, support has been added for Python 3.8.

KDevelop is available from your Linux distro or as an AppImage.


KDevelop

Zanshin 0.5.71

Zanshin is our TODO-list tracker that integrates with Kontact. Recent releases of Kontact had broken Zanshin, leaving users not knowing which tasks to carry out next. But this month a release was made to get it working again. We can now all easily find the tasks that we need to get on with!


Zanshin

Latte-dock 0.9.8

https://psifidotos.blogspot.com/2020/01/latte-bug-fix-release-v0981.html


Latte Dock

RKWard 0.7.1

https://rkward.kde.org/News.html


RKWard

Okteta 0.26.3

Okteta, KDE’s hex editor, had a bugfix release that also includes a new feature: a CRC-64 algorithm for the checksum tool. Okteta’s code has also been updated for newer Qt and KDE Frameworks versions.


Okteta

KMyMoney 5.0.8

KMyMoney, the KDE app that helps you manage your finances, included several bugfixes and an enhancement that added support for check forms with a split protocol.


KMyMoney

Incoming

Keysmith is a two-factor code generator for Plasma Mobile and Plasma Desktop that uses oath-toolkit. The user interface is written in Kirigami, making it slick on any size of screen. Expect releases soon.
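
Keysmith’s actual implementation sits on top of oath-toolkit, but as an illustration, the TOTP computation such a generator performs is the standard RFC 6238 one. Here is a minimal Python sketch (the function name and defaults are mine, not Keysmith’s):

import base64
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, step: int = 30) -> str:
    # HMAC-SHA1 over the current 30-second counter, per RFC 4226/6238.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // step)
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)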

Chocolatey


Chocolatey

Chocolatey is a package manager for the Windows operating system. Chocolatey deals with installing and updating all the software on the system and can make life smoother for Windows users.

KDE has an account on Chocolatey, and you can install Digikam, Krita, KDevelop and Kate through it. Installation is very easy: all you have to do is run choco install kdevelop and Chocolatey will do the rest. Chocolatey maintainers have high standards, so you can be sure the packages are tested and secure.

If you are a KDE app maintainer, do consider getting your app added, and if you’re a large Windows rollout manager, do consider this for your installs.

Website Updates


KMyMoney
Our web team continues to update our online presence and has recently refreshed the KMyMoney website.

Releases 19.12.2

Some of our projects release on their own timescale and some get released en masse. The 19.12.2 bundle of projects was released today and should be available through app stores and distros soon. See the 19.12.2 releases page for details. This bundle was previously called KDE Applications, but has been de-branded to become a release service, to avoid confusion with all the other applications made by KDE and because it consists of dozens of different products rather than a single whole.

Some of the fixes included in this release are:

  • The Elisa music player now handles files that do not contain any metadata
  • Attachments saved in the Draft folder no longer disappear when reopening a message for editing in KMail
  • A timing issue that could cause the Okular document viewer to stop rendering has been fixed
  • The Umbrello UML designer now comes with an improved Java import

  • 19.12.2 release notes
  • Package download wiki page
  • 19.12.2 source info page
  • 19.12.2 full changelog

Stores

KDE software is available on many platforms and software app stores.

Snapcraft Flathub Microsoft Store


February 05, 2020

Three git tidbits that improve my quality-of-life in working on Calamares.

  • (Configuration) No pager. I prefer the output to go to the terminal and stay there. If I need a pager, I’ll pipe to one. This leaves the output of, say, git branch visible afterwards.
    git config --global core.pager cat
    
  • (Configuration) No diff prefixes. This way, git diff produces something that can be applied with patch -p0 (rather than -p1), and when doing patches for packaging that’s convenient. It also makes selecting the filename in the terminal easier, no need to avoid the a/ and b/ prefixes.
    git config --global diff.noprefix true
    
  • (Functionality) Worktrees. These make a cheap clone of a given tag or commit in a subdirectory. Useful for exploring diffs, doing regressions, etc. Also easy to clean up. I use it in Calamares ci/txcheck.sh to quickly rebuild translations between a moving tag (translation) and master so that I can see if I’ve committed any string changes since the tag. That, in turn, informs my release decisions since I would like to avoid releases with untranslated strings. (Source for that translation-checking script)
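    For illustration, a worktree checkout and later cleanup look something like this (the directory and tag names are made up, not the exact ones the script uses):
    git worktree add ../calamares-translation translation
    git worktree remove ../calamares-translation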

The two configuration changes get applied in all my accounts and machines immediately; worktrees get a workout as needed (at least once per Calamares release, although that’s automated).


KUserFeedback is a framework for collecting user feedback for applications via telemetry and surveys.

The library comes with an accompanying control and result UI tool.

https://download.kde.org/stable/kuserfeedback/

Signed by Jonathan Riddell <jr@jriddell.org> 2D1D5B0588357787DE9EE225EC94D18F7F05997E

KUserFeedback as it will be used in Plasma 5.18 LTS

 


I’m not as deep into Python these days as I was twenty years ago. Twenty years ago, Python was a small language with clear documentation and clear, consistent ways of doing things. One thing was hard, though, and that was packaging your Python application so people on different systems could use it.

These days, Python is big, has lots of computer-sciency features that I don’t grok, and packaging Python is still hard. And the documentation is, for the most part, not very useful. I didn’t care a lot about that, though, since we only use Python as Krita’s extension language together with PyQt. And we had a nice and working setup for that.

Well, nice… It’s a bit hacky, especially for Windows. Especially since we need to build Krita with mingw, because msvc has problems compiling the Vc library. And Python has problems getting built with mingw-gcc on Windows.

We have three related parts: Python; sip, which creates Python libraries out of special hand-written header-like files; and PyQt, which binds Python and Qt.

So, we start with a system-wide install of Python. This is used to configure Qt and build sip and PyQt. Then we download an embeddable Python of exactly the same version as the system-wide install, and install that with Krita’s other dependencies.

But last week the KDE Binary Factory Windows images got updated to Python 3.8.1 (from 3.6.0), so we had to update the references to Python in Krita’s build system. There are a couple of places where the exact version is hard-coded, not just in the build system, but also in the code that sets up Python for actual usage when Krita runs.

And at that point, our house of cards fell apart. Python 3.8 has a new, improved, more consistent way of finding DLL libraries on Windows. At least, that was the idea. Following the discussion in Python’s bug tracker, a Python developer declared victory.

The actual error we had was:

c:\dev\krita>python
Python 3.8.1 (tags/v3.8.1:1b293b6, Dec 18 2019, 23:11:46) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import PyQt5
>>> import PyQt5.Qt
>>> import PyQt5.QtCore
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: DLL load failed while importing QtCore: The specified module could not be found.

This seemed related: the PyQt pyd files link to the Qt DLLs. The pyd files are in lib/krita-python-plugins, the Qt DLLs in the bin folder, and even though Dependency Walker shows that the QtCore pyd can find QtCore.dll, Python cannot load it anymore.

So, on to the changelog. This says:

DLL dependencies for extension modules and DLLs loaded with ctypes on Windows are now resolved more securely. Only the system paths, the directory containing the DLL or PYD file, and directories added with add_dll_directory() are searched for load-time dependencies. Specifically, PATH and the current working directory are no longer used, and modifications to these will no longer have any effect on normal DLL resolution. If your application relies on these mechanisms, you should check for add_dll_directory() and if it exists, use it to add your DLLs directory while loading your library. Note that Windows 7 users will need to ensure that Windows Update KB2533623 has been installed (this is also verified by the installer). (Contributed by Steve Dower in bpo-36085.)

Well, that sounds relevant, so let’s check the documentation and see if there are examples of using this add_dll_directory…

os.add_dll_directory(path)
Add a path to the DLL search path.

This search path is used when resolving dependencies for imported extension modules (the module itself is resolved through sys.path), and also by ctypes.

Remove the directory by calling close() on the returned object or using it in a with statement.

See the Microsoft documentation for more information about how DLLs are loaded.

Availability: Windows.

New in version 3.8: Previous versions of CPython would resolve DLLs using the default behavior for the current process. This led to inconsistencies, such as only sometimes searching PATH or the current working directory, and OS functions such as AddDllDirectory having no effect.

In 3.8, the two primary ways DLLs are loaded now explicitly override the process-wide behavior to ensure consistency. See the porting notes for information on updating libraries.

Well, no… So let’s google for it. That didn’t find a lot of useful links. The most useful was a patch for VTK that uses this method from the C++ wrapper. The examples in that tweet would’ve been a good addition to the documentation.

So I started experimenting myself. And failed. Our code that detects whether PyQt5 is available isn’t very complicated, but no matter what I did (even copying the Qt5 DLLs to the PyQt5 folder, or using add_dll_directory in various ways), I would always get the same error.
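
For reference, this is roughly the shape of the workaround the changelog suggests. It is only a minimal sketch with a hypothetical directory layout, not Krita’s actual setup code:

import os
import sys

# On Python 3.8+ on Windows, PATH is no longer searched for load-time
# DLL dependencies; directories must be registered explicitly.
if sys.platform == "win32" and hasattr(os, "add_dll_directory"):
    # Hypothetical location of the Qt DLLs, next to the interpreter;
    # Krita's real layout differs.
    os.add_dll_directory(os.path.join(os.path.dirname(sys.executable), "bin"))

import PyQt5.QtCore  # the import that kept failing for us regardless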

And now I’m stuck. Downgrading to Python 3.6 makes everything work again, another hint that this DLL-finding change is the problem, but that’s not the way forward, of course.

For now, the beta of Krita 4.2.9 is delayed until we find a solution.

 


Jupyter project files

In the recent release of Cantor – KDE Frontend to mathematical applications – the support for Jupyter notebook format was announced. To cite from Cantor’s release announcement:

Jupyter is a very popular open-source web-based application that provides an interactive environment for different programming languages. The interactive documents are organized in “notebooks”. This application is widely used in different scientific and educational areas, and there are a lot of shared notebooks publicly available on the internet; as an example, see this collection.

For Cantor, which is very similar in spirit to Jupyter, we decided to add the ability to read and save Jupyter’s notebook format in order to benefit from the large amount of content available for Jupyter. The implementation required for this was mainly done by Nikita Sirgienko as part of his Google Summer of Code 2019 project. His series of blog posts contains many examples as well as implementation details that will be omitted here.

Thanks to this work, in the coming release of LabPlot we’ll be able to open Jupyter project files:


Jupyter Python Notebook

The example Jupyter notebook shown here was taken from the BMC collection providing lecture notes and code around scientific computing for biomechanics and motor control.

Contrary to Cantor, which is able to both open and save .ipynb files (Jupyter’s notebooks), LabPlot only allows opening such projects. After such a file is loaded, its content is shown in a CAS worksheet in LabPlot. The modified CAS worksheet is then saved in the native project file together with other native objects like spreadsheets, worksheets, etc.

Cantor project files

In the past it was already possible to create new CAS worksheets in LabPlot, using Cantor’s machinery internally; see this blog post for a couple of examples. But it was not possible to open Cantor’s project files directly (*.cws file extension). We have corrected this, and the handling of Cantor project files is now similar to the handling of Jupyter projects described above.

The screenshot below shows an Octave project from Cantor’s collection of example projects for different computer algebra systems:


Cantor Octave Project


February 04, 2020

Long time no see, Choqok users!

First of all Choqok has a new and shiny website. Kudos to Carl Schwan for taking care of the theme!

Version 1.7.0 was meant to be released more than a year ago, but I only released it today.

The main reason for the delay (apart from lack of time) is that I wanted 1.7.0 to be bulletproof (spoiler: it’s not).

I wanted Choqok 1.7.0 to have full Mastodon support, proper media attachments and a lot more.

Let’s try to start somewhere. With this version I want to close a Choqok era and prepare us for the next one. Stay tuned!

Changes for this release:

  • Port to QtNetworkAuth and drop qoauth dependency
  • Allow disabling accounts
  • Honour the default font #372291
  • Make the sign footer consistent between all microplugins
  • The unread post count in the application title now sums unread posts across all accounts
  • Twitter: update char limit to 280
  • Twitter: support extended tweets #370260
  • Twitter: fix list browsing #382392
  • Twitter: fix followers list
  • Twitter: show client source even for private messages
  • Twitter: show the user’s real name when no description is set
  • GNU Social: do not rely on qvitter to get the post URL
  • GNU Social: hide linkback statuses
  • GNU Social: show the user’s real name when no description is set
  • Pump.io: escape the description
  • Pump.io: show the user’s real name when no description is set
  • Drop yFrog support from ImagePreview plugin
  • Plugin compatibility break: MicroBlog::profileUrl returns a QUrl instead
  • Plugin compatibility break: MicroBlog::postUrl returns a QUrl instead

Thanks to (in random order) Andrea Scarpino, Nicolas Fella, Luca Beltrame, Luigi Toscano, Pino Toscano, Heiko Becker, Andreas Sturmlechner and Yuri Chornoivan for their contributions to keep Choqok up.

And here you can find a longer list of bugs fixed in this release.

Download Choqok 1.7

You can download Choqok 1.7 source code package from here. For Kubuntu users I think Adilson will update his PPA.

Support Choqok

You can always support Choqok development via reporting bugs, translating it, promoting it, helping in code and donating money.


Supervisory control and data acquisition (SCADA) systems have been around since the 1950s, far longer than most other types of computer applications. Their rock-solid performance has been responsible for streamlining any industry that needs precise and consistent controls: building automation, energy management, part machining, printing and packaging, robotic assembly, ship building, water treatment, woodworking, and many more. However, this long legacy can also carry a hidden drawback: the user interfaces of many SCADA devices are a flashback that looks more appropriate in Windows for Workgroups than in the modern age.

This situation is ripe for change. Now that everyone carries a superior user interface in their pocket at all times, even the non-designers responsible for running the system expect their SCADA human-machine interfaces (HMIs) to have a certain level of polish and sophistication. Having implemented attractive SCADA HMIs for our customers, we’ve discovered that Qt is the right tool to build the modern SCADA system – here’s why.

Close to the Metal

SCADA interfaces to equipment that often has hard real-time requirements. In these cases, responding slightly late may be just as bad as not responding at all. You want the switch that flips the electrical grid to happen exactly when you tell it to – not after a brownout or an overload. With Qt’s C/C++ backbone, developers are able to get maximum performance from their SCADA applications. You can create code that talks directly to the hardware – no virtual environments, garbage collection, or unanticipated events happening in between.

The building blocks of any SCADA system are programmable logic controllers (PLCs), industrial computers that have a large number of I/O ports that are used to scan gauges or control actuators. The typical PLC implementation uses direct memory mapping to access its connected hardware, meaning you must be able to read and write to specific memory addresses. This is easy for C/C++ (and thus Qt), but not typically available in other languages.

Deployable Everywhere

Important to any modern SCADA system is flexibility – the HMI needs to be accessible from the embedded system, remote desktops, and roving mobile devices – something that is easily achievable with Qt.

On the embedded side, you need a platform that can run on PLC hardware. While not every PLC is capable of running a graphical interface directly on its hardware, those that can are very likely to run a real-time operating system like VxWorks, QNX, WinRT, INTEGRITY, and RTLinux. Qt supports these operating systems, making SCADA systems as simple to support as any other embedded system. SCADA HMIs are also found on the computers monitoring the systems in the back office, computers using standard desktop operating systems such as Windows, Linux, or macOS. And due to the increased capability and need for mobility, having SCADA HMIs running on phones or tablets is increasingly a needed product feature to support mobile trouble-shooters, supervisors, and headless machines. Of course SCADA apps on mobile devices must run on iOS, Android, and/or Windows 10, which is simple with Qt’s cross-platform support.

Thankfully, Qt is a very broadly used and actively supported platform that runs on all these operating systems and is easily adapted to others. Perhaps most importantly though, all of these platforms are supported from the same code base, allowing developers to build components that can run in all three environments – embedded, desktop, and mobile.

Modern HMI

To make a modern SCADA system, you need to incorporate modern user interface elements; the arcane keyboard commands, 2D sprites, and 8-bit graphics of yesteryear aren’t enough. All of the now-expected user-interface paradigms, controls, and behaviours – like drag and drop; spinners, sliders, and splitters; touch, pinch, and zoom; tables and popups; and much more – are part of Qt. That makes it easy to design really attractive HMIs with controls and interactions that users will easily understand. Customizing the appearance of the HMI, whether for customer-specific branding or for white-labeled products, is a nice detail that lets your products be tailored for each customer engagement. That’s relatively straightforward with Qt’s widget styling and easily modified Qt Quick screens. These features let developers make global style changes to the application in a few highly localized classes that percolate throughout the HMI.

Many SCADA interfaces require three-dimensional models of the plant floor, building, or equipment being automated. That’s achievable with Qt 3D, an entire set of Qt classes that lets you build 3D objects and can take advantage of GPU hardware-accelerated graphics. Developers are able to directly import any necessary 3D models using assimp, the Open Asset Import Library. Another option is to use QOpenGLWindow, which allows developers to use OpenGL code more directly – a great option if there’s already OpenGL code that does what you want.

Video is used in SCADA to check remote locations and equipment, which allows a supervisor to monitor situations that may be dangerous or impractical for a person to always be physically present. Whether it’s making sure the cooling system for a nuclear power plant is operating properly, verifying a robot’s painting job for parts on an assembly line, or detecting a quality inspection machine that accepts incorrect products – remote cameras are a necessary part of many SCADA systems. Qt has APIs for embedding video streams in individual windows, letting a single HMI provide oversight of the entire operation instead of requiring dedicated video monitors.

Remote Observation and Control

Video isn’t the only important remote feature of a SCADA system. Even more important is the ability to remotely run the HMI. In these situations, the hardware supplies the data that is visualized on a remote tablet or desktop to allow operators from across the floor – or across the country – to keep tabs on their equipment. While that’s critical for PLC systems that have no displays, requiring people physically present at every machine is extremely inefficient. Any modern factory expects their SCADA equipment to have remote views and operation.

Qt has a number of tools that are very handy for remoting. Qt’s VNC support provides a remote desktop using the standardized VNC protocol, perhaps the simplest solution. Newer additions to the Qt family include WebGL streaming for transporting 3D imagery and WebAssembly for speedy browser-run code, which allow even more customized remoting solutions. And for developers who want to link their back-end PLC to a unique front-end application, QRemoteObjects allows you to seamlessly share logical state information between the two machines.

Stable, Safe, and Secure

Another critical attribute of SCADA systems is their longevity. SCADA equipment may be deployed in the field for many years, even decades. It cannot afford to rely on tech fads or moving targets. Qt has been around a long time and is built on reliable, proven technology. The Qt Company offers Long Term Support (LTS) on select Qt versions, ensuring that those platforms will be stable, supported and maintained. Companies using Qt LTS can decide on what patches or relevant updates they wish to incorporate, keeping products relevant for years to come.

Depending on their industry, SCADA systems may fall under ISO or IEC certification requirements. To assist with that, the Qt Company also provides the Qt Safe Renderer. A safety-critical system that uses the Qt Safe Renderer to implement its graphical warnings can be certified under ISO 26262 for automotive, EN 50128 for railway systems, IEC 62304 for medical equipment, or IEC 61508 for generic safety systems.

As SCADA is inherently dependent on remote control, security is an obvious necessity. Because Qt incorporates the latest SSL and TLS implementations, data exchange can be encrypted with the latest, safest, libraries. That enables SCADA systems with Qt to transfer data, HMI images, or control information securely.

Qt for SCADA

Here I’ve listed many of the reasons that Qt makes an excellent tool for those building SCADA systems, especially those who want to modernize their equipment’s HMI to reflect a more modern aesthetic. Hopefully readers who are building the next generation of SCADA systems will bring us HMIs that will make iPhone users take notice.

About KDAB

If you like this blog and want to read similar articles, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

The post Making Industrial Applications Match iPhone Expectations appeared first on KDAB.


We are happy to announce that Qt 3D Studio 2.6 is now available via the online and offline installers. For detailed information about Qt 3D Studio, visit the online documentation page.


February 03, 2020

FOSDEM has come and gone for 2020, so it’s time to look back at another huge event (it was a birthday event, although I didn’t notice it that much). Like most years, I was non-stop busy with either the booth or talking to people, so no photographs.

Friday

Pre-FOSDEM, the FreeBSD community gets together for a Devsummit, and we talked about X11 and graphics and KDE and Wayland and .. well, that’s the stuff I was paying attention to, anyway.

I encountered some old friends and met some new ones, Lara (touch screen support) and Kamila (kernel bits) and Raichoo (wayland), and there was coffee and pastry as befits Brussels.

If there’s a main takeaway from this day for me, it’s that KDE on Wayland on FreeBSD is not close yet, but we’ll be working towards it for the next six months and coordinating with Gnome and the rest of the desktop stack to make that happen. Raichoo will be leading the Wayland bits. (Over two years ago I wrote a bit about Weston already!)

In the evening I defected and met up with Bhushan and the Plasma Mobile and UBPorts and PostmarketOS people for dinner. I don’t know mobile, so this was a learning experience.

Saturday

Um, I think I was at the booth. There were people. I was hoarse by the end of the day, and then GitLab went and bought us some beer and it turns out restaurants close in Brussels at ten and so fries happened.

Sunday

More booth time, but also “typical FOSDEM”: I went looking for Jonathan (not Riddell) and Jonathan came looking for me, and we must have passed each other several times but not actually spotted each other.

On the way, though, I met Phil from Manjaro and Erik from ArcoLinux, both of whom are “customers” of Calamares. Excellent customers who send good bug reports, even, and it was great to finally meet people that I work with every day. We went to talk to KiwiTCMS about Calamares testing and how to better handle the workload – and once we found Alex at the booth we had a really enlightening talk with him. We walked away with a much better idea of what we need to put together on that front.

So that fills me with more plans for Calamares as well.

Balusankar from GitLab, who also coordinates the Malayalam translation for Calamares, was there; I hadn’t managed to meet him at conf.kde.in. So more and more people are becoming real people to me, and not just IRC nicknames.

Somewhere in between I gave a quick talk on the state of KDE on FreeBSD; more entertainment than serious info, I thought, but there were sharks in the audience and I appreciate that.

Monday

Train back home, laden with chocolate and goodies from Open Source friends, and with just one new T-shirt (the FreeBSD devsummit one; also thanks to Lara for pointing out teeturtle for deliciously cute shirts).



February 02, 2020

KDevelop 5.5 released

We are happy to announce the availability of KDevelop 5.5 today bringing half a year of work mainly on stability, performance, and future maintainability.

KDevelop 5.5.0 in action

New features have not been added. The existing ones have received small improvements:

Improved C++ language support

  • Fix the missing header guard warning being always present for a standalone header. (commit)
  • Don't crash when signatures don't match in AdaptSignatureAssistant. (commit)
  • Clazy: add configurable predefined checkset selections. (commit)
  • Clang-tidy: add configurable predefined checkset selections. (commit)
  • Don't get confused when encountering parse errors in default args. (commit. See bug #369546)
  • Fix ClangUtils::getDefaultArguments when encountering macros. (commit. fixes bug #369546)
  • Skip clang-provided override items from code completion. (commit)
  • Unbreak move-into-source for non-class functions. (commit)
  • Lambda init captures are visited starting with clang 9.0.0. (commit)
  • Try a bit harder to find types for look-ahead completion. (commit)

Improved PHP language support

  • Fix uses of function call parameters after closures. (commit)
  • Add support for PHP 7.4's typed properties. (commit. code review D26254)
  • Support importing functions and constants from other namespaces. (commit. fixes bug #408609. code review D25956)
  • Fix rename of a variable. (commit. fixes bug #317879. code review D25587)
  • Add support for "array of type". (commit. code review D24921)
  • Add support for class constant visibility. (commit)
  • I18n: update message to new default. (commit)

Improved Python language support

  • Add support for Python 3.8.

Other Changes

  • Welcome page: remove background in active window when plugin is disabled. (commit)
  • No longer install modeltest.h, not used externally and deprecated. (commit)
  • Fix "invalid project name" hint not always showing. (commit)
  • Use default scheme option of KColorSchemeManager if available. (commit)
  • Read the global color scheme name from its file. (commit)
  • Fix qmljs comment parsing. (commit)
  • Fix the comment formatting for the Doxygen variants. (commit)
  • Qmakebuilder: remove unused kcfg files. (commit)
  • Fix reformat for long strings. (commit)
  • Introduce shell-embedded message area, to avoid dialog windows. (commit)
  • Clazy, clang-tidy: share code via new private KDevCompileAnalyzerCommon. (commit)
  • Make tar archives reproducible by setting Pax headers. (commit. code review D25494)
  • Kdevplatform: remove About data feature. (commit)
  • Support for rebasing. (commit)
  • Add a setting to disable the close buttons on tabs. (commit)
  • CMake: Show project name in showConfigureErrorMessage. (commit)
  • TemplatePreview: Enable word-wrap for messagebox and Lines Policy Label. (commit)
  • Filetemplates: load and show tooltip for custom options. (commit)
  • Pass environment variables from process environment and set up with flatpak environment. (commit)
  • Remove usage of columns argument in arch detection since old LTS systems may not have that flag. (commit)
  • Pass the android toolchain file path to CMake as a local file path not as a URI. (commit. code review D21936)
  • Formatter: Hide KTextEditor minimap for the formatter preview. (commit)
  • Shell: use KAboutPluginDialog in LoadedPluginsDialog. (commit)
  • Mention all fetch project sources in the documentation. (commit. fixes bug #392550. code review D25342)
  • Script launcher: add env profile configure dialog button to config UI. (commit. fixes bug #410914)
  • Cmake: FindClang: Detect llvm-project.git checkout. (commit)

Get it

Together with the source code, we again provide a pre-built one-file-executable for 64-bit Linux as an AppImage. You can find it on our download page.

The 5.5.0 source code and signatures can be downloaded from download.kde.org.

Should you find any issues in KDevelop 5.5, please let us know on the bug tracker.


Finally, I am going to write about my experience as a student of Season of KDE 2020. A winter spent learning new things, and learning that what matters is not just writing code but writing good code. I would like to thank GCompris and KDE for giving me such an opportunity to be a part of the community and to try to bring happiness to the people and kids using it around the world.


The last few weeks I've done quite a bit of QCA cleanup.

Bit of a summary:
* Moved to KDE's GitLab and enabled clazy and clang-tidy continuous integration checks
* Fixed lots of crashes when copying some of the classes, it's not a normal use case, but it's supported by the API so make it work :)
* Fixed lots of crashes because we were assuming some of the backend libraries had more features than they actually do (e.g. we thought Botan would always support a given crypto algorithm, but some versions don't; now we check whether the algorithm is supported before saying it is)
* Made all the tests succeed :)
* Dropped Qt4 support
* Use override, nullptr (Laurent), various "sanity" QT_* defines, etc.
* botan backend now requires botan2
* Fixed most of the compile warnings

In the process I may also have broken the macOS and Windows builds, so if you're using QCA there you should start testing it and proposing merge requests.

Note: My original idea was actually to kill QCA, because I started looking at it and a lot of the code looked a bit fishy, and no one wants fishy crypto code. But then I realized we use it in too many places in KDE, and I'd rather have "fishy crypto code" in one place than in lots of different places; at least this way it's easier to eventually fix it.


The release of Plasma 5.18 is upon us! In 10 more days, it will be yours to have and to hold. Until then, the Plasma developers have been working feverishly to fix bugs, and to land some welcome improvements in 5.19! Check it out:

New Features

Bugfixes & Performance Improvements

User Interface Improvements

How You Can Help

Please test the Plasma 5.18 beta release! You can find out how here. If you find bugs, file them, especially if they are regressions from Plasma 5.17. We want to get as much testing, bug filing, and bugfixing as possible during the one-month beta period so that the official release is as smooth as possible.

More generally, have a look at https://community.kde.org/Get_Involved and find out more ways to help be part of a project that really matters. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don’t have to already be a programmer, either. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

Finally, consider making a tax-deductible donation to the KDE e.V. foundation.


February 01, 2020

Elisa is a music player developed by the KDE community that strives to be simple and nice to use. We also recognize that we need a flexible product to account for the different workflows and use-cases of our users.

We focus on a very good integration with the Plasma desktop of the KDE community without compromising the support for other platforms (other Linux desktop environments, Windows and Android).

We are creating a reliable product that is a joy to use and respects our users’ privacy. As such, we will prefer to support online services where users are in control of their data.

I am very happy to announce that Elisa has been submitted to the Windows Store.

Elisa on Windows 10, built with Craft

I strongly believe that we should push our applications onto the popular computing platforms for two reasons: to increase software freedom by making good free and open source applications easily available, and to possibly gain more contributors and/or testers.

I would like to thank all the people who have made it possible to reach this state: Hannah von Reth, Kevin Funk, Christoph Cullmann, Ben Cooksley and everyone involved in Craft and the Binary Factory, as well as the Elisa contributors (Alexander Stippich and Nate Graham in particular, but also new contributors Jerome Guidon, Edward Kigwana, Puneeth Chanda and others). The amount of support we get from being a part of the KDE Community is amazing, and I would like to encourage people to donate to KDE.

On a side note, I have set up accounts on Patreon, Liberapay and Paypal. Thanks a lot to the first donor, who motivated me to do that and donated some money. You can support my work via those services.


Dear digiKam fans and users, just a few words to inform the community that 7.0.0-beta2 is out and ready to test, one month after the first beta release published at Christmas time. After a long triaging stage, this new version comes with more than 600 bug fixes since the last stable release 6.4.0 and looks very promising. Nothing is finalized yet, as we plan one more beta version before next spring, when we will officially publish the stable version.


January 31, 2020

It has been a packed two months again around KDE Itinerary! Nextcloud Hub integrating the itinerary extraction engine, a presentation at 36C3, and work towards more elaborate assistance features are just some of the highlights since the last report.

New Features

One of the biggest recent feature additions is of course the integration with Nextcloud Hub, see the separate post on that.

Nextcloud Mail showing a Deutsche Bahn train booking (screenshot by Nextcloud).

That’s far from all though. Last month, work also started on a major new assistance feature: automatic suggestions on how to fill the gaps in your itinerary between elements extracted from reservations or tickets. That is, when do you need to leave home to be at the train station on time? How do you get from the airport to your hotel? Etc.

For this KDE Itinerary can now insert transfer elements into the timeline, and fill them with local public transport journeys retrieved via KPublicTransport.

KDE Itinerary showing local transportation details from a train station to a booked hotel.

Technically, transfer elements can be attached either to the beginning or the end of a booked element, with a stored time difference (i.e. how long before departure you want to be at the station, or how much time is needed to collect your baggage on arrival). This makes them move along if things change, for example due to delays.

This feature is still very much work in progress, there are currently still a number of limitations on where such elements can be added, as well as only a single home location that can be configured. This should become more powerful as we gather feedback on how this behaves in use, maybe to the point where it automatically adjusts your alarm clock based on delays on early morning departures :)

Another notable new feature is the ability to show the platform and vehicle layout for (some) departures. That’s particularly useful for the big multi-part long-distance trains with seat reservations, to guide you to the right place to board the train. As a side effect, it can also show you the way to some onboard amenities.

Layout of a double-segment Deutsche Bahn ICE train with the reserved coach highlighted in KDE Itinerary.

This currently only works for ICE and IC trains in Germany, simply because those are the only trains for which we have API endpoints that provide the necessary information. We found another one for India, but there we still need to implement support for the more basic public transport information queries first. It’s very likely that similar APIs exist for other operators too, but they might not be discoverable without knowing the local language; any hints are very welcome!

Like many other parts of KDE Itinerary this would also benefit heavily from some more attention to the visuals, with graphics done by professionals rather than myself, as well as fixes for a few remaining high DPI glitches.

And finally, KDE Itinerary got a new statistics page that shows you some information about all your trips, as well as the year-over-year changes.

KDE Itinerary statistics view showing data for 2019 and changes relative to the previous year.

This includes a CO₂ metric using simple average emission values per means of transport. That’s obviously far from accurate and could probably be improved by taking more trip parameters into account, but it’s a good start if you want to monitor the environmental impact of your travel over time.
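
To illustrate the idea only (the emission factors below are hypothetical placeholders, not the values KDE Itinerary uses), such a metric boils down to multiplying distance by a per-mode average:

# Hypothetical average emission factors, in grams of CO2 per passenger-km.
EMISSION_G_PER_KM = {"flight": 200, "train": 30, "bus": 70, "car": 160}

def trip_co2_kg(mode: str, distance_km: float) -> float:
    # Average factor times distance, converted to kilograms.
    return EMISSION_G_PER_KM[mode] * distance_km / 1000.0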

Infrastructure Work

Many things happened in the background as well of course, around both the data extraction engine as well as the real-time public transport data retrieval.

  • The extractor now has the ability to decrypt VDV e-Ticket barcodes. Those are used by a number of German local transport providers, and contain a few hundred bytes of binary payload describing what they are valid for. While we don’t understand much of the content yet (due to missing documentation, and due to needing operator-specific coding tables), this already works as a reliable trigger for custom extractors (such as the one for Deutsche Bahn, which works on some of those tickets). It’s also a good basis for further research into privacy aspects of those tickets, given we found fields for traveler name, birthday and gender in there (not always filled though). Probably worth a dedicated post.

  • We made some progress in decoding the binary ticket barcodes of VR (Finnish Railways). They seem to contain 43 bytes of payload with unaligned bit fields, and a 64-byte signature. The current state is documented in the wiki; the biggest remaining mystery is understanding the encoding of the departure and arrival station codes. (A sketch of reading such unaligned bit fields follows after this list.)

  • Custom extractors for iCal files can now trigger on iCal event properties, and got new API for safely extracting date/time values from iCal events without losing the timezone information (working around a JavaScript limitation).
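
As a purely illustrative aside (this is not the actual KItinerary code), unaligned bit fields like the ones suspected in the VR payload are typically sliced out of a byte string along these lines:

def read_bits(data: bytes, offset: int, count: int) -> int:
    # Read `count` bits starting at bit `offset`, most significant bit first.
    value = 0
    for i in range(count):
        byte_index, bit_index = divmod(offset + i, 8)
        value = (value << 1) | ((data[byte_index] >> (7 - bit_index)) & 1)
    return value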

A major addition to KPublicTransport has been the support for OpenTripPlanner-based backends. OpenTripPlanner is an open source trip planning server, deployed by a number of commercial operators as well as a few community projects. There’s unfortunately not a single version of it; there seem to be three major variants: the original REST API, the GraphQL API added by Digitransit, and the GraphQL API added by Entur. KPublicTransport can now deal with all of them.

In practice this means we gained national coverage in Finland, Norway and Tunisia, as well as a few more European and US cities. Digitransit-based backends seem to be popular with community-run public transport services, such as the ones in Münster, Germany and Ulm, Germany. Non-commercially run services can be preferable from a privacy point of view, so it’s great that we can now interface with those as well.

Fixes & Improvements

There’s plenty of smaller but still noteworthy changes as well of course.

Extractor engine:

  • New extractors for Indian Railways SMS confirmations, CCC tickets, RegioJet iCal attachments, Kintetsu Railway tickets, Norwegian bookings and boarding passes, SNCB and Thalys tickets, and DB seat reservations.
  • Renfe tickets now also benefit from Wikidata station data augmentation.
  • Bus trips on Deutsche Bahn tickets are now correctly handled.
  • Apple Wallet passes have a proper icon in the file browser now.
  • We fixed merging of train tickets and ticket-less train seat reservations, as well as merging of conflicting seat information for flight bookings and boarding passes.

KDE Itinerary app:

  • Calendar import on Android works again with newer DavDroid versions.
  • Journey views for alternative connection searches are now sorted correctly by departure time (rather than by departure time of the first transit section).
  • Journey elements with suspicious properties (such as too long walks) are highlighted in red in the journey icon summary view.
  • The public transport backend configuration page now has section headers per country, to deal with the increasing numbers of backends.
  • Finding of some optional dependencies has been fixed, so the nightly Android builds have working notifications and day/night mode computation for the weather forecasts again.
  • Disabling the weather forecast actually hides forecast elements now in the timeline rather than showing outdated information for the next nine days.
  • The flight page shows airport names correctly even if we only have partial information.
  • The ticket selector shows better labels for anonymous tickets now.
  • We fixed saving the result of an alternative connection search.

KPublicTransport:

  • Negative results for departure and journey queries are now also cached, reducing the amount of network queries a bit.
  • Routes can now also have a destination property alongside the existing direction property, which allows better data aggregation when available (e.g. circular lines do not have a destination).
  • Journey sections got a distance property, which is filled from backend information where available, or otherwise computed from geo coordinates (see the sketch after this list). That’s particularly useful for walking sections.
  • Location queries can also select appropriate backend services based on already known country information now, for example for address searches.
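
KPublicTransport itself is C++, but as an illustration, computing a distance from geo coordinates generally means a great-circle (haversine) formula along these lines:

import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    # Great-circle distance between two WGS84 coordinates, in kilometres.
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))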

Get in touch!

If you’ll be at FOSDEM this weekend and want to learn more about all this, drop by the KDE stand in building K, the Nextcloud stand in building H or my talk about KDE Itinerary (Saturday 17:00, room H.2215).

This work continues to rely on donated data samples, thanks to everyone who has helped with this so far! If you happen to have Thalys tickets with Aztec barcodes in them, those would be especially interesting at the moment as we are trying to decode their binary content.

If you want to help in other ways than donating test samples too, see our Phabricator workboard for what’s on the todo list, for coordinating work and for collecting ideas. For questions and suggestions, please feel free to join us on the KDE PIM mailing list or in the #kontact channel on Matrix or Freenode.


I arrived in Brussels yesterday, and today feels like the day before the storm. Closing some work from the hotel room, meeting some people before the fosdem chaos, doing some preparatory stuff for foss-north.

Make sure to check out the foss-north Community Day page. It is mostly scaffolding yet, but it will grow quickly. Also, if you bump into me, grab a foss-north flyer and help spread the word!


January 30, 2020

Tyson Tan has now provided a Breeze variant of the new Kate icon, too.

This is now in the master branch of the breeze-icons repository.

I hope other icon themes will update their Kate icon variants to match our new style, too.

I first tried to enforce the use of the new icon by renaming it, but I was reminded that this is too harsh and that I should give icon theme authors a chance to update at their own pace.

I guess that is the right approach, and I hope current themes will catch up with this.

Thanks already in advance to all people that might spend time on this in the future.

We have both the Breeze and the non-Breeze variant as SVG files available in our repository. You can base your work on that, if you like. For the licensing situation, take a look at the previous post.

Thanks to Tyson, Nate and Noah for the help during the Breeze submission process; see D27044 and T12594. The documentation on how to create a proper Breeze icon is very useful, too!


January 29, 2020


Latte Dock v0.9.8.1 has been released, containing important fixes and improvements!


Go get  v0.9.8.1   from  store.kde.org*

-----
* The archive has been signed with the GPG key: 325E 97C3 2E60 1F5D 4EAD CF3A 5599 9050 A2D9 110E

Fixes:


- KDE Help Menu -

  • provide a new way to set which application launcher among all docks/panels has the highest priority to be triggered with the Meta key; the one with a global shortcut applied is the one with the highest priority
  • consider Plasma panels under X11 so that the dock settings window does not overlap with them
  • fix which Plasma theme colors are used in all Latte painting mechanisms and make them consistent with Plasma
  • use the official KDE Frameworks Help Menu
  • provide the official KDE Frameworks way to set the application's language
  • add a hidden debug option, "--kwinedges"
  • properly paint the dock settings window's external shadows
  • fix margins/padding for applets that must follow Fitts's Law at the thick screen edge while staying consistent with all surrounding applets
  • add new LastActiveWindow APIs for window properties (Closable/Minimizable/Maximizable etc.) and provide them to applets; the Applet Window Buttons applet is already using this to identify buttons that should not be drawn for specific windows
  • add availableScreenRegion calculations for left and right screen edge docks/panels, in preparation for the new Plasma 5.18 API that will let us expose to Plasma which areas are not occupied by Latte panels/docks
  • fix wayland crash when showing dock settings window
  • improve KWin workarounds in order to properly reapply docks'/panels' activities when KWin erroneously loses them
     


Donations:

You can find Latte on Liberapay if you want to support it:     Donate using Liberapay


or you can split your donation between my active projects in kde store.

Part 1 - Hey everyone! This is the first time I get the chance to work for an organisation, and the mighty KDE at that. My project is to add multiple datasets to some of the GCompris activities. My first exposure to KDE was in December 2019; I was a bit lost in...


January 28, 2020

Could you tell us something about yourself?

Hi, I’m Spihon, I’m a born Desert Rat of the United States who loves to read, dabble in a bit of writing and play tabletop games from D&D to Palladium campaigns.

And a heavy user of nom de plumes. I’m still nervous about putting my real name on my art, so like one of my other pen names I’m an enygma.

Oh and I am a proud member of the 501st Legion for Charity, who do walks for causes like hemophilia and autism.

Do you paint professionally, as a hobby artist, or both?

Well, to be honest, I would love to draw professionally and make a living from it, so right now it’s a hobby that I take seriously, but hopefully I can make it work out in the end.

What genre(s) do you work in?

O_O…^^;

Boy that’s a broad section to cover, but I think I can narrow it down to sci-fi and fantasy with a bit of abstract surrealism (I honestly need to try that in Krita later). Anyway those are some of the things I feel comfortable working in.

Whose work inspires you most — who are your role models as an artist?

Again another tough question, I actually have several people who inspired me. One of them is Don Bluth and one of his employees, Dermot O’Connor, who has been a huge help in my art growth. And also Mike Crilley. Of the others I don’t personally know their names so it’s going to be their handles (which you can find on DeviantArt): my best friend Gothicraft, Rottenribcage, Onixx, Ipku, Taleea, Zarla, cgratzlaff, AbsoluteDream.

And finally my family: my cousins, Mrs. Battle, Stardewvalleycoconut, ToastyPaw, Nana, my aunt Nikki and my sister Sailorstar237, my uncles JGriz and Vraptor, and last but not least my mother. It’s because of these people that I got inspired to even dabble in art and also, without my family’s support, I don’t think I would have made it this far.

How and when did you get to try digital painting for the first time?

When did I start digital painting? It makes me feel old, but I think the first time was when I was about seven, when I tried MS Paint. It was the only thing I had available as a kid, and back then I wasn’t as good at it as I am now.

What makes you choose digital over traditional painting?

Well, for one thing digital is a bit more forgiving than traditional (well, if it doesn’t crash on you when you’ve been working on a piece for hours in it). And also money since I can’t keep replacing art supplies all the time.

Also I’ll be honest, I’m a lot neater with digital compared to traditional. Now don’t get me wrong: I love doing traditional, but someone who walked around with half their art project almost everywhere on their body…I guess you can get the picture. I get literally into my art.

How did you find out about Krita?

That’s an easy one, which ties in with digital… money. Around 2018 I was busy looking for a free art program that I could animate with, since I was struggling to find a job and thought I could try my hand at making videos for YouTube. And speaking of YouTube, that’s where I found it, in a video on how to animate, and I was sold, so I downloaded it and I’m not going back on it.

Actually, the anniversary of when I found it is next month, February 18th, so I’ll have been using it for two years.

What was your first impression?

Truthfully, it was a bit intimidating at first, until I got the hang of it and it became my go-to art program for everything I do, from simple paintings to comics. Heck, David Revoy even got me inspired to do it… Sure, I could have added him to the “who inspires me” section, but come on! He needs a special place as my Krita Rockstar…

Anyhoo, I draw more these days than I play video games.

What do you love about Krita?

It’s a free professional program, I don’t have to worry about paying monthly subscription fees, and it’s easy to handle.

Also the continuous line tool and the auto smooth curve option, those make my line work a lot nicer.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Well, as it is still in development I can’t complain much, but… I guess the old fonts: I was using one of the original fonts that came with Krita at the time, and I can’t seem to find it again.

The similar color selection tool doesn’t like to be used much or I’m still lacking knowledge in it… Which is more than likely with me.

Now that’s just the stuff that annoys me. As for improvements, I think the gradients could have variety in changing opacity and a bit more control in direction when using it.

But other than that, it’s an amazing program.

What sets Krita apart from the other tools that you use?

Compared to my first experience of using MS Paint or Gimp, it’s a lot more user friendly (to me, that is). I can easily bind keys to my keyboard instead of struggling to have it stay in the art program itself.

If you had to pick one favorite of all your work done in Krita so far, what would it be, and why?

That’s a tough one to answer, since thanks to Krita, I’ve improved a lot in my art, but if I had to name one it would be my Jewel Thief PhantomWing. The one you’re seeing here in this interview is a redraw, since my sketch and my first attempt at drawing digital didn’t turn out the way I wanted so I challenged myself to attempt it with Krita, and I’m much happier with the results than before.

What techniques and brushes did you use in it?

If I can remember the steps and the brushes I used for it I’ll tell ya, but I’m kind of drawing a blank, since when I did it, it was in the wee hours of the morning… And actually it was three days of that in sequence. I also have a lot of brushes in Krita right now…

So…Yeah what was the question again XD…^^;

My attempt at humor aside, the technique I used (if you can even call it that) is “splash colors on it and make it look pretty”. But to refine it I used the Bezier curve tool and my mouse to do the line work, the selection tools (except the similar color one), and the paint bucket. And my art tablet for any detail work.

Now as my first sketch was me just making the lines add up to make a picture, I used the reference image tool to help finalize details for the final product. And lastly the gradient tool for easy shading.

As for the brushes, I mostly used the HC Concept brushes for the whole thing, so yeah it was literally that and the gradient tool.

Where can people see more of your work?

You can find me at my DeviantArt account, which is Spihon: https://www.deviantart.com/spihon

To be honest I don’t do much social media, but you can find some of my current comic projects (some original, some fan) and a lot of photos of my earlier career as a 501st member when I didn’t have a bucket to speak of (which is my group’s slang for the helmets, if that helps).

Anything else you’d like to share?

Well, stay in school, and may good fortune be your ever constant companion!

 


KUESA™ is a solution that provides an integrated and unified workflow for designers and developers to create, optimize and integrate real-time 3D content in a 3D or hybrid 2D/3D software user interface. Models, including geometry, materials, animations and more, can smoothly be shared between designers and developers. Kuesa relies on glTF 2.0, an open standard which is being broadly used in the industry, so it can easily and seamlessly integrate with a growing number of authoring tools.

Kuesa is made of 3 main components:

  • In addition to supporting any compliant glTF 2.0 files, Kuesa provides plugins for content creation tools Blender and 3DS Max for creating glTF 2.0 files which support custom extensions.
  • A Qt module designed to load, render and manipulate glTF 2.0 models in applications using Qt 3D, making it easy to do things like triggering animations contained in the glTF files, finding camera details defined by the designer, etc. It also includes a library of post processing effects which can be easily extended.
  • Tools to help designers and developers inspect and optimise the models, including command line tools to automate the process.

This blog post outlines the new features and changes in the 1.1.0 release.

The main objective for this release was to ensure compliance with the Khronos glTF 2.0 specification. We made sure that Kuesa works on a wide range of target hardware, be it desktop, mobile or embedded. In addition, a great deal of work has been put into the technical documentation, following the Qt standards. The documentation is available at https://www.kdab.com/kuesa/

Kuesa 1.1.0 supports Qt 5.12.5, 5.13 and the recently released Qt 5.14.

Kuesa Ecosystem

This release is marked by an ever-growing ecosystem of Kuesa-related tools, bindings and integrations.

First and foremost, full Python bindings for Kuesa have been developed – learn more about this here. Support for Kuesa extensions, such as layer support, is being worked on in the official Khronos Blender glTF exporter; we will keep you posted about the progress on that. The 3DS Max Kuesa exporter can be downloaded here.

gltfEditor (ex-assetpipelineeditor) has been improved, and a simple, minimal-dependency glTF 2 viewer has been implemented. It supports a few command line flags useful for testing and is simply named gltfViewer.

A new tool to process glTF 2.0 assets from the command line has been implemented. It is named assetprocessor. It can, for instance, compress meshes, embed buffers in a glTF file, or extract them.

Breaking changes

Some tools have been renamed in order to make their main use case clearer.

  • assetpipelineeditor has been renamed gltfEditor.
  • The extension of shader graph save files has been changed from .qt3d to .graph (the files in kuesa/src/core/shaders/graphs).
  • The Material API has been refactored: all the properties on materials (MetallicRoughnessMaterial, UnlitMaterial) are now grouped in material-specific “Properties” classes – MetallicRoughnessProperties, UnlitProperties. This decouples the actual material logic from the parameters needed for rendering it.

New features

glTF 2.0 support has been greatly improved in this release. Morph targets, which were missing in Kuesa 1.0, are now available.

Support for a few extensions and niceties has been added:

  • KHR_lights_punctual: this feature manifests itself through the new Kuesa.DirectionalLight, Kuesa.PointLight and Kuesa.SpotLight objects.
  • KHR_materials_unlit: unlit materials are now also supported in glTF 2.0 files.
  • Support for loading GLB files (single-file binary glTF format) has landed.
  • The GLTF2Importer now supports multiple glTF scenes.
  • Support for 2 UV sets was added as the spec requires it.
  • A BRDF look-up table texture is now used.

The Kuesa API has been augmented in a few ways: it now provides a way to pass options to the parser, through the GLTF2Options C++ class and the Kuesa.GLTF2Options QML object.

Post-processing filters have seen an overhaul, and can now leverage depth textures. In addition, a few new effects were implemented: depth-of-field and bloom. Bloom can be tested in the car scene demo, and all effects are showcased in the framegraph_scene test project.

Car with bloom

Not as shiny as the latest Corvette, but still with a heavy dose of bloom!

Normals and tangents can now be automatically generated when missing (through the help of the MikkTSpace open-source library) – see the GLTF2Options.

gltfEditor (ex-assetpipelineeditor) has gained a few new features, too:

  • It is now possible to configure the exposure, gamma, and the tonemapping algorithm in the settings.
  • The editor is able to save and load camera settings, under the camera pane.
  • Tangents can conveniently be generated from the user interface.
gltf editor

Polly model in the gltfEditor

A few new examples and demos were added :

  • A music box moved by a mechanical arm. Bonus points for whoever finds the original melody used in the demo 🙂
music box

The music box demo

  • A demo of the various tonemapping algorithms implemented in Kuesa:
tonemapping

The various tonemapping filters available, as showcased in the tonemapping demo

General improvements

The codebase was thoroughly tested on a wide range of hardware; as such, many fixes were implemented in multiple areas of the codebase. Various HiDPI and multisampling related issues were fixed. For instance, post-processing effects now support multisampling correctly.

Better performance was achieved for the car demo on low-power devices thanks to the use of pregenerated images for the user interface.

The tonemapping and PBR pipelines were also reworked. The changes fix color space issues in the frame graph, in particular with post-processing effects. The PBR rework also fixes various subtle rendering artefacts.

Contributing to Kuesa

An infrastructure change has been put in place during this release cycle: dissatisfied with GitHub’s pull request UI, we migrated code review to an open Gerrit instance, linked with our GitHub repository.

This instance is available at https://kuesa-codereview.kdab.com. Please use it to contribute!

Finally, a big thanks to all the contributors to this new version: Jim Albamont, Robert Brock, Timo Buske, Juan Jose Casafranca, Jean-Michaël Celerier, Wieland Hagen, Sean Harmer, Mike Krus, Paul Lemire, David Morgan, Mauro Persano, Nuno Pinheiro, Allen Winter.

About KDAB

If you like this blog and want to read similar articles, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.



When I read “Qt offering changes 2020” yesterday, my first reaction was to write a pissy blog post. I’m still writing a blog post with my thoughts about the changes, but I’ll be nice. There are three parts to this post: a short recap of my history with Qt and then my thoughts on what this means for KDE, for Krita and for free software.

I started programming using Qt and PyQt when I read about Qt in Linux Journal, which I was subscribing to back in 1996. That means that I’ve been using Qt for about 25 years. I initially wanted to write an application for handling linguistic field data, and I evaluated GTK+, wxWidgets, Qt, Tk, fltk, V and a few others that have been forgotten in the mists of time. I chose Qt because it had great documentation, a consistent API, the most logical (to me…) way of doing things like setting up a window with a menu or handling scrollbars, and finally because it made C++ as easy as Java.

I’ve stayed with Qt for 25 years because, through all the vicissitudes, it kept those qualities. Mostly. There are now a lot more modules, most of which aren’t necessary for my work; there are people working on Qt who seem to be a bit ashamed that Qt makes C++ as easy as Java and want to make Qt as computer-sciency as C++; there have been the licensing issues with the QPL, the changes to GPL, to LGPL and then again some modules back to GPL; there have been the Nokia years, the Digia times.

But I’ve always felt that I could build on Qt. And the reason for that is the KDE Free Qt Foundation. To summarize: this is a legal agreement that keeps Qt free software. If the Qt company won’t release a version of Qt under a free software license within a year of a release, Qt becomes licensed under the BSD license.

With yesterday’s message, the Qt company is searching the utter boundaries of this agreement. To recap:

  • Long Term Support releases remain commercial only (the post doesn’t mention this, but those releases also need to be released under a free software license within a year to adhere to the agreement, at least to my understanding).
  • Access to pre-built binaries will be restricted: put behind an account wall or only available to commercial license holders.
  • And there’s a new, cheaper license for small companies that they can use to develop, but not deploy their work to customers.

This is a weirdly mixed bag of “changes”. The last one is a bit silly. Even the “commercial” side of Krita is too big to qualify! We’re five people and have a budget of about 125k…

The middle point is worth considering as well. Now there is nothing in any free software license that talks about a duty to make binaries available.

For a very long time, Krita, when part of KOffice, only made source tarballs available. Right now, we, like the Qt company, have binaries for Linux, Windows, macOS and (experimentally) Android. The Windows binaries are for sale in the Windows Store and on Steam, the Linux binaries are for sale on Steam. And all binaries can be downloaded for free from krita.org and other places.

This move by the Qt company would be like the Krita project shutting down the free downloads of our binaries and only making them available in the various stores. It would be legal, but not nice, and it would cost us hundreds of thousands of users, if not millions. It is hard not to wonder what the cost to the Qt community will be.

The first change, the restriction of the LTS releases to commercial customers has all kinds of unexpected ramifications.

First off, Linux distributions. Distributions already rarely use LTS releases, and in any case, with Qt 3 and Qt 4 there didn’t use to be any LTS releases. But distributions do have to keep older versions of Qt around for unported applications for a longer time, so they do need security and bug fixes for those older versions of Qt.

Then there’s the issue of how fixes are going to land in the LTS releases. At the last Qt contributor summit the Qt project decided on a process where all fixes go through “dev” and then are ported to the stable branches/LTS branches. That’s going to break when Qt6 becomes dev: patches won’t apply to Qt 5.

Albert has already blogged about this change as well, but he only really focused on distributions and KDE Plasma; there is of course much more to KDE than the Plasma desktop and Linux distributions.

As for Krita, we’re using Qt 5.12 for our binaries because we carry a lot of patches that would need porting to Qt 5.13 or 5.14 and because Qt 5.13 turned out to be very, very buggy. For Krita, using a stable version of Qt that gets bug fixes is pretty important, and that will be a problem, because we will lose access to those versions.

In my opinion, while we’ve done without stable, LTS releases of Qt for years, it’s inevitable that Qt 5.15 will be forked into a community edition that gets maintained, hopefully not just by KDE people, but by everyone who needs a stable, LGPL-licensed release of Qt 5 for years to come.

Splitting up the Qt community, already responsible for handling a huge amount of code, is not a good idea, but it looks like the Qt company has made it inevitable.

And once there’s a community maintained fork of Qt, would I contribute to the unforked Qt? Probably not. It’s already a lot of work to get patches in, and doing that work twice, nah, not interested. If there’s a maintained community version of Qt 5, would I be interested in porting to Qt 6? Probably not, either. It isn’t like the proposed changes for Qt 6 excite me. And I don’t expect to be the only one.

As for the more intangible consequences of these changes: I’m afraid those aren’t so good. Even in our small Krita community, we’ve had people suggest it might be a good idea to see whether we couldn’t port Krita to, say, Blender’s development platform. This would be a sheer impossible task, but that people start throwing out ideas like that is a clear sign that the Qt company has made Qt much less attractive.

If I were to start a new free software project, would I use Qt? Last Sunday the answer would have been “of course!”. Today it’s “hm, let’s first check alternatives”. If I had a big GTK-based project that’s being really hampered by how bad, incomplete and hard to use GTK is, would I consider porting to Qt? Same thing. If the KDE Free Qt Foundation didn’t have that agreement with the Qt company, the answer would probably be no; right now, it’s still probably a yes.

Now as for the actual announcement. I think the way the Qt company represents the changes is actually going to harm Qt’s reputation. The announcement is full of weasel-wording…

“General Qt Account requirement” — this means that in order to download Qt binaries, everyone is going to need a Qt account. Apparently this will make open-source users more eager to report bugs, since they will already have an account. And, yay, wonderful, you need an account to access the completely useless Qt marketplace. And, now we’re getting at the core reason, it allows the Qt company to see which companies are using the open source version of Qt and send salespeople their way. (But only if the people making the accounts are recognizable, of course, not if they make the account with their gmail address.) When I was working for Quby, I was unpleasantly surprised at how expensive Qt is, how little flexibility the Qt company shows when dealing with prospective customers — and how we never downloaded the installer anyway.

“LTS and offline installer to become commercial-only” — this will break every free software project that uses services like Travis to make builds that download Qt in the build process. Of course, one can work around that, but the way the Qt company represents this is “We are making this change to encourage open-source users to quickly adopt new versions. This helps maximize the feedback we can get from the community and to emphasize the commercial support available to those with longer product life cycles that rely on a specific Qt version.” Which of course means “our regular releases are actually betas which we expect you freeloaders to test for us, to provide bug fixes for us, which we can use to provide the paying customers with stable releases”.

And yes, theoretically, the main development branch will have all bug fixes, too, and so nobody misses out on those bug fixes, and everyone has stability… Right? The problem is that Qt has become, over the years, bigger and buggier, and I doubt whether releases made fresh off the main development branch will be stable enough to provide, say, a stable version of Krita to our millions of users. Because, apart from all the bug fixes, they will also have all the new regressions.

“Summary”: “The Qt Company is committed to the open-source model of providing Qt technology now and in the future and we are investing now more than ever.” — though only to the extent that the Qt Company is forced to adhere to the open-source model by the KDE Free Qt Foundation.

“We believe that these changes are necessary for our business model and the Qt ecosystem as a whole.” — my fear is that the Qt Company will not survive the fracturing of the Qt ecosystem that this decision practically guarantees.


If you remember my previous post, I mentioned that some work has been happening thanks to an intern we had for a few months at enioka Haute Couture. Well, we thought it would be unfair to just take credit for her work, and so she wrote a piece about her internship that we’re sharing with the world.

The following is the description of the current state of ComDaAn in her own words. It has previously been published in French on the enioka blog and on her own blog as well.

Thanks again Christelle for the wonderful work you did. The stage is yours now!

Current State of ComDaAn: Community Data Analytics, by Christelle Zouein

Faced with the growing number of tools developers use to write and distribute code, we’ve decided to have a closer look at the data footprint these leave behind. ComDaAn analyzes said footprint to offer an insight into the inner workings of developer teams and open source communities.

It can be used to examine how different communities and teams function by, for example, looking at the commits of a git repository over time. This can be particularly useful in the case of a technical audit, or to select the software “brick” whose community has the best long-term chances.

This article will cover community data analytics in the context of open source development and free software. More specifically it will seek to introduce ComDaAn and explain the newest additions to the project.

A little bit of history

Paul Adams is a developer renowned for his work in the field of free software and his many contributions to the KDE FOSS community. Before retiring from KDE, Adams provided the community with a service in the form of community data visualization using git repositories. To ensure the continuity of the service, Kevin Ottens, Libre software craftsman and developer at enioka Haute Couture, decided to take over.

And so, ComDaAn took form as a way of modernizing Paul Adams’ scripts while staying true to his vision.

That later turned into a complete rewrite with the purpose of creating a solid base for community data analytics in general. The project then became a suite of tools to study and analyze data produced by software communities and teams.

Features

ComDaAn became what it is today thanks to the common efforts of multiple developers. It has many features, most of which will be explained in what follows.

Supported data sources

To conduct analyses on open source communities and software teams, ComDaAn uses the data these entities produce, mainly git repositories, mailing lists and GitLab issues.

Git repositories

Perhaps the most intuitive data source to consider would be git repositories. Indeed, git repositories are a way to store files as well as track the changes that have been made to them. Git is a tool designed to coordinate work between programmers and is currently the one most commonly used by both the free software and proprietary software communities alike.

So, analyzing their git repositories and thus the products they put forth would be the most direct way to study these communities.

This was the starting point for ComDaAn.

Mailing Lists

Mailing lists are a sort of discussion list or forum where internet users can communicate and discuss certain subjects. Because of the advantages they present over other means of communication, such as the possibility to work offline, to sign one’s emails via GPG, or to use a mail client’s filtering features, they are commonly used in open source communities.

Therefore, analyzing the public mailing lists of an open source community, or the private archives of a team of developers, offers an insight into the discussion within them as well as their readiness for discussion with both users and developers. It thus seemed appropriate to add this data source to the project to establish a more complete profile of the community or team studied.

GitLab Issues

Some Git repository managers like GitLab or GitHub offer an issue tracking system (ITS) that manages and maintains lists of issues, which are a way to measure the work needed to improve a project. Popular amongst developer teams and communities, ITSs have become a tool for software maintenance. Analyzing them therefore allows the scrutiny of this aspect of a project.

Moreover, certain actors of the software community still went unnoticed even with these two data sources. Analyzing the issues of a project helps to overcome this problem, since it focuses on the lesser known members of the community: those who report bugs (the reporters) and those who discuss them (the commenters).

For these reasons, ComDaAn needed to be expanded to include the analysis and fetching of GitLab issues.

Types of analyses

There are five types of analyses in ComDaAn that a user can perform on the supported data sources: activity, size, network, centrality and response time.

In what follows, the team or community mentioned is formed by the authors of the entries, and the project by the data source or sources submitted for analysis. The duration of the project thus becomes equal to the duration of all the entries combined. In addition, the results obtained represent the active members of the entity in question, not the entity as a whole.

Activity

The activity analysis can be seen as a tool used to visualize the activity of the members of a community or a team, week by week, for the duration of a certain project. The term “activity” here represents the number of commits, email messages, issues or comments created by the members of the team or community during the period considered.

And so, it becomes possible to identify who has been the most active out of a group. It is also possible to tell when a member has joined or left the team or community considered which gives an idea about the recruitment and retention trends.

The activity visualization consists of a list of the author names and their weekly activity over the life of a project.


Team member activity

Thanks to the above visualization, we can examine the authors that have contributed to the project and the frequency at which they have contributed. The darker the color, the more active the author has been during the corresponding week.

Size

The size analysis can be seen as a tool that is used to plot the variation of a community’s size and activity over the duration of a project. Unlike the activity analysis, which studies the team on a per-person basis, the size analysis looks at the team as a whole. Indeed, the analysis produces a scatterplot that is smoothed into two curves: one representing the number of authors in a community as a function of time, and one representing the number of entries in the corresponding project as a function of time.


Team activity and size over time

With such a visualization, it becomes possible to reveal certain events that would have otherwise gone unnoticed, by looking at the general tendencies of the team size or activity. In the previous example, an interesting question that might arise would be what event happened during the life of the project that generated an activity peak in 2012.

This visualization is fairly easy to read and serves as a guide for choosing more specific analyses on more relevant periods. For instance, it would be worth considering looking at the contributor network (further explained in the following section) of the above project before 2012 and then during 2012 to further investigate the causes of the activity peak.

Network

The network analysis is a tool to show the relationships between the different members of a team or community, as well as its structure. To create the graph representing said structure and calculate the centrality of each node (author), we proceed differently for each data source:

  • For git repositories, we mainly consider the changes an author made to a set of files: the more an author has modified files that a lot of others have also modified, the more they have collaborated with the rest of their team (at the least to synchronize and discuss their modifications), and the more central they will appear in the contributor network.
  • For mailing lists, we mainly use the references in a message: the more a sender has been referenced by different senders, the larger their weight.
  • For GitLab issues, we rely on the discussion surrounding the issues of a project: the more an author reports bugs and comments on discussion threads, the larger their weight.


A network of Contributors

It thus becomes easy to identify the key contributors of a certain project or community over a given period. Typically, the most central person or people of a project are those who maintain it. A use case of the network analysis would then be, for instance, to conduct it on different periods during the life of a project to see whether the maintainers have changed; a sketch of this is shown below. A project that can survive a change of maintainers could be labeled as a more resilient project.
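As an illustration, here is what that comparison might look like with the library described in the Interface section further down. This is only a sketch: the network function name and the column names are assumptions on my part, as is the comdaan module name.

import comdaan as cd  # module name assumed

commits = cd.parse_repositories('~/path_to_repositories')

# Compare the contributor network before and during 2012, to see whether the
# central contributors (likely the maintainers) changed between the periods.
before = commits[commits['date'] < '2012-01-01']
during = commits[(commits['date'] >= '2012-01-01') & (commits['date'] < '2013-01-01')]
cd.display([cd.network(before, 'id', 'author_name', 'date'),
            cd.network(during, 'id', 'author_name', 'date')])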

Centrality

Where a network analysis shows the centrality of the different members of a team or a community, a centrality analysis shows the variation of the centrality of an individual author over time. The centrality of an author is calculated similarly to how it is in the network analysis, but instead of considering the duration of the whole project, it considers a smaller interval and calculates the centrality of the author over it, repeating the computation over the course of the project. Thus, this type of visualization displays the variation of the centrality of a member over time. It also displays the variation of the author’s activity over time, and that of the size of the community over the same period.

Instead of studying the entity in question at the community or team scale like its counterparts, the centrality analysis allows us to study each member at the individual scale and their impact on the team or community.


Contributor centrality over time

This particular visualization is the most computationally costly and the hardest to read. Indeed, the amplitude of the centrality plot would vary depending on the size of the team. Therefore, centrality tendency analyses are only relevant over periods of relative stability team/community size-wise (hence the third plot in the above example).

Responsiveness

Essentially, issues are the support that the creators of a certain software offer to its users. So, the study of the response time of a team to reported bugs and issues offers an insight into the maintenance of said software and the dedication of the team to technical support.

This analysis calculates, on one hand, the duration between the creation of the issue and the start of the discussion around it to plot the variation of the response time over time, and on the other the rate of unanswered issues at each point in time.

More specifically, we plot a figure combining a curve representing the variation of the response time to issues over time, and a bar chart representing the number of unanswered issues.


Responsiveness to issues over time

We can thus represent in a more tangible way, the dedication of a community or team to the success of their products after deployment. Subsequently such a visualization can for example help in the choice of new tools to adopt.

Interface

The ComDaAn user interface has been designed to be efficient and easy to use. It is a hybrid interface, made up of a script to collect the data sources from the GitLab API and a Python library for parsing, analyzing and displaying data.

ComDaAn interface

The library consists of three functions for the parsing of the three different data sources, of five functions for the five different analyses, and of one function to display the results. It mainly uses “pandas.DataFrame”, making it more universal. It also allows the user to modify these data frames according to their needs as well as to choose the columns they want to use for their analysis. Indeed, to use the library, the user must specify the data set as well as the columns to be considered when calling each analysis function.

To better illustrate, let’s consider the following code snippet:


import comdaan as cd  # module name assumed for the import

commits = cd.parse_repositories('~/path_to_repositories')
activity = cd.activity(commits, 'id', 'author_email', 'date')
cd.display(activity)

Here, we want to perform an activity analysis of some git repositories located at a certain path from the root. We first start by calling the parsing function that corresponds to our data source. That function returns a data frame of the different entries.

Then, we call the activity analysis function on the data frame and specify the columns to be considered.

Finally, we display our result with the display function.

However, to have the authors’ email addresses in our final DataFrame instead of their names, we enter the name of the author email column where the function expects the author names. The result then displays the activity per week for individuals whose emails are those found in the dataset.


Activity analysis displayed by email

Moreover, the display function offers many options. It can receive as a parameter a heterogeneous list of elements and display them differently according to their types. It can also receive a simple pandas data frame and display it. Additionally, to better compare two or more results of the same object type, it is possible to display them on the same figure with the same axes. Examining them then means looking at one figure with different plots instead of repeatedly switching between tabs.

To better illustrate let’s consider this time the following code snippet:


import comdaan as cd  # module name assumed for the import

commits = cd.parse_repositories('~/path_to_data')
issues = cd.parse_issues('~/path_to_data')
commits_ts = cd.teamsize(commits, 'id', 'author_name', 'date')
issues_ts = cd.teamsize(issues, 'id', 'author', 'created_at')
cd.display([commits_ts, issues_ts])

Here, we decided to use data from different sources, apply the same analysis to each, and then display the results at once. The two plots are then overlaid on the same figure.


Team activity and size using both team commits and issues

Finally, thanks to the separation of the different stages of the ComDaAn process, it is possible to avoid potential redundancies. Indeed, the parsing, which is the most time-consuming step of the whole process, is done once per data source where it used to be done by each script called, and the display is optimized. The global execution time is thus smaller.

And now …

During my four-month internship at enioka Haute Couture, I worked on ComDaAn, and more specifically on optimizing the code performance-wise, on adding mailing lists and issues as additional data sources, on using LOWESS regression for smoothing the different curves displayed, and finally on the design and programming of the interface.

The ComDaAn project has evolved over time thanks to the work and dedication of many contributors. In the same manner, it will continue to evolve and find new ways to serve community data analytics as well as the open source community.

Christelle Zouein


January 27, 2020

Obvious disclaimer, this is my opinion, not KDE's, not my employer's, not my parents', only mine ;)

Big news today is that Qt long-term support (LTS) releases and the offline installer will become available to commercial licensees only.

Ignoring the upcoming switch to Qt 6 for now, how bad is that for us?

Let's look at some numbers from our friends at repology.

At this point we have two Qt LTS branches going on, Qt 5.9 (5.9.9 since December) and Qt 5.12 (5.12.6 since November).

How many distros ship Qt 5.9.9? 0. (there's macports and slackbuilds but none of those seem to provide Plasma packages, so I'm ignoring them)

How many distros ship Qt 5.12.6? 5: Adélie Linux, Fedora 30, Mageia 7, OpenSuse Leap 15.2, PCLinux OS (ALT Linux and GNU Guix also do, but they don't seem to ship Plasma). Those are some bigger names (I'd say especially Fedora and OpenSuse).

On the other hand, Fedora 28 and 29 ship some 5.12.x version but have not updated to 5.12.6; OpenSuse Leap 15.1 has a similar issue, stuck on 5.9.7 without updating to 5.9.9; and so does Mageia 6, which is stuck on Qt 5.9.4.

Ubuntu 19.04, 19.10 and 20.04 all ship some version of Qt 5.12 (LTS), but not the latest version.

On the other hand, a few other "big" distros don't ship Qt LTS at all: Arch and Gentoo ship 5.14, our not-distro-distro Neon is on 5.13, and so is flatpak.

As I see it, the numbers say that while it's true that some distros are shipping the latest LTS release, it's not all of them by far, and it looks more like opportunistic use: the LTS branch is followed for a while in the last release of the distro, but the previous ones get abandoned at some point, so the LTS doesn't really seem to be used to its full potential.

What would happen if there was no Qt LTS?

Hard to say, but I think some of the "newer" distros would actually be shipping Qt 5.13 or 5.14, and in my book that's a good thing; moving users forward is always good.

The "already released" distros are a different story, since they would obviously not be updating from Qt 5.9 to 5.14, but as we've seen, it seems that most of the time they don't really follow the Qt LTS releases to their full extent either.

So all in all, I'm going to say not having Qt LTS releases is not that bad for KDE; we've lived without them for a long time (remember there have only been four Qt LTS releases: 4.8, 5.6, 5.9 and 5.12), so we'll do mostly fine.

But what about Qt 5.15 and Qt 6, you ask!


Yes, this may actually be a problem. If all goes to plan, Qt 5.15 will be released in May and Qt 6.0 in November; that means we will likely get up to Qt 5.15.2 or 5.15.3, and then that's it, we're moving to Qt 6.0.

Obviously KDE will have to move to Qt 6 at some point, but that's going to take a while (as an example, Plasma 5 was released when Qt was at 5.3), so let's say that for a year or two we will still be using Qt 5.15 without any bugfix releases.

That can be OK if Qt 5.15 ends up being a good release, or a problem if it's a bit buggy. If it's buggy, well, then we'll have to figure out what to do, and it'll probably involve some kind of fork somewhere, be it by KDE (we already had that for a while in ancient history with qt-copy) or by some other trusted source. But let's hope it doesn't get to that, since it would mean that there are two sets of people fixing bugs in Qt 5.15, The Qt Company engineers and the rest of the world, and doing the same work twice is not smart.

I spent a week in Delhi on a trip to be part of conf.kde.in. One of the talks I gave had a line in it: “Translation is Accessibility”.

I would probably add that accessibility is a right, although that would be hypocritical of me, given that Calamares’s accessibility isn’t all that good (part of that is down to Qt and a languishing patch for making Qt-applications-as-root accessible). There are some open issues on that front, and I hope that we’re going to find some progress in the next few months.

In any case, one of the talks was on the transition of the Janayugom newspaper to Free Software – Scribus and KDE applications. That includes the challenges of dealing with fonts, writing, transliteration, and more. Read the upstream story from the people who did the work. At conf.kde.in both Kannan and Subin spoke about Malayalam topics; Kannan about the newspaper, and Subin about KDE bits. I showed off Calamares running in Malayalam as well, although since I hadn’t prepared that, I didn’t have proper Indic fonts installed and it was terribly ugly. In Hindi it looked ok, so there’s plenty of work for system integrators to do to deliver a good-looking localized desktop there.

Since I was also giving a talk about translations and one about Calamares, I decided to canvass for more translators. Gujarati, for instance, has only one translator and not much work done, so I was hoping to find some helpers.

What I didn’t expect was for SuperX to stand up. It’s a distro developed in Assam, and Wrishiraj said he’d get right on the translation into Assamese. He tried to teach me to pronounce it correctly; I doubt he succeeded.

Assamese

What did succeed is the addition of Assamese to Calamares. After setting up a translation team, Wrishiraj did a little translation and I added it to the Calamares configuration. That means you can choose the language for installation, although the overall translation state is far from complete. That will change in the next few weeks though.

I hit one interesting bug right away: the language Assamese has language code as, and Asturian is ast. Asturian was already available in Calamares, and somewhere the code was looking for shortest matches: so picking Assamese as the installer language would set the system language to Asturian. That is probably not a common combination. Once I got that sorted I pushed out a new Calamares release 3.2.18 to show off the new language (and fix some other bugs).
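For illustration only, here is a toy Python sketch of that pitfall; this is not Calamares code, and the locale list and function names are made up:

SUPPORTED = ["ast", "as", "de", "hi"]  # toy list of installer locale codes

def pick_locale_buggy(requested: str) -> str:
    # Buggy: accepts any prefix relationship between the requested code and a
    # supported code, so "as" (Assamese) can match "ast" (Asturian) first.
    for candidate in SUPPORTED:
        if candidate.startswith(requested) or requested.startswith(candidate):
            return candidate
    return "en"

def pick_locale_fixed(requested: str) -> str:
    # Fixed: prefer an exact match, and only fall back to "language_COUNTRY"
    # style prefixes such as "de_DE" -> "de".
    if requested in SUPPORTED:
        return requested
    matches = [c for c in SUPPORTED if requested.startswith(c + "_")]
    return max(matches, key=len, default="en")

print(pick_locale_buggy("as"))  # "ast" -- Asturian instead of Assamese
print(pick_locale_fixed("as"))  # "as"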

As I write this, I realise that Calamares isn’t very friendly to updating-translations-as-you-go; it wants a recompile to pack everything into QRC files. However, some time ago I built machinery to search for translations in multiple places, so expect the next Calamares release to support updating translations outside of the main source package (for testing and for supporting translation efforts like this one).

So chalk this up to conf.kde.in: new languages, new collaborations, and bugfixing in far-flung corners of the Free Software world. That’s the kind of activity that KDE e.V. is there to support.


It has been a while since the last AppStream-related post (or any post for that matter) on this blog, but of course development didn’t stand still all this time. Quite the opposite – it was just me writing less about it, which actually is a problem as some of the new features are much less visible. People don’t seem to re-read the specification constantly for some reason 😉. As a consequence, we have pretty good adoption of features I blogged about (like fonts support), but much of the new stuff is still not widely used. Also, I had to make a promise to several people to blog about the new changes more often, and I am definitely planning to do so. So, expect posts about AppStream stuff a bit more often now.

What actually was AppStream again? The AppStream Freedesktop Specification describes two XML metadata formats to describe software components: One for software developers to describe their software, and one for distributors and software repositories to describe (possibly curated) collections of software. The format written by upstream projects is called Metainfo and encompasses any data installed in /usr/share/metainfo/, while the distribution format is just called Collection Metadata. A reference implementation of the format and related features written in C/GLib exists as well as Qt bindings for it, so the data can be easily accessed by projects which need it.

The software metadata contains a unique ID for the respective software so it can be identified across software repositories. For example the VLC Mediaplayer is known with the ID org.videolan.vlc in every software repository, no matter whether it’s the package archives of Debian, Fedora, Ubuntu or a Flatpak repository. The metadata also contains translatable names, summaries, descriptions, release information etc. as well as a type for the software. In general, any information about a software component that is in some form relevant to displaying it in software centers is or can be present in AppStream. The newest revisions of the specification also provide a lot of technical data for systems to make the right choices on behalf of the user, e.g. Fwupd uses AppStream data to describe compatible devices for a certain firmware, and the mediatype information in AppStream metadata makes it easier to install applications for an unknown filetype. Information AppStream does not contain is data the software bundling systems are responsible for. So mechanistic data on how to build a software component or how exactly to install it is out of scope.

So, now let’s finally get to the new AppStream features since last time I talked about it – which was almost two years ago, so quite a lot of stuff has accumulated!

Specification Changes/Additions

Web Application component type

(Since v0.11.7) A new component type web-application has been introduced to describe web applications. A web application can for example be GMail, YouTube, Twitter, etc. launched by the browser in a special mode with less chrome. Fundamentally though it is a simple web link. Therefore, web apps need a launchable tag of type url to specify a URL used to launch them. Refer to the specification for details. Here is a (shortened) example metainfo file for the Riot Matrix client web app:

<component type="web-application">
  <id>im.riot.webapp</id>
  <metadata_license>FSFAP</metadata_license>
  <project_license>Apache-2.0</project_license>
  <name>Riot.im</name>
  <summary>A glossy Matrix collaboration client for the web</summary>
  <description>
    <p>Communicate with your team[...]</p>
  </description>
  <icon type="stock">im.riot.webapp</icon>
  <categories>
    <category>Network</category>
    <category>Chat</category>
    <category>VideoConference</category>
  </categories>
  <url type="homepage">https://riot.im/</url>
  <launchable type="url">https://riot.im/app</launchable>
</component>

Repository component type

(Since v0.12.1) The repository component type describes a repository of downloadable content (usually other software) to be added to the system. Once a component of this type is installed, the user has access to the new content. In case the repository contains proprietary software, this component type pairs well with the agreements section.

This component type can be used to provide easy installation of e.g. trusted Debian or Fedora repositories, but also can be used for other downloadable content. Refer to the specification entry for more information.
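For orientation, a minimal metainfo file for such a component might look roughly like this; the ID and names here are made up, and the tags that actually describe the repository source are covered in the specification entry:

<component type="repository">
  <id>org.example.stable-repo</id>
  <metadata_license>FSFAP</metadata_license>
  <name>Example Stable Repository</name>
  <summary>Stable software repository for Example OS</summary>
  [...]
</component>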

Operating System component type

(Since v0.12.5) It makes sense for the operating system itself to be represented in the AppStream metadata catalog. Information about it can be used by software centers to display information about the current OS release and also to notify about possible system upgrades. It also serves as a component that the software center can attribute package updates to when they do not have their own AppStream metadata. The operating-system component type was designed for this, and you can find more information about it in the specification documentation.
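Purely as a hypothetical sketch (every value here is made up; the exact expected tags are in the specification):

<component type="operating-system">
  <id>org.example.exampleos</id>
  <metadata_license>FSFAP</metadata_license>
  <name>Example OS</name>
  <summary>A friendly example Linux distribution</summary>
  <releases>
    <release version="10" date="2019-07-06" date_eol="2022-07-06"/>
  </releases>
  [...]
</component>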

Icon Theme component type

(Since v0.12.8) While styles, themes, desktop widgets etc. are already covered in AppStream via the addon component type, as they are specific to the toolkit and desktop environment, there is one exception: icon themes are described by a Freedesktop specification and (usually) work independently of the desktop environment. Because of that, and on request of desktop environment developers, a new icon-theme component type was introduced to describe icon themes specifically. From the data I see in the wild, and in Debian specifically, this component type appears to be very underutilized. So if you are an icon theme developer, consider adding a metainfo file to make the theme show up in software centers! You can find a full description of this component type in the specification.
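Again purely as a made-up sketch (the specification has the authoritative tag list):

<component type="icon-theme">
  <id>org.example.FancyIcons</id>
  <metadata_license>FSFAP</metadata_license>
  <name>Fancy Icons</name>
  <summary>A flat, colorful icon theme</summary>
  [...]
</component>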

Runtime component type

(Since v0.12.10) A runtime is mainly known in the context of Flatpak bundles, but it actually is a more universal concept. A runtime describes a defined collection of software components used to run other applications. To represent runtimes in the software catalog, the new runtime AppStream component type was introduced in the specification, but it has been used by Flatpak for a while already as a nonstandard extension.
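A minimal sketch of such a component; org.freedesktop.Platform is the well-known Flatpak runtime ID, the rest is made up:

<component type="runtime">
  <id>org.freedesktop.Platform</id>
  <name>Freedesktop Platform</name>
  <summary>Shared runtime used by Flatpak applications</summary>
  [...]
</component>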

Release types

(Since v0.12.0) Not all software releases are created equal. Some may be for general use, others may be development releases on the way to becoming an actual final release. In order to reflect that, AppStream introduced a type property on the release tag in a releases block, which can be set to either stable or development. Software centers can then decide to hide or show development releases.
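A small, made-up example of both release types:

<releases>
  <release type="development" version="1.3~rc1" date="2020-01-15"/>
  <release type="stable" version="1.2" date="2019-11-02"/>
</releases>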

End-of-life date for releases

(Since v0.12.5) Some software releases have an end-of-life date from which onward they will no longer be supported by the developers. This is especially true for Linux distributions which are described in an operating-system component. To define an end-of-life date, a release in AppStream can now have a date_eol property using the same syntax as a date property, but defining the date when the release will no longer be supported (refer to the releases tag definition).
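For example (dates made up), a release supported for two years could be declared like this:

<release version="1.2" date="2019-11-02" date_eol="2021-11-02"/>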

Details URL for releases

(Since v0.12.5) The release descriptions are short, text-only summaries of a release, usually only consisting of a few bullet points. They are intended to give users a fast, quick-to-read overview of a new release that can be displayed directly in the software updater. But sometimes you want more than that. Maybe you are an application like Blender or Krita and have prepared an extensive website with an in-depth overview, images and videos describing the new release. For these cases, AppStream now permits a url tag in a release tag, pointing to a website that contains more information about a particular release.
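A sketch of what this could look like (the URL and version are made up; see the releases tag definition for the exact attributes):

<release version="4.2.0" date="2019-02-04">
  <description>
    <p>This release improves performance[...]</p>
  </description>
  <url>https://example.com/release-notes/4.2.0.html</url>
</release>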

Release artifacts

(Since v0.12.6) AppStream limited release descriptions to their version numbers and release notes for a while, without linking the actual released artifacts. This was intentional, as any information how to get or install software should come from the bundling/packaging system that Collection Metadata was generated for.

But the AppStream metadata has outgrown this more narrowly defined purpose and has since been used for a lot more things, like generating HTML download pages for software, making it the canonical source for all the software metadata in some projects. From Richard Hughes’ awesome Fwupd project also came the need to link to firmware binaries from an AppStream metadata file, as the LVFS/Fwupd use AppStream metadata exclusively to provide metadata for firmware. Therefore, the specification was extended with an artifacts tag for releases, to link to the actual release binaries and tarballs. This replaced the previous makeshift “release location” tag.

Release artifacts always have to link to releases directly, so the releases can be acquired by machines immediately and without human intervention. An artifact can have a type of source or binary, indicating whether a source tarball or binary artifact is linked. Each binary artifact can also have an associated platform triplet for Linux systems, an identifier for firmware, or any other identifier for a platform. Furthermore, we permit sha256 and blake2 checksums for the release artifacts, as well as specifying sizes. Take a look at the example below, or read the specification for details.

<releases>
​  <release version="1.2" date="2014-04-12" urgency="high">
​    [...]
​    <artifacts>
​      <artifact type="binary" platform="x86_64-linux-gnu">
​        <location>https://example.com/mytarball.bin.tar.xz</location>
​        <checksum type="blake2">852ed4aff45e1a9437fe4774b8997e4edfd31b7db2e79b8866832c4ba0ac1ebb7ca96cd7f95da92d8299da8b2b96ba480f661c614efd1069cf13a35191a8ebf1</checksum>
​        <size type="download">12345678</size>
​        <size type="installed">42424242</size>
​      </artifact>
​      <artifact type="source">
​        <location>https://example.com/mytarball.tar.xz</location>
​        [...]
​      </artifact>
​    </artifacts>
​  </release>
​</releases>

Issue listings for releases

(Since v0.12.9) Software releases often fix issues, sometimes security relevant ones that have a CVE ID. AppStream provides a machine-readable way to figure out which components on your system are currently vulnerable to which CVE registered issues. Additionally, a release tag can also just contain references to any normal resolved bugs, via bugtracker URLs. Refer to the specification for details. Example for the issues tag in AppStream Metainfo files:

<issues>
​  <issue url="https://example.com/bugzilla/12345">bz#12345</issue>
​  <issue type="cve">CVE-2019-123456</issue>
​</issues>

Requires and Recommends relations

(Since v0.12.0) Sometimes software has certain requirements that are only satisfied by some systems, and sometimes it might recommend specific things on the system it will run on in order to run at full performance.

I was against adding relations to AppStream for quite a while, as doing so would add a more “functional” dimension to it, impacting how and when software is installed, as opposed to being only descriptive and not essential to be read in order to install software correctly. However, AppStream has pretty much outgrown its initial narrow scope and adding relation information to Metainfo files was a natural step to take. For Fwupd it was an essential step, as Fwupd firmware might have certain hard requirements on the system in order to be installed properly. And AppStream requirements and recommendations go way beyond what regular package dependencies could do in Linux distributions so far.

Requirements and recommendations can be on other software components via their id, on a modalias, specific kernel version, existing firmware version or for making system memory recommendations. See the specification for details on how to use this. Example:

<requires>
  <id version="1.0" compare="ge">org.example.MySoftware</id>
​  <kernel version="5.6" compare="ge">Linux</kernel>
​</requires>
<recommends>
​  <memory>2048</memory> <!-- recommend at least 2GiB of memory -->
​</recommends>

This means that AppStream currently supports provides, suggests, recommends and requires relations to refer to other software components or system specifications.

Agreements

(Since v0.12.1) The new agreement section in AppStream Metainfo files was added to make it easier for software to be compliant with the EU GDPR. It has since been expanded to be used for EULAs as well, which was a request coming (to no surprise) from people having to deal with corporate and proprietary software components. An agreement consists of individual sections with headers and descriptive texts and should – depending on the type – be shown to the user upon installation or first use of a software component. It can also be very useful in case the software component is a firmware or driver (which often is proprietary – and companies really love their legal documents and EULAs).
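A metainfo agreement might look roughly like this (the type, version and text here are made up; check the specification for the exact tags and attributes):

<agreement type="eula" version_id="1.0">
  <agreement_section type="intro">
    <name>Introduction</name>
    <description>
      <p>Example EULA text[...]</p>
    </description>
  </agreement_section>
</agreement>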

Contact URL type

(Since v0.12.4) The contact URL type can be used to simply set a link back to the developer of the software component. This may be a URL to a contact form, their website or even a mailto: link. See the specification for all URL types AppStream supports.
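For example (address made up):

<url type="contact">https://example.com/contact-us</url>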

Videos as software screenshots

(Since v0.12.8) This one was quite long in the making – the feature request for videos as screenshots had been filed in early 2018. I was a bit wary about adding video, as that lets you run into a codec and container hell as well as requiring software centers to support video and potentially requiring the appstream-generator to get into video transcoding, which I really wanted to avoid. Alternatively, we would have had to make AppStream add support for multiple likely proprietary video hosting platforms, which certainly would have been a bad idea on every level. Additionally, I didn’t want to have people add really long introductory videos to their applications.

Ultimately, the problem was solved by simplification and reduction: People can add a video as “screenshot” to their software components, as long as it isn’t the first screenshot in the list. We only permit the vp9 and av1 codecs and the webm and matroska container formats. Developers should expect the audio of their videos to be muted, but if audio is present, the opus codec must be used. Videos will be size-limited, for example Debian imposes a 14MiB limit on video filesize. The appstream-generator will check for all of these requirements and reject a video in case it doesn’t pass one of the checks. This should make implementing videos in software centers easy, and also provide the safety guarantees and flexibility we want.

So far we have not seen many videos used for application screenshots. As always, check the specification for details on videos in AppStream. Example use in a screenshots tag:

​<screenshots>
​  <screenshot type="default">
​    <image type="source" width="1600" height="900">https://example.com/foobar/screenshot-1.png</image>
​  </screenshot>
​  <screenshot>
​    <video codec="av1" width="1600" height="900">https://example.com/foobar/screencast.mkv</video>
​  </screenshot>
​ </screenshots>

Emphasis and code markup in descriptions

(Since v0.12.8) It has long been requested to have a little bit more expressive markup in descriptions in AppStream, at least more than just lists and paragraphs. That has not happened for a while, as it would be a breaking change to all existing AppStream parsers. Additionally, I didn’t want to let AppStream descriptions become long, general-purpose “how to use this software” documents. They are intended to give a quick overview of the software, and not comprehensive information. However ultimately we decided to add support for at least two more elements to format text: Inline code elements as well as em emphases. There may be more to come, but that’s it for now. This change was made about half a year ago, and people are currently advised to use the new styling tags sparingly, as otherwise their software descriptions may look odd when parsed with older AppStream implementation versions.
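A description paragraph using both new tags could look like this (made-up text):

<description>
  <p>Run <code>myapp --import</code> first; the initial import can take <em>several minutes</em>.</p>
</description>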

Remove-component merge mode

(Since v0.12.4) This addition is specified for the Collection Metadata only, as it affects curation. Since AppStream metadata is in one big pool for Linux distributions, and distributions like Debian freeze their repositories, it sometimes is required to merge metadata from different sources on the client system instead of generating it in the right format on the server. This can also be used for curation by vendors of software centers. In order to edit preexisting metadata, special merge components are created. These can permit appending data, replacing data etc. in existing components in the metadata pool. The one thing that was missing was a mode that permitted the complete removal of a component. This was added via a special remove-component merge mode. This mode can be used to pull metadata from a software center’s catalog immediately even if the original metadata was frozen in place in a package repository. This can be very useful in case an inappropriate software component is found in the repository of a Linux distribution post-release. Refer to the specification for details.

Custom metadata

(Since v0.12.1) The AppStream specification is extensive, but it can not fit every single special usecase. Sometimes requests come up that can’t be generalized easily, and occasionally it is useful to prototype a feature first to see if it is actually used before adding it to the specification properly. For that purpose, the custom tag exists. The tag defines a simple key-value structure that people can use to inject arbitrary metadata into an AppStream metainfo file. The libappstream library will read this tag by default, providing easy access to the underlying data. Thereby, the data can easily be used by custom applications designed to parse it. It is important to note that the appstream-generator tool will by default strip the custom data from files unless it has been whitelisted explicitly. That way, the creator of a metadata collection for a (package) repository has some control over what data ends up in the resulting Collection Metadata file. See the specification for more details on this tag.
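A made-up example of the key-value structure (both the key and the value are entirely up to the application consuming them):

<custom>
  <value key="org.example.my-addon-data">some arbitrary value</value>
</custom>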

Miscellaneous additions

(Since v0.12.9) In addition to JPEG and PNG, WebP images are now permitted for screenshots in Metainfo files. Note, though, that these images will – like every image – be converted to PNG by the tool generating the Collection Metadata for a repository.

(Since v0.12.10) The specification now contains a new name_variant_suffix tag, which is a translatable string that software lists may append to the name of a component in case there are multiple components with the same name. This is intended to be primarily used for firmware in Fwupd, where firmware may have the same name but actually be slightly different (e.g. region-specific). In these cases, the additional name suffix is shown to make it easier to distinguish the different components in case multiple are present.
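
In a metainfo file this could look like the following sketch (names invented):

<name>Example Device Firmware</name>
<name_variant_suffix>Region EU</name_variant_suffix>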

(Since v0.12.10) AppStream has a URI format to install applications directly from webpages via the appstream: scheme. This URI scheme now permits alternative IDs for the same component, in case it switched its ID in the past. Take a look at the specification for details about the URI format.

(Since v0.12.10) AppStream now supports version 1.1 of the Open Age Rating Service (OARS), so applications (especially games) can voluntarily age-rate themselves. AppStream does not replace parental guidance here, and all data is purely informational.

Library & Implementation Changes

Of course, besides changes to the specification, the reference implementation also received a lot of improvements. There are too many to list them all, but a few are worth highlighting here.

No more automatic desktop-entry file loading

(Since v0.12.3) By default, libappstream was loading information from local .desktop files into the metadata pool of installed applications. This was done to ensure installed apps were represented in software centers, so that they could be uninstalled. In practice this caused much more pain than it was worth, though: metadata appeared two or three times in software centers because people didn’t set the X-AppStream-Ignore=true key in their desktop-entry files, and the generated data was pretty bad anyway. So, newer versions of AppStream will only load data for installed software that has no equivalent in the repository metadata if the software ships a metainfo file. One more good reason to ship a metainfo file!

Software centers can override this default behavior change by setting the AS_POOL_FLAG_READ_DESKTOP_FILES flag for AsPool instances (which many already did anyway).

LMDB caches and other caching improvements

(Since v0.12.7) One of the biggest pain points in adding new AppStream features was always adjusting the (de)serialization of the new markup: AppStream exists as a YAML version for Debian-based distributions for Collection Metadata, an XML version based on the Metainfo format as default, and a GVariant binary serialization for on-disk caching. The latter was used to drastically reduce memory consumption and increase speed of software centers: Instead of loading all languages, only the one we currently needed was loaded. The expensive icon-finding logic, building of the token cache for searches and other operations were performed and the result was saved as a binary cache on-disk, so it was instantly ready when the software center was loaded next.

Adjusting three serialization formats was laborious and a very boring task. At one point I benchmarked the (de)serialization performance of the different formats and found out that the XML reading/writing was actually massively outperforming that of the GVariant cache. Since the XML parser received much more attention, that was only natural (but there were also other issues with GVariant deserializing large dictionary structures).

Ultimately, I removed the GVariant serialization and replaced it with a memory-mapped XML-based cache that reuses 99.9% of the existing XML serialization code. The cache uses LMDB, a small embeddable key-value store. This makes maintaining AppStream much easier, and we are using the same well-tested codepaths for caching now that we also use for normal XML reading/writing. With this change, AppStream also uses even less memory, as we only keep the software components in memory that the software center currently displays. Everything that isn’t directly needed also isn’t in memory. But if we do need the data, it can be pulled from the memory-mapped store very quickly.

While refactoring the caching code, I also decided to give people using libappstream in their own projects a lot more control over the caching behavior. Previously, libappstream was magically handling the cache behind the back of the application using it, guessing which behavior was best for the given use case. But the application using libappstream actually knows best how caching should be handled, especially when it creates more than one AsPool instance to hold and search metadata. Therefore, libappstream will still pick the best defaults it can, but gives the application all the control it needs, down to deciding where to place a cache file, to permit more efficient and more explicit management of caches.

Validator improvements

(Since v0.12.8) The AppStream metadata validator, run via appstreamcli validate <file>, is the tool that every Metainfo file should be run through to ensure it conforms to the AppStream specification, and to get useful hints for improving the metadata quality. It knows four issue severities:

  • Pedantic issues are hidden by default (show them with the --pedantic flag) and concern upcoming features or “nice to have” things that are completely nonessential.
  • Info issues are not directly a problem, but are hints to improve the metadata and get better overall data. Things the specification recommends but doesn’t mandate also fall into this category.
  • Warnings result in degraded metadata but don’t make the file invalid in its entirety. Yet, they are severe enough that the validation fails. An example is a screenshot URL that no longer resolves: most of the data is still valid, but the result may not look as intended. Invalid email addresses, invalid tag properties etc. fall into this category as well, as they all reduce the amount of metadata systems have available. The metadata should therefore be warning-free in order to count as valid.
  • Errors are outright violations of the specification that will likely result in the data being ignored in its entirety, or large chunks of it being invalid. Malformed XML or invalid SPDX license expressions fall into this group.

Previously, the validator would always print very long explanations for every issue it found. While this was nice if there were few issues, it produced very noisy output and made it harder to quickly spot the actual error. So, the whole validator output was changed to be based on issue tags, a concept also known from other lint tools such as Debian’s Lintian: each error has its own tag string identifying it. By default, we only show the tag string, the line of the issue, the severity and the component name it affects, as well as a short excerpt of the invalid value (in case that’s applicable to the issue). If people do want detailed information, they can get it by passing --explain to the validation command. This solution has many advantages:

  • It makes the output concise and easy to read by humans and is mostly already self-explanatory
  • Machines can parse the tags easily and identify which issue was emitted, which is very helpful for AppStream’s own testsuite but also for any tool wanting to parse the output
  • We can now have translators translate the explanatory texts

Initially, I didn’t want to have the validator return translated output, as that may be less helpful and harder to search the web for. But now, with the untranslated issue tags and much longer and better explanatory texts, it makes sense to trust the translators to translate the technical explanations well.

Of course, this change broke any tool that was parsing the old output. There had also long been requests to have appstreamcli return machine-readable validator output, so it could be integrated better with preexisting CI pipelines and issue-reporting software. Therefore, the tool can now return structured, machine-readable output in the YAML format if you pass --format=yaml to it. That output is guaranteed to be stable and can be parsed by any CI machinery a project already has running. If needed, other output formats could be added in the future, but for now YAML is the only one and people generally seem to be happy with it.
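
Putting the flags mentioned above together, typical validation runs could look like this (the file name is of course made up):

appstreamcli validate org.example.myapp.metainfo.xml
appstreamcli validate --pedantic --explain org.example.myapp.metainfo.xml
appstreamcli validate --format=yaml org.example.myapp.metainfo.xml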

Create desktop-entry files from Metainfo

(Since v0.12.9) As you may have noticed, an AppStream Metainfo file contains some information that a desktop-entry file also contains. Yet, the two file formats serve very different purposes: a desktop file is basically launch instructions for an application, with some information about how it is displayed, while a Metainfo file is mostly display information with little to no launch instructions. Admittedly though, there is quite a bit of overlap, which may make it useful for some projects to simply generate a desktop-entry file from a Metainfo file. This may not work for all projects, most notably ones where multiple desktop-entry files exist for just one AppStream component. But for the simplest and most common of cases, a direct mapping between Metainfo and desktop-entry file, this option is viable.

The appstreamcli tool permits this now, using the appstreamcli make-desktop-file subcommand. It just needs a Metainfo file as its first parameter and a desktop-entry output file as its second parameter. If the desktop-entry file already exists, it will be extended with the new data from the Metainfo file. For the Exec field in a desktop-entry file, appstreamcli will read the first binary entry in a provides tag, or use an explicitly provided line passed via the --exec parameter.
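
For example (file names and binary name invented):

appstreamcli make-desktop-file org.example.myapp.metainfo.xml org.example.myapp.desktop
appstreamcli make-desktop-file --exec "myapp %f" org.example.myapp.metainfo.xml org.example.myapp.desktop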

Please take a look at the appstreamcli(1) manual page for more information on how to use this useful feature.

Convert NEWS files to Metainfo and vice versa

(Since v0.12.9) Writing the XML for release entries in Metainfo files can sometimes be a bit tedious. To make this easier and to integrate better with existing workflows, two new subcommands for appstreamcli are now available: news-to-metainfo and metainfo-to-news. They permit converting a NEWS text file to Metainfo XML and vice versa, and can be integrated with an application’s build process. Take a look at AppStream itself to see how it uses this feature.

In addition to generating the NEWS output or reading it, there is also a second YAML-based option available. Since YAML is a structured format, more of the features of AppStream release metadata are available in the format, such as marking development releases as such. You can use the --format flag to switch the output (or input) format to YAML.
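
Based on the description above, usage could look roughly like this (file names are invented and the argument order is an assumption; the manual page mentioned below has the authoritative syntax):

appstreamcli news-to-metainfo NEWS org.example.myapp.metainfo.xml
appstreamcli metainfo-to-news org.example.myapp.metainfo.xml NEWS
appstreamcli news-to-metainfo --format=yaml releases.yml org.example.myapp.metainfo.xml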

Please take a look at the appstreamcli(1) manual page for a bit more information on how to use this feature in your project.

Support for recent SPDX syntax

(Since v0.12.10) This has been a pain point for quite a while: SPDX is a project supported by the Linux Foundation to (mainly) provide a unified syntax to identify licenses of Open Source projects. However, they changed the license syntax twice in incompatible ways, and AppStream had already implemented a previous version, so we could not simply jump to the latest version without still supporting the old one.

With the latest release of AppStream though, the software should transparently convert between the different version identifiers and also support the most recent SPDX license expressions, including the WITH operator for license exceptions. Please report any issues if you see them!
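
For example, a metainfo file can now carry a modern SPDX expression with a license exception (the expression here is just an illustration):

<project_license>GPL-2.0-or-later WITH Classpath-exception-2.0</project_license>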

Future Plans?

First of all, congratulations for reading this far into the blog post! I hope you liked the new features! In case you skipped here, welcome to one of the most interesting sections of this blog post! 😉

So, what is next for AppStream? The 1.0 release, of course! The project is certainly mature enough to warrant that, and originally I wanted to get the 1.0 release out of the door this February, but that date no longer looks realistic. But what does “1.0” actually mean for AppStream? Well, here is a list of the intended changes:

  • Removal of almost all deprecated parts of the specification. Some things will remain supported forever though: For example the desktop component type is technically deprecated for desktop-application but is so widely used that we will support it forever. Things like the old application node will certainly go though, and so will the /usr/share/appdata path as metainfo location, the appcategory node that nobody uses anymore and all other legacy cruft. I will be mindful about this though: If a feature still has a lot of users, it will stay supported, potentially forever. I am closely monitoring what is used mainly via the information available via the Debian archive. As a general rule of thumb though: A file for which appstreamcli validate passes today is guaranteed to work and be fine with AppStream 1.0 as well.
  • Removal of all deprecated API in libappstream. If your application still uses API that is flagged as deprecated, consider migrating to the supported functions and you should be good to go! There are a few bigger refactorings planned for some of the API around releases and data serialization, but in general I don’t expect this to be hard to port.
  • The 1.0 specification will be covered by an extended stability promise. When a feature is deprecated, there will be no risk that it is removed or becomes unsupported (so the removal of deprecated parts of the specification should only happen this once). What is in the 1.0 specification will quite likely be supported forever.

So, what is holding up the 1.0 release besides the API cleanup work? Well, there are a few more points I want to resolve before releasing the 1.0 release:

  • Resolve hosting release information at a remote location, not in the Metainfo file (#240): This will be a disruptive change that will need API adjustments in libappstream for sure, and certainly will – if it happens – need the 1.0 release. Fetching release data from remote locations as opposed to having it installed with software makes a lot of sense, and I either want to have this implemented and specified properly for the 1.0 release, or have it explicitly dismissed.
  • Mobile friendliness / controls metadata (#192 & #55): We need some way to identify applications as “works well on mobile”. I also work for a company called Purism which happens to make a Linux-based smartphone, so this is obviously important for us. But it also is very relevant for users and other Linux mobile projects. The main issue here is to define what “mobile” actually means and what information makes sense to have in the Metainfo file to be future-proof. At the moment, I think we should definitely have data on supported input controls for a GUI application (touch vs mouse), but for this the discussion is still not done.
  • Resolving addon component type complexity (lots of issue reports): At the moment, an addon component can be created to extend an existing application by $whatever thing. This can be a plugin, a theme, a wallpaper, extra content, etc., all lumped together in the addon supergroup of components. This makes it difficult for applications and software centers to group addons into useful categories when needed – a plugin is functionally very different from a theme. Therefore I intend to possibly allow components to name “addon classes” they support and that addons can sort themselves into, allowing easy grouping and sorting of addons. This would of course add extra complexity, so this feature will either go into the 1.0 release, or be rejected.
  • Zero pending feature requests for the specification: Any remaining open feature request for the specification itself in AppStream’s issue tracker should either be accepted & implemented, or explicitly deferred or rejected.

I am not sure yet when the todo list will be completed, but I am certain that the 1.0 release of AppStream will happen this year, most likely before summer. Any input, especially from users of the format, is highly appreciated.

Thanks a lot to everyone who contributed or is contributing to the AppStream implementation or specification, you are great! Also, thanks to you, the reader, for using AppStream in your project 😉. I definitely will give a bit more frequent and certainly shorter updates on the project’s progress from now on. Enjoy your rich software metadata, firmware updates and screenshot videos meanwhile! 😀


A few days ago Marc Mutz, colleague of mine at KDAB and also author in this blog, spotted this function from Qt’s source code (documentation):

/*!
    Returns \c true if the string only contains uppercase letters,
    otherwise returns \c false.
*/
bool QString::isUpper() const
{
    if (isEmpty())
        return false;

    const QChar *d = data();

    for (int i = 0, max = size(); i < max; ++i) {
        if (!d[i].isUpper())
            return false;
    }

    return true;
}

Apart from the mistake of considering empty strings not uppercase, which can be easily fixed, the loop in the body looks innocent enough. How would we figure out if a string only contains uppercase letters (as per the documentation in the snippet), anyhow?

  • Look at the string character by character;
  • If we see a non-uppercase character, the string is not uppercase;
  • Otherwise, it is uppercase.

That’s exactly what the for loop in the code above is doing, right?

Well, no.

The code above is broken.

It falls into the same trap as endless amounts of similar code: it doesn’t take into account that QString does not contain characters/code points, but rather UTF-16 code units.

All operations on a QString (getting the length, splitting, iterating, etc.) always work in terms of UTF-16 code units, not code points. The reality is: QString is Unicode-aware only in some of its algorithms; certainly not in its storage.

For instance, if a string contains simply the character “𝐀” — that is, MATHEMATICAL BOLD CAPITAL A (U+1D400) — then its QString storage would actually contain 2 “characters” reported by size() (again, really, not characters in the sense of code points but two UTF-16 code units): 0xD835 and 0xDC00.

The naïve iteration done above would then check whether those two code units are uppercase, and guess what, they’re not; and therefore conclude that the string is not uppercase, while instead it is. (Those two code units are “special” and used to encode a character outside the BMP; they’re called a surrogate pair. When taken alone, they’re invalid.)
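
A quick way to see this in action (a minimal sketch; the comments state what the reasoning above predicts):

#include <QString>
#include <QDebug>

int main()
{
    // U+1D400 MATHEMATICAL BOLD CAPITAL A lies outside the BMP, so QString
    // stores it as the surrogate pair 0xD835 0xDC00.
    const uint codePoint = 0x1D400;
    const QString s = QString::fromUcs4(&codePoint, 1);

    qDebug() << s.size();          // 2: UTF-16 code units, not code points
    qDebug() << s.at(0).isUpper(); // false: a lone surrogate is not uppercase
    qDebug() << s.isUpper();       // false with the code above, although the string is uppercase
}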

Wherefore art thou, Unicode?

If you want to know more about what all of this Unicode story is about, please take a few minutes and read this and this. The resources linked are also good reads.

The problem of Unicode-aware iteration over string data is so common and frequent that back in 2014 I contributed a new class to Qt to solve it. The class is called, unsurprisingly, QStringIterator.

From its own documentation:

QStringIterator is a Java-like, bidirectional, const iterator over the contents of a QString. Unlike QString’s own iterators, which manage the individual UTF-16 code units, QStringIterator is Unicode-aware: it will transparently handle the surrogate pairs that may be present in a QString, and return the individual Unicode code points.

Any code that walks over the contents of a QString should consider using QStringIterator, thereby preventing all such possible mistakes and leaving the burden of decoding UTF-16 into a series of code points to Qt. Indeed, QStringIterator is now used in many critical places inside Qt (text encoding, font handling, text classes, etc.).

How do I use it?

For various reasons (see below), QStringIterator is private API at the moment. Code that wants to use it has to include its header and enable the use of private Qt APIs, for instance with qmake:

QT += core-private

Or similarly with CMake:

target_link_libraries(my_target Qt5::CorePrivate)

Then we can include it, and use it to properly implement isUpper():

#include <private/qstringiterator_p.h>

bool QString::isUpper() const
{
    QStringIterator it(*this);
 
    while (it.hasNext()) {
        uint c = it.next();
        if (!QChar::isUpper(c))
            return false;
    }

    return true;
}

The call to next() will read as many code units as are necessary to fully decode the next code point, and it will also do error checking.

(In this case it will return U+FFFD (REPLACEMENT CHARACTER), which has the nice property of not being uppercase, therefore making the function return false. But this is an implementation detail; calling QString algorithms on a string that contains illegal UTF-16 encoded data is unspecified behavior already, so don’t do it.)

QStringIterator’s API is quite rich; it supports bidirectional iteration, some customization of what should happen in case of decoding failure, as well as unchecked iteration (iteration that assumes that the QString contents are valid UTF-16; this allows skipping some checks).

That’s it, no more excuses, start using QStringIterator today!

Regarding the QString::isUpper() function that we started this journey with: trying to fix it caused quite a discussion during code review, as you can see here and here.

Why isn’t QStringIterator public API?

There are a few reasons why I am keeping QStringIterator as private API. It’s not because its API is in constant evolution — actually, it has not changed significantly in the past 6 years. QStringIterator even has complete documentation, tests and examples (the documentation is readable here).

From my personal point of view:

  • The API would benefit from a serious uplifting, becoming more C++ oriented, and way less Java oriented. Rather than writing this:
    QStringIterator i(str);
    while (i.hasNext())
      use(i.next());
    

    one should also be able to write something like this:

    // C++11
    for (auto cp : QStringIterator(str))
      use(cp);
    
    // C++20
    auto stringLenInCodePoints = std::ranges::distance(QStringIterator(str));
    bool stringIsUpperCase = std::ranges::all_of(QStringIterator(str), &QChar::isUpper);
    
    // C++20 + P1206
    auto decodedString = QStringIterator(str) | std::ranges::to<QVector<uint>>;
    

    None of the required APIs to make this possible exist at the moment — QStringIterator is neither a range nor an iterable type.

    Making it so opens up many, many API problems: from minor things, such as whether QStringIterator is still a good name for a type that, as a range, would yield iterators of its own; to huge design problems, like how to add customization points to decide how to handle strings containing malformed UTF-16 data (skip? replace? stop? throw an exception?).

  • The implementation is optimized for clarity, not raw speed. At the moment, it doesn’t use SIMD or any similar intrinsics. I strongly feel that it may benefit from such improvements if we redesign its API (e.g. making the failure mode a customization point).
  • There is other, similar, more general purpose work happening elsewhere. For instance, in the glorious ICU libraries, in the work happening in the SG16 WG21 study group, in the proposed Boost.Text, and so on. We may just decide to use the results of some of that work, rather than coming up with a Qt-specific way of using a particular algorithm (UTF-16 decoding).
  • Unicode is complicated, and we may have forgotten to handle some corner case properly. If we set QStringIterator’s API/ABI in stone (by making it public), we risk ending up with our hands tied for future necessary expansion.
  • Most of Qt assumes valid UTF-16 content in QStrings (see the comment above). We need a project-wide decision on how to actually detect and tackle invalid UTF-16 content, and enforce it consistently. QStringIterator should therefore follow such decision, and that becomes very hard if we’re again constrained by the public API promise.

With all of this in mind, I am not comfortable making QStringIterator public API at the moment. But again, that doesn’t mean you can’t use it in your code today, and maybe submit some feedback.

Happy hacking!



