Criminalization of encryption : the 8 december case – La Quadrature du Net
Tags: tech, france, law, foss, surveillance, criticism, cryptography
A reminder of what’s going on in France… and it’s bleak. Lots of things can turn you into a suspect at this point.
You encrypt your communications or data? You try to avoid surveillance from the GAFAM? You use Tor or a VPN?
If anything bad happens around you, you’ll turn into a suspect because of the “clandestine behavior” you engage in… so surely you must be part of some “conspiracy”.
How to Kill a Decentralised Network (such as the Fediverse)
Tags: tech, gafam, facebook, fediverse, xmpp, standard, community
Facebook getting interested in the Fediverse indeed looks like XMPP or OOXML all over again. Beware of those old tactics; they are very effective against communities.
This is a neat example of what programming languages could check at compile time. This clearly brings way more safety when you get such contract validation at build time.
Rust fact vs. fiction: 5 Insights from Google’s Rust journey in 2022 | Google Open Source Blog
Tags: tech, programming, rust
Interesting to see what gets confirmed (slow compiler, nice compiler error messages, code quality) or debunked (steep learning curve, interoperability).
Good piece about the hype cycles our industry constantly falls into. So much undue complexity for nothing in some projects… and then we complain about technical debt. It would have been much easier to pick the right tool instead of reaching for whatever new thing some big tech company just released.
Interesting way to look at our profession… I wonder if this is the core reason why we have a hard time turning into a proper engineering discipline. Is that even possible at all, then?
We’re about halfway through the year! This update is a bit smaller than usual, and more focused on applications than Plasma. This isn’t for lack of time or trying; I deliberately set out to clear my backlog instead. That goal didn’t really pan out, but I’ll be trying again next month.
Craft
The Android build for Tokodon (and consequently its CI) was broken because I replaced the video player with libmpv, so I spent a good chunk of this month making sure it’s working again. This was a little difficult to fix, but I feel much more confident with Craft now.
If you’re not familiar with Craft, it’s a meta-build system created by KDE. Craft and its blueprints are written in Python. Blueprints describe how to build an application or library, and have easy utilities for working with existing build systems (AutoTools, CMake, Meson, etc.). It may be a little daunting, but these blueprints are easy to write and maintain. More importantly, Craft enables easy cross-compilation, since it contains blueprints for the libraries underlying KDE applications: OpenSSL, zlib, and so on.
Tokodon is (to my knowledge) the first KDE application to use libmpv on Android, so I needed to break a lot of new ground to make this happen. What’s exciting though is that any KDE application that uses libmpv (PlasmaTube?) doesn’t have to jump through hoops to make it build on Android.
Sorry, this section will be mostly text because I can’t exactly visualize build files!
zlib
zlib is already included in the Android NDK, but for some reason they don’t ship pkg-config files for it (or for anything, really). For example, Freetype declares a dependency on zlib in its pkg-config file, and then pkg-config (the program) starts complaining that it can’t “find zlib” although the library itself exists. That’s because pkg-config is searching for zlib’s config file, and doesn’t care whether the .so or .a file exists on disk.
For now, I propose enabling zlib on Android anyway. It’s odd that we have to build it again, but I don’t see an easier solution right now.
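For reference, what pkg-config actually searches for is just a tiny text file. A minimal zlib.pc would look roughly like this (paths and version are placeholders for wherever the library is installed):

```ini
prefix=/usr/local
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: zlib
Description: zlib compression library
Version: 1.3.1
Libs: -L${libdir} -lz
Cflags: -I${includedir}
```

Without such a file in the search path, pkg-config reports the library as missing, no matter which .so or .a files exist.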
kirigami-addons
This is a really simple fix, QtMultimedia was missing as a dependency for this library. The new fullscreen image viewer uses QtMultimedia under the hood to display video, but it wasn’t declared in the blueprint yet.
The versions of libarchive currently in Craft had an Android-breaking header inclusion bug, preventing compilation if you use any of its headers.
Now it’s bumped to 3.6.2 (the latest version as of writing), see the merge request. Support for the previous version is also being dropped, let’s see if that breaks anything!
ffmpeg
Tokodon uses libmpv to display videos from Mastodon and other services, which are served over HTTPS. libmpv uses ffmpeg to fetch video data, but the Craft blueprint for ffmpeg didn’t enable the TLS backend.
See the merge request which enables the OpenSSL backend for all platforms, which could benefit other KDE applications!
fribidi and mpv
When deploying to Android, we use a tool called androiddeployqt to package the libraries required by an application. One of the checks it performs is verifying the version of each .so before putting it inside the APK. If your application links to, say, libmpv.so.1, then androiddeployqt will stop in its tracks. Fribidi and mpv felt the need to set versions, without a way to tell them to stop.
See this merge request which now builds these two libraries statically, completely side-stepping the issue. I think this is an okay solution for now, as a shared library doesn’t make much of a difference on Android. If someone knows a better way to stop them from putting out versioned shared libraries, let me know.
Meson
Some dependencies of mpv (such as fribidi) use Meson as their only build system, which didn’t have cross-compilation support in Craft yet. Meson is nice and all, but its cross-compilation setup is really obtuse: I need to write a separate cross file! This is fed into the meson command, and I don’t understand why we can’t pass these options as arguments or environment variables.
See the merge request for enabling cross compilation for Meson projects under Craft. There’s still a bit of work left for me to do, like figuring out how to handle switching between toolchains.
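For the curious, a Meson cross file is a small INI-style description of the target toolchain, passed via --cross-file. A minimal sketch for Android might look like this (the exact compiler names depend on your NDK setup and are placeholders here):

```ini
[binaries]
c = 'aarch64-linux-android28-clang'
cpp = 'aarch64-linux-android28-clang++'
ar = 'llvm-ar'
strip = 'llvm-strip'

[host_machine]
system = 'android'
cpu_family = 'aarch64'
cpu = 'aarch64'
endian = 'little'
```

So the information itself is simple; it is just that Meson insists on getting it from a file rather than from command-line options or environment variables.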
PlasmaTube
I’ve been using PlasmaTube more lately, so that means more and more fixes to it! First, non-video results are no longer displayed. That’s these blank little fellows in the video grid:
The two player modes (minimized and maximized) are now stored as a binary flag, instead of being guessed. In simpler terms, this means that when you maximize the window, it no longer enters a weird state where half of the player is open.
I also improved the video loading experience a bit. Now, when you explicitly stop a video, it doesn’t momentarily reappear when you click on another one. This doesn’t affect clicking between videos while one is still playing or paused, though.
The videos on a channel page load now! It’s not categorized or anything yet, but it’s a good start.
I completed some important tasks for Tokodon this week! It may seem boring, but this includes important groundwork (including the Android stuff above) so we can focus on implementing more useful features.
Better authentication
I’m starting to remove the manual authentication code step, which means it’s now even easier to log in to Tokodon. Once you complete the login flow in your browser, it magically returns you to Tokodon. How to get this working in a Qt/KDE application (and on Android, no less) is apparently a bit of a mystery, so that might be future blog material.
I have some local changes enabling support for this on Android, and I’m hopeful this will appear in the next release. As a bonus, it lets you log into Pixelfed! For some reason, they don’t support copying the authcode and only support URI callbacks to return to the application. Once this lands, Tokodon can work as a Pixelfed client!
The settings page no longer crashes because Sonnet was erroneously included on Android.
There were a bunch of missing icons on Android; now they are included!
When entering the instance URL during the login flow, the keyboard doesn’t try to uppercase the first letter.
Config improvements
I landed the much-needed config overhaul which fixes two major pain points: configuration location and security. We used KConfig, but only for the main configuration options, not for the account information. This meant there was this odd ~/.config/KDE/tokodon.conf file, which contained secret information to boot. Now everything is handled by KConfig, and the volatile account information is stored separately from the main configuration options in ~/.local/share/Tokodon/tokodonstaterc (possibly moving to ~/.local/state in KDE Frameworks 6).
The account token and client secrets are now stored in your system keychain (like KWallet or GNOME Secrets)! When you start Tokodon 23.08, it will automatically migrate your config files, so there’s nothing you need to do.
I’m also working on sharing through Tokodon! It still needs some work. My current plan is to have it open the composer if already running, or open in a standalone composer window if it isn’t.
This is my fourth blog post and a continuation to my previous blog posts for Google Summer of Code 2023 under KDE.
In this blog, I will be sharing my experiences during the 3rd and 4th weeks of GSoC'23.
Week 3 | Finalizing Account Moderation tool
In my previous blog post, I mentioned that I had worked on implementing the initial page of the Account Moderation Tool in the first two weeks. This week, I began implementing the MainAccountToolPage.qml which serves as the main page of the tool where moderators can view admin-level details and take actions against an account.
I started by parsing the API response JSON and implementing all the necessary models and methods in the C++ backend. The most interesting part of implementing the backend code was determining the API call for each POST request. Initially, I was considering writing separate methods for each endpoint, but after going through the source code of Tokodon, I noticed how cleverly Tokodon uses hash maps to handle the different endpoints for a POST request. So I went with a similar implementation for my tool.
Next was implementing the QML front-end. As I am relatively new to writing QML and working with the Kirigami framework, this part was rather more challenging. Fortunately, I have been a Plasma user for a long time, so whenever I got stuck, I would refer to other KDE applications’ source code or ask my mentor to help me out.
Finally, after lots of refactoring and code reviews, the maintainers approved my MR, and it got successfully merged into the master branch 😮💨.
Images of implemented account moderation tool.
Week 4 | Adding the initial page of the Report Moderation tool
In the fourth week, I started with the implementation of the Report Moderation tool. As I had already implemented the Account Moderation tool, I expected things to be similar this time. I started with the C++ backend first; the only difference this time was using the Post class to parse the reported status. This was a bit tricky to figure out, as I had initially thought of writing individual methods for each parameter in the ReportInfo class, which would have been inefficient.
On the QML side, things didn’t go as smoothly this time. I faced many binding and polish loops while laying out the content, which were very tricky to solve. The QML error logs didn’t point to any specific line number or code block so I had to fix them by isolating individual blocks of code and debugging them individually.
By the end of the 4th week, I was able to implement the initial page of the Report Moderation tool. The MR is still under review, and you can track it here.
Image of the implemented initial page of the Report Moderation Tool.
I will be writing regular blog posts on my website. You can read my previous blog posts and follow my progress here.
regarding the compile time improvements, I have the suspicion that included moc files would help with incremental build times, possibly even quite noticeably (compiling the combined automoc file can be quite expensive), but no idea how that impacts clean builds
And while he was occupied with other things, this suspicion caught my interest and curiosity, so I found some slots to give it a closer look and also learn some more.
After all, people including myself had removed quite some explicit moc includes over the years, also in KDE projects, enjoying the existing automoc magic for less manual code. Just that in the meantime, as I soon noticed, Qt developers had stepped up efforts to add the missing ones to the Qt libraries themselves, surely for good reasons.
Back to the basics…
Let’s take a simple example of two independent QObject sub-classes Foo and Bar, with own header and source files:
foo.h: class Foo : public QObject { Q_OBJECT /* ... */ };
bar.h: class Bar : public QObject { Q_OBJECT /* ... */};
bar.cpp: #include "bar.h" /* non-inline Bar method definitions */
CMake’s automoc will detect the respective Q_OBJECT macro usages and generate build system rules to have the moc tool create the respective files moc_foo.cpp and moc_bar.cpp, which contain the code complementing the macro (e.g. for the class meta object).
If no source files include those generated moc files, CMake will then have added rules to generate, for each library or executable target, a central file mocs_compilation.cpp which includes them:
// This file is autogenerated. Changes will be overwritten.
#include "<SOURCE_DIR_CHECKSUM>/moc_foo.cpp"
#include "<SOURCE_DIR_CHECKSUM>/moc_bar.cpp"
This results in a single compilation unit with all the moc code. It is faster to build compared to compiling all moc files in separate compilation units. Note the “all” here, as all the moc code only needs to be built together in full project (re)builds.
Incremental builds want to rebuild a minimal set of sources
When working on a codebase, one usually does incremental builds, so only rebuilding those artifacts that depend on changed sources. That gives quick edit-build-test cycles, helping to keep concentration stable (when no office-chair sword duel tournaments are ongoing anyway).
So for the example above when the header foo.h is edited, in an incremental build…
the file foo.cpp is recompiled as it includes this header…
next moc_foo.cpp to be regenerated from the header and then…
mocs_compilation.cpp to be recompiled, given it includes moc_foo.cpp.
Just, as mocs_compilation.cpp does not only include moc_foo.cpp, but also moc_bar.cpp, this means the code from moc_bar.cpp is also recompiled here, even though it does not depend on foo.h.
So the optimization of having a single compilation unit for all moc files for headers, done for full builds, results in unneeded extra work for incremental builds. Which gets worse with any additional header that needs a moc file, which then also is included in mocs_compilation.cpp. And that is the problem Volker talked about.
Impact of mocs_compilation.cpp builds
On the author’s system (i5-2520M CPU @ 2.5 GHz, with SSD) some measurements were done by calling touch on a mocs_compilation.cpp file (touch foo_autogen/mocs_compilation.cpp), then asking the build system to update the respective object file and measuring that with the tool time (time make foo_autogen/mocs_compilation.cpp.o).
To have some reference, first a single moc file of a most simple QObject subclass was looked at, where times averaged around 1.6 s. Then random mocs_compilation.cpp found in the local build dirs of random projects were checked, with times measured in the range of 5 s to 14 s.
Multiple seconds spent on mocs_compilation.cpp, again and again, those can make a difference in the experience with incremental builds, where the other updates might take even less time.
Impact of moc include on single source file builds
Trying to measure the cost which including a moc file adds to (re)compiling a single source file, again the tool time was used, with the compiler command as taken from the build system to generate an object file.
A few rounds of measurement only delivered average differences that were one or two magnitudes smaller than the variance seen in the times taken, so the cost is considered unnoticeable. A guess is that the compiler can reuse, for the added moc-generated code, all the work already done for the other code in the including source file, and the moc-generated code itself is relatively uncomplicated.
This is in comparison to the noticeable time it needs to build mocs_compilation.cpp, as described above.
Impact of moc includes on full builds, by examples
An answer to “no idea how that impacts clean builds” might be hard to derive in theory. The effort it takes to build the moc generated code separately in mocs_compilation.cpp versus the sum of the additional efforts it takes to build each moc generated code as part of source files depends on the circumstances of the sources involved. The measurements done before for mocs_compilation.cpp and single source files builds though hint to overall build time reduction in real-world situations.
For some real-world numbers, a set of patches for a few KDE repos have been done (easy with the scripts available, see below). Then some scenario of someone doing a fresh build of such repo using the meta-build tool kdesrc-build was run a few times on an otherwise idle developer system (same i5-2520M CPU @ 2.5 GHz, with SSD), both for the current codebase and then with all possible moc includes added.
Using the make tool, configured to use 4 parallel jobs, with the build dir always completely removed before, and kdesrc-build invoked with the --build-only option, so skipping repo updates, the timing was measured using the time tool as before. time reports by “real” the wall clock timing, while “user” reports the sum of times of all threads taken in non-kernel processor usage. The time spent by related kernel processing (“sys”) was ignored due to being very small in comparison.
The numbers taken in all cases showed that clean builds got faster with moc includes, with build times partially reduced by more than 10 %:
Further, less controlled time measurements of my own for other codebases support this impression, as well as reports from others (“total build time dropped by around 10%.”, Qt Interest mailing list in 2019). With that, for now it would be assumed that times needed for clean builds are not a reason against moc includes, rather the opposite.
And there are more reasons, read on.
Reducing need for headers to include other headers
moc generated code needs to have the full declaration of types used as values in signals or slots method arguments. Same for types used as values or references for Q_PROPERTY class properties, in Qt6 also for types used with pointers:
class Bar; // forward declaration, not enough for moc generated code here
class Foo : public QObject {
Q_OBJECT
Q_PROPERTY(Bar* bar READ barPointer) // Qt6: full Bar declaration needed
Q_PROPERTY(Bar& bar READ barRef) // full Bar declaration needed
Q_PROPERTY(Bar bar READ barValue) // full Bar declaration needed
Q_SIGNALS:
void fooed(Bar bar); // full Bar declaration needed
public Q_SLOTS:
void foo(Bar bar); // full Bar declaration needed
// [...]
};
So if the moc file for class Foo is compiled separately and thus only sees the given declarations as above, it will fail to build.
This can be solved by replacing the forward declaration of class Bar with the full declaration, e.g. by including a header where Bar is declared, which itself again might need more declarations. But this is paid by everything else which needs the full class Foo declaration now also getting those other declarations, even if not useful.
Solving it instead by including the moc file in a source file with definitions of class Foo methods, with full class Bar declaration available there, as usually already needed for those methods, allows to keep the forward declaration:
#include "foo.h"
#include "bar.h" // needed for class Foo methods' definitions
// [definitions of class Foo methods]
#include "moc_foo.cpp" // moc generated code sourced
Which keeps both full and incremental project builds faster.
In KDE projects, while making them Qt6-ready, a set of commits with messages like “Use includes instead of forward decl where needed” were made, due to the new requirements of moc-generated code for pointer types in properties. These would not have been needed with moc includes.
Enabling clang to warn about unused private fields
The clang compiler is capable of checking and warning about unused private class members if it can see all class methods in the same compilation unit (GCC so far needs to catch up):
class Foo : public QObject {
Q_OBJECT
/* ... */
private:
bool m_unusedFlag;
};
The above declaration will trigger a warning if the moc file is included in the source file holding the definitions of all (normal) non-inline methods:
/.../foo.h:17:10: warning: private field 'm_unusedFlag' is not used [-Wunused-private-field]
bool m_unusedFlag;
^
But not if the moc file is compiled separately, as the compiler has to assume the other methods might use the member.
Better binary code, due to more in the compilation unit
A moc include into a source file provides the compiler with more material in the same compilation unit, which is said to be usable for some optimizations:
Indeed, when building libraries in Release mode, so with some optimization flags enabled, it can be observed that the size shrank by a few per mille for some. So at least size was optimized. For others though it grew a tiny bit, e.g. in the .text section with the code. It is assumed this is caused by code duplication due to inlining. So there, runtime is optimized at the cost of size, and one has to trust the compiler for a sane trade-off, as done with all the other, normal code.
For another example, one of the commits to Qt’s own modules establishing moc includes for them reports in the commit message for the QtWidgets module:
A very simple way to save ~3KiB in .text size and 440b in data size on GCC 5.3 Linux AMD64 release builds.
So far it sounds like it is all advantages, so what about the disadvantages?
More manual code to maintain with explicit moc include statements
To have explicit include statements for each moc file covering a header (e.g. moc_foo.cpp for foo.h) means more code to manually maintain. Which is less comfortable.
Though the same is already the case for moc files covering source files (e.g. foo.moc for foo.cpp): those have to be included, given the class declarations they need are in that very source file. So doing the same also for the other type should not feel that strange.
The other manual effort needed is to ensure that every needed moc include is actually present. At least with CMake’s automoc things will just silently work: any moc file not explicitly included is automatically included via the target’s mocs_compilation.cpp file. That one is currently always generated, built and linked into the target (TODO: file a wish with CMake for a flag to have no mocs_compilation.cpp file).
One approach to enforce moc includes might be to add respective scripts as commit hooks, see e.g. check-includemocs-hook.sh from KDAB’s KDToolBox.
No longer needed moc includes are also not critical with CMake’s automoc, an empty file will be generated and a warning added to the build log. So the developer can clean-up later when there is time.
So the cost is one include statement per moc-covered header and its occasional maintenance.
Automated moc file include statements addition, variant 6
There exist already a few scripts to scan sources and amend include statements for moc files where found missing, like:
includemocs from the KDE Development Scripts repo (as old as 23 (sic!) years, see introducing commit)
Initially I was not aware of all of them. The ones tested (KDE’s, KDAB’s & Remy van Elst’s) failed to match header files with the basename suffixed by “_p” (e.g. foo_p.h) to source files without that suffix (e.g. foo.cpp). So there is now a (working for what it was used for) draft of yet another script, addmocincludes. Oh dear.
Suspicion substantiated: better use moc includes
As shown above, it looks like the use of explicit includes also for header moc files improves things for multiple stakeholders:
developers: gain from faster full & incremental builds, more sanity checks
users: gain from runtime improvements
CI: gains from faster full builds
packagers: gain from faster full builds
All paid by the cost of one explicit include statement for each moc-covered header and its occasional maintenance. And in some cases a slightly bigger binary size.
Seems a good deal, no? So…
pick one of the scripts above and have it add more explicit moc includes
check for some now possible forward declarations
look out for any newly discovered unused private members
PROFIT!!! (enjoy the things gained long term by this one-time investment)
Update (Aug 14th):
To have the build system work along these ideas, two issues have now been filed with CMake’s issue tracker.
Celebrating 25 years of The KDE Free Qt Foundation | KDE.news
Tags: tech, kde, foss, community, licensing
Happy birthday to the KDE Free Qt Foundation! It’s really nice to see it has survived the test of time. It is for sure an essential tool of the KDE ecosystem. I wish there were more such foundations around.
Following up on his “The Free Software Foundation is dying” post, Drew DeVault has been working on the messaging part of his recommendations. The result is not bad at all!
Raters helped train Google’s AI. But after speaking out, they were fired. - The Washington Post
Tags: tech, machine-learning, ai, gpt, google
Maybe it’s time to make so called “reinforcement learning from human feedback” actually humane? It’s not the first account along those lines in the industry.
We went from quality to quantity, it seems. We also have whole swaths of developers who just consume content without critical thinking, and it’s a problem. The conclusion says it all: “Don’t consume. Create. Ask questions. Stay curious.”
Why Static Typing Came Back • Richard Feldman • GOTO 2022 - YouTube
Tags: tech, programming, language, type-systems
Interesting point of view on why static typing seems to be making a comeback right now and why that’s likely to continue. I think a few of the arguments in here are wrongly framed (some of the benefits of using an interpreter rather than a compiler are attributed to dynamic typing, while that is rather orthogonal), but a large part of the analysis seems valid to me.
Unsurprisingly, it’s not as simple as it sounds. Type hints in Python can be used for various reasons, but performance is rarely the main motive. It’d take other adjustments to the runtime as well. People are working on it, and this article is an interesting dive into how things work under the hood.
Kind of sad to see asserts misused so much in the Python community. Still, that’s a good lesson for everyone: when using an assert, expect that it won’t get executed in production.
Interesting deep dive into the string formatting features of Rust and C++23. It shows the differences quite well. It also does a good job of highlighting the pros and cons of each approach.
Maps Distort How We See the World - by Tomas Pueyo
Tags: geography, map
Great article. We know that the projections we use can’t give a proper picture of the world. We often don’t realize by how much they distort our views and what we miss. This is a good summary of the various biases in our maps.
Having come across sources using include statements for some Qt module headers (like #include <QtDBus>), memories arose about a check from the static analyzer tool krazy, as once conveniently run on KDE’s former ebn.kde.org site. That check, called includes, poked one not to use Qt module headers, as they result in the inclusion of all the headers of those modules, and then again those of the other Qt modules used by the module. Which then meant more stuff for the compiler to process for compilation units with such module header includes.
So is that perhaps in 2023 no longer a real-world noticeable issue? A first look at some preprocessor outputs (with Qt5) for a single line file with just an include statement hinted though it might still be true:
foo.cpp: #include <QtDBus>
foo.cpp.i: 137477 lines
foo.cpp: #include <QDBusReply>
foo.cpp.i: 86615 lines
So 50862 more lines of mainly declarations and inline methods, where a good part might not be needed at all by other code in a file including the header, yet is processed each time. And if such includes are placed in headers, this happens for a lot of compilation units. Given that most normal source files are shorter, it seemed like, as a result, this difference might still be noticeable given the order of magnitude in the extreme example above.
Wait some minutes less now
The KDE Frameworks module NetworkManagerQt was found to use quite a lot of QtDBus module header includes. While overall following mostly the include-only-what-you-need-and-forward-declare-otherwise mantra. Possibly those includes have been a result of tools generating code and using the module headers to speed up initial development experience.
A patch to replace those QtDBus module header includes with includes of headers as just needed for the classes & namespace used was done. It turned out that the number of additional include statements needed afterwards was rather small, so no bigger costs there.
For a simple test on the real world effects, an otherwise idle developer system, with hot cache for the source files by previous runs, with SSD and old i5-2520M 2.5 GHz CPU, was used. For both variants the build dir would be cleaned by make clean and then a single job make run started, timed with the time tool, by time make. The results were this (repeated runs hinted those numbers are representative):
                     #include <QtDBus>   #include <[headerasneeded]>
real (wall clock)    18m51,032s          14m6,925s
user                 17m58,326s          13m22,964s
sys                  1m54,234s           1m26,826s
So an overall build-time reduction by around a quarter for a clean(ed) build.
Incremental builds during development should also gain, but not measured, just assumed.
Wait* on the code, to not wait on the build
(*as in waiter)
So in the spirit of the old Krazy includes check, consider to take a look at your codebase if not some Qt module header includes (QtCore, QtDBus, QtQml, QtGui, QtWidgets, QtNetwork, …) have sneaked in which might be simple to replace by “normal” includes.
Note: there is at least one tricky include with QtConcurrent, as that module shares its name with the main C++ namespace. So one might have used #include <QtConcurrent>, due to familiar Qt patterns and because the API documentation also tells you to. Just, that include gets you the module header, which then also pulls in #include <QtCore> with all its headers. Looking at the include directory of that module, one can find dedicated headers to use instead, like QtConcurrentRun. While many codebases, e.g. in KDE projects, rely on those, they still need to be officially documented (QTBUG-114663).
In case one would like some more modern automation tool to check for the use of Qt module header includes, take a look at the current work to add a check to the static code analyzer Clazy.
People are often asking the same questions again and again about some of my
projects, so it might be a good opportunity to write a small FAQ.
If you get redirected here, don’t take it personally; I am getting asked these
questions very often, and I feel people often misunderstand how open-source
projects work.
Why does X not have feature Y?
The most likely reason is that it still needs to be implemented. It doesn’t
mean that I or other maintainers are against this feature. It is just that X is
a purely non-commercial project, and I and others are currently working on it
during our free time. Unfortunately, free time is a very limited and precious
resource. Between our day jobs or university projects, sleeping, eating, and
other social activities, little time and energy is left.
We definitely might implement the feature in the future, but the best way to
ensure this gets implemented promptly is to get involved and implement it
yourself. Feel free to join our development channel beforehand and confirm that
the feature is something we would like to have in the application.
We are happy to guide you to make the onboarding experience as good as possible
and to help you familiarize yourself with the code base. Getting involved with
open-source projects is also an excellent opportunity to learn how to program.
Check out the Google Summer of Code,
Outreachy, or
Season of KDE for special mentoring programs, but
getting involved outside these programs is also possible. On a personal note, I
learned most of my programming skills by contributing to KDE, so I am very
thankful for that.
When will you implement feature Y? Is Y on your roadmap?
Similarly to the previous question, X is a non-commercial project implemented
during my and others’ free time. We can’t say precisely when we will be done
with a particular feature as it depends on many factors: how much energy do we
currently have to work on this project after our day job/university projects,
are there more pressing issues, are there a technical blocker which needs to be
solved first, it is fun to implement…
Again the best way to speed this up is to get involved.
What is your roadmap?
We are a non-commercial project which is purely volunteer based. We are just a
bunch of developers and designers; we do not have managers, stakeholders, or
clients influencing our work directly by setting deadlines or by asking and
paying for specific features. Part of the reason we enjoy working on this
project is that we have a lot of freedom in terms of what we want to work on
and are also very flexible, allowing us to switch to a different task whenever
we want.
Again the best way to influence the direction of this project is to get
involved.
Unifying the KRunner sorting mechanisms for Plasma6 & further plans
In Plasma5, we had different sorting implementations for KRunner and Kicker. This had historical reasons: Kicker only used a subset of the available KRunner plugins. Due to their increased reliability, we decided to allow all available plugins to be loaded. However, the model still hard-coded the order in which the categories are displayed.
This was reported in this bug which received numerous duplicates.
To address this concern, I focused on refactoring and cleaning up KRunner as part of KDE Frameworks 6. Among the significant architectural changes was the integration of KRunner’s model responsible for sorting into the KRunner framework itself. This integration enabled easier code sharing and simplified code maintenance. Consequently, the custom sorting logic previously present in Kicker could be removed.
Further plans
Now you know some of the improvements that have been done, but more interesting might be the future plans!
While the sorting in KRunner was in many regards better than the one from Kicker, it still has some flaws.
For instance, tweaking the order of results from a plugin developer’s perspective proved challenging, since categories could get rearranged unintentionally.
Also, KRunner implements logic to prioritize often-launched results. In practice, this did not work very well, because it only changed one of two sorting factors that are basically the same (sounds messy, I know :D).
The plan is to have two separate sorting values: one for the categories and one for the results within a category. This allows KRunner to learn more intelligently which categories you use the most and prioritize them for further queries.
Another feature request is to make the sorting of plugins configurable. With the described change, this is far easier to implement. Some of the visuals were already discussed at the Plasma sprint last month.
There’s no better measure of success than having a diminutive eight-year-old girl demand to know the name of the painting program she has been using for the last 20 minutes.