Calling functions from another thread with signals
When I started to work with multithreading in Qt, I did all thread communication
with signals and slots, because that was the only simple way I knew.
I called functions from another thread using additional signals.
If Manager runs on thread A, you of course can't just call action() directly while
you're operating on thread B.
However, you can emit manager->actionRequested(parameter) and Qt will call
the connected slot (action()) on thread A using a
Qt::QueuedConnection.
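Sketched out (the QString parameter and the Manager declaration here are assumptions for illustration), the wiring could look like this:

    // Assumed class layout, not verbatim from any real project:
    class Manager : public QObject
    {
        Q_OBJECT
    public slots:
        void action(const QString &parameter);
    signals:
        void actionRequested(const QString &parameter);
    };

    // During setup: a queued connection delivers the call on manager's
    // thread (thread A), no matter which thread emits the signal.
    QObject::connect(manager, &Manager::actionRequested,
                     manager, &Manager::action,
                     Qt::QueuedConnection);

    // Later, from thread B:
    emit manager->actionRequested(parameter);
    // action(parameter) runs on thread A.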
This approach has two big issues though:
1. You need to create weird signals for every function you want to call from another thread.
2. You can't handle the results.
Now there are of course solutions to point 2: you could, for example, create another
requested-signal on the calling object and call back, but your code will get
much harder to understand.
It gets even worse if there are multiple places from which a function needs to
be called.
How do you know which callback to execute then?
I also knew about the old variant of QMetaObject::invokeMethod, which
essentially has the same result-handling problem but additionally isn't even
checked at compile time.
I always wanted a solution that would allow executing a lambda with captured
variables, as that would solve all of my issues.
At some point I had the idea to create one signal/slot pair with a
std::function<void()> parameter to do that, but it turned out there is an even
easier way.
Since Qt 5.10 there’s a new version of QMetaObject::invokeMethod() that can
do exactly what I needed.
    auto parameter = QStringLiteral("Hello");
    QMetaObject::invokeMethod(otherThreadsObject, [=] {
        qDebug() << "Hello from otherThreadsObject's thread" << parameter;
    });
Note: You can’t use a QThread object as argument here, because
the QThread object itself lives in the thread where it has been created.
I personally don’t like the name QMetaObject::invokeMethod, so I added an
alias to my projects:
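A minimal sketch of such an alias (the exact code in my projects may differ slightly):

    #include <QMetaObject>
    #include <QObject>
    #include <utility>

    // Forwards a callable so it runs on the thread of the given object.
    template <typename Functor>
    void runOnThread(QObject *object, Functor &&function)
    {
        QMetaObject::invokeMethod(object, std::forward<Functor>(function));
    }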
The result handling as a caller also works pretty well:
    runOnThread(object, [this, object] {
        // on object's thread
        auto value = object->action();
        runOnThread(this, [value]() {
            // on caller's thread again
            qDebug() << "Calculated result:" << value;
        });
    });
One could complain only that your code needs to be indented one level deeper each
time you call runOnThread().
However, that could potentially be solved using C++20 coroutines.
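Just to illustrate the idea: assuming a hypothetical awaitable variant of runOnThread that forwards the lambda's return value (runOnThreadAwaitable below is made up) and a coroutine type such as QCoro::Task from the QCoro library, the nested example could flatten into sequential code:

    #include <QCoro/QCoroTask>

    // Object is a stand-in for whatever class provides action().
    QCoro::Task<> calculate(Object *object)
    {
        // Runs the lambda on object's thread, then resumes this coroutine
        // on the calling thread; runOnThreadAwaitable is hypothetical.
        const auto value = co_await runOnThreadAwaitable(object, [object] {
            return object->action();
        });
        qDebug() << "Calculated result:" << value;
    }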
Both Plasma Mobile Gear 22.09 and Plasma 5.26 have now hit Manjaro ARM's unstable branch, so it's time to get some testing in before they reach the stable branch.
To make testing easier, we have created new images, based on our unstable branch, with these shiny new packages. You can find the links below.
We did have a few hiccups along the way. Namely, a long-standing bug on our end prevented the panels from starting. This was a configuration issue on our side, made visible by the update to Plasma 5.26, which we have now fixed.
Another small issue was with the Dialer. Functionality is still hit and miss, but in 22.09 there was an issue where the dialer was not quite fullscreen. It started in fullscreen and then resized a little bit. Like below.
This was also recently fixed with a configuration update!
So please help test out these new packages, so we can have a fairly stable new Beta release soon!
I agree, I don’t get why Wikipedia gets a bad reputation in school. I’m dismayed that whatever bogus argument they have is being used to push for using Google instead… it’s like, back in the day, asking pupils not to use the encyclopedia they might have at home and to walk into the nearby pub to find information instead.
Very good list. It sets the bar very high! I know most people will fail on a few of those items. That’s fine; it gives a good direction and something to aim for.
KDE is now evaluating Sentry, a crash tracking system.
Who can get access? Everyone with a KDE developer account.
But what is it?
Since forever we have used Bugzilla to manage crash reports, but this has numerous challenges that haven’t seen any improvement in at least 10 years:
Finding duplicate crashes is hard and, in our case, involves a human finding them
When debug symbols are missing we need to ask the user to recreate the problem, which is not always possible
Users need to worry about debug symbols (this is in part improved by the rise of debuginfod - yay!)
We have no easily consumed graphs on how prevalent a specific crash is, and by extension we have a hard time judging the importance
The user needs to actually write a report for us to learn of the crash (spoiler: most crashes never get this far)
…
All in all it’s a fairly dissatisfactory situation we are in currently. Enter Sentry.
Sentry is a purpose-built crash tracking system. It receives crash reports via API ingestion points and traces missing frames with the help of debuginfod, can detect duplicates automatically and thus show us particularly aggressive crashes, and much more. Best yet, it supports many different programming languages which allows us to not only improve the quality of our software but also our infrastructure services.
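To give an idea of what API ingestion looks like from an application's side, here is a minimal sketch using the sentry-native SDK; the DSN and release name are placeholders, and this is just an illustration, not our actual integration:

    #include <sentry.h>

    int main(int argc, char **argv)
    {
        sentry_options_t *options = sentry_options_new();
        // The DSN tells the SDK which ingestion endpoint receives reports.
        sentry_options_set_dsn(options, "https://publickey@sentry.example.org/1");
        sentry_options_set_release(options, "myapp@1.0.0");
        sentry_init(options);

        // ... application runs; crashes are captured and uploaded automatically ...

        sentry_close();
        return 0;
    }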
The current evaluation instance is already amazing and has helped fix numerous problems, even though the setup is not using all of Sentry's features yet and we have deliberately limited the rollout a bit: only git builds currently submit data. If all goes well and we find it to be amazing, I hope we’ll eventually be able to roll this out to production releases.
Let’s look at a crash I’ve fixed recently.
Here’s what Sentry received from the user:
Not terribly useful. So with the power of debuginfod it turned it into this:
I then applied some brain power to create a fix and consequently the crash has disappeared, as we can see in this neat graphic here:
Here’s a complete crash information page from a recent infrastructure problem in our bugzilla bot:
This year, I had the amazing opportunity to attend KDE Akademy in person for the first time! The host city was Barcelona. It was my second time visiting the city, but my first time attending KDE Akademy. Actually, it was my first KDE event.
For KDE friends who don't know me, I mainly contribute to openSUSE, GNOME, Nextcloud, ownCloud and GNU Health. I have fewer contributions to Fedora, Ubuntu and ONLYOFFICE, and a few here and there to other FOSS projects.
Question: why did I attend KDE Akademy? There were two reasons. The first and main one was to see the organization of the conference from the inside, since my university will host the next KDE Akademy. The second was to "introduce" myself to the KDE community, since I contribute to other projects. I actually know a person from the KDE board, but a community is not only one person.
The only familiar person I could have met was openSUSE's community manager. Unfortunately he couldn't attend, so he asked me to represent openSUSE. The duties were to run a booth and present something openSUSE-related for 3 minutes. I had the idea to propose that my friend George give his first presentation at an open source conference and start his open source journey. He was very excited, and he did it.
Day 0
There was a welcome event on Friday where attendees got to know each other. Unfortunately, my flight was delayed and I arrived too late to attend. So I stayed at the hotel and tried to rest for my first Akademy day. I felt like I was going to school.
Day 1
The first thing we had to do was set up our booth. Well, the only promo material we had was stickers. I think all geeks like stickers so it was the best gift for everyone. I love stickers, not only from openSUSE but from other projects as well.
While setting up the booth, I met the rest of the people from the sponsors, like Ubuntu, Fedora, Qt and Slimbook.
One talk that stuck with me was "Full Steam Ahead!": seeing how Plasma fits into the Steam Deck and what aspects of KDE made us the right choice for their new userbase.
Food at the conference wasn't the best for my taste. Maybe it's just me. But the most interesting part of the conference was the fact that I had the chance to meet really important people, developers that changed my point of view on software development.
You can see the first day, Room 1 here:
Day 2
After having fun on the first day, I was excited for the second. The first reason was that George and I (actually only George) would give the sponsor talk, and the second was that the organizers would announce the location of next year's Akademy. Of course, that place is Thessaloniki and my university.
Unfortunately I didn't have any team to join for the BoF days that followed. I had a small hope that we could set up the working environment for the next Akademy, but that didn't happen.
We didn't join the trip to the mountain. We went to see the city instead. It was my second time there, so I skipped some sites.
I really loved my first KDE Akademy. I would like to thank KDE e.V. for sponsoring my trip to attend the Akademy.
I have a lot of work to do here with the organizing committee. We are working to host you all next year.
This year, I had the amazing opportunity to attend Akademy in person (@ Barcelona) for the first time!
For context, I first started contributing to Plasma Mobile in 2020, right around when easily testable hardware (e.g. the PinePhone) was taking shape. I originally started with contributions to some applications to learn Qt and C++, but have since taken on more responsibility with tasks from all around the software stack.
We first had a welcome event on Friday where attendees got to know each other. It was quite cool to finally be able to match usernames to faces and meet the people I had been working with in person.
I also met up with amazing people from the postmarketOS team! I had the chance to see a Fairphone 4 with the OS running Plasma Mobile smoothly, which was amazing.
I had a great start… I missed the first two hours of talks on the first day because I slept in.
Luckily, my talk with Bhushan, Plasma Mobile in 2022, was in the afternoon. Unfortunately, Bhushan was unable to come in person this time around. Hopefully I will be able to meet him next year!
You can see our talk here (at around 5:21:00):
I attended quite a few interesting talks:
Konquering the World: Are We There Yet (Nate Graham) - The state of Plasma shipping with hardware
Full Steam Ahead! (David Edmundson) - Steam Deck with Plasma
Stop Crashing Already! (Harald Sitter) - Integrating Dr. Konqi with powerful tools for developers to find issues
Getting your application ready for KF6 (Nicolas Fella & Alexander Lohnau)
Asahi Linux (Hector Martin) - Information about the Asahi Linux project, how it came about
Push Notifications - Infrastructure (not just) for Plasma Mobile (Volker Krause) - Status of building the framework to have push notifications on Plasma (Mobile!)
Fedora, KDE, Kinoite, and Mobile (Neal Gompa) - Overview of Fedora shipping Plasma
… and there were many other cool shorter talks as well!
I also met people from Pine64 and Manjaro there. It was really cool hearing about Pine64’s future plans for RISC-V (coming soon!), as well as seeing some of the devices that Manjaro had running Plasma Mobile (a portable game console, and a mini tablet/netbook?).
This was my first time attending a BoF, as well as hosting one. We unfortunately had some trouble with audio, which made it hard to communicate with online attendees.
However, we did discuss the following topics:
Helping Bhushan with Plasma Mobile Gear releases, since he does them by himself at the moment
Moving some Plasma Mobile applications to KDE Gear due to limited changes
How to proceed with QtFeedback (vibrations stack) for Qt 6 as it is unmaintained
Possibility of switching to feedbackd (by Purism) from hfd-service instead
The plan is one last release of Plasma Mobile with KF5 (5.27) before branches are made for Plasma 6
No large changes anticipated for Plasma Mobile 6
SHIFTphones and Fairphone can run pmOS with Plasma Mobile, perhaps we can open communication channels with them?
Manjaro was working with 2 tablet (?) vendors; they have had a positive reception and a few suggestions
KDE PIM (Personal Information Management) refers to the KDE stack for applications like KMail, KOrganizer, Kalendar, etc.
Improving Akonadi for mobile devices was discussed, including the possibility to try using SQLite by default, which may improve resource consumption.
I had started working on a mail application called Raven for Plasma Mobile recently, and so it was also discussed how to better share code between it and other applications that use the Akonadi stack.
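As a side note for anyone who wants to experiment: to my understanding, the database backend can already be switched in Akonadi's configuration file, along these lines:

    # ~/.config/akonadi/akonadiserverrc (backend selection, as I understand it)
    [General]
    Driver=QSQLITE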
I attended a BoF about an upcoming project, Plasma Ink, which brings Plasma to e-ink devices. They can be useful for reading, taking notes, as well as simply being an enjoyable interface to write content.
This platform however needs special consideration for the way content is rendered, since e-ink devices typically have slow-refresh screens, and so animations as well as colours need to be adjusted to be as light as possible.
Pine64 has been developing the PineNote, which could theoretically run Plasma Ink; I had the chance to try one and was quite impressed by the responsiveness of pen notetaking.
While I likely will not be very involved with the project since I am preoccupied with Plasma Mobile, I hope it continues forward!
I sat down with Aleix to really focus on trying to fix this issue, since he knows more about KWin than I do, though his PinePhone had not been working, so he couldn’t replicate it previously.
After hours of jumping around git commits of kscreenlocker (compiling it on device is painful), we realized that the issue was elsewhere in the stack. There is some sort of Wayland registry failure that occurs when kscreenlocker is started immediately after waking from suspend, which causes it to crash. We unfortunately did not have enough time to pinpoint the issue, but we do have some ways to move forward with investigating in the future. We suspect it may be related to when KWaylandServer was merged into KWin.
The last BoF I attended was about convergent forms, and how we can design forms that use a single codebase, but have designs that work for both mobile and desktop.
Currently, we have special components in kirigami-addons for mobile, but they are not necessarily great on desktop.
What was decided was to create a new “FormLayout” component in Kirigami, which can take a set of instructions to build the form, and generates the necessary components to display properly on whichever platform it is running on.
Over the past two years I tried out a few different keyboards for fun.
I started with common form factors like TKL boards, moved on to 75% boards like the Q1, and then to a 60% HHKB.
For typing feel, the HHKB is really amazing, but unfortunately the programmable features of a stock HHKB board are very limited.
Now that I have gone down to 60%, I will give an even more extreme keyboard a chance: the 40% Planck ortholinear keyboard.
This one is fully open source; you can even produce your own PCBs and the like.
You can find more or less everything freely on GitHub.
Given that the Planck's designer also founded QMK, you can naturally fully customize Planck boards.
Unlike in my experiments with the Q1, this time I went the plain QMK route, without any UI like the closed VIA or the open VIAL.
The Planck offers a nice platform for experiments, given the plain grid layout that really allows you to freely shuffle all your keys and try out extreme layouts.
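For the curious: in QMK a keymap is just a C array with one keycode per key and layer. A minimal sketch for the Planck grid (not my actual layout, just an illustration) looks like this:

    // keymap.c - minimal Planck sketch; holding Lower/Raise shifts layers
    #include QMK_KEYBOARD_H

    enum planck_layers { _BASE, _LOWER, _RAISE };

    const uint16_t PROGMEM keymaps[][MATRIX_ROWS][MATRIX_COLS] = {
        [_BASE] = LAYOUT_planck_grid(
            KC_TAB,  KC_Q,    KC_W,    KC_E,  KC_R,       KC_T,   KC_Y,   KC_U,       KC_I,    KC_O,    KC_P,    KC_BSPC,
            KC_ESC,  KC_A,    KC_S,    KC_D,  KC_F,       KC_G,   KC_H,   KC_J,       KC_K,    KC_L,    KC_SCLN, KC_QUOT,
            KC_LSFT, KC_Z,    KC_X,    KC_C,  KC_V,       KC_B,   KC_N,   KC_M,       KC_COMM, KC_DOT,  KC_SLSH, KC_ENT,
            KC_LCTL, KC_LGUI, KC_LALT, KC_NO, MO(_LOWER), KC_SPC, KC_SPC, MO(_RAISE), KC_LEFT, KC_DOWN, KC_UP,   KC_RGHT
        ),
        // mostly transparent layers; numbers, symbols, F-keys etc. go here
        [_LOWER] = LAYOUT_planck_grid(
            KC_GRV,  KC_1,    KC_2,    KC_3,    KC_4,    KC_5,    KC_6,    KC_7,    KC_8,    KC_9,    KC_0,    _______,
            _______, _______, _______, _______, _______, _______, _______, _______, _______, _______, _______, _______,
            _______, _______, _______, _______, _______, _______, _______, _______, _______, _______, _______, _______,
            _______, _______, _______, _______, _______, _______, _______, _______, _______, _______, _______, _______
        ),
        [_RAISE] = LAYOUT_planck_grid(
            _______, KC_EXLM, KC_AT,   KC_HASH, KC_DLR,  KC_PERC, KC_CIRC, KC_AMPR, KC_ASTR, KC_LPRN, KC_RPRN, KC_DEL,
            _______, _______, _______, _______, _______, _______, _______, _______, _______, _______, _______, _______,
            _______, _______, _______, _______, _______, _______, _______, _______, _______, _______, _______, _______,
            _______, _______, _______, _______, _______, _______, _______, _______, _______, _______, _______, _______
        )
    };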
Yes, the paper on the left of the keyboard is a printout of the lower & raise keyboard layers.
My typing speed is still abysmal on the new layout, and I guess I need to build a second one for work; otherwise I will never get used to the layout if I swap daily between it and an HHKB.
Therefore, if you would like to try such a board and are not already experienced with switching between different layouts: you will need some time to get used to this.
Even just the removed row stagger is confusing for the first few days.
As many readers of this blog are aware, openSUSE has been offering packages of git snapshots from KDE for quite a while. They are quite useful for those willing to test, report bugs, and/or hack on the code, but also for those who want to see what’s brewing in KDE land (without touching their existing systems). However, a major drawback for non-English speakers was the lack of translations.
What’s the problem with translations?
KDE translations are not hosted in the community’s git repositories but are instead stored on KDE’s SVN server. The main reason they were not moved to git was to preserve the existing workflows of the translation teams (who might not be as technical as the actual hackers). Translations are then placed in tarballs at the time of betas / RCs / releases.
This unfortunately means that a git checkout, like the ones the OBS makes when building the unstable packages, will not carry any translations whatsoever. Worse, existing -lang packages for stable versions will raise dependency problems if present (because they require the exact same version of their corresponding binary package).
Also, since the KDE team tries to keep the same set of package definitions (spec files) between the stable and unstable OBS projects, this meant some extra complexity to take into account the fact that translations might or might not be there.
As far as I remember, for some time there has been tooling in the KDE infrastructure to download translations at build time, but that was a big no-no for the OBS, as there is no network access during builds for security reasons.
Two sides of a solution
This proved to be a problem also for KDE’s own release management. On September 2nd, KDE hacker Albert Astals Cid outlined a proposal for an automated way to copy translations from SVN into their corresponding git repositories, using a series of scripts that would do so periodically.
After some discussion, and once at Akademy, the switch was flipped on October 2nd. This meant that our git checkouts were getting translations!
Of course, some adjustments were needed: spec files in the KDE Unstable projects were not taking the presence of translations into account, and thus quite a number of builds were failing. krop, KDE team member extraordinaire, stepped in and fixed all the spec files. This in turn made him realize that some upstream KDE projects were not actually handling translations correctly in their CMake code, so he fixed them too.
Within a couple of days, all KDE Unstable repositories (Frameworks, Applications and Extra) had translations enabled, where applicable. After many years, it became possible to test the latest KDE software in your own language.
Do I need to do anything?
If you don’t have the language packages installed and you have installation of recommended packages enabled (the default), they should be installed automatically. If, like me, you have forcibly installed the language packages from the stable repositories, you can force-install the new ones (for example with zypper install -f <packagename>), or, if you’re on Tumbleweed, accept the swap when prompted (this occurs when a new stable version of KDE software is published in a snapshot). Or you can install them manually, should you prefer to do so.
Should any issues with the packaging arise (e.g., missing dependencies, conflicts), please file a ticket on bugzilla.opensuse.org.
Very interesting post about the history of UML and the MDA approach. Clearly, MDA and UML 2 were the beginning of the end for UML. That’s too bad; I still find UML useful for sketching and for communication between humans.
For a long time I have been working behind the scenes to support Autocrypt and to fix bugs around encryption.
But the best crypto support does not help if the system is too complicated for users.
PGP is complex and a lot of things can go wrong, so the UI should help the user find solutions when things go wrong.
It was obvious to me that I could not do this on my own, so I found Eileen Wagner, a UX designer who is experienced in crypto UX.
It was a lot of fun to work together with Eileen to improve the UX in Kontact ;)
It soon became obvious that the part that needs an overhaul is mostly sending.
There is a lot that happens AFTER you press send.
You may be confronted with information that the keys are not good enough, or that a key being used is near expiry.
So we tried to improve the UX so that these issues will bubble up earlier so you can fix the issues before pressing send.
At least for me, it often happens that I concentrate on finishing a message before I need to go, and then press send in a hurry.
So all the dialogs and warnings confront me while I'm in a hurry, and I just want them to disappear.
If, instead, I know about these things in advance, I have time to ask for a new key or search for the correct key for a particular recipient.
Here you see a sample of creating a message to several recipients after our improvements.
Eileen created a blog post about the thoughts behind the UX decisions made for Kontact.
After several months of working together with Eileen, I realized that for outsiders it is still hard to distinguish Kontact from KMail.
Kontact is a bundle of several applications that are presented together.
You are free to start and use each of those applications directly and will see no difference.
You will only miss the small left column for switching between the applications.
KMail is the application that Kontact is using for the mail tab.
I really like the feature that Kontact informs you about keys that will soon expire.
That makes it clear that I need to take care of a key update, and I can trigger it in advance.
I know there is a lot of discussion about automatic key updates and several attempts to do this.
I am still using parcimonie for this task.
But unfortunately not everyone uses a key server to communicate key updates, and nowadays there are several sources for key updates: keyservers, WKD, PKA, DANE, ...
Sure, it would be easy to try an update in the background, but starting a network connection without the user's consent is a no-go.
So the first step for now is to show the user that keys are near expiry.
In the second step, we will make it possible for the user to directly trigger an update.
This will also be true if no suitable key was found for a recipient.
While working on the near-expiry code, I also added a fourth category for your own keys.
I want to be informed about my own key's expiry long before the key expires, so I can create and upload the key update; that way, when others search for the key, they already find the updated one.
Key cache usage
Until now Kontact always talked to gnupg directly using gpgme.
In itself this is not an issue, but this connection is slow.
This is why libkleo started to implement a key cache a while ago to cache all keys.
Before, we had to wait for gnupg to answer all our requests.
In my experience, that means sometimes I wait a minute or longer.
Now Kontact also uses this cache, and we can instantly show that we found PGP keys for all recipients while the mail address is being typed.
Do you see how fast the "green check mark" toggles while writing in the video?
This "green check mark" indicates, that we found keys for all recipients.
That's possible because of the key cache.
Trust levels
Gnupg now has the TOFU (trust on first use) feature, which creates statistics about key usage and about when we have seen keys in our messages.
When a key has been used for a long time, it is trusted more, and you can now detect a key with no history.
This makes it harder for someone to present you with a new fake key.
Of course, you get the best security by checking fingerprints and signing the recipient keys.
But let’s be honest: who can do this for every key they use in our busy lives?
For those who do not check every key, TOFU is actually a great improvement, as you build trust while using the keys.
In Kontact we now display the trust levels instead of just the validity, as the trust levels take the TOFU history into account.
I personally cannot see any disadvantage to enabling the trust model 'tofu+pgp' in gnupg (via ~/.gnupg/gpg.conf) and would highly suggest that everyone enable it.
It gives you the best of two worlds: you can build trust in keys by simply using them (the tofu part) and still check fingerprints (the pgp part).
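Concretely, enabling it is a single line in the configuration file:

    # ~/.gnupg/gpg.conf
    trust-model tofu+pgp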
After enabling it, I actually found out that the tofu data is currently not updated when I send an encrypted message.
However, it is updated if I use gnupg from the command line.
I created an upstream bug report for this.
Until it is fixed, tofu is a little bit useless, because no statistics are created.
The key cache and key resolver also need to learn about trust levels, as they are in charge of selecting the most trusted key when sending messages.
Settings
In some corners, the settings in Kontact are a big list of checkboxes, and it is not obvious where to find what.
For encryption, we decided to merge several tabs into one page and name it Encryption.
There is also the signature feature, which is not connected to cryptography but is just about mail signatures.
The critical point is the defaults for new users, and we ended up with default encryption settings that can be overridden for each identity.
As all this work was done with Autocrypt in mind, I also marked Autocrypt support as stable.
Now the user can enable Autocrypt within the settings page of the identity.
In the end I think these improvements take Kontact a big step forward and let us use encryption more easily.
I'm proud of the current state, but my to-do list will stay full until we have looked into all the corner cases.