As mentioned previously on this blog, I took a break from my vacation for the past couple of days. I attended Akademy 2023 over the weekend, and I'm now typing this on my trip back home.
Of course it was nice to see people again; after all, that's the main reason to have such events. Still, we had talks, and those are mainly what I will focus on here.
Saturday
Keynote: Libre Space Foundation - Empowering Open-Source Space Technologies
We started the day with a nice keynote about the Libre Space Foundation. It was very interesting and inspiring to see how open source can go into space. The project started in 2011 and, despite all the regulation involved, they managed to get their first satellite into orbit in 2017. This is no small feat. Of course, everything they produce is free and reusable by others. It was nice to touch upon some of their more specific constraints, which significantly impact the cycles for producing software and hardware.
KDE Goals - a review and plans going forward
This was followed by a session about the KDE Goals. All three current community goals
were covered.
First, the “Automation & Systematization” goal, where a good chunk of the tasks has been completed around automated tests (in particular Selenium GUI testing) and the CI, to ease some of our processes. It also meant updating quite a bit of documentation.
Second, the Accessibility or “KDE For All” goal was covered. Quite a bit of effort has been put into adding automated tests using Selenium to check how our software works with screen readers and how keyboard navigation fares. This obviously led to improvements across the board: in Plasma, in applications, in our frameworks, in Qt and in Orca.
Third, progress on the “Sustainable Software” goal was presented. Much of it has been about documenting the efforts and presenting their results in various venues. But there have also been projects to set up labs for measuring our software, and automated tests using Selenium to implement the user scenarios to measure. These can even be triggered from the CI and executed in a permanent lab in Berlin.
Did you notice the recurring theme between the three goals? I’ll get back to it.
After this session about the KDE Goals, we split into two tracks, so obviously I couldn't attend everything. I will only write about the sessions I watched.
Measuring energy consumption of software
This session was a more in-depth look at how the measurements of the KDE Eco effort are actually done. This is in fact an active research topic, and it's not necessarily easy to source hardware suitable for properly measuring the energy consumption of software. Guidelines and best practices are still missing in this field.
Interestingly, we have several options available for measuring. I'd summarize them in three categories: cheap hardware, expensive specialized hardware, and built-in sensors (accessible via perf if supported). Each category comes with its own set of trade-offs which need to be kept in mind.
In any case, it's clear that the first thing to do is general profiling and optimizing. Then it's time to focus on long-running processes, frequent workloads and idle behavior. This is the stage where energy-specific measurements become essential… but they remain difficult to do, as the available tools are very basic.
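To make the "built-in sensors" category a bit more concrete, here is a hedged sketch (mine, not from the talk) of sampling the cumulative energy counter that Linux exposes through the powercap/RAPL interface; the sysfs path and the wraparound range are platform-dependent assumptions.

```python
# Hedged sketch: read the cumulative CPU package energy counter exposed
# by the Linux powercap/RAPL interface. The path below and the default
# wraparound range are assumptions; check max_energy_range_uj next to
# the counter on your machine.
RAPL_ENERGY_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj(path: str = RAPL_ENERGY_FILE) -> int:
    """Read the cumulative package energy counter, in microjoules."""
    with open(path) as f:
        return int(f.read().strip())

def energy_delta_uj(before: int, after: int, max_uj: int = 2**32) -> int:
    """Energy consumed between two reads, tolerating one counter wrap."""
    if after >= before:
        return after - before
    return max_uj - before + after

def measure_joules(workload, path: str = RAPL_ENERGY_FILE) -> float:
    """Return the energy in joules consumed while workload() runs.

    Note this measures the whole package, not just the workload:
    isolating one process's contribution is exactly the hard part
    the talk alluded to.
    """
    start = read_energy_uj(path)
    workload()
    return energy_delta_uj(start, read_energy_uj(path)) / 1_000_000
```

This only works on CPUs exposing RAPL and says nothing about peripherals, which is why the external-hardware categories remain relevant.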
KDE e.V. Board report
Lots was done on the side of the KDE e.V., and the board report highlighted some of this work.
First, we're seeing the return of sprints, conferences and tradeshows. The presence and meetings of the community seem to be back to pre-COVID-19 levels.
Also, the fundraising efforts are clearly bearing fruit. In particular, a new platform has been put into place (Donorbox) which seems to work great for us. This led to very successful fundraisers for the end of the year and for Kdenlive (the first fundraiser of its kind). Now, in the Kdenlive case, it means we have to start spending this money to boost its progress.
The KDE e.V. is professionalizing faster as well. All the “Make a Living” positions got filled. This is no longer a purely volunteer-based organization; we have around 10 employees and contractors.
There is already great interest in our software among hardware and software
vendors. Hopefully with all those efforts it will keep increasing.
Flatpak and KDE
I then attended a session about Flatpak. It was a quick recap of what it is. Hopefully widespread adoption could reduce the number of our users who run old, outdated versions.
Interestingly, we seem to have more installations via Flatpak than I first thought. Our most successful software on Flathub seems to be Kdenlive, with around 575K installs. It's followed by Okular with around 200K installs.
We also provide a KDE Runtime suitable for building our applications' Flatpaks on. There is one branch of it per Qt version.
Last but not least, we have our own Flatpak remote. This is meant for nightlies and not for stable use. Still, if you want to help test the latest and greatest, it's a nice option.
KF6 - Are we there yet?
This is the big question currently. How much time will we need to reach a port
of KDE Frameworks, Plasma and our applications to Qt 6?
The work is ongoing, with quite a bit of progress made. Still, there are a couple of challenges, in particular in terms of coexistence between 5 and 6 components. We're not doing too badly in terms of co-installability, but there are other dimensions which need catering to during this transition.
The talk also covered how to approach porting our applications. So if you're in this situation, I advise going back to the recording and slides for details.
KDE Embedded - Where are we?
This was the last session of the day for me. It revisited what can be considered an embedded device. In this talk the definition was narrowed quite a bit: we assumed a Linux kernel is available, but also a GPU and the kind of connectivity we're used to. A bit of a luxury embedded device, if you wish. 😉
In any case, this is a nice challenge to invest in, and it can also lead the way to more use of the KDE stack in the industry.
For such system integrations, we're using the Yocto ecosystem as it is the industry standard. We already provide several Yocto layers: meta-kf5, meta-kf6, meta-kde and meta-kde-demo. This covers KDE Frameworks and Plasma but none of the apps.
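To give a rough idea of how such layers are consumed in a Yocto build, here is a hedged sketch of a bblayers.conf excerpt; only the layer names come from the talk, the checkout paths are hypothetical.

```
# conf/bblayers.conf (excerpt) -- checkout paths are hypothetical
BBLAYERS += " \
  /home/builder/layers/meta-kf6 \
  /home/builder/layers/meta-kde \
"
```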
The range of supported hardware is already interesting. It covers, among others, the Raspberry Pi 4, the VisionFive 2, the BeaglePlay and the BeagleV. The RISC-V architecture is definitely becoming very interesting and gaining quite a bit of traction at the moment.
Plenty of dev boards have been produced over the last few years. The pain point on such devices is generally the GPU, even more so on RISC-V unfortunately.
Our stack has plenty to provide in this context. meta-kf5 or meta-kf6 could of course be used in industrial products, but Plasma and the applications could also be a good benchmark tool for new boards. We might want to provide extra frameworks and features for embedded use as well.
The biggest challenge for this effort is to make things approachable. The first
Yocto build is not necessarily a walk in the park.
Sunday
Keynote: Kdenlive - what can we learn after 20 years of development?
This keynote gave a good overview of the Kdenlive project. It is in fact a much older project than I thought: it was started in 2003 but was kind of stuck until the current maintainer revived it.
It's also a good example of a project which had its own life before joining KDE officially. Indeed, it became an official KDE application only in 2015.
They explained how they keep the conversation open with the user base and
how it feeds the vision for the project. It’s no surprise this is such
a nice tool.
The user base seems diverse, although personal use is dominant. Still, it's already used in schools and by some professionals, so maybe we can expect those user groups to grow in the coming years.
They have a couple of challenges regarding testing and managing their dependencies. Clearly it's on their radar, and we can expect this to get better.
The fundraising effort paid off. It already allowed the maintainer to reduce his hours at his current job; he can now devote one day a week to Kdenlive work.
Finally, we got a tour of the exciting new features they released. Most notably the nested timelines, but also speech-to-text support to help create subtitles.
They won't stop here though: they hope to bring more AI-supported tools, but also to improve GPU support and provide online collaborative spaces.
Make it talk: Adding speech to your application
The first lightning talk I saw on Sunday was advocating for more text-to-speech uses in our applications. Indeed, it can have uses beyond accessibility.
It also made the case that it's actually easy to do through Qt, which provides the QtSpeech module for this with a simple API. The good news is that it is supported on almost all platforms.
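The talk was about the C++ module, but as a hedged sketch here is what the same API looks like through the PySide6 bindings (assuming PySide6 with QtTextToSpeech and a working speech engine are installed); the clamping helper reflects the documented [-1.0, 1.0] range for rate and pitch.

```python
def clamp_rate(rate: float) -> float:
    """QTextToSpeech expects rate and pitch in the [-1.0, 1.0] range."""
    return max(-1.0, min(1.0, rate))

def say(text: str, rate: float = 0.0) -> None:
    """Speak text and return once the engine is done.

    Requires PySide6 with the QtTextToSpeech module and a speech
    engine at runtime (assumptions; the talk showed the C++ API).
    """
    import sys
    from PySide6.QtCore import QCoreApplication
    from PySide6.QtTextToSpeech import QTextToSpeech

    app = QCoreApplication(sys.argv)
    tts = QTextToSpeech()
    tts.setRate(clamp_rate(rate))
    # Quit the event loop once the utterance has been spoken.
    tts.stateChanged.connect(
        lambda state: app.quit() if state == QTextToSpeech.State.Ready else None
    )
    tts.say(text)
    app.exec()
```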
Another lightning talk, this time about the Community Working Group. It did a good job debunking some myths regarding that working group. It is indeed not a “community police”; it's mainly here to help the community and provide assistance in case of issues.
They have a complex job aiming at maintaining healthy community channels. It involves making sure there are no misunderstandings between people. This is necessary to avoid losing contributors due to a bad experience.
Next was an interesting talk about a tool for exploring codebases. Having a knack for this and having done it quite a few times in the past, I was obviously eager to attend this one.
The tool is made by Codethink and funded by Bloomberg. It uses LLVM
to parse C++ code and feed a model of the code stored in a relational
database.
In particular it takes care of finding all the C++ entities and their
dependencies. There’s also traceability on which source and header files
the entities and dependencies come from.
On top of this model, they built visualizations for tracking dependencies between packages. They also allow inspecting where dependencies are coming from, and distinguish dependencies coming from tests from the others.
It also provides a plugin mechanism which allows changing the behavior of steps in the pipeline. And last but not least, command line tools are provided to manipulate the database. This comes in handy for writing checks to enforce on the CI, for instance.
They took the time to try the tool on KDE products; after all, this is a big corpus of C++ code readily available to validate such a tool. This way they showed examples of cyclic dependencies in some places (like a Kate plugin). They're not necessarily hard to fix, but they can go unnoticed.
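This is not the Codethink tool itself, but a hedged sketch of the kind of check it enables: a depth-first search that reports one dependency cycle in a package graph.

```python
from typing import Dict, List, Optional

def find_cycle(deps: Dict[str, List[str]]) -> Optional[List[str]]:
    """Return one dependency cycle as a package list, or None if acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / in progress / done
    color = {pkg: WHITE for pkg in deps}
    stack: List[str] = []

    def visit(pkg: str) -> Optional[List[str]]:
        color[pkg] = GRAY
        stack.append(pkg)
        for dep in deps.get(pkg, []):
            state = color.get(dep)  # None means an external leaf dependency
            if state == GRAY:
                # Back edge: the cycle is the stack from dep onward.
                return stack[stack.index(dep):] + [dep]
            if state == WHITE:
                found = visit(dep)
                if found:
                    return found
        stack.pop()
        color[pkg] = BLACK
        return None

    for pkg in deps:
        if color[pkg] == WHITE:
            found = visit(pkg)
            if found:
                return found
    return None
```

The real tool works on a relational model extracted by LLVM rather than an in-memory dict, but the CI-enforceable check boils down to this kind of traversal.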
Another interesting thing they attempted was to use hooks to tag modules with their tiers. This would then allow differentiating tiers in the dependency graphs, making it possible to see if we have any violations of the KDE Frameworks model.
They have plans to provide more features out of the box, like tracking unused includes, spotting entities used without being included directly, etc. This could be interesting; it clearly sparked interest among the attendees.
Matrix and ActivityPub for everything
This short talk went over the Matrix and ActivityPub protocols. Both are federated, but the speakers highlighted the main differences. In particular, Matrix is end-to-end encrypted and geared toward personal communication, while ActivityPub is not encrypted and tailored for social media uses.
They also emphasized how both protocols are important for the KDE community and how they can be used. Some of the ideas presented are upcoming features which are in fact already implemented.
In particular, we've seen a couple of scenarios for location sharing over Matrix, so that you can get locations to and from Itinerary or share them via NeoChat. There are also future possibilities like synchronizing Itinerary data over Matrix or importing Mobilizon events into your calendar.
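As a hedged sketch of what a static location share looks like on the wire, here is the content of an m.room.message event with the m.location message type from the Matrix spec, carrying a "geo:" URI (RFC 5870); actually sending it to a room is left to a client library.

```python
def location_message(lat: float, lon: float, description: str) -> dict:
    """Build the content of a static Matrix location share.

    Follows the m.location msgtype from the Matrix spec: the payload
    carries a "geo:" URI plus a textual fallback body.
    """
    geo_uri = f"geo:{lat},{lon}"
    return {
        "msgtype": "m.location",
        "body": f"{description} @ {geo_uri}",
        "geo_uri": geo_uri,
    }
```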
Selenium GUI Testing
Remember when I mentioned a recurring theme during the session about the
KDE Goals? I hope that by now you realized this was about Selenium. So
of course, it was to be expected that we would have a session about it.
After all, this effort to use Selenium for GUI testing helps push forward all of our current community goals.
What has been created so far makes it easy to write GUI tests that are reproducible locally. This way we can catch up with industry standards; we were clearly falling behind in terms of GUI testing.
Selenium is known for being web oriented, but it can be used in other contexts. What you mainly need is a WebDriver implementation, and this is exactly what has been created. We now have such an implementation bridging Selenium and AT-SPI, the protocol used for accessibility support.
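As a hedged sketch of what such a test can look like, here is a small driver using the Appium Python client; it assumes KDE's selenium-webdriver-at-spi runner is listening on the given endpoint (the port and the "app" capability follow KDE's published examples, but check your version).

```python
def at_spi_capabilities(desktop_file: str) -> dict:
    """Capabilities understood by the AT-SPI WebDriver bridge.

    The single "app" capability naming the desktop file follows KDE's
    published examples; treat it as an assumption for other versions.
    """
    return {"app": desktop_file}

def run_kcalc_smoke_test(server: str = "http://127.0.0.1:4723") -> None:
    """Drive KCalc through accessible names, via the Appium client.

    Assumes the selenium-webdriver-at-spi runner is serving WebDriver
    requests on the given endpoint (the endpoint is an assumption).
    """
    from appium import webdriver
    from appium.webdriver.common.appiumby import AppiumBy

    driver = webdriver.Remote(
        command_executor=server,
        desired_capabilities=at_spi_capabilities("org.kde.kcalc.desktop"),
    )
    try:
        # Elements are looked up by the accessible name AT-SPI exposes.
        for name in ("1", "+", "2", "="):
            driver.find_element(AppiumBy.NAME, name).click()
    finally:
        driver.quit()
```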
One nice trait I noted is that the tests are run in a nested Wayland session. This avoids leakage into the host session. The session is also screen recorded, so we can see what happened in it after the fact if needed.
Now help is needed for more such tests to be written using this bridge.
Doing so will help with all of our current goals.
After lunch we had a whole series of lightning talks. The first one demoed Kyber, a new solution coming from VideoLAN to control machines remotely.
The results so far are impressive. You can play a game on a remote Windows machine from Linux, for instance. The delay over a 4G connection spikes at 40 ms maximum, but most of the time it stays close to 20 ms. In most cases this means around a 1.5 frame delay when playing at 60 frames per second.
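The frame arithmetic can be spelled out: at 60 fps one frame lasts 1000/60 ≈ 16.7 ms, so a ~25 ms delay costs about 1.5 frames and the 40 ms worst case about 2.4. A trivial helper makes it explicit:

```python
def delay_in_frames(delay_ms: float, fps: float = 60.0) -> float:
    """Convert a network delay in milliseconds into a frame count."""
    return delay_ms / (1000.0 / fps)
```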
On the network level it uses QUIC and uses a few tricks to have crisp
rendering of the image, including text, despite the compression.
Of course it is portable, and both the client and server are available for various platforms. It can also leverage hardware for better performance.
Impressive and exciting. Looks like we might have a very viable FOSS
alternative to TeamViewer soon.
Fun with Charts: Green Energy in System Monitor
The next lightning talk was about a personal project bringing information from a solar panel installation all the way to a Plasma desktop. Indeed, those installations tend to be coupled to proprietary cloud applications, and it would be nice not to have to go through those to control your installation.
We were shown fun uses, like a KInfoCenter module summarizing all the available data, or a KDED notifier which indicates the first ray of sun in the day, the storage battery status, etc. And of course some command line tools to script the system, allowing for instance to turn services on and off based on the amount of energy available.
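A hypothetical sketch (not the speaker's actual tool) of the kind of scripting decision described: switch a service on only when there is surplus energy. Both the function name and the thresholds are invented for illustration.

```python
def should_run_service(
    production_watts: float,
    battery_percent: float,
    min_production: float = 300.0,  # invented threshold
    min_battery: float = 50.0,      # invented threshold
) -> bool:
    """Run power-hungry services only when there is energy to spare."""
    return production_watts >= min_production or battery_percent >= min_battery
```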
What has qmllint ever done for us?
Another lightning talk, this time about qmllint. It went through the history of this tool, which went from only being able to tell whether a file was QML or not, to providing lots of warnings about misuses of the language.
Now it's even possible to integrate it in the CI via a JSON file, and it's the basis of the warnings we get in the QML LSP support.
And last but not least, I learned it even has a plugin system nowadays, allowing it to be extended with more project-specific checks. It has become quite powerful indeed.
Wait, are first-run wizards cool again?
The last lightning talk of the track was about our new first-run wizard. It has been introduced for good reasons: this time it's not about having users configure settings like look and feel.
This is about onboarding users on first use. For instance, it helps them get online if needed, introduces them to the important Plasma concepts, gives them an opportunity to get involved, and also reminds them that donations are possible.
This is definitely done in a different spirit than the old wizard we had back in the day.
The End?
It was only two days for me… but it's not over yet! The BoFs are going strong for the rest of the week (even though I unfortunately won't attend them this year).
There will also be a training day. If you're interested in how the KDE stack is organized, make sure not to miss the training I will hold online on Thursday morning.