Tuesday, 25 June 2024

Kommit 1.6.0 is a feature and bugfix release of our Git repo app, which now builds with Qt 5 or 6.

Improvements:

  • Build without KDBusAddons on Windows
  • Add Flatpak support
  • Fix date display (now using QLocale)
  • Fix a memory leak
  • Reactivate opening files in external apps in the Qt 6 build
  • Add zoom support in the Report Chart Widget
  • Replace a QTableWidget with a QTreeWidget on the report page
  • Fix a crash when no Git repository was open
  • Fix loading styles/icons on Windows (KF >= 6.3)
  • Implement a Gravatar cache
  • Fix i18n

URL: https://download.kde.org/stable/kommit
Source: kommit-1.6.0.tar.xz
SHA256: 4091126316ab0cd2d4a131facd3cd8fc8c659f348103b852db8b6d1fd4f164e2
Signed by: E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Esk-Riddell jr@jriddell.org
https://jriddell.org/esk-riddell.gpg

Sunday, 23 June 2024

Cut through bullshit arguments fast and make project discussions more productive.

Thursday, 20 June 2024

AI is the current megatrend, and soon our phones and laptops will have the functionality built in, bringing it even closer to our daily lives.

While new technology such as AI surely has its uses, it can also be healthy to take a wider, more critical perspective. With AI we can, for instance:

  • Have emails written for us in a manner we perhaps don’t have the skills to present in person
  • Generate texts, such as CVs or cover letters, that we perhaps cannot back up
  • Generate texts that we don’t necessarily understand, fact-check, or vouch for

My question is:

What do we human beings gain from computers being sophisticated on their own?

It is a shift where computers have gone from being a tool that assists to taking on a role of their own.

Here are areas where innovation is badly needed:

  • Mental health has plummeted, seemingly connected with social media and mobile phones. Are VR glasses really what we need?
  • With the Google Effect, also known as digital amnesia, people tend to forget what they search for. Considering the widespread use of search engines in our lives, improvement in this area would be massive. (Scientific replication of the effect seems questionable, and one can problematise it.)
  • The fast-food market managed to dopamine-hack customers with its perfected products, and currently social media is successful at that as well, again at the cost of customers’ health. A form of “intelligent digital dopamine administration”, as wishful as it is, would reduce the technology’s destructive impact.
  • Reading comprehension on screens is less efficient, and probably the majority read most of their content on screens. This is a big deal: screens are massively marketed and used, yet they remain largely less efficient than books. The causes could be many. Perhaps an e-ink device that reads like a book would be a massive productivity boost.

The young IT entrepreneur is hailed, and the markets assign value as they do. I believe it’s wise to question what directions they take us in.

AI and other new technologies are exciting, but ensuring that computers remain helpful and constructive for us is imperative.

Wednesday, 19 June 2024

In 2021 I decided to take a break from contributing to KDE, since I felt that I had been losing motivation and energy to contribute for a while… But I’ve been slowly getting back to hacking on KDE stuff for the past year, which culminated in me going to Toulouse this year to attend the annual KDE PIM Sprint, my first in 5 years.

I’m very happy to say that we have /a lot/ going on in PIM, and even though not everything is in the best shape and the community is quite small (there were only four of us at the sprint), we have great plans for the future, and I’m happy to be part of it.

Day 0

The sprint was officially supposed to start on Saturday, but everyone arrived already on Friday, so why wait? We wrote down the topics to discuss, put them on a whiteboard and got to it.

Whiteboard with all discussion topics

We managed to discuss some pretty important topics: how we want to proceed with the deprecation and removal of some components, how to improve our test coverage, how to improve indexing, and much, much more.

I arrived at the sprint with two big topics to discuss: milestones and testing.

Milestones

The idea is to create milestones for all the bigger efforts that we work (or want to work) on. The milestones should be concrete goals that are achievable within a reasonable time frame and have a clear definition of done. Each milestone should then be split into smaller tasks that can be tackled by individuals. We hope this will make KDE PIM more attractive to new contributors, who can now clearly see what is being worked on and find very concrete, bite-sized tasks to pick up.

As a result, we took all the ongoing tasks and turned most of them into milestones in Gitlab. It’s still very much a work in progress; we still need to break down many milestones into smaller tasks, but the general ideas are out there.

E2E Testing of Resources

Akonadi Resources provide a “bridge” between the Akonadi Server and individual services, like IMAP servers, DAV servers, Google Calendar, etc. But we have no tests to verify that our Resources can talk to those services and vice versa. The plan is to create a testing framework (in Python) so that we can have automated nightly tests verifying that e.g. the IMAP resource interfaces properly with common IMAP server implementations, including major proprietary ones like Gmail or Office365. We want to achieve decent coverage for all our resources. This is a big project, but I think it’s a very exciting one, as it includes not just programming but also figuring out and building the infrastructure to run e.g. Dovecot, NextCloud and others in Docker containers to test against.
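A rough sketch of what one such nightly check could look like, assuming the framework spins up a throwaway Dovecot container and then exercises the resource's very first step, a plain IMAP login. The image name, port, and credentials below are all hypothetical placeholders, not the real test infrastructure:

```python
import imaplib
import subprocess

def dovecot_container_cmd(name="pim-test-dovecot",
                          image="dovecot/dovecot:latest",
                          port=10143):
    """Build the docker invocation that starts a disposable IMAP server.

    The image and the port mapping are placeholders for whatever the
    real infrastructure ends up using.
    """
    return ["docker", "run", "--rm", "-d", "--name", name,
            "-p", f"{port}:143", image]

def check_imap_login(host, port, user, password):
    """Smoke test: can we complete an IMAP LOGIN against the container?"""
    with imaplib.IMAP4(host, port) as conn:
        typ, _ = conn.login(user, password)
        return typ == "OK"

# Usage sketch (requires a running Docker daemon):
# subprocess.run(dovecot_container_cmd(), check=True)
# assert check_imap_login("localhost", 10143, "testuser", "testpass")
```

The same pattern generalizes to DAV or CardDAV resources: one function to bring up the service in a container, one function per protocol interaction to verify.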

Day 1

On Saturday we started quite early; all the delicious French pastry was not going to eat itself, was it? After breakfast we continued with discussions: we discussed tag support and how to improve our public relations. But we also managed to produce some code. I implemented syncing of iCal categories with Akonadi tags, so tags are becoming more useful. I also prepared Akonadi to cleanly handle the planned deprecation and retirement of KJots, KNotes and their accompanying resources, as well as the planned removal of the Akonadi Kolab Resource (in favor of using IMAP+DAV).
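Syncing iCal categories with tags essentially means reading each component's CATEGORIES property, a comma-separated list per RFC 5545, and turning the values into tags. A minimal, hypothetical illustration of the extraction step (not the actual Akonadi code, which is C++):

```python
def ical_categories(ical_text):
    """Extract CATEGORIES values from a raw iCalendar component.

    CATEGORIES is a comma-separated list (RFC 5545), and a component
    may carry the property several times, so collect all occurrences.
    """
    tags = []
    for line in ical_text.splitlines():
        if line.upper().startswith("CATEGORIES"):
            # Split off the property name (and any parameters) at the
            # first colon, then split the value list on commas.
            _, _, value = line.partition(":")
            tags.extend(t.strip() for t in value.split(",") if t.strip())
    return tags

event = "BEGIN:VEVENT\nSUMMARY:Sprint\nCATEGORIES:KDE,PIM\nEND:VEVENT"
print(ical_categories(event))  # ['KDE', 'PIM']
```

Each extracted value would then be matched against (or created as) an Akonadi tag.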

One of the tasks I want to look into is improving how we do database transactions in the Akonadi Server. To get some data out of it, I shoved a Prometheus exporter into Akonadi, hooked it up to a local Prometheus service, threw together a Grafana dashboard, and here we are:
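For the curious, the text exposition format that a Prometheus exporter has to emit is quite simple; here is a minimal, self-contained sketch in Python (the metric names are invented for illustration, and Akonadi's real exporter is of course C++):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical counters; a real exporter would read these from the
# server's transaction bookkeeping.
METRICS = {
    "akonadi_db_transactions_total": 0,
    "akonadi_db_transaction_rollbacks_total": 0,
}

def render_metrics(metrics):
    """Render counters in the Prometheus text exposition format."""
    lines = []
    for name, value in sorted(metrics.items()):
        lines.append(f"# TYPE {name} counter")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        body = render_metrics(METRICS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("", 9100), MetricsHandler).serve_forever()  # scrape target
```

Point Prometheus at the port, add a Grafana panel per metric, and the dashboard above falls out.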

Grafana dashboard

We decided to order some pizzas for dinner and stayed at the venue hacking until nearly 11 o’clock.

Day 2

On the last day of the sprint we wrapped up the discussions and focused on actually implementing some of the ideas. I spent most of the time extending the Migration agent to extract tags from all existing events and todos already stored in Akonadi, and helped create some of the milestones on the Gitlab board. We also came up with a plan for the KDE PIM BoF at this year’s Akademy, where we want to present our progress on the respective milestones and give contributors a chance to tell us about the biggest hurdles they face when trying to contribute to KDE PIM, and how we can make it easier for them to get involved.

Conclusion

I think it was a very productive sprint and I am really excited to be involved in PIM again. Can’t wait to meet up with everyone again at Akademy in September.

Go check out Kevin’s and Carl’s reports to see what else they have been up to during the sprint.

Did some of the milestones catch your eye, or do you have any questions? Come talk to us in our Matrix channel.

Finally, many thanks to Kevin for organizing the sprint, to Étincelle Coworking for providing us with a nice and spacious venue, and to KDE e.V. for supporting our travel.

Finally, if you like such meetings to happen in the future so that we can push forward your favorite software, please consider making a tax-deductible donation to the KDE e.V. foundation.

Monday, 17 June 2024

People of the Internet,

While working on keystroke events, I realized my improvements to the event.change property were still inconsistent for certain Unicode characters. This led me to delve into code units, code points, graphemes, and other cool Unicode concepts. I found this blog post to be very enlightening.
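The distinction matters for properties like selStart and selEnd: JavaScript strings are indexed in UTF-16 code units, so a character outside the Basic Multilingual Plane occupies two indices even though it is a single code point. A small Python illustration of that gap (how exactly Okular maps these offsets internally is my assumption, based on JavaScript string semantics):

```python
def utf16_length(text):
    """Length of `text` in UTF-16 code units, which is what JavaScript
    string indices (and hence selStart/selEnd-style offsets) count,
    unlike Python's len(), which counts code points."""
    return len(text.encode("utf-16-le")) // 2

emoji = "\N{ROCKET}"              # U+1F680, outside the BMP
assert len(emoji) == 1            # one code point...
assert utf16_length(emoji) == 2   # ...but a surrogate pair in UTF-16
assert utf16_length("abc") == 3   # ASCII: one unit per character
```

Any selection arithmetic that mixes the two units off by exactly this difference is where the inconsistencies came from.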

Here’s an update on my progress over the past two weeks:

MRs merged:

  • event.change : The change property of the event object now correctly handles Unicode, with adjustments to selStart and selEnd calculations. !MR998
  • cursor position and undo/redo fix : Fixed cursor position calculations to account for rejected input text and resolved merging issues with undo/redo commands in text fields. !MR1011
  • DocOpen Event implementation : Enabled document-level scripts to access the event object by implementing the DocOpen event. !MR1003
  • Executing validation events correctly : Fixed a bug where validation scripts wouldn’t run after KeystrokeCommit scripts in Okular. !MR999
  • Widget refresh functions for RadioButton, ListEdit and ComboEdit : Added refresh functions as slots for RadioButton, ListEdit, and ComboEdit widgets, aiding in reset functionality and script updates. !MR1012
  • Additional document actions in Poppler : Implemented reading additional document actions (CloseDocument, SaveDocumentStart, SaveDocumentFinish, PrintDocumentStart, PrintDocumentFinish) in the qt5 and qt6 frontends for Poppler. !MR1561 (in Poppler)

MRs currently under review:

  • Reset form implementation in qt6 frontend for Okular : Working on the reset form functionality in Okular, currently focusing on qt6 frontend details. !MR1564 (in Poppler)
  • Reset form in Okular : Using the Poppler API to reset forms within Okular. !MR1007
  • Fixing order of execution of events for text form fields : Addressing the incorrect execution order of certain events (e.g., calculation scripts) and ensuring keystroke commit, validation, and format scripts are evaluated correctly when setting fields via JavaScript. !MR1002

For the coming weeks, my focus will be on implementing reset forms, enhancing keystroking and formatting scripts, and possibly starting on submit forms. Let’s see how it goes.

See you next time. Cheers!

Sunday, 16 June 2024

This year I again participated in the KDE PIM Sprint in Toulouse. As always, it was really great to meet other KDE contributors and work together for a weekend. And as you might have seen on my Mastodon account, a lot of food was also involved.

Day 1 (Friday Afternoon)

We started our sprint on Friday with a lunch at the legendary cake place, which I missed last year due to my late arrival.

Picture of some delicious cakes: a piece of cheesecake with raspberry and basil, a piece of lemon tart with meringue, and a piece of carrot cake

We then went to the coworking space where we would spend the remainder of the sprint, and started defining tasks to work on and putting them on a real kanban board.

A kanban board with tasks to discuss and to implement

To get a good summary of the specific topics we discussed, I invite you to consult Kevin’s blog.

That day, aside from the high-level discussions, I ported the IMAP authentication mechanism for Outlook accounts away from the KWallet API to the more generic QtKeychain API. I also removed a large dependency from libkleo (the KDE library for interacting with GPG).

Day 2 (Saturday)

On the second day, we were greeted by a wonderful breakfast (thanks Kevin).

Picture of croissant, brioche and chocolatine

I worked on moving EventViews (the library that renders the calendar in KOrganizer) and IncidenceEditor (the library that provides the event/todo editor in KOrganizer) into KOrganizer itself. This will allow us to reduce the number of libraries in PIM.

For lunch, we ended up eating at the excellent Mexican restaurant next to the location of the previous sprint.

Mexican food

I also worked on removing the “Add note” functionality in KMail. This feature allowed storing notes attached to emails, following RFC 5257. Unfortunately, this RFC never left the EXPERIMENTAL state, so these notes were only stored in Akonadi and not synchronized with any service.

This allowed removing the relevant widget from the pimcommon library, as well as the Akonadi attribute.

I also started removing another type of notes: the KNotes app, which provided sticky notes. This application was no longer maintained and didn’t work well with Wayland. To make sure KNotes users don’t lose their notes, I added support in Marknote for importing notes from KNotes.

Marknote with the context menu to import notes

Finally, I worked on removing visible Akonadi branding from some KDE PIM applications. The branding was usually only visible when an issue occurred, which didn’t help Akonadi’s reputation.

We ended up working quite late and ordering pizzas. I personally got one with a lot of cheese (but no photo this time).

Day 3 (Sunday)

On the final day we didn’t have any breakfast :( but instead a wonderful brunch.

Picture of the brunch

Aside from eating, I started writing a plugin system for the MimeTreeParser, which powers the email viewer in Merkuro and in Kleopatra. In the short term, I want to be able to add Itinerary integration to Merkuro, but in the longer term the goal is to bring this email viewer to feature parity with the one from KMail, and then replace KMail’s email viewer with Merkuro’s. Aside from removing duplicate code, this will improve security, since the individual email parts are isolated from each other, and it will make it easier for the email view to follow KDE styling, as it is just normal QML instead of fake HTML components.
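As a rough sketch of the idea (all names here are invented for illustration, not the real MimeTreeParser API), such a plugin system boils down to a registry mapping MIME part types to renderers:

```python
_renderers = {}

def renderer_for(mime_type):
    """Decorator registering a renderer plugin for one MIME part type."""
    def register(func):
        _renderers[mime_type] = func
        return func
    return register

@renderer_for("text/calendar")
def render_itinerary(part_bytes):
    # An Itinerary-style plugin would parse the event and return rich
    # content; here we just acknowledge the part.
    return f"calendar part ({len(part_bytes)} bytes)"

def render_part(mime_type, part_bytes):
    """Dispatch one decoded MIME part to its plugin, if any."""
    handler = _renderers.get(mime_type)
    return handler(part_bytes) if handler else "unsupported part"
```

The isolation benefit mentioned above comes from each plugin seeing only its own decoded part, never the whole message.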

I also merged and rebased some WIP merge requests in Marknote in preparation for a new release soon, and reviewed merge requests from the others.

Last but not least

If you want to know more or engage with us, please join the KDE PIM and Merkuro Matrix channels! Let’s chat further.

Also, I’d like to thank again Étincelle Coworking and KDE e.V. for making this event possible. This wouldn’t have been possible without a venue and without at least partial support for the travel expenses.

Finally, if you like such meetings to happen in the future so that we can push forward your favorite software, please consider making a tax-deductible donation to the KDE e.V. foundation.

Friday, 14 June 2024

This is the first blog post of my GSoC journey. I will be sharing my work and experiences here, so stay tuned for more updates. In this post, I’ll be sharing my experiences of the first two weeks of GSoC: the work I did, the challenges I faced, and how I overcame them (did I really overcome them? :P).

In my first week I tried to understand the Discover codebase by making small changes. The first thing I added was a new way of verifying Snap publishers which is officially supported by Snapcraft. Snapcraft has two tiers of verification:

 

This is the release schedule the release team agreed on

  https://community.kde.org/Schedules/KDE_Gear_24.08_Schedule

Dependency freeze is in around 4 weeks (July 18) and feature freeze one week after that. Get your stuff ready!
 

After three months, KDE's Kirigami tutorial has been ported to Qt6.

In case you are unaware of what Kirigami is:

  • Qt provides two GUI technologies to create desktop apps: QtWidgets and QtQuick
  • QtWidgets uses only C++ while QtQuick uses QML (plus optional C++ and JavaScript)
  • Kirigami is a library made by KDE that extends QtQuick and provides a lot of niceties and quality-of-life components

Strictly speaking, there weren't that many API changes to Kirigami. The most notable were:

  • switching from Kirigami.OverlaySheet to Kirigami.Dialog
  • switching from actions.{main,left,right} to raw actions
  • switching from Kirigami.BasicListItem to Controls.ItemDelegate/Kirigami.TitleSubtitle et al.

These are all easy-to-understand APIs, so developers shouldn't have issues porting to them. In any case, there is a wiki page with instructions on porting to Kirigami with Qt6.

No, the actual challenge came from a different front: moving to declarative registration.

The old method of registering types from C++ to QML involved using a global context or specific API calls, which usually meant instantiating the object in C++ code, which can be annoying. Depending on the registration type and how it was called, it would also be necessary to implement a factory.

The new method involves creating QML modules directly from CMake and adding one or two macros in the class you want to expose, so you don't really have to instantiate anything in your C++ code. As an additional bonus, you can add QML resources directly from CMake instead of writing XML code.

The relevant Qt API for the new declarative registration is qt_add_qml_module(), but for our KDE stuff we use ecm_add_qml_module(), which removes the upstream boilerplate.
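As an illustrative sketch of the shape this takes (target name, URI, and file names are hypothetical; check the ECM documentation for the exact options), the module is declared in CMake and the exposed C++ class only needs a macro, with no factory and no instantiation at startup:

```cmake
# CMakeLists.txt -- hypothetical target and URI names
ecm_add_qml_module(exampleplugin
    URI org.kde.example
    GENERATE_PLUGIN_SOURCE
)
target_sources(exampleplugin PRIVATE backend.cpp)
ecm_target_qml_sources(exampleplugin SOURCES
    Main.qml
    SomeModule.qml
)

# backend.h then only needs the registration macro:
#
#   class Backend : public QObject {
#       Q_OBJECT
#       QML_ELEMENT   // exposes Backend as a type in org.kde.example
#       ...
#   };
```

The QML files listed here become the resources whose long qrc paths are discussed next.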

One consequence of the new declarative method of registering QML modules is that URIs are expected to use a reverse-DNS naming scheme (for example, org.kde.kirigami.delegates), and their resource paths become way too long:

<resource_prefix><import_URI><file>

An example would be qrc:/qt/qml/org/kde/example/SomeModule.qml.

This leads to an API change: from engine.load("qrc:/qt/qml/org/kde/example/SomeModule.qml") to engine.loadFromModule("org.kde.example", "SomeModule"), which is much shorter.

loadFromModule(), however, requires, as the name implies, a QML module, whose file names need to start with an uppercase character. This means every main.qml needs to turn into Main.qml, everywhere in the tutorial; it's a chore.

A nice-to-have benefit of the port to Qt6 is that we no longer need to specify import versions in QML files. For API users this means fewer concerns with version numbering, and for tutorial writers it means not needing to hunt for a specific import version. But it also meant the chore of removing import versions everywhere.

Since this is the most evident change indicating that the new content uses Qt6, that's what I started the tutorial port with.

Quality of life changes🔗

A long-standing issue with the Kirigami tutorial had been the lack of up-to-date screenshots. In particular:

  • screenshots that don't reflect the code
  • screenshots that had X11 window icons
  • screenshots that had Wayland window icons
  • screenshots that looked plain bad
  • screenshots that were badly positioned

Because a product is only as good as its documentation, and here the product is the docs, this was of paramount importance. If the product is showcased in a non-professional way, it ends up becoming unattractive.

The API change from OverlaySheet to Dialog meant a drastic UI change. Since I needed to update those screenshots anyway, I could use the new Dialog redesign by Carl Schwan, which looks great.

The API change to Kirigami actions also reflected in UI changes, but mostly on mobile. And Plasma Mobile was under-represented in the tutorial, so it needed updates too.

So I went on about updating most screenshots, which entails a few things:

  • make the screenshots represent the actual code
  • make the actual code represent the screenshots
  • fix spacing between comparison screenshots
  • fix screenshot positioning
  • add a desktop file for the examples so window icons show up on Wayland
  • explain how to install the application so window icons show up on Wayland

The first two meant changes in app design with the same questions posed in the next section of this blog post. The last two were in fact the result of my lackluster updates from years ago.

Historically, the original Kirigami tutorial had many blatant issues, such as missing code, few screenshots, barely any links, and no build or run instructions. When I updated it years ago, I did not go the full length with it, and I didn't have the necessary expertise.

In order to have window icons on Wayland, the application needs to be installed on the system and provide a desktop file. On X11 it is possible to disregard the desktop file requirement, but the application still needs to be installed. Because I didn’t know how to install applications properly, I only provided build instructions and as a consequence wasn’t able to get screenshots with window icons. The updated tutorial has those.

In addition to that, because the code is more accurate and provides its own desktop file and correct install instructions, the life of any tutorial tester gets much easier in the future, myself included. You can blindly copy+paste the code and the project will be as it should. Who would have thought that providing reproducible code and instructions helps QA and saves time in the end? /s

With the current update, unless there are drastic API or tutorial changes, most of the bits I touched should last well into Plasma 7 at least.

Documentation challenges🔗

For someone dealing with documentation, updating a tutorial on writing desktop applications isn't a small or simple task. There are a few things one needs to think about:

  • what app design should I recommend to users?
  • what app design is actually in use?
  • what app design does upstream expect of API users?
  • what API should get the spotlight?
  • have I mentioned a keyword so the reader knows what to search for?
  • how to design code examples to be didactic?
  • how to design code examples that look good in screenshots?
  • what level of explanation should I provide?
  • what can I remove?
  • when should code examples be short or complete?
  • have I presented all concepts necessary for the user to finish the tutorial?
  • what should be a part of the tutorial and what should be a note?
  • what documentation practices should I follow and not follow?
  • should I include workarounds to common issues?
  • should I include notes about possible issues?
  • if so where?
  • how should screenshots be organized in the page?
  • what sections should I use for this page?
  • in which order should the sections be read?
  • have I made this consistent across all text I touched?
  • what screenshots are missing?
  • what links are missing?
  • what should I leave for later?

These questions need to be in your mind and you need to make lasting decisions on most of them in order to progress. This naturally does not mean you should address all of them at the same time, just that you need to keep those in mind.

The app design questions were the most difficult to think about, not so much because of the code examples we use, but because of project file structure, which can be extremely malleable in C++. Just searching through KDE projects, I managed to find 5 different project structures, most of them going counter to upstream Qt's recommendations. For example, followed to the letter, Qt would recommend a file structure mirroring the URI (src/org/kde/example/SomeModule.qml), while KDE almost never does that (src/SomeModule.qml, src/qml/SomeModule.qml, or src/somedirname/SomeModule.qml).

I went with a sort of hybrid approach, similar to solution 2 of this Qt blog post, with Main.qml in the src/ directory and other QML modules in a separate subdirectory (or multiple subdirectories), which is more didactic and scales better.

Another thing one must not succumb to is demotivation caused by chore tasks! Thinking about app design or figuring out didactic ways to address issues can be challenging and fun, but chore tasks can drain your soul.

For me the most draining was updating screenshots, because I ended up having to take about 200 of them, either due to mistakes or due to didactic/design changes. In the end I had updated around 51 tutorial screenshots, and the tutorial has more than that; the Kirigami tutorial alone has 30 pages of content, after all (I touched only 19 of them).

Generally, when dealing with API documentation and tutorials, the writer needs to go through all of this trouble so that the reader does not go through any trouble at all and can focus on what they need.

Learning more🔗

To learn about more questions like these and how to address them, two good free resources I recommend are the Google Developer Documentation Style Guide and the Google Technical Writing Course. We don’t follow them to the letter, but they are to-the-point and useful as a starting point.

We also have the KDE Develop Style Guidelines, which address some of the more KDE-specific conventions. I’d like to point out the last section, Follow standard documentation practices where it makes sense, which contains a list of other resources for learning about technical writing, including some content personally curated by me.

KDE has a good entrypoint wiki page for documentation explaining the various areas of documentation you can work with at KDE and the technologies used.

Documentation aside, we also have a dedicated page full of resources to learn Qt development, including a curated list of study materials.

I have a personal repository called minimal-qml6-examples which, while unfinished/unpolished, provides lots of code examples with raw Qt to understand the essentials of QML registration using the new CMake API.

Thursday, 13 June 2024

Hi! It has been over two weeks since the coding period began. In this blog post, I will provide a brief summary of my work during the first two weeks.

After spending some time reviewing the code, I decided to start by refactoring the existing code related to ASS format subtitles. This has two main goals: first, to enable Kdenlive to read as much information as possible from ASS subtitles (specifically, the features supported by libass) and load it into memory; and second, to ensure that Kdenlive can save all this information back to the file. Since SubtitleModel already contains a significant amount of code for ASS format subtitles, my work mainly involved refining and expanding this existing code while maintaining compatibility.

So far, I have accomplished the following:

  • Added initial support for reading and saving embedded fonts in ASS subtitles
  • Optimized the storage method for subtitle styles
  • Migrated from V4Style to V4+Style
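To illustrate the V4Style to V4+Style migration (a minimal, hypothetical sketch, not Kdenlive's actual C++ code): both sections pair a Format: header with Style: lines, and V4+ adds and renames fields, so migration is largely a matter of re-mapping fields by name rather than by position:

```python
def parse_style_line(format_line, style_line):
    """Pair an ASS [V4+ Styles] Format: header with one Style: line.

    Mapping by field name (rather than fixed positions) is what makes
    the V4 -> V4+ transition tractable: extra or renamed fields are
    just extra or renamed keys.
    """
    fields = [f.strip() for f in format_line.split(":", 1)[1].split(",")]
    values = [v.strip() for v in style_line.split(":", 1)[1].split(",")]
    return dict(zip(fields, values))

fmt = "Format: Name, Fontname, Fontsize, PrimaryColour"
sty = "Style: Default, Arial, 20, &H00FFFFFF"
print(parse_style_line(fmt, sty)["Fontname"])  # Arial
```

A stored style then round-trips back to the file by emitting the values in the Format: header's order.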

Tasks still to be completed:

  • Modify m_subtitleList to accommodate more information.
  • Write unit tests for each feature

Once these steps are completed, we will have more comprehensive support for ASS format subtitles, marking the end of this phase of the ASS code refactoring. The next focus will be on refactoring the functionality for modifying subtitle styles. These two parts will be my primary focus for the next two weeks. Stay tuned!