The first month of the GSoC coding period has already passed! Since the last update, I added Python support for the remaining classes of KWidgetsAddons. Only recently did I discover that, apart from the C++ classes, the libraries also contain namespaces, which I hadn’t known about, so the work wasn’t actually complete. There were only a few of them, though, and that is now done. I also added support for automatically building a Python wheel for the bindings.
Last week I improved some of the Python demos and added bindings for KCoreAddons. That was quicker than I expected, so I might end up adding support for 5-7 more libraries than initially planned. Here’s a list of the libraries I plan to add during the rest of the summer:
AI is the current megatrend, and soon our mobile phones and laptops will have this functionality integrated, bringing it even closer to our daily lives.
While new technology such as AI surely has its uses, it can also be healthy to step back and take a wider, more critical perspective. With AI we can, for instance:
Have emails written for us in a manner we perhaps don’t have the skills to present in person
Generate texts, such as CVs or cover letters, that we perhaps cannot back up
Generate texts that we don’t necessarily understand, cannot fact-check, or cannot vouch for
My question is:
What do we human beings gain from computers being sophisticated on their own?
It is a shift where computers have gone from being a tool that assists us to taking on a role of their own.
Here are areas where innovation is badly needed:
Mental health has plummeted, seemingly in connection with social media and mobile phones. Are VR glasses really what we need?
With the Google Effect, also known as digital amnesia, people tend to forget the information they have searched for. Considering how widespread search engines are in our lives, an improvement in this area would be massive. (The scientific replication seems questionable, and one can problematise the effect.)
The fast food industry managed to dopamine-hack customers with its perfected products, and social media are currently succeeding at the same, again at the cost of customers’ health. A form of “intelligent digital dopamine administration”, wishful as it sounds, would reduce the technology’s destructive impact.
Reading comprehension on screens is less efficient, and the majority probably read most of their content on screens. This is a big deal: screens are massively marketed and used, yet they remain largely less efficient than books. The causes could be many. Perhaps an invention such as e-ink books would be a massive productivity boost.
The young IT entrepreneur is hailed, and the markets value what they value. I believe it’s wise to question what directions they take us in.
AI and other new technologies are exciting, but ensuring that computers remain helpful and constructive for us is imperative.
In 2021 I decided to take a break from contributing to KDE,
since I felt that I had been losing motivation and energy to contribute for a while… But I’ve been slowly getting back to hacking on KDE stuff over the past year, which culminated in me going to Toulouse this year to attend the annual KDE PIM Sprint, my first in 5 years.
I’m very happy to say that we have /a lot/ going on in PIM, and even though not
everything is in the best shape and the community is quite small (there were only
four of us at the sprint), we have great plans for the future, and I’m happy to be
part of it.
Day 0
The sprint was officially supposed to start on Saturday, but everyone had already arrived on Friday, so why wait? We wrote down the topics to discuss, put them on a whiteboard
and got to it.
We managed to discuss some pretty important topics: how we want to proceed with the deprecation and removal of some components, how to improve our test coverage, how to improve indexing, and much, much more.
I arrived at the sprint with two big topics to discuss: milestones and testing.
Milestones
The idea is to create milestones for all the bigger efforts that we work (or want to work) on. The milestones should be concrete goals that are achievable within a reasonable time frame and have a clear definition of done. Each milestone should then be split into smaller tasks that can be tackled by individuals. We hope that this will help make KDE PIM more attractive to new contributors, who can now clearly see what is being worked on and find very concrete, bite-sized tasks to work on.
As a result, we took all the ongoing tasks and turned most of them into milestones in GitLab. It’s still very much a work in progress; we still need to break many milestones down into smaller tasks, but the general ideas are out there.
E2E Testing of Resources
Akonadi Resources provide a “bridge” between the Akonadi Server and individual services, like IMAP servers, DAV servers, Google Calendar etc. But we have no tests to verify that our Resources can talk to those services and vice versa. The plan is to create a testing framework (in Python) so that we can have automated nightly tests verifying that e.g. the IMAP resource interfaces properly with common IMAP server implementations, including major proprietary ones like Gmail or Office365. We want to achieve decent coverage for all our resources. This is a big project, but I think it’s a very exciting one, as it involves not just programming but also figuring out and building some infrastructure to run e.g. Dovecot, Nextcloud and others in Docker containers to test against.
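To make the idea a bit more concrete, here is a minimal sketch of what such a nightly test could look like, assuming pytest, a throwaway Dovecot container and Python’s imaplib. The image name, port mapping and overall shape are my assumptions, not the actual framework:

```python
# Hypothetical sketch of an end-to-end resource test: start a Dovecot container
# and verify we can speak IMAP to it. Not the real KDE PIM test framework.
import imaplib
import subprocess
import time

import pytest


@pytest.fixture(scope="module")
def imap_server():
    # Run a disposable Dovecot container exposing IMAP on localhost:1143.
    container_id = subprocess.check_output(
        ["docker", "run", "-d", "--rm", "-p", "1143:143", "dovecot/dovecot"],
        text=True,
    ).strip()
    time.sleep(5)  # crude wait for the daemon to start accepting connections
    yield ("localhost", 1143)
    subprocess.run(["docker", "stop", container_id], check=False)


def test_imap_capability(imap_server):
    host, port = imap_server
    conn = imaplib.IMAP4(host, port)
    typ, caps = conn.capability()
    assert typ == "OK"
    assert b"IMAP4REV1" in caps[0].upper()
    conn.logout()
```

The same pattern would then be repeated per resource, swapping the container (Dovecot, Nextcloud, …) and the client-side checks.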
Day 1
On Saturday we started quite early; all the delicious French pastry was not going to eat itself, was it? After breakfast we continued with discussions: we discussed tags support and how to improve our PR. But we also managed to produce some code. I implemented syncing of iCal categories with Akonadi tags, so the tags are becoming more useful. I also prepared Akonadi to cleanly handle the planned deprecation and retirement of KJots, KNotes and their accompanying resources, as well as the planned removal of the Akonadi Kolab Resource (in favor of using IMAP+DAV).
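As a rough illustration of what gets mapped (the actual sync code lives on Akonadi’s C++ side, so treat this as a hedged Python sketch using the third-party icalendar package):

```python
# Hedged illustration only: reading the CATEGORIES property of an iCalendar
# event, which is the data that would map to Akonadi tags.
from icalendar import Calendar

ICS = b"""BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//example//EN
BEGIN:VEVENT
UID:example-1
DTSTART:20240615T090000Z
SUMMARY:PIM Sprint
CATEGORIES:KDE,Travel
END:VEVENT
END:VCALENDAR
"""

cal = Calendar.from_ical(ICS)
for event in cal.walk("VEVENT"):
    categories = event.get("CATEGORIES")
    # Each category would become (or map to) an Akonadi tag.
    print([str(c) for c in categories.cats])  # ['KDE', 'Travel']
```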
One of the tasks I want to look into is improving how we do database transactions in the Akonadi Server. To get some data out of it, I shoved a Prometheus exporter into Akonadi, hooked it up to a local Prometheus service, threw together a Grafana dashboard, and here we are:
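For illustration only, this is roughly what the exporter idea looks like when sketched in Python with the prometheus_client package; the real exporter is C++ inside the Akonadi server, and the metric name here is made up:

```python
# Python sketch of the idea, not the actual Akonadi code.
import random
import time

from prometheus_client import Histogram, start_http_server

DB_TRANSACTION_SECONDS = Histogram(
    "akonadi_db_transaction_seconds",            # hypothetical metric name
    "Time spent inside database transactions",
)


def handle_request():
    with DB_TRANSACTION_SECONDS.time():          # records the duration as an observation
        time.sleep(random.uniform(0.001, 0.05))  # stand-in for real database work


if __name__ == "__main__":
    start_http_server(9100)                      # Prometheus scrapes http://localhost:9100/metrics
    while True:
        handle_request()
```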
We decided to order some pizzas for dinner and stayed at the venue hacking until
nearly 11 o’clock.
Day 2
On the last day of the sprint we wrapped up the discussions and focused on actually implementing some of the ideas. I spent most of the time extending the Migration agent to extract tags from all existing events and todos already stored in Akonadi, and I helped create some of the milestones on the GitLab board. We also came up with a plan for the KDE PIM BoF at this year’s Akademy, where we want to present our progress on the respective milestones, learn from contributors what the biggest hurdles are that they face when trying to contribute to KDE PIM, and figure out how we can make it easier for them to get involved.
Conclusion
I think it was a very productive sprint and I am really excited to be involved in PIM again. I can’t wait to meet everyone again at Akademy in September.
Go check out Kevin’s and Carl’s reports to see what else they have been up to during the sprint.
Did any of the milestones catch your eye, or do you have any questions? Come talk to us in our Matrix channel.
Finally, many thanks to Kevin for organizing the sprint, to Étincelle Coworking for providing us with a nice and spacious venue, and to KDE e.V. for supporting our travel.
While working on keystroke events, I realized my improvements to the event.change property were still inconsistent for certain Unicode characters. This led me to delve into code units, code points, graphemes, and other cool Unicode concepts. I found this blog post to be very enlightening.
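To illustrate the distinction (in Python rather than the PDF JavaScript that Okular actually executes): the same string has different lengths depending on whether you count UTF-16 code units, code points or user-perceived graphemes, which is exactly what makes selStart/selEnd calculations tricky.

```python
# "é" written as e + combining acute accent, followed by an emoji outside the BMP.
s = "e\u0301\U0001F3B8"

print(len(s))                            # 3 code points
print(len(s.encode("utf-16-le")) // 2)   # 4 UTF-16 code units (the emoji is a surrogate pair)

# The user perceives only 2 graphemes; splitting on grapheme clusters needs
# e.g. the third-party "regex" module:
import regex
print(regex.findall(r"\X", s))           # ['é', '🎸']
```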
Here’s an update on my progress over the past two weeks:
MRs merged:
event.change : The change property of the event object now correctly handles Unicode, with adjustments to selStart and selEnd calculations. !MR998
cursor position and undo/redo fix : Fixed cursor position calculations to account for rejected input text and resolved merging issues with undo/redo commands in text fields. !MR1011
DocOpen Event implementation : Enabled document-level scripts to access the event object by implementing the DocOpen event. !MR1003
Executing validation events correctly : Fixed a bug where validation scripts wouldn’t run after KeystrokeCommit scripts in Okular. !MR999
Widget refresh functions for RadioButton, ListEdit and ComboEdit : Added refresh functions as slots for RadioButton, ListEdit, and ComboEdit widgets, aiding in reset functionality and script updates. !MR1012
Additional document actions in Poppler : Implemented reading additional document actions (CloseDocument, SaveDocumentStart, SaveDocumentFinish, PrintDocumentStart, PrintDocumentFinish) in the qt5 and qt6 frontends for Poppler. !MR1561 (in Poppler)
MRs currently under review:
Reset form implementation in qt6 frontend for Okular : Working on the reset form functionality in Okular, currently focusing on qt6 frontend details. !MR1564 (in Poppler)
Reset form in Okular : Using the Poppler API to reset forms within Okular. !MR1007
Fixing order of execution of events for text form fields : Addressing the incorrect execution order of certain events (e.g., calculation scripts) and ensuring keystroke commit, validation, and format scripts are evaluated correctly when setting fields via JavaScript (a rough sketch of this commit-time ordering follows below). !MR1002
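As a rough, hypothetical sketch of that commit-time ordering, based on the Acrobat scripting model; the function names are made up for illustration and this is not Okular’s or Poppler’s actual code:

```python
# Simplified commit flow for a text field; every name here is illustrative.
def commit_field_value(field, new_value):
    # 1. The keystroke script runs one final time with willCommit = true.
    if not field.run_keystroke_script(new_value, will_commit=True):
        return field.value                    # input rejected, keep the old value
    # 2. The validation script may still reject the committed value.
    if not field.run_validation_script(new_value):
        return field.value
    field.value = new_value
    # 3. Calculation scripts of dependent fields are re-run.
    field.document.recalculate_dependent_fields(field)
    # 4. The format script produces the text that is actually displayed.
    field.display_text = field.run_format_script(field.value)
    return field.value
```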
For the coming weeks, my focus will be on implementing reset forms, enhancing keystroke and format scripts, and possibly starting on submit forms. Let’s see how it goes.
This year I again participated in the KDE PIM Sprint in Toulouse. As always, it was really great to meet other KDE contributors and to work together for one weekend. And as you might have seen on my Mastodon account, a lot of food was also involved.
Day 1 (Friday Afternoon)
We started our sprint on Friday with a lunch at the legendary cake place, which I missed last year due to my late arrival.
Picture of some delicious cakes: a piece of raspberry and basil cheesecake, a piece of lemon tart with meringue, and a piece of carrot cake
We then went to the coworking space where we would spend the remainder of the sprint and started working on defining tasks to work on and putting them on a real kanban board.
A kanban board with tasks to discuss and to implement
To get a good summary of the specific topics we discussed, I invite you to consult Kevin’s blog.
That day, aside from the high-level discussions, I ported the IMAP authentication mechanism for Outlook accounts away from the KWallet API to the more generic QtKeychain API. I also removed a large dependency from libkleo (the KDE library for interacting with GPG).
Day 2 (Saturday)
On the second day, we were greeted by a wonderful breakfast (thanks Kevin).
Picture of croissant, brioche and chocolatine
I worked on moving EventViews (the library that renders the calendar in KOrganizer) and IncidenceEditor (the library that provides the event/todo editor in KOrganizer) into KOrganizer itself. This will allow us to reduce the number of libraries in PIM. The plan is also to stop using a complex plugin system for handling calendar invitations and to instead provide the incidence editor as a binary and call that directly. (MR 1, MR 2 and MR 3)
For lunch, we ended up eating at the excellent Mexican restaurant next to the
location of the previous sprint.
Mexican food
I also worked on removing the “Add note” functionality in KMail. This feature allowed storing notes attached to emails following RFC5257. Unfortunately, this RFC never left the EXPERIMENTAL state, and so these notes were only stored in Akonadi and not synchronized with any service.
I also started removing another type of notes: the KNotes app, which provided sticky notes. This application was not maintained anymore and didn’t work so well with Wayland. If you were using KNotes, I added support in Marknote for importing notes from KNotes, to make sure you don’t lose them.
We ended up working quite late and ordering pizzas. I personally got one with a lot of cheese (but no photo this time).
Day 3 (Sunday)
On the final day we didn’t have any breakfast :( but instead a wonderful brunch.
Picture of the brunch
Aside from eating, I started writing a plugin system for the MimeTreeParser, which powers the email viewer in Merkuro and in Kleopatra. In the short term, I want to be able to add Itinerary integration in Merkuro, but in the longer term the goal is to bring this email viewer to feature parity with the one from KMail and then replace KMail’s email viewer with the one from Merkuro. Aside from removing duplicate code, this will improve security, since the individual email parts are isolated from each other, and it will make it easier for the email viewer to follow KDE styling, as it is just normal QML instead of fake HTML components.
I also merged and rebased some WIP merge requests in Marknote in preparation for an upcoming release, and I reviewed merge requests from the others.
Last but not least
If you want to know more or engage with us, please join the KDE PIM and Merkuro Matrix channels! Let’s chat further.
Also, I’d like to thank again Étincelle Coworking and KDE e.V. for making this event possible. It wouldn’t have been possible without a venue and without at least partial support for the travel expenses.
Finally, if you would like such meetings to happen in the future, so that we can push forward your favorite software, please consider making a tax-deductible donation to the KDE e.V. foundation.
This is the first blog post of my GSoC journey. I will be sharing my work and experiences here, so stay tuned for more updates. In this post, I’ll share my experiences from the first two weeks of GSoC: the work I did, the challenges I faced, and how I overcame them (did I really overcome them? :P).
In my first week I tried to understand the Discover codebase by making small changes. So, the first thing I added was a new way of verifying Snap publishers, which is officially supported by Snapcraft. Snapcraft has two tiers of verification:
Hi! It has been over two weeks since the coding period began. In this blog post, I will provide a brief summary of my work during the first two weeks.
After spending some time reviewing the code, I decided to start by refactoring the existing code related to ASS-format subtitles. This has two main goals: first, to enable Kdenlive to read as much information as possible from ASS subtitles (specifically, the features supported by libass) and load it into memory; and second, to ensure that Kdenlive can save all this information back to the file. Since SubtitleModel already contains a significant amount of code for ASS-format subtitles, my work mainly involved refining and expanding upon this existing code while maintaining compatibility.
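For readers unfamiliar with the format, this is roughly the kind of style information involved. A hedged Python sketch (Kdenlive’s actual code is C++ in SubtitleModel) that parses a minimal [V4+ Styles] section:

```python
# Hedged sketch, not Kdenlive code: field names come from the Format: line, so
# the same approach works for both the classic [V4 Styles] and [V4+ Styles] layouts.
ASS_STYLES = """\
[V4+ Styles]
Format: Name, Fontname, Fontsize, PrimaryColour, Bold, Italic
Style: Default,Arial,48,&H00FFFFFF,0,0
"""


def parse_styles(section: str) -> list[dict]:
    fields, styles = [], []
    for line in section.splitlines():
        if line.startswith("Format:"):
            fields = [f.strip() for f in line[len("Format:"):].split(",")]
        elif line.startswith("Style:") and fields:
            values = [v.strip() for v in line[len("Style:"):].split(",")]
            styles.append(dict(zip(fields, values)))
    return styles


print(parse_styles(ASS_STYLES))
# [{'Name': 'Default', 'Fontname': 'Arial', 'Fontsize': '48', ...}]
```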
So far, I have accomplished the following:
Added initial support for reading and saving embedded fonts in ASS subtitles
Optimized the storage method for subtitle styles
Migrated from V4Style to V4+Style
Tasks still to be completed:
Modify m_subtitleList to accommodate more information.
Write unit tests for each feature
Once these steps are completed, we will have more comprehensive support for ASS format subtitles, marking the end of this phase of the ASS code refactoring. The next focus will be on refactoring the functionality for modifying subtitle styles. These two parts will be my primary focus for the next two weeks. Stay tuned!