
Welcome to Planet KDE

This is a feed aggregator that collects what the contributors to the KDE community are writing on their respective blogs, in different languages.

Wednesday, 16 June 2021

Hello!

Akademy starts in a few days, and the Champions and I will be focusing on that. However, there are still some interesting updates we’d like to share with you.

Let’s jump right in!

Akademy 2021 logo

Wayland

With every recent Plasma update (and especially the just-released version 5.22), the list of features that are X11-exclusive gets smaller and smaller.

Conversely, many users may not be aware that the opposite is also happening: every day there are more features available on Wayland that cannot be found on X11!

There are many resources available describing the security advantages of Wayland over X11, but the ageing protocol has some other shortcomings as well. For example, the last update we highlighted was the recently released VRR support in 5.22. Among other things, this enables an important use case for me: it allows each of my connected displays to operate at its highest refresh rate. I have a 144Hz main display, but occasionally I plug in my TV, which works at 60Hz. Because of X11’s limitations, for everything to work, my main display needs to be limited to 60Hz when both of them are active. But not any more, thanks to Wayland!

While the KDE developers always try to bring new functionality to all users, the above example shows that sometimes, either due to X11 limitations or for other reasons, feature parity will not be possible.

For different, but similar reasons, some other features that are Wayland exclusive are:

You can be sure that the list of Wayland exclusives will grow and grow as work progresses.

Méven and Vlad will have a Wayland Goal talk at Akademy. Check out the details here: https://conf.kde.org/event/1/contributions/5/

Consistency

When you think about consistency, you may think of how different parts of your Plasma desktop should look and behave in a similar way, like scrollbars should all look the same on all windows. Or like when you open a new tab, it should always open in the same place, regardless of the app.

But the KDE developers also think about the bigger picture, like: How can we achieve a consistent experience between your desktop and your phone? Here’s where Kirigami comes in! It makes sense to have applications like NeoChat and Tok on both Plasma desktop and Plasma Mobile, and, thanks to the Kirigami framework, users will feel at home on both form factors. Now I want to see Kirigami apps on a smartwatch!

NeoChat desktop and mobile powered by Kirigami

Speaking of Kirigami, there is work being done on a component called “swipenavigator” to increase its - you guessed it - consistency, among other fixes. Details of the rewrite are in the merge request.

Do you care about looks? Then you’ll be interested in two MRs: the first is about better shadows, and the other is the “Blue Ocean” style for buttons, checkboxes, etc. There are some details on Nate’s blog.

Our Consistency Champion Niccolò has a Goal talk during Akademy, so be sure to watch it!

KDE is All About the Apps

As announced on the community mailing list and the Goals matrix room, there was a meeting last Monday to discuss the way forward with the huge list of topics mentioned in the previous update.

In the meeting, the conclusion was to start with the topics regarding the different platforms we support, as well as the automation of the build/release process of apps.

Taking advantage of the upcoming Akademy, the topics will be discussed during the BoF sessions. Check out the schedule to see when you can attend! Also, don’t miss the “Creating Plasma Mobile apps” BoF!

Of course, like the other Goal Champions, Aleix will have a talk on the first day of Akademy, so don’t miss it!

Meta

Right after the three Goal talks at Akademy, there will be a KDE Goals round table, a session where Lydia and I will be joined by the Champions to answer community questions regarding the specific goals, and the goal program as a whole.

Later in the event, on Monday June 21st at 18:00 UTC, I will conduct a BoF about selecting the next goals! Be sure to join in if you’re thinking about becoming a Champion yourself, or if you’re just curious about the process.

See you there!

This is how I would look in the Akademy t-shirt, if Akademy were an in-person event this year. And held outside.

Recently my 4-year-old stepson saw a kid with an RC racing car in a park. He really wanted his own, but with Christmas and his birthday still being a long way away, I decided to solve the “problem” by combining three things I’m really passionate about: LEGO, electronics and programming.

In this short series of blog posts I’ll describe how to build one such car using LEGO, Arduino and a bit of C++ (and Qt, of course!).

LEGO

Obviously, we will need some LEGO to build the car. Luckily, I bought the LEGO Technic Mercedes Benz Arocs 3245 (42043) last year. It’s a big build with lots of cogs, one electric engine and a bunch of pneumatics. I can absolutely recommend it - building the set was a lot of fun and, thanks to the Power Functions, it has high play-value as well. There’s also a fair amount of really good MOCs, especially the MOC 6060 - Mobile Crane by M_longer. But I’m digressing here. :)

Mercedes Benz Arocs 3245 (42043)

The problem with the Arocs is that it only has a single Power Functions engine (99499 Electric Power Functions Large Motor) and we will need at least two: one for driving and one for steering. So I bought a second one of the same kind, although a smaller one would probably do just fine for the steering.

LEGO Power Functions engine (99499)

I started by prototyping the car and the drive train, especially how to design the gear ratios so as not to overload the engine when accelerating while keeping the car moving at a reasonable speed.

First prototype of engine-powered LEGO car

It turns out the 76244 Technic Gear 24 Tooth Clutch is really important, as it prevents the gear teeth from skipping when the engine stops suddenly, or when the car gets pushed around by hand.

76244 Technic Gear 24 Tooth Clutch

Initially I thought I would base the build of the car on some existing designs, but in the end I just started building and ended up with this skeleton:

Skeleton of the first version of the RC car

The two engines are in the middle - the rear one powers the wheels, the front one handles the steering using the 61927b Technic Linear Actuator. I’m not entirely happy with the steering, so I might rework it in the future. I recently got the Ford Mustang (10265), which has a really interesting steering mechanism, and I think I’ll try to rebuild the steering that way.

Wires

58118 Electric Power Functions Extension Wires

We will control the engines from an Arduino. But how to connect the LEGO Power Functions to an Arduino? Well, you just need to buy a bunch of those 58118 Electric Power Functions Extension Wires, cut them and connect them to DuPont cables that can be plugged into a breadboard. Make sure to buy the “with one Light Bluish Gray End” version - I accidentally bought cables which had both ends light bluish, but those can’t be connected to the 16511 Battery Box.

We will need 3 of those half-cut PF cables in total: two for the engines and one to connect to the battery box. You probably noticed that there are 4 connectors and 4 wires in each cable. Wires 1 and 4 are always GND and 9V, respectively, regardless of the position of the switch on the battery pack. Wires 2 and 3 are 0V and 9V or vice versa, depending on the position of the battery pack switch. This way we can control the engine’s rotation direction.

Schematics of PF wires

For the two cables that will control the engines, we need all 4 wires connected to the DuPont cable. For the one cable that will be connected to the battery pack, we only need the outer wires to be connected, since we will only use the battery pack to provide the power - we will control the engines using the Arduino and an integrated circuit.

I used a glue gun to connect the PF wires and the DuPont cables, which works fairly well. You could solder them instead if you have a soldering iron, but the glue also works as an insulator to prevent the wires from short-circuiting.

LEGO PF cable connected to DuPont wires

This completes the LEGO part of this guide. Next comes the electronics :)

Arduino

To remotely control the car we need some electronics on board. I used the following components:

  • Arduino UNO - to run the software, obviously
  • HC-06 Bluetooth module - for remote control
  • 400-pin breadboard - to connect the wiring
  • L293D integrated circuit - to control the engines
  • 1 kΩ and 2 kΩ resistors - to reduce voltage between Arduino and BT module
  • 9V battery box - to power the Arduino board once on board the car
  • M-M DuPont cables - to wire everything together

The total price of those components is about €30, which is still less than what I paid for the LEGO engine and PF wires.

Let’s start with the Bluetooth module. There are some really nice guides online on how to use them, so I’ll only describe it quickly here. The module has 4 pins: RX, TX, GND and VCC. GND can be connected directly to the Arduino’s GND pin. VCC is the power supply for the Bluetooth module; you can connect it to the 5V pin on the Arduino. Now for the TX and RX pins. You could connect them to the RX and TX pins on the Arduino board, but that makes it hard to debug the program later, since all output from the program would go to the Bluetooth module rather than our computer. Instead, connect them to pins 2 and 3. Warning: you need to use a voltage divider for the RX pin, because the Arduino operates on 5V, but the HC-06 module operates on 3.3V. You can do this by putting a 1 kΩ resistor between Arduino pin 3 and HC-06 RX and a 2 kΩ resistor between Arduino GND and HC-06 RX.
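
If you want to test the Bluetooth link on its own, a minimal Arduino sketch (just an illustration, not part of the wiring above) can talk to the HC-06 through SoftwareSerial on those two pins. The 9600 baud rate is only the usual HC-06 factory default, so adjust it if your module is configured differently:

#include <SoftwareSerial.h>

// HC-06 TX -> Arduino pin 2, HC-06 RX -> Arduino pin 3 (through the voltage divider)
SoftwareSerial bluetooth(2, 3); // RX, TX from the Arduino's point of view

void setup() {
    Serial.begin(9600);    // USB serial stays free for debugging output
    bluetooth.begin(9600); // HC-06 modules usually ship configured for 9600 baud
}

void loop() {
    // Echo anything received over Bluetooth to the serial monitor
    if (bluetooth.available()) {
        Serial.write(bluetooth.read());
    }
}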

Next up is the L293D integrated circuit. This circuit will allow us to control the engines. While in theory we could hook the engines up directly to the Arduino board (there are enough free pins), in practice it’s a bad idea. The engines need 9V to operate, which is a lot of power drain for the Arduino circuitry. Additionally, it would mean that the Arduino board and the engines would both be drawing power from the single 9V battery used to power the Arduino.

Instead, we use the L293D IC: we connect an external power source (the LEGO Battery pack in our case) to it as well as the engines, and use only a low-voltage signal from the Arduino to control the current flowing from the external power source to the engines (very much like a transistor). The advantage of the L293D is that it can control up to 2 separate engines and it can also reverse the polarity, allowing us to control the direction of each engine.

Here’s the schematic of the L293D:

L293D schematics

To sum it up, pin 1 (Enable 1,2) turns on the left half of the IC, and pin 9 (Enable 3,4) turns on the right half of the IC. Hook them up to the Arduino's 5V pin. Do the same with pin 16 (VCC1), which powers the overall integrated circuit. The external power source (the 9V from the LEGO Battery pack) is connected to pin 8 (VCC2). Pin 2 (Input 1) and pin 7 (Input 2) are connected to the Arduino and are used to control the engines. Pin 3 (Output 1) and pin 6 (Output 2) are output pins that are connected to one of the LEGO engines. On the other side of the circuit, pin 10 (Input 3) and pin 15 (Input 4) are used to control the other LEGO engine, which is connected to pin 11 (Output 3) and pin 14 (Output 4). The remaining four pins in the middle (4, 5, 12 and 13) double as ground and heat sink, so connect them to GND (ideally both the Arduino and the LEGO battery GND).

Since we have the 9V LEGO Battery pack connected to VCC2, sending 5V from the Arduino to Input 1 and 0V to Input 2 will put 9V on Output 1 and 0V on Output 2 (the engine will spin clockwise). Sending 5V from the Arduino to Input 2 and 0V to Input 1 will put 9V on Output 2 and 0V on Output 1, making the engine rotate counterclockwise. The same goes for the other side of the IC. Simple!
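
To make that behaviour concrete, here is a small sketch that drives one engine back and forth through the L293D. The pin numbers 5 and 6 for Input 1 and Input 2 are only an assumption for this example - use whatever Arduino pins you actually wired them to:

const int input1 = 5; // L293D Input 1 (assumed wiring)
const int input2 = 6; // L293D Input 2 (assumed wiring)

void setup() {
    pinMode(input1, OUTPUT);
    pinMode(input2, OUTPUT);
}

void loop() {
    digitalWrite(input1, HIGH); // 5V on Input 1, 0V on Input 2:
    digitalWrite(input2, LOW);  // the engine spins one way
    delay(2000);

    digitalWrite(input1, LOW);  // swap the levels and the engine
    digitalWrite(input2, HIGH); // spins the other way
    delay(2000);

    digitalWrite(input1, LOW);  // both inputs low: the engine stops
    digitalWrite(input2, LOW);
    delay(2000);
}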

Photo of all electronic components wired together

Conclusion

I also built a LEGO casing for the Arduino board and the breadboard to attach them to the car. With some effort I could probably rebuild the chassis to allow the casing to “sink” lower into the construction.

Photo of LEGO car with the electronics on board

The battery packs (the LEGO Battery box and the 9V battery case for the Arduino) are nicely hidden in the middle of the car, on the sides next to the engines.

Photo of LEGO Battery Box Photo of Arduino 9V battery case

Now we are done with the hardware side - we have a LEGO car with two engines and all the electronics wired together and hooked up to the engines and battery. In the next part we will start writing software for the Arduino board so that we can control the LEGO engines programmatically. Stay tuned!

Tuesday, 15 June 2021

Every so often, new pics appear from developer builds of Windows, or even leaks such as the recent Windows 11 preview screenshots. More or less every time this happens, there are comments from the Linux side that Windows is copying KDE Plasma – a desktop environment that is, granted, among the most similar… Continue Reading →

Once again I plan to be at Akademy. I almost silently attended last year’s edition. OK… I had a talk there but didn’t blog. I didn’t even post my traditional sketchnotes post. I plan to do better this year.

I’ll try to sketchnote again, we’ll see how that works out. Oddly enough, I might do the 2020 post after the 2021 one. 😀

This year I’ll also be holding a training and a couple of talks. Last but not least, I’ll attend the KF6 BoF. I’ll see if I can attend a couple more, but that’ll mainly depend on how compatible they are with the rest of my schedule.

Also, I’m particularly proud to be joined by a couple of colleagues from enioka Haute Couture. Without further ado here is where you will or might find us:

  • Friday 18 June, starting at 18:00 CEST, I’ll be holding a 4-hour (!) training about the KDE Stack. If you’re interested in getting a better understanding of how KDE has built the stack for its applications and workspaces, but also how all the pieces are tied together, this will be the place to be;

  • Saturday 19 June, at 12:20 CEST, my colleague Méven Car will give an update about the Wayland Goal, he’ll be joined by Vlad Zahorodnii;

  • Following up his talk, at 13:00 CEST, Méven will also participate in the KDE Goals roundtable;

  • Still the same day, at 21:00 CEST, I’ll be on stage again to talk about KDE Frameworks Architecture; I’ll go back to how it’s structured in KF5 and will propose a potential improvement for KF6;

  • On Monday 20 June, a bunch of eniokians will participate in the KDE e.V. general assembly;

  • Sometime during the week I’ll participate in the KF6 BoF (not scheduled yet at the time of writing); obviously I’ll be interested in discussing the ideas from my talk with the rest of the KDE Frameworks contributors;

  • And finally, Friday 25 June, at 19:00 CEST, I’ll be joined by my colleague Christelle Zouein for our talk about community data analytics. We’ve got a bunch of new toys to play with thanks to Christelle’s work and the community’s move towards GitLab, and we’ll show some results for the first time.

Of course it also means I’m on my way to… ah well… no, I’m not on my way. I’ll just hook up my mic and camera like everyone else. See you all during Akademy 2021!

Bug triaging is a largely invisible and often thankless task. But it’s the foundation of quality in our software offerings. Every day, our users file between 30 and 50 bug reports on https://bugs.kde.org, and often up to 100 right after a big release! Many will be duplicates of pre-existing issues and need to be marked as such. Quite a few will be caused by issues outside of KDE’s control, and these also need to be marked as such. Many will be crash reports with missing or useless backtraces, and their reporters need to be asked to add the missing information to make the bug report actionable. And the rest need to be prioritized, moved to the right component, tagged appropriately, and eventually fixed.

All of this sounds pretty boring. And, to be honest, sometimes it is (I’m really selling this, right?). But it’s critically important to everything we do. Because when it’s not done properly:

  1. Users don’t feel listened to, and start trashing us and our software on social media.
  2. Critical regressions in new releases get missed and are still visible when reviewers check out the new version, so they also trash it in high-profile tech articles and videos.
  3. Un-actionable bug reports pile up and obscure real issues, so developers are less likely to notice them and fix them.
  4. Bugs that occur over and over again don’t accumulate dozens of duplicates, don’t look important enough to prioritize, and therefore don’t get fixed.
  5. Easy-to-fix bugs don’t get fixed by anyone and it’s pretty embarrassing.

Do you see a pattern? Most of these results end up with KDE software being buggier and KDE’s reputation being damaged. It’s not an accident that KDE’s software is less buggy than ever before and that we enjoy a good reputation today. These positive developments are driven by everyone involved, but they rest upon an invisible foundation of good bug triage. And as KDE software becomes more popular, users file more bug reports. So the need for bug triage constantly grows. Currently it is done by just a few people, and we need help. Your help! And it will truly be helpful! If you are a meticulous, detail-oriented person with some technical inclination but no programming ability, triaging bug reports may just be the best way to help KDE. If this sounds like something you’d like to get involved with, go over to https://community.kde.org/Guidelines_and_HOWTOs/Bug_triaging and give it a read! I would be happy to offer personal bug triaging mentorship, too. Just click the “Contact me” link here or at the top of the page and I’ll help you get started.

Like most online manuals, the Krita manual has a contributor’s guide. It’s filled with things like “who is our assumed audience?”, “what is the dialect of English we should use?”, etc. It’s not a perfect guide, outdated in places, definitely, but I think it does its job.

So, sometimes I, who officially maintain the Krita manual, look at other projects’ contributor guides. And usually what I find there is…

Style Guides

The purpose of a style guide is to obtain consistency in writing over the whole project. This can make the text easier to read and simpler to translate. In principle, the Krita manual also has a style guide, because it stipulates you should use American English. But when I find style guides in other projects, they’re often filled with things like “active sentences, not passive sentences”, and “use the Oxford comma”.

Active sentences versus passive sentences always gets me. What it means is the difference between “dog bites man” and “man is bitten by dog”. The latter sentence is the one in the passive voice. There’s nothing grammatically incorrect about it. It’s a bit longer, sure, but on the other hand, there’s value in being able to rearrange the sentence like that. For a more Krita specific example, consider this:

“Pixels are stored in working memory. Working memory is limited by hardware.”

“Working memory stores the pixels. Hardware limits the working memory.”

The first example is two sentences in the passive voice, the latter two in the active. The passive voice example is longer, but it is also easier to read, as it groups the concepts together and introduces new concepts later in the paragraph. Because we grouped the concepts, we can even merge the sentences:

“Pixels are stored in working memory, which is limited by hardware.”

But surely, if so many manuals have this in their guide, maybe there is a reason for it? No, the reason it’s in so many manuals’ style guides is that other manuals have it there. And the reason other manuals have it there is that magazines and newspapers have it there. And the reason they have it is that it is recommended by style guides like The Elements of Style. There is some(?) value for magazines and newspapers in avoiding the passive voice because it tends to result in longer sentences than the active voice, but for electronic manuals, I don’t see the point of worrying about things like these. We have an infinite word count, so maybe we should just use that to make the text easier to read?

The problem of copying style rules like this is also obfuscated by the fact that a lot of people don’t really know how to write. In a lot of those cases, the style guide seems to be there to allow role-playing that you are a serious documentation project, if not a case of ‘look busy’, and it can be very confusing to the person being proofread. I accepted the need for active voice in my university papers because I figured my teachers wanted to help me lower my word count. I stopped accepting it when I discovered they couldn’t actually identify the passive voice, pointing at paragraphs that needed no work.

This kind of insecurity-driven proofreading becomes especially troublesome when you consider that sometimes “incorrect” language is caused by the writer using a dialect. It makes sense to avoid dialect in documentation, as dialects contain specific language features that not everyone may know, but it’s another thing entirely to tell people their dialect is wrong. So in these cases, it’s imperative the proofreader knows why certain rules are in place, so they can communicate why something should be changed without making the dialect speaker insecure about their language use.

Furthermore, a lot of such style guide rules are filled with linguistic jargon, which is abstract and often derived from Latin. People who are not confident in writing will find such terms very intimidating, as well as hard to understand, and this in turn leads to people being less likely to contribute. In a lot of those cases, we can actually identify the problems in question via a computer program. So maybe we should just do that, and not fill our contributor’s guide with scary linguistic terms?

A section of the animation curves reStructuredText as shown in the gitlab UI. Several words are bolded.
One of our animation programmers, Emmet, has a tendency to bold words in the text, which helps the reader find their way in large pieces of text. We do this nowhere else in the manual, but I’m okay with it? The only thing that bothers me is that the markup used is the one for strong/bold, even though the sentences in question have clear semantic reasons to be highlighted like this. What this means is that it won’t work really well with a screen reader (markup for bold/strong tends to be spoken with a lot of force by these readers), but this can be solved by finding proper semantic markup for it. Overall, this is done with the reader’s comfort in mind, so I don’t see why we shouldn’t spend time on getting this to work instead of worrying that it’s non-standard.

LanguageTool

Despite my relaxed approach to proofreading, I too have points at which I draw the line. In particular, there are things like typos, missing punctuation and errant white-spaces. All of these are pretty uncontroversial.

In the past, I’d occasionally run LanguageTool over the text. LanguageTool is a Java-based style and grammar checker licensed under LGPL 2.1. It has a plugin for LibreOffice, which I used a lot when writing university papers. However, by itself LanguageTool cannot recognize markup. To run it over the Krita documentation, I had to first run the text through pandoc to convert it from reStructuredText to plain text, which was then fed to the LanguageTool jar.

I semi-automated this task via a bash script:

#!/bin/sh

# Run this file inside the language tool folder.
# First argument is the folder, second your mother tongue.
for file in $(find $1 -iname "*.rst");
do
    pandoc -s $file -f rst -t plain -o ~/checkfile.txt 
    echo $file
    echo $file >> ~/language_errors.txt
    # Run language tool for en-us, without multiple whitespace checking and without the m-dash suggestion rule, using the second argument as the mother tongue to check for false friends.
    java -jar languagetool-commandline.jar -l en-US -m $2 --json -d WHITESPACE_RULE,DASH_RULE[1] ~/checkfile.txt >> ~/language_errors.txt
    rm ~/checkfile.txt
done

This worked pretty decently, though there were a lot of false positives (mitigated a bit by turning off some rules). It was also always a bit of a trick to find the precise location of the error, because the conversion to plaintext changed the position of the error.

I had to give up on this hacky method when we started to include Python support, as that meant Python code examples. And there was no way to tell pandoc to strip the code examples. So in turn that meant there were just too many false positives to wade through.

There is a way to handle markup, though, and that’s by writing a Java wrapper around LanguageTool that parses through the marked-up text, and then tells LanguageTool which parts are markup and which parts can be analyzed as text. I kind of avoided doing this for a while because I had better things to do than to play with regexes, and my Java is very rusty.

One of the things that motivated me to look at it again was the appearance of the code quality widget in the GitLab release notes. Because one of my problems is that notions of “incorrect” language can be used to bully people, I was looking for ways to indicate that everything LanguageTool puts out is considered a suggestion first and foremost. The code quality widget is just a tiny widget that hangs out underneath the merge request description and says how many extra mistakes the merge request introduces; it is intended to be used with static analysis tools. It doesn’t block the MR, it doesn’t confuse the discussion, and it takes a JSON input, so I figured it’d be the ideal candidate for something as trivial as style mistakes.

So, I started up Eclipse, followed the instructions on using the Java API (with an intermission in which I realized I had never used Maven and needed a quick tutorial), and I started writing regular expressions.

Reusing KSyntaxHighlighter?

So, people who know KDE’s many frameworks know that we have a collection of assorted regexes and similar for a wide variety of markup systems and languages, KSyntaxHighlighter, and it has support for reStructuredText. I had initially hoped I could just write something that could take the rest.xml file and use that to identify the markup for LanguageTool.

Unfortunately, the regex needs of KSyntaxHighlighter are very different from the ones I have for LanguageTool. KSyntax needs to know whether we have entered a certain context based on the markup, but it doesn’t really need to identify the markup itself. For example, the markup for strong in reStructuredText is **strong**.

The regular expression to detect this in rest.xml is \*\*[^\s].*\*\*, translated: Find a *, another *, a character that is not a space, a sequence of zero or more characters of any kind, another * and finally *.

What I ended up needing is: "(?<bStart>\*+?)(?<bText>[^\s][^\*]+?)(?<bEnd>\*+?)", translated: Find group of *, name it ‘bStart’, followed by a group that does not start with a space, and any number of characters after it that is not a *, name this ‘bText’, followed by a group of *, name this ‘bEnd’.

The bStart/bText/bEnd names allow me to append the groups separately to the AnnotatedTextBuilder:


if (inlineMarkup.group("bStart") != null) {
    // The opening ** marker is markup, so LanguageTool should skip it...
    builder.addMarkup(line.substring(inlineMarkup.start("bStart"), inlineMarkup.end("bStart")));
    // ...the text between the markers is handled further and ends up as checkable text...
    handleReadingMarks(line.substring(inlineMarkup.start("bText"), inlineMarkup.end("bText")));
    // ...and the closing ** marker is markup again.
    builder.addMarkup(line.substring(inlineMarkup.start("bEnd"), inlineMarkup.end("bEnd")));
}

So I had to abandon adopting the KSyntaxHighlighter format for this and do my own regexes.

Results

Eventually, I had something that worked. I managed to get it to write the errors it found to a JSON file that should work with the code quality widget. I also implemented an accepted-words list, which at the very least took a third off the initial set of errors. I’ve managed to get it to find about 105 errors in the 5000-word KritaFAQ, most of which are misspelled brand names, but it also found missing commas and errant white-spaces.

A small sample of the error output:

{
    "severity": "info",
    "fingerprint": "docs-krita-org/KritaFAQ.rst:8102:8106",
    "description": "Did you mean <suggestion>Wi-Fi<\/suggestion>? (This is the officially approved term by the Wi-Fi Alliance.) (``wifi``)",
    "check_name": "WIFI[1]",
    "location": {
      "path": "docs-krita-org/KritaFAQ.rst",
      "position": {
        "end": {"line": 176},
        "begin": {"line": 176}
      },
      "lines": {"begin": 176}
    },
    "categories": ["Style"],
    "type": "issue",
    "content": "Type: Other, Category: Possible Typo, Position: 8102-8106 \n\nIt might be that your download got corrupted and is missing files (common with bad wifi and bad internet connection in general), in that case, try to find a better internet connection before trying to download again.  \nProblem: Did you mean <suggestion>Wi-Fi<\/suggestion>? (This is the officially approved term by the Wi-Fi Alliance.) \nSuggestion: [Wi-Fi] \nExplanation: null"
  },
  {
    "severity": "info",
    "fingerprint": "docs-krita-org/KritaFAQ.rst:8379:8388",
    "description": "Possible spelling mistake found. (``harddrive``)",
    "check_name": "MORFOLOGIK_RULE_EN_US",
    "location": {
      "path": "docs-krita-org/KritaFAQ.rst",
      "position": {
        "end": {"line": 177},
        "begin": {"line": 177}
      },
      "lines": {"begin": 177}
    },
    "categories": ["Style"],
    "type": "issue",
    "content": "Type: Other, Category: Possible Typo, Position: 8379-8388 \n\nCheck whether your harddrive is full and reinstall Krita with at least 120 MB of empty space.  \nProblem: Possible spelling mistake found. \nSuggestion: [hard drive] \nExplanation: null"
  },
  {
    "severity": "minor",
    "fingerprint": "docs-krita-org/KritaFAQ.rst:8546:8550",
    "description": "Use a comma before 'and' if it connects two independent clauses (unless they are closely connected and short). (`` and``)",
    "check_name": "COMMA_COMPOUND_SENTENCE[1]",
    "location": {
      "path": "docs-krita-org/KritaFAQ.rst",
      "position": {
        "end": {"line": 177},
        "begin": {"line": 177}
      },
      "lines": {"begin": 177}
    },
    "categories": ["Style"],
    "type": "issue",
    "content": "Type: Other, Category: Punctuation, Position: 8546-8550 \n\nIf not, and the problem still occurs, there might be something odd going on with your device and it's recommended to find a computer expert to diagnose what is the problem.\n \nProblem: Use a comma before 'and' if it connects two independent clauses (unless they are closely connected and short). \nSuggestion: [, and] \nExplanation: null"
  }

There are still a number of issues. Some markup is still not processed, I need to figure out how to calculate the column, and I’m simply unhappy with the command-line arguments (they’re positional-only right now).

One of the things I am really worried about is the severity of errors. Like I mentioned before, dialects often get targeted by things that determine “incorrect” language, and LanguageTool does have rules that target slang and dialect. Similarly, people tend to take suggestions from computers more readily, without question, so I’ll need to introduce some configuration.

  1. Configuration to turn rules on and off.
  2. Errors that are uncontroversial should be marked higher, so that people are less likely to assume all the errors should be fixed.

But that’ll be at a later point…

Now, you might be wondering: “Where is the actual screenshot of this thing working in the GitLab UI?” Well, I haven’t gotten it to work there yet. Partially because the manual doesn’t have CI implemented yet (we’re waiting for KDE’s servers to be ready), and partially because I know nothing about CI and have barely got an idea of Java, and am kinda stuck?

But I can run it for myself now, so I can at least do some fixes myself. I put the code up here; bear in mind I don’t remember how to use Java at all, so if I am committing Java sins, please be patient with me. Hopefully, if we can get this to work, we can greatly simplify how we handle style and grammar mistakes like these during review, as well as simplify contributor’s guides.

After many struggles with using git LFS on repositories that need to store big files, I decided to spend some time on checking the status of the built-in partial clone functionality that could possibly let you achieve the same (as of git 2.30).

TL;DR: The building blocks are there, but server-side support is spotty and critical tooling is missing. It’s not very usable yet, but it’s getting there.

How partial clone works

Normally, when you clone a git repository, all file versions throughout the whole repository history are downloaded. If you have multiple revisions of multi-GB binary files, as we have in some projects, this becomes a problem.

Partial clone lets you download only a subset of the objects in the repository and defer downloading the rest until needed. Most of the time, that means at checkout.

For example, to clone a repository with blobs in only the latest version of the default branch, you can do as follows:

git clone --filter=blob:none git@example.com:repo.git

The --filter part is crucial; it tells git what to include/omit. It’s called a filter-spec across the git docs. The specifics of how you can filter are available in git help rev-list. You can filter based on blob size, location in the tree (slow! – guess why), or both.
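
For example, a size-based filter looks like this (the 1 MiB threshold is only an illustration; see git help rev-list for the full filter-spec syntax):

git clone --filter=blob:limit=1m git@example.com:repo.git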

The remote from which you cloned will be called a “promisor” remote, because it promises to fulfill requests for missing objects when they are requested later:

[remote "origin"]
    url = git@example.com:repo.git
    fetch = +refs/heads/*:refs/remotes/origin/*
    promisor = true
    partialclonefilter = blob:limit=1048576 # limited to 1M

As you change branches, the required files will be downloaded on-demand during checkout.

Below is a video of a partial checkout in action. Notice how the actual files are downloaded during the checkout operation, and not during clone:

Comparison

I checked out the Linux kernel from the GitHub mirror, through a regular and a partial clone, and recorded some statistics:

As you can see, there are some tradeoffs. Checking out takes longer because the actual file content has to be downloaded, not just copied from the object store. There are savings in terms of initial clone time and repository size because you’re not storing a copy of various driver sources deprecated since the late 90s. The gains would be even more pronounced in repositories that store multiple versions of big binary files. Think evolving game assets or CI system output.

So what are the problems?

Missing/incomplete/buggy server support

The server side needs to implement the git v2 protocol. Many servers don’t do it yet, or do it in a limited manner.

No cleanup tool

As you check out new revisions with big files and download them, you will end up with lots of old data from previous versions, because it’s not cleaned up automatically. Git LFS has the git lfs prune tool; no such thing exists yet for partial clones. See this git mailing list thread.

No separate storage of big files on server side (yet)

Since you want server-side operations to happen quickly, it’s best to store the git repository on very fast storage, which also happens to be expensive. It would be nice to store big files that don’t really need fast operations (you won’t do diffs on textures or sound files server-side) separately.

Christian Couder of GitLab is working on something around this. It’s already possible to have multiple promisor remotes queried in succession. For example, there could be a separate promisor remote backed by a CDN or cloud storage (e.g. S3). However, servers will need to learn how to push the data there when users push their trees.

See this git mailing list thread.

Generally cumbersome UX

Since everything is fresh, you need to add some magical incantations to git commands to get it working. Ideally, some “recommended” filter should be stored server-side, so that users don’t have to come up with a filter-spec on their own when cloning.

Resources

Below are some useful links, if you’d like to learn more about partial cloning in git:

Currently, a lot of effort around partial cloning is driven by Christian Couder of GitLab. You can follow some of the development under the following links:

If you would like to learn Git, KDAB offers an introductory training class.

About KDAB

If you like this article and want to read similar material, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

The post State of Native Big File Handling in Git appeared first on KDAB.

For years, most development discussion for Krita has happened on the #krita channel on the Freenode IRC network. IRC is a venerable chat system (that’s to say, it’s old and quirky) but it works very well for us because it’s free and open source software and because it treats chat as chat: it doesn’t keep logs for you if you’re not in the channel, there are many clients and interaction is simple and limited to just text.

However, the freenode IRC network is no longer a good host for our development work. The people currently managing the network are doing very strange things, and the people who used to manage the network have created a new network, libera.chat.

From today, if you want to chat to Krita’s developers and contributors, you’ll need to join the #krita channel on libera.chat.

Bridges with the Matrix network (a different, newer, more complex chat system) are in the works, and sometimes work, but sometimes don’t. This means that if you use Matrix to join the IRC channel, people will probably see nothing of what you’re saying.

The post Developer chat moving appeared first on Krita.

Qt's networking code has always been one of its more obtuse parts, requiring the use of signals for something that didn't quite seem right for them. A linear flow of code would become a jumbled mess of member functions and signal connections.

When developing Challah's netcode, I quickly realised this wasn't going to suffice for the large amount of it I was going to be writing. Thus began my journey through signal hell, hitting many bumps before I discovered that integrating Qt's networking stuff with coroutines is possible.

Stop Zero

If you're just here to see some cool coroutine code, you can look at https://invent.kde.org/-/snippets/1711 for a single-file version of the tutorial in this blog post, with a coroutine function already made and ready for you to play with. If you want something more full-fledged, look at https://invent.kde.org/cblack/coroutines.

But if you want to know what's actually going on behind the scenes, continue reading.

Stop One: Don't Do This

The first approach I took to get out of signal hell was a simple while loop:

while (!reply->isFinished()) {
    QCoreApplication::processEvents();
}

This is a bad idea. Strange graphical glitches, bugs, crashes, etc. lie behind this innocuous code. Don't do it. Forcibly spinning the event loop in a lot of places causes a lot of bugs.

Step Two: Callback Hell

So, what do you do if you want to write somewhat linear code without doing that? Callbacks.

QObject::connect(val, &QNetworkReply::finished, [val, callback]() {
    if (val->error() != QNetworkReply::NoError) {
        val->deleteLater();
        callback({QStringLiteral("network failure(%1): %2").arg(val->error()).arg(val->errorString())});
        return;
    }
    
    auto response = val->readAll();
    
    protocol::chat::v1::CreateGuildResponse ret;
    if (!ret.ParseFromArray(response.constData(), response.length())) {
        val->deleteLater();
        callback({QStringLiteral("error parsing response into protobuf")});
        return;
    }
    
    val->deleteLater();
    callback({ret});
    return;
});

Every time I had to call into the netcode, I would have to provide a std::function<void(std::variant<Result,Error>)>. It's passable for a single call, but chaining them quickly violates “happy path sticks to the left”, making you wonder if you're actually writing Python with how far indented your code is.

c->call([c](auto response) {
  c->call([c](auto response) {
    c->call([c](auto response) {
      c->call([c](auto response) {
        c->call([c](auto response) {
        });
      });
    });
  });
});

Not good.

Step Three: Out of the Inferno, Into the Frying Pan

await/async as a mechanic in languages generally reduces the amount of callback hell you face by sugaring it for you. For example, in JS, this:

return fetch().then((it) => {
    return it.text()
})

is equivalent to this:

return (await fetch()).text()

Both of these return the exact same thing to the caller, but one is much easier to deal with when chained.

For a while, C++ didn't have this. Of course, with C++20, coroutines with co_await & company are now available* in most compilers.

Diversion: That Asterisk

Coroutines are technically available in most compilers, but you're going to have to do a lot of compiler-specific dances in both your code and the build system in order to pass the correct flags to enable coroutines.

I use this snippet to handle clang & gcc:

#if defined(__clang__)

#include <experimental/coroutine>
namespace ns = std::experimental;

#elif defined(__GNUC__)

#include <coroutine>
namespace ns = std;

#endif

where ns is aliased to the namespace containing coroutine-related stuff. Also note that clang will probably fail to compile stuff due to experimental/coroutine being part of libc++ while the rest of your system probably uses gcc's libstdc++. So, for all intents and purposes, gcc is your only option with coroutines on Linux.

Anyways...

Unhelpful Documentation

When hacking on coroutines, I quickly realised that the available documentation on what they were was both young and immature, as well as unhelpful in what it did have. So, this blog post is going to document them.

Parts of the Coroutine

Coroutines are largely split into two parts: the “promise” and the “awaitable”.

The promise is responsible for handling the “internal” side of any coroutine: it's what provides the return value, its yield behaviour, handling unhandled exceptions, the “await transformer”, and some other things. All coroutines need to return a type associated with one of these.

The awaitable handles the “external” side of the coroutine being co_awaited: it checks with the outside world whether or not the coroutine needs to suspend, listening to the outside world and reactivating the coroutine when it's ready, and providing the value received from the coroutine's completion.

As far as stuff that's directly interacting with coroutines goes, that's mostly it. However, there's still one type you need for an end-user API: the type returned by a coroutine, which is what the compiler uses to match your coroutine to a promise type.

SomeType a_coroutine() {
    auto foo = co_await another_thing();
    // ...
} 

This SomeType is what I will be calling the “future” or the “task” type, as it's basically the QFuture of the QPromise (or QFutureInterface to speak in Qt5 parlance).

C++ coroutines are very much “bring your own runtime and types”, and thankfully, that's what Qt already has, making it perfect for integrating into coroutines.

A QML-friendly future

C++ coroutines are the perfect fix to a long-standing issue in QML: representing asynchronous results in the UI in a manner that an object property or model data can't.

So, let's get to constructing the base of our coroutines, and a type that will be suitable for passing into QML.

For ease of use, we'll want this done as a copyable Q_GADGET type with implicitly shared data. This will help us later down the road when we're passing the future type through lambdas.

(For the sake of simplicity of teaching how coroutines work, I'll be doing a simple non-template future type that talks in QVariants and doesn't have success/failure states. If you want to see code with more type safety and success/failure states, you can check out https://invent.kde.org/cblack/coroutines.)

We'll start by defining our shared data & behaviour.

class Future
{
    Q_GADGET

    struct Shared {
        QVariant result;
        std::function<void(QVariant)> then = [](QVariant) {};
        bool done = false;
    };
    QSharedPointer<Shared> d;

public:

    Future() {
        d.reset(new Shared);
    }
    Future(const Future& other) {
        this->d = other.d;
    }
};

This gives us a Future that is effectively a blob. First, let's define getters for done and result, so the outside world can actually tell what the current state of the Future is:

bool settled() const {
    return d->done;
}
QVariant result() const {
    return d->result;
}

Now we need a way to say that the future is completed and a result is available.

void succeed(const QVariant& value) const {
    if (d->done) {
        return;
    }
    d->done = true;
    d->result = value;
    d->then(d->result);
}

We don't want to trigger the callback again if we've already marked the future as done, so we abort early if we're already done.

You may notice that this function, despite being a setter semantically, is marked as const. This is mostly for ease of dealing with in lambdas later on.

Now, we need a way to register the callback function in the C++ side of things.

void then(std::function<void(QVariant)> then) const {
    d->then = then;

    if (d->done) {
        then(result());
    }
}

If the future is already done and you register a callback, you want to call it immediately. This is to handle situations like this:

Future fetch() {
  if (cache.has_thing()) {
    Future future;
    future.succeed(cache.get_thing());
    return future;
  }
  // ...
}
int main() {
    fetch().then([](QVariant) {
    });
}

If we didn't invoke the callback in then(), the callback would never be triggered as the future was returned to the caller in an already succeeded status.

Since I promised a QML-friendly future type, we should implement that now.

Q_INVOKABLE void then(const QJSValue& it) {
    then([va = it](QVariant result) mutable {
        va.call({va.engine()->toScriptValue(result)});
    });
}

We add a Q_INVOKABLE overload that takes a QJSValue, as JS functions are represented as QJSValues in C++. We then have a lambda which we pass back into the other then function, taking the QVariant, transforming it into a QJSValue, and calling the JavaScript callback with the variant.

JS usage looks like this:

future.then((it) => {
    console.warn(it)
})

If you've been following along, you should now have this class:

class Future
{

    Q_GADGET

    struct Shared {
        QVariant result;
        std::function<void(QVariant)> then = [](QVariant) {};
        bool done = false;
    };
    QSharedPointer<Shared> d;

public:

    Future() {
        d.reset(new Shared);
    }
    Future(const Future& other) {
        this->d = other.d;
    }

    bool settled() const {
        return d->done;
    }
    QVariant result() const {
        return d->result;
    }

    void succeed(const QVariant& value) const {
        if (d->done) {
            return;
        }
        d->done = true;
        d->result = value;
        d->then(d->result);
    }

    void then(std::function<void(QVariant)> then) const {
        d->then = then;

        if (d->done) {
            then(result());
        }
    }
    Q_INVOKABLE void then(const QJSValue& it) {
        then([va = it](QVariant result) mutable {
            va.call({va.engine()->toScriptValue(result)});
        });
    }
};

The Backing Promise

Of course, the above class doesn't make a coroutine. For this, we need to implement the promise type.

For placement options, you have two places where you can put this: inside the future type itself, or outside of the type in an explicit coroutine_traits template specialisation.

The former would look like this:

class Future {
   // ...
   struct promise_type {
   };
};

While the latter would look like this:

template<typename ...Args>
struct ns::coroutine_traits<Future, Args...> {
    struct promise_type {
    };
};

(where ns is the namespace with the coroutine types, std:: on gcc or std::experimental on clang.)

The latter allows you to implement a promise type for any type, not just one that you have control of the declaration of.

For this blog post, I'll be going with the latter approach.

struct promise_type {
};

One of the things we'll need in our promise type is a member variable to hold the future type.

struct promise_type {
    Future _future;
};

Of course, the compiler doesn't know about this member variable, so we need to start implementing the promise_type interface:

struct promise_type {
    Future _future;
    Future get_return_object() noexcept {
        return _future;
    }
};

T get_return_object() noexcept is the exact type signature that needs to be implemented. We use noexcept here, as exceptions can cause a double free, and therefore, a crash.

Now, we need to implement two things: initial_suspend() and final_suspend(). These two functions are called at the start and end of your coroutine, and have to return a co_awaitable type.

In an expanded form, initial_suspend and final_suspend look like this:

Future your_coroutine() {
    co_await promise.initial_suspend();
    // coroutine body here...
    co_await promise.final_suspend();
}

You could theoretically return anything you want here, but you'll likely be sticking with ns::suspend_never, which is an awaitable value that resumes execution of the coroutine immediately when co_awaited, and what we'll be using for this blog post.

Add the implementation to your promise type:

ns::suspend_never initial_suspend() const noexcept { return {}; }
ns::suspend_never final_suspend() const noexcept { return {}; }

Your promise is responsible for taking care of any values that are co_returned with the return_value function.

I recommend having two overloads: const T& (copy value) and T&& (move value).

For our promise_type, the implementation of those will simply look like this:

void return_value(const QVariant& value) noexcept
{
    _future.succeed(value);
}
void return_value(QVariant &&value) noexcept
{
    _future.succeed(std::move(value));
}

If your coroutine was returning something like a std::unique_ptr<T>, you would need the std::move in order to properly handle it, and thus why I say you should have this overload in your promise type.

With get_return_object, initial_suspend, final_suspend, and return_value, you only have one remaining function left to implement in your coroutine. It's quite simple.

void unhandled_exception() noexcept {

}

This is what gets called in the catch block of a try/catch block catching exceptions that the coroutine might throw. Use std::current_exception() to access the exception.

For this blog post, we'll simply be doing Q_ASSERT(false && "unhandled exception"); in our implementation.

With that, the promise type is complete. It should look something like this:

struct promise_type {
    Future _future;

    Future get_return_object() noexcept {
        return _future;
    }

    ns::suspend_never initial_suspend() const noexcept { return {}; }
    ns::suspend_never final_suspend() const noexcept { return {}; }

    void return_value(const QVariant& value) noexcept
    {
        _future.succeed(value);
    }
    void return_value(QVariant &&value) noexcept
    {
        _future.succeed(std::move(value));
    }

    void unhandled_exception() noexcept {
        Q_ASSERT(false && "unhandled exception");
    }
};

Remember that this needs to be within the Future type itself, or within the appropriate coroutine_traits overload.

The Awaitable

With the Future and promise completed, we don't have much left to do. We now need to implement operator co_await(Future) in order to allow the Future to be awaited.

Technically, we don't need the promise type to make a Future awaitable, only the Future and the co_await overload. Implementing a promise type allows your coroutine that's co_awaiting a Future to return another Future.

There's not really much to the co_await overload, so I'll just paste an implementation verbatim:

auto operator co_await(Future it) noexcept
{
    struct Awaiter {
        Future future;

        bool await_ready() const noexcept {
            return future.settled();
        }
        void await_suspend(ns::coroutine_handle<> cont) const {
            future.then([cont](QVariant) mutable {
                cont();
            });
        }
        QVariant await_resume() {
            return future.result();
        }
    };

    return Awaiter{ it };
}

co_await needs to return a type with the following methods:

bool await_ready() const noexcept: This function is called before suspending the coroutine to wait on the awaitable to resolve. If the function returns true, the coroutine continues execution without suspending. If the function returns false, it suspends and waits on the value.

Think back to the example of a function returning an already succeeded future to see why this is useful. If the future already holds a value, there's no need to waste time suspending the coroutine.

void await_suspend(ns::coroutine_handle<> callback) const: This function is called when the coroutine suspends with a coroutine handle passed in. For all intents and purposes, you can treat it as if it were a std::function holding a callback, except operator () resumes the coroutine instead of calling a function.

You call the coroutine_handle to resume the coroutine when whatever it's waiting on is ready, e.g. when the future in this example is marked as succeeded. You can connect this to just about anything, e.g. a QTimer::singleShot, a signal, using it as a callback, etc.

T await_resume() const: This is the value that gets assigned to the result of co_awaiting the expression:

auto it = co_await fetchFromNetwork();
  // |
  // ^ this is the value from await_resume()

And with all three types implemented, you now have a QML-friendly future, a promise, and an awaitable.

A Brand New Washing Machine

With your new coroutine, you can do just about anything asynchronously. This is an example function that returns a future that will be fulfilled in N milliseconds:

Future timer(int duration) {
    Future it;

    QTimer::singleShot(duration, [it]() {
        it.succeed({});
    });

    return it;
}

Note that while it returns a Future, it is not a coroutine. It does not co_await. This is a completely normal function that you can call however you desire.

Making a coroutine using this function is easy:

Future thisIsACoroutine() {
    co_await timer(2000);

    co_return {};
}

Note that this function could not return void. The return type of a coroutine has to have an associated promise type, which void does not.

In a class exposed to QML...

class Singleton : public QObject
{
    Q_OBJECT

public:
    Q_INVOKABLE Future foo() {
        co_await timer(2000);

        co_return "i was returned by a coroutine";
    };
};

This can be used like so:

Button {
    anchors.centerIn: parent
    text: "a coroutine!"
    onClicked: Singleton.foo().then((val) => {
        console.warn(val)
    })
}

This will output the following in your console 2 seconds after every time you press this button:

qml: "i was returned by a coroutine"

The inline code from this blog post can be found in a single-file format at https://invent.kde.org/-/snippets/1711. Compile with C++20 and -fcoroutines-ts -stdlib=libc++ for clang, and -fcoroutines for gcc.

For a basic future+promise+awaitable deal, this is about all you need to know. However, the iceberg gets way deeper than that.

Stay tuned for more blog posts about the eldritch arcana you can pull off with coroutines. Next one will probably be about wrapping already-existing types such as a QNetworkReply* in order to make them co_await-able.
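
As a teaser, here is a rough sketch (just an illustration, not the final API from that future post) of how a QNetworkReply could be wrapped into the Future type built above, so that a coroutine can co_await an HTTP request:

// Hypothetical helper: wrap a network request into a Future.
// The reply body ends up in the QVariant as a QByteArray.
Future get(QNetworkAccessManager* nam, const QUrl& url) {
    Future it;

    auto reply = nam->get(QNetworkRequest(url));
    QObject::connect(reply, &QNetworkReply::finished, [it, reply]() {
        it.succeed(reply->readAll());
        reply->deleteLater();
    });

    return it;
}

A coroutine can then simply write auto body = co_await get(&nam, someUrl); instead of wiring up the finished signal by hand.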

You can see more advanced (and arguably useful) code at https://invent.kde.org/cblack/coroutines.

Contact Me

Note that I wrote this blog post at 2 in the morning. If there's anything that doesn't make sense, please feel free to come to me and ask for clarification.

Or, want to talk to me about other coroutine stuff I haven't discussed in this blog post (or anything else you might want to talk about)?

Contact me here:

Telegram: https://t.me/pontaoski Matrix: #pontaoski:tchncs.de (prefer unencrypted DMs)

Tags: #libre

Tuesday, 15 June 2021. Today KDE releases a bugfix update to KDE Plasma 5, versioned 5.22.1.

Plasma 5.22 was released in June 2021 with many feature refinements and new modules to complete the desktop experience.

This release adds a week's worth of new translations and fixes from KDE's contributors. The bugfixes are typically small but important and include:

  • KWin: Platforms/drm: support NVidia as secondary GPU with CPU copy. Commit. Fixes bug #431062
  • Weather applet: Point bbcukmet to new location API. Commit. Fixes bug #430643
  • Wallpapers: Add milky way. Commit. Fixes bug #438349
View full changelog