GCompris is a high-quality educational software suite comprising a large number of activities for children aged 2 to 10. Some of the activities are game-oriented, but nonetheless still educational.
Currently it has more than 150 educational activities for kids. My goal for GSoC '21 is to add more activities to GCompris. I proposed (proposal link) to add the following activities during the GSoC coding period:
Deplacements: This is a maze activity. The user is given a path on the grid, and has to encode / decode this path in the form of elementary movements defined by ‘arrow keys’.
Oware: This is a traditional board game, popular in many parts of Africa. The game starts with 48 seeds equally divided between two players. The objective of the game is to capture more seeds than one’s opponent. The game is over when one player has captured 25 or more seeds, or both players have taken 24 seeds each (draw).
Community Bonding Period
During the community bonding period, I discussed the strategy and structure of the activities that I was going to work on during the coding period. I discussed the design, and also the structure from a programming point of view, with my mentors: Timothee, Johnny Emmanuel and Deepak. We also discussed the assets required for these activities.
Before starting with GSoC, I was working on adding 4 different ordering activities to the GCompris project. During the community bonding period, I continued and finished that work (Merge Request). The following activities were part of this merge request:
1. Ordering Numbers
Kids are given a few random numbers to sort in ascending / descending order.
2. Ordering Alphabets
Kids are given a few random letters in their preferred language, and they are asked to order them in ascending / descending order.
3. Ordering Sentences
Kids are given jumbled words and are asked to form a meaningful sentence by reordering them.
4. Ordering Chronology
Kids are given images of events. They have to sort those images in correct chronological order.
First Week Report
During the first week, I worked on building a solid base for the ‘Deplacements’ activity. I initialized a basic structure of the interface, and also initialized a basic structure of the datasets that will be required for this activity.
I took care to keep the code modular and as flexible as possible, so that it can be easily extended later on.
One of the major challenges that I am facing right now is making the layout responsive, so that it adapts to the screen size on vertical devices. One of my mentors, Timothee Giet, suggested using states for this, and displaying the layout according to the state variable. I’m working on it, and I believe I’ll get it working soon.
What next?
I’ll be starting with adding the actual game mechanics logic now. After this, I have to work on 4 different variants / modes of this activity. The base code for all of these would be common; only the special cases need to be handled separately for the individual variants. After finishing this activity, I will move on to the development of the next activity, Oware.
I am having a great time contributing to the project. I learn something new every day. With the support of awesome mentors, who make sure all our doubts are resolved as soon as possible, it has been a pleasant journey so far. I am looking forward to completing these activities and giving back to the community.
Last but not least, thanks for taking the time to read! Stay tuned for further updates!
10 Admirable Attributes of a Great Technical Lead | by Elye | Jun, 2021 | Better Programming
Tags: tech, management, leadership, tech-lead
I think this is a very good summary of what being in the position of a tech lead entails. I especially like the bottom line of this article: it’s a constant balancing act between your heart and your mind.
Most of the points pushed forward in this article are things I’ve been trying to achieve for a long time. It also summarizes fairly well most of the topics I go through for coaching situations with tech leads or people growing in the position.
Stay away from the hype and introduce complexity in your systems only if it’s warranted by the problem space you have to tackle. Most organizations don’t need microservices, use them responsibly.
This is the second time I’ve bumped into this book being mentioned somewhere. This good summary really makes me want to read it. At least it gives a clear overview of complexity and how it’s tied to other softer topics. I especially like the distinction between tactical and strategic, it’s one I make often. I think I’m particularly keen to read the chapters about comments and TDD… from the summary it seems that’s where I’m most at odds with the ideas in there.
Cutelyst, the C++/Qt web framework, just got a new major release.
Under the hood it now supports both Qt5 and Qt6, but a few things had to change because of this. The most important change is that QMap usage as a multi-key container had to be replaced with QMultiMap. If you used ParamsMultiMap types, or auto when assigning, there’s probably little to do. One major exception: if you still use Grantlee it might not work well, but Cutelee is there with fixes and is almost ported to Qt6.
Other API changes were made to the authentication classes, which gained some friendly methods for QString passwords. The WSGI module is now called Cutelyst::Server; it allows you to embed Cutelyst in other applications or run it as a standalone executable, allowing for better LTO since everything can be compiled as static libraries.
For the future, I’m leaning towards increasing the minimum Qt version to 5.12 (which sadly means LGPLv2 Qt 5.6 support will be gone) to start using QStringView where possible.
Akademy starts in a few days, and the Champions and I will be focusing on that. However, there are still some interesting updates we’d like to share with you.
Let’s jump right in!
Wayland
With every recent Plasma update (and especially the just released version 5.22) the list of features that are X11 exclusive gets smaller and smaller.
Conversely, many users may not be aware that the opposite is also happening: every day there are more features available on Wayland that cannot be found on X11!
There are many resources available describing the security advantages of Wayland over X11, but the ageing protocol has some other shortcomings as well. For example, the last update we highlighted was the recently released VRR support in 5.22. Among other things, this enables an important use case for me: it allows each of my connected displays to operate at their highest refresh rate. I have a 144Hz main display, but occasionally I plug in my TV, which works at 60Hz. Because of limitations of X11, for everything to work, my main display needs to be limited to 60Hz when both of them are active. But not any more thanks to Wayland!
While the KDE developers always try to bring new functionalities to all users, the above example shows that sometimes, either due to X11 limitations or for other reasons, feature parity will not be possible.
For different but related reasons, some other features are Wayland-exclusive as well:
When you think about consistency, you may think of how different parts of your Plasma desktop should look and behave in a similar way, like scrollbars should all look the same on all windows. Or like when you open a new tab, it should always open in the same place, regardless of the app.
But the KDE developers also think about the bigger picture, like: How can we achieve a consistent experience between your desktop and your phone? Here’s where Kirigami comes in! It makes sense to have applications like NeoChat and Tok on both Plasma desktop and Plasma Mobile, and, thanks to the Kirigami framework, users will feel at home on both form factors. Now I want to see Kirigami apps on a smartwatch!
NeoChat desktop and mobile powered by Kirigami
Speaking of Kirigami, there is work being done on a component called “swipenavigator” to increase its - you guessed it - consistency, among other fixes. Details of the rewrite are in the merge request.
Do you care about looks? Then you’ll be interested in two MRs: the first regarding better shadows, and the other the “Blue Ocean” style for buttons, checkboxes etc. There are some details on Nate’s blog.
Our Consistency Champion Niccolò has a Goal talk during Akademy, so be sure to watch it!
KDE is All About the Apps
As announced on the community mailing list and the Goals matrix room, there was a meeting last Monday to discuss the way forward with the huge list of topics mentioned in the previous update.
In the meeting, the conclusion was to start with the topics regarding the different platforms we support, as well as the automation of the build/release process of apps.
Taking advantage of the upcoming Akademy, the topics will be discussed during the BoF sessions. Check out the schedule to see when you can attend! Also, don’t miss the “Creating Plasma Mobile apps” BoF!
Of course, like the other Goal Champions, Aleix will have a talk on the first day of Akademy, don’t miss it!
Meta
Right after the three Goal talks at Akademy, there will be a KDE Goals round table, a session where Lydia and I will be joined by the Champions to answer community questions regarding the specific goals, and the goal program as a whole.
Later in the event, on Monday June 21st at 18:00 UTC, I will conduct a BoF regarding selecting the next goals! Be sure to join in, if you were thinking about becoming a Champion yourself, or if you’re just curious about the process.
See you there!
This is what I would look like in the Akademy t-shirt, if Akademy were an in-person event this year. And held outside.
Recently my 4 year-old stepson saw a kid with an RC racing car in a park. He really wanted
his own, but with Christmas and his birthday still being a long way away, I decided to
solve the “problem” by combining three things I’m really passionate about: LEGO, electronics
and programming.
In this short series of blogs I’ll describe how to build one such car using LEGO, Arduino and
a bit of C++ (and Qt, of course!).
LEGO
Obviously, we will need some LEGO to build the car. Luckily, I bought LEGO Technic Mercedes Benz Arocs 3245
(40243) last year. It’s a big build with lots of cogs, one electric engine and a bunch of pneumatics.
I can absolutely recommend it - building the set was a lot of fun and thanks to the Power Functions it has
a high play-value as well. There’s also a fair amount of really good MOCs; the MOC 6060 - Mobile Crane by M_longer is especially nice. But I’m digressing here. :)
The problem with Arocs is that it only has a single Power Functions engine (99499 Electric Power Functions Large Motor)
and we will need at least two: one for driving and one for steering. So I bought a second one - the same model, though a smaller one would probably do just fine for the steering.
I started by prototyping the car and the drive train, especially how to design the gear ratios to not overload
the engine when accelerating while keeping the car moving at reasonable speed.
Turns out the 76244 Technic Gear 24 Tooth Clutch is really important, as it prevents the gear teeth from skipping when the engine stops suddenly, or when the car gets pushed around by hand.
Initially I thought I would base the build of the car on some existing designs but in the end I just started building
and I ended up with this skeleton:
The two engines are in the middle - the rear one powers the wheels, the front one handles the steering using the
61927b Technic Linear Actuator. I’m not entirely happy with the steering, so I might rework
that in the future. I recently got Ford Mustang (10265) which has a really interesting steering
mechanism and I think I’ll try to rebuild the steering this way.
Wires
We will control the engines from an Arduino. But how do you connect the LEGO Power Functions to an Arduino? Well, you
just need to buy a bunch of those 58118 Electric Power Functions Extension Wires, cut them and
connect them with DuPont cables that can be connected to a breadboard. Make sure to buy the “with one Light Bluish
Gray End” version - I accidentally bought cables which had both ends light bluish, but those can’t be connected to the
16511 Battery Box.
We will need 3 of those half-cut PF cables in total: two for the engines and one to connect to the battery box. You
probably noticed that there are 4 connectors and 4 wires in each cable. Wires 1 and 4 are always
GND and 9V, respectively, regardless of the position of the switch on the battery pack. Wires 2 and 3
are 0V and 9V or vice versa, depending on the position of the battery pack switch. This way we can control the engine
rotation direction.
For the two cables that will control the engines we need all 4 wires connected to the DuPont cable. For the one cable
that will be connected to the battery pack we only need the outer wires to be connected, since we will only use the
battery pack to provide the power - we will control the engines using Arduino and an integrated circuit.
I used a glue gun to connect the PF wires and the DuPont cables, which works fairly well. You could use a soldering iron if you have one, but the glue also works as an insulator to prevent the wires from short-circuiting.
This completes the LEGO part of this guide. Next comes the electronics :)
Arduino
To remotely control the car we need some electronics on board. I used the following components:
Arduino UNO - to run the software, obviously
HC-06 Bluetooth module - for remote control
400-pin breadboard - to connect the wiring
L293D integrated circuit - to control the engines
1 kΩ and 2 kΩ resistors - to reduce voltage between Arduino and BT module
9V battery box - to power the Arduino board once on board the car
M-M DuPont cables - to wire everything together
The total price of those components is about €30, which is still less than what I paid for the LEGO engine and PF wires.
Let’s start with the Bluetooth module. There are some really nice guides online on how to use it; I’ll try to describe it quickly here. The module has 4 pins: RX, TX, GND and VCC.
GND can be connected directly to the Arduino’s GND pin. VCC is the power supply for the
bluetooth module. You can connect it to the 5V pin on the Arduino. Now for the TX and RX
pins. You could connect them to the RX and TX pins on the Arduino board, but that makes it
hard to debug the program later, since all output from the program will go to the bluetooth module rather than our
computer. Instead, connect them to pins 2 and 3. Warning: you need to use a voltage
divider for the RX pin, because Arduino operates on 5V, but the HC-06 module operates on 3.3V. You can
do it by putting a 1 kΩ resistor between Arduino pin 3 and HC-06 RX, and a 2 kΩ resistor between
Arduino GND and HC-06 RX pins.
Next up is the L293D integrated circuit, which will allow us to control the engines. While in theory we
could hook up the engines directly to the Arduino board (there are enough free pins), in practice it’s a bad idea. The
engines need 9V to operate, which is a lot of power drain for the Arduino circuitry. Additionally, it would mean that
the Arduino board and the engines would both be drawing power from the single 9V battery used to power the Arduino.
Instead, we use the L293D IC: you connect an external power source (the LEGO battery pack in our case) to it, as well as the engines, and use only a low-voltage signal from the Arduino to control the current flowing from the external power source to the engines (very much like a transistor). The advantage of the L293D is that it can control up to 2 separate engines, and it can also reverse the polarity, allowing us to control the direction of each engine.
Here’s the schematic of the L293D:
To sum it up, pin 1 (Enable 1,2) turns on the left half of the IC, and pin 9 (Enable 3,4) turns
on the right half. Hook them up to the Arduino's 5V pin. Do the same with pin 16 (VCC1), which powers
the overall integrated circuit. The external power source (the 9V from the LEGO Battery pack) is connected to
pin 8 (VCC2). Pin 2 (Input 1) and pin 7 (Input 2) are connected to Arduino and
are used to control the engines. Pin 3 (Output 1) and pin 6 (Output 2) are output pins that
are connected to one of the LEGO engines. On the other side of the circuit, pin 10 (Input 3) and
pin 15 (Input 4) are used to control the other LEGO engine, which is connected to pin 11 (Output 3) and pin 14 (Output 4). The remaining four pins in the middle (4, 5, 12 and 13) double as ground and heat sink, so connect them to GND (ideally both the Arduino and the LEGO battery GND).
Since we have 9V LEGO Battery pack connected to VCC2, sending 5V from Arduino to Input 1 and
0V to Input 2 will cause 9V on Output 1 and 0V on Output 2 (the engine will spin
clockwise). Sending 5V from Arduino to Input 2 and 0V to Input 1 will cause 9V to be on
Output 2 and 0V on Output 1, making the engine rotate counterclockwise. Same goes for the
other side of the IC. Simple!
Conclusion
I also built a LEGO casing for the Arduino board and the breadboard to attach them to the car. With some effort I
could probably rebuild the chassis to allow the casing to “sink” lower into the construction.
The battery packs (the LEGO Battery box and the 9V battery case for the Arduino) are nicely hidden in the middle of the car, on the sides next to the engines.
Now we are done with the hardware side - we have a LEGO car with two engines and all the electronics wired together
and hooked up to the engines and battery. In the next part we will start writing software for the Arduino board so
that we can control the LEGO engines programmatically. Stay tuned!
Every so often there appear some new pics from developer builds of Windows, or even leaks such as the recent Windows 11 preview screenshots. More or less every time this happens, there are comments from the Linux side that Windows is copying KDE Plasma – a desktop environment that is, granted, among the most similar…
GCompris is a high-quality educational software suite comprising a large number of activities for children aged 2 to 10. Some of the activities are game-oriented, but nonetheless still educational.
Currently GCompris offers more than 100 activities, and more are being developed. GCompris is free software, which means that you can adapt it to your own needs, improve it, and most importantly share it with children everywhere.
My project goals include adding four new activities to GCompris:
Subtraction decimal number activity.
Addition decimal number activity.
Programming maze loops activity.
Mouse control action activity.
Community Bonding Period
During this period I interacted with my mentors and discussed multiple design aspects for extending the original decimal activity, so that it can support both the addition and the subtraction decimal activities. I started adding the decimal point character to the numPad to support typing it in the decimal activities, and I added a task for each activity on Phabricator to track progress.
First Week Report
So, the first week of the coding period has ended. It was exciting and full of challenges. I am happy that I am on the right track and making progress as I promised. I started by adding the subtraction decimal number activity; its goal is to teach subtraction of decimal numbers.
Here is a quick summary of the work done last week:
Adding multiple datasets, from which we generate two different decimal numbers.
Creating a new component, MultipleBars.qml, in which the largest decimal number is represented as multiple bars. Each bar consists of ten square units, some of which are semi-transparent, according to the number shown.
Adding numPad.qml to the activity, so that we can ask the child to type the result once they have represented it correctly.
Adding TutorialBase.qml including instructions on how to play with the activity.
What’s next?
I will start implementing the addition decimal activity, and wait for my mentors’ reviews on the subtraction decimal activity, as it is still in progress.
I am delighted to have such an enthusiastic, helpful and inspiring community.
Once again I plan to be at Akademy. I almost silently attended last year’s edition. OK… I
had a talk there but didn’t blog. I even didn’t post my traditional sketchnotes post. I
plan to do better this year.
I’ll try to sketchnote again, we’ll see how that works out. Oddly enough, I might do the
2020 post after the 2021 one. 😀
This year I’ll also be holding a training and a couple of talks. Last but not least, I’ll
attend the KF6 BoF. I’ll see if I can attend a couple more, but that’ll mainly depend on how compatible they are with my schedule otherwise.
Also, I’m particularly proud to be joined by a couple of colleagues from enioka Haute Couture. Without further ado, here is where you will (or might) find us:
Friday 18 June, starting at 18:00 CEST, I’ll be holding a 4-hour (!) training about the KDE Stack. If you’re interested in getting a better understanding of how KDE has built the stack for its applications and workspaces, and also how all the pieces are tied together, this will be the place to be;
Saturday 19 June, at 12:20 CEST, my colleague Méven Car will give an update about the
Wayland Goal, he’ll be joined by Vlad
Zahorodnii;
Following up his talk, at 13:00 CEST, Méven will also participate in the KDE Goals
roundtable;
Still the same day, at 21:00 CEST, I’ll be on stage again to talk about KDE Frameworks Architecture. I’ll go back to how it’s structured in KF5 and will propose a potential improvement for KF6;
On Monday 20 June, a bunch of eniokians will participate in the KDE
e.V. general assembly;
Sometime during the week I’ll participate in the KF6 BoF (not scheduled yet at the time of writing); obviously I’ll be interested in discussing the ideas from my talk with the rest of the KDE Frameworks contributors;
And finally, on Friday 25 June, at 19:00 CEST, I’ll be joined by my colleague Christelle Zouein for our talk about community data analytics. We got a bunch of new toys to play with thanks to Christelle’s work and the community move towards GitLab, and we’ll show some results for the first time.
Of course it also means I’m on my way to… ah well… no, I’m not on my way. I’ll just
hook up my mic and camera like everyone else. See you all during Akademy 2021!
Like most online manuals, the Krita manual has a contributor’s guide. It’s filled with things like “who is our assumed audience?”, “what is the dialect of English we should use?”, etc. It’s not a perfect guide, outdated in places, definitely, but I think it does its job.
So, sometimes I, who officially maintain the Krita manual, look at other projects’ contributor guides. And usually what I find there is…
Style Guides
The purpose of a style guide is to obtain consistency in writing over the whole project. This can make the text easier to read and simpler to translate. In principle, the Krita manual also has a style guide, because it stipulates you should use American English. But when I find style guides in other projects it’s often filled with things like “active sentences, not passive sentences”, and “use the Oxford comma”.
Active sentences versus passive sentences always gets me. What it means is the difference between “dog bites man” and “man is bitten by dog”. The latter sentence is the one in the passive voice. There’s nothing grammatically incorrect about it. It’s a bit longer, sure, but on the other hand, there’s value in being able to rearrange the sentence like that. For a more Krita specific example, consider this:
“Pixels are stored in working memory. Working memory is limited by hardware.”
“Working memory stores the pixels. Hardware limits the working memory.”
The first example is two sentences in the passive voice, the latter two in the active. The passive voice example is longer, but it is also easier to read, as it groups the concepts together and introduces new concepts later in the paragraph. Because we grouped the concepts, we can even merge the sentences:
“Pixels are stored in working memory, which is limited by hardware.”
But surely, if so many manuals have this in their guide, maybe there is a reason for it? No, the reason it’s in so many manuals’ style guide, is because other manuals have it there. And the reason other manuals have it there, is because magazines and newspapers have it there. And the reason they have that, is because it is recommended by style guides like The Elements of Style. There is some(?) value for magazines and newspapers in avoiding the passive voice because it tends to result in longer sentences than the active voice, but for electronic manuals, I don’t see the point of worrying about things like these. We have an infinite word count, so maybe we should just use that to make the text easier to read?
The problem of copying style rules like this is also obfuscated by the fact that a lot of people don’t really know how to write. In a lot of those cases, the style guide seems to be there to allow role-playing that you are a serious documentation project, if not a case of ‘look busy’, and it can be very confusing to the person whose text is being proofread. I accepted the need for active voice in my university papers, because I figured my teachers wanted to help me lower my word count. I stopped accepting it when I discovered they couldn’t actually identify the passive voice, pointing at paragraphs that needed no work.
This kind of insecurity-driven proofreading becomes especially troublesome when you consider that sometimes “incorrect” language is caused by the writer using a dialect. It makes sense to avoid dialects in documentation, as they contain specific language features that not everyone may know, but it’s another thing entirely to tell people their dialect is wrong. So in these cases, it’s imperative that the proofreader knows why certain rules are in place, so they can communicate why something should be changed without making the dialect speaker insecure about their language use.
Furthermore, a lot of such style guide rules are filled with linguistic slang, which is abstract and often derived from Latin. People who are not confident in writing will find such terms very intimidating, as well as hard to understand, and this in turn leads to people being less likely to contribute. In a lot of those cases, we can actually identify the problems in question via a computer program. So maybe we should just do that, and not fill our contributor’s guide with scary linguistic terms?
LanguageTool
Despite my relaxed approach to proofreading, I too have points at which I draw the line. In particular, there are things like typos, missing punctuation, and errant white-spaces. All of these are pretty uncontroversial.
In the past, I’d occasionally run LanguageTool over the text. LanguageTool is a Java-based style and grammar checker licensed under the LGPL 2.1. It has a plugin for LibreOffice, which I used a lot when writing university papers. However, by itself LanguageTool cannot recognize mark-up. To run it over the Krita documentation, I had to first run the text through pandoc to convert from reStructuredText to plain text, which was then fed to the LanguageTool jar.
I semi-automated this task via a bash script:
#!/bin/sh
# Run this file inside the LanguageTool folder.
# First argument is the folder to check, second your mother tongue.
find "$1" -iname "*.rst" | while read -r file
do
    pandoc -s "$file" -f rst -t plain -o ~/checkfile.txt
    echo "$file"
    echo "$file" >> ~/language_errors.txt
    # Run LanguageTool for en-US, without the multiple-whitespace check and without
    # the m-dash suggestion rule, using the second argument as the mother tongue
    # to check for false friends.
    java -jar languagetool-commandline.jar -l en-US -m "$2" --json -d WHITESPACE_RULE,DASH_RULE[1] ~/checkfile.txt >> ~/language_errors.txt
    rm ~/checkfile.txt
done
This worked pretty decently, though there were a lot of false positives (mitigated a bit by turning off some rules). It was also always a bit of a trick to find the precise location of the error, because the conversion to plaintext changed the position of the error.
I had to give up on this hacky method when we started to include python support, as that meant python code examples. And there was no way to tell pandoc to strip the code examples. So in turn that meant there were just too many false positives to wade through.
There is a way to handle mark-up, though, and that’s by writing a java wrapper around LanguageTool that parses through the marked-up text, and then tells LanguageTool which parts are markup and which parts can be analyzed as text. I kind of avoided doing this for a while because I had better things to do than to play with regexes, and my Java is very rusty.
One of the things that motivated me to look at it again was the appearance of the code quality widget in the GitLab release notes. Because one of my problems is that notions of “incorrect” language can be used to bully people, I was looking for ways to indicate that everything LanguageTool puts out is considered a suggestion first and foremost. The code quality widget is a tiny widget that hangs out underneath the merge request description and says how many extra mistakes the merge request introduces; it is intended to be used with static analysis tools. It doesn’t block the MR, it doesn’t confuse the discussion, and it takes JSON input, so I figured it’d be the ideal candidate for something as trivial as style mistakes.
So, I started up Eclipse, followed the instructions on using the Java API (with an intermission where I realized I had never used Maven and needed a quick tutorial), and I started writing regular expressions.
Reusing KSyntaxHighlighter?
So, people who know KDE’s many frameworks know that we have a collection of assorted regexes and similar for a wide variety of mark-up systems and languages: KSyntaxHighlighter, which has support for reStructuredText. I had initially hoped I could just take the rest.xml file and use it to identify the mark-up for LanguageTool.
Unfortunately, the regex needs of KSyntaxHighlighter are very different from the ones I have for LanguageTool. KSyntax needs to know whether we have entered a certain context based on the mark-up, but it doesn’t really need to identify the mark-up itself. For example, the mark-up for strong in reStructuredText is **strong**.
The regular expression to detect this in rest.xml is \*\*[^\s].*\*\*, translated: Find a *, another *, a character that is not a space, a sequence of zero or more characters of any kind, another * and finally *.
What I ended up needing is: "(?<bStart>\*+?)(?<bText>[^\s][^\*]+?)(?<bEnd>\*+?)", translated: Find group of *, name it ‘bStart’, followed by a group that does not start with a space, and any number of characters after it that is not a *, name this ‘bText’, followed by a group of *, name this ‘bEnd’.
The bStart/bText/bEnd names allow me to append the groups separately to the AnnotatedTextBuilder.
So I had to abandon adopting the KSyntaxHighlighter format for this and do my own regexes.
Results
Eventually, I had something that worked. I managed to get it to write the errors it found to a JSON file that should work with the code quality widget. I also implemented an accepted-words list, which at the very least took a third off the initial set of errors. I managed to get it to find about 105 errors in the 5000-word KritaFAQ, most of which are misspelled brand names, but it also found missing commas and errant white-spaces.
A small sample of the error output:
{
  "severity": "info",
  "fingerprint": "docs-krita-org/KritaFAQ.rst:8102:8106",
  "description": "Did you mean <suggestion>Wi-Fi<\/suggestion>? (This is the officially approved term by the Wi-Fi Alliance.) (``wifi``)",
  "check_name": "WIFI[1]",
  "location": {
    "path": "docs-krita-org/KritaFAQ.rst",
    "position": {
      "end": {"line": 176},
      "begin": {"line": 176}
    },
    "lines": {"begin": 176}
  },
  "categories": ["Style"],
  "type": "issue",
  "content": "Type: Other, Category: Possible Typo, Position: 8102-8106 \n\nIt might be that your download got corrupted and is missing files (common with bad wifi and bad internet connection in general), in that case, try to find a better internet connection before trying to download again. \nProblem: Did you mean <suggestion>Wi-Fi<\/suggestion>? (This is the officially approved term by the Wi-Fi Alliance.) \nSuggestion: [Wi-Fi] \nExplanation: null"
},
{
  "severity": "info",
  "fingerprint": "docs-krita-org/KritaFAQ.rst:8379:8388",
  "description": "Possible spelling mistake found. (``harddrive``)",
  "check_name": "MORFOLOGIK_RULE_EN_US",
  "location": {
    "path": "docs-krita-org/KritaFAQ.rst",
    "position": {
      "end": {"line": 177},
      "begin": {"line": 177}
    },
    "lines": {"begin": 177}
  },
  "categories": ["Style"],
  "type": "issue",
  "content": "Type: Other, Category: Possible Typo, Position: 8379-8388 \n\nCheck whether your harddrive is full and reinstall Krita with at least 120 MB of empty space. \nProblem: Possible spelling mistake found. \nSuggestion: [hard drive] \nExplanation: null"
},
{
  "severity": "minor",
  "fingerprint": "docs-krita-org/KritaFAQ.rst:8546:8550",
  "description": "Use a comma before 'and' if it connects two independent clauses (unless they are closely connected and short). (`` and``)",
  "check_name": "COMMA_COMPOUND_SENTENCE[1]",
  "location": {
    "path": "docs-krita-org/KritaFAQ.rst",
    "position": {
      "end": {"line": 177},
      "begin": {"line": 177}
    },
    "lines": {"begin": 177}
  },
  "categories": ["Style"],
  "type": "issue",
  "content": "Type: Other, Category: Punctuation, Position: 8546-8550 \n\nIf not, and the problem still occurs, there might be something odd going on with your device and it's recommended to find a computer expert to diagnose what is the problem.\n \nProblem: Use a comma before 'and' if it connects two independent clauses (unless they are closely connected and short). \nSuggestion: [, and] \nExplanation: null"
}
There are still a number of issues. Some markup is still not processed, I need to figure out how to calculate the column, and I’m simply unhappy with the command-line arguments (they’re positional-only right now).
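For the column calculation, one approach would be to walk the source text up to the absolute character offset (the kind that appears in the fingerprints, e.g. 8102-8106) and count newlines along the way. This is a hypothetical helper of my own, not code from the tool:

```java
public class OffsetToPosition {
    // Convert an absolute character offset into a 1-based {line, column} pair
    // by scanning the text and resetting the column at every newline.
    static int[] lineAndColumn(String text, int offset) {
        int line = 1;
        int column = 1;
        for (int i = 0; i < offset && i < text.length(); i++) {
            if (text.charAt(i) == '\n') {
                line++;
                column = 1;
            } else {
                column++;
            }
        }
        return new int[] { line, column };
    }
}
```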
One of the things I am really worried about is the severity of errors. As I mentioned before, dialects often get targeted by tools that decide what counts as “incorrect” language, and LanguageTool does have rules that target slang and dialect. Similarly, people tend to accept suggestions from computers without question, so I’ll need to introduce some configuration:
Configuration to turn rules on and off.
Errors that are uncontroversial should be marked higher, so that people are less likely to assume all the errors should be fixed.
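A minimal sketch of what such a configuration could look like (the rule IDs, category names, and severity mapping here are assumptions of mine for illustration, not the tool’s actual configuration):

```java
import java.util.Map;
import java.util.Set;

public class RuleConfig {
    // Rule IDs the project has chosen to switch off entirely
    // (example ID taken from the sample output above).
    static final Set<String> DISABLED = Set.of("WIFI");

    // Uncontroversial, mechanical categories get a higher severity so readers
    // don't assume every stylistic suggestion must be applied.
    static final Map<String, String> SEVERITY_BY_CATEGORY = Map.of(
            "Possible Typo", "major",
            "Punctuation", "minor",
            "Style", "info");

    static String severityFor(String category) {
        return SEVERITY_BY_CATEGORY.getOrDefault(category, "info");
    }
}
```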
But that’ll be at a later point…
Now, you might be wondering: “Where is the actual screenshot of this thing working in the Gitlab UI?” Well, I haven’t gotten it to work there yet. Partially because the manual doesn’t have CI implemented yet (we’re waiting for KDE’s servers to be ready), and partially because I know nothing about CI, barely have a grasp of Java, and am kinda stuck?
But, I can run it for myself now, so I can at least do some fixes myself. I put the code up here; bear in mind I don’t remember how to use Java at all, so if I am committing Java sins, please be patient with me. Hopefully, if we can get this to work, we can greatly simplify how we handle style and grammar mistakes like these during review, as well as simplify contributors’ guides.
A good illustration of why social media are toxic for our thinking: they push us to focus on anecdotes and as such miss the big picture. No wonder everything got so polarized so quickly there.
Very interesting discussion about decision making and forecasting. I’m discovering Tetlock, who is mentioned there, and his findings are interesting in their own right as well.
Stockdale Paradox — Why optimists don’t survive software projects | by Ben Hosking | Jun, 2021 | Dev Genius
Tags: tech, project-management, estimates
Excellent advice on project planning and management. It explains pretty well why being an optimist in those areas will just drive your project into a wall.