SoK 2021 Update

I started my SoK project in January, but in February my college scheduled the end-of-semester exams, so I had to pause the work for a while. Now that my exams are done, I have resumed work and will continue with the project.

So far I have figured out the APIs and done test runs for Twitter and Mastodon, but for some reason Facebook just doesn't want to cooperate and let me use its API. Now I'm finalizing the endpoints for my own API, after which I will start work on a basic frontend for it.

The next step will be to add proper logging everywhere and dockerize the project so that the promo team can test it.

Looking forward to completing this awesome and very useful project.

Tuesday

16 February, 2021

Calamares is a modular installer for Linux distributions using Qt and KDE technologies. It is used by dozens of Linux distros for that crucial step of “get the ISO onto the HDD”, or some modern variant thereof. It’s modular, so distros can pick and choose what is needed: OpenRC or systemd, for instance. But it’s hard to cover everything that Linux distributions might need, so Calamares also has an “extensions” repository for more specialised modules. Let’s take a look.

Calamares window with modules highlighted

The screenshot here shows the first screen of Calamares (in demonstration mode, so “real” distros will probably have distro-specific styling). There are a number of user-visible “pages” to Calamares: welcome, location, etc. There are modules responsible for each of those; the main Calamares distribution contains a configurable welcome module, and an alternative QML-based welcomeq module as well – plenty of choice for distros.

There are more modules than what is visible here, though: all the invisible steps like making a user, setting up the display manager, configuring OpenRC (or systemd) … those are modules as well. Among these invisible steps there are certainly some that make sense only for specialised distros, so while Calamares aims to be modular and address “most” of what a Linux distribution needs to get installed, there is always one more module for that special case (like pre-installing CDE theming, or whatever).

So in the end there are two repositories for Calamares bits:

  • Calamares, which is the main program, libraries, support bits, CMake infrastructure and all the modules that are generally useful (and some that aren’t, for historical reasons),
  • Calamares Extensions, which contains more modules, more branding examples, and generally shows off that Calamares can be extended with “third party” code as well: distros can also use this to build their own collection of modules (although, as always, pull requests are welcome: I’d be happy to move as much as possible upstream if it is of interest to more than one single distro).

One extra idea behind the extensions repo is that it is a test of how-reusable-is-Calamares-code and does-my-CMake-code-actually-work. So it is supposed to (naively) build C++ modules that use the full plugin API of Calamares. But this gives rise to a slight problem: how to shuffle the latest bits around. In a Continuous Integration (CI) setup things get built on every push. In split repositories, that becomes a little complicated unless you build all the repositories at every build: and that gets kind of expensive, computationally.

(This is by no means a surprising or new insight: KDE’s sysadmin team has roughly this in build.kde.org, KDE’s CI, where one of the challenges is building KDE Applications against recent KDE Frameworks.)

Drawing of CI flow

What I ended up doing is making a tarball of the build of Calamares, then shipping it across to CI builds of the extensions modules. To avoid excessive builds and tarball transfers, I’ve decided to do the tarballs every night, rather than on every push.

So part of my nightly CI build on KDE neon is to make install DESTDIR=stage followed by tar czf calamares.tar.gz stage to get all the bits that Calamares would have built. This is only slightly yucky: if I was more fastidious I might use CPack to build an installable package. Regardless, on pushes to the extensions repository, I can pick up that tarball as if it was a package, and unpack it to the container that is going to do the build of the extension modules. There is also a nightly build of the extensions repository that behaves the same.

What this gets me is that extensions try to build against a recent Calamares all the time, and changes to Calamares propagate to the extensions build as well. If (when) I break something, this will show up on IRC with a meaningful message.

Artifact upload is done with actions/upload-artifact@v2, one of the “standard” GitHub actions. The standard artifact download action does not know how to fetch artifacts from another repository, so I used dawidd6/action-download-artifact@v2. That does know how, so I can move the tarball around.

All this feels way more convoluted and clunky than GitLab. I’m glad KDE uses GitLab CI for some bits (and Jenkins for the rest), since all the time it feels like I’m fighting the system. Seriously, it takes a third-party action to move artifacts across repos? At this point I may as well be writing shell scripts again with a one-step action.

These build artifacts are never used by any distro: they’re for Calamares project consumption. Many distros do their own nightly builds: neon and Manjaro, for instance, build nightly packages for testing (and I’m grateful for their reports of problems as well). Why not use those packages instead? Well, getting those packages can be a bit tricksy outside of the scope of the distro CI, and part of what I’m testing is the bits most distros won’t package anyway: the Calamares “SDK” for use by the extensions repo. So the various builds are complementary and catch different things.

One thing the CI is good for is making IRC lively(-ish) with notifications; I like that too.

<cala-ci> OK ci-push-xtn in calamares/calamares-extensions adriaandegroot on refs/heads/calamares
<cala-ci> .. f2e59e6 [image-slideshow] Add an example QML slideshow for images

Plasma Pass, a Plasma applet for the Pass password manager, is out in version 1.2.0.

The applet now supports OTP codes (in the format supported by the pass OTP plugin). The ‘clock’ icon appears next to all passwords, even those that do not have an OTP code. This is a limitation caused by the passwords being stored in encrypted files that are only decrypted when the user requests them - so the applet cannot know whether there’s an OTP code available in a password file until you click on it. There were also some small fixups and UI improvements.

Tarball:

https://download.kde.org/stable/plasma-pass/plasma-pass-1.2.0.tar.xz

Checksum:

SHA-256: 01f0b03b99e41c067295e7708d41bbe581c0d73e78d43b50bf86b4699969f780
SHA-1:   07a32d21b0c4dd38cad9c800d7b8f463f42c39c6

Signature:

0ABDFA55A4E6BEA99A83EA974D69557AECB13683 Daniel Vrátil <dvratil@kde.org>

Feel free to report any issues or feature requests to KDE Bugzilla.

Monday

15 February, 2021

I’ve been learning C++ lately. About two months ago I finished Codecademy’s C++ course (honestly really good for the basics), a month ago I managed to fetch the C++ Fundamentals book from PacktPub for free, and now I’m mostly following this amazing YouTube online course by The Cherno and taking a look at C++ Weekly. … Continue reading "KFluff — Kate’s External Tools"

 


Hello everyone,

let's improve our layout and view templates in order to make distros' and users' lives easier when they share their Latte layouts and views. A View in Latte stands for a Dock or a Panel.

 

1. Layout Templates

Just go to the Layouts Editor, select any layout you want, and choose Export -> Export As Template. A dialog will appear in which you choose which applets keep their configuration. You can use it to discard the configuration of applets that contain any of your personal data such as user credentials, passwords etc. Layout templates and layout files have identical structures; the only difference is that for templates the user has approved the applets' configuration.

- export template dialog -


- new layout menu -
For distros it is suggested to add their layout templates in the folder shell/package/contents/templates/, because this way they will always be available for their users to re-add through the Layouts Editor -> New Layout menu.

For users, all extracted user layout templates can be found in the folder ~/.config/latte/templates





2. View Templates

In the same manner, Docks and Panels can become templates that the user can easily re-add through Dock/Panel Settings -> New [Actions menu].
 
For both distros and users, view templates are again found in the folders:
  • shell/package/contents/templates/ 
  • ~/.config/latte/templates
- Dock, New Actions Menu -



You can of course be sure that your templates are included in your Latte Export Configuration file, which users can use to take a full backup of their Latte configuration. You can find it at Layouts Editor -> File -> Import/Export Configuration.

- Export Full Configuration -




Personally, I do not think donations are necessary. The easiest thing you can do is to just install the Latte-related widgets from the KDE Store; they work just fine even with Plasma panels. The KDE Store provides me some monthly beers because of this. The funny thing is that the Latte Separator, which I developed in a day, provides most of the beers, while Latte Dock, which I have been developing for plenty of hours daily over the last three years, provides almost none. :) I mention it as a funny fact, please do not take it differently.

In any case, if you still want to donate you can use the following:

You can find Latte at Liberapay: Donate using Liberapay


Or you can split your donation between my active projects in the KDE Store.




The first Qt for MCUs release of 2021 is out! Download it to get the latest features and create ever lighter yet impressive-looking Qt applications for microcontroller-powered devices.

OpenUK is an organisation promoting open tech; come join us and belong. OpenUK Belonging video.

Sign up to our letter by sharing it on social media with the hashtag #OpenUKBelonging. OpenUK seeks Belonging Partners – not-for-profit organisations that encourage diversity and inclusion through their activities – to be a part of our ecosystem, to advance belonging in Open Technology together, and to sign up to this letter by sharing it on social media. We will launch these partnerships on International Women’s Day on 8 March and will support each of the partners throughout the year.

Kate, KTextEditor and Co. got a nice stream of updates in the first two weeks of February 2021.

I will just pick a few things I really liked; if you want a full overview, you can go through the list of all merged patches.

Even more multi-threading in search in files

After the initial parallelization of the actual search in files as described here, Kåre came up with the idea to parallelize the creation of the file list we use for the search, too.

We worked together on this in merge 220 and merge 221; the result allows for even faster searches if no initial file list is already provided by e.g. the project plugin.

I didn’t actually believe that this would be worth the hassle, but Kåre demonstrated impressive speedups in merge 220: from over 30 seconds down to around 3 seconds. Nice!
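
Not Kate’s actual code (that lives in the merge requests linked above), but the general idea can be sketched in plain C++17: split the scan per top-level directory and collect the per-directory file lists concurrently.

#include <filesystem>
#include <future>
#include <string>
#include <vector>

// Collect all regular files below one directory (one unit of work).
static std::vector<std::string> listFiles(const std::filesystem::path &dir)
{
    std::vector<std::string> files;
    for (const auto &entry : std::filesystem::recursive_directory_iterator(dir))
        if (entry.is_regular_file())
            files.push_back(entry.path().string());
    return files;
}

// Build the search file list by scanning top-level subdirectories in parallel.
std::vector<std::string> buildFileList(const std::filesystem::path &root)
{
    std::vector<std::future<std::vector<std::string>>> jobs;
    std::vector<std::string> result;
    for (const auto &entry : std::filesystem::directory_iterator(root)) {
        if (entry.is_directory())
            jobs.push_back(std::async(std::launch::async, listFiles, entry.path()));
        else if (entry.is_regular_file())
            result.push_back(entry.path().string());
    }
    for (auto &job : jobs) {
        auto part = job.get();
        result.insert(result.end(), part.begin(), part.end());
    }
    return result;
}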

Improvements to the color picker plugin

The new color picker plugin got some further improvements by Jan Paul. If you missed what it does, see the screenshot below: it shows the color for #hexcodes inline in the text editor view and provides a color picker to alter it.

Screenshot of Kate color picker

Open a project via the command line

So far, Kate projects could only be opened indirectly by opening a file inside the project or by launching Kate from a terminal inside some directory of the project.

Alexander rectified this: after this merge, Kate allows opening a project just by passing its directory as an argument.

This sounds like a rather trivial change, but I guess it will make working with projects more natural.

Allow switching branches in Git

Waqar is working on better Git integration. This has been an idea for years, but we never got around to it ;=) Finally we see some progress.

Screenshot of Kate git branch selector

The current implementation inside the project plugin prominently shows the current branch of your clone below the project name. Pressing this button allows you to either switch to an existing branch or create a new one and switch to it. As usual, this quick-open-like dialog provides the fuzzy matching we now use almost everywhere in Kate.

Improving LSP contextual help

Waqar worked on better visualization of the contextual help via LSP, too. So far we just used normal tooltips; unfortunately, that doesn’t work that well for the rich content we get from typical LSP servers like clangd. Now we have a custom widget for this, with proper highlighting and fonts/colors matching the editor theme.

LSP tooltip

Improved website theme

Perhaps this change is obvious if you read this post on our website kate-editor.org: we now use the shared theming other KDE websites use, too. If you read this from some aggregator, take a look at the start page screenshot below.

Screenshot of Kate website

Carl worked on this; he did a fantastic job.

There were several iterations until we arrived at the current state, thanks a lot (to Carl and naturally all others involved here)!

A big thanks to all the people that help to translate our websites (and tools), too!

Help wanted!

You want more nifty stuff? More speed? More of everything?

You want to make our website even nicer?

Show up and contribute.

We have a lot of ideas but not that many people working on them :)

Comments?

A matching thread for this can be found here on r/KDE.

Sunday

14 February, 2021

The upcoming version of Clang 12 includes a new traversal mode which can be used for easier matching of AST nodes.

I presented this mode at EuroLLVM and ACCU 2019, but at the time I was calling it “ignoring invisible” mode. The primary aim is to make AST Matchers easier to write by requiring less “activation learning” of the newcomer to the AST Matcher API. I’m analogizing to “activation energy” here – this mode reduces the amount of learning of new concepts that must be done before starting to use AST Matchers.

The new mode is a mouthful – IgnoreUnlessSpelledInSource – but it makes AST Matchers easier to use correctly and harder to use incorrectly. Some examples of the mode are available in the AST Matchers reference documentation.

In clang-query, the mode affects both matching and dumping of AST nodes and it is enabled with:

set traversal IgnoreUnlessSpelledInSource

while in the C++ API of AST Matchers, it is enabled by wrapping a matcher in:

traverse(TK_IgnoreUnlessSpelledInSource, ...)

The result is that matching of AST nodes corresponds closely to what is written syntactically in the source, rather than corresponding to the somewhat arbitrary structure implicit in the clang::RecursiveASTVisitor class.

Using this new mode makes it possible to “add features by removing code” in clang-tidy, making the checks more maintainable and making it possible to run checks in all language modes.

Clang does not use this new mode by default.
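
So a tool that wants the new behaviour opts in explicitly; for example (a sketch, reusing the simple return-value matcher shown in the next section), a C++ tool would wrap its top-level matcher like this:

traverse(TK_IgnoreUnlessSpelledInSource,
    returnStmt(hasReturnValue(integerLiteral().bind("returnVal"))))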

Implicit nodes in expressions

One of the issues identified is that the Clang AST contains many nodes which must exist in order to satisfy the requirements of the language. For example, a simple function relying on an implicit conversion might look like this:

struct A {
    A(int);
    ~A();
};

A f()
{
    return 42;
}

In the new IgnoreUnlessSpelledInSource mode, this is represented as

ReturnStmt
`-IntegerLiteral '42'

and the integer literal can be matched with

returnStmt(hasReturnValue(integerLiteral().bind("returnVal")))

In the default mode, the AST might be (depending on C++ language dialect) represented by something like:

ReturnStmt
`-ExprWithCleanups
  `-CXXConstructExpr
    `-MaterializeTemporaryExpr
      `-ImplicitCastExpr
        `-CXXBindTemporaryExpr
          `-ImplicitCastExpr
            `-CXXConstructExpr
              `-IntegerLiteral '42'

To newcomers to the Clang AST, and to me, it is not obvious what all of the nodes there are for. I can reason that an instance of A must be constructed. However, there are two CXXConstructExprs in this AST and many other nodes, some of which are due to the presence of a user-provided destructor, others due to the temporary object. These kinds of extra nodes appear in most expressions, such as when processing arguments to a function call or constructor, declaring or assigning a variable, converting something to bool in an if condition etc.

There are already AST Matchers such as ignoringImplicit() which skip over some of the implicit nodes in AST Matchers. Still though, a complete matcher for the return value of this return statement looks something like

returnStmt(hasReturnValue(
    ignoringImplicit(
        ignoringElidableConstructorCall(
            ignoringImplicit(
                cxxConstructExpr(hasArgument(0,
                    ignoringImplicit(
                        integerLiteral().bind("returnVal")
                        )
                    ))
                )
            )
        )
    ))

Another mouthful.

There are several problems with this.

  • Typical clang-tidy checks which deal with expressions tend to require extensive use of such ignoring...() matchers. This makes the matcher expressions in such clang-tidy checks quite noisy
  • Different language dialects represent the same C++ code with different AST structures/extra nodes, necessitating testing and implementing the check in multiple language dialects
  • The requirement or possibility to use these intermediate matchers at all is not easily discoverable, nor are the matchers required to satisfy all language modes easily discoverable
  • If an AST Matcher is written without explicitly ignoring implicit nodes, Clang produces lots of surprising results and incorrect transformations

Implicit declaration nodes

Aside from implicit expression nodes, Clang AST Matchers also match on implicit declaration nodes in the AST. That means that if we wish to make copy constructors in our codebase explicit we might use a matcher such as

cxxConstructorDecl(
    isCopyConstructor()
    ).bind("prepend_explicit")

This will work fine in the new IgnoreUnlessSpelledInSource mode.

However, in the default mode, if we have a struct with a compiler-provided copy constructor such as:

struct Copyable {
    OtherStruct m_o;
    Copyable();
};

we will match the compiler-provided copy constructor. When our check inserts explicit at the copy constructor location, it will result in:

struct explicit Copyable {
    OtherStruct m_o;
    Copyable();
};

Clearly this is an incorrect transformation despite the transformation code “looking” correct. This AST Matcher API is hard to use correctly and easy to use incorrectly. Because of this, the isImplicit() matcher is typically used in clang-tidy checks to attempt to exclude such transformations, making the matcher expression more complicated.
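
In practice, a default-mode version of the matcher above therefore tends to look something like this sketch (the exact guards vary between checks):

cxxConstructorDecl(
    isCopyConstructor(),
    unless(isImplicit())
    ).bind("prepend_explicit")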

Implicit template instantiations

Another surprise in the behavior of AST Matchers is that template instantiations are matched by default. That means that if we wish to change class members of type int to type safe_int, for example, we might write a matcher something like

fieldDecl(
    hasType(asString("int"))
    ).bind("use_safe_int")

This works fine for non-template code.

If we have a template like

template <typename T>
struct TemplStruct {
    TemplStruct() {}
    ~TemplStruct() {}

private:
    T m_t;
};

then clang internally creates an instantiation of the template with a substituted type for each template instantiation in our translation unit.

The new IgnoreUnlessSpelledInSource mode ignores those internal instantiations and matches only on the template declaration (ie, with the T un-substituted).

However, in the default mode, our template will be transformed to use safe_int too:

template <typename T>
struct TemplStruct {
    TemplStruct() {}
    ~TemplStruct() {}

private:
    safe_int m_t;
};

This is clearly an incorrect transformation. Because of this, isTemplateInstantiation() and similar matchers are often used in clang-tidy to exclude AST matches which produce such transformations.
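
A default-mode version of the field matcher above therefore typically grows an extra guard; since isTemplateInstantiation() applies to the enclosing class rather than to the field itself, a sketch of such a guard looks like:

fieldDecl(
    hasType(asString("int")),
    unless(hasAncestor(cxxRecordDecl(isTemplateInstantiation())))
    ).bind("use_safe_int")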

Matching metaphorical code

C++ has multiple features which are designed to be simple expressions which the compiler expands to something less-convenient to write. Range-based for loops are a good example as they are a metaphor for an explicit loop with calls to begin and end among other things. Lambdas are another good example as they are a metaphor for a callable object. C++20 adds several more, including rewriting use of operator!=(...) to use !operator==(...) and operator<(...) to use the spaceship operator.

[I admit that in writing this blog post I searched for a metaphor for “a device which aids understanding by replacing the thing it describes with something more familiar” before realizing the recursion. I haven’t heard these features described as metaphorical before though…]

All of these metaphorical replacements can be explored in the Clang AST or on CPP Insights.

Matching these internal representations is confusing and can cause incorrect transformations. None of these internal representations are matchable in the new IgnoreUnlessSpelledInSource mode.

In the default matching mode, the CallExprs for begin and end are matched, as are the implicit CXXRecordDecl behind the lambda and the hidden comparisons within rewritten binary operators such as spaceship (causing bugs in clang-tidy checks).

Easy Mode

This new mode of AST Matching is designed to be easier for users, especially newcomers to the Clang AST, to use and discover while offering protection from typical transformation traps. It will likely be used in my Qt-based Gui Quaplah, but it must be enabled explicitly in existing clang tools.

As usual, feedback is very welcome!

Within KDE we have a service called the binary factory. It’s a Jenkins-driven build pipeline which we use in the KMyMoney project to build certain binary installable versions of the project. For the generation of our AppImage version we package all dependencies into a large tar file so that we don’t have to rebuild them every day.

Due to new versions of the online banking libraries we use, I updated some package information and let the service do its thing to create the tar file with the pre-built dependencies. This is usually a matter of a few hours. When I checked the progress after a while, I found out that the build had failed. Too bad, I thought, and took a look at the console log of the build to see what was going on. To my surprise I saw the following:

10:24:51  [  0%] Performing download step (download, verify and extract) for 'ext_tcl'
10:24:51  -- Downloading...
10:24:51     dst='/home/appimage//appimage-workspace//downloads/core-8-6-8.zip'
10:24:51     timeout='none'
10:24:51     inactivity timeout='none'
10:24:51  -- Using src='https://github.com/tcltk/tcl/archive/core-8-6-8.zip'
10:24:51  -- [download 100% complete]
10:24:52  -- verifying file...
10:24:52         file='/home/appimage//appimage-workspace//downloads/core-8-6-8.zip'
10:24:52  -- MD5 hash of
10:24:52      /home/appimage//appimage-workspace//downloads/core-8-6-8.zip
10:24:52    does not match expected value
10:24:52      expected: '36fbbc668961044fdda89c5ee2ba67a2'
10:24:52        actual: 'b018a409832df1788f22b1a983fd7c5b'

So the download verification of a package which has existed for quite some time failed due to a checksum mismatch. What? How could that be? This happened with the very same package once before, because someone added a file to the ZIP without changing the version number (a bad habit after all), so we had to adjust our checksum to the new value. Did this happen again? Who is dumb enough to do such strange things? I started the quest to figure out what happened this time.

Downloading the file manually, I get the same checksum as the one shown as actual above. I looked for another copy of the file on the internet, but did not find it anymore. Luckily, I had a copy of the old version around on one of my disks, so I was able to compare their contents:

thb@thb-nb:~/Downloads$ md5sum tcl-core-8-6-8-old.zip tcl-core-8-6-8-new.zip
36fbbc668961044fdda89c5ee2ba67a2 tcl-core-8-6-8-old.zip
b018a409832df1788f22b1a983fd7c5b tcl-core-8-6-8-new.zip

This matches the figures I found in the console log, and checking the sizes shows that the two files are identical in size:

thb@thb-nb:~/Downloads$ ls -l tcl-core-8-6-8-old.zip tcl-core-8-6-8-new.zip
-rw-r--r-- 1 thb users 7875812 13. Feb 17:05 tcl-core-8-6-8-new.zip
-rw-r--r-- 1 thb users 7875812 13. Feb 17:08 tcl-core-8-6-8-old.zip

Next, I extracted the contents into two directories and compared them recursively:

thb@thb-nb:~/Downloads$ diff -r tcl-core-8-6-8-old tcl-core-8-6-8-new
thb@thb-nb:~/Downloads$

No difference, but why does the checksum fail? Guess it needs a deep dive using binary comparison:

 thb@thb-nb:~/Downloads$ diff -u <(hexdump -C tcl-core-8-6-8-old.zip) <(hexdump -C tcl-core-8-6-8-new.zip)
 --- /dev/fd/63 2021-02-13 17:38:06.439702799 +0100
 +++ /dev/fd/62 2021-02-13 17:38:06.435702801 +0100
 @@ -492233,8 +492233,8 @@
 00782c80 75 00 74 63 6c 2d 63 6f 72 65 2d 38 2d 36 2d 38 |u.tcl-core-8-6-8|
 00782c90 2f 77 69 6e 2f 74 63 6c 73 68 2e 72 63 55 54 05 |/win/tclsh.rcUT.|
 00782ca0 00 01 47 7c 39 5a 50 4b 05 06 00 00 00 00 2c 08 |..G|9ZPK......,.|
-00782cb0 2c 08 cc 02 03 00 da 29 75 00 28 00 39 32 33 39 |,......)u.(.9239|
-00782cc0 61 62 64 65 65 62 39 31 36 38 66 63 31 34 39 65 |abdeeb9168fc149e|
-00782cd0 31 66 61 32 63 32 36 35 64 63 30 35 62 63 63 64 |1fa2c265dc05bccd|
-00782ce0 38 66 30 66                                     |8f0f|
+00782cb0 2c 08 cc 02 03 00 da 29 75 00 28 00 31 37 32 35 |,......)u.(.1725|
+00782cc0 62 37 34 36 39 35 36 30 66 38 30 32 66 31 33 32 |b7469560f802f132|
+00782cd0 61 38 37 61 62 63 65 35 39 61 65 33 32 32 38 64 |a87abce59ae3228d|
+00782ce0 64 30 65 64                                     |d0ed|
 00782ce4

0x782ce4 is 7875812 in decimal, so that diff is at the very end of the file, and up to address 0x782cb0 the files have the same content. On one hand good news, but on the other I still have no idea why that is. Let’s see what the built-in test of unzip has to report:

thb@thb-nb:~/Downloads$ unzip -t tcl-core-8-6-8-old.zip | grep -v OK
Archive: tcl-core-8-6-8-old.zip
9239abdeeb9168fc149e1fa2c265dc05bccd8f0f
No errors detected in compressed data of tcl-core-8-6-8-old.zip.

thb@thb-nb:~/Downloads$ unzip -t tcl-core-8-6-8-new.zip | grep -v OK
Archive: tcl-core-8-6-8-new.zip
1725b7469560f802f132a87abce59ae3228dd0ed
No errors detected in compressed data of tcl-core-8-6-8-new.zip.

Everything is OK (the details of which I have suppressed here using grep -v OK), and the values printed are the ones I see at the end of the file.

Looking at Wikipedia’s article about the ZIP format reveals that the difference is in a field called comment. Great! Still the question remains: why did it change in the first place, and when will it happen again (maybe other modules are affected as well)?
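
For completeness, the archive comment is easy to inspect programmatically. Here is a minimal sketch (assuming a well-formed archive and skipping error handling, so not production code) that prints the comment by locating the end-of-central-directory record, whose last fields are a two-byte comment length followed by the comment itself:

#include <algorithm>
#include <cstdint>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

int main(int argc, char **argv)
{
    if (argc < 2) {
        std::cerr << "usage: zipcomment <file.zip>\n";
        return 1;
    }
    std::ifstream in(argv[1], std::ios::binary | std::ios::ate);
    const std::streamoff size = in.tellg();
    // The end-of-central-directory record is 22 bytes plus a comment of at
    // most 65535 bytes, so scanning the last ~64 KiB of the file is enough.
    const std::streamoff tailLen = std::min<std::streamoff>(size, 22 + 65535);
    std::vector<unsigned char> tail(tailLen);
    in.seekg(size - tailLen);
    in.read(reinterpret_cast<char *>(tail.data()), tailLen);

    // Scan backwards for the record signature "PK\x05\x06" (0x06054b50).
    for (std::streamoff i = tailLen - 22; i >= 0; --i) {
        if (tail[i] == 0x50 && tail[i + 1] == 0x4b &&
            tail[i + 2] == 0x05 && tail[i + 3] == 0x06) {
            // The comment length is a little-endian 16-bit value at offset 20;
            // the comment text follows directly after the fixed 22 bytes.
            const uint16_t commentLen = tail[i + 20] | (tail[i + 21] << 8);
            std::cout << std::string(tail.begin() + i + 22,
                                     tail.begin() + i + 22 + commentLen)
                      << '\n';
            return 0;
        }
    }
    std::cerr << "no end-of-central-directory record found\n";
    return 1;
}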