
Wednesday, 20 December 2023

This is an update on the ongoing migration of jobs from Binary Factory to KDE's GitLab. Since the last blog a lot has happened.

A first update of Itinerary was submitted to Google Play directly from our GitLab.

Ben Cooksley has added a service for publishing our websites. Most websites are now built and published on our GitLab with only 5 websites remaining on Binary Factory.

Julius Künzel has added a service for signing macOS apps and DMGs. This allows us to build signed installers for macOS on GitLab.

The service for signing and publishing Flatpaks has gone live. Nightly Flatpaks built on our GitLab are now available at https://cdn.kde.org/flatpak/. For easy installation, builds created since yesterday include .flatpakref files and .flatpakrepo files.

Last, but not least, similar to the full CI/CD pipeline for Android we now also have a full CI/CD pipeline for Windows. For Qt 5 builds this pipeline consists of the following GitLab jobs:

  • windows_qt515 - Builds the project with MSVC and runs the automatic tests.
  • craft_windows_qt515_x86_64 - Builds the project with MSVC and creates various installation packages including (if enabled for the project) a *-sideload.appx file and a *.appxupload file.
  • sign_appx_qt515 - Signs the *-sideload.appx file with KDE's signing certificate. The signed app package can be downloaded and installed without using the Microsoft store.
  • microsoftstore_qt515 - Submits the *.appxupload package to the Microsoft store for subsequent publication. This job doesn't run automatically.
Notes:
  • The craft_windows_qt515_x86_64 job also creates .exe installers. Those installers are not yet signed on GitLab, i.e. Windows should warn you when you try to install them. For the time being, you can download signed .exe installers from Binary Factory.
  • There are also jobs for building with MinGW, but MinGW builds cannot be used for creating app packages for the Microsoft Store. (It's still possible to publish apps with MinGW installers in the Microsoft Store, but that's a different story.)
The workflow for publishing an update of an app in the Microsoft Store, as I envision it, is as follows:
  1. You download the signed sideload app package, install it on a Windows (virtual) machine (after uninstalling a previously installed version) and perform a quick test to ensure that the app isn't completely broken.
  2. Then you trigger the microsoftstore_qt515 job to submit the app to the Microsoft Store. This creates a new draft submission in the Microsoft Partner Center. The app is not published automatically. To actually publish the submission you have to log into the Microsoft Partner Center and commit the submission.

Enabling the Windows CD Pipeline for Your Project

If you want to start building Windows app packages (APPX) for your project then add the craft-windows-x86-64.yml template for Qt 5 or the craft-windows-x86-64-qt6.yml template for Qt 6 to the .gitlab-ci.yml of your project. Additionally, you have to add a .craft.ini file with the following content to the root of your project to enable the creation of the Windows app packages.
[BlueprintSettings]
kde/applications/myapp.packageAppx = True

kde/applications/myapp must match the path of your project's Craft blueprint.

When you have successfully built the first Windows app packages then add the craft-windows-appx-qt5.yml or the craft-windows-appx-qt6.yml template to your .gitlab-ci.yml to get the sign_appx_qt* job and the microsoftstore_qt* job.
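For illustration, the relevant part of a project's .gitlab-ci.yml could then look like this for Qt 5 (a sketch only: the sysadmin/ci-utilities project path shown here is an assumption based on KDE's usual CI template layout, so check the current CI documentation for the authoritative paths):

```yaml
include:
  - project: sysadmin/ci-utilities
    file:
      # Build and packaging jobs; creates the APPX packages
      - /gitlab-templates/craft-windows-x86-64.yml
      # Signing and Microsoft Store submission jobs
      - /gitlab-templates/craft-windows-appx-qt5.yml
```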

To enable signing, your project (more precisely, a branch of your project) needs to be cleared for using the signing service. This is done by adding your project to the project settings of the appxsigner. Similarly, to enable submission to the Microsoft Store, your project needs to be cleared by adding it to the project settings of the microsoftstorepublisher. If you have carefully curated metadata set in the store entry of your app that shouldn't be overwritten by data from your app's AppStream data, then have a look at the keep setting for your project. I recommend using keep sparingly, if at all, because at least for text content you will deprive people using the store of all the translations added by our great translation teams to your app's AppStream data.

Note that the first submission to the Microsoft Store has to be done manually.

Tuesday, 19 December 2023

All the Toolbx and Distrobox container images and the ones in my personal namespace on Quay.io are now signed using cosign.

How to set this up was not really well documented so this post is an attempt at that.

First we will look at how to set up a GitHub workflow using GitHub Actions to build multi-architecture container images with buildah and push them to a registry with podman. Then we will sign those images with cosign (sigstore) and detail what is needed to configure signature validation on the host. Finally, we will detail the remaining work needed to be able to do the entire process with podman alone.

Full example ready to go

If you just want to get going, you can copy the content of my github.com/travier/cosign-test repo and start building and pushing your containers. I recommend keeping only the cosign.yaml workflow for now (see below for the details).

“Minimal” GitHub workflow to build containers with buildah / podman

You can find those actions at github.com/redhat-actions.

Here is an example workflow with the Containerfile in the example sub directory:

name: "Build container using buildah/podman"

env:
  NAME: "example"
  REGISTRY: "quay.io/example"

on:
  # Trigger for pull requests to the main branch, only for relevant files
  pull_request:
    branches:
      - main
    paths:
      - 'example/**'
      - '.github/workflows/cosign.yml'
  # Trigger for push/merges to main branch, only for relevant files
  push:
    branches:
      - main
    paths:
      - 'example/**'
      - '.github/workflows/cosign.yml'
  # Trigger every Monday morning
  schedule:
    - cron:  '0 0 * * MON'

permissions: read-all

# Prevent multiple workflow runs from racing to ensure that pushes are made
# sequentially for the main branch. Also cancel in-progress workflow runs for
# pull requests only.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: ${{ github.event_name == 'pull_request' }}

jobs:
  build-push-image:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repo
        uses: actions/checkout@v4

      - name: Setup QEMU for multi-arch builds
        shell: bash
        run: |
          sudo apt install -y qemu-user-static

      - name: Build container image
        uses: redhat-actions/buildah-build@v2
        with:
          # Only select the architectures that matter to you here
          archs: amd64, arm64, ppc64le, s390x
          context: ${{ env.NAME }}
          image: ${{ env.NAME }}
          tags: latest
          containerfiles: ${{ env.NAME }}/Containerfile
          layers: false
          oci: true

      - name: Push to Container Registry
        uses: redhat-actions/push-to-registry@v2
        # The id is unused right now, will be used in the next steps
        id: push
        if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'
        with:
          username: ${{ secrets.BOT_USERNAME }}
          password: ${{ secrets.BOT_SECRET }}
          image: ${{ env.NAME }}
          registry: ${{ env.REGISTRY }}
          tags: latest

This should let you test changes to the image via builds in pull requests and publish the changes only once they are merged.

You will have to setup the BOT_USERNAME and BOT_SECRET secrets in the repository configuration to push to the registry of your choice.

If you prefer to use the GitHub internal registry then you can use:

env:
  REGISTRY: ghcr.io/${{ github.repository_owner }}

...
  username: ${{ github.actor }}
  password: ${{ secrets.GITHUB_TOKEN }}

You will also need to set the job permissions to be able to write GitHub Packages (container registry):

permissions:
  contents: read
  packages: write

See the Publishing Docker images GitHub Docs.

You should also configure the GitHub Actions settings as follows:

  • In the “Actions permissions” section, you can restrict allowed actions to: “Allow <username>, and select non-<username>, actions and reusable workflows”, with “Allow actions created by GitHub” selected and the following additional actions:
    redhat-actions/*,
    
  • In the “Workflow permissions” section, you can select the “Read repository contents and packages permissions” and select the “Allow GitHub Actions to create and approve pull requests”.

  • Make sure to add all the required secrets in the “Secrets and variables”, “Actions”, “Repository secrets” section.

Signing container images

We will use cosign to sign container images. With cosign, you get two main options to sign your containers:

  • Keyless signing: Sign containers with ephemeral keys by authenticating with an OIDC (OpenID Connect) protocol supported by Sigstore.
  • Self managed keys: Generate a “classic” long-lived key pair.

We will choose the “self managed keys” option here as it is easier to set up for verification on the host in podman. I will likely make another post once I figure out how to set up keyless signature verification in podman.

Generate a key pair with:

$ cosign generate-key-pair

Enter an empty password, as we will store this key in plain text as a repository secret (COSIGN_PRIVATE_KEY).
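If you use the GitHub CLI, the key pair generated above (cosign.key / cosign.pub) can be stored as a repository secret from the terminal instead of through the web UI:

```
$ gh secret set COSIGN_PRIVATE_KEY < cosign.key
```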

Then you can add the steps for signing with cosign at the end of your workflow:

      # Include at the end of the workflow previously defined

      - name: Login to Container Registry
        uses: redhat-actions/podman-login@v1
        if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ secrets.BOT_USERNAME }}
          password: ${{ secrets.BOT_SECRET }}

      - uses: sigstore/cosign-installer@v3.3.0
        if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'

      - name: Sign container image
        if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'
        run: |
          cosign sign -y --recursive --key env://COSIGN_PRIVATE_KEY ${{ env.REGISTRY }}/${{ env.NAME }}@${{ steps.push.outputs.digest }}
        env:
          COSIGN_EXPERIMENTAL: false
          COSIGN_PRIVATE_KEY: ${{ secrets.COSIGN_PRIVATE_KEY }}

2024-01-12 update: Sign container images recursively for multi-arch images.

We need to explicitly login to the container registry to get an auth token that will be used by cosign to push the signature to the registry.

This step sometimes fails, likely due to a race condition that I have not been able to figure out yet. Retrying failed jobs usually works.

You should then update the GitHub Actions settings to allow the new actions as follows:

redhat-actions/*,
sigstore/cosign-installer@*,

Configuring podman on the host to verify image signatures

First, we copy the public key to a designated place in /etc:

$ sudo mkdir /etc/pki/containers
$ curl -O "https://.../cosign.pub"
$ sudo cp cosign.pub /etc/pki/containers/
$ sudo restorecon -RFv /etc/pki/containers
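With the key in place, you can sanity-check a published image directly with the cosign CLI before configuring podman (the image name below is the example one used throughout this post):

```
$ cosign verify --key /etc/pki/containers/cosign.pub quay.io/example/example:latest
```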

Then we set up the registry config to tell podman to use sigstore signatures:

$ cat /etc/containers/registries.d/quay.io-example.yaml
docker:
  quay.io/example:
    use-sigstore-attachments: true
$ sudo restorecon -RFv /etc/containers/registries.d/quay.io-example.yaml

And then we update the container signature verification policy to:

  • Default to reject everything
  • Then for the docker transport:
    • Verify signatures for containers coming from our repository
    • Accept all other containers from other registries

If you do not plan on using containers from other registries, you can be even stricter here and only allow your own containers to be used.

/etc/containers/policy.json:

{
    "default": [
        {
            "type": "reject"
        }
    ],
    "transports": {
        "docker": {
            ...
            "quay.io/example": [
                {
                    "type": "sigstoreSigned",
                    "keyPath": "/etc/pki/containers/quay.io-example.pub",
                    "signedIdentity": {
                        "type": "matchRepository"
                    }
                }
            ],
            ...
            "": [
                {
                    "type": "insecureAcceptAnything"
                }
            ]
        },
        ...
    }
}

See the full man page for containers-policy.json(5).

You should now be good to go!
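As a quick check that the policy is enforced, try pulling your image (using the example names from this post):

```
$ podman pull quay.io/example/example:latest
```

If the signature is missing or does not match the configured key, podman should refuse the image with a signature validation error; with a stricter policy (default reject and no "" wildcard entry), images from other registries are refused as well.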

What about doing everything with podman?

Using this workflow, there is a (small) time window where the container images are pushed to the registry but not signed.

One option to avoid this problem would be to push the container to a “temporary” tag first, sign it, and then copy the signed container to the latest tag.

Another option is to use podman to push and sign the container image “at the same time”. However, podman still needs to push the image first and then sign it, so there is still a possibility that signing fails and you’re left with an unsigned image (this happened to me during testing).

Unfortunately for us, the version of podman available in the version of Ubuntu used for the GitHub runners (22.04) is too old to support signing containers. We thus need to use a newer podman from a container image to work around this.
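For reference, the podman-native push-and-sign operation boils down to something like this (a sketch; the flag requires a recent podman, and the registry must have use-sigstore-attachments enabled in its registries.d configuration):

```
$ podman push \
    --sign-by-sigstore-private-key ./cosign.key \
    quay.io/example/example:latest
```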

Here is the same workflow, adapted to only use podman for signing:

name: "Build container using buildah, push and sign it using podman"

env:
  NAME: "example"
  REGISTRY: "quay.io/example"
  REGISTRY_DOMAIN: "quay.io"

on:
  pull_request:
    branches:
      - main
    paths:
      - 'example/**'
      - '.github/workflows/podman.yml'
  push:
    branches:
      - main
    paths:
      - 'example/**'
      - '.github/workflows/podman.yml'
  schedule:
    - cron:  '0 0 * * MON'

permissions: read-all

# Prevent multiple workflow runs from racing to ensure that pushes are made
# sequentially for the main branch. Also cancel in-progress workflow runs for
# pull requests only.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: ${{ github.event_name == 'pull_request' }}

jobs:
  build-push-image:
    runs-on: ubuntu-latest
    container:
      image: quay.io/travier/podman-action
      options: --privileged -v /proc/:/host/proc/:ro
    steps:
      - name: Checkout repo
        uses: actions/checkout@v4

      - name: Setup QEMU for multi-arch builds
        shell: bash
        run: |
          for f in /usr/lib/binfmt.d/*; do cat $f | sudo tee /host/proc/sys/fs/binfmt_misc/register; done
          ls /host/proc/sys/fs/binfmt_misc

      - name: Build container image
        uses: redhat-actions/buildah-build@v2
        with:
          archs: amd64, arm64, ppc64le, s390x
          context: ${{ env.NAME }}
          image: ${{ env.NAME }}
          tags: latest
          containerfiles: ${{ env.NAME }}/Containerfile
          layers: false
          oci: true

      - name: Setup config to enable pushing Sigstore signatures
        if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'
        shell: bash
        run: |
          echo -e "docker:\n  ${{ env.REGISTRY_DOMAIN }}:\n    use-sigstore-attachments: true" \
            | sudo tee -a /etc/containers/registries.d/${{ env.REGISTRY_DOMAIN }}.yaml

      - name: Push to Container Registry
        # uses: redhat-actions/push-to-registry@v2
        uses: travier/push-to-registry@sigstore-signing
        if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'
        with:
          username: ${{ secrets.BOT_USERNAME }}
          password: ${{ secrets.BOT_SECRET }}
          image: ${{ env.NAME }}

This uses two additional workarounds for missing features:

  • There is no official container image that includes both podman and buildah right now, thus I made one: github.com/travier/podman-action
  • The redhat-actions/push-to-registry Action does not support signing yet (issue#89). I’ve implemented support for self managed key signing in pull#90. I’ve not looked at keyless signing yet.

You will also have to allow running my actions in the repository settings. In the “Actions permissions” section, you should use the following actions:

redhat-actions/*,
travier/push-to-registry@*,

Conclusion

The next steps are to figure out all the missing bits for keyless signing and replicate this entire process in GitLab CI.


The Brise theme is yet another fork of Breeze. The name comes from “Brise” being both the French and the German translation of “Breeze”.

As some people know, I’m contributing quite a lot to the Breeze style for the Plasma 6 release and I don’t intend to stop doing that. Both git repositories share the same git history and I didn’t massively rename all the C++ classes from BreezeStyle to BriseStyle to make it as easy as possible to backport commits from one repository to the other. There are also no plans to make this the new default style for Plasma.

My goal with this Qt style is to have a style that is not a big departure from Breeze as you know it but contains some small cosmetic changes. It serves as a place where I can experiment with new ideas and, if they prove popular, move them to Breeze.

Here is a breakdown of all the changes I made so far.

  • I made Brise coinstallable with Breeze, so that users can have both installed simultaneously. I minimized the changes while doing so to avoid merge conflicts.

  • I increased the border radius of all the elements from 3 pixels to 5 pixels. This value is configurable between small (3 pixels), medium (5 pixels) and large (7 pixels). A merge request was opened in Breeze and might make it into Plasma 6.1. The only difference is that in Breeze the default will likely remain 3 pixels for the time being.

Cute buttons and frames with 5 pixels border radius

  • Add a separator between the search field and the title in the standard KDE config windows, which serves as an extension of the separator between the list of settings categories and the settings page. This is mostly to be similar to System Settings and other Kirigami applications. There is a pending merge request for this in Breeze as well.
  • A new tab style that removes the blue line from active tabs and introduces other small changes. Non-editable tabs now also fill the entire available horizontal space. I’m not completely happy with the look yet, so no merge request has been submitted to Breeze.

Separator in the toolbar and the new tabs

  • Remove outlines from menu and combobox items. My goal is to go in the same direction as KirigamiAddons.RoundedItemDelegate.

Menu without outlines

  • Ensure that all the controls have the same height. Currently, a small disparity in height is noticeable when they are in the same row. The patch is still a bit hacky and needs wider testing on a large range of apps to ensure there are no regressions, but it is an improvement I will definitely submit upstream once I feel it’s ready.

Here, in these two screenshots, every control is 35 pixels in height.

Finally, here are Kate’s and KMail’s settings with Breeze and Brise.

Monday, 18 December 2023

In this post, I will detail how to replace sudo (a setuid binary) by using SSH over a local UNIX socket.

I am of the opinion that setuid/setgid binaries are a UNIX legacy that should be deprecated. I will explain the security reasons behind that statement in a future post.

This is related to the work of the Confined Users SIG in Fedora.

Why bother?

The main benefit of this approach is that it enables root access to the host from any unprivileged toolbox / distrobox container. This is particularly useful on Fedora Atomic desktops (Silverblue, Kinoite, Sericea, Onyx) or Universal Blue (Bluefin, Bazzite) for example.

As a side effect of this setup, we also get the following security advantages:

  • No longer rely on sudo as a setuid binary for privileged operations.
  • Access control via a physical hardware token (here a Yubikey) for each privileged operation.

Setting up the server

Create the following systemd units:

/etc/systemd/system/sshd-unix.socket:

[Unit]
Description=OpenSSH Server Unix Socket
Documentation=man:sshd(8) man:sshd_config(5)

[Socket]
ListenStream=/run/sshd.sock
Accept=yes

[Install]
WantedBy=sockets.target

/etc/systemd/system/sshd-unix@.service:

[Unit]
Description=OpenSSH per-connection server daemon (Unix socket)
Documentation=man:sshd(8) man:sshd_config(5)
Wants=sshd-keygen.target
After=sshd-keygen.target

[Service]
ExecStart=-/usr/sbin/sshd -i -f /etc/ssh/sshd_config_unix
StandardInput=socket

Create a dedicated configuration file /etc/ssh/sshd_config_unix:

# Deny all non key based authentication methods
PermitRootLogin prohibit-password
PasswordAuthentication no
PermitEmptyPasswords no
GSSAPIAuthentication no

# Only allow access for specific users
AllowUsers root tim

# The default is to check both .ssh/authorized_keys and .ssh/authorized_keys2
# but this is overridden so installations will only check .ssh/authorized_keys
AuthorizedKeysFile .ssh/authorized_keys

# override default of no subsystems
Subsystem sftp /usr/libexec/openssh/sftp-server

Enable and start the new socket unit:

$ sudo systemctl daemon-reload
$ sudo systemctl enable --now sshd-unix.socket

Add your SSH Key to /root/.ssh/authorized_keys.
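Before moving on to the client, you can check that the socket unit is really listening (these commands assume systemd and iproute2 are available):

```
$ systemctl is-active sshd-unix.socket
$ ss -lx | grep sshd.sock
```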

Setting up the client

Install socat and use the following snippet in ~/.ssh/config:

Host host.local
    User root
    # We use `run/host/run` instead of `/run` to transparently work in and out of containers
    ProxyCommand socat - UNIX-CLIENT:/run/host/run/sshd.sock
    # Path to your SSH key. See: https://tim.siosm.fr/blog/2023/01/13/openssh-key-management/
    IdentityFile ~/.ssh/keys/localroot
    # Force TTY allocation to always get an interactive shell
    RequestTTY yes
    # Minimize log output
    LogLevel QUIET

Test your setup:

$ ssh host.local
[root@phoenix ~]#

Shell alias

Let’s create a sudohost shell “alias” (function) that you can add to your Bash or ZSH config to make using this command easier:

# Get an interactive root shell or run a command as root on the host
sudohost() {
    if [[ ${#} -eq 0 ]]; then
        cmd="$(printf "exec \"%s\" --login" "${SHELL}")"
        ssh host.local "${cmd}"
    else
        cmd="$(printf "cd \"%s\"; exec %s" "${PWD}" "$*")"
        ssh host.local "${cmd}"
    fi
}

2024-01-12 update: Fix quoting and array expansion (thanks to o11c).

Test the alias:

$ sudohost id
uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
$ sudohost pwd
/var/home/tim
$ sudohost ls
Desktop Downloads ...

We’ll keep a distinct alias for now as we’ll still have a need for the “real” sudo in our toolbox containers.
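To see exactly what string the function hands to ssh after the quoting fix, you can reproduce the printf expansion on its own (example values, not taken from a real session):

```shell
# Build the command string the same way sudohost does for `ls -l`
# run from /var/home/tim (example values)
cmd="$(printf "cd \"%s\"; exec %s" "/var/home/tim" "ls -l")"
echo "${cmd}"   # → cd "/var/home/tim"; exec ls -l
```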

Security?

As-is, this setup is basically a free local root for anything running under your current user that has access to your SSH private key. This is, however, likely already the case on most developers’ workstations if you are part of the wheel, sudo or docker groups, as any code running under your user can edit your shell config and set a backdoored alias for sudo, or run arbitrary privileged containers via Docker. sudo itself is not a security boundary as commonly configured by default.

To truly increase our security posture, we would instead need to remove sudo (and all other setuid binaries) and run our session under a fully unprivileged, confined user, but that’s for a future post.

Setting up U2F authentication with an sk-based SSH key-pair

To make it more obvious when commands are run as root, we can set up SSH authentication using U2F, with a Yubikey as an example. While this by itself does not, strictly speaking, increase the security of this setup, it makes it harder to run commands without you being somewhat aware of it.

First, we need to figure out which algorithms are supported by our Yubikey:

$ lsusb -v 2>/dev/null | grep -A2 Yubico | grep "bcdDevice" | awk '{print $2}'

If the value is 5.2.3 or higher, then we can use ed25519-sk, otherwise we’ll have to use ecdsa-sk to generate the SSH key-pair:

$ ssh-keygen -t ed25519-sk
# or
$ ssh-keygen -t ecdsa-sk
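If you want to script the choice, the version comparison can be done with GNU sort's -V option (a sketch; the ver value is whatever the lsusb command above printed):

```shell
ver="5.4.3"  # substitute the bcdDevice value reported by lsusb
# sort -V -C exits 0 if its input is already in version order,
# i.e. if 5.2.3 <= $ver
if printf '5.2.3\n%s\n' "$ver" | sort -V -C; then
    keytype="ed25519-sk"
else
    keytype="ecdsa-sk"
fi
echo "$keytype"   # → ed25519-sk
```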

Add the new sk-based SSH public key to /root/.ssh/authorized_keys.

Update the server configuration to only accept sk-based SSH key-pairs:

/etc/ssh/sshd_config_unix:

# Only allow sk-based SSH key-pairs authentication methods
PubkeyAcceptedKeyTypes sk-ecdsa-sha2-nistp256@openssh.com,sk-ssh-ed25519@openssh.com

...

Restricting access to a subset of users

You can also further restrict access to the UNIX socket by configuring classic user/group UNIX permissions:

/etc/systemd/system/sshd-unix.socket:

...

[Socket]
...
SocketUser=tim
SocketGroup=tim
SocketMode=0660
...

Then reload systemd’s configuration and restart the socket unit.
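As with the initial setup, that is:

```
$ sudo systemctl daemon-reload
$ sudo systemctl restart sshd-unix.socket
```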

Next steps: Disabling sudo

Now that we have a working alias to run privileged commands, we can disable sudo access for our user.

Important backup / pre-requisite step

Make sure that you have a backup and are able to boot from a LiveISO in case something goes wrong.

Set a strong password for the root account. Make sure that you can locally log into the system via a TTY console.

If you have the classic sshd server enabled and listening on the network, make sure to disable remote root login and password logins.

Removing yourself from the wheel / sudo groups

Open a terminal running as root (i.e. don’t use sudo for those commands) and remove your user from the wheel or sudo groups using:

$ usermod -rG wheel tim

You can also update the sudo config to remove access for users that are part of the wheel group:

# Comment / delete this line
%wheel  ALL=(ALL)       ALL

Removing the setuid binaries

To fully benefit from the security advantage of this setup, we need to remove the setuid binaries (sudo and su).

If you can, uninstall sudo and su from your system. This is usually not possible due to package dependencies (su is part of util-linux on Fedora).

Another option is to remove the setuid bit from the sudo and su binaries:

$ chmod u-s $(which sudo)
$ chmod u-s $(which su)

You will have to re-run those commands after each update on classic systems.
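To check what the setuid bit looks like and that chmod u-s really clears it, here is a harmless demonstration on a scratch file rather than the real binaries:

```shell
# Create a scratch file with the setuid bit set;
# the leading 4 in the octal mode is the setuid bit
f="$(mktemp)"
chmod 4755 "$f"
stat -c '%a' "$f"   # → 4755
chmod u-s "$f"      # clear the setuid bit, as done for sudo and su above
stat -c '%a' "$f"   # → 755
rm -f "$f"
```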

Setting this up for Fedora Atomic desktops is a little bit different as /usr is read only. This will be the subject of an upcoming blog post.

Conclusion

Like most of the time with security, this is not a silver bullet solution that will make your system “more secure” (TM). I have been working on this setup as part of my investigation to reduce our reliance on setuid binaries and trying to figure out alternatives for common use cases.

Let me know if you found this interesting as that will likely motivate me to write the next part!


An updated stable release of XWayland Video Bridge is out now for packaging.

https://download.kde.org/stable/xwaylandvideobridge/

sha256 ea72ac7b2a67578e9994dcb0619602ead3097a46fb9336661da200e63927ebe6

Signed by E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Esk-Riddell <jr@jriddell.org>
https://jriddell.org/esk-riddell.gpg

Changes

  • Also skip the switcher
  • Do not start in an X11 session and opt out of session management

Wednesday, 13 December 2023

There are some plans to have a more official blog post on plasma-mobile.org; these are my personal ramblings below…

Plasma 6 is coming together nicely on the desktop!

Coming back from hiatus, I was pleasantly greeted by a much more working session than when I last saw it in May; I have now completely switched over to it on my main machine!

On the other hand, there is still a lot of work to do on mobile to prepare it for the Plasma 6 release in February. I will outline the current situation and the work I have done in the past few months in order to make Plasma 6 a possibility for Plasma Mobile.

Context 🔗

I started working on KDE projects again in October (after an 8 month hiatus), slowly getting back into the groove of things. I unfortunately do not have as much spare time these days after work and school, but I try to do what I can!

The distro situation 🔗

For Plasma Mobile shell development on phones, we need to have distributions with KDE packages built from git master.

Many years ago, we had Neon images with Plasma Mobile, maintained by Bhushan, that were used for development, but they required a large investment of time to maintain. We no longer have the (human) resources to maintain a distribution, so we are dependent on working with other distributions for development images.

Until this year, Manjaro had helped us by providing daily development images. Unfortunately, I have not really had contact with the project since one of their employees left earlier this year.

As a sidenote: Maintenance of the Manjaro Plasma Mobile images seemed to be inactive in the past few months, though there have been some encouraging signs from the forum in the past few days?


And so, there was a predicament. While I could develop the Plasma 6 mobile shell on the desktop, I had no way of testing it on a phone.

Thankfully, postmarketOS graciously took up the effort of maintaining a repository of KDE packages that track git master. Instructions can be viewed here. With the Plasma 6 beta release, there will be postmarketOS images that can be easily downloaded and tested with!

Seshan has also been working on an immutable distro ProLinux which has images that track git master as well, which have been useful to test with.

Porting 🔗

A big part of the porting effort is simply porting the shell and applications to Qt 6 and KDE Frameworks 6.

In the case of the shell, porting to Qt 6 was luckily fairly trivial. There were far more changes to the base KDE Frameworks as package structures and QML plugins were being reworked, though that has stabilized in recent months. There are still some major regressions that need to be resolved before February (discussed later), but I am reasonably confident that the shell will be in good shape by then.

For applications however, it is more of a mixed bag.

Several applications do not have active maintainers, and others need much more polish for mobile usage (as not all application developers have phones to test with).

Technically, I am responsible for KClock, KWeather and KRecorder, but I have typically contributed to a multitude of applications to ensure that they work well on mobile. This time around, though, I have only really been able to work on the shell with the limited spare time I have, and so I have not been able to do some of the heavier porting work for applications.

KRecorder in particular is unlikely to be ported in time for Plasma 6 as it depends on Qt Multimedia, which has had significant changes in Qt 6.

Beyond porting, I have also noticed some significant mobile-specific regressions in other applications that have been ported, which need to be addressed.

I would encourage anyone who is thinking of contributing to KDE to start here, application development is a very good way to learn Qt and get into further KDE contributions (as it is quite self-contained, compared to, say, the shell).

Styles 🔗

Plasma Mobile currently uses a separate Qt Quick style from the desktop (qqc2-breeze-style vs. qqc2-desktop-style), whose maintenance was lagging for Plasma 6.

The separate style is needed in order to have better performance, as it avoids some of the complex theming that the desktop needs to use for unifying styles between Qt Quick and Qt Widgets applications. However, it is a maintenance burden.

In November, I did a bunch of work on qqc2-breeze-style to fix Plasma 6 related regressions, and to make it render more similarly to qqc2-desktop-style.

Task Switcher moving to KWin 🔗

In Plasma 5, the task switcher was built into the plasmashell process (which contains the homescreen, panels and most of what you see in Plasma), piping the application thumbnails from KWin.

It was a bit of a hack done in the name of performance: the task switcher was built into the homescreen, so when it was opened, apps were minimized, revealing the task switcher underneath. This was not ideal, though, as it required convoluted logic to swap between the homescreen and task switcher views, and the thumbnails streamed from KWin were not optimal performance-wise.

With Plasma 6, I am moving the task switcher to be a self-contained KWin effect instead, which is in line with how the Desktop overview effect is implemented. This moves the task switcher from plasmashell to KWin, cleaning up the code and potentially improving performance in how the application previews are displayed.

We can also make use of KWin’s infrastructure for gesture-only mode (removing the navigation bar), rather than relying on a hack with invisible panels in order to trigger it.

The move has been a bit problematic though, as KWin developed regressions during Plasma 6 development that caused effect rendering to break on PinePhone and SDM845 devices (possibly due to OpenGL versions?). As of Dec. 13, this has been fixed.

Rewriting Folio - the homescreen 🔗

Note: I will be writing a separate blog post that goes much more into detail in the future.

For some context, the default homescreen in Plasma 5 is Halcyon, which provides a simple way to have a list of applications, while allowing them to be pinned and grouped into folders.

We also have the Folio homescreen, which was the original default (before Plasma 5.26) that was more similar to a traditional homescreen, having favourites pages and an application drawer to access the full list of apps.

The problem with Folio in Plasma 5 though was that it was particularly unstable (known to brick the shell), and was effectively an extended desktop canvas, so screen rotations and scaling changes would completely ruin the layout.

I knew that it would require a very significant effort in order to rewrite it and fix its issues, so I developed Halcyon as a stopgap solution until I had the time to fix Folio.


And so I spent about 5 weeks starting in October working solely on the Folio rewrite! It will be shipping as the default homescreen once again in Plasma 6.

I am pretty happy with how it turned out. It supports:

  • An app drawer
  • KRunner search
  • Folders
  • Pages
  • Drag and drop between all of the above
  • Applets/widgets
  • Row-column flipping for screen rotations
  • Customizable row and column counts
  • Customizable page transitions
  • Ability to import and export homescreen layouts as files
  • … and more!

Applets in particular are pretty exciting, though they still need some work. They use the same infrastructure as the Desktop, so we can use existing applets!

New applets for mobile apps can eventually also be developed, pending interest.

A new service: plasma-mobile-envmanager 🔗

Plasma Mobile relies on some configurations (set in files) for KWin and the shell in general that directly conflict with what Plasma Desktop expects.

For example, we use different look-and-feel packages to provide pieces such as the lockscreen theme, as well as tweaks for features such as disabling window decorations.

In Plasma 5, this was accomplished by having the distribution ship config files that are installed to /etc/xdg (from plasma-phone-settings), which overrode Plasma related settings for users.

This was problematic in that it would affect the desktop session, making it impossible to use both the desktop and mobile sessions without some tweaking before switching. It was also a barrier to Plasma Mobile being easily installable as “just another desktop environment”, which was a common complaint.


In Plasma 6, I introduced plasma-mobile-envmanager, a utility that runs prior to shell startup and automatically switches configurations between what Plasma Mobile needs and what Plasma Desktop needs.

This frees distros from having to install hardcoded configs onto the system, and makes Plasma Mobile easy to install as a separate desktop environment on existing systems.

A new application: plasma-mobile-initial-start 🔗

I added an application that runs when starting Plasma Mobile for the first time, guiding users through configuring their system, from setting up Wi-Fi to configuring cellular settings.

It currently exists as an application that runs when the shell is started for the first time.

However, it likely needs to eventually be ported to a true first-start wizard, similar to what GNOME has as well as pico-wizard (used by Manjaro), so that it can run with elevated permissions and the user does not have to be prompted for their password.

Docked mode 🔗

With Plasma 6, I am taking some steps toward improving support for attaching a monitor, keyboard and mouse.

A new “Docked Mode” quick setting was introduced that, when activated:

  • Brings back window decorations
  • Stops opening application windows in fullscreen

Eventually, some more work can be done in order to have the external monitors load desktop panels instead of the mobile ones, which should make the experience equivalent to Plasma Desktop.

Telephony 🔗

I have historically not done much work on the telephony front.

There have luckily been contributors that have worked on Spacebar and Plasma Dialer (shoutout to Michael and Alexey!) in the past few years, allowing me to focus on other things.

From recent testing though, there have been a lot of regressions in Plasma 6, so I likely need to start learning the ropes of how it all works in order to help out. Of particular focus for me will be improving the quality of the cellular settings and overall shell integration with ModemManager.

My current carrier only supports VoLTE so I have been unable to test calling, and I have had trouble with my PinePhone in getting cellular working, but I will probably try buying a USB modem to do testing on the desktop with.

Settings module consolidation 🔗

In Plasma 5, a lot of mobile specific settings modules lived outside of the Plasma Mobile repository, in places such as the settings application.

I moved these settings modules together to be in the Plasma Mobile repository. This also removes the need to have a separate build of plasma-nm.

Other things to address 🔗

A big pain point in Plasma 5 was the mobile lockscreen. There were many cases of it crashing (causing the white text on black screen issue) as well as extraordinarily slow load times.

Much of it likely stems from the fact that it has to load the QML files right after the device locks, which can be slow and sometimes has graphical issues when coupled with suspend. I have tried to optimize the load time in the past, but it may be the case that we need to rethink the architecture a bit, not sure…

Conclusion 🔗

I will be returning to a university term in January, likely making it harder for me to contribute for a few months again.

I have luckily finished most of the features I have wanted to get done for Plasma 6, and am now spending my effort fixing bugs and improving code quality. I hope that we can have a successful, bug-free Plasma Mobile release in February, but it is quite daunting at the moment as a single volunteer contributor for the mobile shell.

If you are interested in helping contribute, I encourage you to join the Plasma Mobile matrix room!

There is also some documentation on the wiki that can help you get started.

go konqi!

Thursday, 7 December 2023

For the fourth installment of Off-Theme we have a global theme based on the granddaddy of all the classic Unix desktops, a desktop that ruled the roost of the workstations from a bygone era. It is time to pay tribute to the DE that once dominated the Unix world.

Tuesday, 5 December 2023

Thank you to everyone who reported issues and contributed to QCoro. Your help is much appreciated!

Support for awaiting Qt signals with QPrivateSignal

Qt has a feature where signals can be made “private” (in the sense that only the class that defines the signal can emit it) by appending a QPrivateSignal argument to the signal method:

class MyObject : public QObject {
    Q_OBJECT
    ...
Q_SIGNALS:
    void error(int code, const QString &message, QPrivateSignal);
};

QPrivateSignal is a type that is defined inside the Q_OBJECT macro, so it’s private, and as such only the MyObject class can emit the signal, since only MyObject can instantiate QPrivateSignal:

void MyObject::handleError(int code, const QString &message)
{
    Q_EMIT error(code, message, QPrivateSignal{});
}

QCoro has a feature that makes it possible to co_await a signal emission and returns the signal’s arguments as a tuple:


MyObject myObject;
const auto [code, message] = co_await qCoro(&myObject, &MyObject::error);

While it was possible to co_await a “private” signal previously, it would return the QPrivateSignal value as an additional element in the result tuple, and on some occasions would not compile at all.

In QCoro 0.10, we can detect the QPrivateSignal argument and drop it inside QCoro so that it does not cause trouble and does not clutter the result type.

Achieving this wasn’t simple, as it’s not really possible to detect the type directly (because it’s private). For example, code like this would fail to compile, because we are not allowed to refer to Obj::QPrivateSignal, since that type is private to Obj:

template<typename T, typename Obj>
constexpr bool is_qprivatesignal = std::is_same_v<T, typename Obj::QPrivateSignal>;

After many different attempts, we ended up abusing __PRETTY_FUNCTION__ (and __FUNCSIG__ on MSVC) and checking whether the function’s name contains the QPrivateSignal string in the expected location. It’s a whacky hack, but hey - if it works, it’s not stupid :). And thanks to improvements in compile-time evaluation in C++20, the check is evaluated completely at compile time, so there’s no runtime overhead of obtaining the current source location and doing string comparisons.
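The string-matching trick can be sketched roughly as follows. This is a simplified, hypothetical version: the deduced template argument is printed as part of the function's signature string, which can be searched at compile time; QCoro's real check also verifies the position of the match, which this sketch does not.

```cpp
#include <string_view>

// Hypothetical sketch of the compile-time name check. The deduced template
// argument T appears in the compiler-generated signature string, which we
// search for "QPrivateSignal". Simplified; not QCoro's exact implementation.
template<typename T>
constexpr bool hasPrivateMarkerName() {
#if defined(_MSC_VER)
    constexpr std::string_view signature = __FUNCSIG__;
#else
    constexpr std::string_view signature = __PRETTY_FUNCTION__;
#endif
    return signature.find("QPrivateSignal") != std::string_view::npos;
}

// Stand-in for a QObject subclass. In real Qt the nested QPrivateSignal type
// is private, but it still reaches this check as a deduced template argument.
struct DemoObject {
    struct QPrivateSignal {};
};

// The whole check runs at compile time, so it can feed a static_assert.
static_assert(hasPrivateMarkerName<DemoObject::QPrivateSignal>());
static_assert(!hasPrivateMarkerName<int>());
```

Note that the enclosing function's own name must not contain "QPrivateSignal", or every instantiation would match; presumably this is part of why the real check inspects the expected location of the match rather than the whole string.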

Source Code Reorganization (again!)

A big part of QCoro is template classes, so there’s a lot of code in headers. In my opinion, some of the files (especially qcorotask.h) were getting hard to read and navigate, and that made it harder to just see the API of the class (like you get with non-template classes), which is what users of a library are usually most interested in.

Therefore I decided to move definitions into separate files, so that they don’t clutter the main include files.

This change is completely source- and binary-compatible, so QCoro users don’t have to make any changes to their code. The only difference is that the main QCoro headers are much prettier to look at now.

Bugfixes

  • QCoro::waitFor() now re-throws exceptions (#172, Daniel Vrátil)
  • Replaced deprecated QWebSocket::error with QWebSocket::errorOccurred in the QCoroWebSockets module (#174, Marius P)
  • Fix QCoro::connect() not working with lambdas (#179, Johan Brüchert)
  • Fix library name postfix for qmake compatibility (#192, Shantanu Tushar)
  • Fix “std::coroutine_traits isn't a class template” error with LLVM 16 (#196, Rafael Sadowski)

Full changelog

See changelog on Github

Sunday, 3 December 2023


KStars v3.6.8 was released on 2023.12.03 for Windows, macOS & Linux. It's a bi-monthly bug-fix release with a couple of exciting features.

Aberration Inspector


John Evans introduces the very exciting Aberration Inspector, a tool that makes use of Autofocus to analyze backfocus and sensor tilt in the connected optical train. It solves up to 9 virtual tiles on the sensor as defined by the existing Mosaic Mask.


The information is then used to analyze:
  • Back focus.
  • Sensor Tilt.
There are 4 sections:
  • A V-curve for each tile.
  • Table of data detailing the curve fitting results.
  • Analysis of back focus and tilt.
  • 3D Surface graphic to explain the Petzval Surface intersection with the sensor.
This release provides display-only functionality. In the future, it would be possible to add functionality offering recommendations for adjustments using devices such as Octopi, PhotonCage, etc.

Sub-exposure Calculator


Joseph McGee continues to add improvements and fixes to the Sub-exposure Calculator. For usability, the window is now resizeable, an issue with the display of tooltips was corrected, and an indicator has been added for the sensor type of the selected camera (Mono / Color). For functionality, the upper limit of the Noise Increase input parameter was increased, and support was added for cameras with non-variable read noise (cameras with CCD sensors).


Several new camera data files were added to the KStars source code repository, and a function allowing direct download of camera files from the repository was enabled. (Note: users who have created their own camera data files may wish to set the file attribute to read-only, and/or make a backup copy, in case a downloaded file with the same name accidentally overwrites it.)

A new experimental graphical tool to determine an appropriate number of sub-exposures for integration was added. This tool allows the selection of an exposure time to noise ratio for a stacked image; the tool will compute the number of sub-exposures required to achieve that value.

Added several new camera data files:
  • Atik-16200CCD_Mono.xml
  • FLI-16200CCD_Mono.xml
  • QHY_CCD_294M_Pro.xml
  • QHY_CCD_461_PH.xml
  • QHY_CCD_163C.xml
  • QHY_CCD_163M.xml
  • QHY_CCD_268C.xml
  • QHY_CCD_294M.xml
  • QHY_CCD_600_PH.xml
  • ZWO_CCD_ASI294MC_Pro.xml
  • ZWO_CCD_ASI294MM_Pro.xml
  • ZWO_CCD_ASI533MC_Pro.xml
  • ZWO_CCD_ASI533MM_Pro.xml
  • ZWO_CCD_ASI2600MC_Pro.xml
  • ZWO_CCD_ASI6200MC_Pro.xml
  • Nikon_DSLR_DSC_D5100_(PTP_mode).xml
  • Nikon_DSLR_DSC_D700_(PTP_mode).xml

FITSViewer Solver


Hy Murveit added a very useful feature to the FITS Viewer: a built-in solver!

The FITS Viewer Solver is used to plate-solve the image loaded in the FITS Viewer's tab. It only works with the internal StellarSolver. You get the RA and DEC coordinates for the center of the image, the image's scale, the angle of rotation, and the number of stars detected in the image. Its main use case is debugging plate-solving issues in Ekos, though the information displayed can be generally useful. The controls and displays are described below.


This adds a new tool inside the splitter on the FITS Viewer. It plate-solves the displayed image, and allows the user to experiment with a number of plate-solving parameters, and thus help debug plate-solving issues.

How to test it out?
  • Open the sliding panel on the left part way, click on Plate Solving, and resize the windows appropriately.
  • Experiment with the parameters available (Use Scale, Use Position, the scale and RA/DEC positions, choose a profile and/or edit it)
  • Click Solve; the image is solved and the solution is presented in the Scale, RA & DEC, and Angle boxes.
  • If you enable "Mark Stars" above the image window, you will also see the stars that were detected.

Quality of Life improvements

  • Make the "Set Coordinates Manually" dialog more intuitive.
  • Telescope names specified in optical trains are now saved in the FITS header (previously the mount name was saved).
  • New placeholders for ISO, binning and pure exposure time added.
  • Add a new non-default scheduler option to disable greedy scheduling.
  • Reduce latency between captures, especially when guiding / dithering.
  • Fix issue with differential slewing.
  • Separate business logic from UI in the Scheduler.
  • Fix bug in estimating job time; capture delays were misinterpreted.
  • Fixed guide start deviation not being saved properly in the .esq file.
  • Bugfix in one-pulse dither; dither pulses were going the wrong way.
  • Fix Scheduler hang when Focus does not signal Autofocus start failure.
  • Fix Focus Guide Settle bug.

Dear digiKam fans and users,

After five months of active maintenance and a long bug triage, the digiKam team is proud to present version 8.2.0 of its open source digital photo manager.

See below the list of most important features coming with this release.

  • LibRaw: updated to snapshot 2023-11-21.
  • Bundles: updated Exiv2 to the latest 0.28.1 release.
  • Bundles: updated ExifTool to the latest 12.70 release.
  • Bundles: Linux and macOS updated to the latest KF5 frameworks 5.110.

This version arrives after a long review of Bugzilla entries. Long-standing bugs present in older versions have been fixed, and we spent a lot of time contacting users to validate changes in pre-releases, confirming fixes before deploying the program in production.