
Tuesday, 2 January 2024

Cleaning up KDE’s metadata - the little things matter too

Lots of my KDE contributions revolve around plugin code and their metadata, meaning I have a good overview of where and how metadata is used. In this post, I will highlight some recent changes and show you how to utilize them in your Plasma Applets and KRunner plugins!

Applet and Containment metadata

Applets (or Widgets) are one of Plasma’s main selling points regarding customizability. Next to user-visible information like the name, description and categories, there is a need for some technical metadata properties. This includes X-Plasma-API-Minimum-Version for the compatible versions, the ID and the package structure, which should always be “Plasma/Applet”.

For integrating with the system tray, applets had to specify the X-Plasma-NotificationArea and X-Plasma-NotificationAreaCategory properties. The first one says that the applet may be shown in the system tray and the second one says which category it belongs in. But since we don’t want any applets without categories in there, the first value is redundant and may be omitted! Also, it was treated like a boolean value, even though only the strings "true" or "false" were expected. I stumbled upon this when correcting the types in metadata files.
I noticed this while preparing improvements to the JSON linting we have in KDE. I will resume working on that and might also blog about it :).

What most applets in KDE specify is X-KDE-MainScript, which determines the entry point of the applet. This is usually “ui/main.qml”, but in some cases the value differs. When working with applets, it is confusing to first have to look at the metadata in order to find the correct file. This key was removed in Plasma 6, and the file is always ui/main.qml. Since this was the default value for the property, you may even omit it for Plasma 5.
The same filename convention is also enforced for QML config modules (KCMs).

What all applets needed to specify was X-Plasma-API. This is typically set to "declarativeappletscript", but its value was de facto never used; only its presence was enforced. This was historically needed because in Plasma 4, applets could be written in other scripting languages. From Plasma 6 onward, you may omit this key.

In the Plasma repositories, this allowed me to clean up over 200 lines of JSON data.
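To make this concrete, here is a sketch of what a minimal applet metadata.json can look like in Plasma 6 once the redundant keys are dropped (the plugin ID, name and category are invented for illustration):

```json
{
    "KPlugin": {
        "Id": "org.example.myapplet",
        "Name": "My Applet",
        "Category": "System Information"
    },
    "KPackageStructure": "Plasma/Applet",
    "X-Plasma-API-Minimum-Version": "6.0",
    "X-Plasma-NotificationAreaCategory": "SystemServices"
}
```

X-Plasma-API, X-KDE-MainScript and X-Plasma-NotificationArea are all gone; the entry point is simply ui/main.qml.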

metadata.desktop files of Plasma 5 addons

Just in case you were wondering: We have migrated from metadata.desktop files to only metadata.json files in Plasma 6. This makes providing metadata more consistent and efficient. In case your projects still use the old format, you can run desktoptojson -i pathto/metadata.desktop and remove the file afterward.
See https://develop.kde.org/docs/plasma/widget/properties/#kpackagestructure for more detailed information. You can even do the conversion when targeting Plasma 5 users!

Default object path for KRunner DBus plugins

Another nice addition is that “/runner” will now be the default object path. This means you can omit this one key. Check out the template to get started: https://invent.kde.org/frameworks/krunner/-/tree/master/templates/runner6python. D-Bus runners that worked without deprecations in Plasma 5 will continue to do so in Plasma 6! For C++ plugins, I will write a porting-guide-like blog post soonish, because I have started porting my own plugins to work with Plasma 6.
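For illustration, the metadata of a D-Bus runner could then shrink to something like this (a hedged sketch; the service name is invented, and I am assuming the X-Plasma-DBusRunner-* key names used by the template linked above):

```ini
[Desktop Entry]
Name=Example Runner
Comment=An example D-Bus runner
Type=Service
X-Plasma-API=DBus
X-Plasma-DBusRunner-Service=org.example.runner
# X-Plasma-DBusRunner-Path=/runner is now the default and can be omitted
```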


Finally, I want to wish you all a happy and productive new year!

Saturday, 30 December 2023

This is a lighter month due to holidays (and also I’m trying not to burn out), but I tried to fit in a bit of KDE anyway. It’s all bugfixes, of course, because of the feature freeze!

Not mentioned is a bunch of really boring busywork like unbreaking the stable branches of Gear applications due to the CI format changing.

Tokodon

[Bugfix] Fixed a bunch of papercuts with the Android build, and the new nightlies should be appearing in the F-Droid repository soon! It’s mostly adding missing icons and making sure it looks good in qqc2-breeze-style (the style we use on Android and Plasma Mobile). [24.02]

[Bugfix] Fixed Akkoma and Pleroma tags not being detected correctly; they should open in Tokodon instead of your web browser again! [24.02]

Plasma

[Bugfix] KScreenLocker CI now runs supported tests, see the KillTest fixes and pamTest fix. Failing tests also make the pipeline visibly fail, as it should. (Unfortunately, the pipeline as of writing fails due to some unrelated regression?) [6.0]

[Bugfix] The lockscreen greeter now handles even the fallback theme failing, and displays the “please unlock using loginctl” message instead of a black screen. [6.0]

[Bugfix] Improves the QtQuickControls style selection mechanism to work around a possible regression in Qt6. This should stop applications from mysteriously not opening in the rare (but unsupported) cases where our official styles aren’t installed/loading. [6.0]

Kirigami

[Bugfix] Fixed a bunch of TextArea bugs that affected mobile form factors, such as Plasma Mobile and Android. This is mostly for Tokodon (because we abuse TextAreas a lot in scrolling content) but it can help other applications too! The selectByMouse property is now respected, and the cursor handles should show up less often. [6.0]

[Bugfix] Invisible MenuItems in qqc2-breeze-style are collapsed like in qqc2-desktop-style. Mobile applications should no longer have elongated menus with lots of blank space! [6.0]

[Bugfix] You can finally right-click with a touchpad in qqc2-desktop-style TextFields again! This bug has been driving me up a wall when testing our Qt6 stuff. [6.0]

[Feature] When the Kirigami theme plugin fails to load, the error message will soon be a bit more descriptive. This should make it easier for non-developers to figure out why Kirigami applications don’t look correct. [6.0]

Android

[Bugfix] Fixed KWeather not launching on Android because it needed QApplication. I didn’t know QtCharts is QWidgets-based! [24.02]

I also went around and fixed up a bunch of other mobile applications with Android contributions too small to mention. Applications like Qrca, Kongress, etc.

NeoChat

[Bugfix] Prevented the NeoChat notification daemon from sticking around forever, although that should rarely happen. [24.02]

Outside of KDE

Nagged for a new QtKeychain release due to a critical bug that would cause applications to never open KWallet5. Please also nag your distributions to package 0.14.2 soon! Anything using QtKeychain 0.14.1 or below won’t work in Plasma 6. This doesn’t affect people in the dev session, because QtKeychain should be built from git.

Helping the Gentoo KDE Team with packaging Plasma 6 and KDE Gear 6. I managed to update my desktop to Plasma 6 and submitted fixes to get it closer to working. I also added Arianna, PlasmaTube and MpvQt packages.

Wednesday, 27 December 2023

The ownCloud product Infinite Scale is going to be released in version five soon. The latest stable version is 4.0.5, and I am sure everybody has checked it out already and is blown away by its performance, elegance and ease of use.

No, not yet?

Ok, well, in that case, here comes the rescue: with the little script described here, it becomes really easy and quick to start Infinite Scale on your computer and check it out, without any Linux super admin powers whatsoever.

To use it, you just need to open a terminal on your machine and cd into a directory somewhere in your home where you can afford to host some bytes.

Without further preparation, you type the following command line (NOT as user root please):

curl -L https://owncloud.com/runocis.sh | /bin/bash

It automatically pulls the latest stable version of Infinite Scale from the official download server of ownCloud onto your computer. For that, it creates a configuration and a start script, and starts the server. The script detects the platform on which you’re running to download the right binary version. It also looks up the hostname and configures the installation for that name.

Once the server has started, Infinite Scale’s web client can be accessed by pointing a browser to the URL https://your-hostname:9200/. Since this is an installation for testing purposes, it does not have a proper certificate configured. That is why your browser complains about the cert, and you have to calm it. And indeed, that is one of the reasons why you’re not supposed to use this sneak peek in production or even expose it to the internet.

For the nerds: the script does not really do magic. It just curls the single Go binary of Infinite Scale down to the machine into a sandbox directory, chmods it to be executable, and creates a working config and a data dir. All of this happens with the privileges of the logged-in user; no sudo or root involved. You’re encouraged to double-check the install script using, for example, the command curl -L https://owncloud.com/runocis.sh | less - of course you should never trust anybody running scripts from the internet on your machine.
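In shell terms, the script boils down to roughly the following sketch (the sandbox layout and the stand-in binary are illustrative, not the real runocis.sh):

```shell
#!/bin/sh
# Rough sketch of what the installer does: a sandbox directory, the binary,
# a data dir, and a start script. No sudo or root involved.
set -eu
SANDBOX="./ocis-sandbox"
mkdir -p "$SANDBOX/data"                # data dir owned by the current user
# The real script curls the single Go binary for the detected platform here;
# we create a harmless stand-in instead:
printf '#!/bin/sh\necho "ocis server"\n' > "$SANDBOX/ocis"
chmod +x "$SANDBOX/ocis"                # make it executable
cat > "$SANDBOX/runocis.sh" <<'EOF'
#!/bin/sh
# start script left behind in the sandbox for later restarts
exec "$(dirname "$0")/ocis" server
EOF
chmod +x "$SANDBOX/runocis.sh"
"$SANDBOX/runocis.sh"                   # starts the (stand-in) server
```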

If the server is stopped by pressing Ctrl-C, it can later be started again with the script runocis.sh that was kindly left behind in the sandbox.

The installer was tested on these three platforms: 64-bit AMD/Intel CPU based Linux machines, 64-bit Raspberry Pi with Raspbian OS, and macOS. The flavour of Linux should not make a difference.

If you encounter a problem with the script or if you have suggestions for improvement, please find it in my this’n that section on GitHub. I am happy to receive issue reports or pull requests.

For further information and setups suitable for production please refer to the Infinite Scale documentation.

Tuesday, 26 December 2023

As someone suffering from a latent burnout thingy which has become more apparent in recent years, and as someone who is still struggling to develop strategies to alleviate its effects on health and general well-being, I wholeheartedly recommend everyone to watch this video and let those points sink in. Yes, even if you are not (yet) affected. The video is not all about burnout but about strategies for sustaining long-term sanity.

https://www.youtube.com/watch?v=qyz6sOVON68

Enjoy. :)

Thursday, 21 December 2023

chsh is a small tool that lets you change the default shell for your current user. In order to let any user change their own shell, which is set in /etc/passwd, it needs privileges and is generally setuid root.

I am of the opinion that setuid/setgid binaries are a UNIX legacy that should be deprecated. I will explain the security reasons behind that statement in a future post.

In this “UNIX legacy” series of posts, I am looking at classic setuid binaries and try to find better, safer alternatives for common use cases. In this post, we will look at alternatives to changing your login shell.

Should you change the default shell?

People usually change their default shell because they want to use a modern alternative to Bash (Zsh, fish, Oils, nushell, etc.).

Changing the default shell (especially to one that is not POSIX or Bash compatible) might have unintended consequences, as some scripts relying on Bash compatibility might not work anymore. There are lots of warnings about this, for example for the fish shell:

On Fedora Atomic Desktops (Silverblue, Kinoite, etc.), your preferred shell may not always be available, notably if you have to reset your overlays for an upgrade, which could lead to an unusable system:

So overall, it is a bad idea to change the default login shell for interactive users.

For non-interactive users or system users, the shell is usually set by the system administrator only and the user itself never needs to change it.

If you are using systemd-homed, then you can change your own shell via the homectl command without needing setuid binaries but for the same reasons as above, it is still not a good idea.

Graphical interface: Use a modern terminal emulator

If you want to use a shell other than the default one, you can use the functionality of your graphical terminal emulator to start it by default instead of Bash.

I recommend using the freshly released Prompt (sources) terminal if you are running on Fedora Silverblue or other GNOME related desktops. You can set your preferred shell in the Profiles section of the preferences. It also has great integration for toolbox/distrobox containers. We’re investigating making this the default in a future version of Fedora Silverblue (issue#520).

If you are running on Fedora Kinoite or other KDE related desktops, you should look at Konsole’s profile feature. You can create your own profiles and set the Command to /bin/zsh to use another shell. You can also assign shortcuts to profiles to open them directly in a new tab, or use /bin/toolbox enter fedora-toolbox-39 as Command to directly enter a toolbox container, for example.

This is obviously not an exhaustive list and other modern terminal emulators also let you specify which command to start.

If your terminal emulator does not allow you to do that, then you can use the alternative from the next section.

Or use a small snippet

If you want to change the default shell for a user on a server, then you can add the following code snippet at the beginning of the user’s ~/.bashrc (example for fish):

# Only trigger if:
# - 'fish' is not the parent process of this shell
# - We did not call: bash -c '...'
# - The fish binary exists and is executable
if [[ $(ps --no-header --pid=$PPID --format=comm) != "fish" && -z ${BASH_EXECUTION_STRING} && -x "/bin/fish" ]]; then
  shopt -q login_shell && LOGIN_OPTION='--login' || LOGIN_OPTION=''
  exec fish $LOGIN_OPTION
fi


Cutelyst, the Qt web framework, is now at v4.0.0, just a bit late for its 10th anniversary.

With 2.5k commits it has been steadily improving, and it is in production for many high-traffic applications. With this release we say goodbye to our old friend Qt 5. We also dropped uWSGI support, and clearsilver and Grantlee were removed as well. Many methods now take a QStringView, and the Cutelyst::Header class was heavily refactored to allow usage of QByteArrayView and to stop storing QStrings internally in a QHash; headers are now QByteArrays inside a vector.

Previously, all header names were uppercased and dashes were replaced with underscores. This was quite some work: when searching, the string first had to be converted to this format to be searchable. It had the advantage of allowing the use of QHash, and in templates you could write c.request.header.CONTENT_TYPE. It turns out neither case matters that much; speed is more important for the wider use cases.

With these changes Cutelyst managed to get 10 – 15% faster on TechEmpower benchmarks, which is great as we are still well positioned as a full stack framework there.

https://github.com/cutelyst/cutelyst/releases/tag/v4.0.0

Have fun, Merry Christmas and Happy New Year!

Wednesday, 20 December 2023

This is an update on the ongoing migration of jobs from Binary Factory to KDE's GitLab. Since the last blog a lot has happened.

A first update of Itinerary was submitted to Google Play directly from our GitLab.

Ben Cooksley has added a service for publishing our websites. Most websites are now built and published on our GitLab with only 5 websites remaining on Binary Factory.

Julius Künzel has added a service for signing macOS apps and DMGs. This allows us to build signed installers for macOS on GitLab.

The service for signing and publishing Flatpaks has gone live. Nightly Flatpaks built on our GitLab are now available at https://cdn.kde.org/flatpak/. For easy installation, builds created since yesterday include .flatpakref files and .flatpakrepo files.

Last, but not least, similar to the full CI/CD pipeline for Android we now also have a full CI/CD pipeline for Windows. For Qt 5 builds this pipeline consists of the following GitLab jobs:

  • windows_qt515 - Builds the project with MSVC and runs the automatic tests.
  • craft_windows_qt515_x86_64 - Builds the project with MSVC and creates various installation packages including (if enabled for the project) a *-sideload.appx file and a *.appxupload file.
  • sign_appx_qt515 - Signs the *-sideload.appx file with KDE's signing certificate. The signed app package can be downloaded and installed without using the Microsoft store.
  • microsoftstore_qt515 - Submits the *.appxupload package to the Microsoft store for subsequent publication. This job doesn't run automatically.
Notes:
  • The craft_windows_qt515_x86_64 job also creates .exe installers. Those installers are not yet signed on GitLab, i.e. Windows should warn you when you try to install them. For the time being, you can download signed .exe installers from Binary Factory.
  • There are also jobs for building with MinGW, but MinGW builds cannot be used for creating app packages for the Microsoft Store. (It's still possible to publish apps with MinGW installers in the Microsoft Store, but that's a different story.)
The workflow for publishing an update of an app in the Microsoft Store as I envision it is as follows:
  1. You download the signed sideload app package, install it on a Windows (virtual) machine (after uninstalling a previously installed version) and perform a quick test to ensure that the app isn't completely broken.
  2. Then you trigger the microsoftstore_qt515 job to submit the app to the Microsoft Store. This creates a new draft submission in the Microsoft Partner Center. The app is not published automatically. To actually publish the submission you have to log into the Microsoft Partner Center and commit the submission.

Enabling the Windows CD Pipeline for Your Project

If you want to start building Windows app packages (APPX) for your project then add the craft-windows-x86-64.yml template for Qt 5 or the craft-windows-x86-64-qt6.yml template for Qt 6 to the .gitlab-ci.yml of your project. Additionally, you have to add a .craft.ini file with the following content to the root of your project to enable the creation of the Windows app packages.
[BlueprintSettings]
kde/applications/myapp.packageAppx = True

kde/applications/myapp must match the path of your project's Craft blueprint.

When you have successfully built the first Windows app packages then add the craft-windows-appx-qt5.yml or the craft-windows-appx-qt6.yml template to your .gitlab-ci.yml to get the sign_appx_qt* job and the microsoftstore_qt* job.
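Assuming KDE’s usual GitLab template mechanism, including these templates could then look roughly like this in your .gitlab-ci.yml (a sketch; the project path and file names should be checked against the sysadmin/ci-utilities repository):

```yaml
include:
  - project: sysadmin/ci-utilities
    file:
      - /gitlab-templates/craft-windows-x86-64.yml
      - /gitlab-templates/craft-windows-appx-qt5.yml
```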

To enable signing, your project (more precisely, a branch of your project) needs to be cleared for using the signing service. This is done by adding your project to the project settings of the appxsigner. Similarly, to enable submission to the Microsoft Store, your project needs to be cleared by adding it to the project settings of the microsoftstorepublisher. If you have carefully curated metadata in the store entry of your app that shouldn’t be overwritten by data from your app’s AppStream data, then have a look at the keep setting for your project. I recommend using keep sparingly, if at all, because at least for text content you will deprive people using the store of all the translations added by our great translation teams to your app’s AppStream data.

Note that the first submission to the Microsoft Store has to be done manually.

Tuesday, 19 December 2023

All the Toolbx and Distrobox container images and the ones in my personal namespace on Quay.io are now signed using cosign.

How to set this up was not really well documented so this post is an attempt at that.

First we will look at how to set up a GitHub workflow using GitHub Actions to build multi-architecture container images with buildah and push them to a registry with podman. Then we will sign those images with cosign (sigstore) and detail what is needed to configure signature validation on the host. Finally, we will detail the remaining work needed to be able to do the entire process only with podman.

Full example ready to go

If you just want to get going, you can copy the content of my github.com/travier/cosign-test repo and start building and pushing your containers. I recommend keeping only the cosign.yaml workflow for now (see below for the details).

“Minimal” GitHub workflow to build containers with buildah / podman

You can find those actions at github.com/redhat-actions.

Here is an example workflow with the Containerfile in the example sub directory:

name: "Build container using buildah/podman"

env:
  NAME: "example"
  REGISTRY: "quay.io/example"

on:
  # Trigger for pull requests to the main branch, only for relevant files
  pull_request:
    branches:
      - main
    paths:
      - 'example/**'
      - '.github/workflows/cosign.yml'
  # Trigger for push/merges to main branch, only for relevant files
  push:
    branches:
      - main
    paths:
      - 'example/**'
      - '.github/workflows/cosign.yml'
  # Trigger every Monday morning
  schedule:
    - cron:  '0 0 * * MON'

permissions: read-all

# Prevent multiple workflow runs from racing to ensure that pushes are made
# sequentially for the main branch. Also cancel in progress workflow runs for
# pull requests only.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: ${{ github.event_name == 'pull_request' }}

jobs:
  build-push-image:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repo
        uses: actions/checkout@v4

      - name: Setup QEMU for multi-arch builds
        shell: bash
        run: |
          sudo apt install qemu-user-static

      - name: Build container image
        uses: redhat-actions/buildah-build@v2
        with:
          # Only select the architectures that matter to you here
          archs: amd64, arm64, ppc64le, s390x
          context: ${{ env.NAME }}
          image: ${{ env.NAME }}
          tags: latest
          containerfiles: ${{ env.NAME }}/Containerfile
          layers: false
          oci: true

      - name: Push to Container Registry
        uses: redhat-actions/push-to-registry@v2
        # The id is unused right now, will be used in the next steps
        id: push
        if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'
        with:
          username: ${{ secrets.BOT_USERNAME }}
          password: ${{ secrets.BOT_SECRET }}
          image: ${{ env.NAME }}
          registry: ${{ env.REGISTRY }}
          tags: latest

This should let you test changes to the image via builds in pull requests and publish the changes only once they are merged.

You will have to setup the BOT_USERNAME and BOT_SECRET secrets in the repository configuration to push to the registry of your choice.

If you prefer to use the GitHub internal registry then you can use:

env:
  REGISTRY: ghcr.io/${{ github.repository_owner }}

...
  username: ${{ github.actor }}
  password: ${{ secrets.GITHUB_TOKEN }}

You will also need to set the job permissions to be able to write GitHub Packages (container registry):

permissions:
  contents: read
  packages: write

See the Publishing Docker images GitHub Docs.

You should also configure the GitHub Actions settings as follow:

  • In the “Actions permissions” section, you can restrict allowed actions to: “Allow <username>, and select non-<username>, actions and reusable workflows”, with “Allow actions created by GitHub” selected and the following additional actions:
    redhat-actions/*,
    
  • In the “Workflow permissions” section, you can select the “Read repository contents and packages permissions” and select the “Allow GitHub Actions to create and approve pull requests”.

  • Make sure to add all the required secrets in the “Secrets and variables”, “Actions”, “Repository secrets” section.

Signing container images

We will use cosign to sign container images. With cosign, you get two main options to sign your containers:

  • Keyless signing: Sign containers with ephemeral keys by authenticating with an OIDC (OpenID Connect) protocol supported by Sigstore.
  • Self managed keys: Generate a “classic” long-lived key pair.

We will choose the “self managed keys” option here as it is easier to set up for verification on the host in podman. I will likely make another post once I figure out how to set up keyless signature verification in podman.

Generate a key pair with:

$ cosign generate-key-pair

Enter an empty password as we will store this key in plain text as a repository secret (COSIGN_PRIVATE_KEY).

Then you can add the steps for signing with cosign at the end of your workflow:

      # Include at the end of the workflow previously defined

      - name: Login to Container Registry
        uses: redhat-actions/podman-login@v1
        if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ secrets.BOT_USERNAME }}
          password: ${{ secrets.BOT_SECRET }}

      - uses: sigstore/cosign-installer@v3.3.0
        if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'

      - name: Sign container image
        if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'
        run: |
          cosign sign -y --recursive --key env://COSIGN_PRIVATE_KEY ${{ env.REGISTRY }}/${{ env.NAME }}@${{ steps.push.outputs.digest }}
        env:
          COSIGN_EXPERIMENTAL: false
          COSIGN_PRIVATE_KEY: ${{ secrets.COSIGN_PRIVATE_KEY }}

2024-01-12 update: Sign container images recursively for multi-arch images.

We need to explicitly login to the container registry to get an auth token that will be used by cosign to push the signature to the registry.

This step sometimes fails, likely due to a race condition that I have not been able to figure out yet. Retrying failed jobs usually works.

You should then update the GitHub Actions settings to allow the new actions as follows:

redhat-actions/*,
sigstore/cosign-installer@*,

Configuring podman on the host to verify image signatures

First, we copy the public key to a designated place in /etc:

$ sudo mkdir /etc/pki/containers
$ curl -O "https://.../cosign.pub"
$ sudo cp cosign.pub /etc/pki/containers/
$ sudo restorecon -RFv /etc/pki/containers

Then we set up the registry config to tell it to use sigstore signatures:

$ cat /etc/containers/registries.d/quay.io-example.yaml
docker:
  quay.io/example:
    use-sigstore-attachments: true
$ sudo restorecon -RFv /etc/containers/registries.d/quay.io-example.yaml

And then we update the container signature verification policy to:

  • Default to reject everything
  • Then for the docker transport:
    • Verify signatures for containers coming from our repository
    • Accept all other containers from other registries

If you do not plan on using containers from other registries, you can be even stricter here and only allow your own containers to be used.

/etc/containers/policy.json:

{
    "default": [
        {
            "type": "reject"
        }
    ],
    "transports": {
        "docker": {
            ...
            "quay.io/example": [
                {
                    "type": "sigstoreSigned",
                    "keyPath": "/etc/pki/containers/quay.io-example.pub",
                    "signedIdentity": {
                        "type": "matchRepository"
                    }
                }
            ],
            ...
            "": [
                {
                    "type": "insecureAcceptAnything"
                }
            ]
        },
        ...
    }
}

See the full man page for containers-policy.json(5).

You should now be good to go!

What about doing everything with podman?

Using this workflow, there is a (small) time window where the container images are pushed to the registry but not signed.

One option to avoid this problem would be to first push the container to a “temporary” tag, sign it, and then copy the signed container to the latest tag.

Another option is to use podman to push and sign the container image “at the same time”. However, podman still needs to push the image first and then sign it, so there is still a possibility that signing fails and you’re left with an unsigned image (this happened to me during testing).

Unfortunately for us, the version of podman available in the version of Ubuntu used for the GitHub runners (22.04) is too old to support signing containers. We thus need to use a newer podman from a container image to work around this.

Here is the same workflow, adapted to only use podman for signing:

name: "Build container using buildah, push and sign it using podman"

env:
  NAME: "example"
  REGISTRY: "quay.io/example"
  REGISTRY_DOMAIN: "quay.io"

on:
  pull_request:
    branches:
      - main
    paths:
      - 'example/**'
      - '.github/workflows/podman.yml'
  push:
    branches:
      - main
    paths:
      - 'example/**'
      - '.github/workflows/podman.yml'
  schedule:
    - cron:  '0 0 * * MON'

permissions: read-all

# Prevent multiple workflow runs from racing to ensure that pushes are made
# sequentially for the main branch. Also cancel in progress workflow runs for
# pull requests only.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: ${{ github.event_name == 'pull_request' }}

jobs:
  build-push-image:
    runs-on: ubuntu-latest
    container:
      image: quay.io/travier/podman-action
      options: --privileged -v /proc/:/host/proc/:ro
    steps:
      - name: Checkout repo
        uses: actions/checkout@v4

      - name: Setup QEMU for multi-arch builds
        shell: bash
        run: |
          for f in /usr/lib/binfmt.d/*; do cat $f | sudo tee /host/proc/sys/fs/binfmt_misc/register; done
          ls /host/proc/sys/fs/binfmt_misc

      - name: Build container image
        uses: redhat-actions/buildah-build@v2
        with:
          archs: amd64, arm64, ppc64le, s390x
          context: ${{ env.NAME }}
          image: ${{ env.NAME }}
          tags: latest
          containerfiles: ${{ env.NAME }}/Containerfile
          layers: false
          oci: true

      - name: Setup config to enable pushing Sigstore signatures
        if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'
        shell: bash
        run: |
          echo -e "docker:\n  ${{ env.REGISTRY_DOMAIN }}:\n    use-sigstore-attachments: true" \
            | sudo tee -a /etc/containers/registries.d/${{ env.REGISTRY_DOMAIN }}.yaml

      - name: Push to Container Registry
        # uses: redhat-actions/push-to-registry@v2
        uses: travier/push-to-registry@sigstore-signing
        if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'
        with:
          username: ${{ secrets.BOT_USERNAME }}
          password: ${{ secrets.BOT_SECRET }}
          image: ${{ env.NAME }}

This uses two additional workarounds for missing features:

  • There is no official container image that includes both podman and buildah right now, thus I made one: github.com/travier/podman-action
  • The redhat-actions/push-to-registry Action does not support signing yet (issue#89). I’ve implemented support for self managed key signing in pull#90. I’ve not looked at keyless signing yet.

You will also have to allow running my actions in the repository settings. In the “Actions permissions” section, you should use the following actions:

redhat-actions/*,
travier/push-to-registry@*,

Conclusion

The next steps are to figure out all the missing bits for keyless signing and replicate this entire process in GitLab CI.


Brise theme is yet another fork of Breeze. The name comes from “Brise” being both the French and the German translation of “Breeze”.

As some people know, I’m contributing quite a lot to the Breeze style for the Plasma 6 release and I don’t intend to stop doing that. Both git repositories share the same git history and I didn’t massively rename all the C++ classes from BreezeStyle to BriseStyle to make it as easy as possible to backport commits from one repository to the other. There are also no plans to make this the new default style for Plasma.

My goal with this Qt style is to have a style that is not a big departure from Breeze as you know it, but does contain some small cosmetic changes. It serves as a place where I can experiment with new ideas and, if they prove popular, move them to Breeze.

Here is a breakdown of all the changes I made so far.

  • I made Brise co-installable with Breeze, so that users can have both installed simultaneously. I kept the changes minimal to avoid merge conflicts while doing so.

  • I increased the border radius of all the elements from 3 pixels to 5 pixels. This value is configurable between small (3 pixels), medium (5 pixels) and large (7 pixels). A merge request was opened in Breeze and might make it into Plasma 6.1. The only difference is that in Breeze the default will likely remain 3 pixels for the time being.

Cute buttons and frames with 5 pixels border radius

  • Add a separator between the search field and the title in the standard KDE config windows, which serves as an extension of the separator between the list of setting categories and the settings page. This is mostly to match System Settings and other Kirigami applications. There is a pending merge request for this in Breeze as well.
  • A new tab style that removes the blue line from active tabs and introduces other small changes. Non-editable tabs now also fill the entire available horizontal space. I’m not completely happy with the look yet, so no merge request has been submitted to Breeze.

Separator in the toolbar and the new tabs

  • Remove outlines from menu and combobox items. My goal is to go in the same direction as KirigamiAddons.RoundedItemDelegate.

Menu without outlines

  • Ensure that all the controls have the same height. Currently a small discrepancy in height is noticeable when they are in the same row. The patch is still a bit hacky and needs wider testing on a large range of apps to ensure there are no regressions, but it is an improvement I will definitely submit upstream once I feel it’s ready.

In these two screenshots, every control is 35 pixels tall.

Finally, here are Kate’s and KMail’s settings with Breeze and with Brise.

Monday, 18 December 2023

In this post, I will detail how to replace sudo (a setuid binary) by using SSH over a local UNIX socket.

I am of the opinion that setuid/setgid binaries are a UNIX legacy that should be deprecated. I will explain the security reasons behind that statement in a future post.

This is related to the work of the Confined Users SIG in Fedora.

Why bother?

The main benefit of this approach is that it enables root access to the host from any unprivileged toolbox / distrobox container. This is particularly useful on Fedora Atomic desktops (Silverblue, Kinoite, Sericea, Onyx) or Universal Blue (Bluefin, Bazzite) for example.

As a side effect of this setup, we also get the following security advantages:

  • No longer rely on sudo as a setuid binary for privileged operations.
  • Access control via a physical hardware token (here a Yubikey) for each privileged operation.

Setting up the server

Create the following systemd units:

/etc/systemd/system/sshd-unix.socket:

[Unit]
Description=OpenSSH Server Unix Socket
Documentation=man:sshd(8) man:sshd_config(5)

[Socket]
ListenStream=/run/sshd.sock
Accept=yes

[Install]
WantedBy=sockets.target

/etc/systemd/system/sshd-unix@.service:

[Unit]
Description=OpenSSH per-connection server daemon (Unix socket)
Documentation=man:sshd(8) man:sshd_config(5)
Wants=sshd-keygen.target
After=sshd-keygen.target

[Service]
ExecStart=-/usr/sbin/sshd -i -f /etc/ssh/sshd_config_unix
StandardInput=socket

Create a dedicated configuration file /etc/ssh/sshd_config_unix:

# Deny all non key based authentication methods
PermitRootLogin prohibit-password
PasswordAuthentication no
PermitEmptyPasswords no
GSSAPIAuthentication no

# Only allow access for specific users
AllowUsers root tim

# The default is to check both .ssh/authorized_keys and .ssh/authorized_keys2
# but this is overridden so installations will only check .ssh/authorized_keys
AuthorizedKeysFile .ssh/authorized_keys

# override default of no subsystems
Subsystem sftp /usr/libexec/openssh/sftp-server
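Before enabling the socket, the new configuration file can be sanity-checked by running sshd in test mode (`-t` parses the configuration and exits; same path assumptions as above):

```shell
$ sudo sshd -t -f /etc/ssh/sshd_config_unix
```

No output and a zero exit status mean the configuration parses cleanly.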

Enable and start the new socket unit:

$ sudo systemctl daemon-reload
$ sudo systemctl enable --now sshd-unix.socket

Add your SSH Key to /root/.ssh/authorized_keys.

Setting up the client

Install socat and add the following snippet to ~/.ssh/config:

Host host.local
    User root
    # We use `run/host/run` instead of `/run` to transparently work in and out of containers
    ProxyCommand socat - UNIX-CLIENT:/run/host/run/sshd.sock
    # Path to your SSH key. See: https://tim.siosm.fr/blog/2023/01/13/openssh-key-management/
    IdentityFile ~/.ssh/keys/localroot
    # Force TTY allocation to always get an interactive shell
    RequestTTY yes
    # Minimize log output
    LogLevel QUIET

Test your setup:

$ ssh host.local
[root@phoenix ~]#

Shell alias

Let’s create a sudohost shell “alias” (function) that you can add to your Bash or ZSH config to make using this command easier:

# Get an interactive root shell or run a command as root on the host
sudohost() {
    if [[ ${#} -eq 0 ]]; then
        cmd="$(printf "exec \"%s\" --login" "${SHELL}")"
        ssh host.local "${cmd}"
    else
        cmd="$(printf "cd \"%s\"; exec %s" "${PWD}" "$*")"
        ssh host.local "${cmd}"
    fi
}
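To see exactly what sudohost sends to the host, the command construction can be exercised locally with a small helper (a sketch; build_cmd is a hypothetical function mirroring the logic above, with the ssh call left out):

```shell
# build_cmd mirrors the command string that sudohost passes to ssh.
build_cmd() {
    if [ "${#}" -eq 0 ]; then
        # No arguments: request an interactive login shell on the host.
        printf 'exec "%s" --login' "${SHELL}"
    else
        # With arguments: re-enter the current directory, then run the command.
        printf 'cd "%s"; exec %s' "${PWD}" "$*"
    fi
}

build_cmd ls -l
# → cd "<current directory>"; exec ls -l
```

This makes it visible why the current working directory survives the hop to the host: it is baked into the remote command string.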

2024-01-12 update: Fix quoting and array expansion (thanks to o11c).

Test the alias:

$ sudohost id
uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
$ sudohost pwd
/var/home/tim
$ sudohost ls
Desktop Downloads ...

We’ll keep a distinct alias for now as we’ll still have a need for the “real” sudo in our toolbox containers.

Security?

As-is, this setup is basically a free local root for anything running under your current user that has access to your SSH private key. However, this is likely already the case on most developers’ workstations: if you are part of the wheel, sudo or docker groups, any code running under your user can edit your shell config to set a backdoored sudo alias, or run arbitrary privileged containers via Docker. sudo itself is not a security boundary as commonly configured by default.

To truly increase our security posture, we would instead need to remove sudo (and all other setuid binaries) and run our session under a fully unprivileged, confined user, but that’s for a future post.

Setting up U2F authentication with an sk-based SSH key-pair

To make it more obvious when commands are run as root, we can set up SSH authentication using U2F, with a Yubikey as an example. While this does not, strictly speaking, increase the security of this setup by itself, it makes it harder to run commands without you being at least somewhat aware of it.

First, we need to figure out which algorithms are supported by our Yubikey, based on its firmware version:

$ lsusb -v 2>/dev/null | grep -A2 Yubico | grep "bcdDevice" | awk '{print $2}'

If the value is 5.2.3 or higher, then we can use ed25519-sk, otherwise we’ll have to use ecdsa-sk to generate the SSH key-pair:

$ ssh-keygen -t ed25519-sk
# or
$ ssh-keygen -t ecdsa-sk
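The version check and the key-type choice can be combined into a small helper (a sketch; choose_key_type is a hypothetical function, and the comparison relies on GNU sort’s version ordering):

```shell
# Pick the SSH key type based on the YubiKey firmware version:
# ed25519-sk needs firmware 5.2.3 or later, otherwise fall back to ecdsa-sk.
choose_key_type() {
    fw="$1"
    # sort -V -C succeeds when the two lines are already in version order,
    # i.e. when 5.2.3 <= fw.
    if printf '5.2.3\n%s\n' "$fw" | sort -V -C; then
        echo ed25519-sk
    else
        echo ecdsa-sk
    fi
}

choose_key_type 5.4.3   # → ed25519-sk
choose_key_type 5.1.2   # → ecdsa-sk

# Example, fed from the lsusb pipeline above:
# ssh-keygen -t "$(choose_key_type "$fw")"
```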

Add the new sk-based SSH public key to /root/.ssh/authorized_keys.

Update the server configuration to only accept sk-based SSH key-pairs:

/etc/ssh/sshd_config_unix:

# Only allow sk-based SSH key-pairs authentication methods
PubkeyAcceptedKeyTypes sk-ecdsa-sha2-nistp256@openssh.com,sk-ssh-ed25519@openssh.com

...

Restricting access to a subset of users

You can also further restrict access to the UNIX socket by configuring classic UNIX user/group permissions:

/etc/systemd/system/sshd-unix.socket:

...

[Socket]
...
SocketUser=tim
SocketGroup=tim
SocketMode=0660
...

Then reload systemd’s configuration and restart the socket unit.

Next steps: Disabling sudo

Now that we have a working alias to run privileged commands, we can disable sudo access for our user.

Important backup / pre-requisite step

Make sure that you have a backup and are able to boot from a LiveISO in case something goes wrong.

Set a strong password for the root account. Make sure that you can locally log into the system via a TTY console.

If you have the classic sshd server enabled and listening on the network, make sure to disable remote root login and password authentication.

Removing yourself from the wheel / sudo groups

Open a terminal running as root (i.e. don’t use sudo for those commands) and remove your user from the wheel or sudo groups:

$ gpasswd -d tim wheel

You can also update the sudo config to remove access for users that are part of the wheel group:

# Comment / delete this line
%wheel  ALL=(ALL)       ALL

Removing the setuid binaries

To fully benefit from the security advantage of this setup, we need to remove the setuid binaries (sudo and su).

If you can, uninstall sudo and su from your system. This is usually not possible due to package dependencies (su is part of util-linux on Fedora).

Another option is to remove the setuid bit from the sudo and su binaries:

$ chmod u-s $(which sudo)
$ chmod u-s $(which su)

You will have to re-run those commands after each update on classic systems.
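A quick way to verify that the bits stayed off after an update is a small check (a sketch; list_setuid is a hypothetical helper that prints any given path still carrying the setuid bit):

```shell
# Print every given path that still has the setuid bit set.
# No output means the binaries are clean.
list_setuid() {
    for f in "$@"; do
        # test -u checks the setuid mode bit on a file.
        [ -u "$f" ] && printf '%s\n' "$f"
    done
    return 0
}

# Example: list_setuid "$(which sudo)" "$(which su)"
```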

Setting this up for Fedora Atomic desktops is a little bit different as /usr is read only. This will be the subject of an upcoming blog post.

Conclusion

Like most of the time with security, this is not a silver bullet solution that will make your system “more secure” (TM). I have been working on this setup as part of my investigation to reduce our reliance on setuid binaries and trying to figure out alternatives for common use cases.

Let me know if you found this interesting as that will likely motivate me to write the next part!

References