
Thursday, 21 December 2023

chsh is a small tool that lets you change the default shell for your current user. In order to let any user change their own shell, which is set in /etc/passwd, it needs privileges and is generally setuid root.

I am of the opinion that setuid/setgid binaries are a UNIX legacy that should be deprecated. I will explain the security reasons behind that statement in a future post.

In this “UNIX legacy” series of posts, I am looking at classic setuid binaries and trying to find better, safer alternatives for common use cases. In this post, we will look at alternatives to changing your login shell.

Should you change the default shell?

People usually change their default shell because they want to use a modern alternative to Bash (Zsh, fish, Oils, nushell, etc.).

Changing the default shell (especially to one that is not POSIX or Bash compatible) might have unintended consequences, as some scripts relying on Bash compatibility might no longer work. There are lots of warnings about this, for example in the fish shell documentation.

On Fedora Atomic Desktops (Silverblue, Kinoite, etc.), your preferred shell may not always be available, notably if you have to reset your overlays for an upgrade, which could leave you with an unusable system.

So overall, it is a bad idea to change the default login shell for interactive users.

For non-interactive users or system users, the shell is usually set by the system administrator only, and the users themselves never need to change it.

If you are using systemd-homed, then you can change your own shell via the homectl command without needing setuid binaries, but for the same reasons as above it is still not a good idea.
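For reference, a minimal sketch of what that looks like with homectl (the user name and shell path are illustrative):

$ homectl update alice --shell=/usr/bin/fish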

Graphical interface: Use a modern terminal emulator

If you want to use a shell other than the default one, you can configure your graphical terminal emulator to start it by default instead of Bash.

I recommend using the freshly released Prompt (sources) terminal if you are running on Fedora Silverblue or other GNOME related desktops. You can set your preferred shell in the Profiles section of the preferences. It also has great integration for toolbox/distrobox containers. We’re investigating making this the default in a future version of Fedora Silverblue (issue#520).

If you are running on Fedora Kinoite or other KDE related desktops, you should look at Konsole’s profiles feature. You can create your own profiles and set the Command to /bin/zsh to use another shell. You can also assign shortcuts to profiles to open them directly in a new tab, or set the Command to /bin/toolbox enter fedora-toolbox-39 to directly enter a toolbox container, for example.

This is obviously not an exhaustive list and other modern terminal emulators also let you specify which command to start.

If your terminal emulator does not allow you to do that, then you can use the alternative from the next section.

Or use a small snippet

If you want to change the default shell for a user on a server, then you can add the following code snippet at the beginning of the user’s ~/.bashrc (example for fish):

# Only trigger if:
# - 'fish' is not the parent process of this shell
# - We did not call: bash -c '...'
# - The fish binary exists and is executable
if [[ $(ps --no-header --pid=$PPID --format=comm) != "fish" && -z ${BASH_EXECUTION_STRING} && -x "/bin/fish" ]]; then
  shopt -q login_shell && LOGIN_OPTION='--login' || LOGIN_OPTION=''
  exec fish $LOGIN_OPTION
fi


Cutelyst, the Qt web framework, is now at v4.0.0, just a bit late for its 10th anniversary.

With 2.5k commits it has been steadily improving, and it is in production for many high-traffic applications. With this release we say goodbye to our old friend Qt 5, and we also dropped uWSGI support; Clearsilver and Grantlee were removed as well. Many methods now take a QStringView, and the Cutelyst::Header class was heavily refactored to allow usage of QByteArrayView; it no longer stores QStrings internally in a QHash, but QByteArrays inside a vector.

Before, all headers were uppercased and dashes were replaced with underscores, which was quite some work: when searching, the string had to be converted to this format to be searchable. This had the advantage of allowing the use of QHash, and in templates you could write c.request.header.CONTENT_TYPE. It turns out neither is that important; speed matters more for the wider use cases.

With these changes Cutelyst managed to get 10 – 15% faster on TechEmpower benchmarks, which is great as we are still well positioned as a full stack framework there.

https://github.com/cutelyst/cutelyst/releases/tag/v4.0.0

Have fun, Merry Christmas and Happy New Year!

Wednesday, 20 December 2023

This is an update on the ongoing migration of jobs from Binary Factory to KDE's GitLab. Since the last blog post, a lot has happened.

A first update of Itinerary was submitted to Google Play directly from our GitLab.

Ben Cooksley has added a service for publishing our websites. Most websites are now built and published on our GitLab with only 5 websites remaining on Binary Factory.

Julius Künzel has added a service for signing macOS apps and DMGs. This allows us to build signed installers for macOS on GitLab.

The service for signing and publishing Flatpaks has gone live. Nightly Flatpaks built on our GitLab are now available at https://cdn.kde.org/flatpak/. For easy installation, builds created since yesterday include .flatpakref and .flatpakrepo files.

Last, but not least, similar to the full CI/CD pipeline for Android we now also have a full CI/CD pipeline for Windows. For Qt 5 builds this pipeline consists of the following GitLab jobs:

  • windows_qt515 - Builds the project with MSVC and runs the automatic tests.
  • craft_windows_qt515_x86_64 - Builds the project with MSVC and creates various installation packages including (if enabled for the project) a *-sideload.appx file and a *.appxupload file.
  • sign_appx_qt515 - Signs the *-sideload.appx file with KDE's signing certificate. The signed app package can be downloaded and installed without using the Microsoft store.
  • microsoftstore_qt515 - Submits the *.appxupload package to the Microsoft store for subsequent publication. This job doesn't run automatically.
Notes:
  • The craft_windows_qt515_x86_64 job also creates .exe installers. Those installers are not yet signed on GitLab, i.e. Windows should warn you when you try to install them. For the time being, you can download signed .exe installers from Binary Factory.
  • There are also jobs for building with MinGW, but MinGW builds cannot be used for creating app packages for the Microsoft Store. (It's still possible to publish apps with MinGW installers in the Microsoft Store, but that's a different story.)
The workflow for publishing an update of an app in the Microsoft Store as I envision it is as follows:
  1. You download the signed sideload app package, install it on a Windows (virtual) machine (after uninstalling a previously installed version) and perform a quick test to ensure that the app isn't completely broken.
  2. Then you trigger the microsoftstore_qt515 job to submit the app to the Microsoft Store. This creates a new draft submission in the Microsoft Partner Center. The app is not published automatically. To actually publish the submission you have to log into the Microsoft Partner Center and commit the submission.

Enabling the Windows CD Pipeline for Your Project

If you want to start building Windows app packages (APPX) for your project then add the craft-windows-x86-64.yml template for Qt 5 or the craft-windows-x86-64-qt6.yml template for Qt 6 to the .gitlab-ci.yml of your project. Additionally, you have to add a .craft.ini file with the following content to the root of your project to enable the creation of the Windows app packages.
[BlueprintSettings]
kde/applications/myapp.packageAppx = True

kde/applications/myapp must match the path of your project's Craft blueprint.

When you have successfully built the first Windows app packages then add the craft-windows-appx-qt5.yml or the craft-windows-appx-qt6.yml template to your .gitlab-ci.yml to get the sign_appx_qt* job and the microsoftstore_qt* job.
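As a rough sketch, the include section of your .gitlab-ci.yml could then look like this, assuming the craft templates live under /gitlab-templates/ in KDE's sysadmin/ci-utilities repository like the other CI templates (adjust the paths to the actual template locations):

include:
  - project: sysadmin/ci-utilities
    file:
      # Builds the Windows app packages with Craft
      - /gitlab-templates/craft-windows-x86-64.yml
      # Adds the sign_appx_qt5 and microsoftstore_qt5 jobs
      - /gitlab-templates/craft-windows-appx-qt5.yml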

To enable signing, your project (more precisely, a branch of your project) needs to be cleared for using the signing service. This is done by adding your project to the project settings of the appxsigner. Similarly, to enable submission to the Microsoft Store, your project needs to be cleared by adding it to the project settings of the microsoftstorepublisher. If you have carefully curated metadata in the store entry of your app that shouldn't be overwritten by data from your app's AppStream data, then have a look at the keep setting for your project. I recommend using keep sparingly, if at all, because at least for text content you will deprive people using the store of all the translations added by our great translation teams to your app's AppStream data.

Note that the first submission to the Microsoft Store has to be done manually.

Tuesday, 19 December 2023

All the Toolbx and Distrobox container images and the ones in my personal namespace on Quay.io are now signed using cosign.

How to set this up was not really well documented, so this post is an attempt at fixing that.

First, we will look at how to set up a GitHub workflow using GitHub Actions to build multi-architecture container images with buildah and push them to a registry with podman. Then we will sign those images with cosign (sigstore) and detail what is needed to configure signature validation on the host. Finally, we will detail the remaining work needed to be able to do the entire process with podman only.

Full example ready to go

If you just want to get going, you can copy the content of my github.com/travier/cosign-test repo and start building and pushing your containers. I recommend keeping only the cosign.yaml workflow for now (see below for the details).

“Minimal” GitHub workflow to build containers with buildah / podman

The example workflow relies on the Red Hat container actions, which you can find at github.com/redhat-actions.

Here is an example workflow with the Containerfile in the example sub directory:

name: "Build container using buildah/podman"

env:
  NAME: "example"
  REGISTRY: "quay.io/example"

on:
  # Trigger for pull requests to the main branch, only for relevant files
  pull_request:
    branches:
      - main
    paths:
      - 'example/**'
      - '.github/workflows/cosign.yml'
  # Trigger for push/merges to main branch, only for relevant files
  push:
    branches:
      - main
    paths:
      - 'example/**'
      - '.github/workflows/cosign.yml'
  # Trigger every Monday morning
  schedule:
    - cron:  '0 0 * * MON'

permissions: read-all

# Prevent multiple workflow runs from racing to ensure that pushes are made
# sequentially for the main branch. Also cancel in progress workflow runs for
# pull requests only.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: ${{ github.event_name == 'pull_request' }}

jobs:
  build-push-image:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repo
        uses: actions/checkout@v4

      - name: Setup QEMU for multi-arch builds
        shell: bash
        run: |
          sudo apt install qemu-user-static

      - name: Build container image
        uses: redhat-actions/buildah-build@v2
        with:
          # Only select the architectures that matter to you here
          archs: amd64, arm64, ppc64le, s390x
          context: ${{ env.NAME }}
          image: ${{ env.NAME }}
          tags: latest
          containerfiles: ${{ env.NAME }}/Containerfile
          layers: false
          oci: true

      - name: Push to Container Registry
        uses: redhat-actions/push-to-registry@v2
        # The id is unused right now, will be used in the next steps
        id: push
        if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'
        with:
          username: ${{ secrets.BOT_USERNAME }}
          password: ${{ secrets.BOT_SECRET }}
          image: ${{ env.NAME }}
          registry: ${{ env.REGISTRY }}
          tags: latest

This should let you test changes to the image via builds in pull requests and publish the changes only once they are merged.

You will have to setup the BOT_USERNAME and BOT_SECRET secrets in the repository configuration to push to the registry of your choice.

If you prefer to use the GitHub internal registry then you can use:

env:
  REGISTRY: ghcr.io/${{ github.repository_owner }}

...
  username: ${{ github.actor }}
  password: ${{ secrets.GITHUB_TOKEN }}

You will also need to set the job permissions to be able to write GitHub Packages (container registry):

permissions:
  contents: read
  packages: write

See the Publishing Docker images GitHub Docs.

You should also configure the GitHub Actions settings as follows:

  • In the “Actions permissions” section, you can restrict allowed actions to “Allow <username>, and select non-<username>, actions and reusable workflows”, with “Allow actions created by GitHub” selected and the following additional actions:
    redhat-actions/*,
    
  • In the “Workflow permissions” section, you can select “Read repository contents and packages permissions” and “Allow GitHub Actions to create and approve pull requests”.

  • Make sure to add all the required secrets in the “Secrets and variables”, “Actions”, “Repository secrets” section.

Signing container images

We will use cosign to sign container images. With cosign, you get two main options to sign your containers:

  • Keyless signing: Sign containers with ephemeral keys by authenticating with an OIDC (OpenID Connect) protocol supported by Sigstore.
  • Self managed keys: Generate a “classic” long-lived key pair.

We will choose the “self managed keys” option here as it is easier to set up for verification on the host with podman. I will likely make another post once I figure out how to set up keyless signature verification in podman.

Generate a key pair with:

$ cosign generate-key-pair

Enter an empty password as we will store this key in plain text as a repository secret (COSIGN_PRIVATE_KEY).
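You can add the secret in the repository settings on GitHub, or, as a sketch, with the gh CLI from inside a checkout of the repository (cosign.key is the private key file generated above):

$ gh secret set COSIGN_PRIVATE_KEY < cosign.key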

Then you can add the steps for signing with cosign at the end of your workflow:

      # Include at the end of the workflow previously defined

      - name: Login to Container Registry
        uses: redhat-actions/podman-login@v1
        if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ secrets.BOT_USERNAME }}
          password: ${{ secrets.BOT_SECRET }}

      - uses: sigstore/cosign-installer@v3.3.0
        if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'

      - name: Sign container image
        if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'
        run: |
          cosign sign -y --recursive --key env://COSIGN_PRIVATE_KEY ${{ env.REGISTRY }}/${{ env.NAME }}@${{ steps.push.outputs.digest }}
        env:
          COSIGN_EXPERIMENTAL: false
          COSIGN_PRIVATE_KEY: ${{ secrets.COSIGN_PRIVATE_KEY }}

2024-01-12 update: Sign container images recursively for multi-arch images.

We need to explicitly login to the container registry to get an auth token that will be used by cosign to push the signature to the registry.

This step sometimes fails, likely due to a race condition that I have not yet been able to figure out. Retrying failed jobs usually works.

You should then update the GitHub Actions settings to allow the new actions as follows:

redhat-actions/*,
sigstore/cosign-installer@*,

Configuring podman on the host to verify image signatures

First, we copy the public key to a designated place in /etc:

$ sudo mkdir /etc/pki/containers
$ curl -O "https://.../cosign.pub"
$ sudo cp cosign.pub /etc/pki/containers/quay.io-example.pub
$ sudo restorecon -RFv /etc/pki/containers

Then we set up the registry config to tell podman to use sigstore signatures:

$ cat /etc/containers/registries.d/quay.io-example.yaml
docker:
  quay.io/example:
    use-sigstore-attachments: true
$ sudo restorecon -RFv /etc/containers/registries.d/quay.io-example.yaml

And then we update the container signature verification policy to:

  • Default to reject everything
  • Then for the docker transport:
    • Verify signatures for containers coming from our repository
    • Accept all other containers from other registries

If you do not plan on using containers from other registries, you can even be stricter here and only allow your containers to be used.

/etc/containers/policy.json:

{
    "default": [
        {
            "type": "reject"
        }
    ],
    "transports": {
        "docker": {
            ...
            "quay.io/example": [
                {
                    "type": "sigstoreSigned",
                    "keyPath": "/etc/pki/containers/quay.io-example.pub",
                    "signedIdentity": {
                        "type": "matchRepository"
                    }
                }
            ],
            ...
            "": [
                {
                    "type": "insecureAcceptAnything"
                }
            ]
        },
        ...
    }
}

See the full man page for containers-policy.json(5).

You should now be good to go!
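To quickly check that everything is wired up correctly, you can verify the signature manually and then pull the image through the new policy (the image name is illustrative):

$ cosign verify --key /etc/pki/containers/quay.io-example.pub quay.io/example/example:latest
$ podman pull quay.io/example/example:latest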

What about doing everything with podman?

Using this workflow, there is a (small) time window where the container images are pushed to the registry but not signed.

One option to avoid this problem would be to push the container to a “temporary” tag first, sign it, and then copy the signed container to the latest tag.

Another option is to use podman to push and sign the container image “at the same time”. However, podman still needs to push the image first and then sign it, so there is still a possibility that signing fails and you’re left with an unsigned image (this happened to me during testing).

Unfortunately for us, the version of podman available in the version of Ubuntu used for the GitHub Runners (22.04) is too old to support signing containers. We thus need to use a newer podman from a container image to work around this.

Here is the same workflow, adapted to only use podman for signing:

name: "Build container using buildah, push and sign it using podman"

env:
  NAME: "example"
  REGISTRY: "quay.io/example"
  REGISTRY_DOMAIN: "quay.io"

on:
  pull_request:
    branches:
      - main
    paths:
      - 'example/**'
      - '.github/workflows/podman.yml'
  push:
    branches:
      - main
    paths:
      - 'example/**'
      - '.github/workflows/podman.yml'
  schedule:
    - cron:  '0 0 * * MON'

permissions: read-all

# Prevent multiple workflow runs from racing to ensure that pushes are made
# sequentially for the main branch. Also cancel in progress workflow runs for
# pull requests only.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: ${{ github.event_name == 'pull_request' }}

jobs:
  build-push-image:
    runs-on: ubuntu-latest
    container:
      image: quay.io/travier/podman-action
      options: --privileged -v /proc/:/host/proc/:ro
    steps:
      - name: Checkout repo
        uses: actions/checkout@v4

      - name: Setup QEMU for multi-arch builds
        shell: bash
        run: |
          for f in /usr/lib/binfmt.d/*; do cat $f | sudo tee /host/proc/sys/fs/binfmt_misc/register; done
          ls /host/proc/sys/fs/binfmt_misc

      - name: Build container image
        uses: redhat-actions/buildah-build@v2
        with:
          archs: amd64, arm64, ppc64le, s390x
          context: ${{ env.NAME }}
          image: ${{ env.NAME }}
          tags: latest
          containerfiles: ${{ env.NAME }}/Containerfile
          layers: false
          oci: true

      - name: Setup config to enable pushing Sigstore signatures
        if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'
        shell: bash
        run: |
          echo -e "docker:\n  ${{ env.REGISTRY_DOMAIN }}:\n    use-sigstore-attachments: true" \
            | sudo tee -a /etc/containers/registries.d/${{ env.REGISTRY_DOMAIN }}.yaml

      - name: Push to Container Registry
        # uses: redhat-actions/push-to-registry@v2
        uses: travier/push-to-registry@sigstore-signing
        if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'
        with:
          username: ${{ secrets.BOT_USERNAME }}
          password: ${{ secrets.BOT_SECRET }}
          image: ${{ env.NAME }}

This uses two additional workarounds for missing features:

  • There is no official container image that includes both podman and buildah right now, thus I made one: github.com/travier/podman-action
  • The redhat-actions/push-to-registry Action does not support signing yet (issue#89). I’ve implemented support for self managed key signing in pull#90. I’ve not looked at keyless signing yet.

You will also have to allow running my actions in the repository settings. In the “Actions permissions” section, you should use the following actions:

redhat-actions/*,
travier/push-to-registry@*,

Conclusion

The next steps are to figure out all the missing bits for keyless signing and replicate this entire process in GitLab CI.


The Brise theme is yet another fork of Breeze. The name comes from the French and German translations of Breeze, which are both “Brise”.

As some people know, I’m contributing quite a lot to the Breeze style for the Plasma 6 release and I don’t intend to stop doing that. Both git repositories share the same git history, and I didn’t massively rename all the C++ classes from BreezeStyle to BriseStyle, to make it as easy as possible to backport commits from one repository to the other. There are also no plans to make this the new default style for Plasma.

My goal with this Qt style is to have a style that is not a big departure from Breeze as you know it, but does contain some small cosmetic changes. It serves as a place where I can experiment with new ideas and, if they turn out to be popular, move them to Breeze.

Here is a breakdown of all the changes I made so far.

  • I made Brise coinstallable with Breeze, so that users can have both installed simultaneously. I minified the changes to avoid merge conflicts while doing so.

  • I increased the border radius of all the elements from 3 pixels to 5 pixels. This value is configurable between small (3 pixels), medium (5 pixels) and large (7 pixels). A merge request was opened in Breeze and might make it into Plasma 6.1. The only difference is that in Breeze the default will likely stay at 3 pixels for the time being.

Cute buttons and frames with 5 pixels border radius

  • Add a separator between the search field and the title in the standard KDE config windows, which serves as an extension of the separator between the list of settings categories and the settings page. This is mostly to be consistent with System Settings and other Kirigami applications. There is a pending merge request for this in Breeze as well.
  • A new tab style that removes the blue line from the active tab and introduces other small changes. Non-editable tabs now also fill the entire available horizontal space. I’m not completely happy with the look yet, so no merge request has been submitted to Breeze.

Separator in the toolbar and the new tabs

  • Remove outlines from menu and combobox items. My goal is to go in the same direction as KirigamiAddons.RoundedItemDelegate.

Menu without outlines

  • Ensure that all the controls have the same height. Currently a small disparity in height is noticeable when they are in the same row. The patch is still a bit hacky and needs wider testing on a large range of apps to ensure there are no regressions, but it is also an improvement I will definitely submit upstream once I feel it’s ready.

 

 

Here, in these two screenshots, every control is 35 pixels tall.

Finally, here are Kate’s and KMail’s settings with Breeze and Brise.

Monday, 18 December 2023

In this post, I will detail how to replace sudo (a setuid binary) by using SSH over a local UNIX socket.

I am of the opinion that setuid/setgid binaries are a UNIX legacy that should be deprecated. I will explain the security reasons behind that statement in a future post.

This is related to the work of the Confined Users SIG in Fedora.

Why bother?

The main benefit of this approach is that it enables root access to the host from any unprivileged toolbox / distrobox container. This is particularly useful on Fedora Atomic desktops (Silverblue, Kinoite, Sericea, Onyx) or Universal Blue (Bluefin, Bazzite) for example.

As a side effect of this setup, we also get the following security advantages:

  • No longer rely on sudo as a setuid binary for privileged operations.
  • Access control via a physical hardware token (here a Yubikey) for each privileged operation.

Setting up the server

Create the following systemd units:

/etc/systemd/system/sshd-unix.socket:

[Unit]
Description=OpenSSH Server Unix Socket
Documentation=man:sshd(8) man:sshd_config(5)

[Socket]
ListenStream=/run/sshd.sock
Accept=yes

[Install]
WantedBy=sockets.target

/etc/systemd/system/sshd-unix@.service:

[Unit]
Description=OpenSSH per-connection server daemon (Unix socket)
Documentation=man:sshd(8) man:sshd_config(5)
Wants=sshd-keygen.target
After=sshd-keygen.target

[Service]
ExecStart=-/usr/sbin/sshd -i -f /etc/ssh/sshd_config_unix
StandardInput=socket

Create a dedicated configuration file /etc/ssh/sshd_config_unix:

# Deny all non key based authentication methods
PermitRootLogin prohibit-password
PasswordAuthentication no
PermitEmptyPasswords no
GSSAPIAuthentication no

# Only allow access for specific users
AllowUsers root tim

# The default is to check both .ssh/authorized_keys and .ssh/authorized_keys2
# but this is overridden so installations will only check .ssh/authorized_keys
AuthorizedKeysFile .ssh/authorized_keys

# override default of no subsystems
Subsystem sftp /usr/libexec/openssh/sftp-server

Enable and start the new socket unit:

$ sudo systemctl daemon-reload
$ sudo systemctl enable --now sshd-unix.socket

Add your SSH Key to /root/.ssh/authorized_keys.
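A minimal sketch of that step, assuming your public key lives next to the private key used in the client config below (adjust the path to your own key):

$ sudo install -d -m 0700 /root/.ssh
$ sudo tee -a /root/.ssh/authorized_keys < ~/.ssh/keys/localroot.pub
$ sudo chmod 0600 /root/.ssh/authorized_keys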

Setting up the client

Install socat and use the following snippet in ~/.ssh/config:

Host host.local
    User root
    # We use `run/host/run` instead of `/run` to transparently work in and out of containers
    ProxyCommand socat - UNIX-CLIENT:/run/host/run/sshd.sock
    # Path to your SSH key. See: https://tim.siosm.fr/blog/2023/01/13/openssh-key-management/
    IdentityFile ~/.ssh/keys/localroot
    # Force TTY allocation to always get an interactive shell
    RequestTTY yes
    # Minimize log output
    LogLevel QUIET

Test your setup:

$ ssh host.local
[root@phoenix ~]#

Shell alias

Let’s create a sudohost shell “alias” (function) that you can add to your Bash or ZSH config to make using this command easier:

# Get an interactive root shell or run a command as root on the host
sudohost() {
    if [[ ${#} -eq 0 ]]; then
        cmd="$(printf "exec \"%s\" --login" "${SHELL}")"
        ssh host.local "${cmd}"
    else
        cmd="$(printf "cd \"%s\"; exec %s" "${PWD}" "$*")"
        ssh host.local "${cmd}"
    fi
}

2024-01-12 update: Fix quoting and array expansion (thanks to o11c).

Test the alias:

$ sudohost id
uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
$ sudohost pwd
/var/home/tim
$ sudohost ls
Desktop Downloads ...

We’ll keep a distinct alias for now as we’ll still have a need for the “real” sudo in our toolbox containers.

Security?

As-is, this setup is basically a free local root for anything running under your current user that has access to your SSH private key. However, this is likely already the case on most developers’ workstations if you are part of the wheel, sudo or docker groups, as any code running under your user can edit your shell config and set a backdoored alias for sudo, or run arbitrary privileged containers via Docker. sudo itself is not a security boundary as commonly configured by default.

To truly increase our security posture, we would instead need to remove sudo (and all other setuid binaries) and run our session under a fully unprivileged, confined user, but that’s for a future post.

Setting up U2F authentication with an sk-based SSH key-pair

To make it more obvious when commands are run as root, we can set up SSH authentication using U2F, with a Yubikey as an example. While this does not, strictly speaking, increase the security of this setup by itself, it makes it harder to run commands without you being at least somewhat aware of it.

First, we need to figure out which algorithms are supported by our Yubikey:

$ lsusb -v 2>/dev/null | grep -A2 Yubico | grep "bcdDevice" | awk '{print $2}'

If the value is 5.2.3 or higher, then we can use ed25519-sk, otherwise we’ll have to use ecdsa-sk to generate the SSH key-pair:

$ ssh-keygen -t ed25519-sk
# or
$ ssh-keygen -t ecdsa-sk

Add the new sk-based SSH public key to /root/.ssh/authorized_keys.

Update the server configuration to only accept sk-based SSH key-pairs:

/etc/ssh/sshd_config_unix:

# Only allow sk-based SSH key-pairs authentication methods
PubkeyAcceptedKeyTypes sk-ecdsa-sha2-nistp256@openssh.com,sk-ssh-ed25519@openssh.com

...

Restricting access to a subset of users

You can also further restrict the access to the UNIX socket by configuring classic user/group UNIX permissions:

/etc/systemd/system/sshd-unix.socket:

...

[Socket]
...
SocketUser=tim
SocketGroup=tim
SocketMode=0660
...

Then reload systemd’s configuration and restart the socket unit.
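That is:

$ sudo systemctl daemon-reload
$ sudo systemctl restart sshd-unix.socket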

Next steps: Disabling sudo

Now that we have a working alias to run privileged commands, we can disable sudo access for our user.

Important backup / pre-requisite step

Make sure that you have a backup and are able to boot from a LiveISO in case something goes wrong.

Set a strong password for the root account. Make sure that you can log into the system locally as root via a TTY console.

If you have the classic sshd server enabled and listening on the network, make sure to disable remote root login and password logins.
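For the network-facing sshd, that typically means something like the following in /etc/ssh/sshd_config (adjust to your setup):

PermitRootLogin no
PasswordAuthentication no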

Removing yourself from the wheel / sudo groups

Open a terminal running as root (i.e. don’t use sudo for those commands) and remove your user from the wheel or sudo groups using:

$ usermod -rG wheel tim

You can also update the sudo config to remove access for users that are part of the wheel group:

# Comment / delete this line
%wheel  ALL=(ALL)       ALL

Removing the setuid binaries

To fully benefit from the security advantage of this setup, we need to remove the setuid binaries (sudo and su).

If you can, uninstall sudo and su from your system. This is usually not possible due to package dependencies (su is part of util-linux on Fedora).

Another option is to remove the setuid bit from the sudo and su binaries:

$ chmod u-s $(which sudo)
$ chmod u-s $(which su)

You will have to re-run those commands after each update on classic systems.

Setting this up for Fedora Atomic desktops is a little bit different as /usr is read only. This will be the subject of an upcoming blog post.

Conclusion

As is often the case with security, this is not a silver bullet that will make your system “more secure”™. I have been working on this setup as part of my investigation into reducing our reliance on setuid binaries and figuring out alternatives for common use cases.

Let me know if you found this interesting as that will likely motivate me to write the next part!


An updated stable release of XWayland Video Bridge is out now for packaging.

https://download.kde.org/stable/xwaylandvideobridge/

sha256 ea72ac7b2a67578e9994dcb0619602ead3097a46fb9336661da200e63927ebe6

Signed by E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Esk-Riddell <jr@jriddell.org>
https://jriddell.org/esk-riddell.gpg
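As a sketch, packagers can check the tarball against the hash and key above (the file name is a placeholder for the actual release tarball, and this assumes a detached .sig is published alongside it):

$ sha256sum xwaylandvideobridge-<version>.tar.xz
$ curl -O https://jriddell.org/esk-riddell.gpg
$ gpg --import esk-riddell.gpg
$ gpg --verify xwaylandvideobridge-<version>.tar.xz.sig xwaylandvideobridge-<version>.tar.xz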

Changes

  • Also skip the switcher
  • Do not start in an X11 session and opt out of session management

Sunday, 17 December 2023

In my last post about HDR and color management I explained roughly how color management works, what we’re doing to make it work on Wayland and how far along we were with that in Plasma. That’s more than half a year ago now, so let’s take a look at what changed since then!

Color management with ICC profiles

KWin now supports ICC profiles: In display settings you can set one for each screen, and KWin will use that to adjust the colors accordingly.

Applications are still limited to sRGB for now. For Wayland native applications, a color management protocol is required to change that, so that apps can know about the colorspaces they can use, and so that KWin can know which colorspace their windows are actually using, and the upstream color management protocol for that is still not done yet. It’s getting close though! For example I have an implementation for it in a KWin branch, and Victoria Brekenfeld from System76 implemented a Vulkan layer using the protocol to allow applications to use the VK_EXT_swapchain_colorspace and VK_EXT_hdr_metadata Vulkan extensions, which can be used to run some applications and games with non-sRGB colorspaces.

Apps running through Xwayland are strictly limited to sRGB too, even if they have the ability to work with ICC profiles, as they have the same problem as Wayland native apps: Outside of manual overrides with application settings, there’s no way to tell them to use a specific ICC profile or colorspace, and there’s also no way for KWin to know which profile or colorspace the application is using. Even if you set an ICC profile with an application setting, KWin still doesn’t know about that, so the colors will be wrong.[1]

It would be possible to introduce an “API” using X11 atoms to make at least the basic arbitrary primaries + sRGB EOTF case work though, so if any developers of apps that are still stuck with X11 for the foreseeable future would be interested in that, please contact me about it!

HDR

In Plasma 6 you can enable HDR in the display settings, which enables static HDR metadata signalling for the display, with the PQ EOTF and the display’s preferred brightness values, and sets the colorspace to rec.2020.

With that enabled, you get two additional settings:

  • “SDR Brightness” is, as the name suggests, the brightness KWin renders non-HDR stuff at, and effectively replaces the brightness setting that most displays disable when they’re in HDR mode
  • “SDR Color Intensity” is inspired by the color slider on the Steam Deck. For sRGB applications it scales the color gamut up to (at 100%) rec.2020, or more simply put, it makes the colors of non-HDR apps more intense, to counteract the bad gamut mapping many HDR displays do and make colors of SDR apps look more like when HDR is disabled

HDR settings page

There are some additional hidden settings to override bad brightness metadata from displays too. A GUI for that is still planned, but until that’s done you can use kscreen-doctor to override the brightness values your screen provides.

KWin now also uses gamma 2.2 instead of the sRGB piece-wise transfer function for sRGB applications, as that more closely matches what displays actually do in SDR mode. This means that in the dark regions of sRGB content things will now look like they do with HDR disabled, instead of things being a little bit brighter and looking kind of washed out.[2][3]


My last post ended at this point, with me saying that looking at boring sRGB apps in HDR mode would be all you could do for now… well, not anymore! While I already mentioned that Xwayland apps are restricted to sRGB, gamescope uses a Vulkan layer together with a custom Wayland protocol to bypass Xwayland almost entirely. This is how HDR is done on the Steam Deck OLED and it works well, so all that was still missing is a way for gamescope to pass the buffers and HDR metadata on to KWin.

To make that happen, Joshua Ashton from Valve and I put together a small Wayland protocol for doing HDR until the upstream protocol is done. I implemented it in KWin, forked Victoria’s Vulkan layer to make my own using that protocol and Joshua implemented HDR support for gamescope nested with the Vulkan extensions implemented by the layer.

The Plasma 6 beta is already shipping with that implementation in KWin, and with a few additional steps you can play most HDR capable games in the Wayland session:

  1. install the Vulkan layer
  2. install gamescope git master, or at least a version that’s new enough to have this commit
  3. run Steam with the following command:
    ENABLE_HDR_WSI=1 gamescope --hdr-enabled --hdr-debug-force-output --steam -- env ENABLE_GAMESCOPE_WSI=1 DXVK_HDR=1 DISABLE_HDR_WSI=1 steam
    

To explain a bit what that does, ENABLE_HDR_WSI=1 enables the Vulkan layer, which is off by default. gamescope --hdr-enabled --hdr-debug-force-output --steam runs gamescope with HDR force-enabled (automatically detecting and using HDR instead of that is planned) and with Steam integration, and env ENABLE_GAMESCOPE_WSI=1 DXVK_HDR=1 DISABLE_HDR_WSI=1 steam runs Steam with gamescope’s Vulkan layer enabled and mine disabled, so that they don’t conflict.

You can also adjust the brightness of SDR stuff in gamescope with the --hdr-sdr-content-nits flag; for a list of things you can do, just check gamescope --help. The full command I’m using for my screen is:

ENABLE_HDR_WSI=1 gamescope --fullscreen -w 5120 -h 1440 --hdr-enabled --hdr-debug-force-output --hdr-sdr-content-nits 600 --steam -- env ENABLE_GAMESCOPE_WSI=1 DXVK_HDR=1 DISABLE_HDR_WSI=1 steam -bigpicture

With that, Steam starts in a nested gamescope instance and games started in it have HDR working, as long as they use Proton 8 or newer. When this was initially implemented as a bunch of hacks at XDC this year there were issues with a few games, but right now, almost all the HDR capable games in my own Steam library work fine and look good in HDR. That includes

  • Ori and the Will of the Wisps
  • Cyberpunk 2077
  • God of War
  • Doom Eternal
  • Jedi: Fallen Order
  • Quake II RTX

Quake II RTX doesn’t even need gamescope! With ENABLE_HDR_WSI=1 SDL_VIDEODRIVER=wayland in the launch options for the game in Steam you can get a Linux- and Wayland-native game working with HDR, which is pretty cool.

I also have Spider-Man: Remastered and Spider-Man: Miles Morales, in both of which HDR also ‘works’, but it looks quite washed out vs. SDR. I found complaints about the same problem from Windows users online, so I assume the implementation in the games is just bad and there’s nothing wrong with the graphics stack around them.

jedi fallen order

god of war

Cyberpunk 2077

(Sorry for showing SDR screenshots of HDR games, but HDR screenshots aren’t implemented yet, and it looks worse when I take a picture of the screen with my phone)


With the same Vulkan layer, you can also run other HDR-capable applications, like for example mpv:

ENABLE_HDR_WSI=1 mpv --vo=gpu-next --target-colorspace-hint --gpu-api=vulkan --gpu-context=waylandvk "path/to/video"

An actual HDR video

This time the video being played is actually HDR, without any hacks! And while my phone camera isn’t great at capturing HDR content in general, this is one of the cases where you can really see how HDR is actually better than SDR, especially on an OLED display:

HDR and SDR side by side


The future

Obviously there is still a lot to do. Color management is limited to either sRGB or full-blown rec.2020, you shouldn’t have to install stuff from GitHub yourself, and you certainly shouldn’t have to mess around with the command line[4] to play games and watch videos in HDR. HDR screenshots and HDR screen recording aren’t a thing yet, and many other small and big things need implementing or fixing. There’s a lot of work needed to make these things just work™ as they should outside of special cases like the gamescope embedded session.

Things are moving fast though, and I’m pretty happy with the progress we made so far.





  1. Note that if you do want or need to set an ICC profile in the application for some reason, setting an sRGB profile as the display profile is wrong. It must have rec.709 primaries, but with the gamma 2.2 transfer function instead of the piece-wise sRGB one so often used! 

  2. This is the reason for footnote 1 

  3. As far as I know, Windows 11 still does this wrong! 

  4. I recommend setting up .desktop files to automate that away if you want to use HDR more often. Right click on the application launcher in Plasma -> Edit Applications makes that pretty easy 
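For footnote 4, a minimal sketch of such a .desktop file based on the mpv command above (the name is illustrative; drop it into ~/.local/share/applications/):

[Desktop Entry]
Type=Application
Name=mpv (HDR)
Exec=env ENABLE_HDR_WSI=1 mpv --vo=gpu-next --target-colorspace-hint --gpu-api=vulkan --gpu-context=waylandvk %U
Terminal=false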

Friday, 15 December 2023

Let’s go for my web review for the week 2023-50.


Make a Linux App

Tags: tech, foss, linux

Very needed evangelization. Go forth and make apps for Linux!

https://makealinux.app/


New extensions you’ll love now available on Firefox for Android

Tags: tech, mozilla, foss, android

Definitely the right move, more extensions are pouring in. People should benefit from them on Android as well.

https://blog.mozilla.org/en/mozilla/new-extensions-youll-love-now-available-on-firefox-for-android/


Spritely and Veilid: Exciting Projects Building the Peer-to-Peer Web

Tags: tech, distributed, p2p, privacy

What’s cooking up for the next generation of peer-to-peer applications? Here are two exciting examples of building blocks which are in the works.

https://www.eff.org/deeplinks/2023/12/meet-spritely-and-veilid


W4 Games raises $15M to drive video game development inflection with Godot Engine

Tags: tech, 3d, gaming, godot, foss

Good news for the Godot Engine. Let’s see where this goes in the coming years.

https://w4games.com/2023/12/07/w4-games-raises-15m-to-drive-video-game-development-inflection-with-godot-engine/


Epic v. Google: everything we’re learning live in Fortnite court

Tags: tech, google, law, monopoly

There will be an appeal but this is an important ruling already.

https://www.theverge.com/23945184/epic-v-google-fortnite-play-store-antitrust-trial-updates#stream-entry-65d34a06-1fa5-4eab-abf6-ce450441b543


Pluralistic: “If buying isn’t owning, piracy isn’t stealing”

Tags: tech, DRM, copyright, criticism

Once more, an excellent piece from Cory Doctorow. Allowing DRM encumbered devices could only lead to the mess we’re seeing nowadays.

https://pluralistic.net/2023/12/08/playstationed/


Power Hungry Processing: ⚡ Watts ⚡ Driving the Cost of AI Deployment?

Tags: tech, ai, machine-learning, gpt, energy

Important and interesting study showing how the new generation of models are driving energy consumption way up. As a developer, do the responsible thing and use smaller, more specific models.

https://arxiv.org/pdf/2311.16863.pdf


The AI trust crisis

Tags: tech, ai, machine-learning, surveillance, trust, transparency

There’s definitely a problem here. The lack of transparency from the companies involved doesn’t help. It’s also a chance for local and self-hostable models; let’s hope their use increases.

https://simonwillison.net/2023/Dec/14/ai-trust-crisis/


Kernel vs. User-Level Networking: Don’t Throw Out the Stack with the Interrupts | Proceedings of the ACM on Measurement and Analysis of Computing Systems

Tags: tech, linux, kernel, networking

Interesting deep dive about how network stacks work in kernel or in user land. Also provides some insight on how to improve the kernel stack.

https://dl.acm.org/doi/10.1145/3626780


What every computer scientist should know about floating-point arithmetic

Tags: tech, floats, mathematics

Probably the definitive resource on how floating-point arithmetic works.

https://dl.acm.org/doi/pdf/10.1145/103162.103163


The SQL Murder Mystery

Tags: tech, sql, databases, learning, game, funny

You like SQL? You like murder mysteries? This little game might be right for you.

https://mystery.knightlab.com/


Patching around a C++ crash with a little bit of Lua

Tags: tech, http, safety, bug, c++, lua

Nice trick to get the pressure off the team while it looks for a proper solution.

https://rachelbythebay.com/w/2023/12/07/header/


Real-world match/case

Tags: tech, python, pattern

Nice illustration on how pattern matching can simplify code and make it easier to write.

https://nedbatchelder.com/blog/202312/realworld_matchcase.html


trippy | A network diagnostic tool

Tags: tech, tools, networking

Looks like a nice tool for exploring network issues. I’ll take it for a spin when I get the chance.

https://trippy.cli.rs/


Dynamic music and sound techniques for video games

Tags: tech, music, sound, gaming

Interesting little tricks to create music and sound variations to guide the user.

https://blog.gingerbeardman.com/2023/12/09/dynamic-music-and-sound-techniques-for-video-games/


Behavior Belongs in the HTML

Tags: tech, html, semantic

This would indeed be a nice path forward for HTML. It’s much too dominated by JavaScript for now; having standardized semantic extensibility would just be better.

https://unplannedobsolescence.com/blog/behavior-belongs-in-html/


Painting with Math: A Gentle Study of Raymarching - Maxime Heckel’s Blog

Tags: tech, 3d, shader, mathematics

Another nice introduction to raymarching. I still find this a very interesting rendering approach. It’s really cool what you can do with those Signed Distance Fields functions.

https://blog.maximeheckel.com/posts/painting-with-math-a-gentle-study-of-raymarching/


Canon TDD - by Kent Beck - Software Design: Tidy First?

Tags: tech, tdd, design, tests

This is apparently a much needed clarification. Let’s get back to basics.

https://tidyfirst.substack.com/p/canon-tdd


Advice to a New Speaker - Dan North & Associates Limited

Tags: tech, talk

Nice list of advice in the second section. It makes good points about some of the dynamics women might have to face at public conferences.

https://dannorth.net/advice-to-a-new-speaker/


The Importance of Career Laddering | CSS-Tricks - CSS-Tricks

Tags: tech, management, career

You got a career ladder in place? Well, that’s just a first step, how do you make sure the expectations are clear to people? How do you follow through? This article helps with those questions.

https://css-tricks.com/the-importance-of-career-laddering/


The surprising connection between after-hours work and decreased productivity

Tags: organization, meetings, productivity, work, life

Interesting survey results. This kind of confirms what we already suspected regarding longer work days and the amount of meetings.

https://slack.com/intl/en-gb/blog/news/the-surprising-connection-between-after-hours-work-and-decreased-productivity?nojsmode=1



Bye for now!

Wednesday, 13 December 2023

There are some plans to have a more official blog post on the plasma-mobile.org, these are my personal ramblings below…

Plasma 6 is coming together nicely on the desktop!

Coming back from hiatus, I was pleasantly greeted by a much more working session than when I last saw it in May; I have now completely switched over to it on my main machine!

On the other hand, there is still a lot of work to do on mobile to prepare it for the Plasma 6 release in February. I will outline the current situation and the work I have done in the past few months in order to make Plasma 6 a possibility for Plasma Mobile.

Context 🔗

I started working on KDE projects again in October (after an 8 month hiatus), slowly getting back into the groove of things. I unfortunately do not have as much spare time these days after work and school, but I try to do what I can!

The distro situation 🔗

For Plasma Mobile shell development on phones, we need to have distributions with KDE packages built from git master.

Many years ago, we had Neon images with Plasma Mobile, maintained by Bhushan, that were used for development, but they required a large investment of time to maintain. We no longer have the (human) resources to maintain a distribution, so we depend on working with other distributions for development images.

Until this year, Manjaro had helped us by providing daily development images. Unfortunately, I have not really had contact with the project since one of their employees left earlier this year.

As a sidenote: Maintenance of the Manjaro Plasma Mobile images seemed to be inactive in the past few months, though there have been some encouraging signs from the forum in the past few days?


And so, there was a predicament. While I could develop the Plasma 6 mobile shell on the desktop, I had no way of testing it on a phone.

Thankfully, postmarketOS graciously took up the effort of maintaining a repository of KDE packages that track git master. Instructions can be viewed here. With the Plasma 6 beta release, there will be postmarketOS images that can be easily downloaded and tested with!

Seshan has also been working on an immutable distro ProLinux which has images that track git master as well, which have been useful to test with.

Porting 🔗

A big part of the porting effort is simply porting the shell and applications to Qt 6 and KDE Frameworks 6.

In the case of the shell, porting to Qt 6 was luckily fairly trivial. There were far more changes to the base KDE frameworks as package structures and QML plugins were being reworked, though that has stabilized in recent months. There are still some major regressions that need to be resolved before February (discussed later), but I am reasonably confident that the shell will be in good shape by then.

For applications however, it is more of a mixed bag.

Several applications do not have active maintainers, and others need much more polish for mobile usage (as not all application developers have phones to test with).

Technically, I am responsible for KClock, KWeather and KRecorder, but I have typically done contributions to a multitude of applications to ensure that they work well on mobile. This time around though, I have only been really able to work on the shell with the limited spare time I have, and so I have not been able to do some of the heavier porting work for applications.

KRecorder in particular is unlikely to be ported in time for Plasma 6 as it depends on Qt Multimedia, which has had significant changes in Qt 6.

Beyond porting, I have also noticed some significant mobile-specific regressions in other applications that have been ported, which need to be addressed.

I would encourage anyone who is thinking of contributing to KDE to start here, application development is a very good way to learn Qt and get into further KDE contributions (as it is quite self-contained, compared to, say, the shell).

Styles 🔗

Plasma Mobile currently uses a separate Qt Quick style from the desktop (qqc2-breeze-style vs. qqc2-desktop-style), in which maintenance was lagging for Plasma 6.

The separate style is needed in order to have better performance, as it avoids some of the complex theming that the desktop needs to use for unifying styles between Qt Quick and Qt Widgets applications. However, it is a maintenance burden.

In November, I did a bunch of work on qqc2-breeze-style in order to fix Plasma 6 related regressions, and to make it render more similarly to qqc2-desktop-style.

Task Switcher moving to KWin 🔗

In Plasma 5, the task switcher was built into the plasmashell process (which contains the homescreen, panels and most of what you see in Plasma), piping the application thumbnails from KWin.

To improve performance, it was implemented as a bit of a hack: the task switcher was built into the homescreen, so when it was opened, apps were minimized, showing the task switcher underneath. It was not optimal though, as it required convoluted logic to swap between the homescreen and task switcher views, and the thumbnails streamed from KWin were not optimal performance-wise.

With Plasma 6, I am moving the task switcher to be a self-contained KWin effect instead, which is in line with how the Desktop overview effect is implemented. This moves the task switcher from plasmashell to KWin, cleaning up the code and potentially bringing performance improvements in how the application previews are displayed.

We can also make use of KWin’s infrastructure for gesture-only mode (removing the navigation bar), rather than relying on a hack with invisible panels in order to trigger it.

The move has been a bit problematic though, as KWin developed regressions during Plasma 6 development that cause effect rendering to be broken on the PinePhone and SDM845 devices (possibly due to OpenGL versions?), but the issues are being worked on. As of Dec. 13, this has been fixed.

Rewriting Folio - the homescreen 🔗

Note: I will be writing a separate blog post that goes much more into detail in the future.

For some context, the default homescreen in Plasma 5 is Halcyon, which provides a simple way to have a list of applications, while allowing them to be pinned and grouped into folders.

We also have the Folio homescreen, which was the original default (before Plasma 5.26) that was more similar to a traditional homescreen, having favourites pages and an application drawer to access the full list of apps.

The problem with Folio in Plasma 5 though was that it was particularly unstable (known to brick the shell), and was effectively an extended desktop canvas, so screen rotations and scaling changes would completely ruin the layout.

I knew that it would require a very significant effort in order to rewrite it and fix its issues, so I developed Halcyon as a stopgap solution until I had the time to fix Folio.


And so I spent about 5 weeks starting in October working solely on the Folio rewrite! It will be shipping as the default homescreen once again in Plasma 6.

I am pretty happy with how it turned out, it supports:

  • An app drawer
  • KRunner search
  • Folders
  • Pages
  • Drag and drop between all of the above
  • Applets/widgets
  • Row-column flipping for screen rotations
  • Customizable row and column counts
  • Customizable page transitions
  • Ability to import and export homescreen layouts as files
  • … and more!

Applets in particular are pretty exciting, though still need some work. They use the same infrastructure that the Desktop uses, so we can use existing applets!

New applets for mobile apps can eventually also be developed, pending interest.

A new service: plasma-mobile-envmanager 🔗

Plasma Mobile relies on some configurations (set in files) for KWin and the shell in general that directly conflict with what Plasma Desktop expects.

For example, we use different look-and-feel packages to provide pieces such as the lockscreen theme, as well as tweaks for features such as disabling window decorations.

In Plasma 5, this was accomplished by having the distribution ship config files that are installed to /etc/xdg (from plasma-phone-settings), which overrode Plasma related settings for users.

This was problematic in that it would affect the desktop session, making it impossible to use both the desktop and mobile sessions without doing some tweaking before switching. It also created a barrier to being easily installable as “just another desktop environment”, which was a common complaint.


In Plasma 6, I introduced plasma-mobile-envmanager, a utility that runs prior to shell startup and automatically switches the configuration between what is needed for Plasma Mobile and what is needed for Plasma Desktop.

This allows distros to stop installing hardcoded configs onto the system, and makes it easy to install Plasma Mobile as a separate desktop environment on existing systems.

A new application: plasma-mobile-initial-start 🔗

I added an application that runs when starting Plasma Mobile for the first time, guiding users through configuring their system, from setting up Wi-Fi to configuring cellular settings.

It currently exists as an application that runs when the shell is started for the first time.

However, it likely needs to eventually be ported to be a true first-start wizard similar to what GNOME has as well as pico-wizard (used by Manjaro), in order to have elevated permissions so that the user does not have to be prompted for their password.

Docked mode 🔗

With Plasma 6, I am making some steps toward improving support with attaching a monitor, keyboard and mouse.

A new “Docked Mode” quick setting was introduced that, when activated:

  • Brings back window decorations
  • Stops opening application windows in fullscreen

Eventually, some more work can be done in order to have the external monitors load desktop panels instead of the mobile ones, which should make the experience equivalent to Plasma Desktop.

Telephony 🔗

I have historically not done much work on the telephony front.

There have luckily been contributors that have worked on Spacebar and Plasma Dialer (shoutout to Michael and Alexey!) in the past few years, allowing me to focus on other things.

From recent testing though, there have been a lot of regressions in Plasma 6, so I likely need to start learning the ropes of how it all works in order to help out. Of particular focus for me will be improving the quality of the cellular settings and overall shell integration with ModemManager.

My current carrier only supports VoLTE so I have been unable to test calling, and I have had trouble with my PinePhone in getting cellular working, but I will probably try buying a USB modem to do testing on the desktop with.

Settings module consolidation 🔗

In Plasma 5, a lot of mobile specific settings modules lived outside of the Plasma Mobile repository, in places such as the settings application.

I moved these settings modules together to be in the Plasma Mobile repository. This also removes the need to have a separate build of plasma-nm.

Other things to address 🔗

A big pain point in Plasma 5 was the mobile lockscreen. There were many cases of it crashing (causing the white text on black screen issue) as well as extraordinarily slow load times.

Much of it likely stems from the fact that it has to load the QML files right after the device locks, which can be slow and sometimes has graphical issues when coupled with suspend. I have tried to optimize the load time in the past, but it may be the case that we need to rethink the architecture a bit, not sure…

Conclusion 🔗

I will be returning to a university term in January, likely making it harder for me to contribute for a few months again.

I have luckily finished most of the features I have wanted to get done for Plasma 6, and am now spending my effort fixing bugs and improving code quality. I hope that we can have a successful, bug-free Plasma Mobile release in February, but it is quite daunting at the moment as a single volunteer contributor for the mobile shell.

If you are interested in helping contribute, I encourage you to join the Plasma Mobile matrix room!

There is also some documentation on the wiki that can help you get started.

go konqi!