
Welcome to Planet KDE

This is a feed aggregator that collects what the contributors to the KDE community are writing on their respective blogs, in different languages.

Thursday, 20 January 2022



The Linux App Summit (LAS) of 2022 will be held in Rovereto, a picturesque city at the foot of the Italian Alps.

Whether you are a company, journalist, developer, or user interested in the ever-growing Linux app ecosystem, LAS will have something for you. Scheduled for April, LAS 2022 will be a hybrid event, combining on-site and remote sessions, including talks, panels and Q&As.

The call for papers will open soon, and registration will follow shortly after.

Follow us on Twitter to keep up to date with Linux App Summit news.

About the Linux App Summit

The Linux App Summit (LAS) brings the global Linux community together to learn, collaborate, and help grow the Linux application ecosystem. Through talks, panels, and Q&A sessions, we encourage attendees to share ideas, make connections, and join our goal of building a common app ecosystem. Previous iterations of the Linux App Summit have been held in the United States in Portland, Oregon and Denver, Colorado, as well as in Barcelona, Spain.

Learn more by visiting linuxappsummit.org.

About Rovereto



Rovereto is an old fortress town in northern Italy at the foot of the Italian Alps. It is located in the autonomous province of Trento and is the main city of the Vallagarina district.

The city has several interesting sites including:

  • The Ancient War Museum
  • A castle built by the counts of Castelbarco
  • The Museum of Modern and Contemporary Art of Trento

Rovereto's economy revolves around wine, coffee, rubber, and chocolate. The town was acknowledged as a “Peace town” in the 20th century and is also the location of important palaeontological remains, such as dinosaur footprints in the surrounding area.

We look forward to seeing you in Rovereto, Italy.

* The image “Rovereto” featured above is by barnyz and is distributed under a CC BY-NC-ND 2.0 license.

I was recently interviewed for episode 261 of Destination Linux, and it was a blast! Check it out there, or here:

For the folks commenting on my background, yes indeed, all KDE bugs are bed bugs.

Speaking of which… go squish some!!!

Wednesday, 19 January 2022

One of the more broadly useful things to come out of KDE Frameworks efforts is, in my opinion, the KDE Extra CMake Modules (ECM). Since KDE software nearly universally uses CMake as its (meta-)build system, a lot of common functionality is distilled into the ECM. It makes building KDE software more consistent and generally easier. Inspired by KDE ECM, let me present ARPA2CM, a conceptually similar set of CMake modules for a different software stack.

Software Stack

The ARPA2CM modules are – no surprise here – CMake modules for the ARPA2 stack. ARPA2 is a loose collection of projects for security-related things. A lot of it is based on Kerberos, there's a bunch of TLS work, and as an over-arching idea there's "identity management", where it should be far easier to switch identities when accessing the internet – authentication should not belong to walled gardens.

There’s some philosophy at internetwide.org.

The irony of using plain http there is not lost on me.

My part of the stack is largely the CMake bits and some LDAP-wrangling. Early on we noticed that there would be a dozen or more somewhat-independent repositories, and they would all have common needs: finding Kerberos, finding TLS libraries, dealing with logging… exactly the category of things that KDE's ECM does.

However, the dependencies (find-modules) are ones that few KDE applications need, and the major language is C rather than C++, with no Qt in sight, so ECM itself is a bad fit for the CMake-level tooling. We developed our own collection, although it's all BSD-2-Clause licensed so that it could be incorporated upstream (in CMake) or cross-stream (in KDE ECM) if needed.

There are also some bits that were copied straight out of ECM into ARPA2CM, like FindGperf and its supporting modules.

The target platform(s) for the software stack are primarily Linux (Debian stable and newer, I think) and secondarily Windows; this means that the chosen CMake-required-version, for instance, is a bit older than the most-recent one.

Coding Style

Since most of the "code" is CMake code, the important formatting tool is not clang-format. I chose gersemi as the CMake formatter; Harald Sitter once spent a long time looking for such formatting tools and pointed me at this one. It is somewhat simplistic, but it does do reasonable formatting and is somewhat opinionated about it, so there's not a ton to tweak.

Once I started using gersemi in earnest, I found some bugs, reported them, and they were fixed. I like tools like that.

I have not gone over the code repeatedly to make it the most beautiful CMake code possible. That's for some time when there's other slack in the schedule. It's reasonably consistent all over, but the module documentation, for instance, could be made .rst for consistency with ECM and CMake documentation. Merge requests over on GitLab are welcome.

Using ARPA2CM

Since ARPA2CM is not packaged (as far as I know; heck, this is the first release announcement so why would anyone have done that already), getting it might currently be tricky. If you’re the vendoring type, plopping the find-modules/ and modules/ directories into another project might be enough.

If ARPA2CM is installed, then there is an example CMakeLists.txt which looks for it. This is an awful lot like the way you might use KDE ECM in your project.

Future

I expect to do some minor maintenance on the ARPA2CM collection, and as the rest of the stack matures, it may grow some new modules for shared dependencies or use. FreeDiameter is the most recent addition to the modules (just before the 1.0 release of ARPA2CM). As long as it does its job for the dozen projects under the ARPA2 umbrella, it's "good enough".

Now I can move on to working on LDAP-wrangling software that uses these CMake modules.

Work on ARPA2CM was sponsored by Stichting NLNet and by the NGI0 project “xover”, at various points in the now 4-year history of ARPA2CM.

KDE Plasma 5.24 Beta is here! 🎉

On a classic Fedora system or on other distributions you can try it with the repos listed on the KDE Wiki. Here is how to safely try it on Fedora Kinoite, using the packages for Fedora 35 made by Marc Deop, a member of the Fedora KDE SIG.

The latest version of KDE Plasma is usually available in Fedora Rawhide (unfortunately not right now); however, rebasing the entire system to a development version involves a lot of uncertainty. Thus it is much safer to change only the KDE Plasma and Frameworks packages while keeping a stable system as a base.

As always, make sure to back up your data before trying out beta software that could result in the loss of your personal cat picture collection.

Setting up the RPM repos

Add the following Fedora COPR repos (frameworks, plasma) on your host and inside a toolbox:

$ cat /etc/yum.repos.d/kde-beta.repo
[copr:copr.fedorainfracloud.org:marcdeop:frameworks]
name=Copr repo for frameworks owned by marcdeop
baseurl=https://download.copr.fedorainfracloud.org/results/marcdeop/frameworks/fedora-$releasever-$basearch/
type=rpm-md
skip_if_unavailable=True
gpgcheck=1
gpgkey=https://download.copr.fedorainfracloud.org/results/marcdeop/frameworks/pubkey.gpg
repo_gpgcheck=0
enabled=1
enabled_metadata=1

[copr:copr.fedorainfracloud.org:marcdeop:plasma]
name=Copr repo for plasma owned by marcdeop
baseurl=https://download.copr.fedorainfracloud.org/results/marcdeop/plasma/fedora-$releasever-$basearch/
type=rpm-md
skip_if_unavailable=True
gpgcheck=1
gpgkey=https://download.copr.fedorainfracloud.org/results/marcdeop/plasma/pubkey.gpg
repo_gpgcheck=0
enabled=1
enabled_metadata=1

Downloading the packages from a toolbox

Download the RPM packages from the repo:

[toolbox]$ echo "bluedevil breeze-cursor-theme breeze-gtk-common breeze-gtk-gtk3 breeze-gtk-gtk4 breeze-icon-theme kactivitymanagerd kde-cli-tools kdecoration kde-gtk-config kdeplasma-addons kdesu kf5-attica kf5-baloo kf5-baloo-file kf5-baloo-libs kf5-bluez-qt kf5-filesystem kf5-frameworkintegration kf5-frameworkintegration-libs kf5-kactivities kf5-kactivities-stats kf5-karchive kf5-kauth kf5-kbookmarks kf5-kcmutils kf5-kcodecs kf5-kcompletion kf5-kconfig-core kf5-kconfig-gui kf5-kconfigwidgets kf5-kcoreaddons kf5-kcrash kf5-kdbusaddons kf5-kdeclarative kf5-kded kf5-kdelibs4support kf5-kdelibs4support-libs kf5-kdesu kf5-kdnssd kf5-kdoctools kf5-kfilemetadata kf5-kglobalaccel kf5-kglobalaccel-libs kf5-kguiaddons kf5-kholidays kf5-khtml kf5-ki18n kf5-kiconthemes kf5-kidletime kf5-kimageformats kf5-kinit kf5-kio-core kf5-kio-core-libs kf5-kio-doc kf5-kio-file-widgets kf5-kio-gui kf5-kio-ntlm kf5-kio-widgets kf5-kio-widgets-libs kf5-kirigami2 kf5-kitemmodels kf5-kitemviews kf5-kjobwidgets kf5-kjs kf5-knewstuff kf5-knotifications kf5-knotifyconfig kf5-kpackage kf5-kparts kf5-kpeople kf5-kpty kf5-kquickcharts kf5-krunner kf5-kservice kf5-ktexteditor kf5-ktextwidgets kf5-kunitconversion kf5-kwallet kf5-kwallet-libs kf5-kwayland kf5-kwidgetsaddons kf5-kwindowsystem kf5-kxmlgui kf5-kxmlrpcclient kf5-modemmanager-qt kf5-networkmanager-qt kf5-plasma kf5-prison kf5-purpose kf5-solid kf5-sonnet-core kf5-sonnet-ui kf5-syntax-highlighting kf5-threadweaver khotkeys kinfocenter kmenuedit kscreen kscreenlocker ksystemstats kwayland-integration kwayland-server kwin kwin-common kwin-libs kwin-wayland kwin-x11 kwrited layer-shell-qt libkscreen-qt5 libksysguard libksysguard-common libkworkspace5 oxygen-sound-theme pam-kwallet plasma-breeze plasma-breeze-common plasma-browser-integration plasma-desktop plasma-desktop-doc plasma-discover plasma-discover-flatpak plasma-discover-libs plasma-discover-notifier plasma-disks plasma-drkonqi plasma-integration plasma-lookandfeel-fedora plasma-milou plasma-nm plasma-nm-openconnect plasma-nm-openvpn plasma-nm-vpnc plasma-pa plasma-systemmonitor plasma-systemsettings plasma-thunderbolt plasma-vault plasma-workspace plasma-workspace-common plasma-workspace-geolocation plasma-workspace-geolocation-libs plasma-workspace-libs plasma-workspace-wayland plasma-workspace-x11 polkit-kde powerdevil qqc2-desktop-style sddm-breeze sddm-kcm xdg-desktop-portal-kde" > packages.list
[toolbox]$ mkdir -p rpm && cd rpm
[toolbox]$ dnf download --arch=x86_64,noarch $(cat ../packages.list)

The list can be generated from the following commands:

[toolbox]$ dnf repository-packages copr:copr.fedorainfracloud.org:marcdeop:frameworks list | grep copr:copr.fedorainfracloud.org:marcdeop:frameworks | grep -vE "(debug|devel|\.src)" | cut -f1 -d\ | sed 's/\.x86_64//' | sed 's/\.noarch//' > frameworks.list
[toolbox]$ dnf repository-packages copr:copr.fedorainfracloud.org:marcdeop:plasma list | grep copr:copr.fedorainfracloud.org:marcdeop:plasma | grep -vE "(debug|devel|\.src)" | cut -f1 -d\ | sed 's/\.x86_64//' | sed 's/\.noarch//' > plasma.list
[toolbox]$ rpm -qa | sed "s/.noarch//" | sed "s/.x86_64//" | sed "s/\.fc35//" | sed "s/\-[^-]*$//" | sed "s/\-[^-]*$//" > installed.list
[toolbox]$ comm -12 <(cat installed.list | sort) <(cat frameworks.list plasma.list | sort) > packages.list

Overriding the packages

Use ostree to pin your current (hopefully working) deployment and then rpm-ostree to create a new deployment with (a lot of) package overrides:

[host]$ sudo ostree admin pin 0
[host]$ cd rpm
[host]$ sudo rpm-ostree override replace ./*.rpm

And reboot.

Fedora Kinoite 35 with the KDE Plasma 5.24 Beta

Rolling back

You can either simply boot the previous deployment, or roll back to it:

[host]$ sudo rpm-ostree rollback

or reset all your overrides:

[host]$ sudo rpm-ostree override reset --all

and reboot.

Conclusion

This is just the first step toward making it easier to try beta versions of KDE Plasma and apps on Fedora Kinoite. There is a lot of work in progress to make this process much simpler in the future.

Plasma 5.24 Beta Review Day

When a new Plasma release enters Beta Phase, there are three weeks of intense testing, bugfixing and polishing.

During this time we need as many users and developers as possible to help find regressions, try to reproduce incoming reports, and generally stay on top of as much as possible. The more users, workflows, use cases and hardware configurations the tests are run on, the more of the entire software stack we can cover.

In order to make this process more accessible, more systematic and hopefully more fun, we have an official "Plasma Beta Review Day".

Who can take part?

Any user of Plasma who is able and willing to install the latest beta or run a live ISO with the beta on it, and who wants to help.

When will it take place?

Thursday, 20 January 2022, 11:00 UTC – 17:00 UTC

Please see our Plasma Schedule in the Future releases section. Look for Beta Review Day.

Where will it be coordinated?

Join us in our Matrix chat room.

What will this consist of?

  • Introductions to our Bugzilla bug-tracking system for people who want support with filing or triaging their first bugs
  • Being assigned a short list of bugs to validate or de-duplicate (for those more experienced)
  • Going through a defined list of all the new areas of Plasma to check for regressions
  • Developers being online: if developers can get debug info for issues you have (we will help you gather it), they might be able to identify and fix things in real time!

What should I prepare?

Ideally, get yourself set up with a beta of the latest Plasma. You can either use one of the Live-Images (without the need to install) or use packages provided by your distribution.

Disclaimer: As this is beta software, there is a (small) chance that you might encounter data loss if you're installing the beta version to your system. Make sure that you back up any data before installing. Users of the live image are less likely to encounter data loss, but a backup of your data is still encouraged!

If you can’t use the Plasma beta on your own system, shells.com has provided us demo accounts, so you can try the beta inside your web browser. These accounts will be available in the shared notes in the web conference channel.

We hope to see you all soon!

We are happy to announce the release of Qt Creator 6.0.2!

Here at KDAB, we recently published a library called KDBindings, which aims to reimplement both Qt signals and slots and data binding in pure C++17.

To get an introduction to the KDBindings implementation of signals and slots, I recommend that you take a look at the KDBindings Getting Started Guide. It will give you an overview of what signals and slots are, as well as how our implementation of them is used. Alternatively, take a look at our introductory blog post.

On a basic level, signals model an event that might occur, like a command being issued or a variable being set. The signal will then "emit" the data associated with the event. The types of this data make up the signal's "signature," similar to a normal function signature. The signal can then be connected to any function with a matching signature. Such functions are referred to as "slots." Whenever the signal is emitted, it calls all of these slots in an arbitrary order, and they can then react to the event.
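
For readers who have not used KDBindings before, here is a minimal sketch of that basic mechanism. The header path and the Signal/connect/emit names are taken from the Getting Started Guide, so treat them as assumptions and check the guide for your installed version:

#include <iostream>
#include <string>

#include <kdbindings/signal.h> // header location assumed; see the Getting Started Guide

int main()
{
    // A signal whose signature is a single std::string.
    KDBindings::Signal<std::string> commandSignal;

    // A slot with a matching signature; it is called on every emit.
    commandSignal.connect([](const std::string &command) {
        std::cout << "Command issued: " << command << std::endl;
    });

    // Emitting the signal invokes all connected slots with the emitted data.
    commandSignal.emit("open-file");
}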

Because this mechanism is very well suited to loosely coupling different systems, we discovered that it is often useful to connect a signal to a slot even if their signatures only partially match.

This is especially useful if the signature of the signal was not designed to explicitly work together with this particular slot, but rather to be generic. Signals might emit too much information, for example, when a slot is only interested in the occurrence of an event and not the data associated with it. On the other hand, a slot might require additional information that is not emitted by the signal but might be available when the slot is connected.

Commands and Titles

Let’s say we have a Signal<std::string, std::chrono::time_point<std::chrono::system_clock>> that is emitted every time the user issues a command. The std::string that is emitted contains the command issued and the time_point contains the timestamp of when the command was issued.

Now imagine that we want to update the title of our application so that it always displays the last issued command.

Ideally, we would want to be able to write something like this to make that happen:

class Window {
public:
  void setTitle(const std::string& title) { ... }
};

// This Signal would be emitted every time the user issues a command
Signal<std::string, std::chrono::time_point<std::chrono::system_clock>> commandSignal; 

Window appWindow;
commandSignal.connect(&Window::setTitle, &appWindow);

Unfortunately, this won’t just work immediately. The connect function on the signal needs to do a lot of work here:

  • Window::setTitle is a member function, so it will need a this pointer to the object it should modify. In this case, that is &appWindow.
  • The function doesn’t have any use for the time_point emitted by the signal, only for the std::string. So, the signal must discard its second argument for this particular slot.

std::bind as a Workaround

The problem is that the default implementation of the connect function only takes a std::function<void(std::string, std::chrono::time_point<std::chrono::system_clock>)> as its argument.

A workaround for this could be the use of std::bind:

commandSignal.connect(std::bind(&Window::setTitle, &appWindow, std::placeholders::_1));

With this, we can provide our member function with the required pointer to our appWindow. Furthermore, std::bind returns an object that can be called with a basically unlimited number of arguments. Since we only used std::placeholders::_1, it will then discard all arguments except the first one. In our case, this would discard the time_point argument, which is exactly what we need.
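
To make that discarding behavior concrete, here is a tiny, self-contained illustration (the Window type is the hypothetical one from above, not part of any library): the bound object is invoked with the full signal signature, and every argument without a placeholder is dropped.

#include <chrono>
#include <functional>
#include <iostream>
#include <string>

struct Window {
    void setTitle(const std::string &title) { std::cout << "Title: " << title << "\n"; }
};

int main()
{
    Window appWindow;
    auto bound = std::bind(&Window::setTitle, &appWindow, std::placeholders::_1);

    // The signal would call the slot with both arguments; std::bind forwards
    // only the first one to setTitle and discards the time_point.
    bound(std::string("open-file"), std::chrono::system_clock::now());
}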

In general, using std::bind would work. However, the previously shown API would be a lot nicer, as it hides the std::bind call and doesn’t require the use of any placeholders.

Let’s use template meta-programming to write a function that can generate the needed call to std::bind for us.

The bind_first Function

Our initial goal is to create a function that, similar to std::bind, can bind arguments to a function. However, for our use case, we only want to bind the first arguments and refrain from explicitly specifying the placeholders for every remaining argument.

For the general idea of how to solve this problem, I found a GitHub Gist that defines a function bind_first that almost does what we need. It can generate the needed placeholders for std::bind as well as forward the arguments to be bound.

The problem with this implementation is that it uses sizeof...(Args) to determine how many placeholders need to be generated. The Args variadic template argument is only available because the implementation requires the function to be a non-const member function. So, this won't work on just any function type; it won't even work on const member functions.

However, if we assume there is a compile-time function get_arity that returns the arity (number of arguments) of a function, we can improve bind_first to accept any function type:

// We create a template struct that can be used instead of std::placeholders::_X. 
// Then we can use template-meta-programming to generate a sequence of placeholders. 
template<int> 
struct placeholder { 
}; 

// To be able to use our custom placeholder type, we can specialize std::is_placeholder. 
namespace std { 
    template<int N> 
    struct is_placeholder<placeholder<N>> 
        : integral_constant<int, N> { 
    }; 
} // namespace std
 
// This helper function can then bind a certain number of arguments "Args",
// and generate placeholders for the rest of the arguments, given an
// index_sequence of placeholder numbers.
//
// Note the +1 here, std::placeholders are 1-indexed and the 1 offset needs
// to be added to the 0-indexed index_sequence. 
template<typename Func, typename... Args, std::size_t... Is> 
auto bind_first_helper(std::index_sequence<Is...>, Func &&fun, Args... args) 
{ 
    return std::bind(
        std::forward<Func>(fun),
        std::forward<Args>(args)...,
        placeholder<Is + 1>{}...
    ); 
}
 
// The bind_first function then simply generates the index_sequence by
// subtracting the number of arguments the function "fun" takes, with
// the number of arguments Args, that are to be bound. 
template< 
    typename Func, 
    typename... Args, 
    /*
      Disallow any placeholder arguments, they would mess with the number
      and ordering of required and bound arguments, and are, for now, unsupported
    */ 
    typename = std::enable_if_t<
        std::conjunction_v<std::negation<std::is_placeholder<Args>>...>
    >
>
auto bind_first(Func &&fun, Args &&...args) 
{ 
    return bind_first_helper(
        std::make_index_sequence<get_arity<Func>() - sizeof...(Args)>{},
        std::forward<Func>(fun),
        std::forward<Args>(args)...
    ); 
}

// An example use:
QWindow window;
std::function<void(QString, std::chrono::time_point<std::chrono::system_clock>)> bound = 
    bind_first(&QWindow::setTitle, &window);

The get_arity Function

As noted earlier, the bind_first implementation assumes the existence of a get_arity function, which can determine the number of arguments of a function at compile time.

This function is, however, not part of standard C++. So, how does one go about implementing it?

To understand the basic idea behind how this can work, take a look at this Stack Overflow post: https://stackoverflow.com/questions/27866909/get-function-arity-from-template-parameter.

In comparison to the Stack Overflow answer, I chose to implement this using a constexpr function, as I find the interface clearer when called.

Without further ado, here’s the code:

// This is just a template struct necessary to overload the get_arity function by type. 
// C++ doesn't allow partial template function specialization, but we can overload it with our tagged type. 
template<typename T> 
struct TypeMarker { 
    constexpr TypeMarker() = default; 
};
 
// Base implementation of get_arity refers to specialized implementations for each 
// type of callable object by using the overload for its specialized TypeMarker. 
template<typename T> 
constexpr size_t get_arity() 
{ 
    return get_arity(TypeMarker<std::decay_t<T>>{}); 
} 

// The arity of a function pointer is simply its number of arguments. 
template<typename Return, typename... Arguments> 
constexpr size_t get_arity(TypeMarker<Return (*)(Arguments...)>) 
{ 
    return sizeof...(Arguments);
} 

template<typename Return, typename... Arguments> 
constexpr size_t get_arity(TypeMarker<Return (*)(Arguments...) noexcept>) 
{ 
    return sizeof...(Arguments); 
} 

// The arity of a generic callable object is the arity of its operator() - 1,
// as the "this" pointer is already known for such an object. 
// As lambdas are also just instances of an anonymous class, they must also implement 
// the operator() member function, so this also works for lambdas. 
template<typename T> 
constexpr size_t get_arity(TypeMarker<T>) 
{ 
    return get_arity(TypeMarker<decltype(&T::operator())>{}) - 1; 
}

// Syntactic sugar version of get_arity, allows passing any callable object
// to get_arity, instead of having to pass its decltype as a template argument.
template<typename T>
constexpr size_t get_arity(const T &)
{
    return get_arity<T>();
}

This code will work for free functions. It also works for arbitrary objects that implement the function call operator (operator()) by delegating to the get_arity function for that operator() member function. This also includes lambdas, as they must implement the function call operator as well.

Unfortunately, implementing get_arity for member functions is a bit more complicated.

An initial implementation might look like this:

template<typename Return, typename Class, typename... Arguments> 
constexpr size_t get_arity(TypeMarker<Return (Class::*)(Arguments...)>) 
{ 
    return sizeof...(Arguments) + 1; 
}

This works well for normal, plain member functions. What about const-correctness though?

We would need another overload for TypeMarker<Return (Class::*)(Arguments...) const>. Taking a look at the cppreference for function declarations reveals that we also need to take care of volatile, noexcept, as well as ref qualified (& and &&) member functions.

Unfortunately, I have not found any way to remove these specifiers from the function pointer types that are already available. But instantiating every combination isn’t too bad when we use a macro to do the heavy lifting for us:

// remember to add +1, the "this" pointer is an implicit argument 
#define KDBINDINGS_DEFINE_MEMBER_GET_ARITY(MODIFIERS) \
template<typename Return, typename Class, typename... Arguments> \
constexpr size_t get_arity(TypeMarker<Return (Class::*)(Arguments...) MODIFIERS>) \
{ \
    return sizeof...(Arguments) + 1; \
}

KDBINDINGS_DEFINE_MEMBER_GET_ARITY() 
KDBINDINGS_DEFINE_MEMBER_GET_ARITY(const) 
KDBINDINGS_DEFINE_MEMBER_GET_ARITY(&) 
KDBINDINGS_DEFINE_MEMBER_GET_ARITY(const &) 
KDBINDINGS_DEFINE_MEMBER_GET_ARITY(&&) 
KDBINDINGS_DEFINE_MEMBER_GET_ARITY(const &&) 

KDBINDINGS_DEFINE_MEMBER_GET_ARITY(volatile) 
KDBINDINGS_DEFINE_MEMBER_GET_ARITY(volatile const) 
KDBINDINGS_DEFINE_MEMBER_GET_ARITY(volatile &) 
KDBINDINGS_DEFINE_MEMBER_GET_ARITY(volatile const &) 
KDBINDINGS_DEFINE_MEMBER_GET_ARITY(volatile &&) 
KDBINDINGS_DEFINE_MEMBER_GET_ARITY(volatile const &&) 

KDBINDINGS_DEFINE_MEMBER_GET_ARITY(noexcept) 
KDBINDINGS_DEFINE_MEMBER_GET_ARITY(const noexcept) 
KDBINDINGS_DEFINE_MEMBER_GET_ARITY(&noexcept) 
KDBINDINGS_DEFINE_MEMBER_GET_ARITY(const &noexcept) 
KDBINDINGS_DEFINE_MEMBER_GET_ARITY(&&noexcept) 
KDBINDINGS_DEFINE_MEMBER_GET_ARITY(const &&noexcept) 

KDBINDINGS_DEFINE_MEMBER_GET_ARITY(volatile noexcept) 
KDBINDINGS_DEFINE_MEMBER_GET_ARITY(volatile const noexcept) 
KDBINDINGS_DEFINE_MEMBER_GET_ARITY(volatile &noexcept) 
KDBINDINGS_DEFINE_MEMBER_GET_ARITY(volatile const &noexcept) 
KDBINDINGS_DEFINE_MEMBER_GET_ARITY(volatile &&noexcept) 
KDBINDINGS_DEFINE_MEMBER_GET_ARITY(volatile const &&noexcept)

So it's a macro to generate template functions, which are pretty much macros themselves — that's a lot of metaprogramming.

But in the end, we are successful. This code will work for (almost) any callable object. The current limits I can think of are overloaded functions as well as template functions. These have to be disambiguated, as it’s not clear which overload get_arity should refer to.
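
To make the behavior concrete, here is a small usage sketch; the logCommand and Window names are made up purely for illustration, and it assumes the get_arity overloads defined above are in scope:

#include <string>

void logCommand(const std::string &command, int level); // a free function

struct Window {
    void setTitle(const std::string &title) const;      // a const member function
};

static_assert(get_arity(&logCommand) == 2, "free functions: plain argument count");
static_assert(get_arity(&Window::setTitle) == 2, "member functions: arguments plus the implicit this");
static_assert(get_arity([](int, float) {}) == 2, "lambdas: arity of their operator()");

// Overloaded or templated functions must be disambiguated first, for example:
// static_assert(get_arity(static_cast<void (*)(int)>(&someOverload)) == 1);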

The definition of get_arity might especially come in handy in the future, if further metaprogramming with arbitrary callable objects is needed.

Putting It All Together

With our completed bind_first implementation, we can now define a new connect function for our signal:

// We use the enable_if_t here to disable this function if the argument is already 
// convertible to the correct std::function type 
template<typename Func, typename... FuncArgs> 
auto connect(Func &&slot, FuncArgs &&...args) const
    -> std::enable_if_t< 
          std::disjunction_v< 
              std::negation<std::is_convertible<Func, std::function<void(Args...)>>>, 
              /* Also enable this function if we want to bind at least one argument*/
              std::integral_constant<bool, sizeof...(FuncArgs)>>, 
          ConnectionHandle
    > 
{ 
    return connect(
        static_cast<std::function<void(Args...)>>(
            bind_first(std::forward<Func>(slot),
            std::forward<FuncArgs>(args)...)
        )
    ); 
}

And, finally, we have our desired API:

Signal<std::string, std::chrono::time_point<std::chrono::system_clock>> commandSignal; 

Window appWindow; 
commandSignal.connect(&Window::setTitle, &appWindow);

As mentioned earlier, this also allows us to bind functions to a signal that need additional information.

Signal<std::string, std::chrono::time_point<std::chrono::system_clock>> commandSignal;

// Imagine we had a logging function that takes a logging level
// and a message to log.
void log(const LogLevel& level, const std::string& message);

// Connecting this function will now log the user command every time
// with the log level "Info".
commandSignal.connect(log, LogLevel::Info);

All of these features come together to make KDBindings signals very flexible in how they can connect to slots. That makes it very easy to couple systems together without having to make the signals and slots line up exactly first.

KDBindings

If you want to check out the complete code, take a look at the KDBindings source on Github.

The library also offers a lot more awesome features, like properties and data binding in pure C++17. It's a header-only library that is easy to integrate and licensed under the very liberal MIT License.

This, of course, also means that you’re free to use the code shown here however you like.

About KDAB

If you like this article and want to read similar material, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

The post Loose Coupling with Signals & Slots appeared first on KDAB.

In my 2022 roadmap, I mentioned something called the “15-Minute Bug Initiative.” Today I’d like to flesh it out and request participation! This blog post is not only informational, but I really hope any developers reading along will get excited and decide to participate. 🙂


KDE software has historically been accused of being resource-intensive, ugly, and buggy. Over the years we’ve largely resolved the first two, but the issue of bugginess persists.

Have you ever had that experience where you’re introducing someone to a KDE Plasma system and to your horror, they run into multiple bugs within moments? These are the issues we need to fix first: those that can be easily encountered within 15 minutes of basic usage. They leave a bad taste in people’s mouths and provide the impression that the system is a house of cards. It’s time to remedy this final strategic weakness of KDE, starting with Plasma itself. So I’d like to present our initial list of bugs:

http://tinyurl.com/kdeplasma-15-minute-bugs

If you have any software development skills, working on these bugs is a super impactful way to make a difference with code!! Every fixed bug is a huge deal, and brings Plasma meaningfully closer to a position of true stability.


Likely-to-be-frequently-asked questions

1. What are the criteria for being a 15-minute bug?

It’s an inherently squishy thing, but I look for the following:

  1. Affects the default setup
  2. 100% reproducible
  3. Something basic doesn’t work (e.g. a button doesn’t do anything when clicked)
  4. Something basic looks visually broken (e.g. “korners” bug)
  5. Causes you to get locked out of your system
  6. Causes a full session crash
  7. Requires a reboot or terminal commands to fix
  8. There’s no workaround
  9. It’s a recent regression
  10. The bug report has more than 5 duplicates

The more of those conditions apply, the more likely it is that any Plasma user will run into the bug quickly during normal usage, and the more I feel like it qualifies.

2. Who determines what gets to be a 15-minute bug?

KDE developers and bug triagers make the call.

3. I’m a developer or bug triager; how do I add a bug to this list?

Change its Priority to VHI. If you don’t have permission to do this, ask sysadmins for “editbugs” permission over here: https://phabricator.kde.org/maniphest/task/edit/form/2/

4. I’m not a developer or a bug triager; how can I help?

You can go through the list and try to reproduce or confirm the bugs, and investigate root causes and triggering factors for the ones where these aren't already known. That is important because a skilled developer can usually quickly fix a bug they can reproduce. But if they can't, then they may never be able to. So if you can help developers reproduce bugs, that's extremely valuable.

5. I’m experiencing this annoying issue that’s not on the list! Can you add it?

Maybe. Mention the 15-minute bug initiative in the bug report for it, and KDE’s bug triagers will see if it makes the cut.

6. Why are you only doing Plasma bugs right now?

Lack of resources. The list currently has over 50 bugs, and I don’t anticipate that we’ll get it down to zero in a year. A lot of the issues there are quite challenging to fix. But if I’m wrong and we blaze through everything, then I’ll absolutely broaden the initiative to include first frameworks, and then apps! Stabilize all the things!


So that’s the 15-Minute Bug Initiative. Let’s get cracking and make Plasma rock solid in 2022!

http://tinyurl.com/kdeplasma-15-minute-bugs

New year, new revision of the digiKam Recipes book. It is a relatively modest update that features two new additions: how to upload photos to a remote machine via SSH directly from digiKam and how to access digiKam remotely via RDP. Oh, and there is a new colorful book cover. As always, all digiKam Recipes readers will receive the updated version of the book automatically and free of charge. The digiKam Recipes book is available from Google Play Store and Gumroad.

Tuesday, 18 January 2022

Krita 3 and later are compatible with G’MIC, an open-source digital image processing framework. This support is provided by G’MIC-Qt, a Qt-based frontend for G’MIC. Since its inception, G’MIC-Qt has been shipped as a standalone, externally built executable that is an optional runtime dependency of Krita.

Krita 5 changes the way G’MIC-Qt is consumed. In order to support CentOS and macOS, G’MIC-Qt has been converted into a dynamically loadable library that depends on Krita.

This post reviews these changes and how to package Krita accordingly.

Rationale

We have chosen to ship G’MIC-Qt as a library because of two longstanding bugs.

The Krita host for G’MIC-Qt relies on QSharedMemory, i.e. a shared memory segment, on which a pipe is instantiated to pass messages to and from the host app. Firstly, this approach made opening two simultaneous G’MIC-Qt instances (each paired to its own Krita instance) impossible 1. Secondly, it also forbade using G’MIC-Qt with Krita on CentOS, as well as macOS, because the former doesn’t support QSharedMemory 2, and the latter has a meager 4 KB as the maximum shared segment size. While there’s no workaround (to our knowledge) in CentOS, the only workaround for macOS is to manipulate the maximum segment size via sysctl 3, which was already difficult pre-Mojave 4 and now, due to the significant security measures of recent macOS versions, is nothing short of a sysadmin task 5.

There were two approaches. One was to move to an mmap-ed file, which is unpredictable to sync due to each canvas’s differing space requirements. The easier one, and the one we chose, was to move to a more tightly coupled memory model: a dynamically loadable plugin, as shown in my proposal PR 6. This was rejected by the G’MIC developers because of the possibility of crashing the host app due to a G’MIC internal bug 7 8. This decision was later enacted as part of the G’MIC contributing policies 9.

How did you fix it?

Due to the above, the only path forward was to fork G’MIC, which we did in Krita MR !581 10.

From a source code point of view, our fork is based on top of the latest version’s tarball. Each tarball’s contents are committed to the main branch of the amyspark/gmic GitHub repository 11. For every covered release, there is a branch that in turn overlays our own plugin implementation, along with additional fixes that ensure that G’MIC-Qt doesn’t attempt to overwrite the internal state of the host application; namely, QCoreApplication settings, widget styles, and the installed translators.

From a technical point of view, this library interfaces with Krita through a new, purpose-specific library, kisqmicinterface. This library contains nothing more than the previous iteration of the communications system, but now exported through namesake APIs 12.

In short, we have reversed the dependency flow; while in Krita v4 and earlier G’MIC-Qt was a runtime dependency, in v5, it’s G’MIC-Qt that depends on Krita as a build and runtime dependency.

Getting the source code

The patched version’s tarballs are GPG signed and available at the Releases section of the GitHub repository 13. Alternatively, the tarballs (though not the signatures) are also mirrored at our dependencies stash at files.kde.org 14. The tarballs are signed with the GPG key which is available at my GitHub profile. Its fingerprint is 4894424D2412FEE5176732A3FC00108CFD9DBF1E.

Building Krita’s G’MIC-Qt library

After building Krita with your standard process, the CMake install process should have put libkritaqmicinterface.so in your lib folder:

[2022-01-09T16:21:32.589Z] -- Installing: /home/appimage/appimage-workspace/krita.appdir/usr/lib/x86_64-linux-gnu/libkritaqmicinterface.so.18.0.0
[2022-01-09T16:21:32.589Z] -- Installing: /home/appimage/appimage-workspace/krita.appdir/usr/lib/x86_64-linux-gnu/libkritaqmicinterface.so.18
[2022-01-09T16:21:32.589Z] -- Set runtime path of "/home/appimage/appimage-workspace/krita.appdir/usr/lib/x86_64-linux-gnu/libkritaqmicinterface.so.18.0.0" to "/home/appimage/appimage-workspace/krita.appdir/usr/lib/x86_64-linux-gnu:/home/appimage/appimage-workspace/deps/usr/lib:/home/appimage/appimage-workspace/deps/usr/lib/x86_64-linux-gnu"
[2022-01-09T16:21:32.589Z] -- Installing: /home/appimage/appimage-workspace/krita.appdir/usr/lib/x86_64-linux-gnu/libkritaqmicinterface.so

It should also install these headers, as illustrated below:

  • kis_qmic_plugin_interface.h exports a G’MIC-alike launch entry point that the plugin will implement
  • kis_qmic_interface.h implements the G’MIC request-response APIs
  • kritaqmicinterface_export.h is the CMake auto-generated export decoration header
[2022-01-09T16:21:32.589Z] -- Installing: /home/appimage/appimage-workspace/krita.appdir/usr/include/kis_qmic_interface.h
[2022-01-09T16:21:32.589Z] -- Installing: /home/appimage/appimage-workspace/krita.appdir/usr/include/kis_qmic_plugin_interface.h
[2022-01-09T16:21:32.589Z] -- Installing: /home/appimage/appimage-workspace/krita.appdir/usr/include/kritaqmicinterface_export.h

The three headers, along with the libkritaqmicinterface.a archive library (if building for Windows under MinGW), comprise a krita-gmic-dev package that’ll be a build dependency of the new G’MIC-Qt plugin. Please note that libkritaqmicinterface.so is consumed by Krita and MUST NOT be placed inside this dev package.

Now, download the G’MIC-Qt tarball from one of the sources listed previously, and unpack it to an isolated directory. Then, you can build it with these lines (adjust them as described):

mkdir build
cmake -S ./gmic-$<the tarball's G'MIC version>-patched/gmic-qt \
      -B ./build \
      -DCMAKE_PREFIX_PATH=$<installation prefix of krita-gmic-dev> \
      -DCMAKE_INSTALL_PREFIX=$<installation prefix of krita itself> \
      -DENABLE_SYSTEM_GMIC=$<false if you don't want to use your system's G'MIC> \
      -DGMIC_QT_HOST=krita-plugin
cmake --build ./build --config $<your desired build type> --target install

The changes from a standard G’MIC build are:

  • the new GMIC_QT_HOST value, krita-plugin
  • the requirement for the krita-gmic-dev package to be available in CMAKE_PREFIX_PATH

This process is illustrated in any of our official build scripts for Windows 15 and for macOS/Linux 16. You can also check the 3rdparty_plugins section of our source tree 17 to see what other hardening we apply to the build.