
Welcome to Planet KDE

This is a feed aggregator that collects what the contributors to the KDE community are writing on their respective blogs.
To have your blog added to this aggregator, please read the instructions.

In the last blog post we saw an essential, C++ oriented, Visual Studio Code setup. That was enough to get going right away, but we can still definitely do more and better. Here I’ll show you how to get a complete setup for your qmake and CMake projects, all this while also wearing a Qt hat (on top of my C++ hat) and having a deeper look at the Qt side.

Build qmake Qt projects

Qmake is not integrated with Visual Studio Code the way CMake is, so setting up a qmake project for build is slightly more convoluted than doing the same with CMake. This means we’ll have to define our own build tasks. We’re going to do this in two stages: build steps definition and build steps combination, leveraging the fact that Visual Studio Code implements task dependencies and ordered sequential execution of dependencies.
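A note before we start: the task snippets below go into the tasks file (typically .vscode/tasks.json, or the workspace file if you use one), inside its "tasks" array, which has this rough skeleton:

{
  "version": "2.0.0",
  "tasks": [
    // the task objects shown in the rest of this post go here, separated by commas
  ]
}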

Create build steps

As far as build steps are concerned, the following are, in a nutshell, the ones that will cover most cases:

  • Create the build directory (in a way that doesn’t fail if the directory already exists)
    {
      "label": "Create build dir",
      "type": "shell",
      "command": "mkdir -Force path/to/build/dir"
    }
    

    Here, -Force is a PowerShell parameter that prevents the command from failing if the directory already exists. On Unix-based systems, you can use mkdir -p instead.

  • Run qmake
    {
      "label": "Run qmake",
      "type": "shell",
      "command": "qmake",
      "arg": [ ... add your qmake arguments here ... ],
      "options": {
        "cwd": "path/to/build/dir"
      }
    }
    
  • Run make/nmake/jom, depending on the platform
    {
      "label": "Run make",
      "type": "shell",
      "command": "jom",
      "options": {
        "cwd": "path/to/build/dir"
      }
    }
    
  • Clear build folder. This can mean different things depending on how the build is configured: it could be a simple make clean, or a more thorough removal of the whole content of the build folder (a sketch of that variant follows right after this snippet).
    {
      "label": "Clear build folder",
      "type": "shell",
      "command": "jom clean",
      "options": {
        "cwd": "path/to/build/dir"
      }
    }
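    If you prefer the more thorough variant that wipes the build directory entirely, a sketch using Visual Studio Code's per-OS task properties could replace the task above (the path is a placeholder):

    {
      "label": "Clear build folder",
      "type": "shell",
      "windows": {
        "command": "Remove-Item -Recurse -Force path/to/build/dir"
      },
      "linux": {
        "command": "rm -rf path/to/build/dir"
      },
      "osx": {
        "command": "rm -rf path/to/build/dir"
      }
    }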
    
Combine build steps

Now that our steps are defined, we can go on and define the actual build tasks. We’ll prepare two for this example, one for running a build, and one for running a clean build. Let’s see the task for a regular build:

{
  "label": "Build",
  "dependsOn": [
    "Create build dir",
    "Run qmake",
    "Run make"
  ],
  "dependsOrder": "sequence"
}

There are two new properties here: "dependsOn" is a list of task labels, and it means that those tasks need to be executed before the current task is run, while "dependsOrder", when set to "sequence", will tell Visual Studio Code to run all dependent tasks sequentially and in the given order.

The task for a clean build is very similar and will only have an extra step where the project is cleaned before being built again:

{
  "label": "Clean build",
  "dependsOn": [
    "Clear build folder",
    "Create build dir",
    "Run qmake",
    "Run make"
  ],
  "dependsOrder": "sequence"
}

And that’s it: now it’s just a matter of opening the command palette (Ctrl+Shift+P), selecting “Tasks: Run Task” and then “Build”.

Use a default task

As an alternative (or better, an addition) to manually selecting the build task from a list every time, Visual Studio Code also allows you to run a default task with a key combination (Ctrl+Shift+B). To mark a task as default, you need to add a few extra lines to its configuration:

{
  // ... task configuration
  "group": {
    "kind": "build",
    "isDefault": true
  }
}
Use your own Qt

If Qt is not configured at a system level, or you want to use a Qt version other than the default one installed and configured on your system, you need to explicitly configure the environment so that every task runs with the right Qt version in the path. Visual Studio Code lets you customize the environment of every terminal it launches for running a task, so the environment customizations are in place before the task command runs.

This is done in the settings file (or in the workspace settings if you’re working with a workspace), and the property name for this setting is system dependent: either "terminal.integrated.env.windows", "terminal.integrated.env.linux", or "terminal.integrated.env.osx". The property expects an object, where each key is the name of an environment variable and the associated value is the value for that variable. Below is an example configuration for Windows:

{
  // All other settings...
  "terminal.integrated.env.windows": {
    "PATH": "C:/Qt/5.12.4/msvc2017_64/bin;${env:PATH}"
  }
}

Build CMake Qt projects

Setting up a CMake based project using the CMake extension doesn’t require any settings manipulation if Qt is already configured on your system. What you will need is to select a CMake kit (the CMake extension finds them automatically), a build variant, and launch the build with F7.

Short video showing how to launch a CMake build with Visual Studio Code

However, you may want to use extra arguments in the configuration step or specify your build directory so that, for instance, it doesn’t end up inside the source directory. You can customize CMake configuration arguments by setting the property "cmake.configureSettings" in your settings file. This property expects an object whose entries are passed to CMake as variable definitions during the configuration step:

"cmake.configureSettings": {
  "CMAKE_PREFIX_PATH": "my/prefix/path",
  "ENABLE_FEATURE": "1",
  "ENABLE_OTHER_FEATURE": "0"
}

To customize the build directory, just set "cmake.buildDirectory" to the desired path. This value may contain variables, so it can be configured, for instance, to point to a path relative to the project folder:

"cmake.buildDirectory": "${workspaceFolder}/../build-cmake-project"

If you want to use a custom Qt version, or Qt is not configured system-wide (as is the case on Windows), it’s enough to set CMAKE_PREFIX_PATH properly in the "cmake.configureSettings" property in the settings file. For example:

"cmake.configureSettings": {
  "CMAKE_PREFIX_PATH": "otherprefixpath;C:/Qt/5.12.4/msvc2017_64"
  // ... other args
]

You can find the complete documentation for the CMake Tools extension here, featuring a guide on how to use CMake Tools from the UI and documentation for all available settings.

Running and debugging our Qt application

Now that your application has been built, let’s see how we can launch it and, most importantly, debug it.

Running qmake projects

For projects built with qmake, we don’t have any help from extensions and the only option we have is to bake our own launch configurations in the way we’ve seen in the last blog post. This is done in the launch configurations file (launch.json) or in the workspace file, and this is how a launch configuration looks:

{
  "name": "My application",
  "type": "cppvsdbg",
  "request": "launch",
  "program": "path/to/application",
  "stopAtEntry": false,
  "cwd": "${workspaceFolder}",
  "environment": [],
  "externalConsole": false
}

You can run launch configurations either with or without the debugger, using “Debug: Start Debugging” (F5) or “Run: Start Without Debugging” (Ctrl+F5) respectively. If Qt is not configured at a system level, or you want to use a custom Qt version, the corresponding launch configuration will need to be explicitly configured to include Qt in its path.

You can do this by updating the "environment" property in the launch configuration. Below is an example for Windows, setting only the "PATH" environment variable. Configurations for other systems need to be adjusted but are essentially similar.

"environment": [
  {
    "name": "PATH",
    "value": "C:/Qt/5.12.4/msvc2017_64/bin;${env:PATH}"
  }
]

Side note: here ${env:PATH} means whatever value the environment variable PATH has before the launch configuration is run. In general, the syntax ${env:VARNAME} can be used to get an environment variable in a task or a launch configuration.

Running CMake projects

Working with CMake is easier in principle, as the CMake extension provides the commands “CMake: Run Without Debugging” and “CMake: Debug”, allowing you to respectively launch and debug CMake targets.

However, this approach has a number of shortcomings:

  • It’s not possible to specify per-target run arguments for debug runs.
  • It’s not possible at all to specify run arguments for non-debug runs.
  • Some debugging options such as source mapping or custom views with natvis are not configurable through the CMake extension’s settings.

So, in conclusion, using the CMake extension for running targets is not really convenient if you want a comprehensive debugging experience, and the best way to go is still to create your own launch configurations.

The CMake extension provides a few convenient variables for launch configurations:

  • ${command:cmake.launchTargetPath}: resolves to the full path of the executable for the target selected from the launch target menu.
  • ${command:cmake.launchTargetDirectory}: resolves to the directory containing the executable for the target selected from the launch target menu.
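For instance, a launch configuration that reuses those variables (a sketch along the lines of the qmake example above, not an official template) could look like this:

{
  "name": "Debug CMake target",
  "type": "cppvsdbg",
  "request": "launch",
  "program": "${command:cmake.launchTargetPath}",
  "args": [],
  "stopAtEntry": false,
  "cwd": "${command:cmake.launchTargetDirectory}",
  "environment": [],
  "externalConsole": false
}

Whichever target you pick from the CMake launch target menu is then the one that gets started or debugged.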

Qt aware debugging

What we’ve seen until now will let you build and run your Qt applications, using either your system provided Qt or your own. Debugging will work out of the box already, as long as the application code is involved. But wouldn’t it be great to also be able to peek inside Qt’s source code while debugging? Or if we had a better visualization for Qt specific types?

It turns out we can do both with a little manipulation of the launch configurations. Let’s see how.

Configure debug symbols

Usually Qt debug symbols are distributed alongside the libraries, so there’s no real need to explicitly configure debug symbol paths. If that’s not the case, you can configure the debug symbols path by setting the "symbolSearchPath" property on a launch configuration. This property is a string containing a list of paths separated by semicolons.

"symbolSearchPath": "otherSearchPath;C:/Qt/5.12.4/msvc2017_64/bin"

This of course can be useful for adding debug symbols for other libraries too.

Source mapping

If the source directory for your Qt differs from the actual source directory (or directories) used while building it, you can configure the debugger to resolve those paths correctly. This happens for instance with binary Qt releases on Windows. You can enable source mapping in launch configurations by adding the "sourceFileMap" property. This property requires an object where each key is the source folder as it’s provided by the debug symbols, and the corresponding value is the path where the source code is in your system. This is how it can be configured for a binary Qt release on Windows:

"sourceFileMap": {
    "C:/work/build/qt5_workdir/w/s": "C:/Qt/5.12.4/Src",
    "Q:/qt5_workdir/w/s": "C:/Qt/5.12.4/Src",
    "C:/Users/qt/work/install": "C:/Qt/5.12.4/Src",
    "C:/Users/qt/work/qt": "C:/Qt/5.12.4/Src"
}
Using Natvis for Qt aware objects visualization

Natvis is a Visual Studio framework that allows you to customize how native C++ objects are visualized in the debugger. Natvis visualization rules are specified through XML files with a specific schema. A natvis file lists visualization rules for each C++ type, and every visualization rule consists of a series of properties. Such properties are meant to be user friendly and will be displayed in the debug window when visualizing objects of the corresponding type.

To name a few examples, a QString is visualized as the string it contains and has a size property and a number of items corresponding to its characters, and QRect will have a width and a height property instead of just the bare (and less intuitive) internal representation of the top left and bottom right points (x1, y1, x2, y2).

If you want to enable natvis in a debug run, just set the "visualizerFile" property in your launch configuration so that it points to the natvis file.

"visualizerFile": "path/to/qt5.natvis"

Debug pane before and after configuring natvis

You can find a ready to use natvis file for Qt 5 at this link.

Updating the code model

In order to be able to navigate Qt headers and enable IntelliSense for the Qt API, it’s enough to adjust the C++ settings for our project (c_cpp_properties.json) by adding the Qt include folder (and all its subfolders):

{
  // ...
  "includePath": [
    // ...
    "C:/Qt/5.12.4/msvc2017_64/include/**"
  ]
}

If you’re working on a CMake project, it’s also possible to use the CMake plugin as a configuration provider. That way, include paths and defines are bound to the currently configured CMake build and don’t need to be specified manually. This simplifies the C++ properties file considerably, as shown in the example below:

{
  "configurations": [
    {
      "name": "Win32",
      "intelliSenseMode": "msvc-x64",
      "configurationProvider": "vector-of-bool.cmake-tools"
    }
  ],
  "version": 4
}

A note about using Visual Studio compilers on Windows

Visual Studio provides batch files that automate the environment setup necessary to use their C++ compiler and linker. In the last post we saw how it’s possible to configure a task so that it sets up the environment through the vcvars.bat script before running a command.

However, if you need to configure the environment with vcvars.bat for most of your build steps, it is also possible to configure Visual Studio Code so that it runs the batch file for every task. To do so, you need to tweak the configured shell (which is PowerShell by default on Windows) and pass a few arguments. The setting name for doing this is “terminal.integrated.shellArgs.windows” and it’s set as follows:

"terminal.integrated.shellArgs.windows": [
  "Invoke-BatchFile 'C:/Program Files (x86)/Microsoft Visual Studio/2017/Professional/VC/Auxiliary/Build/vcvars64.bat' amd64",
  "; powershell"
]

What’s going on here is this: by default, Visual Studio Code launches every shell task by calling this command:

powershell <terminal.integrated.shellargs.windows> -Command <task command> <task argument list>

So, if you set “terminal.integrated.shellArgs.windows” this way, the final command will look like this:

powershell Invoke-BatchFile 'path/to/vcvars' ; powershell -Command <task command> <task argument list>

As a result, task commands will effectively run in a PowerShell with the right environment set.

And that’s it for now. Many new things on the table, and some advanced features too. Hopefully this will help you with your workflow.

But there is still more to say, so make sure you don’t miss the next post!

About KDAB

If you like this blog and want to read similar articles, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

The post Using Visual Studio Code for Qt Applications – Part Two appeared first on KDAB.

Laptop update

Nate Graham (ngraham) posted at 03:05

Thanks to the KDE community, I’ve finally chosen and ordered a new laptop: a Lenovo ThinkPad X1 Yoga. People heavily recommended the X1 Carbon, which is essentially the same computer except less touch-focused. That led me to the Yoga which seems to fit the bill perfectly: in addition to the necessary touchscreen, according to reviews it has otherwise excellent screen characteristics, a perfect keyboard, great speakers, and a great trackpad. I also like the look and probable durability of the aluminum case. Though it’s not a Ryzen 4000-series laptop, CPU performance is still three times better than my current laptop, so I’m not complaining. Mine arrives in three weeks. Thanks again everyone!

Sometimes you just want to get something done. Something for yourself.

You do not intend it to be reused, or even pretty.

You build a tool.

My tool was a photoframe with some basic overlays. I wanted the family calendar, some weather information (current temperature + forecast), time, and the next bus heading for the train station.

To make this acceptable in a home environment, I built it as a photoframe. You can find the sources in the hassframe-ui repository on my github.

A hidden feature is that if you tap the screen, a home automation control panel slides up. That way you can control all the lights, as well as heat in the garage and an AC in the bedroom. Very convenient.

All this is built using QML. Three somewhat useful models are available:

  • IcalModel, taking a URL and parsing whatever it gets back as ICAL data. It is a very naive parser and does not care about things such as time zones and other details.
  • YrWeatherModel, uses yr.no’s public APIs to pull out a weather forecast for a given location.
  • ButStopModel, uses the APIs from resrobot to look for departures to the train station from two bus stops close to my home and then merge the results into a model.

I also have a bunch of REST calls to my local Home Assistant server. Most of these reside in the HassButton class, but I also get the current temperature from there. These are hardcoded for my local network, so they need refactoring to be used outside of my LAN.

All of these interfaces require API keys of one kind or another – be it a proper key, or a secret URL. These are pulled from environment variables in main.cpp and then exposed to QML. That way, you can reuse the components without having to share your secrets.
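A rough sketch of that pattern (the environment variable names and QML context property names below are made up for illustration; the real ones live in the repository’s main.cpp):

#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QQmlContext>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    QQmlApplicationEngine engine;

    // Read secrets from the environment so they never end up in the sources.
    // Variable and property names here are illustrative only.
    engine.rootContext()->setContextProperty(
        QStringLiteral("weatherApiKey"), qEnvironmentVariable("WEATHER_API_KEY"));
    engine.rootContext()->setContextProperty(
        QStringLiteral("calendarUrl"), qEnvironmentVariable("CALENDAR_URL"));

    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
    return app.exec();
}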

All in all the code is quite hacky, especially main.qml. I refactor out parts from there now and then, but the photoframe works, so it’s not anything that I prioritize.

Currently it runs on a Raspberry Pi on top of Raspbian. I want to build an optimized Yocto image making it less hacky and more pre-packaged. Perhaps there will be a rainy day this summer and I’ll get around to it. Burkhard has prepared the instructions needed over at embedded use.

Tuesday

26 May, 2020

Sup y’all.

This is a series of blog posts explaining different ways to contribute to KDE in an easy-to-digest manner. The purpose of this series originated from how I feel about asking users to contribute back to KDE. I firmly believe that showing users how contributing is easier than they think is more effective than simply calling them out and directing them to the correct resources; especially if, like me, said user suffers from anxiety or does not believe they are up to the task, in spite of their desire to help back.

Last time I talked about websites, I showed how to port existing KDE websites to Markdown, and this led to a considerable influx of contributors, since it required very little technical knowledge. This blog post, however, is directed at people who are minimally acquainted with git, HTML/CSS, and Markdown. We will be learning a bit of how Jekyll and SCSS work too.

This post has now been copied over to https://community.kde.org/KDE.org/Jekyll! Which means the content available on the wiki can serve as a full tutorial explaining how to prepare KDE application websites from scratch using Jekyll. This should in theory enable you to at the very least set up a Jekyll environment, apply content to it, and insert some css in it for a bit of customization!

Please be aware that not every project necessarily wants a website of its own, so you need to contact the website maintainers either directly or by contacting the KDE Web team over Telegram, Matrix or IRC!

As usual, if you want to skip the explanations and start immediately, you can click here.

Preparations

The first thing we’ll need to do is install the programs required to create the website. Just install ruby-dev, bundler and git using your distribution’s package manager, and we’re set to go!

Let’s download the KDE repository containing the original KDE Jekyll theme. This way, we can both explore the repo and obtain the examples folder. Run the following command:

git clone https://invent.kde.org/websites/jekyll-kde-theme.git

Inside, there are mainly three folders we should take a look at: /css, /_sass, and /examples.

Inside /_sass, there are several .scss files which form the foundation of the KDE theme. If you already have some experience with these files, you’ll know that they can be imported into any website that uses the theme by adding an @import inside main.scss.

Inside /css is the file main.scss. Here you can see that the various .scss files you previously saw in the /_sass folder are being imported into the project. Any extra CSS should be contained either within main.scss or any .scss file you create within this folder.
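For reference, a minimal main.scss might look roughly like this; the empty front matter block at the top is what tells Jekyll to process the file, and the import name is illustrative:

---
---
@import "home";   // pulls in rules from the theme's _sass folder

// Any extra rules of your own can go below, or in separate .scss files in /css
.my-highlight {
  color: #1d99f3;
}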

If you check the /examples folder, you can see two folders: /application and /application-components. The first is a general example for building any application website; the latter is useful for bundled applications, as is the case of kontact.kde.org. For the purpose of this post, your main concern should be copying the /application folder to another place, more fitting for work.

In my case, for instance, I was planning to create the website for Subtitle Composer, so I put the folder inside my home folder. I then renamed it to /subtitlecomposer-kde-org, so as to follow the pattern found over KDE’s Gitlab instance, Invent.

How stuff works

With this, we now have an example website to improve. Let’s take a look at what you will change to make it a proper website for a KDE application:

First of all, there are the folders /assets/img and /css. /assets/img is pretty straightforward: it’s where you’ll put any media you want to display on the website that’s not displayed by the theme. The already present app_icon.png is what will be displayed on the upper left of the website, while screenshot.png will be used to display the application screenshot inside the main example carousel.

As already mentioned, the /css folder is where you should store any .scss files you’d like to create, in addition to main.scss. Inside it, you should already see home.scss and download.scss. The absolute minimum required should be home.scss, though.

Your example website should also have a .gitignore. This is important since Jekyll generates extra folders and files when compiled, and we don’t want them to be unnecessarily uploaded to our future repository when we push it to Invent.

The two main configuration files you should pay attention to are Gemfile and _config.yml. Gemfile is what determines the use of Jekyll and the KDE Jekyll theme; it should contain something like:

source "https://rubygems.org"
ruby RUBY_VERSION
gem "jekyll", "3.8"
gem "jekyll-kde-theme", path: '../../'

You should always change the path of the jekyll-kde-theme to git; it’s the main way the theme is distributed so that all KDE application websites receive fixes as soon as possible. You also need it if you want to use a dark theme on the website. Switch that line to the following:

gem "jekyll-kde-theme", :git => 'https://invent.kde.org/websites/jekyll-kde-theme.git'

As for the _config.yaml, it’s where most metadata for the website will be stored, and it’s also what determines the website’s structure:

Figure 1 – Default _config.yaml example.

You should change the appropriate information there by checking the application’s previous website, or by contacting the devs directly if they don’t already have one.

I’ll elaborate a bit more on the role of this file on the structure of the website later.

After reading the config files, the first file we should take a look at is index.html.

Figure 2 – Default index.html example.

The section within --- is a bit of metadata determining what should be included on the website; it is called Front Matter. It should be present regardless of whether you’ll be working on an HTML or Markdown file.
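A typical front matter block, as a rough illustration (the exact keys and layout names come from the theme and the example files), looks like this:

---
layout: home      # which theme layout renders this page (layout name is illustrative)
title: Konsole    # the page title
---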

Then, you can see a section containing a carousel with only one image. You may have noticed that kHeader was also present within the KDE Jekyll /_sass/home.scss file.

That’s what’s convenient in this kind of setup: you can use resources already available by the theme by just calling them. If you’re already acquainted with those resources, it becomes quite easy to create multiple reproducible builds with them. We will be using a few of those later on.

This is how index.html looks by default; Konsole is used as the template website in this case.

Figure 3 – Default appearance of example website.

Carousels are generally a bad idea unless they are properly implemented, thus the carousel doesn’t come with more than one image by default. However, it is the website maintainer’s choice as to whether it should be used or not.

The line we see there containing {% include blog.html %} serves to insert blog.html in the contents of index.html. blog.html, in turn, pulls the Markdown files inside your /_posts folder. Thus, the result:

Figure 4 – Default blog element. Shown using the dark theme so as to not f***ing blind you in my dark theme blog. The following images should also follow a dark theme for readability.

Now, let’s take a look at the get-involved.md file, which should be easier to understand.

Figure 5 – Default get-involved.md example.

Writing in Markdown is particularly easy, and Jekyll translates each piece of Markdown into HTML, so if you want to add a specific style to something written in Markdown, you’d target its HTML equivalent: in ## Text for a header, the ## would be rendered as an <h2> tag.

Figure 6 – How the default get-involved.md looks.

The konqi: /assets/img/konqi-dev.png line does not pull from the /assets/img folder of your local project, but rather from the Jekyll KDE theme itself. The theme also provides for its layout on the page, so adding this simple line to your Front Matter will show this adorable Konqi icon properly. If any extra Konqi icons are required, they can be added manually to your local installation and called through a konqi: line.

One particular thing that is shown twice in this image is the use of variables. With {{ site.somedataIwanttoshow }} you can call any information present in your _config.yaml. Figure 5 shows two examples, {{ site.title }} and {{ site.email }}, which display in Figure 6 as Konsole and konsole-devel@kde.org, respectively.

Get on with it!

Alright, alright. Let’s build the website already.

So what you previously did if you read the previous section was installing ruby-dev, bundler and git, right? It should be noted that you need Ruby 2.6 as of now. If you only have Ruby 2.7, you’ll face an error in which a gem named eventmachine won’t install. I’ll provide you with the respective commands for Ubuntu/Debian, openSUSE and Arch/Manjaro:

Ubuntu: sudo apt install ruby-dev bundler git
openSUSE: sudo zypper install ruby-devel ruby2.6-devel bundler git
Arch: sudo pacman -S ruby ruby2.6 ruby-bundler git

After that, you likely already cloned the KDE Jekyll repository by using the command git clone https://invent.kde.org/websites/jekyll-kde-theme.git and copied the /examples/application folder elsewhere, renaming the folder to something like /myproject-kde-org, right?

Next you’ll want to change your directory (cd) to the project’s top directory. If it is located in your home folder, you should cd myproject-kde-org.

Once inside, run gem install jekyll --user-install. This will run Ruby’s default dependency manager, gem, which will install the jekyll gem with user privileges, that is, inside your home directory. This is convenient because you don’t fill your system with gems, and you can install different gems according to the project (not that you’ll need to if you’re just planning on creating one website).

After the command finishes downloading the gems, run bundle config set path 'vendor/bundle' and afterwards bundle install. The first command determines where to store the gems defined in your Gemfile, namely jekyll and jekyll-kde-theme. The command bundle install subsequently installs them. The latter takes a bit to finish.

That’s basically it. With that set up, we can generate our website with bundle exec jekyll serve. You’ll see output like this:

Figure 7 – Serving Jekyll website on localhost at default port 4000.

As you can see, the website should be available by opening a browser and typing 127.0.0.1:4000. If you’re working on a server without a GUI, get your IP address with ip a and use it with the command:

bundle exec jekyll serve --host my.ip.shouldbe.here --port anyportIwant

Then access it through another machine on your LAN—this includes your phone, which is convenient when you’re testing your website on mobile.

One nicety about Jekyll is its auto-regeneration functionality. With this properly running, every change you make to a file in your website will show up automatically without requiring a “server restart”, so to speak. With this, you can keep the website running on your side monitor (if you have any) while you change files on your main monitor.

If you get an error about Duplicate Directories, just ignore it. This occurs because of a symlink in /examples/application-components which is quite convenient, and this error does not hinder you from generating your website. If you reeeeeeally don’t want your beautiful terminal output to be plagued by an error message, though, you can just remove the /vendor/bundle/ruby2.6/blablabla/_changelogs and /vendor/bundle/ruby2.6/blablabla/_posts folders which are shown in the error output.

A practical example: Subtitle Composer

Now I’ll show you some ways I customized and worked on the Subtitle Composer website, the website that allowed me to learn how to handle Jekyll.

The first thing I noticed was the following line used to add a screenshot to the carousel:

<div class="slide-background" style="background-image:url(/assets/img/screenshot.png)"></div>

This means I need to add background-image:url(/path/to/image/either/local/or/from/theme.png) whenever I want to have an element link to an image. I prefer to do so in SCSS, since pure Markdown doesn’t allow applying styling directly, unlike HTML, so everything stays in the same place.

I could naturally switch the screenshot.png file in my /assets/img folder in order to switch the screenshot, which is what I did; but what about the wallpaper? I have nothing against Kokkini; however, I think keeping a more up-to-date wallpaper makes more sense for a new website. Well, in all honesty, I also wanted to do this because I particularly like the newest wallpapers. 😀

Apparently the wallpaper was showing up despite not existing in my local /assets/img folder, so it was being provided by the theme by default.

Right clicking on the wallpaper and selecting Inspect element immediately shows the CSS element div.carousel-item-content. However, after some testing, the actual element that needs to be changed is #kHeader. Therefore, in my main.scss file, I added #kHeader {background-image:url(/assets/img/wallpaper.png)}. This managed to override the wallpaper provided by the KDE Jekyll theme, switching Kokkini for Next.

The next thing I wanted to do was to add a subtitle effect to the name of Subtitle Composer; it seemed like an awesome thing to do, but I had no idea how to do that (I’m not very experienced with web design at all! I actually took way too much time to learn all of this), and so I searched for it on the internet.

The guess that got me in the right direction was thinking that I’d need a thin border around text to convey an effect of subtitle, thus I searched for “css text 1px border”, which led me to this StackOverflow question: CSS Font Border?, with the following code: h1 {color:yellow; text-shadow: -1px 0 black, 0 1px black, 1px 0 black, 0 -1px black;}.

So I included a class named subtitleheader to my original <h1>Subtitle Composer</h1>, allowing me to add .subtitleheader {font-size:30px;color:yellow; text-shadow:-3px 0 #4d4d4d, 0 3px #4d4d4d, 3px 0 #4d4d4d, 0 -3px #4d4d4d;font-weight:bold;min-width:130%} to main.scss after a ton of experimenting until I achieved the desired result. I added similar code to the paragraph describing Subtitle Composer, namely .subtitleparagraph {font-size:20px;color:#eff0f1; text-shadow:-2px 0 #4d4d4d, 0 2px #4d4d4d, 2px 0 #4d4d4d, 0 -2px #4d4d4d;font-weight:bold;min-width:130%}, and so I arrived at this:

Figure 8 – Subtitle effect.

Next, I needed to change the top navigation bar to fit my needs.

Figure 9 – Top navigation bar on browser.

For that, I needed to learn how to change some stuff in _config.yaml. Let’s take a look at the code:

Figure 10 – Top navigation bar on _config.yaml.

This is quite straightforward. A navigation bar can be in the top or bottom section, each - title: represents an element in the navigation bar, and each element can work as a list if you add a subnav. Despite the url: pointing to an HTML file, Markdown files can be used too, without specifying the .md file format.
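Based on that description, the navigation section of _config.yaml looks roughly like this (a sketch only; check the example’s actual file for the exact key names):

navigation:
  top:
    - title: Features
      url: features
    - title: Get Involved
      url: get-involved
    - title: More
      subnav:
        - title: Changelog
          url: changelog
  bottom:
    - title: KDE Community
      url: https://kde.org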

I changed it to look like so:

Figure 11 – Changed top navigation bar.

This is way simpler than the default and fits the content I could find about Subtitle Composer, which was available both on the Invent project and its old GitHub.

I also copy-pasted the bottom navigation bar from the KDE Connect website, so the bottom of the website is quite complete too.

Figure 12 – Complete bottom navigation bar.

After that I could work on inserting the content inside the respective .md files.

Let’s take a look at features.md and get-involved.md now.

Figure 13 – Subtitle Composer features.md.
Figure 14 – Subtitle Composer get-involved.md.

Anyone acquainted with HTML and Markdown is likely to know that --- is the equivalent of <hr>: a horizontal rule, that is, it creates a straight horizontal line, which in this case acts sort of like a separator. I used this together with an HTML line break <br> so the header gets more noticeable.

Also note that using --- after the Front Matter will be interpreted by Jekyll as normal Markdown.

The reason I wanted to show this was to exemplify my previous comment about Markdown being interpreted as HTML: if I add a style to the element hr in main.scss, it will work for both the HTML and Markdown notations. So, by adding hr {border-color:#7f8c8d} when using a dark theme, the result should be something like this:

Figure 15 – Horizontal ruler with light border.

Well, talking about dark themes, what the heck do I need to do to add a dark theme? I already knew prefers-color-scheme was used for that, but I had no idea how it worked. The Mozilla page explains it very well, though: you just need to add any style to:

@media (prefers-color-scheme: dark) { any.style {should-be:here;} }

Since the hr element needed to be light only when using a dark theme, and should be standard black everywhere else, I used it inside prefers-color-scheme: dark. So the result was this:

@media (prefers-color-scheme: dark) { hr {border-color:#7f8c8d} }

If you want to test prefers-color-scheme on your browser without changing your local theme, on Firefox you can go to about:config and create a config named ui.systemUsesDarkTheme with a number value of 0 or 1, and on Chrome you can open the Developer console, click on the top right hamburger menu, go to More tools, Rendering, and change Emulate CSS media feature prefers-color-scheme.

I made a few more customizations to the website, as you can check out here. However, the examples I showed are not only enough for anyone to start hacking, they are also proof that I managed to learn this stuff on my own, searching for what I wanted to change on the website, and that I learned all of it in a few months despite working full-time and not having that much free time in general. You can be sure you’re able to do that too!

Additionally, Jekyll seems to be quite performant. It doesn’t use JavaScript, and most KDE websites built with it score over 90% for performance on Google’s PageSpeed Insights, y’know!

If you’re interested in building a website from scratch using Jekyll for a KDE application, please contact the KDE Web team over Telegram, Matrix or IRC first, yeah? This post was also copied over to https://community.kde.org/KDE.org/Jekyll if you’d rather check the wiki for these instructions.

It’s a great pleasure to announce that KIO FUSE has a second Beta release available for testing! We encourage all who are interested to test and report their findings (good or bad) here. Note that the more people who test (and let us know that they’ve tested), the quicker we’ll be confident enough to have a 5.0.0 release. You can find the repository here.

To compile KIO FUSE, simply run kdesrc-build kio-fuse or follow the README. If your distributor is really nice they may already have KIO FUSE packaged but if they don’t, encourage them to do so!

In this beta, the hallmark features implemented are the ability to read and write without downloading the whole file (smb/sftp/file protocols only) and the expiring of local nodes so that changes on the remote side become visible. In addition, a new D-Bus API was added to map a FUSE path back to a remote URL, used for syncing the terminal panel in Dolphin with the main view (needs Dolphin >= 20.07.80).

Thanks,

feverfew

It’s been a while since the last status update on Plasma Mobile, so let’s take a look at what happened since then.

To assist new people in contributing, we organized a virtual mini Plasma Mobile sprint in April. During the three days, we discussed many things, including our current tasks, the websites and documentation, our apps and many other topics. Most of our important tasks have been assigned to people, and many of them have been implemented already.

On Saturday, there was a training day, with four training sessions on the technology behind Plasma Mobile.

Outside of the sprint, we were busy working on Plasma Mobile itself.

Nico mostly worked on fixing bugs. First, files now open correctly in Okular. Also, the busy indicator is now made invisible when it’s not running. In Ruqola, he fixed a few bugs regarding the layout on small screens.

KTrip now has all providers disabled by default, to make it easier to have a good selection of enabled providers. The list of available connections and the connection details have been redesigned.

Redesigned connections view
Redesigned connection details

It now also allows opening a station’s position on a map.

Tobi has been busy working on the wireless connectivity. Thanks to some refactoring and bugfixes, Wi-Fi connections are now editable and custom connections can be created, enabling users to connect to hidden Wi-Fi networks. Also, there is now a settings module for creating Wi-Fi hotspots.

Apart from the work on the wireless settings, he put a lot of work into Alligator, the RSS feed reader for Plasma Mobile. Even though it’s been in development since the previous Plasma Mobile sprint in February, it’s never really been announced here. By now, most basic functionality is working, and it’s not even slow anymore!

In the last few days especially, a lot of work was put into things like porting it to Android, getting an (amazing!) icon and getting a proper KDE repository for it. With some more polish, it will be ready for a first release soon.

Dimitris has put a lot of work into Calindori. A day view has been created, offering users the opportunity to review the incidences (events or tasks) of each day. Specifically, the incidences are presented in an hourly list, according to their start time and, in the case of events, their end time. Users may navigate between days as well as return to the current day using the action toolbar buttons.
Day View

A week view is now available; users can manage tasks and events via a per-week dashboard. On that dashboard, a list view displays the days of each week, and the tasks and events of each day can be seen. Upon clicking a task/event, a new page is opened, where the task/event can be edited or deleted. Users can navigate between weeks using the action toolbar icons.
Week View

Bart has made the Wi-Fi and WWAN buttons in the top drawer toggles instead of having them open the settings pages.

Marco implemented a webapp-container in Angelfish, which can be used to ship websites as webapps in Plasma Mobile images.

Jonah later added functionality to Angelfish to allow adding webapps to the homescreen. As webapps don’t have any browser user interface apart from the html content, he added a retry button to the error handler, to allow reloading in case of errors. Jonah also adapted Angelfish’s use of overlay sheets to the new design from Kirigami. The same was done with the time settings module of the settings app.

Recently he managed to work around a bug that caused scrolling in all apps using QtWebEngine, notably Angelfish, to jump around. This was caused by frames being shown in the wrong order. Fixing this was only possible thanks to the debugging of this issue that Rinigus Saar did for SailfishOS. Over the last month, he started to rewrite the SMS app SpaceBar, because it became very hard to debug, and the KDE Telepathy API it was using wasn’t well suited for SMS. The new app is based directly on TelepathyQt, just like the dialer. New features include mapping contacts to the phone numbers of incoming conversations, starting chats without having to add a contact for the number, and visual and functional improvements. Anthony Fieroni fixed incoming conversations in the new app.

While the app is already working better than SpaceBar, we are still working on fixing the last bugs and integrating it with the dialer internally. Jonah also updated the design of the application headers, which are used across all Kirigami apps, according to the suggestions of the KDE Visual Design Group.

Rinigus redesigned the highlight of the current tab in Angelfish, and implemented an overlay which shows the history of the current tab.

Nico worked on improving the dialer UI, which now makes use of the new SwipeNavigator component from Kirigami.
Redesigned dialer

Want to help?

This post shows what a difference new contributors can make. Do you want to be part of our team as well? Just in case, we prepared a few ideas for tasks new contributors can work on. Most coding tasks require some QML knowledge. KDAB’s QML introduction video series is a great resource for that!

For a change, let’s talk about a topic other than notifications. More than five years ago (can’t believe how time has passed) I took over maintainership of PowerDevil, Plasma’s power management service. While I did a lot of cleanup and feature work in the beginning, there haven’t been many major changes for some time.

Bar-like popup informing of a screen brightness change
“Blast from the Past” – just casually sneaking in the more compact volume/brightness popup we’ll have in Plasma 5.20 to get your attention

One of the first features I added back then was smooth brightness changes. PowerDevil supports three ways of changing screen brightness: through XRandR configuration, through DDC (display data channel, for desktop monitors, experimental and not built by default), and by writing to sysfs (/sys/class/backlight or /sys/class/leds). Since the latter requires privileges and uses a helper binary through KDE’s KAuth framework, I only implemented the animation for the XRandR code path, which was executed in the same process.

Obviously, XRandR doesn’t work on Wayland, and it seems that modern graphics drivers don’t support changing brightness through it anymore either. I recently sat down and wrote a patch to have the helper binary execute a similar animation. KAuth works quite magically by exposing methods defined in an .actions file through DBus and then calling them as slots through Qt’s meta object. Unfortunately, the way it is designed doesn’t allow for delayed replies, which I wanted to use so the job only finished once the animation was completed in order to keep PowerDevil’s state consistent. I then found that KAuth randomly keeps its helper running for 10 seconds, more than enough for a 250ms animation.

I’m not too happy with the implementation and the brightness handling class itself has turned into quite a mess over the years, and having three (four, if you count keyboard brightness) completely separate brightness controls entangled within doesn’t help. To clean it up I want to get rid of XRandR brightness support. Since I don’t know if that’s actually still being used – I surely haven’t used it ever since I ditched the Intel driver – please do me a favor and check what brightness control your machine uses by running PowerDevil from command line (make sure to quit the running process first, and executable location will vary depending on your distribution):

QT_LOGGING_RULES='powerdevil=true' /usr/lib/x86_64-linux-gnu/libexec/org_kde_powerdevil

When using XRandR it will say “powerdevil: Using XRandR”, otherwise it’ll be “Xrandr not supported, trying ddc, helper”. (It’ll always say it tries the DDC helper even when it isn’t built with that.) If it actually uses XRandR, please tell me, along with what GPU and driver you are using. That would help a lot in judging the impact of removing this! If it is indeed using XRandR, please see if you can still manually write into /sys/class/backlight/[whatever devices there may be]/brightness to alter screen brightness. It prefers XRandR but that doesn’t mean that sysfs couldn’t be working, too. Please also tell me the value of max_brightness in there. Feel free to chime in on the plasma-devel mailing list thread I started on the subject to share your thoughts. Now you can see why having some telemetry via KUserFeedback would be tremendously useful for improving code quality and, er, “user experience”.
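For reference, a quick sketch of how to check the sysfs side manually (the device name below is just an example; yours will differ):

# List backlight devices and their maximum raw brightness values
grep . /sys/class/backlight/*/max_brightness

# Read the current value and try writing a new one (needs root;
# "intel_backlight" is only an example device name)
cat /sys/class/backlight/intel_backlight/brightness
echo 100 | sudo tee /sys/class/backlight/intel_backlight/brightness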

Fullscreen video player of a KDE Akademy 2019 talk “Taking KDE to the Skies: Making the Drone Ground Control Kirogi” with a “Battery running low” notification in the top right
Casually watching one of my favorite Akademy 2019 talks when my battery ran out

Lastly, of course there can’t be a post on this blog without mentioning notifications: in the upcoming Plasma 5.19 the “low battery” notification is marked as critical which will make it show on top of full screen windows. Previously, while watching a video, playing a game, or giving a presentation, you likely didn’t see an advance warning. Only once battery reached critical levels would you get told to pick up a charger, running, tumbling down some stairs, frantically searching for it, before the 60 second timeout for sleep or standby expired. (Seriously, the reason why throughout the years I prolonged the timeout from the original 30 seconds and eventually added a “cancel” button was to stop fellow Plasma hackers injure themselves on sprints when running to their bags to fetch a charger.)

KDAB’s Kevin Funk presented Using Modern CMake with Qt at Qt Virtual Tech Con last month.

He reported that the Qt Company did a great job moderating the sessions at this event, and there was a lively Q&A at the end – Kevin had to pick from about 60 questions, so this is a hot topic.

Now that the event is over, you can access Kevin’s talk here, including the answers he had time for, and also his slides, below the abstract.

Using Modern CMake with Qt with Kevin Funk

Prerequisite: No prior CMake experience required

CMake is a cross-platform build system, with powerful APIs for finding dependencies of various or specific versions, and with many abstractions for platforms, compilers, other build systems, and dependencies.

The next major Qt version, Qt6, will be using CMake internally as its build system, so the CMake integration with Qt will likely get tighter and more versatile in the long-term.

In this talk, we’ll be introducing Qt specific CMake functionalities, in order to find and use Qt5 inside your personal CMake-based project, using modern CMake capabilities. We are going to discuss how to find Qt installs using CMake’s find_package function and how to find specific Qt versions when multiple versions are installed.

Further than that, useful CMake variables such as CMAKE_INCLUDE_CURRENT_DIR, CMAKE_AUTOMOC, CMAKE_AUTORCC, and CMAKE_AUTOUIC will be explained in detail, as well as how the use of the CMake integrations can speed up the build drastically.
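As a quick illustration of what that looks like in practice (a minimal sketch, not taken from the talk or slides; project name, version and components are just examples):

cmake_minimum_required(VERSION 3.16)
project(myapp LANGUAGES CXX)

# Let CMake run moc, rcc and uic automatically and find generated headers
set(CMAKE_INCLUDE_CURRENT_DIR ON)
set(CMAKE_AUTOMOC ON)
set(CMAKE_AUTORCC ON)
set(CMAKE_AUTOUIC ON)

# Find a Qt5 install via find_package
find_package(Qt5 5.12 COMPONENTS Widgets REQUIRED)

add_executable(myapp main.cpp mainwindow.cpp mainwindow.ui resources.qrc)
target_link_libraries(myapp PRIVATE Qt5::Widgets)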

Last but not least, some of the additionally supplied Qt-related CMake functions, such as those for big resources or translation support, will be discussed.

Target audience: Build Engineers or Software Engineers who would like to know more about using Qt under CMake.

Download Kevin’s slides: QTVTC20 – Using Modern CMake – Kevin Funk

About Kevin Funk

Kevin has actively developed with Qt/C++ since 2006 and has a special interest in tooling and profiling. He’s an active contributor to KDAB’s GammaRay analyzer (a high-level Qt application debugger) and has a strong emphasis on state machine tooling. He is a co-maintainer of the KDevelop IDE, a powerful C/C++ development environment backed by Clang, and is pushing for cross-platform success inside KDE. Kevin holds a Masters Degree in Computer Science.

Download Kevin’s whitepaper on CMake and Qt…

The post Using Modern CMake with Qt appeared first on KDAB.

Monday

25 May, 2020

Last week, as part of my GSoC Project with DigiKam, I implemented a new feature to effectively Ignore faces. The feature had been requested multiple times, and was in-fact necessitated due to the power of DigiKam’s Facial Recognition Algorithm.

DigiKam will often detect and then try to recognize faces in photos that the user perhaps doesn’t recognize himself! With the implementation of this new feature, the user could just mark such faces as Ignored. Faces marked as Ignored will not be detected by the Face Detection process in the future, nor will they be considered during the recognition process.

Only Unknown Faces are allowed to be marked as Ignored; this stems from the logic that if you confirmed a face, i.e. gave it a name, then it is someone you know, and hence marking them as Ignored doesn’t really make sense.

To mark an Unknown Face as Ignored, press the ⛔ sign that appears when you hover over a face.

If this is your first time marking a face as Ignored, this will lead to the creation of a new tag (named Ignored) in the People Sidebar. The reason that the Ignored tag isn’t created automatically at startup (like Unconfirmed and Unknown) is that most users would not require the Ignored functionality, hence it is only created when necessary.

And it’s that simple! In case you accidentally marked a face as Ignored, you can press the ✅ to un-mark the Ignored face, and effectively undo the procedure.

Even though the feature seems quite simple at first glance, it involved a lot of changes to the underlying code-base. I’ll perhaps go over details of the implementation in a later post.


DigiKam is the ultimate cross-platform application for digital photo management. Download it for free! : https://www.digikam.org/download/

I started working on my project early due to the uncertain times that we find ourselves in. Hence this is the Week -1 report, a week earlier than the official start of the coding period. This week corresponds to Week 1 of the planned timeline.

Present state.

This week was easier than expected. Adding the storyboard docker to Krita’s plugin system was very easy, thanks to the numerous dockers already implemented. Implementing the outer GUI was tougher than that, but still easy in absolute terms. The GUI consists of four QToolButtons, namely Export, Comment, Lock (icon) and Arrange (icon), and a QTableView (which will be promoted to a custom view). Three of these buttons, Export, Comment and Arrange, have a menu associated with them. Lock is a toggle button.

The Export button’s menu is simple and consists of QActions corresponding to the PDF and SVG export formats. These actions will open a dialog. The dialog is not yet implemented.

The Comment button’s menu consists of a QListView and two QToolButtons. The QListView will be drawn based on the comments in the main model’s data. This menu is not fully implemented yet. The (…) button is the delete button.

The Arrange button’s menu consists of two button groups, Mode and View. The button groups consist of QRadioButtons corresponding to the values offered in Mode and View. This menu is set up so that it stays open after choosing an option, as the user might want to change more than one option; e.g. they might want to change the view to column and the mode to thumbnail at the same time.

Next week I will try to write unit tests for the delegate class, add details to the documentation, implement the Storyboard Item class and start implementing the MVC classes.