Soon you’ll be able to get a fresh look for your terminal without leaving the window or having to mess with copying around files manually!
To celebrate I’ve also made a new color scheme based on Atom’s One Dark syntax theme.
Users of Kubuntu 17.10 Artful Aardvark can now upgrade via our backports PPA to the 3rd bugfix release (5.12.3) of the Plasma 5.12 LTS release series from KDE.
(Testers of 18.04 Bionic Beaver will need to be patient as the Ubuntu archive is currently in Beta 1 candidate freeze for our packages, but we hope to update the packages there once the Beta 1 is released)
The full changelog of fixes for 5.12.3 can be found here.
This includes an impressive list of fixes for the Plasma Discover software centre, thanks in part to the excellent recent drive to improve and polish this important part of the Plasma desktop by our Product Manager and KDE developer Nate Graham.
Users of 17.10:
To update, add the following repository to your software sources list; if it is already added, the updates should become available via your preferred update method.
The PPA can be added manually in the Konsole terminal with the command:
sudo add-apt-repository ppa:kubuntu-ppa/backports
and packages then updated with
sudo apt update
sudo apt full-upgrade
PPA upgrade notes:
~ The Kubuntu backports PPA includes various other backported applications and KDE Frameworks 5.43, so please be aware that enabling the backports PPA for the first time and doing a full upgrade will result in a substantial number of upgraded packages in addition to Plasma 5.12.
~ The PPA will also continue to receive bugfix updates to Plasma 5.12 when they become available, and further updated KDE applications.
~ While we believe that these packages represent a beneficial and stable update, please bear in mind that they have not been tested as comprehensively as those in the main Ubuntu archive, and are supported only on a limited and informal basis. Should any issues occur, please provide feedback on our mailing list, IRC, and/or file a bug against our PPA packages.
1. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net
3. Kubuntu ppa bugs: https://bugs.launchpad.net/kubuntu-ppa
Candidate images for the Kubuntu Bionic Beaver (18.04) Beta 1 are now available for testing.
The Kubuntu team will be releasing 18.04 in April. The final Beta 1 milestone will be available on March 8.
This is the first spin of a Beta 1 candidate in preparation for the Beta 1 release. Kubuntu Beta pre-releases are NOT recommended for:
Kubuntu Beta pre-releases are recommended for:
Getting Kubuntu 18.04 Beta 1 Candidates:
To upgrade to Kubuntu 18.04 pre-releases from 17.10, run sudo do-release-upgrade -d from a command line.
Download a Bootable image and put it onto a DVD or USB Drive via the download link at
This is also the direct link to report your findings and any bug reports you file.
See our release notes: https://wiki.ubuntu.com/BionicBeaver/Beta1/Kubuntu
Please report your results on the Release tracker.
Glasgow’s group of Linux nerds has been gathering for 20 years, so I was pleased to eat lots of curry at the Scottish Linux User Group’s 20th anniversary dinner. In the pub afterwards I showed off the new KDE Slimbook II and recorded a little intro. It’s maybe not the slickest presenting, but it’s my first time making a video.
The partnership between KDE and Slimbook is unique in the open source world, and it’s really exciting that they want to continue it with this new, even higher-end model. Faster memory, a faster hard disk, a larger screen, a larger touchpad, USB-C, better wifi signal: this baby has it all. It’s a bargain too, from only €700.
Haven’t you ever wanted an open source artificial intelligence assistant, like the ones some companies provide on their phones, desktops or even in your home, but without the privacy concerns? In July last year, during Akademy 2017, I saw a great presentation of Mycroft and the Mycroft plasmoid by Aditya Mehra. Mycroft can understand and answer questions the user speaks into the microphone, and can also perform actions the user requests. I immediately knew I wanted that in openSUSE. You can watch the talk in the video below to see what I mean:
Unfortunately, I saw that Mycroft had a lot of dependencies and an unorthodox install system, so I didn’t do much with it at first. But then November came and we had a hackweek at SUSE (in a few words, a week SUSE developers can use to work on personal projects). So I started this project to package all of Mycroft’s dependencies along with Mycroft itself and the Mycroft plasmoid, and to modify Mycroft to integrate into a regular Linux system. Since then, I’ve been using some of my free time to update the packages, and the result is that Mycroft can now be easily installed on openSUSE Tumbleweed with a couple of commands, following standard procedures.
I’ll give the installation instructions below, but first of all, let me give some clarifications:
First, you have to add the devel:languages:python repository to zypper, which contains development Python packages that haven’t been accepted (yet) into Tumbleweed:
sudo zypper ar -f https://download.opensuse.org/repositories/devel:/languages:/python/openSUSE_Tumbleweed/devel:languages:python.repo
Then, you have to add the repository of the OBS project I use to release usable packages that, for whatever reason, are not yet in the official distribution repositories:
sudo zypper ar -f https://download.opensuse.org/repositories/home:/alarrosa:/packages/openSUSE_Tumbleweed/home:alarrosa:packages.repo
Note that both commands above are just one long line each.
Now, you can install the mycroft-core and plasma-mycroft packages, which should pull in all their dependencies:
sudo zypper in mycroft-core plasma-mycroft
You will be asked to trust the keys of the added repositories. On a clean Tumbleweed system the command installs 160 packages; after it finishes, you can add the Mycroft plasmoid to the Plasma desktop.
Once installed, you can use the plasmoid to start the Mycroft services, ask something (in the example below, I said into the microphone “Hey Mycroft, what is 2 + 2 ?”) and stop Mycroft.
But before using Mycroft you have to pair it. The first time you start it, it will give you a code composed of 6 alphanumeric characters. Go to home.mycroft.ai, create a free account and register your “device” by entering the code.
And that’s all! You should now be able to use Mycroft on your system and maybe even install new skills. A skill is a module that adds a certain capability to Mycroft (for example, if you add the plasma-user-control-skill, Mycroft will understand you when you say “Hey Mycroft, lock the screen” and will lock the screen as you requested). Skills can be listed/installed/removed using the plasmoid or the msm command-line application.
In any case, please note this is still a work in progress and some features may not work well. Also, I made some changes to the mycroft-core and plasma-mycroft code in order to install it on a Linux system and let it work without a Python virtual environment, so this might break things too. Please don’t blame the Mycroft developers for issues you find with these packages, and if you hit any issue, I think it’s better to mention it first in the comments section of this post before submitting a bug report to github.com/MycroftAI and bothering them with problems that might not be their fault.
What did I change with respect to the code provided by the Mycroft developers? Well, first, I included some upstream patches to make mycroft-core use python3, and installed it like any other python3 application in /usr/lib/python3.6/site-packages/. That way we’re also helping the Mycroft developers with the planned upgrade to python3 by testing it early.
I also changed the way Mycroft is started so it feels more natural on a Linux desktop. For this, I created some systemd units based on the ones done for Arch Linux. The idea is that there’s a systemd user target called mycroft.target that runs four systemd user services (mycroft-bus, mycroft-skills, mycroft-audio and mycroft-voice) and, of course, stops them when the target is stopped. This is all hidden from the user, who can just start/stop Mycroft by turning a switch in the plasmoid.
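The arrangement can be sketched as systemd user units along these lines. This is a sketch only: the file paths, unit contents and the ExecStart module path are my assumptions for illustration, and the actual packaged units may differ.

```ini
# ~/.config/systemd/user/mycroft.target -- groups the four services
[Unit]
Description=Mycroft AI assistant

# ~/.config/systemd/user/mycroft-bus.service -- one of the four services;
# the ExecStart module path below is an assumption, not the packaged value
[Unit]
Description=Mycroft message bus
PartOf=mycroft.target

[Service]
ExecStart=/usr/bin/python3 -m mycroft.messagebus.service
Restart=on-failure

[Install]
WantedBy=mycroft.target
```

With units like these, `systemctl --user start mycroft.target` brings everything up, and stopping the target stops all four services, which is what the plasmoid switch does behind the scenes.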
On a regular Mycroft installation, the configuration file is in /etc/mycroft.conf and the skills are installed to /opt/mycroft/skills, but an unprivileged user can’t modify those files/directories, so I moved them to ~/.mycroft/mycroft.conf and ~/.mycroft/skills and changed Mycroft to prefer those locations. You can have a look at the Mycroft documentation to see what parameters you can set in your mycroft.conf file.
When installing a skill on a regular Mycroft installation, msm invokes pip to install the required Python modules into the virtual environment. Since we’re not using virtual environments, I’m just logging the required Python modules to ~/.mycroft/mycroft-python-modules.log. So if you think a skill might be misbehaving or not loading properly, you should first check that file to see if there’s a missing Python module that should be installed on the system using zypper. I plan to make this automatic in the future, but for now you’ll have to check manually.
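Since the requirements are only logged, a small helper along these lines can tell you which of them are missing. The log path comes from the post; the one-module-per-line format and the helper itself are my assumptions:

```python
import importlib.util
from pathlib import Path

def missing_modules(names):
    """Return the module names that are not importable on this system."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Hypothetical usage against the log file mentioned above,
# assuming it contains one module name per line:
log = Path.home() / ".mycroft" / "mycroft-python-modules.log"
if log.exists():
    names = [line.strip() for line in log.read_text().splitlines() if line.strip()]
    for name in missing_modules(names):
        # openSUSE packaging convention: python3 modules ship as python3-<modulename>
        print(f"missing: {name}  (try: sudo zypper in python3-{name})")
```

This only checks importability; whether the zypper package name matches the module name has to be verified case by case.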
I also added changes to other packages. For example, the duckduckgo2 python module is not prepared to work with python3, so I ported it. The same happens with the aiml python module, which seems to be abandoned since 2014 and only works with python2. Fortunately, in this case there’s a python-aiml fork, which adds support for python3 and other improvements, so I made mycroft use that one instead.
This is a small list of questions and commands you might like to try:
Hey Mycroft …
After you play with it a bit and check that the basic functionality works, you might want to configure Mycroft for your settings. I recommend at least opening the ~/.mycroft/mycroft.conf file and changing the example location settings to your city, your coordinates (look up your city on Wikipedia and click the coordinates in the box on the right to see them in decimal notation) and your timezone (the “offset” value is your timezone’s difference from GMT in milliseconds, and “dstOffset” is the daylight saving time offset, which is AFAIK usually 1 hour).
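As an illustration, the location part of ~/.mycroft/mycroft.conf might look roughly like this. The values are examples for Madrid, and the exact key names are an assumption that should be checked against the Mycroft configuration documentation; note that 3600000 ms is 1 hour:

```json
{
  "location": {
    "city": { "name": "Madrid" },
    "coordinate": { "latitude": 40.4168, "longitude": -3.7038 },
    "timezone": {
      "code": "Europe/Madrid",
      "offset": 3600000,
      "dstOffset": 3600000
    }
  }
}
```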
When changing the configuration file, be extremely careful: don’t leave any blank lines or introduce any comments, since currently the JSON parser is very sensitive to syntax errors (fortunately, you’ll see clear errors in the logs if there are any). In any case, be sure to keep a backup config file, just in case.
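The sensitivity is easy to reproduce: comments are simply not part of the JSON grammar, so any strict parser rejects them. A quick sketch in Python, whose json module behaves like any strict JSON parser:

```python
import json

good = '{"lang": "en-us"}'
assert json.loads(good)["lang"] == "en-us"  # a clean file parses fine

bad = """{
  // comments like this are not valid JSON
  "lang": "en-us"
}"""
try:
    json.loads(bad)
except json.JSONDecodeError as err:
    # The parser points at the offending line, much like the errors
    # you will see in the Mycroft logs.
    print(f"parse error at line {err.lineno}: {err.msg}")
```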
If a skill doesn’t seem to work, you can check the journals with journalctl --user --since "1 hour ago" and see if the skill is generating any exception. Also, having a look at ~/.mycroft/mycroft-python-modules.log might be a good idea to check whether those Python packages are installed on the system (note that the openSUSE Python packaging guidelines state that the python3 package for a module must be called python3-<modulename>, so it should be easy to check manually).
I have many plans for these packages. For example, I’d like to submit all my changes upstream, since I think they will be useful for other distributions too and will help get Mycroft working with python3 as soon as possible. As mentioned before, I’d also like to make a pip/zypper integration tool so skill requirements can be installed automatically on the system, and I’d like to add a skill to integrate Mycroft with an application I’m developing (more on this in future posts). If nobody does it first, it would be great to add Spanish support, now that support for languages other than English seems to be on the way.
Btw, Mycroft developers are adding support for the Mozilla open source voice recognition framework, so you might consider collaborating with The Common Voice project to help it grow and mature.
Before ending, I’d like to thank all the openSUSE Python packagers (especially Tomáš Chvátal) for carefully, patiently and quickly reviewing submissions for over 50 new Python packages required by mycroft-core, and of course the Mycroft developers and Aditya Mehra (the plasma-mycroft developer), who are doing a great job.
One of the important missing features in the Plasma Wayland session is, without a doubt, the ability to share or record your screen. Supporting this requires help from the compositor and a way to deliver all the needed information to the client (application), ideally through something that can be used by all desktop environments, such as GNOME. Luckily, this has been one of the primary goals of Pipewire, together with support for Flatpak. If you haven’t heard about Pipewire, it’s a new project that aims to improve audio and video handling on Linux, supporting all the use cases handled by PulseAudio and providing the same level of handling for video input and output. With Pipewire supporting this, a new API was recently added to xdg-desktop-portal for screen cast support, and also for remote desktop. Using this API, applications can now access your screen content in Wayland sessions, or when they are running in a sandbox. With the various backend implementations, like xdg-desktop-portal-kde or xdg-desktop-portal-gtk, they just need to support one API to target all desktops.

The screen cast portal works as follows: the client first creates a session between itself and the xdg-desktop-portal (xdp) backend implementation; the user then gets a dialog to pick the screen they would like to share and starts the screen sharing. Once that happens, the xdp backend implementation creates a Pipewire stream and sends a response back to the client with the stream id, and the client can then connect to that stream and receive its content. When the client no longer requests the content of the selected stream, the xdp backend implementation is notified that nobody is connected to the created Pipewire stream any more, stops sharing the screen, and is again ready to accept the next screen sharing request. This all happens in the background, so there is really no cool picture I can show, apart from the dialog you get when you request to share a screen.
I finished support for the screen cast portal in xdg-desktop-portal-kde last week and am currently waiting for it to pass review and be merged to master. It is also blocked by two unmerged reviews: one adding support for sending GBM buffers from KWin, and one with a new Remote Access Manager interface in KWayland, both authored by Oleg Chernovskiy, for which I’m really grateful. This will all hopefully land in time for Plasma 5.13. Testing it is currently a bit complicated, as you need to compile everything yourself, and besides my testing application there is really no app using this yet, except maybe GNOME remote desktop; but in the future there should be support for it in Krfb, Chrome or Firefox. Hopefully soon enough.
The last thing I would like to mention is for GSoC students: we also need remote desktop portal support to have a full remote desktop experience, so I decided to propose this as a GSoC idea, and students can pick this interesting work as their GSoC project.
Every year we try to seed the foss-north event with a set of key speakers. This year, one of our seed speakers is Patricia Aas from the Vivaldi Browser. She will be speaking about isolating GPU access in its own process.
“Chromium’s process architecture has graphics access restricted to a separate GPU-process. There are several reasons why this could make sense, three common ones are: Security, Robustness and Dependency Separation. GPU access restricted to a single process requires an efficient framework for communication over IPC from the other processes, and most likely a framework for composition of surfaces. This talk describes both the possible motivations for this kind of architecture and Chromium’s solution for the IPC framework. We will demonstrate how a multiprocess program can compose into a single window on Linux.”
There are just 5 more days left of the Call for Papers. With the help of our great sponsors, we have the opportunity to transport you to our conference if you are selected to speak. Make sure to get your submission in before March 11 and you are in the race.
KDevelop with Cppcheck Integration
Starting with the 5.1 release, KDevelop supports built-in integration with the static analysis tool Cppcheck. Cppcheck provides unique code analysis to detect bugs, focusing on undefined behaviour and dangerous coding constructs. The goal is to detect only real errors in the code (i.e. to have very few false positives). Such analysis is very useful for all projects, especially those with a complex structure and a large code base. Convenient integration with the development environment greatly simplifies and speeds up code checking, as there is no need to study the analyzer's documentation, configure it manually, or navigate through the code while processing the analysis results.
To use cppcheck integration in KDevelop, you only need to:
Let's take a closer look at the process of setting up and running the analyzer (we assume that cppcheck is already installed).
First, you must verify that the path to the cppcheck executable is correct. The path should be detected automatically, but if you installed cppcheck in a non-standard place, or you want to use a non-system version, the path must be set on the plugin configuration page: "Settings" -> "Configure KDevelop" -> "Analyzers" -> "Cppcheck":
By default, cppcheck's "native" output is not shown during the check, and its XML output is hidden as well. Both can be enabled with the appropriate checkboxes, and the output can then be viewed manually in the standard "Test" tool view:
If the cppcheck executable path is OK, we can set up check parameters for individual projects through the project's configuration page.
"Project" -> "Open Configuration" -> "Cppcheck":
The config page contains a tab bar controlling cppcheck's behaviour and an auto-updated panel which displays the resulting cppcheck command line. The first tab allows you to enable/disable different types of analysis. A full description can be found in the cppcheck documentation; a short version is displayed as a tooltip for each checkbox.
The second tab allows you to set up the include directories where cppcheck will look for headers during the analysis. By default only "project" includes are enabled, but you can also enable "system" directories (like /usr/include/). Note that enabling system directories can slow down the analysis. You can also block some include directories if necessary:
The last tab allows you to pass custom extra parameters to cppcheck. This may be useful, since the built-in GUI controls support only the commonly used functionality. See the cppcheck documentation for all supported parameters:
When all configuration steps are finished, press "OK"/"Apply" to save your changes. Now we are ready to start the code analysis.
This can be done in one of three ways:
When the analysis is started, the standard "Problems" tool view is activated and a "Cppcheck" tab opens. All problems are placed in a table and can be activated with a mouse click on the appropriate line. When a problem is activated, the corresponding source file is opened in the editor and the cursor is placed on the error's line:
You should analyze the problem's code and fix it if necessary. Note that some entries are not errors but only cppcheck recommendations, and some can be false positives that need no fix. Also, some lines display information not associated with the source code, for example when cppcheck can't find the include paths for some headers. The cppcheck tool is not perfect, so you should carefully analyze each line in the report and decide whether to fix or ignore it.
After fixing the errors found by cppcheck, the analysis can be restarted. This can be done as described earlier, or with one click on the first button in the "Cppcheck" problems view ("Re-Run Last Cppcheck Analysis").
The presented cppcheck integration plugin provides a simple and easy-to-use mechanism for checking your code for common errors. Regular use of such an analysis tool can help you catch and fix many errors before they end up in the released version of your software.
You unboxed your KDE Slimbook II, posted the pics to Instagram, and logged into the desktop. What you are seeing now is Plasma, a graphical environment created by a worldwide network of top-class programmers. Plasma may look familiar, but it is not Windows or macOS; it’s something much better. It is Free Software for starters — no hidden costs, bloatware and spyware here. Secondly, it is made to be tweaked, letting you adapt it to your precise needs.
In that vein, here are 5 things you can do just to get you started (click on any of the images to see a larger version):
The KDE neon operating system will detect and configure your WiFi adapter automatically. Inside your KDE Slimbook II are two powerful antennas that will give you a better reception when in range of a WiFi network.
To connect to your network, look on the right-hand side of the bar at the bottom of your screen (in Plasma’s “tray”). You will see a greyed-out symbol that looks like this: . Click on it and a menu will pop up. Hover your cursor over your network and click the Connect button. A textbox for your password will appear. Fill in your password and press Enter on your keyboard. A few seconds later, you will be connected to your network.
Your KDE Slimbook II does not come with an Ethernet port for a wired connection, but you can order it with a USB-to-Ethernet adapter. Pop that in, connect your Ethernet cable, and you will be immediately connected to your network.
Free Software, like the Plasma desktop and the KDE apps that come with it, is being worked on all the time. Bugs are squashed, controls are improved, and new features are added. These improvements are sent to you through what are called “updates”.
Updates are easy to install in KDE neon: the icon that tells you whether updates are available () is down in the tray, next to the network icon. When there are non-critical updates available, it will show a little blue circle in the lower right-hand corner of the icon. If there are important updates available, the circle will be orange. When there are critical updates (like updates that correct vulnerabilities), the circle will be red. Click the icon and you’ll see how many updates are ready.
Click on the Update button and Discover, Plasma’s app store, will open. There you can update everything in one go.
During the first boot, you were required to enter a user name and password, but you can have many more. If the computer is shared with family or colleagues, you can create as many users as you need. Or you could have a bare-bones guest account for when somebody asks to borrow your machine to check their email.
KDE neon’s Plasma desktop comes with a control panel that lets you do all this. Open the menu at the bottom left of your screen, and pick Settings from the list of applications in the Favorites tab — Favorites is the first tab you’ll see, so you can’t miss it. The icon for Settings looks like this: . Click it.
In the new window that opens, scroll down in the left-hand bar until you see Account Details in the Personalization section. Click on that and you will see two new options: KDE Wallet (you can use this to store your passwords), and User Manager. Click on User Manager.
Here you can add or delete users, and set up what they are allowed to do.
One warning, though: DO NOT mark the Enable administrator privileges for this user checkbox for guest user accounts. Your friend could accidentally modify your operating system once they have finished snapchatting, or whatever kids get up to these days.
Note that, if you like a highly-customised desktop, you will be using Settings a lot!
The easiest way to change your desktop is by changing the wallpaper; that is, the background image of your desktop. Use the right button of your mouse to click on any empty area of your desktop and choose Configure Desktop from the pop-up menu (it is the last option at the bottom of the menu). A configuration window will appear and the first choice is the one that helps you change the wallpaper. You can pick one of the pre-downloaded wallpapers (look in the /usr/share/wallpapers folder for more images), use your own photos, or select images you have previously downloaded from the Internet. To do this just click the + Add Image... button.
The KDE community also provides a wallpaper “store” (don’t worry — all wallpapers are free): click on the Get New Wallpapers... and you will be connected to the store.
Another nice modification to the default desktop is changing the application launcher. The application launcher is the proper name for the menu that pops up in the bottom left hand corner of the Plasma desktop; that is, the place where you go to find your apps.
When you open it, you can click on an empty space and then choose Application Launcher Settings… from the pop-up menu.
Or you could get rid of the standard launcher altogether and install a cooler one! On the right hand side of the panel (the grey bar that runs along the bottom of the screen), right at the end, you will see a button with a symbol. Click that and the configuration options for the panel will unfold. Click on the + Add Widgets... button and the widget menu will open on the right of the screen. Find Application Dashboard and double click on it. A new widget will appear on the right-hand side of the panel. Close the widget menu for the time being.
Click on the in the panel again. You can now drag the new widget to where you want it (usually that would be on the left edge of the panel) and also remove the “old” application launcher. Your new dashboard will look like what you can see on the right.
Incidentally, if you prefer your panels at the top of the screen, you can do that too. Press the panel’s button again, then click and drag on the Screen Edge button. You will be able to move the panel to whichever edge you want — top, bottom, left or right.
You can also have more than one panel: right-click on an empty space on the desktop and select Add Panel. You can then have one panel at the bottom for windows, apps and widgets, and another at the top for global menus, à la macOS. You can do this by adding the Global Menus widget () to your new top panel.
Another thing you can configure to your liking are the active corners. These are areas on your screen, usually in the corners and in the middle of each of the edges. You can configure the system to execute a certain action if the cursor hovers over these special areas.
For example, if you move the cursor to the upper left hand corner, the desktop background will go dark and all your open windows will slide so you can see them all. This is useful if you have many windows open, and you can’t find the one you are looking for. While all the windows are exposed, you can move your cursor to the one you want, click on it, and bring it to the front (see the animation on the left).
As with everything on the Plasma desktop, the behaviour of the active areas can be changed. Open Settings again, click on the Desktop Behavior option in the column on the left, and then choose Screen Edges. Click on any of the boxes in the corner or around the edges of the picture of a screen. Here you can re-program your chosen area to show all the windows, lock the screen, or open a text box that lets you run any command you want. When you are done, click on the Apply button and enjoy your personalised desktop areas.
As you can see, the customization possibilities are pretty much endless.
At some point you’re going to want to stop playing around with all the options Plasma offers you (satisfying as it is), and get some work done.
Probably the most popular Free Software office suite is LibreOffice. LibreOffice comes with a word processor, spreadsheets, a presentation editor, a database management application, and so on. Open your application launcher (the menu in the bottom left hand corner of your screen) and click on Discover (). When Discover opens, type “libreoffice” in the search box to see what’s on offer.
You can install each Libreoffice application separately. For example, if you only need the word processor, but not anything else, you can install just that by picking LibreOffice Writer from the list. However, if you need everything, scroll down until you see libreoffice – office productivity suite (metapackage). By picking that, you will install all the components in one go.
Now that you have LibreOffice installed, you can get to work… But can you really? If you have any fashion sense at all, you’ll notice that the look of LibreOffice doesn’t really integrate well with Plasma.
The icons on the apps’ toolbars are way too cartoonish, for example, and the textboxes and dropdowns look like something taken from a 90’s interface. It would be against all rules of good taste to work with something that looks like it was designed by someone who thinks Comic Sans is still cool.
No, this will not do at all. You simply must have the Plasma experience all the way. Open Discover again and search for libreoffice-kde. Install it and restart LibreOffice.
This just scratches the surface of what you can do with Plasma on your KDE Slimbook II. As you may be guessing by now, everything is configurable, and to insane extremes. You can customise it just so that it adapts perfectly to what you like and how you work.
We’ll look at more ways of how you can make the most out of your Plasma desktop, including setting up social media accounts, synching with your cloud, and pairing up your phone, in the next instalment of this two-part series.
[Screenshot: new Justify splitters following the Plasma theme, and a user-specified pattern for editing mode]
[Screenshot: colorized transparent panel with the Plasma theme background colour when using the dynamic background functionality]
Every year we try to seed the foss-north event with a set of key speakers. This year, one of our seed speakers is Carsten Munk, known from Jolla, libhybris, MeeGo, Maemo and more. He will speak about his new endeavour, Zipper, which brings blockchain technology to mobile devices.
“Zipper is an Ethereum based mobile platform which brings blockchain based services to our smartphones in one seamless and user-controlled experience. At first, Zipper provides everyday smartphone users an easy and safe way to manage their identity and private keys. This makes it possible for anyone to access blockchain based services out-of-the-box in an easy and intuitive way – just like Apple’s services on iOS today – while being in full control of their identity, transactions and data. Zipper works in an isolated compartment in Android and Sailfish OS smartphones, making Zipper and its wallet secure while still easily accessible.”
There are just 6 more days left of the Call for Papers. With the help of our great sponsors, we have the opportunity to transport you to our conference if you are selected to speak. Make sure to get your submission in before March 11 and you are in the race.
I’m from South Africa. I’ve been drawing my whole life, mostly with graphite pencil, but when I discovered digital drawing I was hooked. I started out just using a standard desktop mouse and GIMP and got kind of good at it. Since then I have improved a lot and plan to keep improving and creating new art for as long as I can.
I paint as a hobby, but I sometimes use the skills I’ve learned from painting in a professional capacity when I need to edit or create images.
I don’t really have a specific genre besides perhaps drawing in a more realistic style. I like to challenge myself to draw new things. I usually paint something with life in it like creatures or people.
Jazza from the YouTube channel Draw with Jazza. Although he mostly does traditional art, his ability to draw amazing things from random prompts really inspires me. There are also amazing artists on ArtStation.com, and I only need to scroll through a few images before I feel the urge to draw something myself.
I found GIMP on a Linux computer in college and I played around with some of the filters. I was amazed at what was possible with a few simple steps. After browsing around on YouTube I saw some artists drawing pictures from scratch in Photoshop. Because I already knew how to draw with pencil I wanted to give it a try using free software and quickly fell in love with it.
So many things. The ease of changing things when you are already far into the drawing, the fact that you can undo mistakes, and best of all it’s not as messy. I also love computers, so drawing digitally is like having the best of both worlds.
A friend told me about it after trying it with his Wacom tablet. I am a software developer so any new software is like a new toy for me. I checked out the website and what other people had created using it and I was intrigued.
The interface was so much more modern than GIMP, and I’m a firm believer that the interface makes a big difference in first impressions. I played around with it a bit and quickly saw that it had all the features I use with GIMP and more.
I love the interface. I also like the fact that you can do animations with it. I have only started dabbling in animation but so far I am fascinated by it. I also love how responsive Krita is and the fact that it supports my tablet, which GIMP did not. And finally I love that it is still being improved upon by the developers. It means any issues I might encounter can still be solved.
Having spent many hours drawing in Krita I can honestly say there is nothing that is really annoying. There is the occasional odd thing that happens as with any drawing software but nothing I haven’t been able to find a workaround for.
The amount of things you can do with it, all neatly wrapped up in a beautiful design. Also the fact that it is free but still has the quality of paid software.
Usually my newest drawing is my favourite but the Ferret mount I drew really stands out for me. I tried to push myself to create a sense of depth and a scene that I haven’t been able to achieve in any of my previous drawings. I learned a lot from drawing it and it was a lot of fun to do.
Some of the techniques I used are to blur the foreground and background and to add a bright light source to create the impression of depth. I used the default brushes that come with Krita to create everything from the fur to the texture of the dirt.
I would just like to thank the team working on Krita for the amazing job they’ve done in creating a truly awesome drawing application.
Flashback time! At last year’s foss-north we had a great talk by Chris Lamb about reproducible builds. You can see the recording right here (you might have to click the link if your aggregator hides YouTube content)
There are just 7 more days left of the Call for Papers. With the help of our great sponsors we have the opportunity to transport you to our conference if you are selected to speak. Make sure to make your submission before March 11 and you are in the race.
The wheels of the Usability & Productivity initiative chug along, knocking out issue after issue! Check out how the KDE universe improved this week:
I also want to make an exciting announcement: we’ve heard the prodigious amount of user feedback about the state of store.kde.org/Get Hot New Stuff, and we’ve started an initiative to clean it up. We’re also working to improve Discover’s display of store.kde.org resources. This initiative is in the early stages so it hasn’t borne fruit yet, but we believe it will provide a significant improvement in the experience of using 3rd-party plugins!
Like what you see? Consider becoming a part of this titanic and so far successful effort to produce the finest free software the world has ever known. Developers and bug triagers are in particular demand right now! It’s a great time to get involved.
Flashback time! At last year’s foss-north we had a great talk by Alexander Larsson introducing flatpak. You can see the recording right here (you might have to click the link if your aggregator hides YouTube content)
There are just 8 more days left of the Call for Papers. With the help of our great sponsors we have the opportunity to transport you to our conference if you are selected to speak. Make sure to make your submission before March 11 and you are in the race.
This was a week of polish and preparation for Discover. We’ve got some nice new features in the pipeline but we’re not quite ready to announce them just yet. One is implemented but needs more polish, and another is under construction. I think you’ll like ’em once they’re ready! But in the meantime, here are some bugfixes and polish work:
Want to see faster progress on Discover? Help us out! KDE has great software and a strong focus on usability, productivity, and user satisfaction. But we’re short in the manpower department. There are lots of other ways to contribute, too!
At this year’s foss-north event FSFE will revive the Nordic Free Software Award and the conference will host the prize ceremony. Get your tickets for a great opportunity to meet with the FOSS community, learn new things and visit Gothenburg.
There are just 9 more days left of the Call for Papers. With the help of our great sponsors we have the opportunity to transport you to our conference if you are selected to speak. Make sure to make your submission before March 11 and you are in the race.
The fourth point release update to Kubuntu 16.04 LTS (Xenial Xerus) is out now. This contains all the bug-fixes added to 16.04 since its first release in April 2016. Users of 16.04 can run the normal update procedure to get these bug-fixes. In addition, we suggest adding the Backports PPA to update to Plasma 5.8.8. Read more about it:
Warning: 14.04 LTS to 16.04 LTS upgrades are problematic, and should not be attempted by the average user. Please install a fresh copy of 16.04.4 instead. To prevent messages about upgrading, change Prompt=lts to Prompt=normal or Prompt=never in the /etc/update-manager/release-upgrades file. As always, make a thorough backup of your data before upgrading.
Gwenview is a core KDE app, and an important tentpole of the Usability & Productivity initiative.
However, a few months ago Gwenview had no maintainer and few contributions. It was still a jewel, but was starting to bit-rot. Fast-forward to today: a lively crew of interested contributors are improving it daily, fixing bugs and resolving UI papercuts. Check out the Gwenview Phabricator project; it’s a hotbed of activity!
Gwenview highlights the value of joining a community over going your own way. Apps developed by a single person are vulnerable to dying when that person leaves the project, but apps with many developers can outlive the loss of any individual contributor.
Before starting a new project all by yourself, please consider joining an existing project whose design vision you can live with–it will be far more likely to outlive your interest in it. KDE offers a rich assortment of mature and popular cross-platform software already in use by people all over the world, so there are a lot of great options here!
Anyway, the new Gwenview team has been hard at work knocking out polish and fit-and-finish papercuts and adding new features. Here’s an assortment of what they’ve been up to recently:
As you might remember, our dear Paul Adams decided to retire. This is a loss because of the person... But he was also providing a very nice service in the form of community data visualization. He was famously known among us for his "green blobs" (turned blue blobs) and contributor network graphs.
Note that he just took the "green blobs" idea from Adriaan de Groot and later on turned them blue... He might have made them popular in the process but it's unclear if that's due to the color change or his prose. ;-)
Anyway, he was doing that for communities other than KDE, but he has almost stopped now. For instance, he did it only once for Habitat in all of 2017. Luckily he published the scripts he was using in his git-viz repository, so not all the knowledge was lost.
Earlier this year, I decided to take the torch and try to get into community data analytics myself. I got in touch with Paul to talk a bit about my plans. My first step was to try to modernize his scripts while staying true to his original visualization.
It turned into an almost complete rewrite, which I didn't quite expect. At the same time I wanted this modernization to be a good base for other visualizations and also for general data analytics. The most prominent part remaining is his git log parsing code, although I extended it to work properly across repositories and not just on a single one. Next to that, I'm now using pandas, networkx and bokeh for all the data processing and visualization descriptions. This resulted in nice, concise and maintainable code.
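To give an idea of what such a pipeline can look like, here is a hedged sketch (the real ComDaAn scripts are more elaborate, and the function names here are my own): parse `git log` output for one or more repositories into a pandas DataFrame, then pivot it into the per-author weekly activity table behind a "blue blobs" diagram.

```python
import subprocess
import pandas as pd

# Hedged sketch of the pipeline described above; the real ComDaAn
# scripts are more elaborate, and these function names are illustrative.

def git_log(repo):
    # One line per commit: "<sha>TAB<author>TAB<iso-date>".
    return subprocess.run(
        ["git", "-C", repo, "log", "--pretty=format:%H%x09%an%x09%aI"],
        capture_output=True, text=True, check=True,
    ).stdout

def parse_log(repo, log_text):
    rows = []
    for line in log_text.splitlines():
        sha, author, date = line.split("\t")
        rows.append({"repo": repo, "sha": sha, "author": author,
                     "date": pd.to_datetime(date, utc=True)})
    return pd.DataFrame(rows)

def weekly_activity(commits):
    # Authors x weeks matrix of commit counts -- the data behind a
    # "blue blobs" diagram.
    week = commits["date"].dt.tz_localize(None).dt.to_period("W")
    return commits.groupby(["author", week]).size().unstack(fill_value=0)
```

Spanning several repositories is then just `pd.concat([parse_log(r, git_log(r)) for r in repos])`, and the resulting table can be fed to bokeh for an interactive rendering.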
So you might wonder... What's possible now? Well, fairly similar visualizations to before, but now they can span more than one repository and they are fully interactive! No more fixed-resolution pictures; we generate fully dynamic HTML.
To validate the scripts I used them on the whole year 2017 for all of KDEPIM (that is the parts in KDE Applications, in Extragear and in Playground).
Firstly, this gives us the infamous blue blobs diagram to show contributor weekly activity in all those repositories in 2017:
Clearly we can spot Christian Mollekopf and Laurent Montel as the most consistent committers throughout 2017. This should come as no surprise, since they are almost single-handedly maintaining Kube/Sink and the rest of KDEPIM respectively. Daniel Vratil, maintainer of Akonadi, is also very active and noticeable.
Secondly, this also gives us back the contributor network graphs. Here I made a small exception and used "Fruchterman & Reingold" for the force-directed layout instead of the "Kamada & Kawai" one. This is simply a personal preference. I find that in practice "Fruchterman & Reingold" is a bit more aggressive at conserving the center for the cluster of most connected (core) contributors (although it sacrifices a bit of readability). So for all the KDEPIM repositories in 2017, we obtain the following network:
Surprisingly, we can spot two disconnected nodes. Those two contributors touched files no one else touched in 2017. Nothing out of the ordinary: after investigating, both turned out to be very self-contained one-off contributions, one for default SPAM settings and one for improved wording in the GUI. Valuable, but such contributions indeed don't necessarily require deep integration into the core contributor network.
Then if we zoom in, we can easily spot the core KDEPIM contributors in 2017: Laurent Montel, Daniel Vratil and Volker Krause. They are the ones who connected most to other contributors via their commits last year. Of course this is a bit of a visual check and as such not very scientific.
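For reference, both layouts mentioned above are available in networkx; here is a minimal sketch on a toy graph (node names are made up):

```python
import networkx as nx

# Toy contributor network (made-up names): an edge links two people
# who touched the same files.
g = nx.Graph()
g.add_edges_from([
    ("ana", "ben"), ("ana", "carl"), ("ben", "carl"),
    ("ana", "dora"),
    ("eve", "finn"),  # a small disconnected component
])

# Fruchterman & Reingold is networkx's spring_layout; the
# Kamada & Kawai alternative is nx.kamada_kawai_layout (needs scipy).
pos = nx.spring_layout(g, seed=42)

# pos maps each node to an (x, y) coordinate usable by any plotting
# backend, e.g. bokeh.
```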
Which leads me to the "what's next?" question.
Now I plan to build on that work and add more tools and analyses. Paul's scripts and graphs were an excellent start, which is why I did my best to stick to them. But now it's time to add more! There are various questions which can be pursued:
Of course, there are other people doing such community analysis work out there, like GrimoireLab, Gitential and more... They provide more off-the-shelf solutions than what I'm after, but some inspiration can probably be taken from them too!
The scripts I've been using for the visualizations above are available in my ComDaAn repository. Of course I hope to get them to evolve and to have new ones appear due to the questions listed in this post.
As you can see, it opens up a very large field and I'd like to explore more of those questions in the future and also try to apply them on other communities for which I likely have less preconceived knowledge and biases than KDE.
I am pleased to announce that Qt 5.11 Beta 1 is released today. Convenient online binary installers are available for trying out features coming in Qt 5.11. We will follow a similar Beta process as earlier and provide multiple Beta releases via the online installer.
After the Qt 5.11 Beta 1 release today, we will push out multiple new Beta N releases using the online installer. With this approach it is easy for users to test the new features and provide feedback. During the beta phase we expect to have new Beta N releases at 1-2 week intervals. When the maturity has increased sufficiently, we will create a release candidate of Qt 5.11. These will be made available directly via the online installer; we are not planning to publish separate blogs for the subsequent beta releases and release candidate(s). In addition to binaries, source packages of each beta release are of course also available for those who prefer to build themselves.
I hope many of you will install the Qt 5.11 Beta releases, test them, and provide us with your feedback to help complete Qt 5.11. For any issues you may find, please submit a detailed bug report to bugreports.qt.io (please remember to mention which beta you found the issue with, and check for duplicates and known issues). You are also welcome to join the discussions in the Qt Project mailing lists and developer forums, and to contribute to Qt.
First release of Falkon is finally out!
This week, Dan Vratil and I merged a new feature in KScreen, Plasma’s screen configuration tool. Up until now, when plugging in a new display (a monitor, beamer or TV, for example), Plasma would automatically extend the desktop area to include this screen. In many cases, this is expected behavior, but it’s not necessarily clear to the user what just happened. Perhaps the user would rather have the new screen on the other side of the current one, clone the existing screen, switch over to it, or perhaps not use it at all at this point.
The new behavior is to now pop up a selection on-screen display (OSD) on the primary screen or laptop panel allowing the user to pick the new configuration and thereby make it clear what’s happening. When the same display hardware is plugged in again at a later point, this configuration is remembered and applied again (no OSD is shown in that case).
Another change-set which we’re about to merge pops up the same selection dialog when the user presses the display button found on many laptops. This has been nagging me for quite a while, since the display button switched screen configurations but provided very little in the way of visual feedback to the user about what was happening, so it wasn’t very user-friendly. This new feature will be part of Plasma 5.13, to be released in June 2018.
Elisa is a music player developed by the KDE community that strives to be simple and nice to use. We also recognize that we need a flexible product to account for the different workflows and use-cases of our users.
We focus on a very good integration with the Plasma desktop of the KDE community without compromising the support for other platforms (other Linux desktop environments, Windows and Android).
We are creating a reliable product that is a joy to use and respects our users privacy. As such, we will prefer to support online services where users are in control of their data.
Elisa has been accepted by the KDE community as an official project with its own release schedule and hosted in the Extragear – Multimedia section. We are preparing a release schedule for our first stable release.
Now is really a good time to join the Elisa team. You will be able to work on code that will soon reach potential users. You will not have to wait for a long time given that we intend to release as soon as we can.
I would also like to thank all the people who have offered their help with HighDPI support. I do not yet own proper hardware to test that, and people from the community stepped up to help. You are amazing. If any issues are still visible, please report them.
The following things have been integrated into the Elisa git repository:
We are happy to announce the release of Qt Automotive Suite 2.0, the Qt solution for digital cockpits, developed in collaboration between KDAB, The Qt Company and Luxoft.
The Qt Automotive Suite provides OEMs and Tier1s with powerful frameworks and tools to build a consistent user experience for both the instrument cluster and the center stack. Being built on top of an open platform, the Qt Automotive Suite makes it possible for the automakers to stay in complete control of the user’s brand experience.
The Qt Automotive Suite seamlessly integrates KDAB's runtime introspection tool GammaRay into Qt Creator and with the embedded device support of Qt for Device Creation. See the release blogs for GammaRay 2.8 and GammaRay 2.9 for a summary of the new features and improvements.
Another focus area for KDAB has been a revamp of the Qt IVI module, providing a flexible framework to design and create APIs for platform middleware. Using an IDL and code generation as well as ready-made simulation backends enables work from day one, before the full hardware or software stack is available. Having tooling support built in makes your APIs runtime-introspectable out of the box, for easy debugging or automated testing. For more details, see KDAB engineer Mike Krus' presentation about the Qt IVI changes.
There are many more improvements in Qt Automotive Suite 2.0; make sure to check out the release blog.
We are excited to announce Qt Automotive Suite 2.0, a great leap forward towards a unified HMI toolchain and framework for the digital cockpit, available at the end of February 2018.
A few years back, we saw several challenges in the industry. One was that, with an increasing number of screens present in the car, creating and maintaining a consistent digital UX across multiple displays with strong brand differentiation is unquestionably difficult. Why? Because a perfectly integrated digital UX for the digital cockpit does not end with the UI/UX design; that is only the beginning. When an OEM sends out the design specification, the supplier typically utilizes different software tools and technologies for developing the instrument cluster and the center stack (aka IVI or infotainment) respectively. Somewhere down the line, there will be unavoidable HMI refinement needed to ensure the digital UX on different screens cooperates in a cohesive way.
Another challenge was that there was little reusability from one project to another, or from one customer to another. The reusability issue was particularly prominent in center stack development. There was a lot of duplication of work when creating a new center stack HMI, and it was just inefficient. Low reusability made it difficult for the industry to rapidly innovate and differentiate on the HMI, and the development cycle was long and costly.
The third challenge we saw was that the center stack HMI has traditionally been monolithic, meaning all features (e.g. HVAC control, media player, radios) are packed into a single software instance. This approach not only created risks of introducing bugs, but, more importantly, features may interfere with one another. If one feature crashes, the whole center stack needs to restart. This creates a lot of headaches for developing and maintaining the HMI – it is difficult to split the features into smaller sub-projects for parallel development, and a pain to maintain a monolithic code base. Of course, if one feature needs an update, the whole center stack needs to be rebuilt and reinstalled.
We set out to solve these challenges, working with our strategic partners Luxoft and KDAB, and announced the first release of the Qt Automotive Suite 1.0 in 2016. We created the market’s first unified HMI toolchain and framework for digital cockpit development.
Today, we are happy to report that Qt Automotive Suite has been well received and is being adopted by some of the major OEMs in the world. An increasing number of customers are switching from traditional specification writing and outsourcing to owning the HMI design and development. We are glad to see the market transitioning in this direction, which matches well with our vision, and the Qt Automotive Suite is proving to be the right solution. Above all, we expect to see millions of cars shipped with Qt Automotive Suite in the coming years.
Figure 1: Qt Automotive Suite 2.0
Qt Automotive Suite 2.0 builds on a vision to provide easy-to-use tools that free designers and software engineers to rapidly create superior digital cockpits. Before we jump into the key features, here is one digital cockpit reference that is built with Qt Automotive Suite.
As customers adopt Qt Automotive Suite for production development, the feature set and stability must be carefully balanced. Qt Automotive Suite 2.0 now sits on top of Qt 5.9, bringing major performance improvements and an expanded feature set while receiving Long Term Support (LTS).
We believe a truly unified HMI toolchain and framework should also ship with an advanced UI authoring tool for designers. With 3D becoming a more significant part of the HMI, we saw the need for a design tool that facilitates rapid 3D UI/UX concepting and design. For that, we have now included Qt 3D Studio in the suite.
Functional safety is a critical path that our customers must cross. Qt Automotive Suite 2.0 includes the Qt Safe Renderer, ensuring the rendering of safety-critical telltales is reliable and certifiable to the ISO 26262 part 6 ASIL-B specification.
Qt Application Manager brings a modern multi-process GUI architecture to the IVI. By separating the HMI into different functional units (for example, HVAC could be one unit while Navigation could be another), Qt Application Manager enables independent teams to develop and test both separately and simultaneously, thereby reducing project risk and shortening development time. In addition, splitting the UI into smaller applications also makes it easier to do system updates; smaller pieces of code are touched, and the OTA updates are smaller.
Qt Application Manager is a headless core component. Beyond the application management tasks, Qt Application Manager powers the Reference UI, which implements a system UI compositor and the home screen along with a selection of apps.
In Qt Automotive Suite 2.0, the user can now make a custom application executable based on Qt Application Manager internal libraries. This greatly improves flexibility and makes integration into customer-specific system setups easier. The system monitoring APIs have been extended and now also allow monitoring of resources consumed by individual applications. During startup, the new logs allow tracing and profiling of apps in much greater detail. Another improvement to mention is further polishing of the single-process mode, which allows running QML-runtime applications on single-process setups in the same way as on multi-process ones.
There are lots of other improvements under the hood; be sure to check out the latest documentation.
Since Qt Application Manager controls the application life cycle, direct launching from Qt Creator is now possible. A special plugin is now provided which wraps the command-line tools provided by Qt Application Manager and integrates all essential steps into the Qt Creator IDE. In Qt Automotive Suite 2.0, the plugin uses the Generic Linux device type as a base. In the future it will directly use Boot2Qt device types to unify the device setup.
To tackle reusability, Qt IVI brings a level of standardization within the Qt ecosystem for how to access and extend automotive-specific APIs. Applications developed in one generation of a program can be reused in the next, even if the underlying platform is different. This is particularly important as OEMs are increasingly taking control of car HMI development. Reducing duplication of work means significant cost savings and more focus on branded UX differentiation. Qt Automotive Suite 2.0 will integrate well with the industry's leading initiatives such as GENIVI and AGL, further increasing reusability at the platform level for the entire industry.
At its core, Qt IVI is built around a pattern based on the separation of the API facing the application developer, the so-called Feature, from the code implementing it, the Backend. There can be multiple backends per Feature, and the Core module provides support for finding the corresponding backend in an easy-to-use way.
Common use cases driving this separation are:
The module provides an extendable set of reference APIs for automotive features. It can be used to develop automotive applications and to provide automotive features to Qt-based applications in a structured manner.
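The Feature/Backend separation described above can be illustrated with a toy sketch in plain Python (Qt IVI's actual API is C++/QML and differs in the details; the class names here are made up):

```python
# Toy illustration of the Feature/Backend separation pattern; not
# Qt IVI's actual API, just the shape of the idea.

class ClimateBackend:
    """Interface a platform backend must implement."""
    def target_temperature(self):
        raise NotImplementedError

class SimulationBackend(ClimateBackend):
    """Ready-made stand-in used when no real vehicle is available."""
    def target_temperature(self):
        return 21.5  # canned value

class ClimateFeature:
    """The stable API the application developer codes against."""
    def __init__(self, backend):
        self._backend = backend

    @property
    def target_temperature(self):
        return self._backend.target_temperature()

# The application only ever sees the Feature; swapping the backend
# (simulation vs. real middleware) needs no application changes.
climate = ClimateFeature(SimulationBackend())
```

The design choice is the same as in the text: the application talks to the Feature, while the backend can be replaced per platform or per test environment without touching application code.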
We added a way to describe interfaces using an IDL (interface definition language) and then generate Qt/QML API code based on this definition. We use QFace IDL and its libraries, which provide a generic auto-generation framework. In addition to the generic integration of a QFace-based auto-generation into QtIVI Core, we migrated the Climate and Vehicle Settings APIs to be based on IDL.
From the interface definition, the tooling generates the QML and C++ APIs, code that allows properties (i.e. values) to be inspected and overridden by other tools such as GammaRay and Squish, and the code for a simulator that will allow a developer to observe and control vehicle data values in the absence of a real vehicle to test on. This is useful not only for debugging but also for automated testing.
Figure 2: Qt IVI auto generator architecture
The grey boxes are the only parts that the customer needs to implement. The visible part (to the end user) is the application that uses the vehicle data. The non-visible part is the communication with the vehicle data service, which is typically an independent process that communicates via an IPC.
The Qt Automotive Suite deeply integrates GammaRay into Qt Creator, which allows for runtime introspection, visualization and manipulation of internal structures such as scene graphs and state machines. This can provide insight into the running system to diagnose those hard problems and understand where memory is being used.
In Automotive Suite 2.0, our partner KDAB has added new texture inspection capabilities, improved Qt Quick layout diagnostics, support for QML binding analysis, and many other improvements. In addition, GammaRay’s performance has been improved to reduce the analysis impact on the target application and increase the responsiveness of the remote views. Make sure to have a read about GammaRay.
Figure 3: GammaRay can identify resource waste and even suggest remedy
Since the initial release of Qt Automotive Suite, Qt Quick Controls 2.0 was released, providing significant improvements in performance and more flexible styling. The Neptune reference UI has been upgraded to use Qt Quick Controls 2.0. The UI initialization supports staged loading for better performance. There is a new Notification framework with a Notification Center, and an Application Expose that allows stopping apps. Check out the documentation.
After 2.0, we will provide a new UX design and implementation developed by our partner Luxoft. Some of you might have seen it already at the CES or at the Embedded World shows this year.
Figure 4: New Neptune UI
The future System UI highlights Qt’s unique window compositing capabilities. Take a look at the Phone, Maps and Player app widgets: they run as separate apps and are composited together on one screen, managed by the System UI that acts as the Application Launcher and HVAC control.
Qt Automotive Suite 2.0 is now more mature, and one aspect is that we now have much better documentation for all the existing and new tools, components, and APIs.
Let us reiterate this. Many times, parts of the system functionality will be delivered by second and third parties. Or third parties may want to develop apps specifically for your platform. The Qt Automotive Suite makes it easy to build a redistributable SDK that contains your specific HMI assets and added middleware together with the Qt tooling to allow 3rd parties to build and test their apps with minimal intervention.
The Qt Automotive Suite will be developed in the same open manner as Qt itself. The code is available at http://code.qt.io/cgit/ and there is an automotive specific mailing list (http://lists.qt-project.org/mailman/listinfo/automotive) for discussions on engineering and product direction.
With Qt Automotive Suite 2.0, we now provide a truly end-to-end solution for the digital cockpit HMI development, bringing designers and developers to work under the Qt ecosystem, maximizing consistent digital UX for all screens while equipped with Functional Safety. With reduced design-development roundtrip and underlying cost, OEMs can now focus more on HMI innovation and brand differentiation, which is a win for the whole industry and consumers like you and us.
A few users have asked for SD card support, and hopefully the basic version of that will be merged soon. This feature appears to be more firmware dependent, so it may take some time before each of our target firmwares has full support for SD cards. The other feature being worked on is command injection. I have been thinking about adding this for a long time and I finally have a decent set of changes to support this feature in AtCore. I can hear some of you out there asking what exactly I mean by command injection, so I’ll explain. This feature will allow you to place specific commands for AtCore into the gcode file. There is no need to worry about the commands breaking your gcode file, since they are added in a way that makes them invisible to other hosts. You can trigger temperature changes, speed changes, pauses and more in this way. Using a few injected codes I was able to modify the test piece I print into a print made of ABS with a TPU core. I didn’t have to touch AtCore’s GUI other than to resume after the auto-initiated pause(s), and I can make prints with multiple colors and materials.
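To illustrate the general idea (this is a hypothetical sketch, not AtCore's actual syntax or implementation): host commands can hide inside gcode comments, which printers and other hosts ignore, so the file still prints normally elsewhere.

```python
# Hypothetical sketch of the command-injection idea, not AtCore's
# actual syntax or code: host commands hide inside gcode comments
# (everything after ";"), which printers and other hosts ignore.
# The "atcore:" marker below is an illustrative invention.

def split_line(line):
    """Return (gcode, injected_command_or_None) for one gcode line."""
    code, _, comment = line.partition(";")
    comment = comment.strip()
    injected = None
    if comment.startswith("atcore:"):
        injected = comment[len("atcore:"):].strip()
    return code.strip(), injected
```

A host that understands the marker acts on the injected command (pause, change temperature, etc.); any other host sees only an ordinary comment.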
Papo Livre is a podcast that has been energizing the free software scene in Brazil. Run by friends Antonio Terceiro, Paulo Santana and Thiago Mendonça, the project is almost 1 year old and has brought listeners plenty of information and interviews with Brazilians who created or participate in the most diverse free software projects.
A few weeks ago I was on the show giving an interview about KDE, and I talked a lot: I commented on the history of KDE, how the project went from a desktop environment to a community interested in running free software on the most diverse devices, release cycles, how to contribute, how the community is structured in Brazil, and much more. I also talked about more personal topics, such as how I got started in KDE and what I do there, among other things.
The reception of the episode has been very good, so give it a listen and comment below on what you thought; I will be very grateful for your feedback.
And don't miss the other episodes of Papo Livre.