March 17, 2019

The old among us might remember KDE4 and something one could call the “Pink Phase”, when people explored how “interesting” they could make their digital workplace by applying certain colors and themes… don’t most of us sometimes need a fluffy and color-intensive world to escape to, if only to learn to value reality again when things get too fluffy? 🙂

The most outstanding work in that direction was once done by Florian Schepper, who created the Plasma theme “Fluffy Bunny”, which won hearts at first sight. Sadly though, the theme bundle got lost, was recovered, only to disappear from the stores again over time. Time to repeat that, at least the recovering part 🙂

And so last week the internet and local backups were scanned to restore the theme, with quick first success:

Well, besides the regressions Plasma 5 has compared to the old Plasma 🙂 With only thin, non-tiled border themes in fashion in recent years, current Plasma sadly has some issues; it also assumes that panel borders are rectangular when creating the default BlurBehind mask. Some first patches (1, 2, 3) are already under review.

Get the initial restored version from your Plasma desktop via System Settings/Workspace Theme/Plasma Theme/Get new Plasma Themes…/search for “Fluffy Bunny”, or via the web interface at store.kde.org, and enjoy a bit of fluffy Plasma 🙂

Next up: restoring the “Plasma Bunny” theme from the once drafted “Fluffy” Linux distribution… less fluffy, but more pink!
Update: A first version of the newly named “Unicorn” theme is now up in the store for your entertainment.

March 16, 2019

Week 62 for KDE’s Usability & Productivity initiative is here, and we didn’t let up! We’ve got new features, bugfixes, more icons… we’ve got everything! Take a look:

New Features

Bugfixes & Performance Improvements

User Interface Improvements

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a donation to the KDE e.V. foundation.

Now that we have a way to access realtime public transport data, this needs to be integrated into KDE Itinerary. There are three use-cases being looked at so far, described below.

Realtime information

The first obvious use-case is displaying delays, platform changes, etc. in the timeline and reservation details views, and notifying about such changes. This is already implemented for trains based on KPublicTransport, and to a very limited extent (gate changes) for flights, using KPkPass for Apple Wallet boarding passes that contain a working update API endpoint.

KDE Itinerary showing train delay and platform changes Train delay and platform changes in the train trip details page

For KDE Itinerary to check for changes you either need to use the “Check for Updates” action in the global action drawer, or enable automatic checking in the settings. KDE Itinerary will not reach out to online services on its own by default.

When enabled, automatic polling tries to adapt the polling frequency to how far away an arrival or departure is, so you get current information within minutes without wasting battery or bandwidth. This might still need a bit of fine tuning and/or support for more corner cases (a departure delay past the scheduled arrival time was one such case, for example), so feedback from use in practice is very much welcome.
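As an illustration of the idea, such an adaptive schedule could look like the sketch below. This is simplified and not KDE Itinerary’s actual implementation; the function name and the thresholds are made up:

```cpp
// Hypothetical sketch: choose how often to poll for realtime updates
// based on how far away the next departure or arrival is, so imminent
// trips are refreshed often and distant ones only rarely.
int pollIntervalMinutes(int minutesUntilEvent)
{
    if (minutesUntilEvent < 60)      // imminent or ongoing: every few minutes
        return 5;
    if (minutesUntilEvent < 24 * 60) // within a day: hourly
        return 60;
    return 12 * 60;                  // far away: twice a day
}
```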

Whenever changes are found, KDE Itinerary will also trigger a notification, which should work on all platforms.

KDE Itinerary notifying about a train delay Train delay notification on Android

Querying online services for realtime data might yield additional information that was previously not included in the timeline data, for example geo coordinates for all stations along the way. KDE Itinerary will try to augment the existing data with whatever new information we come across this way, so having realtime data polling enabled will also result in more navigation options and weather forecasts for more locations being shown.

Alternative connections

The second integration point currently being worked on is selecting alternative train connections, for example when you have missed a connection, or when you have an unbound reservation that isn’t tied to a specific trip to begin with.

This is available in the context drawer on the details page of the corresponding train reservation. KDE Itinerary will then query for journeys along the same route as the current reservation, and allows you to pick one of the results as the new itinerary for this trip. Realtime information is of course shown here too, if available.

KDE Itinerary showing alternative train connection options. Three alternative connections options for a train trip, the first is expanded to show details

While displaying the connections already works, actually saving the selected result is still missing. Nevertheless, feedback is already useful to see whether sensible results are returned for existing bookings.

Filling gaps

The third use-case is filling gaps in the itinerary, such as how to get from the airport to the hotel by local transport. Or similarly, determining when you have to leave from your current location to make it to the airport in time for your flight. This would result in additional elements in the timeline containing suggested public transport routes.

Implementation of this hasn’t started yet; technically it’s a variation of the journey query already used in the previous point. The bigger challenge will therefore likely be presenting this in a usable and useful way.

Contribute

This is all work in progress, so it’s the best point in time to influence things; any input or help is very much welcome, of course. See our Phabricator workboard for what’s on the todo list, for coordinating work, and for collecting ideas. For questions and suggestions, please feel free to join us on the KDE PIM mailing list or in the #kontact channel on Freenode or Matrix.

If you happen to know a source for realtime flight information that could be usable by Free Software without causing recurring cost I’d also be thankful for hints :)

March 15, 2019

I am pleased to announce that the second patch release of Qt 5.12 LTS, Qt 5.12.2, is released today. While not adding new features, the Qt 5.12.2 release provides a number of bug fixes and other improvements.

Compared to Qt 5.12.1, the new Qt 5.12.2 contains more than 250 bug fixes. For details of the most important changes, please check the Change files of Qt 5.12.2.

With Qt 5.12.2 we bring back the widely requested MinGW 32-bit prebuilt binaries, in addition to the 64-bit ones.

Qt 5.12 LTS will receive many more patch releases throughout the coming years, and we recommend all actively developed projects migrate to Qt 5.12 LTS. Qt 5.9 LTS is currently in the ‘Strict’ phase and receives only selected important bug and security fixes, while Qt 5.12 LTS currently receives all bug fixes. With Qt 5.6 support ending in March 2019, all active projects still using Qt 5.6 LTS should now migrate to a later version of Qt.

Qt 5.12.2 is available via the maintenance tool of the online installer. For new installations, please download the latest online installer from the Qt Account portal or from the qt.io Download page. Offline packages are available for commercial users in the Qt Account portal and at the qt.io Download page for open-source users. You can also try out the commercial evaluation option from the qt.io Download page.

The post Qt 5.12.2 Released appeared first on Qt Blog.

I’ve been playing around with type meta-tagging for my Voy reactive streams library (more on that some other time) and realized how useful it is to be able to check whether a given type is an instantiation of some class template, so I decided to write a short post about it.

Imagine you are writing a generic function, and you need to check whether you are given a value or a tuple so that you can unpack the tuple before doing anything with it.

If we want to check whether a type T is an instance of std::tuple or not, we can create the following meta-function:

template <typename T>
struct is_tuple: std::false_type {};

template <typename... Args>
struct is_tuple<std::tuple<Args...>>: std::true_type {};

The meaning of this code is simple:

  • By default, we return false for any type we are given
  • If the compiler is able to match the type T with std::tuple<Args...> for some list of types, then we return true.

To make it easier to use, we can implement a _v version of the function like the meta-functions in <type_traits> have:

template <typename T>
constexpr bool is_tuple_v = is_tuple<T>::value;

We can now use it to implement a merged std::invoke+std::apply function which calls std::apply if the user passes in a tuple, and std::invoke otherwise:

#define FWD(x) std::forward<decltype(x)>(x)

template <typename F, typename T>
auto call(F&& f, T&& t)
{
    if constexpr (is_tuple_v<std::decay_t<T>>) {
        return std::apply(FWD(f), FWD(t));
    } else {
        return std::invoke(FWD(f), FWD(t));
    }
}

Note that T is decayed before the check: with the forwarding reference T&&, passing an lvalue tuple deduces T as a reference type, which would not match the is_tuple specialization otherwise.
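Putting the pieces together, here is a self-contained version (repeating the definitions above so it compiles on its own):

```cpp
#include <functional>   // std::invoke
#include <tuple>        // std::tuple, std::apply
#include <type_traits>  // std::false_type, std::true_type, std::decay_t
#include <utility>      // std::forward

#define FWD(x) std::forward<decltype(x)>(x)

template <typename T>
struct is_tuple : std::false_type {};

template <typename... Args>
struct is_tuple<std::tuple<Args...>> : std::true_type {};

template <typename T>
constexpr bool is_tuple_v = is_tuple<T>::value;

// Decaying T first means lvalue tuples (where T deduces to a reference
// type) are detected as tuples too.
template <typename F, typename T>
auto call(F&& f, T&& t)
{
    if constexpr (is_tuple_v<std::decay_t<T>>) {
        return std::apply(FWD(f), FWD(t));   // tuple: unpack the elements
    } else {
        return std::invoke(FWD(f), FWD(t));  // plain value: pass through
    }
}
```

For example, `call(add, std::make_tuple(2, 3))` dispatches to std::apply and unpacks the two arguments, while `call(negate, 4)` dispatches to std::invoke.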

Up one level

The previous meta-function works for tuples. What if we needed to check whether a type is an instance of std::vector or std::basic_string?

We could copy the previously defined meta-function, and replace all occurrences of “tuple” with “vector” or “basic_string”. But we know better than to do copy-paste-oriented programming.

Instead, we can increase the level of templatedness.

For STL algorithms, we use template functions instead of ordinary functions to allow us to pass in other functions as arguments. Here, we need to use template templates instead of ordinary templates.

template <template <typename...> typename Template,
          typename Type>
struct is_instance_of: std::false_type {};

template <template <typename...> typename Template,
          typename... Args>
struct is_instance_of<Template, Template<Args...>>: std::true_type {};

template <template <typename...> typename Template,
          typename Type>
constexpr bool is_instance_of_v = is_instance_of<Template, Type>::value;

The template <template <typename...> typename> part allows us to pass template names, instead of template instantiations (concrete types), to a template meta-function.

We can now check whether a specific type is an instantiation of a given template:

static_assert(is_instance_of_v<std::basic_string, std::string>);
static_assert(is_instance_of_v<std::tuple, std::tuple<int, double>>);

static_assert(!is_instance_of_v<std::tuple, std::vector<int>>);
static_assert(!is_instance_of_v<std::vector, std::tuple<int, double>>);

A similar trick is used alongside void_t to implement the detection idiom which allows us to do some compile-time type introspection and even simulate concepts.

I’ll cover the detection idiom in some of the future blog posts.


You can support my work on Patreon, or you can get my book Functional Programming in C++ at Manning if you're into that sort of thing.

March 14, 2019

We’re deep in bug fixing mode now, because in May we want to release the next major version of Krita: Krita 4.2.0. It will bring a host of new features, a plethora of bug fixes and performance improvements, but one thing is unique: support for painting in HDR mode. Krita is the very first application, open source or proprietary, to offer this!

So, today we release a preview version of Krita 4.2.0 with HDR support baked in, so you can give the new functionality a try!

Of course, at this moment, only Windows 10 supports HDR monitors, and only with some very specific hardware. Your CPU and GPU need to be new enough, and you need to have a monitor that supports HDR. We know that the brave folks at Intel are working on HDR support for Linux, though!

What is HDR?

HDR stands for High Dynamic Range. The opposite is, of course, Low Dynamic Range.

Now, many people, when hearing the word “HDR”, will think of the HDR mode of their phone’s camera. Those cameras merge images taken at different exposure levels into one image to create, within the small dynamic range of a normal image, the illusion of a high dynamic range image, often with quite unnatural results.

This is not that! Tone-mapping is old hat. These days, manufacturers are bringing out new monitors that can go much brighter than traditional monitors, up to 1000 nits (a unit of brightness), or even brighter for professional monitors. And modern systems, with Intel 7th generation Core CPUs and later, support these monitors.

And it’s not just brightness: these days most normal monitors are manufactured to display the sRGB gamut. This is fairly limited, and lacks quite a bit of the greens (some professional monitors have a wider gamut, of course). HDR monitors use a far wider gamut, the Rec. 2020 colorspace. And instead of traditional exponential gamma correction, they use the Perceptual Quantizer (PQ), which not only extends the dynamic range to sun-bright values, but also allows encoding very dark areas not available in usual sRGB.
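To give an idea of how PQ differs from a plain gamma curve, here is a sketch of the PQ inverse EOTF as defined in SMPTE ST 2084. This is purely illustrative, not code from Krita; it maps linear luminance, normalized so that 1.0 equals 10,000 nits, to an encoded [0, 1] signal value:

```cpp
#include <cmath>

// SMPTE ST 2084 (PQ) inverse EOTF. Constants are the ones given in the
// ST 2084 specification.
double pqInverseEotf(double Y)
{
    const double m1 = 2610.0 / 16384.0;
    const double m2 = 2523.0 / 4096.0 * 128.0;
    const double c1 = 3424.0 / 4096.0;
    const double c2 = 2413.0 / 4096.0 * 32.0;
    const double c3 = 2392.0 / 4096.0 * 32.0;

    const double y = std::pow(Y, m1);
    return std::pow((c1 + c2 * y) / (1.0 + c3 * y), m2);
}
```

The curve spends most of its code values on dark and mid tones, which is exactly what lets PQ cover both near-black detail and sun-bright highlights in one signal.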

And finally, many laptop panels only support 6 bits per channel; most monitors only 8 bits, monitors for graphics professionals 10 bits per channel — but HDR monitors support from 10 to 16 bits per channel. This means much nicer gradations.

It’s early days, though, and from a developers’ point of view, the current situation is messy. It’s hard to understand how everything fits together, and we’ll surely have to update Krita in the future when things straighten out, or if we discover we’ve misunderstood something.

So… The only platform that supports HDR is Windows 10, via DirectX. Linux, nope; OpenGL, nah; macOS, not likely. Since Krita speaks OpenGL, not DirectX, we had to hack the OpenGL-to-DirectX compatibility layer, ANGLE, to support the extensions needed to work in HDR. Then we had to hack Qt to make it possible to convert the UI (things like buttons and panels) from sRGB to p2020-pq, while keeping the main canvas unconverted. We had to add, of course, an HDR-capable color selector. All in all, quite a few months of hard work.

That’s just the technical bit: the important bit is how people actually can create new HDR images.

So, why is this cool?

You’ve got a wider range of colors to work with. You’ve got a wider range of dark to light to work with: you can actually paint with pure light, if you want to. What you’re doing when creating an HDR image is, in a way, not painting something as it should be shown on a display, but as the light falls on a scene. There’s so much new flexibility here that we’ll be discovering new ways to make use of it for quite some time!

If you had an HDR-compatible setup, and a browser that supported HDR, this video could be in HDR:

(Image by Wolthera van Hövell tot Westerflier)

And how do I use it?

Assuming you have an HDR-capable monitor, a DisplayPort 1.4 or HDMI 2.0a (the ‘a’ is important!) or higher cable, the latest version of Windows 10 with WDDM 2.4 drivers, and a CPU and GPU that support this, this is how it works:

You have to switch the display to HDR mode manually, in the Windows settings utility. Now Windows will start talking to the display in p2020-pq mode. To make sure that you don’t freak out because everything looks weird, you’ll have to select a default SDR brightness level.

You have to configure Krita to support HDR. In the Settings → Configure Krita → Display settings panel you need to select your preferred surface. You’ll also want to select the HDR-capable small color selector from the Settings → Dockers menu.

To create a proper HDR image, you will need to make a canvas using a profile with the Rec. 2020 gamut and a linear tone-response curve: “Rec2020-elle-V4-g10.icc” is the profile you need to choose. HDR images are standardized to use the Rec. 2020 gamut and the PQ TRC. However, a linear TRC is easier to edit images in, so we don’t convert to PQ until we’re satisfied with our image.

Krita’s native .kra file format can save HDR images just fine. You should use that as your working format. For sharing with other image editors, you should use the OpenEXR format. For sharing on the Web, you can use the expanded PNG format. Since all this is so new, there’s not much support for that standard yet.

You can also make HDR animations… which is pretty cool! And you can export your animations to mp4 with H.265. You need a version of FFmpeg that supports H.265, and after telling Krita where to find that, it’s simply a matter of:

  • Have an animation open.
  • Select File → Render Animation
  • Select Video
  • Select Render as MPEG-4 video or Matroska
  • Press the configure button next to the fileformat dropdown.
  • Select at the top ‘H.265, MPEG-H Part 2 (HEVC)’
  • Select for the Profile: ‘main10’.
  • The HDR Mode checkbox should now be enabled: toggle it.
  • Click ‘HDR Metadata’ to configure the HDR metadata
  • Finally, when done, click ‘render’.

If you have a 7th-gen Core CPU or later with integrated Intel graphics, you can take advantage of hardware-accelerated encoding to save time in the export stage: FFmpeg does that for you.

Download

Sorry, this version of Krita is only useful for Windows users. Linux graphics developers, get a move on!

Windows

XlsxWriter is a Python module for creating files in xlsx (MS Excel 2007+) format. It is used by certain Python modules some of our customers needed (such as the OCA report_xlsx module).

This module is available on PyPI but was not packaged for Fedora. I decided to maintain it in Fedora and created a package review request, which was helpfully reviewed by Robert-André Mauchin.

The package, providing a Python 3 compatible module, is available for Fedora 28 onwards.

March 13, 2019

Last November I was invited to speak at the LinuxPiter conference. I gave a presentation on the UBports project, to which I still contribute in my little spare time.

The video recording from the conference has finally been published:

(there's also a version in Russian)

There was not a big audience, to be honest, but those that were there expressed a lot of interest in the project.

March 11, 2019

Okteta, a simple editor for the raw data of files, has been released in version 0.26.0. The 0.26 series mainly brings a clean-up of the public API of the provided shared libraries. The UI & features of the Okteta program have been kept stable, with one new feature added: a context menu is now available in the byte array viewer/editor.

Since the port to Qt5 & KF5, Okteta has not seen work on new features. Instead, some rework of the internal architecture has been started, and is still ongoing.

With this release a small feature is added again, and thus the chance to pick up on the good tradition of the series of all-new-features-in-a-picture, as done for 0.9, 0.7, 0.4, 0.3, and 0.2. See at one quick glance what is new since 0.9 (sic):

After many delays, we finally think the Timeline Refactoring branch is ready for production. This means that in the next days major changes will land in Kdenlive’s git Master branch scheduled for the KDE Applications 19.04 release.

A message to contributors, packagers and translators

A few extra dependencies have been added since we now rely on QML for the timeline as well as QtMultimedia to enable the new audio recording feature. Packagers or those compiling themselves, our development info page should give you all information to successfully build the new version.

We hope everything goes smoothly and will be having our second sprint near Lyon in France next week to fix the remaining issues.

We all hope you will enjoy this new version, more details will appear in the next weeks.

Stay tuned!

March 10, 2019

Introduction and approach

Fuzz-testing, also called fuzzing, is an essential tool in the tool-box of software developers and testers. The idea is simple yet effective: throw a huge amount of random test inputs at the target program until you manage to get it to crash or otherwise misbehave. These crashes often reveal defects in the code, overlooked corner-cases that are at best annoying for the end-user if she stumbles upon them, or at worst dangerous if the holes have security implications. As part of our refactoring efforts on the main components of Kdenlive, this is one of the tools we wanted to use to ensure as much stability as possible.

One of the most commonly used fuzzing libraries is called LibFuzzer, and it is built upon LLVM. It has already helped find thousands of issues in a wide range of projects, including well-tested ones. LibFuzzer is a coverage-based fuzzer, which means that it attempts to generate inputs that create new execution paths. That way, it tries to cover the full scope of the target software, and is more likely to uncover corner-cases.
Building a library (in this case Kdenlive’s core library) with the correct instrumentation to support fuzzing is straightforward: with Clang, you simply need to pass the flag -fsanitize=fuzzer-no-link. And while we’re at it, we can also add Clang’s extremely useful Address Sanitizer with -fsanitize=fuzzer-no-link,address. This way, we are going to detect any kind of memory malfunction as soon as it occurs.

Now that the library is ready for fuzzing, we need to create a fuzz target. That corresponds to the entry point of our program, to which the fuzzer is going to pass the random inputs. In general, it looks like this:

// fuzz_target.cc
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size) {
    DoSomethingWithData(Data, Size);
    return 0;
}

Now, the challenge is to come up with a good fuzzing function. Utilities that read from stdin or from an input file, like ffmpeg, any compression tool, any conversion tool, etc., are easy to fuzz, since we just need to fuzz the data we feed them. In the case of Kdenlive, we also read project files, but this represents only a tiny amount of what a full editing application is supposed to do, and furthermore our project-opening logic is mostly deferred to third-party libraries. So, how to fuzz the interesting parts of Kdenlive? Well, if you look at it from a distance, Kdenlive can more or less be summed up as “just” a (rich) GUI sitting on top of existing video manipulation libraries. That means that our most prominent source of inputs is the user: at its core, what Kdenlive must excel at is handling any kind of action the user may want to perform.

During the rewrite of our core modules, we changed the architecture a bit so that there is a clear separation between the model, which handles all the actual logic, and the view (written in QML), which is designed to be as thin as possible. Essentially, this means that any action executed by the user corresponds exactly to one or several calls to the model’s API. This makes our life easier when fuzzing: to effectively push Kdenlive to its limits, we simply need to call random model functions in a random order with random parameters.

The plan is getting clear now, but one crucial piece is missing: how do we turn the input provided by LibFuzzer (a random string) into a random sequence of model actions?

Generating a maintainable script language

One obvious idea would be to define a scripting language that maps text to actions; for example, move 0 3 4 could move clip 0 on track 3 to position 4. However, writing such a scripting language from scratch, and then an interpreter for it, is a daunting task and hard to maintain: each time a new function is added to the API it must be added to the script language as well, and any change in the API is likely to break the interpreter.

Basically, we want to generate the script language and its interpreter programmatically, in a semi-automated way. One way to do this is to use reflection: by enumerating all the methods in the API, we can figure out what is a legal operation, and interpret it correctly if it is indeed an existing operation. As of today, C++ still lacks native reflection capabilities, but there are some great libraries out there that fill this gap a little. We used RTTR, a runtime reflection library. It requires you to register the functions you want to make available: in the following snippet we register a method called “requestClipsUngroup” from our timeline model:

RTTR_REGISTRATION
{
    using namespace rttr;
    registration::class_<TimelineModel>("TimelineModel")
        .method("requestClipsUngroup", &TimelineModel::requestClipsUngroup)
               (parameter_names("itemIds", "logUndo"));
}

Note that specifying the names of the parameters is technically not required by RTTR, but it is useful for our purposes.

Once we have that, our script interpreter is much easier to write: when we obtain a string like “requestClipDoSomething”, we check the registered methods for anything matching, and if we find it, we also know which arguments to expect (their names as well as their types), so we can parse those easily as well (arguments are typically numbers, booleans or strings, so they don’t require complicated parsing).

For Kdenlive, there is one caveat though: the model is, by design, very finicky about the inputs it receives. In our example function, the first parameter, itemIds, is a list of ids of items on the timeline (clips, compositions, …). If one of the elements of the input list is NOT a known item id, the model is going to abort, because everything is checked through an assert. This behavior was designed to make sure that the view cannot sneak an invalid model call past us without our knowing about it (by getting an immediate and irrevocable crash). The problem is that this is not going to play well within a fuzzing framework: if we let the fuzzer come up with random ids, there is little chance that they will be valid, and the model is going to crash all the time, which is not what we want.
To work around this, we implemented one small addition in our interpreter: whenever an argument is some kind of object id, for example an item id, we compute a list of currently valid ids (in the example, allValidItemIds). That way, if we parse an int with value i for this argument, we send allValidItemIds[i % allValidItemIds.size()] to the model instead. This ensures that all the ids it receives are always valid.
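A minimal sketch of that remapping (simplified and illustrative; the function name is made up, and a plain vector stands in for the model’s id bookkeeping):

```cpp
#include <cstddef>  // std::size_t
#include <cstdlib>  // std::abs
#include <vector>

// Fold an arbitrary integer produced by the fuzzer onto one of the
// currently valid item ids, so the model never sees an unknown id.
int clampToValidId(int fuzzedValue, const std::vector<int>& allValidItemIds)
{
    // std::abs guards against negative fuzzer input; the modulo then
    // selects a valid index into the list of known ids.
    std::size_t idx =
        static_cast<std::size_t>(std::abs(fuzzedValue)) % allValidItemIds.size();
    return allValidItemIds[idx];
}
```

With this in place, every integer the fuzzer emits for an id argument maps to a real item, so the asserts in the model only fire for genuine logic errors, not for garbage input.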

The final step to make this interpreter perfect is to automatically create a small translation table between the long API names and shorter aliases. The idea is that the fuzzer is less likely to randomly stumble upon a complicated name like “requestClipsUngroup” than a one-letter name like “u”. In practice, LibFuzzer supports dictionaries, so it could in theory deal with these complicated names, but maintaining a dictionary is one extra hassle, so if we can avoid it, it’s probably for the best. All in all, here is a sample of a valid script:

a 
c red  20 1
c blue  20 1
c green  20 1
b 0 -1 -1 $$ 0
b 0 -1 -1 $$ 0
b 0 -1 -1 $$ 0
e 0 294 295 0 1 1 0
e 0 298 295 23 1 1 0
e 0 299 295 45 1 1 0
e 0 300 296 4 1 1 0
e 0 299 295 43 1 1 0
e 0 300 296 9 1 1 0
l 0 2 299 294 1 0
l 0 2 300 294 1 0
e 0 299 295 43 1 1 0
e 0 300 296 9 1 1 0
e 0 299 295 48 1 1 0
e 0 294 296 8 1 1 0
e 0 294 295 3 1 1 0

Generating a corpus

To work optimally, LibFuzzer needs an initial corpus: an initial set of inputs that trigger diverse behaviors.
One could write some scripts by hand, but once again that would not scale well and would not be maintainable. Luckily, we already have a trove of small snippets that call a lot of model functions: our unit tests. So the question becomes: how do we (automatically) convert our unit tests into scripts with the syntax described above?

The answer is, once again, reflection. We have a singleton class Logging that keeps track of all the operations that have been requested. We then instrument our API functions so that we can log the fact that they have been called:

bool TimelineModel::requestClipsUngroup(const std::unordered_set<int>& itemIds, bool logUndo)
{
    TRACE(itemIds, logUndo);
    // do the actual work here
    return result;
}

Here TRACE is a convenience macro that looks like this:

#define TRACE(...)                                                                                                                                             \
    LogGuard __guard;                                                                                                                                          \
    if (__guard.hasGuard()) {                                                                                                                                  \
        Logger::log(this, __FUNCTION__, {__VA_ARGS__});                                                                                                        \
    }

Note that it passes the pointer (this), the function name (__FUNCTION__) and the arguments to the logger.
The LogGuard is a small RAII utility that prevents duplicate logging in the case of nested calls. If our code looks like this:

int TimelineModel::foo(int foobaz) {
    TRACE(foobaz);
    return foobaz * 5;
}

int TimelineModel::bar(int barbaz) {
    TRACE(barbaz);
    return foo(barbaz - 2);
}

If bar is called, we want only one logging entry, and we discard the one that would result from the inner foo call. To this end, the LogGuard prevents further logging until it is deleted, which happens when it goes out of scope, i.e. when bar returns. Sample implementation:

class LogGuard{
public:
    LogGuard()
        : m_hasGuard(Logger::start_logging()) {}
    ~LogGuard()
    {
        if (m_hasGuard) Logger::stop_logging();
    }
    // @brief Returns true if we are the top-level caller.
    bool hasGuard() const { return m_hasGuard; }
protected:
    bool m_hasGuard = false;
};

Once we have a list of the function calls, we can generate the script by simply dumping them in a format consistent with what the interpreter expects.
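A stripped-down sketch of that dump step (illustrative only, not Kdenlive’s actual Logger class): each top-level call is recorded as one script line in the alias syntax shown earlier, and the whole log can then be written out as a corpus file.

```cpp
#include <sstream>
#include <string>
#include <vector>

// Minimal call logger sketch: every traced call becomes one line of the
// script, "<alias> <arg> <arg> ...".
class ScriptLogger
{
public:
    template <typename... Args>
    void log(const std::string& alias, Args... args)
    {
        std::ostringstream line;
        line << alias;
        ((line << ' ' << args), ...); // C++17 fold over the arguments
        m_lines.push_back(line.str());
    }

    // Concatenate all recorded lines into a script ready for the fuzzer.
    std::string dump() const
    {
        std::string script;
        for (const auto& l : m_lines)
            script += l + "\n";
        return script;
    }

private:
    std::vector<std::string> m_lines;
};
```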

This kind of corpus is very useful in practice. Here is the output of LibFuzzer after a few iterations on an empty corpus:

#1944	NEW    cov: 6521 ft: 10397 corp: 46/108b lim: 4 exec/s: 60 rss: 555Mb L: 4/4 MS: 1 ChangeBit-

The important metric is “cov”, which indicates how well we cover the full source code. Note that at this point, not a single valid API call has been made.

With a corpus generated from our unit tests, it looks like this:

#40	REDUCE cov: 13272 ft: 65474 corp: 1148/1077Kb lim: 6 exec/s: 2 rss: 1340Mb L: 1882/8652 MS: 2 CMP-EraseBytes- DE: "movit.convert"-

The coverage is more than twice as big! And at this point a lot of valid calls are made all the time.

Summary

In a nutshell, here are the steps we went through to be able to efficiently fuzz a complex application like Kdenlive:

  • Structure the code so that model and view are well separated
  • Generate a scripting language using reflection, to be able to query the model
  • Trace the API calls of the unit-tests to generate an initial script corpus
  • Fuzz the model through the script interface
  • Profit!

For us at Kdenlive, this approach has already proved useful in uncovering bugs that were not caught by our test-cases. See this commit for example: https://invent.kde.org/kde/kdenlive/commit/fcd1ccd6250aea6a977a0856a284a9ac1f5341ee. Note that our logger is able to emit either a script or a unit-test after an execution: this means that when we find a script that triggers a bug, we can automatically convert it back into a unit-test to be added to our test library!

In week 61 for KDE’s Usability & Productivity initiative, the MVP has got to be the KDE community itself–all of you. You see, Spectacle has gotten a lot of work thanks to new contributors David Redondo and Nils Rother after I mentioned on Reddit a week ago that “a team of 2-4 developers could make Spectacle sparkle in a month“. I wasn’t expecting y’all to take it so literally! The KDE community continues to amaze. But there’s a lot more, too:

New Features

Bugfixes & Performance Improvements

User Interface Improvements

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a donation to the KDE e.V. foundation.

March 07, 2019

pulseaudio-qt 1.0.0 is out!

It’s a C++ bindings library for the PulseAudio sound system, for use with the Qt framework.

It was previously part of plasma-pa but is now standalone so it can be used by KDE Connect and anyone else who wants it.

https://download.kde.org/stable/pulseaudio-qt/

sha256: a0a4f2793e642e77a5c4698421becc8c046c426811e9d270ff2a31b49bae10df pulseaudio-qt-1.0.0.tar.xz

The tar is signed by my GPG key.


I’ve released libqaccessibilityclient 0.4.0.

Changes:

  • bump version for new release
  • Revert “add file to extract strings”
  • add file to extract strings
  • Set include dir for exported library target
  • Create and install also a QAccessibilityClientConfigVersion.cmake file
  • Create proper CMake Config file which also checks for deps
  • Use imported targets for Qt libs, support BUILD_TESTING option
  • Use newer signature of cmake’s add_test()
  • Remove usage of dead QT_USE_FAST_CONCATENATION
  • Remove duplicated cmake_minimum_required
  • Use override
  • Use nullptr
  • Generate directly version
  • Add some notes about creating releases

Signed using my key: Jonathan Riddell <jr@jriddell.org> 2D1D5B0588357787DE9EE225EC94D18F7F05997E

6630f107eec6084cafbee29dee6a810d7174b09f7aae2bf80c31b2bc6a14deec libqaccessibilityclient-0.4.0.tar.xz

https://download.kde.org/stable/libqaccessibilityclient/

What is it?

Most of the stack is part of Qt 5, so nothing to worry about: that’s the part that lets applications expose their UI over DBus for AT-SPI, so they work nicely with assistive tools (e.g. Orca). In accessibility language, the applications act as “servers” and the screen reader, for example, is a client.

This library is for writing clients, i.e. applications that are assistive, such as screen readers. It currently has two users, KMag and Simon, with Plasma also taking an interest. KMag can use it to follow the focus (e.g. when editing text, it can automatically magnify the part of the document where the cursor is). For Simon Listens, the use is to let the user trigger menus and buttons by voice input.


We are happy to announce the release of Qt Creator 4.9 Beta2!

Please have a look at the blog post for the Beta for an overview of what is new in Qt Creator 4.9. Also see the change log for a more complete list of changes.

Get Qt Creator 4.9 Beta2

The open source version is available on the Qt download page under “Pre-releases”, and you can find commercially licensed packages on the Qt Account Portal. Qt Creator 4.9 Beta2 is also available under Preview > Qt Creator 4.9.0-beta2 in the online installer. Please post issues in our bug tracker. You can also find us on IRC on #qt-creator on chat.freenode.net, and on the Qt Creator mailing list.

The post Qt Creator 4.9 Beta2 released appeared first on Qt Blog.

KDevelop 5.3.2 released

Today we provide a stabilization and bugfix release, version 5.3.2. This is a bugfix-only release, which introduces no new features and as such is a safe and recommended update for everyone currently using KDevelop 5.3.1.

You can find the updated Windows 32- and 64-bit installers, the Linux AppImage, as well as the source code archives on our download page.

kdevelop

  • Don't call clear() on a shared pointer we don't own. (commit. fixes bug #403644)
  • Workaround the bug found by ASan, which can be seen on FreeBSD CI. (commit. code review D18463)
  • Kdev-clazy: use canonical paths. (commit. code review D15797)
  • Prevent the Extra Arguments ComboBox to Stretch Too Much. (commit. code review D18414)
  • CMake plugin: don't hardcode a default install prefix. (commit. code review D17255)
  • Appimage: skip unneeded cp of cmake, removed later again. (commit. code review D18175)
  • Clang plugin: Handle CUDA files better. (commit. code review D17909)
  • Clang: detect Clang builtin dirs at runtime on Unix. (commit. code review D17858)
  • Actually cleanup the duchain from the background thread. (commit. fixes bug #388743)
  • Appimage: add okteta libs, as used by the debugger memory view. (commit. code review D17888)
  • Grepview: Fix potential crash in "Find in Files". (commit. fixes bug #402617)
  • Add All Top-Level Targets to the Menu. (commit. code review D18021)
  • Show "Move into Source" action in code menu. (commit. code review D17525)
  • QuickOpen: Trim whitespace from input. (commit. code review D17885)
  • Update kdevelop app icon to latest breeze-icons version. (commit. code review D17839)
  • Appimage: have only kdevelop appdata in the appimage. (commit. code review D17582)
  • Fix first run of appimage creation: get install_colorschemes.py via $SRC. (commit. code review D17581)
  • Fix crash in documentation view. (commit. fixes bug #402026)
  • CMake: skip server entries with empty build system information. (commit. code review D17679)
  • 2 missing KTextEditorPluginIntegration::MainWindow slots. (commit. code review D17465)
  • Polish Purpose integration in the PatchReview plugin. (commit. code review D17424)
  • Hex editor plugin: prepare for incompatible API change of libraries from upcoming Okteta 0.26.0.

kdev-python

  • Fix crash when finding context inside annotated assignments. (commit. fixes bug #403045)

kdev-php

No changes

kfunk Thu, 2019/03/07 - 09:30

March 06, 2019

Folks offered a lot of feedback on my article on headerbars last year, some of which was skeptical or critical. I’d like to address some of the arguments people made in this follow-up:

It doesn’t really matter that there’s less visible or actual window drag space

People pointed out that you can drag on the UI controls, or use the hidden Alt-drag shortcut. Other people questioned how useful it is to drag windows around anyway, preferring to maximize everything or use window tiling keyboard shortcuts. I will address these responses:

“Just drag on the controls in the headerbar”

First of all, dragging controls to drag the window isn’t very intuitive. What other controls outside the headerbar allow you to move the window when dragging on them? Sure, you can learn it, but this isn’t the same as a good user interface that re-uses familiarity with existing elements rather than making you learn new modes.

Second, you can’t drag the window by dragging on controls that themselves implement draggable behaviors–such as tab bars, comboboxes, pop-up menus, and sliders. So those controls can’t be put on the headerbar without reducing the drag area and violating the “you can drag the window by dragging on the controls” rule. In the original post, I gave an example of Firefox’s horrendous CSD that puts draggable tabs in the headerbar, reducing the drag area for the window itself to almost nothing. Ensuring draggability by only using controls that are not themselves draggable reduces developer flexibility compared to a traditional titlebar. It’s just not a problem if you have a separate titlebar.

“Just alt-drag the window or move it with keyboard shortcuts”

This somewhat flippant answer describes a workaround, not a general-purpose solution for everyone. Most users drag their windows around all the time and probably don’t know any of those shortcuts. For them, adequate window draggability is important–especially if your desktop happens to be targeting these people as the #1 user group.

“Just maximize the window and then there’s plenty of visible drag space on the headerbar”

It depends on the implementation (e.g. Firefox’s CSD is perfectly capable of having almost no drag space when maximized), but this is broadly true. However, what if you’re not the kind of user who maximizes all of their windows? In any event, window draggability is least important for maximized windows. The whole point of dragging a window around is to move it somewhere else, which requires that it be smaller than the area in which it’s moved.

Taken together, these issues demonstrate that reduced draggability reduces developer flexibility and can be a real problem. It can’t just be dismissed as much ado about nothing.

You don’t need menubars anyway

A lot of people expressed many variants of this argument:

“You don’t need menubars anymore because actions and commands can be located inline, available contextually”

This works, but imposes a high bar in UI design, and results in frustrating and awkward software when implemented by a team without at least one person with A+ UI design skills who everyone else actually listens to. This approach is also very labor-intensive and bug-prone as by definition it’s custom-made for each app and requires a lot of code. It may not scale well for content with many actions you can perform on it, since there’s a limited amount of space in the content view to put contextual actions. It furthermore throws away users’ existing familiarity and muscle memory; for example when cut/copy/paste/find are always available in the same place in the Edit menu, users never need to re-learn how to cut, copy, and paste, and find text in each app. Finally, this approach doesn’t help the user learn keyboard shortcuts.

So yes, this approach works, but results in significant drawbacks. It’s definitely not a 100% slam-dunk superior user interface.

“Menubars are overkill for simple apps”

This is a reasonable argument. Most of the items in the average menu bar pertain to text manipulation, file handling, and view adjustment. An app that doesn’t have any text manipulation or file handling can probably get away with putting what’s left in toolbar buttons. In fact KDE’s Discover already does this, for just those reasons. A number of other simple mouse-centric KDE apps like the games could probably do the same.

But the thing is, you don’t need to implement a CSD headerbar to get rid of your menubar! You can do it with a traditional toolbar and a traditional titlebar, and gain all the advantages of those individual user interface elements: guaranteed space to drag the window, a legible window title, a user-customizable toolbar that can include draggable UI elements, and so on. No need to throw the baby out with the bathwater.

“Menubars don’t exist on mobile and mobile is the future, therefore we need to get rid of them on the desktop, or else people under 25 will perceive us as stodgy old farts and won’t want to use our apps”

I see no evidence that the under-25 crowd hates desktop apps with menubars. A lot of KDE’s most active members are young, and all fully understand the productivity benefits of real desktop apps with traditional user interfaces.

The truth is, mobile phones are really only good for communication, travel, controlling portable hardware (cameras, drones, etc) and content consumption that doesn’t benefit from a large screen. Other than these use cases, mobile apps are universally worse to use than traditional desktop apps–especially for productivity. They are slower, more awkward, have fewer features, take longer to accomplish the same tasks, are harder to multi-task with, have user interfaces that are constantly in a state of flux, and are an absolute nightmare for anything that requires precise text manipulation or formatting.

Most of these limitations are imposed by the hardware’s own input and output capabilities. This means that the only way to improve the situation is to plug in a big screen and some faster, more precise input devices–essentially turning the mobile device into an underpowered desktop computer. Not coincidentally, KDE’s Kirigami user interface toolkit explicitly supports this use case anyway.

I think Star Trek nailed the ideal mobile/desktop split decades ago: people use their mobile devices for communication, information gathering, and relaxing, but when they need to be productive, they use the huge desktop-computer-style consoles built into the walls of their ships. The bigger devices offer speed, power, precision, and good multi-tasking. When you need to get something important done, those are more important features than being able to have the device in your pocket or purse all the time.

The lesson is clear: portable mobile devices are for convenience, not productivity. Mobile isn’t going to kill the desktop any more than air freight killed rail freight. They’re just for different things.

“Menubars are old and clunky and clumsy and obsolete and a relic of the past”

If this is true, why haven’t we come up with anything better yet, after more than two decades of user interface experimentation?

  • MS Office style Ribbons take up much more space than the combination of a menubar and toolbar they replace, make it harder to find what you’re looking for when the context sensitivity feature isn’t implemented perfectly, and don’t teach the user keyboard shortcuts.
  • Hamburger menus in the toolbar have to leave out most functionality or else they become too cluttered. Most also don’t make any effort to teach the user keyboard shortcuts.
  • Action drawers that slide in from the left or right are basically just hamburger menus, with all the same drawbacks.
  • Inline controls were already addressed above.

I have yet to encounter a feature-rich, high-functionality desktop app that completely does away with the concept of the main menubar without feeling awkward, crippled, slow, or like something has been left out. I’m sure someday we will kill the menubar after we’ve invented something genuinely better that doesn’t feel like a regression in any way, but we’re not there yet.

And in the meantime, you don’t declare something to be dead before you’ve actually killed it.

The menubar isn’t dead yet for desktop apps because we haven’t yet come up with anything that’s universally better. The GTK & GNOME approach has a hamburger menu on the toolbar that holds global controls, coupled with a limited number of inline editing controls, most of which are only available via a hidden UI (the right-click context menu or undiscoverable keyboard shortcuts). This is not a solution, it’s a regression: many features must be removed or hidden to make it not visually overwhelming; the inline controls are invisible unless you think to right-click everywhere; and it’s impossible to learn keyboard shortcuts at the moment of use. This is a steep price to pay for visual cleanliness and approachability.

Displaying a title is an application-specific issue

Only when there’s no titlebar.

When the window manager guarantees your window a titlebar, app developers don’t have to reinvent the wheel by implementing custom labels to solve the same problem over and over again; they just set the title appropriately and move on.

If you implement a headerbar but want to put a title in it, it competes for space with the rest of the items in the headerbar. That means it can’t be very long unless your app has almost no UI controls exposed on the headerbar, which is a problem for any application that could benefit from showing a long title–like web browsers, where the titlebar has traditionally shown the website’s title (imagine that).

When you use a traditional discrete titlebar, you don’t have any of these challenges to solve, so you can focus on more important parts of your app.

Headerbars were meant for simple apps; it’s okay for complicated professional apps to not use them

Some people defend headerbars by arguing that the menubar is obsolete, then say that complicated apps can and should still have menubars. This doesn’t work: either menus are obsolete, or they aren’t. If they aren’t, then removing them is a mistake.

The better argument is that headerbars are only intended for very simple apps that don’t really benefit much from a menubar anyway because most of the default entries are inapplicable and would be disabled placeholders. I already acknowledged that it’s probably fine to remove the menubar for very simple apps that don’t have selectable text or do any file handling. But as I mentioned before, there’s no reason why you need to implement a CSD headerbar if your app doesn’t have a menubar.

It’s KDE’s fault for not supporting client-side decorations properly, so nobody should avoid them just to work around KDE brokenness

There is a kernel of truth here: KDE developers have indeed opted not to support GTK client-side decorations as they are currently implemented.

However, this is only because that implementation utilizes optional extensions to the standard that are very challenging for KWin to also implement. Many of KWin’s architectural design decisions reflect the original spec, and not this optional extension to it. Adding support is very technically challenging, and was judged to present an unacceptable risk of breakage.

At this point, GTK developers may be thinking, “Well, that’s not my problem that you wrote KWin in such an inflexible way.” But it’s not quite kosher to extend a spec with an optional single-vendor extension and then blame other people for not adopting it. That’s not really following the spec.

Nonetheless, there are rumblings of being able to support GTK CSDs in Wayland. For now though, the situation on X11 is unfortunate but pretty much unavoidable. You can’t break a spec and then ask everyone else to support your brokenness; it defeats the entire purpose of having a spec in the first place.

Anyone interested in more of the technical details can read through the comments in the following bug reports:

https://bugs.kde.org/show_bug.cgi?id=379637

https://bugzilla.gnome.org/show_bug.cgi?id=csd

https://gitlab.gnome.org/GNOME/gtk/issues/499

https://gitlab.gnome.org/GNOME/gtk/issues/489

https://gitlab.gnome.org/GNOME/gtk/issues/215

https://gitlab.gnome.org/GNOME/gtk/issues/488

The bottom line is that it’s not fair to blame KWin and its developers for not supporting the GTK developers’ decision to implement client-side decorations without extending the X11 window manager spec so that everybody could follow it.

I hope this post helps to clarify my position on CSDs. I suspect that these rebuttals won’t put the issue to rest because CSD headerbars have always been more about aesthetics and vertical space savings than functionality, power, and flexibility. That’s fine! All of these arguments I’m making are sort of a roundabout way of saying that I prefer the latter traits to the former ones. I think it’s a good thing that we have choice in the desktop space–real choice, where well-implemented competing visions and design goals appeal to different sorts of people. And that’s what we’re all doing here: empowering people to find what works best for them so their needs and desires can be provided by high-quality open-source software that respects their privacy, freedom, and need for highly usable and productive applications.

March 05, 2019

There has been a lot of excitement around WebAssembly, and more specifically Qt for WebAssembly, recently. Unfortunately, there are no snapshots available yet. Even once there are, you need to install a couple of requirements locally to set up your development environment.
I wanted to try it out, and the purpose of this post is to create a development environment to test a project against the current state of this port. This is where docker comes in.
Historically, docker has been used to create web apps in the cloud, allowing them to scale easily, providing implicit sandboxing, and being lightweight. Well, at least more lightweight than a whole virtual machine.
These days, usage covers many more cases:

  • Build Environment (standalone)
  • Development Environment (to create SDKs for other users)
  • Continuous Integration (run tests inside a container)
  • Embedded runtime

Containers and Embedded are not part of this post, but the highlights of this approach are:

  • Usage for application development
  • Resources can be controlled
  • Again, sandboxing / security related items
  • Cloud features like App Deployment management, OTA, etc…

We will probably get more into this in a later post. In the meantime, you can also check what our partners at Toradex are up to with their Torizon project. Also, Burkard wrote an interesting article about using docker in conjunction with OpenEmbedded.
Let’s get back to Qt for WebAssembly and how to tackle the goal. The assumption is that we have a working project, written with Qt, for another platform.
The target is to create a container capable of compiling the above project against Qt for WebAssembly. Next, we want to test the application in the browser.

1. Creating a dev environment

Morten has written a superb introduction on how to compile Qt for WebAssembly. Long-term, we will create binaries for you to use and download via the Qt Installer. But for now, we aim for something minimal, where we do not need to take care of setting up dependencies. Another advantage of using Docker Hub is that the resulting image can be shared with anyone.
The first part of the Dockerfile looks like this:

FROM trzeci/emscripten AS qtbuilder

RUN mkdir -p /development
WORKDIR /development

RUN git clone --branch=5.13 git://code.qt.io/qt/qt5.git

WORKDIR /development/qt5

RUN ./init-repository

RUN mkdir -p /development/qt5_build
WORKDIR /development/qt5_build

RUN /development/qt5/configure -xplatform wasm-emscripten -nomake examples -nomake tests -opensource --confirm-license
RUN make -j `grep -c '^processor' /proc/cpuinfo`
RUN make install

Browsing through Docker Hub shows a lot of potential starting points. In this case, I’ve selected a base image which has emscripten installed and can be used directly in the follow-up steps.
The next steps are generally a one-to-one copy of the build instructions.
For now, we have one huge container with all the build artifacts (object files, generated mocs, …), which is too big to be shared, and those artifacts are unnecessary for moving on. Some people tend to use volume sharing for this: the build happens on a mount from the host system, and make install copies the results into the image. Personally, I prefer not to clobber my host system for this part.
In later versions of Docker, one can create multi-stage builds, which allow you to create a new image and copy content from a previous one into it. To achieve this, the remaining Dockerfile looks like this:

FROM trzeci/emscripten AS userbuild

COPY --from=qtbuilder /usr/local/Qt-5.13.0/ /usr/local/Qt-5.13.0/
ENV PATH="/usr/local/Qt-5.13.0/bin:${PATH}"
WORKDIR /project/build
CMD qmake /project/source && make

Again, we use the same base container to have em++ and friends available and copy the installation content of the Qt build to the new image. Next, we add it to the PATH and change the working directory. The location will be important later. CMD specifies the execution command when the container is launched non-interactively.


2. Using the dev environment / Compile your app


The image to use for testing an application is now created. To test the build of a project, create a build directory and invoke docker like the following:


docker run --rm -v <project_source>:/project/source -v <build_directory>:/project/build maukalinow/qtwasm_builder:latest

This will launch the container, call qmake and make, and leave the build artifacts in your build directory. Inside the container, /project/build is the build directory, which is the reason for setting the working directory above.
To reduce typing this each time, I created a minimal batch script for myself (yes, I am a Windows person). You can find it here.
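For non-Windows users, a cross-platform equivalent of such a wrapper could be sketched in Python (a hypothetical helper, not the script linked above; the image name and mount points are the ones used in this post):

```python
# Hypothetical cross-platform wrapper around the docker invocation shown
# above: it assembles the same `docker run` command, mounting the source
# and build directories into the paths the image's CMD expects.
import subprocess

def build_command(source_dir, build_dir,
                  image="maukalinow/qtwasm_builder:latest"):
    return ["docker", "run", "--rm",
            "-v", f"{source_dir}:/project/source",
            "-v", f"{build_dir}:/project/build",
            image]

def run_build(source_dir, build_dir):
    # Launches the container; qmake and make run via the image's CMD.
    return subprocess.run(build_command(source_dir, build_dir)).returncode
```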


3. Test the app / Containers again

Hopefully, you have been able to compile your project, potentially after some adjustments, and now it is time to verify correct runtime behavior in the browser. What we need is a web server to serve the content created. Well, again docker can be of help here. With no specific preference, mostly just taking the first hit on the hub, you can start one by calling:


docker run --rm -p 8090:8080 -v <project_dir>:/app/public:ro netresearch/node-webserver

This will create a webserver which you can browse to from your local browser. Here’s a screenshot of the animated tiles example:

animated_dws


Also, are you aware that Qt Http Server was introduced recently? It might be an interesting idea to encapsulate it into a container and check whether additional files can be removed to minimize the image size. For more information, check our posts here.

If you want to try out the image itself you can find and pull it from here.


I’d like to close this blog post by asking for some feedback from you, the readers, and getting into some discussion.

  • Are you using containers in conjunction with Qt already?
  • What are your use-cases?
  • Did you try out Qt in conjunction with containers already? What’s your experience?
  • Would you expect Qt to provide “something” out-of-the-box? What would that be?


The post Using Docker to test Qt for WebAssembly appeared first on Qt Blog.

March 04, 2019

About me:

My name is Zlatko Mašek and I am a programmer originally from Croatia, but I live in Ireland. As you can find on my blog, I am interested in art, cooking and social change; not necessarily in that order. I have a master’s in Information Science, and in university I fell in love with Python, working on early natural language processing. Today it is my programming language of choice, and career-wise I am trying to find a more meaningful application for coding that benefits society. However, Python, as a general purpose programming language, also allows me to tinker on small projects as a hobby for myself. When it comes to art, I was always drawn to it, even if it took me years to get myself doing it. It’s currently just a hobby, not a career, and I do things both traditionally and digitally with a graphics tablet. It helps me relax and push my mind into a wandering mode.

Krita Plug-in:

Ever since Krita allowed scripting in Python, I was eyeing what I could do with it. Since it’s using Qt, and I had no previous experience with that, I wanted to learn a bit about it, because I’m programming with Python as my day job. Doing image manipulation to transform images for different usages between different systems is always a fun challenge. I wanted to switch from direct image scripting to a plug-in based workflow so I didn’t have to do too much context switching between work-time and hobby-time. Krita being cross-platform also helped, because I didn’t have to deal with installing Python on operating systems that don’t have it pre-installed. The plug-in I made is simple enough: it slices the image and prepares tiles for use in a tiling library like Leaflet. You need to make sure that you have a flattened image saved beforehand; it’s the last thing you do when preparing for an export. Also make sure that the image is rectangular if you don’t want the plug-in to crop it by itself. The plug-in is fired up by going to Tools -> Scripts -> Krita – Leaflet in the menu bar.
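To make the slicing idea concrete, here is a standalone sketch of the tile geometry in plain Python (illustration only — the actual plug-in does the scaling and cropping through Krita’s scripting API): at zoom level z the map is 2**z tiles per side, so the image is scaled to 256·2**z pixels per side and cut into 256×256 tiles stored as z/x/y.png, the layout Leaflet’s tile URL template expects.

```python
TILE = 256  # Leaflet's default tile size in pixels

def tile_plan(max_zoom):
    """Yield (relative_path, crop_box) for every tile up to max_zoom.

    crop_box is (left, top, right, bottom) in pixels of the image after
    it has been scaled to TILE * 2**z pixels per side for zoom level z.
    """
    for z in range(max_zoom + 1):
        for x in range(2 ** z):
            for y in range(2 ** z):
                box = (x * TILE, y * TILE, (x + 1) * TILE, (y + 1) * TILE)
                yield f"{z}/{x}/{y}.png", box

# Zoom 0 is a single tile covering the whole map; each additional
# zoom level quadruples the tile count (1 + 4 = 5 tiles for max_zoom=1).
```

This also shows why higher zoom levels quickly get heavy on resources: the number of tiles grows as 4**z.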

plugin-selection.png

Then you pick a folder for exporting the tiles and the zoom level. I will use a low zoom level here because the higher the zoom level, the longer it takes for the plug-in to finish processing and it’s heavier on the resource usage. You press the “Leafletize” button and wait for it to finish processing.

plugin.png

You’d obviously check the output folder then, and reload the image back to the saved version, since the processing is destructive. The tiles are 256×256 px, separated into folders that Leaflet can use. Creating basic JavaScript code to load them is trivial enough. Just to test it, you can create an index.html in the output folder and fill it with this code, which uses the CDN libraries so you don’t have to download anything:

<!doctype html>
<html lang="en">
  <head>
    <meta name="viewport" content="width=device-width, minimum-scale=1.0, initial-scale=1.0, user-scalable=yes">
    <title>Krita - Leaflet example</title>
    <link rel="stylesheet" href="https://unpkg.com/leaflet@1.4.0/dist/leaflet.css" integrity="sha512-puBpdR0798OZvTTbP4A8Ix/l+A4dHDD0DGqYW6RQ+9jxkRFclaxxQb/SJAWZfWAkuyeQUytO7+7N4QKrDh+drA==" crossorigin=""/>
    <script src="https://unpkg.com/leaflet@1.4.0/dist/leaflet.js" integrity="sha512-QVftwZFqvtRNi0ZyCtsznlKSWOStnDORoefr1enyq5mVL4tmKB3S/EnC3rRJcxCPavG10IcrVGSmPh6Qw5lwrg==" crossorigin=""></script>
    <style type="text/css">
        body {
            text-align: center;
        }
        #map {
            margin: auto;
            width: 800px;
            height: 600px;
        }
    </style>
  </head>
  <body>
      <div id="map"></div>
      <script type="text/javascript">
          var map = L.map('map').setView([0.0, -0.0], 0);
          L.tileLayer('./{z}/{x}/{y}.png', {
              maxZoom: 4,
              noWrap: true
          }).addTo(map);
      </script>
  </body>
</html>

Loading the HTML is also easy. Here, I will run Python’s built-in web server as a module; it has to be started in the same folder so it picks up the files:

python -m SimpleHTTPServer

(On Python 3, the equivalent is python -m http.server.)

You can fire up the browser of your choice to see the results at http://localhost:8000/

leaflet.png

I wanted to create this plug-in so I could tile and slice a custom map for a browser-based video game that needed map functionality. It wasn’t going to use any fancy libraries for canvas processing, just ordinary web technologies. You can get it from the Krita-Leaflet repository.

Could you tell us something about yourself?

My name is Ari Suonpää and I reside in a small town in Finland. The elements of nature are close to our home, which keeps me inspired whenever I go outside.

Do you paint professionally, as a hobby artist, or both?

Sometimes I do commissions, but this is mainly a hobby. After all, I do have a day job.

What genre(s) do you work in?

My favorite genres are fantasy and landscapes. Many times I can combine these two as fantasy landscapes are my favorite subjects.

Whose work inspires you most — who are your role models as an artist?

There are so many different artists that inspire me. Whenever I need inspiration I just browse through my favorites on DeviantArt. As for traditional painters, the most influence comes from the Hudson River School, like Albert Bierstadt and Thomas Cole. And Richard Schmid has a style that I’m still trying to learn from. And of course Bob Ross, as he showed how fun a painting process can be.

How and when did you get to try digital painting for the first time?

I bought my first Intuos tablet as a student. That was about 15 years ago. The tools, both software and hardware, have evolved a lot since then. I started to learn traditional painting around the same time. Since then I’ve been using both digital and traditional techniques. Almost everything I learn about art can be applied to both.

What makes you choose digital over traditional painting?

It’s the convenience of grabbing your laptop and pen to spend a few minutes on a painting. And the freedom of having undo. It’s so much easier to test new things when working digitally. But I think digital tools are not a full substitute for traditional media. Nothing beats the way you can handle textures using a real brush. And you don’t get the smell of real oil paint while painting digitally. When doing traditional paintings I sometimes snap a photo of the painting and try things on top of it digitally. It’s nice to be able to utilize the best of both worlds.

How did you find out about Krita?

In fact I don’t remember.

What was your first impression?

It seemed like a simple enough solution to get things done. Many of the commercial applications have so many features that it can feel overwhelming.

What do you love about Krita?

The simplicity, great brush performance, and Linux support although I paint on a Surface Book.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Nothing really annoys me about Krita. But some features would be highly appreciated: a bevel layer effect would help me add a feeling of thickness to digital brush strokes. Right now I have to use Photoshop for that. The same goes for Photoshop’s mixer brush tool. I wish Krita had an option to paint with multiple colors at the same time.

What sets Krita apart from the other tools that you use?

It’s open source which I highly appreciate. That’s the same reason I’m using Blender too.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I haven’t used Krita for that long, so I have only a few pieces to choose from. I suppose it would be my latest one, depicting a harbor and a ship. It’s the first painting fully painted in Krita where I really invested time to get things right. I used the Digital Atelier brushes throughout the whole painting process.

What techniques and brushes did you use in it?

I was trying to imitate traditional paintings by using a limited palette and the awesome Digital Atelier brush set.

Where can people see more of your work?

Facebook: https://www.facebook.com/artofarisuonpaa/
DeviantArt: https://www.deviantart.com/arisuonpaa

Anything else you’d like to share?

My suggestion for any new artist is to get your basic foundation right. Spend your time learning values, perspective, and color theory. All the art instruction for traditional painting and drawing still applies to digital tools too. Two of the most common mistakes beginner digital artists make are 1) doing the whole painting with a soft brush, or 2) shading with pure black and white. Usually it’s best to vary the hardness to get both soft and hard edges. Light and shadow always have color: just take a painting you admire and adjust the saturation to see this.

You should ask others for comments on different art sites. I learned a lot when some helpful individuals gave me constructive feedback and even did some paintovers.

March 03, 2019

It’s time for week 60 for KDE’s Usability & Productivity initiative, and this one is positively overflowing with goodies! Will you even be able to handle it? I THINK NOT!!! But read it and see:

New Features

Bugfixes & Performance Improvements

User Interface Improvements

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a donation to the KDE e.V. foundation.

Hi,
We are pleased to announce the release of GCompris version 0.96.

This new version includes updated translations for several languages and a few bug fixes.

Translations that received a big update:

  • Brazilian Portuguese (100%)
  • Breton (100%)
  • Finnish (90%)
  • Indonesian (100%)
  • Norwegian Nynorsk (97%)
  • Polish (100%)

This means we now have 19 fully supported languages: British English, Brazilian Portuguese, Breton, Catalan, Catalan (Valencian), Chinese Traditional, Dutch, French, Galician, Greek, Hungarian, Indonesian, Italian, Malayalam, Polish, Portuguese, Romanian, Swedish, Ukrainian.

We still have 15 partially supported languages: Basque (78%), Belarusian (68%), Chinese Simplified (69%), Estonian (62%), Finnish (90%), German (84%), Hindi (76%), Irish Gaelic (82%), Norwegian Nynorsk (97%), Russian (77%), Scottish Gaelic (70%), Slovak (62%), Slovenian (56%), Spanish (93%), Turkish (73%).

We decided again for this release to keep the translations that dropped below 80%. It would be sad to have to disable 10 languages from the application, but if no one updates the following translations, they will be disabled in the next release: Basque, Belarusian, Chinese Simplified, Estonian, Hindi, Russian, Scottish Gaelic, Slovak, Slovenian and Turkish.

For the Windows version, we added a new entry in the start menu called GCompris (Safe Mode) to launch it in software rendering mode. This was needed as the auto-detection of OpenGL support was not reliable. Now users can easily choose between OpenGL and software rendering without changing the configuration file.

Known issues:

  • The progress bar for downloads doesn’t work anymore. This is a side effect of our switch to https for hosting the files. We are looking to improve this for the next release.

As a side note dedicated to GNU/Linux distribution packagers, this new version now requires OpenSSL to be able to download voices and additional images.

As usual you can download this new version from our download page. It will also soon be available in the Android and Windows stores.

Thank you all,
Timothée & Johnny

March 02, 2019


On the 20th of June, KDE will be attending OpenExpo Europe 2019.

The event is held in Madrid, Spain, and the organisers have kindly given us a space on the exhibition floor. We will be showcasing the best of what KDE has to offer in the business world. This will include devices that show off the versatility and potential of Plasma and Plasma Mobile on everything — from mobiles, embedded devices, SBCs and low-powered devices (like the Pinebook) to vehicle infotainment systems and high-end ultrabooks, like the KDE Slimbook.

We will also be running videos and slide shows demonstrating the flexibility of Plasma, Plasma Mobile and all our applications on all platforms, and informing attendees how KDE Frameworks, such as Kirigami, can be useful for fast and flexible multiplatform development.

This is an event aimed mainly at businesses that want to work with other businesses, so if you or your company would like to get in touch with other IT companies, especially Spanish ones (which is what we want to do), this may be a good chance to broaden your market.

For those looking to share their knowledge, the OpenExpo Europe CfP (Call for Participation) is also still open and they are looking for speakers.

Mark your calendars, come and visit our stand, and keep up to date with all the new stuff KDE is working on!

One of the larger missing pieces for KDE Itinerary is access to dynamic data for your current mode of transport: delays or other service interruptions, gate or platform changes, or finding a specific connection for e.g. an unbound train booking. At least for ground-based public transport we have been making some progress on this in the last few months, resulting in a new library, KPublicTransport.

Data Model

The information we are talking about here is essentially how to get from A to B, and whether a given journey from A to B is on schedule. Stop and route schedule information is often found in the same context, but since we don’t need that for KDE Itinerary it isn’t considered here.

A bit less abstract and following how Navitia names things, this gives us the following objects to work with:

  • Location: A place identified by a name, geo coordinates and/or a data-source-specific identifier (such as UIC station codes). While generic places are usually supported, we are mostly interested in a specific sub-type, stop areas or stop points: that is, places where a public transport service stops, such as train stations or bus stops.
  • Lines and Routes: A line is a set of stops connected by a public transport service (e.g. “Circle line”); a route is a specific service on that line, typically identified by its direction.
  • Departures and Arrivals: That is, information about when and where a specific service arrives at or leaves from a stop.
  • Journeys: A journey consists of multiple sections which, combined, describe how to get from location A to location B using public transport services, but possibly also containing walks to and from a stop, or transfer instructions.

The properties these objects can have vary depending on the backend providing the information, ranging from basic things like human-readable names, times or geo coordinates to things like the CO2 impact of a journey or facilities found at a station or inside a train.
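As a rough illustration, the object model above could be sketched with plain data types like these (hypothetical Python for clarity only; KPublicTransport itself is a Qt/C++ library and its real API differs):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Location:
    """A place identified by name, geo coordinates and/or a backend-specific id."""
    name: str
    latitude: Optional[float] = None
    longitude: Optional[float] = None
    identifier: Optional[str] = None  # e.g. a UIC station code

@dataclass
class Route:
    """A specific service on a line, identified by its direction."""
    line_name: str  # e.g. "Circle line"
    direction: str

@dataclass
class Departure:
    """When and where a specific service leaves from (or arrives at) a stop."""
    stop: Location
    route: Route
    scheduled_time: str                  # e.g. an ISO 8601 timestamp
    expected_time: Optional[str] = None  # realtime information, if available

@dataclass
class JourneySection:
    """One leg of a journey: a public transport ride, a walk, or a transfer."""
    mode: str  # e.g. "train" or "walk"
    departure: Departure
    arrival: Location

@dataclass
class Journey:
    """A sequence of sections getting you from location A to location B."""
    sections: List[JourneySection] = field(default_factory=list)
```

Note how the optional fields mirror the point above: which properties are filled in depends entirely on what the backend provides.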

To obtain this information, we need the following query operations:

  • Given two locations and a timeframe, find a set of possible journeys.
  • Given a stop and a timeframe, query departures or arrivals.
  • Disambiguate or auto-complete locations. That’s essentially querying locations by name. This is needed if all we have as a destination is e.g. “Berlin”, which could refer to dozens of train stations.

Depending on the backend, those queries may support additional parameters, such as preferred modes of transportation or your walking speed.
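The three query operations map naturally onto a common backend interface that each Navitia instance or vendor-specific API would implement. A minimal sketch (again hypothetical Python, with a dummy in-memory backend standing in for a real service):

```python
from abc import ABC, abstractmethod
from typing import List, Tuple

class Backend(ABC):
    """Interface a public transport data source has to provide."""

    @abstractmethod
    def query_journeys(self, origin: str, destination: str,
                       timeframe: Tuple[str, str]) -> List[dict]:
        """Given two locations and a timeframe, find a set of possible journeys."""

    @abstractmethod
    def query_departures(self, stop: str,
                         timeframe: Tuple[str, str]) -> List[dict]:
        """Given a stop and a timeframe, query departures or arrivals."""

    @abstractmethod
    def query_locations(self, name: str) -> List[str]:
        """Disambiguate or auto-complete a location by name."""

class DummyBackend(Backend):
    """Trivial stand-in showing the shape of the interface."""
    STOPS = ["Berlin Hbf", "Berlin Ostbahnhof", "Berlin Südkreuz"]

    def query_journeys(self, origin, destination, timeframe):
        # A real backend would return journeys with sections, times, delays, ...
        return [{"from": origin, "to": destination, "sections": []}]

    def query_departures(self, stop, timeframe):
        return []  # no schedule data in this dummy

    def query_locations(self, name):
        return [s for s in self.STOPS if s.lower().startswith(name.lower())]

# "Berlin" alone is ambiguous, so the location query returns all candidates:
print(DummyBackend().query_locations("Berlin"))
```

An aggregator can then fan a query out to several such backends and merge the results, which is exactly where the naming and identification challenges described below come in.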

This model is largely based on the most complex case, local public transport. Long-distance railway services tend to be closer to flights (using per-day unique service identifiers), but can be represented by this as well. Flights are out of scope here.

To simplify the implementation, this is also closely following what Navitia and other vendor-specific APIs provide, rather than the needs of KDE Itinerary. We’d for example be much more interested in querying delays and disruptions for a given journey section, rather than for a given stop. This can however be achieved using the above building blocks.

Online vs Offline

All this isn’t exactly new though; the KDE4-era public transport plasmoid had this already. However, that focused on GTFS as a data source in its later stages. GTFS is a Google-defined standard format for providing the entire base schedule of a public transport network. This has the advantage of being fully offline capable once you have transferred the data set. While this somewhat works on desktop, the disk space and bandwidth cost for mobile devices might already be too high. Things get even worse when looking at GTFS Realtime, which is designed for providing live data on an entire public transport network, in the extreme case containing high-frequency position updates for all vehicles in the network.

GTFS was designed to feed this information into applications like Google Maps, not for direct consumption by every single user. From a user’s point of view we would like to only deal with the data for the journey we are interested in, to conserve disk space and bandwidth, even if that means giving up the possibility of offline support. Offline support however cannot work with realtime data anyway, and that’s one of the most valuable features here.

So, what we would need is a network service that aggregates base schedule and realtime information provided by public transport networks via GTFS or any other format, and that allows us to query for the data we are interested in. And that’s exactly what Navitia does.

Online Backends

Unfortunately reality is slightly more complicated, as it’s not enough to just support Navitia as a backend. While Navitia has extensive coverage, some important providers are missing and are not providing data that could be fed into Navitia. SNCF is one such case that is relatively easy to support, as they run their own Navitia instance. Others such as Deutsche Bahn have their own APIs.

So we need a system that can use several Navitia instances as well as vendor specific APIs as backends. While the concepts map fairly well to the services I have seen so far, aggregating results from different backends brings in new challenges such as aligning different naming and identification of lines or stops.

Implementation

The code used to be part of KDE Itinerary until this week, but has now been moved to its own repository and library, KPublicTransport. Having this as a standalone framework makes sense as there are other use-cases for it beyond the irregular travel scenario covered by KDE Itinerary, such as helping with your daily commute.

Next to the implementation, this also contains three example applications to trigger the three types of queries mentioned above, two of which are shown below. There’s also some initial API documentation.

Example application showing departure information for Brussels Gare du Midi, with local and long-distance trains and realtime platform change information.
Example application showing a journey from Vienna airport to the Akademy 2018 accommodation, with realtime delay and platform change information.

What’s next?

While KPublicTransport is at a point where you can build prototypes with it, there are of course still many things to do. Besides completing and extending the implementation, provider coverage outside of Europe is something to look into too; the current backends and test cases are all largely Europe-centric. If you come across GTFS datasets not yet integrated into Navitia’s dataset, pointing the Navitia team to those is an easy way to help. Integration into KDE Itinerary is another big subject; I’ll write a separate post about that.


Dirk Hohndel talked about his experiences with QML and Kirigami at LCA 2019:
Producing an application for both desktop and mobile

I think that's quite useful for us as feedback from somebody who is not directly part of the KDE community.

February 28, 2019

My last post shows how to create a stub Python/Kirigami app that doesn’t do anything. Time to change that! In this post we’re filling the screen with some controls.

Kirigami Pages

Kirigami apps are typically organized in Pages. Those are the different ‘screens’ of an app. If you come from the Android world you can think of them as the view part of activities. In our case we want to have an initial page that offers to enter a start and a destination and opens a new page that shows a list of possible routes. Clicking on one of the list items opens a new page with a detailed view of the connections.

Pages are organized in a page stack where pages can be pushed and popped. On a phone only the topmost page is shown, whereas on a larger screen (desktop or tablet) multiple pages can be shown next to each other.

A single page on the phone
Two pages next to each other on the desktop

So let’s create some pages! I’m going to put each page in its own .qml file and let the name end with Page. Our first version of StartPage.qml looks like this:

import QtQuick 2.2
import QtQuick.Layouts 1.1
import QtQuick.Controls 2.4
import org.kde.kirigami 2.0 as Kirigami

Kirigami.Page
{
    title: "Start journey"
}

It produces an empty page with a title. Before we can actually see it we need to add it to the pageStack. Replace the Label {} declaration in main.qml with

pageStack.initialPage: Qt.resolvedUrl("StartPage.qml")

pageStack.initialPage is, well, setting the initial page of the page stack. Qt.resolvedUrl converts the relative URL of the QML file into an absolute one. Starting the app gives us an empty page.

Time to fill it with some content.

Basic controls

On the start page we need a way to enter the start and destination of our journey as well as the date and time of our travel. For start and destination we are using simple TextFields from QtQuick Controls 2. Note that the older version 1 of QtQuick Controls is still around for the foreseeable future, but we want to avoid using it. We’re extending StartPage.qml with our controls:

ColumnLayout {
    width: parent.width

    Label {
        text: "From:"
    }
    TextField {
        Layout.fillWidth: true
        placeholderText: "Würzburg..."
    }
    Label {
        text: "To:"
    }
    TextField {
        Layout.fillWidth: true
        placeholderText: "Berlin..."
    }
}

A ColumnLayout is a component that positions its children vertically. We set it to be as wide as its parent, the page. The TextFields shall span the whole width as well. Instead of using the same ‘width: parent.width’ we are using ‘Layout.fillWidth: true’. This property is only available to children of a Layout. The difference from the former is that all the width not already occupied by other elements in the layout is filled.

Next we need some way to enter a departure date and time. Unfortunately I’m not aware of any ready-to-use date and time pickers in QtQuick and Kirigami, so I’ll leave this open for a future post. For the time being, two simple placeholder buttons shall be enough. Let’s add them to our ColumnLayout:

RowLayout {
    width: parent.width
    Button {
        text: "Pick date"
        Layout.fillWidth: true
    }
    Button {
        text: "Pick time"
        Layout.fillWidth: true
    }
}

Now our app looks like this. Both buttons have the “Layout.fillWidth” property set to true, resulting in each one getting 50% of the space.

The buttons look a bit weird, don’t they? That’s because they are using the built-in QtQuick Controls style. If you are using Plasma you are probably used to the org.kde.desktop style, which emulates the active Qt Widgets style. We can force our app to use the org.kde.desktop style by running ‘QT_QUICK_CONTROLS_STYLE="org.kde.desktop" ./main.py’.

Looks closer to what we have on the desktop, doesn’t it? Qt also offers a ‘material’ style that follows Android’s Material Design guidelines.

Next we need a way to press “Search”. We could solve that with yet another button, but Kirigami offers another way. Pages in Kirigami can have Actions associated with them. The presentation differs between phone and desktop: on the phone, actions are displayed at the bottom where they are easily reachable, while on the desktop they are displayed in the form of a toolbar at the top of the page. Let’s add an action to our page:

Kirigami.Page
{
    id: root

    title: "Start journey"

    actions.main: Kirigami.Action {
        icon.name: "search"
        text: "Search"
        onTriggered: pageStack.push(Qt.resolvedUrl("ConnectionsPage.qml"))
    }

    ColumnLayout {

On the phone we get this

while on the desktop we get that

You can force the mobile view on the desktop by setting the QT_QUICK_CONTROLS_MOBILE variable to 1.
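Assuming the tutorial’s main.py from the previous post is in the current directory, both environment variables can be combined, e.g. to preview the mobile presentation with the desktop widget style:

```shell
# Force the org.kde.desktop style and the mobile layout at the same time
QT_QUICK_CONTROLS_STYLE="org.kde.desktop" QT_QUICK_CONTROLS_MOBILE=1 ./main.py
```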

Triggering the action pushes ConnectionsPage.qml onto the pageStack. Of course we need to create that one now:

import QtQuick 2.2
import QtQuick.Layouts 1.1
import QtQuick.Controls 2.4
import org.kde.kirigami 2.4 as Kirigami

Kirigami.Page
{
    title: "Connections"
}

Right now it’s just an empty page; we’re going to fill it with life in the next post.

You can find the full source code for all posts on Git.

Happy hacking!

KDE is happy to announce that we will be part of Google Summer of Code 2019. GSoC is a program where students receive stipends to work on free software for 3 months. Getting paid for open source work, that’s the dream, right?

KDE Connect is participating with 3 interesting projects that also involve other areas of KDE:

1. Improving KDE Connect on Windows

KDE Connect builds and runs on Windows, but there are a lot of things that can be improved. This mostly involves the functionality that makes use of notifications. A large part of this task is about improving KNotifications on Windows.

2. An SMS app for Plasma Mobile

Plasma Mobile does not have a functional SMS app yet. We believe that the best way to create one is to reuse the SMS UI we’ve been developing for KDE Connect. We verified that the Plasma Mobile SMS stack is functional on the Nexus 5 (I have no information about other devices). The UI is already running on a phone; what’s missing is a backend that talks to ofono.

3. Barcode scanning infrastructure

Scanning a barcode is one of those tasks that comes up in various different apps, but developers don’t want to implement it themselves. For those kinds of tasks we have the Purpose framework in KDE. It allows the developer to specify a desired action and lets the user choose from available services to fulfil it. For example, the Share feature in Dolphin is implemented via Purpose. This task is about adding a new action type to Purpose that allows scanning a barcode. Possible implementations could use the local camera or the camera of a device connected via KDE Connect.

If you are interested in doing one of those tasks and have some basic understanding of C++, please contact us in #kdeconnect on Freenode or on Telegram.

The deadline for student applications is April 9th and composing a good application takes some time, so please contact us rather soon.

Please note that we require students to have done some minor work (e.g. bug fixes) before starting GSoC. Don’t worry if you haven’t done anything yet, there is still time for it 🙂

Happy hacking!

KDE and open source in general have used IRC since the 90s, but times change and these days people expect more than text with lots of internals exposed to the user. So KDE has set up a Matrix server which talks to other Matrix servers and, importantly, also talks to IRC servers and their channels, because some people will never change. The bridging to IRC isn’t perfect but it works much more neatly than on e.g. Telegram, where the IRC user is one bot; here each IRC user is an individual user and you can set it up to use the same nickname you’ve been using since 1995. Unless you use square brackets in your nickname, in which case I’ve no sympathy 🙂

But it still requires a bit of understanding and setup. For one thing you need an app to talk to it, and the most common apps seem to be Riot Web and Riot Android. KDE has its own setup of Riot Web called webchat.kde.org and you can get the Android client from F-Droid or Google Play. Once you make an account you also need to tick some boxes (including one saying you are over 16, which vexes somewhat, but it won’t be beyond the ability of most 15-year-olds to work out how to get around it).

Channels are called rooms and you can then search for them on the kde.org server or on the matrix.org server.   Or, once you work out the syntax, you can join channels on Freenode IRC or OFTC IRC.  You can also bridge IRC channels to Matrix Rooms and make it mostly transparent which works.

There’s voice and video calling too, using Jitsi, and important features like emojis and sticker packs, although the Konqi sticker pack is still to be added.

I had some faff getting my nick from Freenode recovered but managed that before long.  Remember to set a nice pic so people can recognise you.

I’ve now stopped using my IRC app and don’t tend to look at Telegram unless someone pings me.  It’s great that KDE now has modern and open communications.  Thanks to the sysadmins and Matrix team and others who worked on this.

Next step: getting forums and mailing lists moving onto Discourse 🙂

More docs on the KDE Matrix wiki page.


Every summer Google puts on a program that helps university developers get involved with the open source community. This is known as Google Summer of Code (GSoC). Krita has always participated in GSoC through the KDE community, and plans to do it again in 2019!

If you, or someone you know, is in university and wants to work on Krita, you have come to the right place.

Project-based

Submitting a resumé or CV isn’t how this program works. For you to be picked, you need to be involved with the Krita community early and show you have some capacity to do programming.

The summer program involves focusing on one project. You will have a mentor assigned to help learn the ropes. Here are some potential project ideas.

If there is another project that you want to see, you can also propose your own. Use these guidelines to help formulate ideas: Student Proposal Guidelines.

So, hang out with our community on Krita’s chat channel, build Krita, start hacking on some bugs or feature requests and discuss what you would like to do with us.

Knowledge required

There are a lot of programming languages and technologies out there. These are some of the main technologies that Krita uses:

  • Qt 5 Framework
  • C++, Python and QML
  • GIT
  • IRC — how developers do most of their communication

Deadline and Contact Us

You can find the timeline for the whole GSoC program here. Student applications begin March 25, but you should *really* start being involved earlier if you want to have a shot. Oh, did we mention this is a paid position?

If these rules haven’t scared you off, get in contact with us if you have any questions. Drop by the forum or talk with us in the chat room (IRC).

I'm glad to announce KStars' first release of 2019: v3.1.0 for MacOS, Linux, and Windows. This release focuses on improving the stability and performance of KStars. In 3.0.0 we introduced quite a few features, which resulted in a few regressions that we worked hard to iron out in this release.


Ekos Scheduler


Eric Dejouhanet fixed a few bugs in the Ekos Scheduler while making steady improvements to cover all corner cases for multi-object, multi-night scheduling, which can become quite complex.


Ring-Field Focusing


Eric also added a new feature to the Focus module: ring-field focusing. This method limits the area in which stars are detected to between an inner and an outer ring. This can be useful in images where galaxies or nebulae can be mistaken for stellar objects, which often leads to erroneous HFR reporting.

This can only be used for full-field focusing.


Meridian Flip


Wolfgang Reissenberger migrated the meridian flip handling code to the Ekos Mount tab. This makes a meridian flip possible even if there is no capture process running. If the mount is tracking and passes the meridian, it can now be triggered to flip in the Mount module.


Ekos Documentation


Yuri Chornoivan migrated the online Ekos documentation to the official KStars documentation. This was a tremendous effort due to the volume of documentation involved. The Ekos documentation is now properly managed by us in KDE and should get translations like the rest of the official documentation.


Ekos PAA for Non-GOTO mounts


Sebastian Neubert introduced a manual workflow for users who would like to use the Ekos Polar Alignment Assistant Tool with manual non-GOTO mounts. Users are now prompted to rotate the mount when required until the process is complete. This is Sebastian's first code contribution to KStars, welcome to the team!


MacOS Build Script


Robert Lancaster developed a comprehensive script, along with detailed documentation, for building KStars with Craft on MacOS. This enables users who wish to either develop KStars or try out the latest bleeding-edge versions to do so with a single script that pulls all the sources and compiles them accordingly, with the aid of KDE's spectacular Craft technology.


DSLR LiveView


The LiveView window received a few enhancements enabling zooming and panning for supported DSLR cameras.


Planet KDE is made from the blogs of KDE's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet KDE with RSS, FOAF or OPML.