May 27, 2019

Could you tell us something about yourself?

I am a happily married cat mom living in the States, more specifically the beautiful Pacific Northwest. I have played Dungeons and Dragons and other TTRPGs for over a decade and maintain a weekly play schedule to this day; tabletop games are my passion!

Do you paint professionally, as a hobby artist, or both? What genre(s) do you work in?

I would say both, though more hobbyist than professional. I currently lend my talents to Worldbuilding Magazine, an online and community-driven magazine whose mission really resonated with me! I mostly work within the realm of fantasy, especially dark or horror fantasy.

Whose work inspires you most — who are your role models as an artist?

My longtime role model has been Donato Giancola, ever since I saw his work in my teens on the cover of the Lord of the Rings trilogy book I’d been reading at the time. I also greatly admire Karla Ortiz for her strength of will and personality – she inspires me all the time to treat others with respect and work my butt off!

How and when did you get to try digital painting for the first time?

I was in a junior high photography class in which we were able to use old model Wacom tablets for our work. Of course I started drawing with it instead, and my professor loved what I did so much he GAVE me his own tablet so I could continue practicing at home. I had that Wacom until I started college years later and upgraded to an Intuos.

What makes you choose digital over traditional painting?

I really don’t like messes!! I feel silly about it but it’s true.

How did you find out about Krita?

It was about three years ago when my main computer quit on me, and I didn’t have the cash to buy the updated Windows 10 OS for my new hard drive. I opted for trying Linux Mint, and tested Krita as my Photoshop replacement. Love at first sight! I currently run Manjaro KDE and it continues to be my only painting software (even on my Microsoft Surface).

What was your first impression?

It was a very familiar interface coming from Photoshop. I enjoyed the intuitive UI and the ease of adjusting it to my specifications.

What do you love about Krita?

I love the way it looks, I LOVE the brush engine and how much control you have over your brush settings. Also the reference tool??? The best thing I’ve ever seen and I use it constantly.

What do you think needs improvement in Krita?

I’d like a more user-friendly text tool, but I know that’s in the works and gets improved upon often.

Is there anything that really annoys you?

Not being able to move my selection outlines across the canvas to use the same shape on another area.

What sets Krita apart from the other tools that you use?

It’s open source! I donate often but imagine a digital artist just starting out? I know I couldn’t have afforded Photoshop without my student discount at the time. I also really love the brush engine.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I think it would have to be my piece “Dana” because it was the first painting I’d finished in a long time, and it was the first time I felt like I finally understood what I was doing! The lighting came together well, and I actually found myself really loving doing the leather texture on her clothes.

What techniques and brushes did you use in it?

I tried to use a minimal number of brushes, mostly the round brush set to pen-pressure opacity and a custom brush that is the same but with a little texture. In this piece I specifically strove to paint in the textures, yes, even the leather on her gauntlets and padded armor.

Where can people see more of your work?

I have a website at annahannon.com and I also post sometimes on Twitter: @jedejane

Anything else you’d like to share?

I have been using Krita as my primary replacement for Photoshop. It has been a rare thing for me to ‘wish for Photoshop’. The brushes work great, even on my huge canvas sizes. I really enjoy this software and spread the word everywhere I go. I think the collaborators on this project are doing fantastic work for the community and I support them fully.

Thank you for taking this time to interview me! Love yourself and others, and take care.

This wonderful depiction of Konqi was made by Tyson Tan for the KDE community.

It has been nearly three months since I embarked on an adventure in the land known as dev docs. And while the set period for that work is coming to a close, the truth is that the journey has really only just begun. Just like the pioneers of old, the first important step is to survey the land and map it for future adventurers.

The KDE community’s developer documentation isn’t exactly new territory but, through the years, it has grown from a garden to a huge forest with only a brave few doing the work to keep things from getting out of hand. They could use a helping hand.

The KDE community is putting out a call for heroes to assemble and help put order to our developer (and eventually also user) documentation. We need all kinds of heroes, from technical/documentation writers to new developers to veteran members. We need all kinds of eyes, fresh eyes, seasoned eyes, outsider eyes, community eyes, to have a better grasp of what developers need and want from our documentation.

There’s quite a list of tasks to be done, but here are a few that can be accomplished by people with different skills and familiarity with KDE:

  • Proofreading the Apidocs: The API docs are our professional, public-facing documentation for KDE Frameworks and other libraries. Making them look professional can be as simple as checking them for typos and grammatical errors. Bonus quest: Make sure the Apidocs also comply with the KDE Library Documentation Policy.
  • Tutorials: KDE has a plethora of tutorials, both for external developers on TechBase and for contributors on the Community Wiki. Simply checking whether the tutorials still work is a sure way of finding out whether they need updating in the first place.
  • Projects and Teams Info: KDE is home to many software projects and teams, some of which have moved on or disappeared. Marking which ones are the latter and updating the information for those that are still active can lessen the friction for newcomers trying to figure out where to start or who to contact.

Most of these can be done on the KDE Wikis. All you need is a single KDE Identity account, which is also a great way to get started contributing to the community. KDE is a great community that creates great software. It’s about time we also became known for having great documentation, and for that we’ll need everyone’s help.


May 26, 2019

HI KDE, HI GSOC2019

Hi, I’m SonGeon, living in Seoul, South Korea.

This summer I’m working with the KDE community by participating in the Google Summer of Code program.

My main goal during the GSoC period is making a Markdown-view WYSIWYG editor using C++ and Qt.

There were two reasons why I started to make a new Markdown view.

First, most Markdown editors use a webview-based renderer, but webview-based editors lack printing options. Markdown aims at producing a good-looking document from simple text notation in a web environment, and a single webpage has no pagination for printing.

So webview-based renderers all share the same problems: for example, document elements get printed across multiple pages, and the document’s paragraphs, word spacing, and line spacing come out slightly different compared to the screen. If a Markdown editor supported a paginated preview and better text rendering using the print layout, it would be more powerful, like a word processor.

Second, the KDE project already has a Markdown renderer, kmarkdownwebview. It currently carries a forked third-party JavaScript library for Markdown rendering, and I want to minimize the dependencies. It also uses Qt’s QWebEngine and QWebChannel, which exist only to run that JS library and bring a lot of overhead.

I think writing a new renderer using the Qt API and C++ without a third-party library is a lighter approach, so I chose to build the parser with Boost Spirit, a PEG parser generator implemented in the Boost library, and it’s super fast.
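To give a flavor of the combinator approach, here is a minimal, self-contained sketch (my own illustration, not the project’s actual grammar) that uses Boost Spirit X3 to parse an ATX-style Markdown heading such as "## Hello Markdown":

#include <boost/spirit/home/x3.hpp>
#include <iostream>
#include <string>

namespace x3 = boost::spirit::x3;

int main()
{
    const std::string line = "## Hello Markdown";
    std::string hashes, title;

    // The heading level is the number of leading '#' characters;
    // everything after the following space is the title text.
    auto first = line.begin();
    const bool ok = x3::parse(first, line.end(), +x3::char_('#'), hashes)
                 && x3::parse(first, line.end(), ' ' >> +x3::char_, title);

    if (ok)
        std::cout << "h" << hashes.size() << ": " << title << "\n";
}

A real Markdown grammar is of course much larger, but this rule-combinator style scales naturally to nested rules, which is what makes a PEG library attractive here.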

I want to contribute in many ways, not only by making my own Markdown editor. One of my main goals this summer is to build it as a reusable KDE part; it could become a Markdown preview module in Okular.

If you are interested in this project, please feel free to contact me. I’ll check the KDevelop and kde-soc Telegram groups (I joined IRC but it’s hard to check), or you can use my email kde.jen6@gmail.com.

Two days ago I mentioned here that the bug report count of KTextEditor and Kate has risen to a not-that-manageable amount.

For developers who report a bug or wish, the best way to really get it solved is to scratch your own itch and provide a patch.

I know this is not feasible for all bug reporters, as not all of them are developers, and even the developers won’t all have the time or the right skill set to tackle the issue on their own.

But if you have the time and you are at least a bit familiar with C++/Qt, you should give it a try.

We can help you get your patch done; that is much easier for e.g. myself than finding the motivation to work on a bug or wish that doesn’t concern my normal workflow or lie within my skill set.

For example, we have a lot of issues with right-to-left text rendering, or related to languages that use complex Unicode surrogates. Given that I have zero knowledge of any language using these, my motivation to dig into such issues is small (and I would more likely break more things than fix them).

The same holds for issues in our Vi mode. I don’t use this mode myself nor do I really know how Vi commands shall behave in real life. Therefore any fix or enhancement there is beyond me.

A good example for such a “Scratch Your Own Itch” approach is bug 407910.

It is a small request: to have some action/shortcut to reset the font size to the default one. We have had zoom in/out actions/shortcuts for years, but nothing to go back to the configured size.

I rarely use the zoom stuff, perhaps once a month, when I want to show something to a colleague on my screen or a projector and it is really not readable with my normal font size. Therefore my motivation to invest any work into yet another action I won’t use regularly is small.

But, in this case, the reporter had the time to invest a bit of work into this.

He provided a patch via our KDE Phabricator - D21412.

We needed some iterations to get the patch into a usable shape in the bug report and in Phabricator, but thanks to the persistence of the reporter, it has now been pushed to our repository.

If nobody had stepped up to provide at least some initial patch for this, such a request would surely have rotted away in our bug database.

This is not the first time such a nice thing has happened; it is just a recent example of how these things can work out.

Therefore, if you report something and are capable of giving it a try on your own, please do so!

Perhaps even some of the existing bugs or wishes are stuff you want to take care of yourself because they concern you!

I think not a lot motivates you more than an issue you have with a tool in your own workflow. At least for me, that was the reason to start developing Kate at all (I missed an MDI variant of KWrite) and to join the work on stuff like KTextEditor.

Week 72 in Usability & Productivity initiative is here and it’s chock-full of goodies! We continue to polish Plasma 5.16 ahead of its release in two weeks. There was one point in time when veteran KDE developer and author of the new notifications system Kai Uwe Broulik was literally committing fixes faster than I could add them to this blog post! In addition, features for Plasma 5.17 as well as many of our apps are starting to trickle in. Check it out:

New Features

Bugfixes & Performance Improvements

User Interface Improvements

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a donation to the KDE e.V. foundation.

May 25, 2019

Right now, I am having a hard time understanding the template spaghetti of BGL (the Boost Graph Library), so I decided to write a blog post while I decipher it one piece at a time, documenting the whole thing along the way.

Kube is still alive! I got distracted for a while, both professionally and privately, and writing blog posts is unfortunately always the first thing that ends up on the chopping block. Anyways, lots of progress has been made (103 commits in sink, ~130 commits in the kube codebase):

  • For sink we had a variety of bugfixes and performance improvements, especially on the more recent CalDAV/CardDAV backends.
  • For CalDAV/CardDAV we now do basic autodiscovery using the .well-known URLs. The DNS part of the spec has not been implemented so far.
    This means that for a properly set-up server you only have to specify the base URL, and everything else will be discovered automatically from there.
  • On the more user-facing front we have:
    • Sent emails are now collapsed by default
    • Plain text is now the preferred method of viewing emails. You can still view the HTML variant if available by clicking a button.
    • The Addressbook is no longer read-only and you can now create contacts as well.
    • A visually reworked composer that avoids becoming too wide and removes a lot of the visual clutter.
    • The calendar can now render recurring-events.
    • It is now possible to create events as well.
    • Work on a tasks view has started

Releases

You may have noticed that it’s been a while since the last release. This is not only because releases are additional work, but also because we already have a continuous delivery channel in the nightly flatpak.
Releases clearly do provide value, though, both as a communication tool signalling which version should be packaged and as a promise that that version will be maintained. With the current manpower we cannot maintain release branches, however, which makes releases significantly less interesting.

With that said, the 0.8 release with the calendar is now long overdue and should be coming out soonish.

Experimental flatpak

Just to put it out there: in addition to the usual “master” branch of the flatpak, there is also an “experimental” branch containing, surprise, various experimental bits and pieces.

This currently entails:

  • A plugin that stores the account’s password encrypted with the account’s GPG key (blindly assuming there is one with a matching email address).
  • A search view
  • The upcoming calendar view (which we’ll move over in the next release)
  • The above todo view (which will take a little longer to move to master)
  • A “File as expense” plugin (a showcase how we could do extensions in the mail view).
  • The Inbox crusher view (an experiment for a view to go through your inbox one-by-one).

It typically serves as a staging ground for new components, and is the version that I’m running day-to-day. flatpak makes it easy to switch back and forth between the branches on top of the same dataset, so you can try it and switch back if you don’t like what you see.

To give it a shot use the following command to install and switch to the experimental flatpak branch:

flatpak -y --user install --from https://files.kube-project.com/flatpak/com.kubeproject.kube.experimental.flatpakref
flatpak --user make-current com.kubeproject.kube experimental

To switch back simply issue:

flatpak --user make-current com.kubeproject.kube master

Kube Commits, Sink Commits

Previous updates

More information on the Kolab Now blog!

“Kube is a modern communication and collaboration client built with QtQuick on top of a high performance, low resource usage core. It provides online and offline access to all your mail, contacts, calendars, notes, todo’s and more. With a strong focus on usability, the team works with designers and UX experts from the ground up, to build a product that is not only visually appealing but also a joy to use.” For more info, head over to: kube-project.com

Although kdesrc-build supports building Qt5 just like the KDE Frameworks, I prefer to use the prebuilt Qt binaries from qt.io for testing against alpha and beta releases. This saves me a lot of time on my machine, which is not powerful enough to compile the full Qt in a reasonable time.

This guide assumes that you already installed Qt from qt.io/download-qt-installer.

To reproduce my setup, one additional step is required compared to a kdesrc-build setup that uses the system Qt; nevertheless, you start as usual by cloning kdesrc-build itself:

git clone https://invent.kde.org/kde/kdesrc-build

Since kdesrc-build should be available anywhere on the system, not just in its own directory, add it to the PATH variable by appending export PATH=~/kde/src/kdesrc-build:$PATH to your .profile file. Please remember to adapt the path according to where you cloned kdesrc-build to.

echo "export PATH=~/kde/src/kdesrc-build:$PATH" >> ~/.profile
source ~/.profile

Next, the initial setup needs to be performed:

kdesrc-build --initial-setup

This step creates all the required configuration files, which now need slight adaptations in order to work with the Qt we downloaded from qt.io in the beginning.

Edit ~/.kdesrc-buildrc and replace the qtdir path with the path you installed Qt to. The line should then look similar to this:

    qtdir  ~/.local/Qt/5.13.0/gcc_64 # Where to find Qt5

Now, as a final step, we need to prevent kdesrc-build from trying to build its own Qt, which can be done by commenting out one include line: include /home/jbb/kde/kdesrc-build/qt5-build-include becomes #include /home/jbb/kde/kdesrc-build/qt5-build-include.

This should be it! Have fun compiling up-to-date KDE software against an up-to-date Qt without compiling Qt for hours :) For example, to build Kirigami and all its KDE dependencies:

kdesrc-build kirigami --include-dependencies

To activate your newly created environment, you can use

source ~/.config/kde-env-master.sh

The KDE mothership has been sailing towards the “Streamlined Onboarding” land for almost 2 years now. It has been a long trip, with its ups and downs, hurdles and joys.

Set sail

When I first proposed the idea for this goal, the destination felt so far ahead, if ever reachable. I could not have imagined that it would be voted in by the community, adopted, and worked on collectively.

May 24, 2019

The bug report count of KTextEditor (implementing the editing part used in Kate/KWrite/KDevelop/Kile/…) and Kate itself has again reached a value over 200.

If you have time and need an itch to scratch, any help to tackle the currently open bugs would be highly appreciated.

The full list can be found with this bugs.kde.org query.

Easy things anybody with a bit of time could do:

  • check whether the bug is still there with current master builds; if not, close it
  • check whether it is a duplicate of a similar, still open bug; if yes, mark it as a duplicate

Beside that, patches for any of the existing issues are very welcome.

I think the best guide on how to set up a development environment is on our KDE Community Wiki. I myself use a kdesrc-build environment as described there, too.

Patches can be submitted for review via our KDE Phabricator.

If it is just a small change and you don’t want to spend time on Phabricator, attaching a git diff against current master to the bug is OK, too. It’s best to mark the bug with a [PATCH] prefix in the subject.

The team working on the code is small, so please be a bit patient when waiting for reactions. I hope we have improved our reaction time in the last months, but we are still lacking in that respect.

May 23, 2019

Would you like your C++ code to compile twice as fast (or more)?

Yeah, so would I. Who wouldn't. C++ is notorious for taking its sweet time to get compiled. I never really cared about PCHs when I worked on KDE, I think I might have tried them once for something and it didn't seem to do a thing. In 2012, while working on LibreOffice, I noticed its build system used to have PCH support, but it had been nuked, with the usual poor OOo/LO style of a commit message stating the obvious (what) without bothering to state the useful (why). For whatever reason, that caught my attention, reportedly PCHs saved a lot of build time with MSVC, so I tried it and it did. And me having brought the PCH support back from the graveyard means that e.g. the Calc module does not take 5:30m to build on a (very) powerful machine, but only 1:45m. That's only one third of the time.

In line with my previous experience, on Linux that did nothing. I made the build system also support PCH with GCC and Clang, because it was there and it was simple to support too, but there was no point. I don’t think anybody has ever used that for real.

Then, about a year ago, I happened to be working on a relatively small C++ project that used some kind of an obscure build system called Premake I had never heard of before. While fixing something in it I noticed it also had PCH support, so guess what, I of course enabled it for the project. It again made the project build faster on Windows. And, on Linux, it did too. Color me surprised.

The idea must have stuck with me, because a couple of weeks back I got the idea to look at LO’s PCH support again and see if it could be made to improve things. See, the point is, the PCH for that small project was rather small: it just included all the std stuff like <vector> and <string>, which seemed like it shouldn’t make much of a difference, but it did. Those standard C++ headers aren’t exactly small or simple. So I thought that maybe if LO on Linux used PCHs just for those, it would also make a difference. And it does. It’s not breath-taking, but passing --enable-pch=system to configure reduces the Calc module build time from 17:15m to 15:15m (that’s a less powerful machine than the Windows one). Adding LO base headers containing stuff like OUString makes it go down to 13:44m, and adding more LO headers except for Calc’s own leads to 12:50m. And, adding even Calc’s headers results in 15:15m again. WTH?
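For reference, a “system” PCH in this sense is conceptually just a header that pulls in the expensive standard library includes and gets compiled once up front. A sketch of what such a header might contain (my guess at the flavor, not LO’s actual file):

// pch_system.hxx - precompiled once, then implicitly included everywhere.
// Nothing but heavy standard headers that nearly every .cxx file uses anyway.
#include <algorithm>
#include <map>
#include <memory>
#include <ostream>
#include <string>
#include <vector>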

It turns out, there's some limit when PCHs stop making things faster and either don't change anything, or even make things worse. Trying with the Math module, --enable-pch=system and then --enable-pch=base again improve things in a similar fashion, and then --enable-pch=normal or --enable-pch=full just doesn't do a thing. Where it that 2/3 time reduction --enable-pch=full does with MSVC?

Clang has recently received a new option, -ftime-trace, which shows in a really nice and simple way where the compiler spends the time (take that, -ftime-report). And since things related to performance simply do catch my attention, I ended up building the latest unstable Clang just to see what it does. And it does:

So, this is bcaslots.cxx, a smaller .cxx file in Calc. The first graph is without PCH, the second one is with --enable-pch=base, the third one is --enable-pch=full. This exactly confirms what I can see. Making the PCH bigger should result in something like the 4th graph, as it does with MSVC, but it results in things actually taking longer. And it can be seen why. The compiler does spend less and less time parsing the code, so the PCH works, but it spends more time in this 'PerformPendingInstantiations', which is handling templates. So, yeah, in case you've been living under a rock, templates make compiling C++ slow. Every C++ developer feeling really proud about themselves after having written a complicated template, raise your hand (... that includes me too, so let's put them back down, typing with one hand is not much fun). The bigger the PCH the more headers each C++ file ends up including, so it ends up having to cope with more templates. With the largest PCH, the compiler needs to spend only one second parsing code, but then it spends 3 seconds sorting out all kinds of templates, most of which the small source file does not need.

This one is column2.cxx, a larger .cxx file in Calc. Here, the biggest PCH mode leads to some improvement, because this file includes pretty much everything under the sun and then some more, so less parsing makes some savings, while the compiler has to deal with a load of templates again, PCH or not. And again, one second for parsing code, 4 seconds for templates. And, if you look carefully, 4 seconds more to generate code, most of it for those templates. And after the compiler spends all this time on templates in all the source files, it gets all passed to the linker, which will shrug and then throw most of it away (and that too will take a load of time, if you still happen to use the BFD linker instead of gold/lld with -gsplit-dwarf -Wl,--gdb-index). What a marvel.

Now, in case there seems to be something fishy about the graphs, the last graph indeed isn't from MSVC (after all, its reporting options are as "useful" as -ftime-report). It is from Clang. I still know how to do performance magic ...



A few months ago, we received a phone call from a bioinformatics group at a European university. The problem they were having appeared very simple: they wanted to know how to use mmap() to load a large data set into RAM at once. OK, I thought, no problem, I can handle that one. It turns out this has grown into a complex and interesting exercise in profiling and threading.

The background is that they are performing Markov-Chain Monte Carlo simulations by sampling at random from data sets containing SNP (pronounced “snips”) genetic markers for a selection of people. It boils down to a large 2D matrix of floats where each column corresponds to an SNP and each row to a person. They provided some small and medium sized data sets for me to test with, but their full data set consists of 500,000 people with 38 million SNP genetic markers!

The analysis involves selecting a column (SNP) at random in the data set, performing some computations on the data for all of the individuals, and collecting some summary statistics. Do that for all of the columns in the data set, and then repeat for a large number of iterations. This allows you to approximate the underlying true distribution from the discrete data that has been collected.

That’s the 10,000 ft view of the problem, so what was actually involved? Well, we undertook a bit of an adventure and learned some interesting stuff along the way, hence this blog series.

The stages we went through were:

  1. Preprocessing
  2. Loading the Data
  3. Fine-grained Threading
  4. Preprocessing Reprise
  5. Coarse Threading

In this blog, I’ll detail stages 1 and 2. The rest of the process will be revealed as the blog series unfolds, and I’ll include a final summary at the end.

1. Preprocessing

The first thing we noticed when looking at the code they already had is that quite some work was being done when reading in the data for each column. They compute some summary statistics on the column, then scale and bias all the data points in that column such that the mean is zero. Bearing in mind that each column will be processed many times (typically 10k – 1 million), this is wasteful to repeat every time the column is used.

So, reusing some general advice from 3D graphics, we moved this work further up the pipeline into a preprocessing step. The SNP data is actually stored in a compressed form that quantizes 4 SNP values into each byte, which we then decompress when loading. So the preprocessing step decompresses the SNP data, calculates the summary statistics, adjusts the data, and then writes the floats out to disk in the form of a ppbed file (preprocessed bed, where bed is a standard format used for this kind of data).

The upside is that we avoid all of this work on every iteration of the Monte Carlo simulation at runtime. The downside is that 1 float per SNP per person adds up to a hell of a lot of data for the larger data sets! In fact, for the full data set it’s just shy of 69 TB of floating point data! But to get things going, we were just worrying about smaller subsets. We will return to this later.

2. Loading the data

Even on moderately sized data sets, loading the entirety of the data set into physical RAM at once is a no-go as it will soon exhaust even the beefiest of machines. They have some 40 core, many-many-GB-of-RAM machine which was still being exhausted. This is where the original enquiry was aimed – how to use mmap(). Turns out it’s pretty easy as you’d expect. It’s just a case of setting the correct flags so that the kernel doesn’t actually take a copy of the data in the file. Namely, PROT_READ and MAP_SHARED:

void Data::mapPreprocessBedFile(const string &preprocessedBedFile)
{
    // Calculate the expected file sizes - cast to size_t so that we don't overflow the unsigned int's
    // that we would otherwise get as intermediate variables!
    const size_t ppBedSize = size_t(numInds) * size_t(numIncdSnps) * sizeof(float);
 
    // Open and mmap the preprocessed bed file
    ppBedFd = open(preprocessedBedFile.c_str(), O_RDONLY);
    if (ppBedFd == -1)
        throw("Error: Failed to open preprocessed bed file [" + preprocessedBedFile + "]");
 
    ppBedMap = reinterpret_cast<float *>(mmap(nullptr, ppBedSize, PROT_READ, MAP_SHARED, ppBedFd, 0));
    if (ppBedMap == MAP_FAILED)
        throw("Error: Failed to mmap preprocessed bed file");
 
    ...
}

When dealing with such large amounts of data, be careful of overflows in temporaries! We had a bug where ppBedSize was overflowing and later causing a segfault.
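A minimal illustration of that pitfall (the sizes match the full data set from this post; the variable names are just for the example). The multiplication happens in unsigned int arithmetic before the result is widened, so the casts have to be applied to the operands, not the result:

#include <cstddef>
#include <cstdio>

int main()
{
    const unsigned int numInds = 500000;    // people (rows)
    const unsigned int numSnps = 38000000;  // SNP markers (columns)

    // Wrong: the 32-bit multiplication wraps around long before assignment.
    const std::size_t bad = numInds * numSnps * sizeof(float);

    // Right: widen to size_t first, then multiply.
    const std::size_t good =
        std::size_t(numInds) * std::size_t(numSnps) * sizeof(float);

    std::printf("wrong: %zu bytes, right: %zu bytes\n", bad, good);
}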

So, at this point we have a float *ppBed pointing at the start of the huge 2D matrix of floats. That’s all well and good but not very convenient for working with. The code base already made use of Eigen for vector and matrix operations so it would be nice if we could interface with the underlying data using that.

Turns out we can (otherwise I wouldn’t have mentioned it). Eigen provides VectorXf and MatrixXf types for vectors and matrices but these own the underlying data. Luckily Eigen also provides a wrapper around these in the form of Map. Given our pointer to the raw float data which is mmap()‘d, we can use the placement new operator to wrap it up for Eigen like so:

class Data
{
public:
    Data();
 
    // mmap related data
    int ppBedFd;
    float *ppBedMap;
    Map<MatrixXf> mappedZ;
};
 
 
void Data::mapPreprocessBedFile(const string &preprocessedBedFile)
{
    ...
 
    ppBedMap = reinterpret_cast<float *>(mmap(nullptr, ppBedSize, PROT_READ, MAP_SHARED, ppBedFd, 0));
    if (ppBedMap == MAP_FAILED)
        throw("Error: Failed to mmap preprocessed bed file");
 
    new (&mappedZ) Map<MatrixXf>(ppBedMap, numRows, numCols);
}

At this point we can now do operations on the mappedZ matrix and they will operate on the huge data file which will be paged in by the kernel as needed. We never need to write back to this data so we didn’t need the PROT_WRITE flag for mmap.
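As a self-contained illustration of the same pattern at a small scale (the file name and dimensions here are hypothetical), this mmap()s a file of floats and samples one column through an Eigen::Map without copying anything:

#include <Eigen/Dense>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>
#include <iostream>

int main()
{
    const std::size_t rows = 1000, cols = 100;
    const std::size_t bytes = rows * cols * sizeof(float);

    const int fd = open("data.ppbed", O_RDONLY); // hypothetical file
    if (fd == -1)
        return 1;

    auto *data = static_cast<float *>(
        mmap(nullptr, bytes, PROT_READ, MAP_SHARED, fd, 0));
    if (data == MAP_FAILED)
        return 1;

    // Eigen is column-major by default, so each column is contiguous in the
    // file and reading one column only pages in that column's data.
    Eigen::Map<const Eigen::MatrixXf> Z(data, rows, cols);
    std::cout << "mean of column 42: " << Z.col(42).mean() << "\n";

    munmap(data, bytes);
    return close(fd);
}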

Yay! Original problem solved and we’ve saved a bunch of work at runtime by preprocessing. But there’s a catch! It’s still slow. See the next blog in the series for how we solved this.

The post Little Trouble in Big Data – Part 1 appeared first on KDAB.

May 22, 2019

Elisa is a music player developed by the KDE community that strives to be simple and nice to use. We also recognize that we need a flexible product to account for the different workflows and use-cases of our users.

We focus on a very good integration with the Plasma desktop of the KDE community without compromising the support for other platforms (other Linux desktop environments, Windows and Android).

We are creating a reliable product that is a joy to use and respects our users privacy. As such, we will prefer to support online services where users are in control of their data.

I am happy to announce the release of 0.4.0 version of the Elisa music player.

The new features are explained in the following posts New features in Elisa, New Features in Elisa: part 2 and Elisa 0.4 Beta Release and More New Features.

There have been a couple more changes not yet covered.

Improved Grid View Elements

Nate Graham has reworked the grid elements (especially visible with the albums view).

I must confess that I was a bit uneasy with this change (it was a part mostly unchanged since the early versions). I am now very happy about this change.

(Before and after screenshots of the reworked grid view elements.)

Getting Involved

I would like to thank everyone who contributed to the development of Elisa, including code contributions, testing, and bug reporting and triaging. Without all of you, I would have stopped working on this project.

New features and fixes are already being worked on. If you enjoy using Elisa, please consider becoming a contributor yourself. We are happy to get any kind of contributions!

We have some tasks that would be perfect junior jobs; they are an ideal way to start contributing to Elisa. There are more, not listed there, reported on bugs.kde.org.

The Flathub Elisa package provides an easy way to test this new release.

The Elisa source code tarball is available here. There is no Windows setup; there is currently a blocking problem with it (missing icons) that is being investigated. I hope to be able to provide installers for later bugfix versions.

The phone/tablet port project could easily use some help to build an optimized interface on top of Kirigami. It remains to be seen how to handle this in relation to the current desktop UI.

 

Network

Chapter 1

Network protocols

Define the format and order of messages sent and received between entities, and the actions taken on message transmission (I think this refers to cases like token ring).

Network structure

  • Network edge: hosts, i.e. clients and servers
  • On DSL (Digital Subscriber Line), frequency division multiplexing is used (voice and data on separate frequencies)
  • Host sending a packet: packet size L bits, link transmission rate (bandwidth) R bits/sec → packet transmission delay = L/R
  • Network core: interconnected routers, a network of networks
  • packet switching: application-layer messages are split into packets and passed on to the next router
  • store and forward: a packet cannot be forwarded until it has been received in its entirety.
  • end-to-end delay: 2L/R (two store-and-forward hops)
  • if the arrival rate exceeds the transmission rate, packets sit in a queue and wait (queueing delay); once the router’s memory is full, packets are dropped (loss)
  • Circuit switching (the alternative core design)
  • the path between source and destination is established before sending and uses dedicated resources; when unused they sit idle, with no sharing whatsoever
  • FDM (different users on different frequency bands), TDM (time is sliced and shared among users)
  • Packet switching vs. circuit switching

http://gaia.cs.umass.edu/kurose_ross/interactive/ps_versus_cs.php

Out of 35 users in total, each occupying the link 10% of the time, what is the probability that 10 or more users are active simultaneously?

Probability problem in networking.

http://icawww1.epfl.ch/sc250_2004/lecture_notes/sc250_exo2.pdf

  • Circuit switching is good for transferring large data (such as video) since its bandwidth is guaranteed; packet switching shares the bandwidth among users

Internet Structure

  • Network evolution was driven by economics and national policies
  • Small networks are connected through ISPs; the ISPs in turn need to be interconnected
  • An access ISP is the network provided by e.g. a hotel, a company, or a university
    1. With N access ISPs (nodes), interconnecting them all directly takes n(n-1)/2 links, i.e. it scales as O(n^2)
    2. So global ISPs appeared, creating networks that connect the access ISPs.
    3. As several global ISPs emerged, IXPs (Internet Exchange Points) appeared to connect the global ISPs to each other.
    4. Regional networks emerged that bundle access ISPs together and connect them to the global ISPs
    5. Content providers (Google, Akamai) build networks of their own

Delay, Loss, Throughput

  • Packet Delay Sources
  • Transmission delay: the time to push the packet onto the link (L/R)
  • Nodal processing delay: examining the packet header and deciding the route (very short)
  • Queueing delay: the time spent in the queue before leaving on the output link (varies with congestion)
  • Propagation delay: the time for the signal to actually travel through the medium. E.g. distance between routers 200 km, signal speed 2*10^8 m/sec → 2*10^5 / 2*10^8 = 1/1000 sec = 1 ms
  • Packet loss: occurs when a router’s queue fills up.
  • Throughput: how many bits are received per second, i.e. the transfer rate (bits/time unit)
  • instantaneous: the rate at a given moment
  • average: the rate over a longer period
  • With sending rate Rs, receiving rate Rc, and a core link of rate R shared by ten flows, the throughput is min(Rc, Rs, R/10)

http://gaia.cs.umass.edu/kurose_ross/interactive/end-end-throughput.php


Chapter 3 Transport-layer

Transport-layer services

Logical communication that lets applications on different hosts talk to each other.
The network layer lets hosts talk to each other; the transport layer distinguishes between the processes on those hosts, breaking messages into segments and reassembling them.

Multiplexing / Demultiplexing at the Transport Layer

How the transport layer handles data: from the network level it receives IP datagrams, which carry the source and destination IP addresses. Each datagram contains one transport-layer segment, and the segment header carries the port information.

multiplexing (L7 → L4): data arriving from several application-level sockets gets a transport header recording which port it should go to. demultiplexing (L4 → L7): the port information in the header is used to hand the segment to the matching process’s socket.

When demultiplexing, UDP is fundamentally connectionless: knowing just the destination IP and port is enough to send, and regardless of source IP and port, everything with the same destination IP and port arrives at the same socket. TCP, in contrast, is connection-oriented and distinguishes sockets using all four of source IP, source port, destination IP, and destination port; if even one of those four differs, a different socket is used.


UDP

Connectionless: no handshaking; each UDP segment is independent, with no ordering. Since sends and receipts are not acknowledged, loss is possible → no reliability (though you can build it at the application level if you want). These very properties make UDP simple, which is why it is widely used. And since there is no congestion control, you can send as much as you like.

(https://tools.ietf.org/html/rfc768)

Source Port is an optional field, when meaningful, it indicates the port of the sending process, and may be assumed to be the port to which a reply should be addressed in the absence of any other information.

Checksum

Looking at the UDP header there is a checksum, computed over part of the IP header, the UDP header, and the UDP data. The problem is that the transport layer does not have the IP header information, so a pseudo header containing the needed parts is built and used for the computation. → TCP does it the same way. http://www.netfor2.com/udpsum.htm ← checksum calculation code

Hmm... with L3 switching, doing DSR (Direct Server Return) rewrites the destination IP address to the VIP, which would mean updating the checksum again... http://tech.kakao.com/2014/05/28/l3dsr/ (just a thought that suddenly occurred to me)

The computation uses one’s complement arithmetic: when a carry comes out of the leftmost bit it is added back in, and the value stored in the checksum field is the one’s complement (NOT) of the final sum. In IPv4 the UDP checksum is optional; a value of 0 means no check is performed.
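A sketch of that computation in code (the RFC 1071-style algorithm, not tied to any particular stack’s implementation):

#include <cstddef>
#include <cstdint>

// One's-complement sum of 16-bit big-endian words: carries out of the top
// bit are folded back in, and the stored checksum is the bitwise NOT.
std::uint16_t internetChecksum(const std::uint8_t *data, std::size_t len)
{
    std::uint32_t sum = 0;
    for (std::size_t i = 0; i + 1 < len; i += 2)
        sum += (std::uint32_t(data[i]) << 8) | data[i + 1];
    if (len & 1)                      // an odd trailing byte is padded with zero
        sum += std::uint32_t(data[len - 1]) << 8;
    while (sum >> 16)                 // fold carries back into the low 16 bits
        sum = (sum & 0xFFFF) + (sum >> 16);
    return static_cast<std::uint16_t>(~sum);
}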


RDT (Reliable Data Transfer) – important

Reliable data transfer is a protocol notion in a general sense: any protocol that meets the spec can be called an rdt protocol. The requirements are retransmission, error detection, and acknowledgments. In the lecture slides this is modeled by wrapping an unreliable data transfer send function.

rdt 1.0

Over a safe channel (no loss, no bit errors), simply sending is safe all by itself.

rdt 2.0

Assumptions: bit errors occur, but no loss. → Add a checksum, plus feedback: if the checksum is wrong, send a NAK (retransmission); if it is fine, send an ACK.

Problem: the ACK/NAK itself may get corrupted.

rdt 2.1

Solutions

  • Add a checksum to the ACK/NAK to detect errors; when corruption occurs, retransmit the data
  • Add a sequence number so the receiving side can detect duplicated data

Questions

  • Is a NAK really necessary? → What if, when corruption occurs in state 1, the receiver sends ACK 0 instead of a NAK? (This is the “triple duplicate ACK” receipt actually used by several TCP implementations
    → TCP fast retransmission): with 2 duplicate ACKs there is still no retransmission until the timeout, but once a 3rd duplicate ACK arrives, fast retransmission happens. (SR ARQ)
  • Twice as many states as before

rdt 2.2

NAK-free protocol: only ACKs are used; when corruption occurs, the previous ACK is sent again. → The ACK must therefore carry a sequence number.

rdt 3.0

Assumptions: bit errors occur, and loss occurs. Solution: add a timeout; if no ACK arrives in time, retransmit.

Stop & Wait analysis: 1 Gbps link, 15 ms propagation, 8 kb packet

→ Transmission time = 8*10^3 bits / 10^9 bits/sec = 8/10^6 sec = 8 usec (microseconds). Utilization: time spent transmitting / total time → transmission time / (RTT + transmission time) = 0.008 ms / 30.008 ms ≈ 0.00027
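A quick sanity check of those numbers (just recomputing the figures above):

#include <cstdio>

int main()
{
    const double L = 8e3;     // packet size: 8 kb = 8000 bits
    const double R = 1e9;     // link rate: 1 Gbps
    const double rtt = 30e-3; // 2 x 15 ms propagation delay

    const double tTrans = L / R;                 // 8e-6 s = 8 microseconds
    const double util = tTrans / (rtt + tTrans); // ~0.00027
    std::printf("transmission: %g s, utilization: %g\n", tTrans, util);
}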


Pipelined protocols

Problem: sending one packet and then waiting for an ACK or a timeout frankly makes no sense. We need a way to send several at once.

Solution: use pipelining to send several packets at a time → to send several, the sequence number range must be extended, since every in-flight packet needs a unique number → the buffer must also be extended; previously one packet’s worth was enough, but now we need as much as the pipeline requires.

Effect: sending 3 at a time triples the utilization!

The protocols described below introduce the concept of a sliding window. It is the same idea as the buffer above: with packets 0 through 10 to send and a buffer size of 3, the window moves along as (0, 2) … (8, 10) while processing.

When looking at pipelined protocols, three event points deserve the most attention:

  1. Send invocation: what happens to the sequence number and the sliding window status when send is called
  2. Receipt of an ACK: how an incoming ACK is handled
  3. Timeout event: how a timeout is handled

GBN (Go-back-N)

When send is first called, if the window is full the request is rejected; otherwise the packet is placed in the window and transmitted.

When receiving N packets, the receiver sends an ACK with the sequence number of the last packet received correctly in order (cumulative ACK).

→ When packets 0 through 5 are sent, the receiver could send ACKs 0, 1, and 2, but sending only ACK 2 implies that everything up to 2 has arrived safely.

The timeout is based on the packet that has waited longest for its ACK (the front of the window); when it fires, everything from the timed-out packet onward is retransmitted → a single timer, tracking that oldest packet, is enough, since the whole window gets retransmitted anyway.

On an error, or when packets arrive out of order, an ACK carrying the previous sequence number is sent. Out-of-order discard: after ACKing up to 2, packet 3 is missed and 4 arrives first; 4 is then discarded and ACK 2 is sent again.

Duplicate ACKs received by the sender are ignored; if the expected ACK never arrives, the sender waits until the timeout and then retransmits.

→ GBN fulfils the rdt protocol requirements and performs well, so TCP later adopted a lot of the GBN style; however, in TCP, segments that do not arrive in order are still placed into the buffer in order, plus elements of SR (Selective Repeat).

SR (Selective Repeat)

When send is called, check whether an unused sequence number remains. If the next sequence number lies within the sender’s current window, send right away; otherwise buffer the data or push it back to the upper layer.

When receiving N packets, an ACK is sent for every single packet (individual ACKs).

Each packet whose ACK does not come back is retransmitted on its own timeout. → as many timers are needed as there are unACKed packets

Problem: sliding window synchronization between sender and receiver

With a window size of 3 and a sequence number range of 0–3, synchronization breaks when ACKs get dropped. The problem arises because one side cannot see how the other’s sliding window moves; it only sees which sequence numbers arrive. So it cannot tell whether a 0 is the not-yet-received 0 of the current window or the 0 of the previous window.

The problem can only be avoided if at least: sliding window size * 2 ≤ sequence number space size.


TCP(Transmission Control Protocol)

Connection-oriented: an establishment step is needed before the two sides first talk. → By handshaking to establish the connection, both sides initialize their state variables.

Full duplex data: communication is bidirectional, so both sides can exchange data.

Point to point: only the two endpoints talk to each other. → multicasting is not possible over TCP

Flow controlled: the sender does not overload the receiver’s buffer.

TCP segment structure

  • Src, Dst port: 16 bits each
  • Sequence number, 32 bits: the position of the first byte of this segment → with SYN set, it is the ISN (initial sequence number), and the first byte of data is ISN+1

    If SYN is present the sequence number is the initial sequence number (ISN) and the first data octet is ISN+1.

  • Acknowledgement number, 32 bits: the sequence number expected in the next segment received
  • Data offset, 4 bits: the length of the TCP header in 4-byte units, i.e. the offset at which the data starts.
  • Reserved, 6 bits: reserved, set to 0
  • Control bits, 6 bits
  • URG: set when the Urgent Pointer field is in use.
  • ACK: set when the Acknowledgment field is in use.
  • RST, SYN, FIN: connection establishment and teardown
  • PSH: push function, deliver the data straight to the upper layer
  • URG: signals that urgent data is present(?), used together with the Urgent Pointer (like PSH, not that important)
  • Window, 16 bits: the size used for flow control; simply put, how much data the receiver is willing to accept
  • Checksum, 16 bits: the data checksum, computed the same way as UDP’s

The fields below those are not important here, so I’ll skip them.


TCP RTT

We need RTT < timeout for packets to arrive and be handled normally, but the RTT can take an enormous number of values. So how do we estimate it, roughly?

SampleRTT: the time measured from sending data until its ACK comes back. This value swings a lot with network congestion, so it is not used as-is.

EstimatedRTT = (1 - a) * EstimatedRTT + a * SampleRTT: an exponential moving average over the SampleRTTs, which gives recent samples a high weight while still letting earlier values have some influence. The weight a is commonly 0.125. But this is an average, with no guarantee of being larger than the RTT; looking at real graphs it sits around the middle, so about half the packets could still time out.

DevRTT = (1 - b) * DevRTT + b * |SampleRTT - EstimatedRTT|: a value meant to provide a safety margin, a running average of the gap between the current sample and the estimate. The weight b is commonly 0.25.

TimeoutInterval = EstimatedRTT + 4 * DevRTT (the final value)
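The same estimator in code (a sketch directly following the formulas above with the conventional weights; the first-sample initialization follows RFC 6298):

#include <cmath>

struct RttEstimator
{
    double estimatedRtt = 0.0; // seconds
    double devRtt = 0.0;
    bool first = true;

    void onSample(double sampleRtt)
    {
        constexpr double a = 0.125, b = 0.25;
        if (first) {
            estimatedRtt = sampleRtt;
            devRtt = sampleRtt / 2;
            first = false;
        } else {
            devRtt = (1 - b) * devRtt + b * std::abs(sampleRtt - estimatedRtt);
            estimatedRtt = (1 - a) * estimatedRtt + a * sampleRtt;
        }
    }

    double timeoutInterval() const { return estimatedRtt + 4 * devRtt; }
};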


TCP RDT

TCP also adds mechanisms in order to implement rdt:

  1. Pipelined segments
  2. Cumulative acks
  3. single retransmission timer

Retransmission conditions: timeout, duplicate ACKs. On timeout: retransmit, starting from the unACKed packet with the smallest sequence number.

When the sender sends a packet, the Seq field carries the current sequence number, which the data size then advances. When the receiver gets it correctly, its ACK carries the sequence number it expects to receive next. As written above, cumulative ACKs apply.

When the receiver gets an out-of-order packet, it sends an ACK containing the sequence number it was originally expecting.

If the timeout delay in this process is too long, TCP fast retransmit is used: once the ACK for the lost segment has been sent 3 times, the sender behaves just as it would on a timeout.


Flow Control

OS TCP Code → TCP socket recv buffer → USER Application Process

With this structure, and assuming out-of-order segments are not handled, an in-order segment is placed into the recv buffer, and the application process fetches the buffered data via recv. But if the application process has a lot to do, the recv buffer may fill up; packets then get dropped and have to be transmitted again.

Flow control was created for exactly this reason.

The figure above shows the receiver’s buffer. RcvBuffer: the total buffer size, typically 4096 bytes. RcvWindow (rwnd): the space left in the buffer, which is advertised as the window size.

Receiver: puts its remaining RcvWindow into the rwnd field and sends it.

Sender: given it may send up to rwnd, it has to track the two variables LastByteSent and LastByteAcked. Since LastByteSent - LastByteAcked ≤ rwnd must hold, this invariant has to be maintained for as long as the session lasts.

Keeping to this guarantees that no overflow occurs.


Three-way Handshaking (Connection)

Unlike UDP, which can receive from many hosts on a single socket, TCP creates one connection per sender/receiver pair. The procedure for establishing this connection is called the handshake.

Because the procedure runs over three exchanges, it is called three-way handshaking.

TCP A <——> TCP B

  1. ——— SEQ(ISN A), CTL<SYN> ———→

  2. ←——— SEQ(ISN B), ACK(ISN A + 1), CTL<SYN, ACK> ———

  3. ——— SEQ(ISN A + 1), ACK(ISN B + 1), CTL<ACK> ———→

It proceeds as above. The reason for taking three steps is to synchronize the two sides’ ISNs (Initial Sequence Numbers). Moreover, with a 2-way exchange the two sides cannot see each other’s state: during connection initiation, an old duplicate connection request delayed by network conditions can lead to a half-open situation, meaning only one side opens the connection while the other has already timed out, so no connection may come about at all.

Three-way handshaking, however, lets each side learn the other’s state during the SYN-ACK and ACK steps, so it can deal with simultaneous connection synchronization, duplicated initialization, and the like. A wrong connection is torn down via the RST condition. Details can be found at https://tools.ietf.org/html/rfc793#page-31.

The session closing procedure is called four-way handshaking.

Since the goal there is for both sides to close, each side sends a FIN once, and each of those requests is answered with an ACK.


Congestion Control

If flow control was about the host’s buffer potentially overflowing, congestion means the network itself is in bad shape; here, “bad shape” means that the routers’ link buffers may fill up.

cwnd (congestion window): the sender-side limit on how much data may be sent before an ACK is received; this value is set by the algorithm. rwnd (receiver’s advertised window): the receiver-side limit on how much data it can cope with. SMSS (Sender Maximum Segment Size): the largest segment the sender can transmit, determined by the network’s MTU (Maximum Transmission Unit). RMSS (Receiver Maximum Segment Size): the largest segment the receiver wants to receive; this is sent in the MSS option when the connection is established.

ssthresh (slow start threshold): the variable that decides whether to use slow start or congestion avoidance.

Slow Start & Congestion Avoidance

Slow start and congestion avoidance are, at bottom, algorithms for guessing the network’s capacity. Let’s see how the two variables they drive, cwnd and ssthresh, change in response to which events.

Since the current network conditions are unknown at first, we begin by sending slowly (slow start). We can’t start by sending nothing at all, so cwnd is initialized to at most 2*SMSS. ssthresh should start somewhat large; it is said to be set around ssthresh = max(FlightSize / 2, 2*SMSS).

Once these initial values are set, every returning ACK sets cwnd = min(rwnd, 2*cwnd), and cwnd keeps growing this way while cwnd < ssthresh. This phase is called slow start.

When cwnd > ssthresh, we move into congestion avoidance, sending carefully so the network does not become congested. From then on, every ACK sets cwnd = cwnd + SMSS*(SMSS/cwnd); that is, cwnd grows by 1 SMSS per window’s worth of ACKs. This tiny-step climb is called additive increase: being careful, and careful again, not to cause congestion.

And if a timeout occurs during congestion avoidance, congestion is taken to have happened: we set ssthresh = cwnd/2 and cwnd = 1*SMSS and go back to slow start, i.e. restart cwnd from its minimum to minimize the congestion. Shrinking the window drastically like this is called multiplicative decrease, and the overall window-control scheme is called Additive Increase & Multiplicative Decrease (AIMD).
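A toy model of these rules (sizes in bytes; the SMSS value and the per-ACK granularity are simplifications for illustration):

#include <algorithm>
#include <cstdint>

struct CongestionControl
{
    static constexpr std::uint64_t SMSS = 1460;

    std::uint64_t cwnd = 2 * SMSS;      // initial window, <= 2*SMSS
    std::uint64_t ssthresh = 64 * 1024; // large initial threshold

    void onAck()
    {
        if (cwnd < ssthresh)
            cwnd += SMSS;               // slow start: doubles per RTT
        else
            cwnd += SMSS * SMSS / cwnd; // avoidance: ~1 SMSS per RTT
    }

    void onTimeout()
    {
        ssthresh = std::max(cwnd / 2, 2 * SMSS); // multiplicative decrease
        cwnd = 1 * SMSS;                         // back to slow start
    }
};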

Fast Retransmit & Fast Recovery

This has been repeated almost to the point of tedium, so only a rough description here. When 3 duplicate ACKs arrive, only that segment is judged lost, and it is retransmitted immediately without waiting for the timeout; this is called fast retransmit. The reason only the 3-duplicate case gets the fast path is that it indicates just that one segment was dropped or reordered, so handling that segment quickly is better than resending everything.

After the retransmit, it is assumed that some mild congestion occurred, so the values are set as ssthresh = cwnd / 2, cwnd = ssthresh + 3*SMSS, and then cwnd = cwnd + SMSS for every further duplicate ACK that arrives. This phase is called fast recovery: congestion still has to be avoided, but not severely enough to go all the way back to slow start.

Reference

RFC 793 - Transmission Control Protocol

Transmission Control Protocol

KDAB has released a new version of KDSoap. This is version 1.8.0, which comes more than a year after the last release (1.7.0).

KDSoap is a tool for creating client applications for web services without the need for any further component such as a dedicated web server.

KDSoap lets you interact with applications which have APIs that can be exported as SOAP objects. The web service then provides a machine-accessible interface to its functionality via HTTP. Find out more...
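For a flavor of the client-side API, here is a minimal sketch of a synchronous call (the endpoint, namespace, method, and argument names are made up for the example):

#include <KDSoapClient/KDSoapClientInterface.h>
#include <KDSoapClient/KDSoapMessage.h>
#include <QCoreApplication>
#include <QDebug>

int main(int argc, char **argv)
{
    QCoreApplication app(argc, argv);

    // Hypothetical service endpoint and message namespace.
    KDSoapClientInterface client(QStringLiteral("https://example.com/soap"),
                                 QStringLiteral("urn:example"));

    KDSoapMessage request;
    request.addArgument(QStringLiteral("name"), QStringLiteral("world"));

    // Blocking call; an asynchronous job-based API is available as well.
    const KDSoapMessage response =
        client.call(QStringLiteral("sayHello"), request);

    if (response.isFault())
        qWarning() << response.faultAsString();
    else
        qDebug() << response.value().toString();
    return 0;
}

In practice you would usually let kdwsdl2cpp generate a typed client class from the service’s WSDL instead of building messages by hand.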

Version 1.8.0 has a large number of improvements and fixes:

General

  • Fixed internally-created faults lacking an XML element name (so e.g. toXml() would abort)
  • KDSoapMessage::messageAddressingProperties() is now correctly filled in when receiving a message with WS-Addressing in the header

Client-side

  • Added support for timing out requests (default 30 minutes, configurable with KDSoapClientInterface::setTimeout())
  • Added support for soap 1.2 faults in faultAsString()
  • Improved detection of soap 1.2 faults in HTTP response
  • Stricter namespace check for Fault elements being received
  • Report client-generated faults as SOAP 1.2 if selected
  • Fixed error code when authentication failed
  • Autodeletion of jobs is now configurable (github pull #125)
  • Added error details in faultAsString() – and the generated lastError() – coming from the SOAP 1.2 detail element.
  • Fixed memory leak in KDSoapClientInterface::callNoReply
  • Added support for WS-UsernameToken, see KDSoapAuthentication
  • Extended KDSOAP_DEBUG functionality (e.g. “KDSOAP_DEBUG=http,reformat” will now print http-headers and pretty-print the xml)
  • Added support for specifying requestHeaders as part of KDSoapJob via KDSoapJob::setRequestHeaders()
  • Renamed the missing KDSoapJob::returnHeaders() to KDSoapJob::replyHeaders(), and provide an implementation
  • Made KDSoapClientInterface::soapVersion() const
  • Added lastFaultCode() for error handling after sync calls. Same as lastErrorCode() but it returns a QString rather than an int.
  • Added conversion operator from KDDateTime to QVariant to avoid implicit conversion to base QDateTime (github issue #123).

Server-side

  • New method KDSoapServerObjectInterface::additionalHttpResponseHeaderItems to let server objects return additional http headers. This can be used to implement support for CORS, using KDSoapServerCustomVerbRequestInterface to implement OPTIONS response, with “Access-Control-Allow-Origin” in the headers of the response (github issue #117).
  • Stopped generation of two job classes with the same name, when two bindings have the same operation name. Prefixed one of them with the binding name (github issue #139 part 1)
  • Prepended this-> in method class to avoid compilation error when the variable and the method have the same name (github issue #139 part 2)

WSDL parser / code generator changes, applying to both client and server side

  • Source incompatible change: all deserialize() functions now require a KDSoapValue instead of a QVariant. If you use a deserialize(QVariant) function, you need to port your code to use KDSoapValue::setValue(QVariant) before deserialize()
  • Source incompatible change: all serialize() functions now return a KDSoapValue instead of a QVariant. If you use a QVariant serialize() function, you need to port your code to use QVariant KDSoapValue::value() after serialize()
  • Source incompatible change: xs:QName is now represented by KDQName instead of QString, which allows the namespace to be extracted. The old behaviour is available via KDQName::qname().
  • Fixed double-handling of empty elements
  • Fixed fault elements being generated in the wrong namespace, must be SOAP-ENV:Fault (github issue #81).
  • Added import-path argument for setting the local path to get (otherwise downloaded) files from.
  • Added -help-on-missing option to kdwsdl2cpp to display extra help on missing types.
  • Added C++17 std::optional as possible return value for optional elements.
  • Added -both to create both header(.h) and implementation(.cpp) files in one run
  • Added -namespaceMapping @mapping.txt to import url=code mappings, affects C++ class name generation
  • Added functionality to prevent downloading the same WSDL/XSD file twice in one run
  • Added “hasValueFor{MemberName}()” accessor function, for optional elements
  • Generated services now include soapVersion() and endpoint() accessors to match the setSoapVersion(…) and setEndpoint(…) mutators
  • Added support for generating messages for WSDL files without services or bindings
  • Fixed erroneous QT_BEGIN_NAMESPACE around forward-declarations like Q17__DialogType.
  • KDSoapValue now stores the namespace declarations during parsing of a message and writes namespace declarations during sending of a message
  • Avoid serialize crash with required polymorphic types, if the required variable wasn’t actually provided
  • Fixed generated code for restriction to base class (it wouldn’t compile)
  • Prepended “undef daylight” and “undef timezone” to all generated files, to fix compilation errors in wsdl files that use those names, due to nasty Windows macros
  • Added generation for default attribute values.

Get KDSoap…

KDSoap on github…

The post KDSoap 1.8.0 released appeared first on KDAB.

May 21, 2019

I’m very excited to start off the Google Summer of Code blogging experience regarding the project I’m doing with my KDE mentors David Edmundson and Nate Graham. What we’ll be trying to achieve this summer is to have SDDM be more in sync with the Plasma desktop. What does that mean? The essence of the problem…

May 20, 2019

Plasma 5.16 beta was released last week and there’s now a further couple of weeks to test it to find and fix all the beasties. To help out download the Neon Testing image and install it in a virtual machine or on your raw hardware. You probably want to do a full-upgrade to make sure you have the latest builds. Then try out the new notifications system, or the new animated wallpaper settings or anything else mentioned in the release announcement. When you find a problem report it on bugs.kde.org and/or chat on the Plasma Matrix room. Thanks for your help!

I have been at many software events and have helped or have been part of the organization in a few of them. Based on that experience and the fact that I have participated in the last two editions, let me tell you that J On The Beach is a great event.

The main factors that leads me to such a conclusion are:

  • It is all about content. I have seen many events that, over time, lose their focus on the quality of the content. It is a hard focus to keep, especially as you grow. @JOTB19 had great content: well-delivered talks and workshops, performed by bright people with something to say that was relevant to the audience.
    • I think the event has not reached its limit yet, especially when it comes to workshops.
    • Designing the content structure to target the right audience is as important as bringing speakers with great things to say. As any event matures, tough decisions will need to be taken in order to find its own space and identity among outstanding competitors.
      • When it comes to themes, will J On The Beach keep targeting several topics, or will it narrow them to one or two? Will they always be the same or will they rotate?
      • When it comes to size, will it grow or will it remain in the current numbers? Will the price increase or will be kept in the current range?
      • When it comes to content, will the event focus more energy and time on the “hands-on” learning sessions, or will workshops be kept as relevant compared to the talks as they are today? Will the talks’ length be reduced? Will we see lightning talks?
  • J On The Beach was well organised. A good organization is not one that never runs into trouble, but one that handles it so smoothly that there is little or no perceived impact. Judging by the little to no impact I perceived, this event has a diligent team behind it.
  • Support from local companies. As Málaga matures as a software hub, more and more companies arrive in this area expecting to grow in size, so the need to attract local talent grows in parallel.
    • Some of these foreign companies understand how important it is to show up in local events to be known by as many local developers as possible. J On The Beach has captured the attention of several of these companies.
    • The organizers have understood this reality and support these companies in using the event to openly recruit people. This symbiotic relationship is a very productive one, from what I have witnessed.
    • It is a hard relationship to sustain, though, especially if the event does not grow in size, so the current relationship will probably need additional common interests to remain productive for both sides.
  • Global by default. Most events in Spain have traditionally been designed for Spaniards first, turning into more global events as they grow. J On The Beach has been global by default, by design, since day one. It is harder to succeed that way, but beyond the activation point it becomes easier to be sustainable. The organizers took the risk and have already reached that point, which in my opinion gives the event a bright future.
    • The fact that the event is able to attract developers from many countries, especially from eastern European ones, makes J On The Beach very attractive, from a recruitment perspective, to foreign companies already located in Málaga. Málaga is a great place not just to work in English but also to live in English. There are well-established communities from many different countries in the metropolitan area, thanks to how strong the tourism industry is here. These factors, together with others like logistics, affordable living costs, a good public health care system, sunny weather, and the availability of international and multilingual schools, reduce the adaptation effort when relocating, especially for developers’ families. J On The Beach brings tasty fish to the pond.

Let me name a couple of points that can make the event even better:

  • It is very hard to find a venue that fits an event during its consolidation phase and evolves with it. This edition’s venue represents a significant improvement compared to last year’s. There is room for improvement though.
    • It would be ideal to find a venue in Málaga itself, closer to where the companies are located and to places to hang out after the event, while keeping the many good things the current venue and location provide.
    • Finding the right venue is tough. There are decision-making factors that participants do not usually see but that are essential, like costs, how supportive the venue staff and owners are, accommodation availability in the surrounding area, availability on the selected dates, etc. It is one of the most difficult points to get right, in my experience.
  • Great events deserve great keynote speakers. They are hard to get, but they often make the difference between a great event and a must-attend one.
    • Great keynote speakers are not necessarily popular ones. Celebrities already show up at bigger and more expensive events. I would love to see old-time computer science cowboys in Málaga: those first-class engineers who did something relevant some time ago and have witnessed the evolution of our industry and of their own inventions. They can bring a perspective very few others can, which is extremely valuable in these fast-changing times. Such gems are harder to see at big, popular events and might be a good target for a smaller, high-quality event. It would be a great sign of success if professionals of that kind came to speak at J On The Beach.

I am very glad there is such a great event close to where I live. J On The Beach is worthwhile not just for local developers but also for those from abroad. Every year I attend several events in other countries with bigger names but less value than J On The Beach. It will definitely be on my 2020 agenda. Thanks to every person involved in making it possible.

Pictures taken from the J On The Beach website.

May 19, 2019

Continuing with the addition of the line terminating style for the Straight Line annotation tool, I have added the ability to also select the line start style. The required code changes were committed today.

Line annotation with circled start and closed arrow ending.

Currently this is supported only for PDF documents (with poppler version ≥ 0.72), but that will change soon, thanks to another change by Tobias Deiminger, currently under review, which extends the functionality to other document formats supported by Okular.

libqaccessibilityclient 0.4.1 is out now
https://download.kde.org/stable/libqaccessibilityclient/
http://embra.edinburghlinux.co.uk/~jr/tmp/pkgdiff_reports/libqaccessibilityclient/0.4.0_to_0.4.1/changes_report.html
Signed by Jonathan Riddell
https://sks-keyservers.net/pks/lookup?op=vindex&search=0xEC94D18F7F05997E
  • version 0.4.1
  • Use only undeprecated KDEInstallDirs variables
  • KDECMakeSettings already cares for CMAKE_AUTOMOC & BUILD_TESTING
  • Fix use in cross compilation
  • Q_ENUMS -> Q_ENUM
  • more complete release instructions


Recently I have been researching possibilities for making members of KoShape copy-on-write. At first glance, it seems enough to declare the d-pointers as some subclass of QSharedDataPointer (see Qt’s implicit sharing) and then replace pointers with instances. However, a number of problems remain to be solved, one of them being polymorphism.

polymorphism and value semantics

In the definition of KoShapePrivate class, the member fill is stored as a QSharedPointer:

QSharedPointer<KoShapeBackground> fill;

There are a number of subclasses of KoShapeBackground, including KoColorBackground and KoGradientBackground, to name just a few. We cannot store an instance of KoShapeBackground directly, since we want polymorphism. But, well, making KoShapeBackground copy-on-write seems to have nothing to do with whether we store it as a pointer or an instance, so let’s set that question aside – I will come back to it at the end of this post.
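To see why storing the base class by value would break polymorphism, here is a minimal standalone illustration of object slicing, with made-up classes rather than the actual KoShapeBackground hierarchy:

#include <iostream>

struct Background
{
    virtual ~Background() = default;
    virtual const char *name() const { return "plain"; }
};

struct ColorBackground : Background
{
    const char *name() const override { return "color"; }
};

int main()
{
    ColorBackground c;
    Background byValue = c;        // sliced: the ColorBackground-specific part is lost
    const Background &byRef = c;   // polymorphism preserved
    std::cout << byValue.name()    // prints "plain"
              << " " << byRef.name() << std::endl; // prints "color"
}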

d-pointers and QSharedData

The KoShapeBackground hierarchy (similar to the KoShape one) uses derived d-pointers for storing private data. To make things easier, I will use a small example here to elaborate on their use.

derived d-pointer

// assumes #include <QScopedPointer> and <iostream>, plus "using namespace std;"
class AbstractPrivate
{
public:
    AbstractPrivate() : var(0) {}
    virtual ~AbstractPrivate() = default;

    int var;
};

class Abstract
{
public:
    // it is not yet copy-constructible; we will come back to this later
    // Abstract(const Abstract &other) = default;
    ~Abstract() = default;
protected:
    explicit Abstract(AbstractPrivate &dd) : d_ptr(&dd) {}
public:
    virtual void foo() const = 0;
    virtual void modifyVar() = 0;
protected:
    QScopedPointer<AbstractPrivate> d_ptr;
private:
    Q_DECLARE_PRIVATE(Abstract)
};

class DerivedPrivate : public AbstractPrivate
{
public:
    DerivedPrivate() : AbstractPrivate(), bar(0) {}
    virtual ~DerivedPrivate() = default;

    int bar;
};

class Derived : public Abstract
{
public:
    Derived() : Abstract(*(new DerivedPrivate)) {}
    // it is not yet copy-constructible
    // Derived(const Derived &other) = default;
    ~Derived() = default;
protected:
    explicit Derived(AbstractPrivate &dd) : Abstract(dd) {}
public:
    void foo() const override { Q_D(const Derived); cout << "foo " << d->var << " " << d->bar << endl; }
    void modifyVar() override { Q_D(Derived); d->var++; d->bar++; }
private:
    Q_DECLARE_PRIVATE(Derived)
};

The main goal of making DerivedPrivate a subclass of AbstractPrivate is to avoid multiple d-pointers in the structure. Note that there are constructors taking a reference to the private data object. These make it possible for a Derived object to use the same d-pointer as its Abstract parent. The Q_D() macro is used to convert the d_ptr, which is a pointer to AbstractPrivate, to another pointer, named d, of one of its descendant types; here, a DerivedPrivate. It is used together with the Q_DECLARE_PRIVATE() macro in the class definition and has a rather complicated implementation in the Qt headers. But for simplicity, it does not hurt for now to understand it as the following:

#define Q_D(Class) Class##Private *const d = reinterpret_cast<Class##Private *>(d_ptr.data())

where Class##Private simply means appending the string Private to (the macro argument) Class.
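For example, inside a Derived method, Q_D(Derived) under this simplified definition expands to:

DerivedPrivate *const d = reinterpret_cast<DerivedPrivate *>(d_ptr.data());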

Now let’s test it by creating a pointer to Abstract and giving it a Derived object:

int main()
{
    QScopedPointer<Abstract> ins(new Derived());
    ins->foo();
    ins->modifyVar();
    ins->foo();
}

Output:

foo 0 0
foo 1 1

Looks pretty viable – everything’s working well! – What if we use Qt’s implicit sharing? Just make AbstractPrivate a subclass of QSharedData and replace QScopedPointer with QSharedDataPointer.

making d-pointer QSharedDataPointer

In the last section, we commented out the copy constructors since QScopedPointer is not copy-constructible, but QSharedDataPointer is, so we add them back:

class AbstractPrivate : public QSharedData
{
public:
    AbstractPrivate() : var(0) {}
    virtual ~AbstractPrivate() = default;

    int var;
};

class Abstract
{
public:
    Abstract(const Abstract &other) = default;
    ~Abstract() = default;
protected:
    explicit Abstract(AbstractPrivate &dd) : d_ptr(&dd) {}
public:
    virtual void foo() const = 0;
    virtual void modifyVar() = 0;
protected:
    QSharedDataPointer<AbstractPrivate> d_ptr;
private:
    Q_DECLARE_PRIVATE(Abstract)
};

class DerivedPrivate : public AbstractPrivate
{
public:
    DerivedPrivate() : AbstractPrivate(), bar(0) {}
    virtual ~DerivedPrivate() = default;

    int bar;
};

class Derived : public Abstract
{
public:
    Derived() : Abstract(*(new DerivedPrivate)) {}
    Derived(const Derived &other) = default;
    ~Derived() = default;
protected:
    explicit Derived(AbstractPrivate &dd) : Abstract(dd) {}
public:
    void foo() const override { Q_D(const Derived); cout << "foo " << d->var << " " << d->bar << endl; }
    void modifyVar() override { Q_D(Derived); d->var++; d->bar++; }
private:
    Q_DECLARE_PRIVATE(Derived)
};

And testing the copy-on-write mechanism:

int main()
{
    QScopedPointer<Derived> ins(new Derived());
    QScopedPointer<Derived> ins2(new Derived(*ins));
    ins->foo();
    ins->modifyVar();
    ins->foo();
    ins2->foo();
}

But, eh, it’s a compile-time error.

error: reinterpret_cast from type 'const AbstractPrivate*' to type 'AbstractPrivate*' casts away qualifiers
    Q_DECLARE_PRIVATE(Abstract)

Q_D, revisited

So, where does the const removal come from? In qglobal.h, the code related to Q_D is as follows:

template <typename T> inline T *qGetPtrHelper(T *ptr) { return ptr; }
template <typename Ptr> inline auto qGetPtrHelper(const Ptr &ptr) -> decltype(ptr.operator->()) { return ptr.operator->(); }

// The body must be a statement:
#define Q_CAST_IGNORE_ALIGN(body) QT_WARNING_PUSH QT_WARNING_DISABLE_GCC("-Wcast-align") body QT_WARNING_POP
#define Q_DECLARE_PRIVATE(Class) \
    inline Class##Private* d_func() \
    { Q_CAST_IGNORE_ALIGN(return reinterpret_cast<Class##Private *>(qGetPtrHelper(d_ptr));) } \
    inline const Class##Private* d_func() const \
    { Q_CAST_IGNORE_ALIGN(return reinterpret_cast<const Class##Private *>(qGetPtrHelper(d_ptr));) } \
    friend class Class##Private;

#define Q_D(Class) Class##Private * const d = d_func()

It turns out that Q_D will call d_func() which then calls an overload of qGetPtrHelper() that takes const Ptr &ptr. What does ptr.operator->() return? What is the difference between QScopedPointer and QSharedDataPointer here?

QScopedPointer‘s operator->() is a const method that returns a non-const pointer to T; however, QSharedDataPointer has two operator->()s, one being const T* operator->() const, the other T* operator->(), and they have quite different behaviours – the non-const variant calls detach() (where copy-on-write is implemented), but the const one does not.

qGetPtrHelper() here can only take d_ptr as a const QSharedDataPointer, not a non-const one; so, no matter which d_func() we are calling, we can only get a const AbstractPrivate *. That is exactly the problem here.
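To make the difference concrete, here is a minimal sketch (using the example classes above, not code from Qt or Calligra) of how the two access paths behave:

// Assumes the AbstractPrivate/DerivedPrivate classes from the listing above.
void demo()
{
    QSharedDataPointer<AbstractPrivate> p(new DerivedPrivate());
    QSharedDataPointer<AbstractPrivate> q = p;  // shallow copy: both now share one object

    const AbstractPrivate *c = q.constData();   // const access: no detach(), still shared
    AbstractPrivate *m = q.data();              // non-const access: detach() deep-copies first
    Q_UNUSED(c); Q_UNUSED(m);
}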

To resolve this problem, let’s replace the Q_D macros with the ones we define ourselves:

#define CONST_SHARED_D(Class) const Class##Private *const d = reinterpret_cast<const Class##Private *>(d_ptr.constData())
#define SHARED_D(Class) Class##Private *const d = reinterpret_cast<Class##Private *>(d_ptr.data())

We will then use SHARED_D(Class) in place of Q_D(Class) and CONST_SHARED_D(Class) for Q_D(const Class). Since the const and non-const variants really do behave differently, it helps to differentiate the two uses. Also, delete Q_DECLARE_PRIVATE since we do not need it any more:

class AbstractPrivate : public QSharedData
{
public:
    AbstractPrivate() : var(0) {}
    virtual ~AbstractPrivate() = default;

    int var;
};

class Abstract
{
public:
    Abstract(const Abstract &other) = default;
    ~Abstract() = default;
protected:
    explicit Abstract(AbstractPrivate &dd) : d_ptr(&dd) {}
public:
    virtual void foo() const = 0;
    virtual void modifyVar() = 0;
protected:
    QSharedDataPointer<AbstractPrivate> d_ptr;
};

class DerivedPrivate : public AbstractPrivate
{
public:
    DerivedPrivate() : AbstractPrivate(), bar(0) {}
    virtual ~DerivedPrivate() = default;

    int bar;
};

class Derived : public Abstract
{
public:
    Derived() : Abstract(*(new DerivedPrivate)) {}
    Derived(const Derived &other) = default;
    ~Derived() = default;
protected:
    explicit Derived(AbstractPrivate &dd) : Abstract(dd) {}
public:
    void foo() const override { CONST_SHARED_D(Derived); cout << "foo " << d->var << " " << d->bar << endl; }
    void modifyVar() override { SHARED_D(Derived); d->var++; d->bar++; }
};

With the same main() code, what’s the result?

foo 0 0
foo 1 16606417
foo 0 0

… big whoops, what is that random thing there? Well, if we use dynamic_cast in place of reinterpret_cast, the program simply crashes after ins->modifyVar();, indicating that ins’s d_ptr.data() is not a DerivedPrivate at all.

virtual clones

The detach() method of QSharedDataPointer will by default create a plain AbstractPrivate copy, regardless of what the instance really is. Fortunately, it is possible to change that behaviour by specializing the clone() method.

First, we need to make a virtual function in AbstractPrivate class:

virtual AbstractPrivate *clone() const = 0;

(make it pure virtual just to force all subclasses to re-implement it; if your base class is not abstract you probably want to implement the clone() method) and then override it in DerivedPrivate:

virtual DerivedPrivate *clone() const { return new DerivedPrivate(*this); }

Then, specify the template method for QSharedDataPointer::clone(). As we will re-use it multiple times (for different base classes), it is better to define a macro:

#define DATA_CLONE_VIRTUAL(Class) template<> \
Class##Private *QSharedDataPointer<Class##Private>::clone() \
{ \
    return d->clone(); \
}
// after the definition of Abstract
DATA_CLONE_VIRTUAL(Abstract)

It is not necessary to write DATA_CLONE_VIRTUAL(Derived), as we never store a QSharedDataPointer<DerivedPrivate> anywhere in the hierarchy.

Then test the code again:

foo 0 0
foo 1 1
foo 0 0

– Just as expected! It continues to work if we replace Derived with Abstract in QScopedPointer:

QScopedPointer<Abstract> ins(new Derived());
QScopedPointer<Abstract> ins2(new Derived(*dynamic_cast<const Derived *>(ins.data())));

But another problem arises: the constructor for ins2 is ugly and messy. We could, as with the private classes, implement a virtual clone() function for this kind of thing, but that is still not elegant, and we cannot use a default copy constructor for any class that contains such QScopedPointers.

What about QSharedPointer, which is copy-constructible? Well, then the copies actually point to the same data structure and no copy-on-write is performed at all. This is still not what we want.
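A quick sketch of that behaviour, again assuming the Abstract/Derived classes from above:

void sharedPointerDemo()
{
    QSharedPointer<Abstract> a(new Derived());
    QSharedPointer<Abstract> b = a;  // copies the pointer, not the pointee
    b->modifyVar();                  // the mutation is visible through a as well
    a->foo();                        // prints "foo 1 1": no copy was ever made
}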

the Descendents of …

Inspired by Sean Parent’s video, I finally came up with the following implementation:

template<typename T>
class Descendent
{
    struct concept
    {
        virtual ~concept() = default;
        virtual const T *ptr() const = 0;
        virtual T *ptr() = 0;
        virtual unique_ptr<concept> clone() const = 0;
    };
    template<typename U>
    struct model : public concept
    {
        model(U x) : instance(move(x)) {}
        const T *ptr() const { return &instance; }
        T *ptr() { return &instance; }
        // or unique_ptr<model<U> >(new model<U>(U(instance))) if you do not have C++14
        unique_ptr<concept> clone() const { return make_unique<model<U> >(U(instance)); }
        U instance;
    };

    unique_ptr<concept> m_d;
public:
    template<typename U>
    Descendent(U x) : m_d(make_unique<model<U> >(move(x))) {}

    Descendent(const Descendent & that) : m_d(move(that.m_d->clone())) {}
    Descendent(Descendent && that) : m_d(move(that.m_d)) {}

    Descendent & operator=(const Descendent &that) { Descendent t(that); *this = move(t); return *this; }
    Descendent & operator=(Descendent && that) { m_d = move(that.m_d); return *this; }

    const T *data() const { return m_d->ptr(); }
    const T *constData() const { return m_d->ptr(); }
    T *data() { return m_d->ptr(); }
    const T *operator->() const { return m_d->ptr(); }
    T *operator->() { return m_d->ptr(); }
};

This class allows you to use Descendent<T> (read as “descendent of T“) to represent any instance of any subclass of T. It is copy-constructible, move-constructible, copy-assignable, and move-assignable.

Test code:

int main()
{
    Descendent<Abstract> ins = Derived();
    Descendent<Abstract> ins2 = ins;
    ins->foo();
    ins->modifyVar();
    ins->foo();
    ins2->foo();
}

It gives exactly the same results as before, but is much neater and nicer. How does it work?

First we define a class concept, where we put what we want our instance to satisfy: we would like to access it as const and non-const, and to clone it as-is. Then we define a template class model<U>, where U is a subclass of T, and implement these functionalities.

Next, we store a unique_ptr<concept>. The reason for not using QScopedPointer is that it is not movable, and movability is a feature we will actually want (in sink arguments and return values).

Finally it’s just the constructor, moving and copying operations, and ways to access the wrapped object.

When Descendent<Abstract> ins2 = ins; is called, we will go through the copy constructor of Descendent:

Descendent(const Descendent & that) : m_d(move(that.m_d->clone())) {}

which will then call ins.m_d->clone(). But remember that ins.m_d actually contains a pointer to model<Derived>, whose clone() is return make_unique<model<Derived> >(Derived(instance));. This expression will call the copy constructor of Derived, then make a unique_ptr<model<Derived> >, which calls the constructor of model<Derived>:

model(Derived x) : instance(move(x)) {}

which move-constructs instance. Finally the unique_ptr<model<Derived> > is implicitly converted to unique_ptr<concept>, as per the conversion rule. “If T is a derived class of some base B, then std::unique_ptr<T> is implicitly convertible to std::unique_ptr<B>.”
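That rule is easy to verify with a tiny standalone example (generic types here, not the classes above):

#include <memory>

struct Base { virtual ~Base() = default; };
struct Sub : Base {};

int main()
{
    std::unique_ptr<Sub> ps = std::make_unique<Sub>();
    std::unique_ptr<Base> pb = std::move(ps); // implicit unique_ptr<Sub> -> unique_ptr<Base>
    // ps is now empty; pb owns the object and deletes it through Base's virtual destructor
}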

And from now on, happy hacking — (.>w<.)

Hot on the heels of last week, this week’s Usability & Productivity report continues to overflow with awesomeness. Quite a lot of the work you see featured here is already available to test in the Plasma 5.16 beta, too! But why stop there? Here’s more:

New Features

Bugfixes & Performance Improvements

User Interface Improvements

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a donation to the KDE e.V. foundation.

Hi! I’m Akhil K Gangadharan and I’ve been selected for GSoC this year with Kdenlive. My project is titled ‘Revamping the Titler Tool’ and my work this summer aims to kick off the complete revamp of one of the major tools used in video editing in Kdenlive, the Titler tool.

Titler Tool?

The Titler tool is used to create, you guessed it, title clips. Title clips are clips that contain text and images that can be composited over videos.

The Titler tool

Why revamp it?

In Kdenlive, the Titler tool is implemented using QGraphicsView, which has been considered deprecated since the release of Qt 5. This makes the tool prone to upstream bugs that affect its functionality. It has caused issues in the past: popular features like the Typewriter effect had to be dropped because QGraphicsView led to uncontrollable crashes.

How?

Using QML.

Currently the Titler tool uses QPainter: every property is painted and every animation has to be programmed by hand. QML makes creating powerful animations easy, since it is a language designed for building UIs, and the resulting scene can then be rendered to create title clips as needed.

Implementation details - a brief overview

For the summer, I intend to complete work on the backend implementation. The first step is to write and test a complete MLT producer module which can render QML frames, and then to begin integrating this module with a new titler tool.
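To give a rough idea of that first step (this is only a minimal sketch of the concept, not the actual MLT producer code; title.qml stands in for a hypothetical title description), Qt can already render a QML scene into an image, and the producer’s job is to do this offscreen for every frame timestamp:

#include <QGuiApplication>
#include <QImage>
#include <QQuickView>
#include <QUrl>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    QQuickView view;
    view.setSource(QUrl::fromLocalFile("title.qml")); // hypothetical title description
    view.show();

    // grabWindow() forces a render pass and returns the scene as a QImage;
    // a real producer would render offscreen, once per frame timestamp.
    const QImage frame = view.grabWindow();
    frame.save("title-frame.png");
    return 0;
}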

This is how the backend currently looks:

current backend

After the revamp, the backend would look like this:

new backend

Once the backend is done, we will begin integrating it with Kdenlive and evolving the titler to use the new backend.

A long challenge lies ahead, and I’m looking forward to working with the community this summer and beyond to complete the tool, right from the backend to the new UI.

Finally, a big thanks to the Kdenlive community for getting me here and to my college student community, FOSS@Amrita for all the support and love!

May 18, 2019

Are you using Kubuntu 19.04, our current Stable release? Or are you already running our daily development builds?

We currently have Plasma 5.15.90 (Plasma 5.16 Beta) available in our Beta PPA for Kubuntu 19.04, and in our 19.10 development release daily live ISO images.

For 19.04 Disco Dingo, add the PPA and then upgrade

sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt update && sudo apt full-upgrade -y

Then reboot. If you cannot reboot from the application launcher,

systemctl reboot

from the terminal.

For already installed 19.10 Eoan Ermine development release systems, simply upgrade your system.

Update directly from Discover, or use the command line:

sudo apt update && sudo apt full-upgrade -y

And reboot. If you cannot reboot from the application launcher,

systemctl reboot

from the terminal.

Otherwise, to test or install the live image grab an ISO build from the daily live ISO images link.

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

  • If you believe you might have found a packaging bug, you can post testing feedback to the Kubuntu team; a launchpad.net account is required.
  • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

Please review the changelog.

[Test Case]

* General tests:
– Does plasma desktop start as normal with no apparent regressions over 5.15.5?
– General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend, etc.

* Specific tests:
– Check the changelog:
– Identify items with front/user facing changes capable of specific testing. e.g. “clock combobox instead of tri-state checkbox for 12/24 hour display.”
– Test the ‘fixed’ functionality.

Testing involves some technical setup, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

We need your help to get this important beta release in shape for Kubuntu 19.10 as well as added to our backports.

Thanks! Please stop by the Kubuntu-devel IRC channel or Telegram group if you need clarification of any of the steps to follow.

May 17, 2019

Thanks to Nick Richards we've been able to convince Flathub to temporarily accept our old appdata files as still valid. It's a stopgap workaround, but at least it gives us some breathing time. The updates are coming in as we speak.

I was accepted to Google Summer of Code. I will work with Krita implementing an Animated Vector Brush. Read more...

I tried Latte Dock for a week or so, in order to see if this great piece of software could improve my desktop experience. Here is what I think about it.

A week with Latte Dock

Latte Dock is a dock that provides multiple visual effects in order to improve the experience with icons and applications. I’ve already written about it, mostly in a negative way, not because of the software or its quality, but because I don’t see much excitement in yet another OS X-style dock bar.
However, in order to give a more accurate review of the experience with Latte, I decided to force myself to use it as the only dock on my main computer.

I don’t know much about the history and goals of the project; I suspect it aims to be a more elegant dock than the default Plasma one, and it is of course inspired by the OS X dock, with its parabolic zoom and its removal of the application tray. It is worth noting that there are multiple layouts out there, each one adding one or more features to customize the appearance of the dock, for example adding a top bar or changing the bar layout and size. I used the out-of-the-box layout in order to get a more unbiased impression.

My Plasma Dock

My Plasma dock was pretty simple and composed of (from left to right):

  • the plasma menu;
  • the switch desktop applet;
  • the application tray;
  • a deck of my main applications (four);
  • the plasma dashboard icon;
  • the notification area;
  • the (digital) clock;
  • the logout applet.

I tend to keep my panel always laid out the same on all my computers, so that my eyes...

May 16, 2019



KDE Plasma 5.16

Thursday, 16 May 2019.

Today KDE launches the beta release of Plasma 5.16.

In this release, many aspects of Plasma have been polished and rewritten to provide greater consistency and bring new features. There is a completely rewritten notification system supporting Do Not Disturb mode, more intelligent history with grouping, critical notifications in fullscreen apps, improved notifications for file transfer jobs, a much more usable System Settings page to configure everything, and many other things. The System and Widget Settings have been refined by porting code to newer Kirigami and Qt technologies and polishing the user interface. And of course the VDG and Plasma team effort towards the Usability & Productivity goal continues, gathering feedback on all the papercuts in our software that make your life less smooth and fixing them to ensure an intuitive and consistent workflow for your daily use.

For the first time, the default wallpaper of Plasma 5.16 will be decided by a contest in which everyone can participate and submit art. The winner will receive a Slimbook One v2 computer, an eco-friendly, compact machine measuring only 12.4 x 12.8 x 3.7 cm. It comes with an i5 processor, 8 GB of RAM, and is capable of outputting video in glorious 4K. Naturally, your One will come decked out with the upcoming KDE Plasma 5.16 desktop, your spectacular wallpaper, and a bunch of other great software made by KDE. You can find more information and submitted work on the competition wiki page, and you can submit your own wallpaper in the subforum.

Desktop Management



New Notifications



Theme Engine Fixes for Clock Hands!



Panel Editing Offers Alternatives



Login Screen Theme Improved

  • Completely rewritten notification system supporting Do Not Disturb mode, more intelligent history with grouping, critical notifications in fullscreen apps, improved notifications for file transfer jobs, a much more usable System Settings page to configure everything, and more!
  • Plasma themes are now correctly applied to panels when selecting a new theme.
  • More options for Plasma themes: offset of analog clock hands and toggling blur behind.
  • All widget configuration settings have been modernized and now feature an improved UI. The Color Picker widget has also improved, now allowing you to drag colors from the plasmoid to text editors, the palettes of photo editors, etc.
  • The look and feel of the lock, login, and logout screens has been improved with new icons, labels, hover behavior, login button layout, and more.
  • When an app is recording audio, a microphone icon will now appear in the System Tray which allows for changing and muting the volume using mouse middle click and wheel. The Show Desktop icon is now also present in the panel by default.
  • The Wallpaper Slideshow settings window now displays the images in the selected folders, and allows selecting and deselecting them.
  • The Task Manager features better organized context menus and can now be configured to move a window from a different virtual desktop to the current one on middle click.
  • The default Breeze window and menu shadow colors are back to pure black, which improves the visibility of many things, especially when using a dark color scheme.
  • The "Show Alternatives..." button is now visible in panel edit mode; use it to quickly change widgets to similar alternatives.
  • Plasma Vaults can now be locked and unlocked directly from Dolphin.


Settings



Color Scheme



Application Style and Appearance Settings

  • There has been a general polish of all pages; the entire Appearance section has been refined, the Look and Feel page has moved to the top level, and improved icons have been added on many pages.
  • The Color Scheme and Window Decorations pages have been redesigned with a more consistent grid view. The Color Scheme page now supports filtering by light and dark themes, drag and drop to install themes, undo deletion and double click to apply.
  • The theme preview of the Login Screen page has been overhauled.
  • The Desktop Session page now features a "Reboot to UEFI Setup" option.
  • There is now full support for configuring touchpads using the Libinput driver on X11.


Window Management



Window Management

  • Initial support for using Wayland with proprietary Nvidia drivers has been added. When using Qt 5.13 with this driver, graphics are also no longer distorted after waking the computer from sleep.
  • Wayland now features drag and drop between XWayland and Wayland native windows.
  • Also on Wayland, the System Settings Libinput touchpad page now allows you to configure the click method, switching between "areas" or "clickfinger".
  • KWin's blur effect now looks more natural and correct to the human eye, as it no longer unnecessarily darkens the area between blurred colors.
  • Two new default shortcuts have been added: Meta+L can now be used by default to lock the screen and Meta+D can be used to show and hide the desktop.
  • GTK windows now apply the correct active and inactive colour schemes.


Plasma Network Manager



Plasma Network Manager with Wireguard

  • The Networks widget is now faster to refresh Wi-Fi networks and more reliable at doing so. It also has a button to display a search field to help you find a particular network from among the available choices. Right-clicking on any network will expose a "Configure…" action.
  • WireGuard is now compatible with NetworkManager 1.16.
  • One Time Password (OTP) support in Openconnect VPN plugin has been added.


Discover



Updates in Discover

  • In Discover's Update page, apps and packages now have distinct "downloading" and "installing" sections. When an item has finished installing, it disappears from the view.
  • The task completion indicator now looks better, using a real progress bar. Discover also displays a busy indicator when checking for updates.
  • Improved support and reliability for AppImages and other apps that come from store.kde.org.
  • Discover now allows you to force quit while installation or update operations are in progress.
  • The sources menu now shows the version number for each different source for that app.

Read the full announcement

It’s been a great pleasure to be chosen to work with KDE during GSoC this year. I’ll be working on KIOFuse, and hopefully by the end of the coding period it will be well integrated with KIO itself. Development will mainly be coordinated on the #kde-fm channel (IRC nick: feverfew), with fortnightly updates on my blog, so feel free to pop by! Here’s a small snippet of my proposal to give everyone an idea of what I’ll be working on:

KIOSlaves are a powerful feature within the KIO framework, allowing KIO-aware applications such as Dolphin to interact with services outside the local filesystem over URLs such as fish:// and gdrive:/. However, KIO-unaware applications are unable to interact seamlessly with KIO Slaves. For example, editing a file in gdrive:/ in LibreOffice will not save changes to your Google Drive. One potential solution is to make use of FUSE, an interface provided by the Linux kernel that allows userspace processes to provide a filesystem which can be mounted and accessed by regular applications. KIOFuse is a project by fvogt that makes it possible to mount KIO filesystems in the local system, thereby exposing them to POSIX-compliant applications such as Firefox and LibreOffice.

This project intends to polish KIOFuse such that it is ready to be a KDE project. In particular, I’ll be focusing on the following four broad goals:

  • Improving compatibility with KDE and non-KDE applications by extending and improving supported filesystem operations.
  • Improving KIO Slave support.
  • Performance and usability improvements.
  • Adding a KDE Daemon module to allow the management of KIOFuse mounts and the translation of KIO URLs to their local path equivalents.

We’re still on track to release Krita 4.2.0 this month! Compared to the alpha release, we have fixed over thirty issues. This release also has a fresh splash screen by Tyson Tan and restores Python support to the Linux AppImage. The Linux AppImage does not have support for sound, and the macOS build does not have support for G’Mic.

Warning: Linux users should be careful with distribution packages. We have a host of patches for Qt queued up, some of which are important for distributions to carry until the patches are merged and released in a new version of Qt.

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

OSX

Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux AppImage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.


Older blog entries


Planet KDE is made from the blogs of KDE's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet KDE with RSS, FOAF or OPML.