May 25, 2019

Kube is still alive! I got distracted for a while, both professionally and privately, and writing blog posts is unfortunately always the first thing that ends up on the chopping block. Anyway, lots of progress has been made (103 commits in Sink, ~130 commits in the Kube codebase):

  • For Sink we had a variety of bugfixes and performance improvements, especially in the more recent CalDAV/CardDAV backends.
  • For CalDAV/CardDAV we now do basic autodiscovery using the .well-known URLs. The DNS part of the spec has not been implemented so far.
    This means that for a properly set-up server you only have to specify the base URL, and everything else will be discovered automatically from there.
  • On the more user-facing front we have:
    • Sent emails are now collapsed by default.
    • Plain text is now the preferred way of viewing emails. You can still view the HTML variant, if available, by clicking a button.
    • The addressbook is no longer read-only, so you can now create contacts as well.
    • A visually reworked composer that avoids becoming too wide and removes a lot of the visual clutter.
    • The calendar can now render recurring events.
    • It is now possible to create events as well.
    • Work on a tasks view has started.

Releases

You may have noticed that it’s been a while since the last release. This is not only because releases are additional work, but also because we already have a continuous delivery method with the nightly flatpak.
Releases clearly do provide value, both as a way of communicating which version should be packaged and whether it will be maintained. With the current manpower we cannot maintain separate releases though, which makes them significantly less interesting.

With that said, the 0.8 release with the calendar is now long overdue and should be coming out soonish.

Experimental flatpak

Just to put it out there: in addition to the usual “master” branch of the flatpak, there is also an “experimental” branch containing, surprise, various experimental bits and pieces.

This currently entails:

  • A plugin that stores the account’s password encrypted with the account’s GPG key (blindly assuming there is one with a matching email address).
  • A search view.
  • The upcoming calendar view (which we’ll move over in the next release).
  • The aforementioned tasks view (which will take a little longer to move to master).
  • A “File as expense” plugin (a showcase of how we could do extensions in the mail view).
  • The Inbox Crusher view (an experiment with a view for going through your inbox one message at a time).

It typically serves as a staging ground for new components, and is the version that I’m running day-to-day. Flatpak makes it easy to switch back and forth between the branches on top of the same dataset, so you can try it and switch back if you don’t like what you see.

To give it a shot use the following command to switch the current flatpak branch:

flatpak --user make-current com.kubeproject.kube experimental

Kube Commits, Sink Commits

Previous updates

More information on the Kolab Now blog!

“Kube is a modern communication and collaboration client built with QtQuick on top of a high performance, low resource usage core. It provides online and offline access to all your mail, contacts, calendars, notes, todos and more. With a strong focus on usability, the team works with designers and UX experts from the ground up, to build a product that is not only visually appealing but also a joy to use.” For more info, head over to: kube-project.com

Although kdesrc-build supports building Qt5 just like the KDE Frameworks, I prefer to use the prebuilt Qt binaries from qt.io for testing against alpha and beta releases. This saves me a lot of time on my machine, which is not powerful enough to compile the full Qt in a reasonable time.

This guide assumes that you have already installed Qt from qt.io/download-qt-installer.

To reproduce my setup, an additional step is required compared to a kdesrc-build setup that uses the system Qt; nevertheless, you start as usual by cloning kdesrc-build itself:

git clone https://invent.kde.org/kde/kdesrc-build

Since kdesrc-build should be available anywhere on the system, not just in its own directory, add it to the PATH variable by appending export PATH=~/kde/src/kdesrc-build:$PATH to your .profile file. Please remember to adapt the path to wherever you cloned kdesrc-build.

echo "export PATH=~/kde/src/kdesrc-build:$PATH" >> ~/.profile
source ~/.profile

Next, the initial setup needs to be performed:

kdesrc-build --initial-setup

This step creates all the required configuration files, which now need slight adaptations in order to work with the Qt we downloaded from qt.io in the beginning.

Edit ~/.kdesrc-buildrc and replace the qtdir path with the path you installed Qt to. The line should then look similar to this:

    qtdir  ~/.local/Qt/5.13.0/gcc_64 # Where to find Qt5

Now, as a final step, we need to prevent kdesrc-build from trying to build its own Qt, which can be done by commenting out one include line: include /home/jbb/kde/kdesrc-build/qt5-build-include, so that it reads #include /home/jbb/kde/kdesrc-build/qt5-build-include.

This should be it! Have fun compiling up-to-date KDE software using an up-to-date Qt without compiling Qt for hours :) For example, to build Kirigami and all of its dependencies:

kdesrc-build kirigami --include-dependencies

To activate your newly created environment, you can use

source ~/.config/kde-env-master.sh

May 24, 2019

There is something of a fashion for making our desktop look like that of the apple company. On this blog we have covered several alternatives: MacOS Sierra CT, MacOs icons or Mojave CT. However, some creators have wanted to put their own twist on this type of icons and have designed McMojave-circle, an icon set that offers a look similar to the macOS work environments, but in a rounded style.

McMojave-circle, rounded MacOS-style icons for Plasma

I have said it many times: the VDG (Visual Design Group) of Plasma 5 did an extraordinary job with the Breeze theme and gave our favorite desktop a spectacular, consistent, modern and colorful look. This environment stands up to any other, free or proprietary.

But, as usual, there are many people who enjoy change, variety or the look of another environment, so they have fun switching icon packs, and for this Plasma offers a world of possibilities.

Born from the hand and mind of Vinceliuce, I present McMojave-circle, another complete McOSC-style icon theme for Plasma 5, beautiful but rounded, which gives it an extra touch of originality.

And as I always say, if you like this icon set you can “pay” for it in many ways on its KDE Store page, which I am sure the developer will appreciate: rate it positively, leave a comment on the page or make a donation. You can also help Free Software development simply by saying thanks, which helps much more than you might imagine; remember the I love Free Software Day 2017 campaign of the Free Software Foundation, which reminded us of this simple way of contributing to the great Free Software project and to which we dedicated an article on this blog.

More information: KDE Store


The bug report count of KTextEditor (implementing the editing part used in Kate/KWrite/KDevelop/Kile/…) and Kate itself has again climbed above 200.

If you have time and need an itch to scratch, any help to tackle the currently open bugs would be highly appreciated.

The full list can be found with this bugs.kde.org query.

Easy things anybody with a bit of time could do:

  • Check whether the bug is still present in current master builds; if not, close it.
  • Check whether it is a duplicate of a similar, still-open bug; if so, mark it as a duplicate.

Besides that, patches for any of the existing issues are very welcome.

I think the best guide on how to set up a development environment is on our KDE Community Wiki. I myself use a kdesrc-build environment as described there, too.

Patches can be submitted for review via our KDE Phabricator.

If it is just a small change and you don’t want to spend time on Phabricator, attaching a git diff against current master to the bug is OK, too. Ideally, mark the bug with a [PATCH] prefix in the subject.

The team working on the code is small, so please be a bit patient while waiting for reactions. I hope we have improved our response time in the last months, but we are still lacking in that respect.

May 23, 2019

Would you like your C++ code to compile twice as fast (or more)?

Yeah, so would I. Who wouldn't. C++ is notorious for taking its sweet time to get compiled. I never really cared about PCHs when I worked on KDE, I think I might have tried them once for something and it didn't seem to do a thing. In 2012, while working on LibreOffice, I noticed its build system used to have PCH support, but it had been nuked, with the usual poor OOo/LO style of a commit message stating the obvious (what) without bothering to state the useful (why). For whatever reason, that caught my attention, reportedly PCHs saved a lot of build time with MSVC, so I tried it and it did. And me having brought the PCH support back from the graveyard means that e.g. the Calc module does not take 5:30m to build on a (very) powerful machine, but only 1:45m. That's only one third of the time.

In line with my previous experience, on Linux that did nothing. I made the build system support also PCH with GCC and Clang, because it was there and it was simple to support it too, but there was no point. I don't think anybody has ever used that for real.

Then, about a year ago, I happened to be working on a relatively small C++ project that used some kind of an obscure build system called Premake I had never heard of before. While fixing something in it I noticed it also had PCH support, so guess what, I of course enabled it for the project. It again made the project build faster on Windows. And, on Linux, it did too. Color me surprised.

The idea must have stuck with me, because a couple weeks back I got the idea to look at LO's PCH support again and see if it can be made to improve things. See, the point is, PCHs for that small project were rather small, it just included all the std stuff like <vector> and <string>, which seemed like it shouldn't make much of a difference, but it did. Those standard C++ headers aren't exactly small or simple. So I thought that maybe if LO on Linux used PCHs just for those, it would also make a difference. And it does. It's not breath-taking, but passing --enable-pch=system to configure reduces Calc module build time from 17:15m to 15:15m (that's a less powerful machine than the Windows one). Adding LO base headers containing stuff like OUString makes it go down to 13:44m and adding more LO headers except for Calc's own leads to 12:50m. And, adding even Calc's headers, results in 15:15m again. WTH?

It turns out there's some limit after which PCHs stop making things faster and either don't change anything, or even make things worse. Trying with the Math module, --enable-pch=system and then --enable-pch=base again improve things in a similar fashion, and then --enable-pch=normal or --enable-pch=full just doesn't do a thing. Where is that 2/3 time reduction that --enable-pch=full achieves with MSVC?

Clang has recently received a new option, -ftime-trace, which shows in a really nice and simple way where the compiler spends the time (take that, -ftime-report). And since things related to performance simply do catch my attention, I ended up building the latest unstable Clang just to see what it does. And it does:

So, this is bcaslots.cxx, a smaller .cxx file in Calc. The first graph is without PCH, the second one is with --enable-pch=base, the third one is --enable-pch=full. This exactly confirms what I can see. Making the PCH bigger should result in something like the 4th graph, as it does with MSVC, but it results in things actually taking longer. And it can be seen why. The compiler does spend less and less time parsing the code, so the PCH works, but it spends more time in this 'PerformPendingInstantiations', which is handling templates. So, yeah, in case you've been living under a rock, templates make compiling C++ slow. Every C++ developer feeling really proud about themselves after having written a complicated template, raise your hand (... that includes me too, so let's put them back down, typing with one hand is not much fun). The bigger the PCH the more headers each C++ file ends up including, so it ends up having to cope with more templates. With the largest PCH, the compiler needs to spend only one second parsing code, but then it spends 3 seconds sorting out all kinds of templates, most of which the small source file does not need.

This one is column2.cxx, a larger .cxx file in Calc. Here, the biggest PCH mode leads to some improvement, because this file includes pretty much everything under the sun and then some more, so less parsing makes some savings, while the compiler has to deal with a load of templates again, PCH or not. And again, one second for parsing code, 4 seconds for templates. And, if you look carefully, 4 seconds more to generate code, most of it for those templates. And after the compiler spends all this time on templates in all the source files, it gets all passed to the linker, which will shrug and then throw most of it away (and that will too take a load of time, if you still happen to use the BFD linker instead of gold/lld with -gsplit-dwarf -Wl,--gdb-index). What a marvel.

Now, in case there seems to be something fishy about the graphs, the last graph indeed isn't from MSVC (after all, its reporting options are as "useful" as -ftime-report). It is from Clang. I still know how to do performance magic ...



At this rate I am going to have to rename the blog, since events are starting to monopolize its posts. Today I want to promote the Docker course at Linux Center in València, which will take place next Saturday, May 25. This course will be taught by Raúl Rodrigo, an official developer of the LliureX distribution since 2009.

Docker course at Linux Center in València

The people at Linux Center have set out to offer us a series of courses, talks and workshops so that we can all learn the ins and outs of some free software projects. They have already published a fabulous programme that is striking in both quantity and quality.

We have already talked about the workshop «Bots para Telegram, como automatizar hasta el infinito y más allá» and the «Ciclo de Raspberry Pi», both run by Lorenzo Carbonell (aka @atareao), the Linux Essentials courses by Javier Sepúlveda and the Inkscape courses by Adriana Fuentes.

So I am pleased to share with you that on May 25 a basic/intermediate-level Docker course will be given by Raúl Rodrigo, one of the official developers of the LliureX distribution since 2009, who wants to share his knowledge in other areas of computing.

I'm interested, give me more information

The programme is as follows:

* Introduction to Linux:
  – How a Linux system is organized
  – What is the kernel?

* Comparison with lxc, virtualization, coreos, rkt:
  – lxc: system virtualization
  – coreos: init daemon
  – docker: application virtualization

* Advantages of Docker:
  – Shared rather than pre-allocated resources (lighter weight)
  – No full operating system needed
  – Reuse of images
  – Startup time
  – Linux applications only (neither Windows nor Mac)

* Docker CE and Docker EE

* Images and containers: definition and differences between the two

The basic details of the course are as follows:

  • Date: May 25
  • Time: 11:00
  • Venue: Linux Center
  • Free or paid?: Free
  • Will it be streamed?: No


More information: Linux Center

A few months ago, we received a phone call from a bioinformatics group at a European university. The problem they were having appeared very simple. They wanted to know how to use mmap() to be able to load a large data set into RAM at once. OK, I thought, no problem, I can handle that one. It turns out this has grown into a complex and interesting exercise in profiling and threading.

The background is that they are performing Markov-Chain Monte Carlo simulations by sampling at random from data sets containing SNP (pronounced “snips”) genetic markers for a selection of people. It boils down to a large 2D matrix of floats where each column corresponds to an SNP and each row to a person. They provided some small and medium sized data sets for me to test with, but their full data set consists of 500,000 people with 38 million SNP genetic markers!

The analysis involves selecting a column (SNP) at random from the data set, performing some computations on the data for all of the individuals, and collecting some summary statistics. Do that for all of the columns in the data set, and then repeat for a large number of iterations. This allows you to approximate the underlying true distribution from the discrete data that has been collected.

That’s the 10,000 ft view of the problem, so what was actually involved? Well we undertook a bit of an adventure and learned some interesting stuff along the way, hence this blog series.

The stages we went through were:

  1. Preprocessing
  2. Loading the Data
  3. Fine-grained Threading
  4. Preprocessing Reprise
  5. Coarse Threading

In this blog, I’ll detail stages 1 and 2. The rest of the process will be revealed as the blog series unfolds, and I’ll include a final summary at the end.

1. Preprocessing

The first thing we noticed when looking at the code they already had is that quite a lot of work is being done when reading in the data for each column. They compute some summary statistics on the column, then scale and bias all the data points in that column such that the mean is zero. Bearing in mind that each column will be processed many times (typically 10k – 1 million), it is wasteful to repeat this every time the column is used.

So, reusing some general advice from 3D graphics, we moved this work further up the pipeline to a preprocessing step. The SNP data is actually stored in a compressed form which takes the form of quantizing 4 SNP values into a few bytes which we then decompress when loading. So the preprocessing step does the decompression of SNP data, calculates the summary statistics, adjusts the data and then writes the floats out to disk in the form of a ppbed file (preprocessed bed where bed is a standard format used for this kind of data).

The upside is that we avoid all of this work on every iteration of the Monte Carlo simulation at runtime. The downside is that 1 float per SNP per person adds up to a hell of a lot of data for the larger data sets! In fact, for the full data set it’s just shy of 69 TB of floating point data! But to get things going, we were just worrying about smaller subsets. We will return to this later.

2. Loading the data

Even on moderately sized data sets, loading the entirety of the data set into physical RAM at once is a no-go as it will soon exhaust even the beefiest of machines. They have some 40 core, many-many-GB-of-RAM machine which was still being exhausted. This is where the original enquiry was aimed – how to use mmap(). Turns out it’s pretty easy as you’d expect. It’s just a case of setting the correct flags so that the kernel doesn’t actually take a copy of the data in the file. Namely, PROT_READ and MAP_SHARED:

void Data::mapPreprocessBedFile(const string &preprocessedBedFile)
{
    // Calculate the expected file sizes - cast to size_t so that we don't overflow the unsigned int's
    // that we would otherwise get as intermediate variables!
    const size_t ppBedSize = size_t(numInds) * size_t(numIncdSnps) * sizeof(float);
 
    // Open and mmap the preprocessed bed file
    ppBedFd = open(preprocessedBedFile.c_str(), O_RDONLY);
    if (ppBedFd == -1)
        throw("Error: Failed to open preprocessed bed file [" + preprocessedBedFile + "]");
 
    ppBedMap = reinterpret_cast<float *>(mmap(nullptr, ppBedSize, PROT_READ, MAP_SHARED, ppBedFd, 0));
    if (ppBedMap == MAP_FAILED)
        throw("Error: Failed to mmap preprocessed bed file");
 
    ...
}

When dealing with such large amounts of data, be careful of overflows in temporaries! We had a bug where ppBedSize was overflowing and later causing a segfault.

So, at this point we have a float *ppBedMap pointing at the start of the huge 2D matrix of floats. That’s all well and good, but not very convenient to work with. The code base already made use of Eigen for vector and matrix operations, so it would be nice if we could interface with the underlying data using that.

Turns out we can (otherwise I wouldn’t have mentioned it). Eigen provides VectorXf and MatrixXf types for vectors and matrices but these own the underlying data. Luckily Eigen also provides a wrapper around these in the form of Map. Given our pointer to the raw float data which is mmap()‘d, we can use the placement new operator to wrap it up for Eigen like so:

class Data
{
public:
    Data();
 
    // mmap related data
    int ppBedFd;
    float *ppBedMap;
    Map<MatrixXf> mappedZ;
};
 
 
void Data::mapPreprocessBedFile(const string &preprocessedBedFile)
{
    ...
 
    ppBedMap = reinterpret_cast<float *>(mmap(nullptr, ppBedSize, PROT_READ, MAP_SHARED, ppBedFd, 0));
    if (ppBedMap == MAP_FAILED)
        throw("Error: Failed to mmap preprocessed bed file");
 
    new (&mappedZ) Map<MatrixXf>(ppBedMap, numRows, numCols);
}

At this point we can now do operations on the mappedZ matrix and they will operate on the huge data file which will be paged in by the kernel as needed. We never need to write back to this data so we didn’t need the PROT_WRITE flag for mmap.

Yay! Original problem solved and we’ve saved a bunch of work at runtime by preprocessing. But there’s a catch! It’s still slow. See the next blog in the series for how we solved this.

The post Little Trouble in Big Data – Part 1 appeared first on KDAB.

May 22, 2019

Elisa is a music player developed by the KDE community that strives to be simple and nice to use. We also recognize that we need a flexible product to account for the different workflows and use-cases of our users.

We focus on a very good integration with the Plasma desktop of the KDE community without compromising the support for other platforms (other Linux desktop environments, Windows and Android).

We are creating a reliable product that is a joy to use and respects our users privacy. As such, we will prefer to support online services where users are in control of their data.

I am happy to announce the release of version 0.4.0 of the Elisa music player.

The new features are explained in the following posts New features in Elisa, New Features in Elisa: part 2 and Elisa 0.4 Beta Release and More New Features.

There have been a couple more changes not yet covered.

Improved Grid View Elements

Nate Graham has reworked the grid elements (especially visible with the albums view).

I must confess that I was a bit uneasy with this change (it was a part mostly unchanged since the early versions). I am now very happy about this change.

Before (screenshot)
After (screenshot)

Getting Involved

I would like to thank everyone who contributed to the development of Elisa, including code contributions, testing, and bug reporting and triaging. Without all of you, I would have stopped working on this project.

New features and fixes are already being worked on. If you enjoy using Elisa, please consider becoming a contributor yourself. We are happy to get any kind of contributions!

We have some tasks that would be perfect junior jobs. They are a perfect way to start contributing to Elisa. There are more that are not listed here but are reported on bugs.kde.org.

The Flathub Elisa package provides an easy way to test this new release.

The Elisa source code tarball is available here. There is no Windows installer; there is currently a blocking problem with it (no icons) that is being investigated. I hope to be able to provide installers for later bugfix versions.

The phone/tablet port project could easily use some help to build an optimized interface on top of Kirigami. It remains to be seen how to handle this in relation to the current desktop UI.


I am pleased to announce that the openSUSE Community has just released openSUSE Leap 15.1, the new major version of its GNU/Linux-based operating system. Great news in these days in which we have had to say goodbye to a distribution much loved by its users: Antergos.

openSUSE Leap 15.1 released

The first distribution that occupied my computers continues its evolution, and today, May 22, a new version has arrived. This is openSUSE Leap 15.1, the Community's new release, which stands out for its stability and robustness.

In the words of Haris Sehic, a member of the openSUSE community, “Continuity and stability is what we are offering users with Leap 15.1”, and as Victorhck describes it, “openSUSE Leap is the regular-release version (normally once a year) of this community GNU/Linux distribution backed by SUSE, a company that has been developing and maintaining Linux-based enterprise solutions and open source platforms for many years.”

What's new in openSUSE Leap 15.1

openSUSE Leap 15.1 brings quite a few new features, among which I would highlight:

  • Improvements to several YaST features and to the installer itself, for example:
    • A new graphical interface for managing Firewalld.
    • An improved partitioner.
    • An improved experience on 4K (HiDPI) displays.
  • KDE Plasma 5.12 and GNOME 3.26 desktops, both long-term support (LTS) versions.
  • Exclusive versions for specific hardware (Linode, Slimbook and Tuxedo).
  • A great variety of Linux desktops, including the traditional KDE and GNOME as well as the efficient Xfce.
  • Improved HD-audio and improved USB-audio drivers.
  • A minor update to the Mesa 3D Graphics Library and to open source tools such as the 3D creation software Blender.
  • Leap 15.1 now uses NetworkManager by default for both laptops and desktops.
  • Software updates for MultiMedia Card (MMC) and embedded MMC (eMMC).

More information: openSUSE | Victorhck In The Free World

KDAB has released a new version of KDSoap. This is version 1.8.0 and comes more than one year since the last release (1.7.0).

KDSoap is a tool for creating client applications for web services without the need for any further component such as a dedicated web server.

KDSoap lets you interact with applications which have APIs that can be exported as SOAP objects. The web service then provides a machine-accessible interface to its functionality via HTTP. Find out more...

Version 1.8.0 has a large number of improvements and fixes:

General

  • Fixed internally-created faults lacking an XML element name (so e.g. toXml() would abort)
  • KDSoapMessage::messageAddressingProperties() is now correctly filled in when receiving a message with WS-Addressing in the header

Client-side

  • Added support for timing out requests (default 30 minutes, configurable with KDSoapClientInterface::setTimeout())
  • Added support for soap 1.2 faults in faultAsString()
  • Improved detection of soap 1.2 faults in HTTP response
  • Stricter namespace check for Fault elements being received
  • Report client-generated faults as SOAP 1.2 if selected
  • Fixed error code when authentication failed
  • Autodeletion of jobs is now configurable (github pull #125)
  • Added error details in faultAsString() – and the generated lastError() – coming from the SOAP 1.2 detail element.
  • Fixed memory leak in KDSoapClientInterface::callNoReply
  • Added support for WS-UsernameToken, see KDSoapAuthentication
  • Extended KDSOAP_DEBUG functionality (e.g. “KDSOAP_DEBUG=http,reformat” will now print http-headers and pretty-print the xml)
  • Added support for specifying requestHeaders as part of KDSoapJob via KDSoapJob::setRequestHeaders()
  • Renamed the missing KDSoapJob::returnHeaders() to KDSoapJob::replyHeaders(), and provide an implementation
  • Made KDSoapClientInterface::soapVersion() const
  • Added lastFaultCode() for error handling after sync calls. Same as lastErrorCode() but it returns a QString rather than an int.
  • Added conversion operator from KDDateTime to QVariant to avoid implicit conversion to the base QDateTime (github issue #123).

Server-side

  • New method KDSoapServerObjectInterface::additionalHttpResponseHeaderItems to let server objects return additional http headers. This can be used to implement support for CORS, using KDSoapServerCustomVerbRequestInterface to implement OPTIONS response, with “Access-Control-Allow-Origin” in the headers of the response (github issue #117).
  • Stopped generation of two job classes with the same name, when two bindings have the same operation name. Prefixed one of them with the binding name (github issue #139 part 1)
  • Prepended this-> in method class to avoid compilation error when the variable and the method have the same name (github issue #139 part 2)

WSDL parser / code generator changes, applying to both client and server side

  • Source incompatible change: all deserialize() functions now require a KDSoapValue instead of a QVariant. If you use a deserialize(QVariant) function, you need to port your code to use KDSoapValue::setValue(QVariant) before deserialize()
  • Source incompatible change: all serialize() functions now return a KDSoapValue instead of a QVariant. If you use a QVariant serialize() function, you need to port your code to use QVariant KDSoapValue::value() after serialize()
  • Source incompatible change: xs:QName is now represented by KDQName instead of QString, which allows the namespace to be extracted. The old behaviour is available via KDQName::qname().
  • Fixed double-handling of empty elements
  • Fixed fault elements being generated in the wrong namespace, must be SOAP-ENV:Fault (github issue #81).
  • Added import-path argument for setting the local path to get (otherwise downloaded) files from.
  • Added -help-on-missing option to kdwsdl2cpp to display extra help on missing types.
  • Added C++17 std::optional as possible return value for optional elements.
  • Added -both to create both header(.h) and implementation(.cpp) files in one run
  • Added -namespaceMapping @mapping.txt to import url=code mappings, affects C++ class name generation
  • Added functionality to prevent downloading the same WSDL/XSD file twice in one run
  • Added “hasValueFor{MemberName}()” accessor function, for optional elements
  • Generated services now include soapVersion() and endpoint() accessors to match the setSoapVersion(…) and setEndpoint(…) mutators
  • Added support for generating messages for WSDL files without services or bindings
  • Fixed erroneous QT_BEGIN_NAMESPACE around forward-declarations like Q17__DialogType.
  • KDSoapValue now stores the namespace declarations during parsing of a message and writes namespace declarations during sending of a message
  • Avoid serialize crash with required polymorphic types, if the required variable wasn’t actually provided
  • Fixed generated code for restriction to base class (it wouldn’t compile)
  • Prepended “undef daylight” and “undef timezone” to all generated files, to fix compilation errors in wsdl files that use those names, due to nasty Windows macros
  • Added generation for default attribute values.

Get KDSoap…

KDSoap on github…

The post KDSoap 1.8.0 released appeared first on KDAB.

May 21, 2019

I’m very excited to start off the Google Summer of Code blogging experience regarding the project I’m doing with my KDE mentors David Edmundson and Nate Graham. What we’ll be trying to achieve this summer is have SDDM be more in sync with the Plasma desktop. What does that mean? The essence of the problem…

May 20, 2019

If you occasionally do performance profiling as I do, you probably know Valgrind's Callgrind and the related UI KCachegrind. While Callgrind is a pretty powerful tool, running it takes quite a while (not exactly fun to do with something as big as e.g. LibreOffice).

Recently I finally gave Linux perf a try. Not quite sure why I didn't use it before; IIRC, when I tried it long ago, it was probably difficult to set up or something. Using perf record has very little overhead, but I wasn't exactly thrilled by perf report. I mean, it's a text UI, and it just gives a list of functions, so if I want to see anything close to a call graph, I have to manually expand one function, expand another function inside it, expand yet another function inside that, and so on. Not that it wouldn't work, but compared to just looking at what KCachegrind shows and seeing ...

When figuring out how to use perf, I watched a talk by Milian Wolff and noticed a mention of a Callgrind conversion script on one of the slides. Of course I had to try it. It was a bit slow, but hey, I could finally look at perf results without feeling like that's an effort. Well, and then I improved the part of the script that was slow, so I guess I've just put the effort elsewhere :).

And I thought this little script might be useful for others. After mailing Milian, it turned out he had just created the script as a proof of concept and wasn't interested in it anymore, having moved on to developing Hotspot as a UI for perf. Fair enough, but I think I still prefer KCachegrind; I'm used to it, and I don't have to switch UIs when switching between perf and callgrind. So, with his agreement, I've submitted the script to KCachegrind. If you find it useful, just download the script and do something like:

$ perf record -g ...
$ perf script -s perf2calltree.py > perf.out
$ kcachegrind perf.out



Plasma 5.16 beta was released last week and there’s now a further couple of weeks to test it to find and fix all the beasties. To help out download the Neon Testing image and install it in a virtual machine or on your raw hardware. You probably want to do a full-upgrade to make sure you have the latest builds. Then try out the new notifications system, or the new animated wallpaper settings or anything else mentioned in the release announcement. When you find a problem report it on bugs.kde.org and/or chat on the Plasma Matrix room. Thanks for your help!

I have been at many software events and have helped or have been part of the organization in a few of them. Based on that experience and the fact that I have participated in the last two editions, let me tell you that J On The Beach is a great event.

The main factors that lead me to such a conclusion are:

  • It is all about content. I have seen many events that, over time, lose focus on the quality of the content. It is a hard focus to keep, especially as you grow. @JOTB19 had great content: well-delivered talks and workshops, performed by bright people with something relevant to say to the audience.
    • I think the event has not reached its limit yet, especially when it comes to workshops.
    • Designing the content structure to target the right audience is as important as bringing speakers with great things to say. As any event matures, tough decisions will need to be taken in order to find its own space and identity among outstanding competitors.
      • When it comes to themes, will J On The Beach keep targeting several topics, or will it narrow them to one or two? Will they always be the same or will they rotate?
      • When it comes to size, will it grow or will it remain at the current numbers? Will the price increase or will it be kept in the current range?
      • When it comes to content, will the event focus more energy and time on the “hands-on” learning sessions, or will workshops be kept as relevant compared to the talks as they are today? Will the talks' length be reduced? Will we see lightning talks?
  • J On The Beach was well organised. A good organisation is not one that never runs into trouble, but one that handles problems smoothly so there is little or no perceived impact. Based on the little to no impact I perceived, this event has a diligent team behind it.
  • Support from local companies. As Málaga matures as a software hub, more and more companies arrive in the area expecting to grow in size, so the need to attract local talent grows in parallel.
    • Some of these foreign companies understand how important it is to show up in local events to be known by as many local developers as possible. J On The Beach has captured the attention of several of these companies.
    • The organizers have understood this reality and support them in using the event to openly recruit people. This symbiotic relation is a very productive one, from what I have witnessed.
    • It is a hard relation to sustain though, especially if the event does not grow in size, so the current relation will probably need additional common interests in the future to remain productive for both sides.
  • Global by default. Most events in Spain have traditionally been designed for Spaniards first, turning into more global events as they grow. J On The Beach is global by default, by design, since day 1. It is harder to succeed that way, but beyond the activation point it turns out to be easier to become sustainable. The organizers took the risk and have reached that point already, which provides the event a bright future in my opinion.
    • The fact that the event is able to attract developers from many countries, especially from Eastern European ones, makes J On The Beach very attractive, from a recruitment perspective, to foreign companies already located in Málaga. Málaga is a great place not just to work in English but also to live in English. There are well-established communities from many different countries in the metropolitan area, due to how strong the tourism industry is here. These factors, together with others like logistics, affordable living costs, a good public health care system, sunny weather, availability of international and multilingual schools, etc., reduce the adaptation effort when relocating, especially for developers' families. J On The Beach brings tasty fish to the pond.

Let me name a couple of points that can make the event even better:

  • It is very hard to find a venue that fits any event during its consolidation phase and evolves with it. This edition's venue represents a significant improvement over last year's. There is room for improvement though.
    • It would be ideal to find a place in Málaga itself, closer to where the companies are located and to places to hang out after the event, which at the same time keeps the good things the current venue/location provides, which are plenty.
    • Finding the right venue is tough. There are decision-making factors that participants do not usually see but are essential, like costs, how supportive the venue staff and owners are, accommodation availability in the surrounding area, availability on the selected dates, etc. It is one of the most difficult points to get right, in my experience.
  • Great events deserve great keynote speakers. They are hard to get but often reflect the difference between great and must-attend events.
    • Great keynote speakers do not necessarily mean popular ones. I already see celebrities at bigger and more expensive events. I would love to see old-time computer science cowboys in Málaga: those first-class engineers who did something relevant some time ago and have witnessed the evolution of our industry and of their own inventions. They can bring a perspective that very few can provide, extremely valuable in these fast-changing times. Those gems are harder to see at big/popular events and might be a good target for a smaller, high-quality event. I think it would be a great sign of success if such professionals came to talk at J On The Beach.

I am very glad there is such a great event close to where I live. J On The Beach is not just worthwhile for local developers but also for those from abroad. Every year I attend events in other countries with bigger names but less value than J On The Beach. It will definitely be on my 2020 agenda. Thanks to every person involved in making it possible.

Pictures taken from the J On The Beach website.

May 19, 2019

With VLC and libvlc missing from Craft, phonon-vlc cannot be built successfully on macOS. This caused KDE Connect builds in Craft to fail.

As a small step of my GSoC project, I managed to build KDE Connect by removing the phonon-vlc dependency. But that is not a good solution; I should try to fix the phonon-vlc build on macOS. So during the community bonding period, to get to know the community and some of its important tools better, I tried to fix phonon-vlc.

Fixing phonon-vlc

At first, I installed libVLC with MacPorts. All header files and libraries are installed into the system path, so theoretically building phonon-vlc should not be a problem. But an error occurred:

We can see that compilation succeeds; the error occurs at the end, during linking. The error message tells us there is no QtDBus library. To fix it, I made a small patch to add QtDBus manually in the CMakeLists file.

diff --git a/src/CMakeLists.txt b/src/CMakeLists.txt
index 47427b2..1cdb250 100644
--- a/src/CMakeLists.txt
+++ b/src/CMakeLists.txt
@@ -81,7 +81,7 @@ if(APPLE)
endif(APPLE)

automoc4_add_library(phonon_vlc MODULE ${phonon_vlc_SRCS})
-qt5_use_modules(phonon_vlc Core Widgets)
+qt5_use_modules(phonon_vlc Core Widgets DBus)

set_target_properties(phonon_vlc PROPERTIES
PREFIX ""

And it works well!

A small problem is that Hannah said she didn't get an error during linking. It may be related to the Qt version. If you have any idea, feel free to contact me.

My Qt version is 5.12.3.

Fixing VLC

To fix VLC, I tried to pack the VLC binary just like the one on Windows.

But unfortunately, in the .app package, the header files are not complete. Compared to the Windows version, the entire plugins folder is missing.

So I made a patch for all those files. But the patch is too huge (25000 lines!), so it is not a good idea to merge it into the master branch.

Thanks to Hannah, she has made a libs/vlc blueprint in the master branch, so in Craft, feel free to install it by running craft libs/vlc.

Troubleshooting

If, like me, you cannot build libs/vlc, you can choose the binary VLC with the header-file patch instead.

The header patch for the binary version is too big to add to the master branch, so I published it in my own repository:
https://github.com/Inokinoki/craft-blueprints-inoki

To use it, run craft --add-blueprint-repository https://github.com/inokinoki/craft-blueprints-inoki.git and the blueprint(s) will be added to your local blueprint directory.

Then, craft binary/vlc will fetch the vlc binary and install the header files and libraries into the Craft include and lib paths. Finally, you can build whatever you want that depends on libvlc.

Conclusion

Currently, KDE Connect uses QtMultimedia rather than phonon and phonon-vlc to play sounds. But this work could also be useful for other applications or libraries that depend on phonon, phonon-vlc or vlc. This small step may help build them successfully on macOS.

I hope this can help someone!

Hi, everyone!

I’m Weixuan XIAO, with the nickname Inoki; sometimes I use Inokinoki to avoid duplicate usernames.

I’m glad to have been selected for Google Summer of Code 2019 to work with the KDE Community on making KDE Connect work on macOS. And I’m willing to become a long-term contributor in the KDE Community.

As a Chinese student, I’m studying in France for my engineering degree. At the same time, I’m awaiting my bachelor’s degree from Shanghai University.

I major in Real-Time Systems and Embedded Engineering. With strong interests in operating systems and computer architecture, I like playing with small devices like Arduino and Raspberry Pi, and different systems like macOS and Linux (especially Manjaro with KDE; they are the best partners).

I’m crazy about Japanese culture, for example animation and games. Even my nickname is actually the pronunciation of my real name in Japanese. So if all of this is the choice of Steins;Gate, I’ll naturally accept it :)

I speak Chinese, French, English, and a little Japanese. But I realize that my English is awful, so if I make any mistake, please tell me. That would improve my English, and I would appreciate it.

I hope we can have a good summer in 2019. And write some good code :)

Continuing with the addition of line-terminating styles for the Straight Line annotation tool, I have added the ability to select the line start style as well. The required code changes were committed today.

Line annotation with circled start and closed arrow ending.

Currently it is supported only for PDF documents (and poppler version ≥ 0.72), but that will change soon — thanks to another change by Tobias Deiminger under review to extend the functionality for other documents supported by Okular.

libqaccessibilityclient 0.4.1 is out now
https://download.kde.org/stable/libqaccessibilityclient/
http://embra.edinburghlinux.co.uk/~jr/tmp/pkgdiff_reports/libqaccessibilityclient/0.4.0_to_0.4.1/changes_report.html
Signed by Jonathan Riddell
https://sks-keyservers.net/pks/lookup?op=vindex&search=0xEC94D18F7F05997E
  • version 0.4.1
  • Use only undeprecated KDEInstallDirs variables
  • KDECMakeSettings already cares for CMAKE_AUTOMOC & BUILD_TESTING
  • Fix use in cross compilation
  • Q_ENUMS -> Q_ENUM
  • more complete release instructions

Recently I have been researching into possibilities to make members of KoShape copy-on-write. At first glance, it seems enough to declare d-pointers as some subclass of QSharedDataPointer (see Qt’s implicit sharing) and then replace pointers with instances. However, there remain a number of problems to be solved, one of them being polymorphism.

polymorphism and value semantics

In the definition of KoShapePrivate class, the member fill is stored as a QSharedPointer:

QSharedPointer<KoShapeBackground> fill;

There are a number of subclasses of KoShapeBackground, including KoColorBackground, KoGradientBackground, to name just a few. We cannot store an instance of KoShapeBackground directly since we want polymorphism. But, well, making KoShapeBackground copy-on-write seems to have nothing to do with whether we store it as a pointer or instance. So let’s just put it here – I will come back to this question at the end of this post.

d-pointers and QSharedData

The KoShapeBackground hierarchy (similar to the KoShape one) uses derived d-pointers for storing private data. To make things easier, I will here use a small example to elaborate on its use.

derived d-pointer

class AbstractPrivate
{
public:
AbstractPrivate() : var(0) {}
virtual ~AbstractPrivate() = default;

int var;
};

class Abstract
{
public:
// it is not yet copy-constructable; we will come back to this later
// Abstract(const Abstract &other) = default;
~Abstract() = default;
protected:
explicit Abstract(AbstractPrivate &dd) : d_ptr(&dd) {}
public:
virtual void foo() const = 0;
virtual void modifyVar() = 0;
protected:
QScopedPointer<AbstractPrivate> d_ptr;
private:
Q_DECLARE_PRIVATE(Abstract)
};

class DerivedPrivate : public AbstractPrivate
{
public:
DerivedPrivate() : AbstractPrivate(), bar(0) {}
virtual ~DerivedPrivate() = default;

int bar;
};

class Derived : public Abstract
{
public:
Derived() : Abstract(*(new DerivedPrivate)) {}
// it is not yet copy-constructable
// Derived(const Derived &other) = default;
~Derived() = default;
protected:
explicit Derived(AbstractPrivate &dd) : Abstract(dd) {}
public:
void foo() const override { Q_D(const Derived); cout << "foo " << d->var << " " << d->bar << endl; }
void modifyVar() override { Q_D(Derived); d->var++; d->bar++; }
private:
Q_DECLARE_PRIVATE(Derived)
};

The main goal of making DerivedPrivate a subclass of AbstractPrivate is to avoid multiple d-pointers in the structure. Note that there are constructors taking a reference to the private data object. These make it possible for a Derived object to use the same d-pointer as its Abstract parent. The Q_D() macro is used to convert the d_ptr, which is a pointer to AbstractPrivate, to another pointer, named d, of one of its descendant types; here, it is a DerivedPrivate. It is used together with the Q_DECLARE_PRIVATE() macro in the class definition and has a rather complicated implementation in the Qt headers. But for simplicity, it does not hurt for now to understand it as the following:

#define Q_D(Class) Class##Private *const d = reinterpret_cast<Class##Private *>(d_ptr.data())

where Class##Private means simply to append string Private to (the macro argument) Class.

Now let’s test it by creating a pointer to Abstract and give it a Derived object:

int main()
{
QScopedPointer<Abstract> ins(new Derived());
ins->foo();
ins->modifyVar();
ins->foo();
}

Output:

foo 0 0
foo 1 1

Looks pretty viable – everything’s working well! – What if we use Qt’s implicit sharing? Just make AbstractPrivate a subclass of QSharedData and replace QScopedPointer with QSharedDataPointer.

making d-pointer QSharedDataPointer

In the last section, we commented out the copy constructors since QScopedPointer is not copy-constructible, but QSharedDataPointer is, so we add them back:

class AbstractPrivate : public QSharedData
{
public:
AbstractPrivate() : var(0) {}
virtual ~AbstractPrivate() = default;

int var;
};

class Abstract
{
public:
Abstract(const Abstract &other) = default;
~Abstract() = default;
protected:
explicit Abstract(AbstractPrivate &dd) : d_ptr(&dd) {}
public:
virtual void foo() const = 0;
virtual void modifyVar() = 0;
protected:
QSharedDataPointer<AbstractPrivate> d_ptr;
private:
Q_DECLARE_PRIVATE(Abstract)
};

class DerivedPrivate : public AbstractPrivate
{
public:
DerivedPrivate() : AbstractPrivate(), bar(0) {}
virtual ~DerivedPrivate() = default;

int bar;
};

class Derived : public Abstract
{
public:
Derived() : Abstract(*(new DerivedPrivate)) {}
Derived(const Derived &other) = default;
~Derived() = default;
protected:
explicit Derived(AbstractPrivate &dd) : Abstract(dd) {}
public:
void foo() const override { Q_D(const Derived); cout << "foo " << d->var << " " << d->bar << endl; }
void modifyVar() override { Q_D(Derived); d->var++; d->bar++; }
private:
Q_DECLARE_PRIVATE(Derived)
};

And testing the copy-on-write mechanism:

int main()
{
QScopedPointer<Derived> ins(new Derived());
QScopedPointer<Derived> ins2(new Derived(*ins));
ins->foo();
ins->modifyVar();
ins->foo();
ins2->foo();
}

But, eh, it’s a compile-time error.

error: reinterpret_cast from type 'const AbstractPrivate*' to type 'AbstractPrivate*' casts away qualifiers
    Q_DECLARE_PRIVATE(Abstract)

Q_D, revisited

So, where does the const removal come from? In qglobal.h, the code related to Q_D is as follows:

template <typename T> inline T *qGetPtrHelper(T *ptr) { return ptr; }
template <typename Ptr> inline auto qGetPtrHelper(const Ptr &ptr) -> decltype(ptr.operator->()) { return ptr.operator->(); }

// The body must be a statement:
#define Q_CAST_IGNORE_ALIGN(body) QT_WARNING_PUSH QT_WARNING_DISABLE_GCC("-Wcast-align") body QT_WARNING_POP
#define Q_DECLARE_PRIVATE(Class) \
inline Class##Private* d_func() \
{ Q_CAST_IGNORE_ALIGN(return reinterpret_cast<Class##Private *>(qGetPtrHelper(d_ptr));) } \
inline const Class##Private* d_func() const \
{ Q_CAST_IGNORE_ALIGN(return reinterpret_cast<const Class##Private *>(qGetPtrHelper(d_ptr));) } \
friend class Class##Private;

#define Q_D(Class) Class##Private * const d = d_func()

It turns out that Q_D will call d_func() which then calls an overload of qGetPtrHelper() that takes const Ptr &ptr. What does ptr.operator->() return? What is the difference between QScopedPointer and QSharedDataPointer here?

QScopedPointer's operator->() is a const method that returns a non-const pointer to T; however, QSharedDataPointer has two operator->()s, one being const T* operator->() const, the other T* operator->(), and they have quite different behaviours – the non-const variant calls detach() (where copy-on-write is implemented), but the other one does not.

qGetPtrHelper() here can only take d_ptr as a const QSharedDataPointer, not a non-const one; so, no matter which d_func() we are calling, we can only get a const AbstractPrivate *. That is just the problem here.

To resolve this problem, let’s replace the Q_D macros with the ones we define ourselves:

#define CONST_SHARED_D(Class) const Class##Private *const d = reinterpret_cast<const Class##Private *>(d_ptr.constData())
#define SHARED_D(Class) Class##Private *const d = reinterpret_cast<Class##Private *>(d_ptr.data())

We will then use SHARED_D(Class) in place of Q_D(Class) and CONST_SHARED_D(Class) in place of Q_D(const Class). Since the const and non-const variants really behave differently, this should help to differentiate the two uses. Also, delete Q_DECLARE_PRIVATE since we do not need it any more:

class AbstractPrivate : public QSharedData
{
public:
AbstractPrivate() : var(0) {}
virtual ~AbstractPrivate() = default;

int var;
};

class Abstract
{
public:
Abstract(const Abstract &other) = default;
~Abstract() = default;
protected:
explicit Abstract(AbstractPrivate &dd) : d_ptr(&dd) {}
public:
virtual void foo() const = 0;
virtual void modifyVar() = 0;
protected:
QSharedDataPointer<AbstractPrivate> d_ptr;
};

class DerivedPrivate : public AbstractPrivate
{
public:
DerivedPrivate() : AbstractPrivate(), bar(0) {}
virtual ~DerivedPrivate() = default;

int bar;
};

class Derived : public Abstract
{
public:
Derived() : Abstract(*(new DerivedPrivate)) {}
Derived(const Derived &other) = default;
~Derived() = default;
protected:
explicit Derived(AbstractPrivate &dd) : Abstract(dd) {}
public:
void foo() const override { CONST_SHARED_D(Derived); cout << "foo " << d->var << " " << d->bar << endl; }
void modifyVar() override { SHARED_D(Derived); d->var++; d->bar++; }
};

With the same main() code, what’s the result?

foo 0 0
foo 1 16606417
foo 0 0

… big whoops, what is that random thing there? Well, if we use dynamic_cast in place of reinterpret_cast, the program simply crashes after ins->modifyVar();, indicating that ins‘s d_ptr.data() is not at all a DerivedPrivate.

virtual clones

The detach() method of QSharedDataPointer will by default create an instance of AbstractPrivate regardless of what the instance really is. Fortunately, it is possible to change that behaviour by specializing the clone() method.

First, we need to make a virtual function in AbstractPrivate class:

virtual AbstractPrivate *clone() const = 0;

(make it pure virtual just to force all subclasses to re-implement it; if your base class is not abstract you probably want to implement the clone() method) and then override it in DerivedPrivate:

virtual DerivedPrivate *clone() const { return new DerivedPrivate(*this); }

Then, specialize the template method QSharedDataPointer::clone(). As we will re-use it multiple times (for different base classes), it is better to define a macro:

#define DATA_CLONE_VIRTUAL(Class) template<>                      \
Class##Private *QSharedDataPointer<Class##Private>::clone() \
{ \
return d->clone(); \
}
// after the definition of Abstract
DATA_CLONE_VIRTUAL(Abstract)

It is not necessary to write DATA_CLONE_VIRTUAL(Derived) as we never store a QSharedDataPointer<DerivedPrivate> anywhere in the hierarchy.

Then test the code again:

foo 0 0
foo 1 1
foo 0 0

– Just as expected! It continues to work if we replace Derived with Abstract in QScopedPointer:

QScopedPointer<Abstract> ins(new Derived());
QScopedPointer<Abstract> ins2(new Derived(*dynamic_cast<const Derived *>(ins.data())));

Well, another problem arises: the constructor for ins2 is ugly and messy. We could, as with the private classes, implement a virtual clone() function for these kinds of things, but that is still not elegant enough, and we cannot use a default copy constructor for any class that contains such QScopedPointers.

What about QSharedPointer, which is copy-constructible? Well, then the copies actually point to the same data structures and no copy-on-write is performed at all. This is still not what we want.

the Descendents of …

Inspired by Sean Parent’s talk, I finally came up with the following implementation:

template<typename T>
class Descendent
{
struct concept
{
virtual ~concept() = default;
virtual const T *ptr() const = 0;
virtual T *ptr() = 0;
virtual unique_ptr<concept> clone() const = 0;
};
template<typename U>
struct model : public concept
{
model(U x) : instance(move(x)) {}
const T *ptr() const { return &instance; }
T *ptr() { return &instance; }
// or unique_ptr<model<U> >(new model<U>(U(instance))) if you do not have C++14
unique_ptr<concept> clone() const { return make_unique<model<U> >(U(instance)); }
U instance;
};

unique_ptr<concept> m_d;
public:
template<typename U>
Descendent(U x) : m_d(make_unique<model<U> >(move(x))) {}

Descendent(const Descendent & that) : m_d(move(that.m_d->clone())) {}
Descendent(Descendent && that) : m_d(move(that.m_d)) {}

Descendent & operator=(const Descendent &that) { Descendent t(that); *this = move(t); return *this; }
Descendent & operator=(Descendent && that) { m_d = move(that.m_d); return *this; }

const T *data() const { return m_d->ptr(); }
const T *constData() const { return m_d->ptr(); }
T *data() { return m_d->ptr(); }
const T *operator->() const { return m_d->ptr(); }
T *operator->() { return m_d->ptr(); }
};

This class allows you to use Descendent<T> (read as “descendent of T”) to represent any instance of any subclass of T. It is copy-constructible, move-constructible, copy-assignable, and move-assignable.

Test code:

int main()
{
Descendent<Abstract> ins = Derived();
Descendent<Abstract> ins2 = ins;
ins->foo();
ins->modifyVar();
ins->foo();
ins2->foo();
}

It gives just the same results as before, but much neater and nicer – How does it work?

First we define a class concept. We put here what we want our instance to satisfy. We would like to access it as const and non-const, and to clone it as-is. Then we define a template class model<U> where U is a subclass of T, and implement these functionalities.

Next, we store a unique_ptr<concept>. The reason for not using QScopedPointer is that QScopedPointer is not movable, and movability is a feature we will actually want (in sink arguments and return values).

Finally it’s just the constructor, moving and copying operations, and ways to access the wrapped object.

When Descendent<Abstract> ins2 = ins; is called, we will go through the copy constructor of Descendent:

Descendent(const Descendent & that) : m_d(move(that.m_d->clone())) {}

which will then call ins.m_d->clone(). But remember that ins.m_d actually contains a pointer to model<Derived>, whose clone() is return make_unique<model<Derived> >(Derived(instance));. This expression will call the copy constructor of Derived, then make a unique_ptr<model<Derived> >, which calls the constructor of model<Derived>:

model(Derived x) : instance(move(x)) {}

which move-constructs instance. Finally the unique_ptr<model<Derived> > is implicitly converted to unique_ptr<concept>, as per the conversion rule: “If T is a derived class of some base B, then std::unique_ptr<T> is implicitly convertible to std::unique_ptr<B>.”

And from now on, happy hacking — (.>w<.)

Hot on the heels of last week, this week’s Usability & Productivity report continues to overflow with awesomeness. Quite a lot of the work you see featured here is already available to test out in the Plasma 5.16 beta, too! But why stop there? Here’s more:

New Features

Bugfixes & Performance Improvements

User Interface Improvements

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a donation to the KDE e.V. foundation.

Hi! I’m Akhil K Gangadharan and I’ve been selected for GSoC this year with Kdenlive. My project is titled ‘Revamping the Titler Tool’ and my work this summer aims to kick off the complete revamp of one of the major video-editing tools in Kdenlive, the Titler tool.

Titler Tool?

The Titler tool is used to create, you guessed it, title clips. Title clips are clips that contain text and images that can be composited over videos.

The Titler Tool


Why revamp it?

In Kdenlive, the titler tool is implemented using QGraphicsView, which has been considered deprecated since the release of Qt 5. This makes it prone to bugs that appear upstream and affect the functionality of the tool. It has caused issues in the past: popular features like the typewriter effect had to be dropped because of QGraphicsView bugs that led to uncontrollable crashes.

How?

Using QML.

Currently the Titler tool uses QPainter, which paints every property, and every animation has to be programmed by hand. QML makes it easy to create powerful animations, since it is a language designed for building UIs, and QML scenes can then be rendered to create title clips as needed.

Implementation details - a brief overview

For the summer, I intend to complete work on the backend implementation. The first step is to write and test a complete MLT producer module which can render QML frames. And then to begin test integration of this module with a new titler tool.

This is what the backend currently looks like:

current backend

After the revamp, the backend will look like this:

new backend

Once the backend is done, we will begin integrating it with Kdenlive and evolving the titler to use the new backend.

A great long challenge lies ahead, and I’m looking forward to this summer and beyond with the community to complete writing the tool - right from the backend to the new UI.

Finally, a big thanks to the Kdenlive community for getting me here and to my college student community, FOSS@Amrita for all the support and love!

May 18, 2019

Are you using Kubuntu 19.04, our current Stable release? Or are you already running our daily development builds?

We currently have Plasma 5.15.90 (Plasma 5.16 Beta)  available in our Beta PPA for Kubuntu 19.04, and in our 19.10 development release daily live ISO images.

For 19.04 Disco Dingo, add the PPA and then upgrade

sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt update && sudo apt full-upgrade -y

Then reboot. If you cannot reboot from the application launcher,

systemctl reboot

from the terminal.

For already installed 19.10 Eoan Ermine development release systems, simply upgrade your system.

Update directly from Discover, or use the command line:

sudo apt update && sudo apt full-upgrade -y

And reboot. If you cannot reboot from the application launcher,

systemctl reboot

from the terminal.

Otherwise, to test or install the live image, grab an ISO build from the daily live ISO images link.

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

  • If you believe you might have found a packaging bug, a launchpad.net account is required to post testing feedback to the Kubuntu team.
  • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

Please review the changelog.

[Test Case]

* General tests:
– Does plasma desktop start as normal with no apparent regressions over 5.15.5?
– General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend, etc.

* Specific tests:
– Check the changelog:
– Identify items with front/user facing changes capable of specific testing. e.g. “clock combobox instead of tri-state checkbox for 12/24 hour display.”
– Test the ‘fixed’ functionality.

Testing involves some technical set-up, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

We need your help to get this important beta release in shape for Kubuntu 19.10 as well as added to our backports.

Thanks! Please stop by the Kubuntu-devel IRC channel or Telegram group if you need clarification of any of the steps to follow.

May 17, 2019

Thanks to Nick Richards we've been able to convince Flathub to temporarily accept our old appdata files as still valid. It's a stopgap workaround, but it at least gives us some breathing room. The updates are coming in as we speak.

I was accepted to Google Summer of Code. I will be working with Krita on implementing an Animated Vector Brush. Read more...

I tried Latte Dock for a week or so, to see whether this great piece of software could improve my desktop experience. Here is what I think about it.

A week with Latte Dock

Latte Dock is a dock that provides multiple visual effects to improve the experience with icons and applications. I’ve already written about it, mostly in a negative way — not because of the software or its quality, but because I don’t see much excitement in yet another OS X style dock bar.
However, to give a more accurate review of the experience with Latte, I decided to force myself to use it as the only dock on my main computer.

I don’t know much about the history and goals of the project; I suspect it aims to be a more elegant dock than the default Plasma one, and it is of course inspired by the OS X dock, with its parabolic zoom and removal of the application tray. It is worth noting that there are multiple layouts out there, each one adding one or more features to customize the appearance of the dock, for example adding a top bar or changing the bar layout and size. I stuck with the out-of-the-box layout to get a more unbiased impression.

My Plasma Dock

My plasma dock is pretty simple and is composed of (from left to right):

  • the plasma menu;
  • the switch desktop applet;
  • the application tray;
  • a deck of my main applications (four);
  • the plasma dashboard icon;
  • the notification area;
  • the (digital) clock;
  • the logout applet.

I tend to keep my panel always laid out the same on all my computers, so that my eyes...

May 16, 2019



Plasma 5.16

KDE Plasma 5.16

Thursday, 16 May 2019.

Today KDE launches the beta release of Plasma 5.16.

In this release, many aspects of Plasma have been polished and rewritten to provide greater consistency and bring new features. There is a completely rewritten notification system supporting Do Not Disturb mode, more intelligent history with grouping, critical notifications in fullscreen apps, improved notifications for file transfer jobs, a much more usable System Settings page to configure everything, and many other things. The System and Widget Settings have been refined by porting code to newer Kirigami and Qt technologies and polishing the user interface. And of course the VDG and Plasma team effort towards the Usability & Productivity goal continues: getting feedback on all the papercuts in our software that make your life less smooth, and fixing them to ensure an intuitive and consistent workflow for your daily use.

For the first time, the default wallpaper of Plasma 5.16 will be decided by a contest in which everyone can participate and submit art. The winner will receive a Slimbook One v2 computer, an eco-friendly, compact machine measuring only 12.4 x 12.8 x 3.7 cm. It comes with an i5 processor and 8 GB of RAM, and is capable of outputting video in glorious 4K. Naturally, your One will come decked out with the upcoming KDE Plasma 5.16 desktop, your spectacular wallpaper, and a bunch of other great software made by KDE. You can find more information and submitted work on the competition wiki page, and you can submit your own wallpaper in the subforum.

Desktop Management



New Notifications




Theme Engine Fixes for Clock Hands!




Panel Editing Offers Alternatives




Login Screen Theme Improved


  • Completely rewritten notification system supporting Do Not Disturb mode, more intelligent history with grouping, critical notifications in fullscreen apps, improved notifications for file transfer jobs, a much more usable System Settings page to configure everything, and more!
  • Plasma themes are now correctly applied to panels when selecting a new theme.
  • More options for Plasma themes: offset of analog clock hands and toggling blur behind.
  • All widget configuration settings have been modernized and now feature an improved UI. The Color Picker widget also improved, now allowing dragging colors from the plasmoid to text editors, palette of photo editors, etc.
  • The look and feel of lock, login and logout screen have been improved with new icons, labels, hover behavior, login button layout and more.
  • When an app is recording audio, a microphone icon will now appear in the System Tray which allows for changing and muting the volume using mouse middle click and wheel. The Show Desktop icon is now also present in the panel by default.
  • The Wallpaper Slideshow settings window now displays the images in the selected folders, and allows selecting and deselecting them.
  • The Task Manager features better organized context menus and can now be configured to move a window from a different virtual desktop to the current one on middle click.
  • The default Breeze window and menu shadow color are back to being pure black, which improves visibility of many things especially when using a dark color scheme.
  • The "Show Alternatives..." button is now visible in panel edit mode; use it to quickly swap widgets for similar alternatives.
  • Plasma Vaults can now be locked and unlocked directly from Dolphin.


Settings



Color Scheme




Application Style and Appearance Settings


  • There has been general polishing across all pages; the entire Appearance section has been refined, the Look and Feel page has moved to the top level, and improved icons have been added to many pages.
  • The Color Scheme and Window Decorations pages have been redesigned with a more consistent grid view. The Color Scheme page now supports filtering by light and dark themes, drag and drop to install themes, undo deletion and double click to apply.
  • The theme preview of the Login Screen page has been overhauled.
  • The Desktop Session page now features a "Reboot to UEFI Setup" option.
  • There is now full support for configuring touchpads using the Libinput driver on X11.


Window Management



Window Management


  • Initial support for using Wayland with proprietary Nvidia drivers has been added. When using Qt 5.13 with this driver, graphics are also no longer distorted after waking the computer from sleep.
  • Wayland now features drag and drop between XWayland and Wayland native windows.
  • Also on Wayland, the System Settings Libinput touchpad page now allows you to configure the click method, switching between "areas" or "clickfinger".
  • KWin's blur effect now looks more natural and correct to the human eye by not unnecessarily darkening the area between blurred colors.
  • Two new default shortcuts have been added: Meta+L can now be used by default to lock the screen and Meta+D can be used to show and hide the desktop.
  • GTK windows now apply the correct active and inactive colour schemes.


Plasma Network Manager



Plasma Network Manager with Wireguard


  • The Networks widget is now faster to refresh Wi-Fi networks and more reliable at doing so. It also has a button to display a search field to help you find a particular network from among the available choices. Right-clicking on any network will expose a "Configure…" action.
  • WireGuard is now compatible with NetworkManager 1.16.
  • One Time Password (OTP) support has been added to the OpenConnect VPN plugin.


Discover



Updates in Discover


  • In Discover's Update page, apps and packages now have distinct "downloading" and "installing" sections. When an item has finished installing, it disappears from the view.
  • The task completion indicator now looks better, using a real progress bar. Discover also displays a busy indicator while checking for updates.
  • Improved support and reliability for AppImages and other apps that come from store.kde.org.
  • Discover now allows you to force quit while installation or update operations are in progress.
  • The sources menu now shows the version number for each different source for that app.

Read the full announcement

It’s been a great pleasure to be chosen to work with KDE during GSoC this year. I’ll be working on KIOFuse, and hopefully by the end of the coding period it will be well integrated with KIO itself. Development will mainly be coordinated on the #kde-fm channel (IRC nick: feverfew), with fortnightly updates on my blog, so feel free to pop by! Here’s a small snippet of my proposal to give everyone an idea of what I’ll be working on:

KIOSlaves are a powerful feature within the KIO framework, allowing KIO-aware applications such as Dolphin to interact with services outside the local filesystem over URLs such as fish:// and gdrive:/. However, KIO-unaware applications are unable to interact seamlessly with KIO Slaves. For example, editing a file in gdrive:/ in LibreOffice will not save changes to your Google Drive. One potential solution is to make use of FUSE, an interface provided by the Linux kernel that allows userspace processes to provide a filesystem which can be mounted and accessed by regular applications. KIOFuse is a project by fvogt that makes it possible to mount KIO filesystems in the local system, thereby exposing them to POSIX-compliant applications such as Firefox and LibreOffice.

This project intends to polish KIOFuse such that it is ready to be a KDE project. In particular,
I’ll be focusing on the following four broad goals:
• Improving compatibility with KDE and non-KDE applications by extending and improving supported filesystem operations.
• Improving KIO Slave support.
• Performance and usability improvements.
• Adding a KDE Daemon module to allow the management of KIOFuse mounts and the translation of KIO URLs to their local path equivalents.

We’re still on track to release Krita 4.2.0 this month! Compared to the alpha release, we have fixed over thirty issues. This release also has a fresh splash screen by Tyson Tan and restores Python support to the Linux AppImage. Note that the Linux AppImage does not have support for sound, and the macOS build does not have support for G’Mic.

Warning: Linux users should be careful with distribution packages. We have a host of patches for Qt queued up, some of which are important for distributions to carry until the patches are merged and released in a new version of Qt.

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

OSX

Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux AppImage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).
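For reference, the verification steps can be sketched like this ("example.tar.gz" is a placeholder, not a real release artifact; the snippet creates a throwaway file so the checksum commands are runnable as-is — for a real download you would use the published .md5 and .sig files instead):

```shell
# Stand-in for a downloaded tarball and its published md5 file.
echo "hello" > example.tar.gz
md5sum example.tar.gz > example.tar.gz.md5

# Verify the checksum; prints "example.tar.gz: OK" on success.
md5sum -c example.tar.gz.md5

# Signature check, shown for reference (requires the real .sig and key):
# gpg --import 0x58b9596c722ea3bd.asc
# gpg --verify example.tar.gz.sig example.tar.gz
```

The checksum guards against corrupted downloads; only the GPG signature proves the file actually came from the Krita release signing key.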

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

May 15, 2019

The Flatpak and Flathub developers have changed what they consider a valid appdata file, so our appdata files that were valid last month are no longer valid, and thus we can't build the KDE Applications 19.04.1 packages.

Help testing the Plasma Theme switching now

Do you like Plasma Themes? Do you even design Plasma Themes yourself? Do you want to see switching Plasma Themes work correctly, especially for Plasma panels?

Please get one of the Live images with the latest code straight from the Plasma developers’ hands (or, if you build manually yourself from the master branches, last night’s code should be fine) and give the switching of Plasma Themes a good test, so we can be sure things will work as expected on arrival of Plasma 5.16:

If you find glitches, please report them here in the comments, or better on the #plasma IRC channel.

Plasma Themes, to make it more your Plasma

One of the things that makes Plasma so attractive is the officially supported option to customize its style beyond colors and wallpaper, allowing users to personalize the look to their liking. Designers have picked up on that and produced a good set of custom designs (store.kde.org lists 470 themes at the time of writing).

Some regressions in the Theme support had sneaked into the last few Plasma versions, unnoticed because most people & developers happily use the default Breeze theme. With Plasma 5.16, some theming fixes are about to arrive.

Plasma Theme looking good on first contact with Plasma 5.16

The most annoying pain point has been that, on selecting and applying a new Plasma Theme, the theme data was not correctly picked up, especially by Plasma panels. Only after a restart of Plasma could the theme be fully experienced, which made quick testing of themes, e.g. from store.kde.org, a sad story, given that most themes looked a bit broken. And without knowing more, one would think it was the theme’s fault.

But some evenings & nights have been spent hunting down the reasons, and it seems they have all been found and corrected. Now, when one clicks the “Apply” button, the Plasma Theme instantly styles your Plasma desktop as it should, especially the panels.

And dressing up your desktop to match your day, your week, your mood, or your shirt with one of those themes — some of which are excellent — is only a matter of a few mouse clicks. No more restart needed!

Coming to your system with Plasma 5.16 and KDE Frameworks 5.59 (both to be released in June).

Make your Plasma Theme take advantage of Plasma 5.16

Theme designers, please study the recently added page on techbase.kde.org about Porting Themes to latest Plasma 5. It lists the changes known so far and what to do about them. Please extend the list if something is still missing.
And tell your theme designer friends about this page, so they can improve their themes as well.


Older blog entries


Planet KDE is made from the blogs of KDE's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet KDE with RSS, FOAF or OPML.