May 22, 2019

KDAB has released a new version of KDSoap. Version 1.8.0 comes more than a year after the previous release (1.7.0).

KDSoap is a tool for creating client applications for web services without the need for any further component such as a dedicated web server.

KDSoap lets you interact with applications which have APIs that can be exported as SOAP objects. The web service then provides a machine-accessible interface to its functionality via HTTP. Find out more...

Version 1.8.0 has a large number of improvements and fixes:

General

  • Fixed internally-created faults lacking an XML element name (so e.g. toXml() would abort)
  • KDSoapMessage::messageAddressingProperties() is now correctly filled in when receiving a message with WS-Addressing in the header

Client-side

  • Added support for timing out requests (default 30 minutes, configurable with KDSoapClientInterface::setTimeout())
  • Added support for SOAP 1.2 faults in faultAsString()
  • Improved detection of SOAP 1.2 faults in the HTTP response
  • Stricter namespace check for received Fault elements
  • Report client-generated faults as SOAP 1.2 if selected
  • Fixed error code when authentication failed
  • Autodeletion of jobs is now configurable (github pull #125)
  • Added error details in faultAsString() – and the generated lastError() – coming from the SOAP 1.2 detail element.
  • Fixed memory leak in KDSoapClientInterface::callNoReply
  • Added support for WS-UsernameToken, see KDSoapAuthentication
  • Extended KDSOAP_DEBUG functionality (e.g. “KDSOAP_DEBUG=http,reformat” will now print http-headers and pretty-print the xml)
  • Added support for specifying requestHeaders as part of KDSoapJob via KDSoapJob::setRequestHeaders()
  • Renamed the missing KDSoapJob::returnHeaders() to KDSoapJob::replyHeaders(), and provide an implementation
  • Made KDSoapClientInterface::soapVersion() const
  • Added lastFaultCode() for error handling after sync calls. Same as lastErrorCode() but it returns a QString rather than an int.
  • Added conversion operator from KDDateTime to QVariant to avoid implicit conversion to the base QDateTime (github issue #123).

Server-side

  • New method KDSoapServerObjectInterface::additionalHttpResponseHeaderItems to let server objects return additional HTTP headers. This can be used to implement support for CORS, using KDSoapServerCustomVerbRequestInterface to implement the OPTIONS response with “Access-Control-Allow-Origin” in the response headers (github issue #117).
  • Stopped generating two job classes with the same name when two bindings have the same operation name; one of them is now prefixed with the binding name (github issue #139, part 1)
  • Prepended this-> in method calls to avoid a compilation error when a variable and a method have the same name (github issue #139, part 2)

WSDL parser / code generator changes, applying to both client and server side

  • Source incompatible change: all deserialize() functions now require a KDSoapValue instead of a QVariant. If you use a deserialize(QVariant) function, you need to port your code to use KDSoapValue::setValue(QVariant) before deserialize()
  • Source incompatible change: all serialize() functions now return a KDSoapValue instead of a QVariant. If you use a QVariant serialize() function, you need to port your code to use QVariant KDSoapValue::value() after serialize()
  • Source incompatible change: xs:QName is now represented by KDQName instead of QString, which allows the namespace to be extracted. The old behaviour is available via KDQName::qname().
  • Fixed double-handling of empty elements
  • Fixed fault elements being generated in the wrong namespace, must be SOAP-ENV:Fault (github issue #81).
  • Added an import-path argument for setting a local path from which to load files that would otherwise be downloaded.
  • Added -help-on-missing option to kdwsdl2cpp to display extra help on missing types.
  • Added C++17 std::optional as possible return value for optional elements.
  • Added -both to create both the header (.h) and implementation (.cpp) files in one run
  • Added -namespaceMapping @mapping.txt to import url=code mappings, affects C++ class name generation
  • Added functionality to prevent downloading the same WSDL/XSD file twice in one run
  • Added “hasValueFor{MemberName}()” accessor function, for optional elements
  • Generated services now include soapVersion() and endpoint() accessors to match the setSoapVersion(…) and setEndpoint(…) mutators
  • Added support for generating messages for WSDL files without services or bindings
  • Fixed erroneous QT_BEGIN_NAMESPACE around forward-declarations like Q17__DialogType.
  • KDSoapValue now stores the namespace declarations during parsing of a message and writes namespace declarations during sending of a message
  • Avoid serialize crash with required polymorphic types, if the required variable wasn’t actually provided
  • Fixed generated code for restriction to base class (it wouldn’t compile)
  • Prepended “undef daylight” and “undef timezone” to all generated files, to fix compilation errors in WSDL files that use those names, due to nasty Windows macros
  • Added generation for default attribute values.

Get KDSoap…

KDSoap on github…

The post KDSoap 1.8.0 released appeared first on KDAB.

May 21, 2019

The number of events keeps growing and growing. Today I want to share with you that the first meeting of the Spanish-speaking LibreOffice community will take place in València on May 25 this year. Another meeting of another great project from the Free Software world that, if you can, you should not miss.

The first meeting of the Spanish-speaking LibreOffice community will take place in València

LibreOffice is the office suite that has won over the Free Software world by offering a more than worthy alternative to the omnipresent Microsoft Office. Thanks to it we can have a powerful word processor, a precise spreadsheet, a versatile presentation builder, a complete drawing application, and many other small complementary applications.

It is, without any doubt, a perfect suite for the average user and, I would dare to say, usable by any company, and its growth has been constant since 2010, when LibreOffice was born as a fork of OpenOffice under the wing of The Document Foundation.

So I am pleased to present the first meeting of the Spanish-speaking LibreOffice community, which will take place in València next Saturday, May 25, at 10:00.

The first meeting of the Spanish-speaking LibreOffice community will take place in València

The meeting venue is, as has become traditional, Las Naves (Carrer de Joan Verdeguer, 16, 46024 València), in the Think Tank room, a perfect and beautiful place for this kind of event.

The agenda for the event, with provisional times, is as follows:

  • 10:00 – Talk: Introduction – What is The Document Foundation?
  • 10:30 – Talk: Presentation of the online self-learning courses
  • 11:00 – Break
  • 11:15 – Talk: The Generalitat Valenciana's LibreOffice migration project
  • 11:45 – Talk: Creating macros with Python
  • 12:15 – Break
  • 12:30 – Talk: Quality Assurance in LibreOffice
  • 13:00 – Open discussion
  • 14:00 – End of the event
  • Lunch at a restaurant to be determined.

As you can see, a great menu that will be complemented by a long after-lunch chat where, I am sure, the conversation will keep turning to the office suite and to many other projects and initiatives related to the Free Software world.

More information: The Document Foundation wiki

 

I’m very excited to start off the Google Summer of Code blogging experience for the project I’m doing with my KDE mentors David Edmundson and Nate Graham. What we’ll be trying to achieve this summer is to have SDDM be more in sync with the Plasma desktop. What does that mean? The essence of the problem…

May 20, 2019

The «How to…» section returns to the blog. Today I want to share with all of you how to share folders via Samba with Dolphin, the KDE Community file manager that makes my life so easy. It is actually very simple, but by default KDE Neon does not include this option.

How to share folders via Samba with Dolphin

I have recently made changes to my desktop computers. For reasons that are beside the point, I have migrated all of them to KDE Neon. With this I begin a new learning cycle without the umbrella of the great YaST for configuring certain services.

One of those services is sharing files over the network via Samba, so I had to learn how to set it up using only what KDE applications provide, something I remembered being quite simple.

I remembered that the folder properties dialog in Dolphin used to show a tab that enabled sharing the folder via Samba. However, it did not appear for me, so I investigated a bit and discovered the reason.

For that option to appear, we must install the «kdenetwork-filesharing» package. The steps are very simple:

  • Open a terminal.
  • Update the package lists:

$ sudo apt update

  • Install the corresponding package:

$ sudo apt install kdenetwork-filesharing

Once this is done, opening Dolphin and selecting «Properties» with the right mouse button on a folder will show the «Share» tab, as seen in the image below.

How to share folders via Samba with Dolphin

Now we simply click on the tab, enable «Share with Samba (Microsoft Windows)», give the share a name, and set the appropriate permissions according to your level of trust.

As you can see, quick and easy, although I do not know why it is not enabled by default in KDE Neon, given how common it is to have several machines.

If you occasionally do performance profiling as I do, you probably know Valgrind's Callgrind and the related UI, KCachegrind. While Callgrind is a pretty powerful tool, running it takes quite a while (not exactly fun with something as big as e.g. LibreOffice).

Recently I finally gave Linux perf a try. Not quite sure why I didn't use it before; IIRC, when I tried it some time ago it was difficult to set up or something. Using perf record has very little overhead, but I wasn't exactly thrilled by perf report. I mean, it's a text UI, and it just gives a list of functions, so if I want to see anything close to a call graph, I have to manually expand one function, expand another function inside it, expand yet another function inside that, and so on. Not that it wouldn't work, but compared to just looking at what KCachegrind shows and seeing ...

When figuring out how to use perf, I watched a talk by Milian Wolff, and on one slide I noticed a mention of a Callgrind script. Of course I had to try it. It was a bit slow, but hey, I could finally look at perf results without feeling like it was an effort. Well, and then I improved the part of the script that was slow, so I guess I've just put the effort elsewhere :).

And I thought this little script might be useful for others. After mailing Milian, it turned out he had created the script as a proof of concept and wasn't interested in it anymore, having moved on to developing Hotspot as a UI for perf. Fair enough, but I think I still prefer KCachegrind; I'm used to it, and I don't have to switch UIs when switching between perf and Callgrind. So, with his agreement, I've submitted the script to KCachegrind. If you find it useful, just download it and do something like:

$ perf record -g ...
$ perf script -s perf2calltree.py > perf.out
$ kcachegrind perf.out



Plasma 5.16 beta was released last week and there’s now a further couple of weeks to test it to find and fix all the beasties. To help out download the Neon Testing image and install it in a virtual machine or on your raw hardware. You probably want to do a full-upgrade to make sure you have the latest builds. Then try out the new notifications system, or the new animated wallpaper settings or anything else mentioned in the release announcement. When you find a problem report it on bugs.kde.org and/or chat on the Plasma Matrix room. Thanks for your help!

I have been to many software events and have helped out at or been part of the organization of a few of them. Based on that experience, and the fact that I have participated in the last two editions, let me tell you that J On The Beach is a great event.

The main factors that lead me to this conclusion are:

  • It is all about content. I have seen many events that, over time, lose focus on the quality of their content. It is a hard focus to keep, especially as you grow. @JOTB19 had great content: well-delivered talks and workshops, given by bright people with something to say that was relevant to the audience.
    • I think the event has not reached its limit yet, especially when it comes to workshops.
    • Designing the content structure to target the right audience is as important as bringing in speakers with great things to say. As any event matures, tough decisions will need to be taken to find its own space and identity among outstanding competitors.
      • When it comes to themes, will J On The Beach keep targeting several topics, or will it narrow them down to one or two? Will they always be the same or will they rotate?
      • When it comes to size, will it grow or will it remain at the current numbers? Will the price increase or will it be kept in the current range?
      • When it comes to content, will the event focus more energy and time on the “hands-on” learning sessions, or will workshops remain as relevant as the talks, as they are today? Will the talks' length be reduced? Will we see lightning talks?
  • J On The Beach was well organised. A good organization is not one that never runs into trouble, but one that handles it smoothly so there is little or no perceived impact. Judging by the little to no impact I perceived, this event has a diligent team behind it.
  • Support from local companies. As Málaga matures as a software hub, more and more companies arrive in the area expecting to grow in size, so the need to attract local talent grows in parallel.
    • Some of these foreign companies understand how important it is to show up at local events to become known to as many local developers as possible. J On The Beach has captured the attention of several of these companies.
    • The organizers have understood this reality and support them in using the event to openly recruit people. This symbiotic relationship is a very productive one, from what I have witnessed.
    • It is a hard relationship to sustain though, especially if the event does not grow in size, so in the future it will probably need additional common interests to remain productive for both sides.
  • Global by default. Most events in Spain have traditionally been designed for Spaniards first, turning into more global events as they grow. J On The Beach has been global by default, by design, since day 1. It is harder to succeed that way, but beyond the activation point it becomes easier to stay sustainable. The organizers took the risk and have already reached that point, which in my opinion gives the event a bright future.
    • The fact that the event attracts developers from many countries, especially Eastern European ones, makes J On The Beach very attractive, from a recruitment perspective, to foreign companies already located in Málaga. Málaga is a great place not just to work in English but also to live in English. There are well-established communities from many different countries in the metropolitan area, due to how strong the tourism industry is here. These factors, together with others like logistics, affordable living costs, a good public health care system, sunny weather, and the availability of international and multilingual schools, reduce the adaptation effort when relocating, especially for developers' families. J On The Beach brings tasty fishes to the pond.

Let me name a couple of points that can make the event even better:

  • It is very hard to find a venue that fits an event during its consolidation phase and evolves with it. This edition's venue represents a significant improvement over last year's. There is room for improvement though.
    • It would be ideal to find a place in Málaga itself, closer to where the companies are located and to places to hang out after the event, while keeping the many good things the current venue/location provides.
    • Finding the right venue is tough. There are decision-making factors that participants do not usually see but that are essential, like costs, how supportive the venue staff and owners are, accommodation availability in the surrounding area, availability on the selected dates, etc. It is one of the most difficult points to get right, in my experience.
  • Great events deserve great keynote speakers. They are hard to get, but they often make the difference between great and must-attend events.
    • Great keynote speakers do not necessarily mean popular ones. I already see celebrities at bigger and more expensive events. I would love to see old-time computer science cowboys in Málaga: those first-class engineers who did something relevant some time ago and have witnessed the evolution of our industry and of their own inventions. They can bring a perspective very few can provide, extremely valuable in these fast-changing times. Those gems are harder to see at big, popular events and might be a good target for a smaller, high-quality event. It would be a great sign of success if professionals of that kind came to talk at J On The Beach.

I am very glad there is such a great event close to where I live. J On The Beach is worthwhile not just for local developers but also for those from abroad. Every year I attend several events in other countries with bigger names but less value than J On The Beach. It will definitely be on my 2020 agenda. Thanks to every person involved in making it possible.

Pictures taken from the J On The Beach website.

May 19, 2019

With VLC and libvlc missing from Craft, phonon-vlc cannot be built successfully on macOS. This caused the KDE Connect build in Craft to fail.

As a small step of my GSoC project, I managed to build KDE Connect by removing the phonon-vlc dependency. But that is not a good solution; I should try to fix the phonon-vlc build on macOS. So during the community bonding period, to get to know the community and some of its important tools better, I tried to fix phonon-vlc.

Fixing phonon-vlc

At first, I installed libVLC via MacPorts. All the header files and libraries were installed into the system paths, so in theory building phonon-vlc should not have been a problem. But an error occurred:

We can see that compilation succeeds; the error occurs at the very end, during linking. The error message tells us that the QtDBus library is missing. So to fix it, I made a small patch to add QtDBus manually in the CMakeLists file.

diff --git a/src/CMakeLists.txt b/src/CMakeLists.txt
index 47427b2..1cdb250 100644
--- a/src/CMakeLists.txt
+++ b/src/CMakeLists.txt
@@ -81,7 +81,7 @@ if(APPLE)
endif(APPLE)

automoc4_add_library(phonon_vlc MODULE ${phonon_vlc_SRCS})
-qt5_use_modules(phonon_vlc Core Widgets)
+qt5_use_modules(phonon_vlc Core Widgets DBus)

set_target_properties(phonon_vlc PROPERTIES
PREFIX ""

And it works well!

A small caveat: Hannah said she didn't get an error during linking. It may be related to the Qt version. If anyone has an idea, feel free to contact me.

My Qt version is 5.12.3.

Fixing VLC

To fix VLC, I tried to package the VLC binary just like the one on Windows.

But unfortunately, the header files in the .app package are incomplete. Compared to the Windows version, the entire plugins folder is missing.

So I made a patch for all those files. But the patch is huge (25,000 lines!), so it is not a good idea to merge it into the master branch.

Thanks to Hannah, there is now a libs/vlc blueprint in the master branch, so in Craft, feel free to install it by running craft libs/vlc.

Troubleshooting

If you cannot build libs/vlc, just like me, you can also choose the binary VLC with the header files patch.

The headers patch for the binary is too big, and adding it to the master branch is not a good idea, so I published it in my own repository:
https://github.com/Inokinoki/craft-blueprints-inoki

To use it, run craft --add-blueprint-repository https://github.com/inokinoki/craft-blueprints-inoki.git and the blueprint(s) will be added to your local blueprint directory.

Then, craft binary/vlc will fetch the VLC binary and install the header files and libraries into Craft's include and lib paths. Finally, you can build whatever you want with a libvlc dependency.

Conclusion

Up to now, KDE Connect has been using QtMultimedia rather than phonon and phonon-vlc to play sounds. But this work could also be useful for other applications or libraries that depend on phonon, phonon-vlc or VLC. This small step may help build them successfully on macOS.

I hope this can help someone!

Hi, everyone!

I’m Weixuan XIAO, with the nickname Inoki; sometimes Inokinoki is used to avoid duplicate usernames.

I’m glad to have been selected for Google Summer of Code 2019 to work with the KDE Community on making KDE Connect work on macOS. And I’m willing to become a long-term contributor to the KDE Community.

As a Chinese student, I’m studying in France for my engineering degree. At the same time, I’m awaiting my bachelor’s degree from Shanghai University.

I major in Real-Time Systems and Embedded Engineering. With strong interests in operating systems and computer architecture, I like playing with small devices like the Arduino and Raspberry Pi, and with different systems like macOS and Linux (especially Manjaro with KDE, they are the best partners).

Japanese culture makes me crazy, for example, the animation and the games. Even my nickname is actually the pronunciation of my real name in Japanese. So if all of this is the choice of Steins Gate, I’ll gladly accept it :)

I speak Chinese, French, English, and a little Japanese. But I realize that my English is awful, so if I make any mistakes, please tell me. That will improve my English and I will appreciate it.

I hope we can have a good summer in 2019, and write some good code :)

I am starting the usual series of articles to promote and encourage attendance at this early summer's big event for supporters and contributors of the KDE project: Akademy-es 2019 in Vigo. Welcome to «Road to Akademy-es 2019 in Vigo: accommodation», where I will go over the hotels the organization recommends. To be clear, there is no financial relationship with them; this is simply to make your stay easier.

Road to Akademy-es 2019 in Vigo: accommodation

From June 28 to 30, the long-awaited Akademy-es 2019, the meeting organized by KDE España and GALPon, will be held in Vigo at MARCO (Museo de Arte Contemporáneo).

On this blog we have been covering its news bit by bit, such as the opening of attendee registration, the search for sponsors, and the call for papers (submit your talk), which was extended yesterday.

Road to Akademy-es 2019 in Vigo: accommodation

If you are interested in attending, one of the most important points is accommodation. Although everyone is free to choose where to rest, it does not hurt to share this aspect with the rest of the attendees.

The reasons are obvious, but I will list them:

  • If you want, you will never be alone.
  • You will have help with any problem (a missing charger, a forgotten item, etc.).
  • You can share rides to the talks and meeting venue.
  • If you do not know how to cook, you will surely manage.
  • And, if you are one of those who like to work at night, you will find plenty of other night owls to share those productive hours with.
  • Not to mention that, by choosing this option, bonds of friendship grow exponentially.

On this occasion the Akademy-es organization, well advised by GALPon, recommends two hotels:

  • Hotel Junquera: Visited by GALPon; the state of the rooms is impeccable, just as advertised. It has a small pool and a free gym. Prices range from €36 for the most basic single room to €62 for the most complete double room.
  • Hotel Ogalia: A recently opened hotel that offers a 10% discount for «congreso akademy» when booking directly with the hotel.

If you want more information about accommodation, just visit the dedicated page on the Akademy-es 2019 site.


Continuing with the addition of a line-ending style for the Straight Line annotation tool, I have added the ability to select the line start style as well. The required code changes were committed today.

Line annotation with circled start and closed arrow ending.

Currently it is supported only for PDF documents (and Poppler version ≥ 0.72), but that will change soon, thanks to another change by Tobias Deiminger, under review, that extends the functionality to other document formats supported by Okular.

libqaccessibilityclient 0.4.1 is out now
https://download.kde.org/stable/libqaccessibilityclient/
http://embra.edinburghlinux.co.uk/~jr/tmp/pkgdiff_reports/libqaccessibilityclient/0.4.0_to_0.4.1/changes_report.html
Signed by Jonathan Riddell
https://sks-keyservers.net/pks/lookup?op=vindex&search=0xEC94D18F7F05997E
  • version 0.4.1
  • Use only undeprecated KDEInstallDirs variables
  • KDECMakeSettings already cares for CMAKE_AUTOMOC & BUILD_TESTING
  • Fix use in cross compilation
  • Q_ENUMS -> Q_ENUM
  • more complete release instructions

Recently I have been researching possibilities for making the members of KoShape copy-on-write. At first glance, it seems enough to declare the d-pointers as some subclass of QSharedDataPointer (see Qt’s implicit sharing) and then replace pointers with instances. However, a number of problems remain to be solved, one of them being polymorphism.

polymorphism and value semantics

In the definition of KoShapePrivate class, the member fill is stored as a QSharedPointer:

QSharedPointer<KoShapeBackground> fill;

There are a number of subclasses of KoShapeBackground, including KoColorBackground and KoGradientBackground, to name just a few. We cannot store an instance of KoShapeBackground directly, since we want polymorphism. But, well, making KoShapeBackground copy-on-write seems to have nothing to do with whether we store it as a pointer or an instance. So let’s leave the question here – I will come back to it at the end of this post.

d-pointers and QSharedData

The KoShapeBackground hierarchy (similar to the KoShape one) uses derived d-pointers for storing private data. To make things easier, I will use a small example here to elaborate on their use.

derived d-pointer

class AbstractPrivate
{
public:
    AbstractPrivate() : var(0) {}
    virtual ~AbstractPrivate() = default;

    int var;
};

class Abstract
{
public:
    // it is not yet copy-constructible; we will come back to this later
    // Abstract(const Abstract &other) = default;
    ~Abstract() = default;
protected:
    explicit Abstract(AbstractPrivate &dd) : d_ptr(&dd) {}
public:
    virtual void foo() const = 0;
    virtual void modifyVar() = 0;
protected:
    QScopedPointer<AbstractPrivate> d_ptr;
private:
    Q_DECLARE_PRIVATE(Abstract)
};

class DerivedPrivate : public AbstractPrivate
{
public:
    DerivedPrivate() : AbstractPrivate(), bar(0) {}
    virtual ~DerivedPrivate() = default;

    int bar;
};

class Derived : public Abstract
{
public:
    Derived() : Abstract(*(new DerivedPrivate)) {}
    // it is not yet copy-constructible
    // Derived(const Derived &other) = default;
    ~Derived() = default;
protected:
    explicit Derived(AbstractPrivate &dd) : Abstract(dd) {}
public:
    void foo() const override { Q_D(const Derived); cout << "foo " << d->var << " " << d->bar << endl; }
    void modifyVar() override { Q_D(Derived); d->var++; d->bar++; }
private:
    Q_DECLARE_PRIVATE(Derived)
};

The main goal of making DerivedPrivate a subclass of AbstractPrivate is to avoid multiple d-pointers in the structure. Note that there are constructors taking a reference to the private data object. These make it possible for a Derived object to use the same d-pointer as its Abstract parent. The Q_D() macro is used to convert the d_ptr, which is a pointer to AbstractPrivate, to another pointer, named d, of one of its descendant types; here, it is DerivedPrivate. It is used together with the Q_DECLARE_PRIVATE() macro in the class definition and has a rather complicated implementation in the Qt headers. But for simplicity, it does not hurt for now to understand it as the following:

#define Q_D(Class) Class##Private *const d = reinterpret_cast<Class##Private *>(d_ptr.data())

where Class##Private simply appends the string Private to (the macro argument) Class.

Now let’s test it by creating a pointer to Abstract and giving it a Derived object:

int main()
{
    QScopedPointer<Abstract> ins(new Derived());
    ins->foo();
    ins->modifyVar();
    ins->foo();
}

Output:

foo 0 0
foo 1 1

Looks pretty viable – everything is working well! Now, what if we use Qt’s implicit sharing? Just make AbstractPrivate a subclass of QSharedData and replace QScopedPointer with QSharedDataPointer.

making d-pointer QSharedDataPointer

In the last section, we commented out the copy constructors since QScopedPointer is not copy-constructible; QSharedDataPointer, however, is copy-constructible, so we add them back:

class AbstractPrivate : public QSharedData
{
public:
    AbstractPrivate() : var(0) {}
    virtual ~AbstractPrivate() = default;

    int var;
};

class Abstract
{
public:
    Abstract(const Abstract &other) = default;
    ~Abstract() = default;
protected:
    explicit Abstract(AbstractPrivate &dd) : d_ptr(&dd) {}
public:
    virtual void foo() const = 0;
    virtual void modifyVar() = 0;
protected:
    QSharedDataPointer<AbstractPrivate> d_ptr;
private:
    Q_DECLARE_PRIVATE(Abstract)
};

class DerivedPrivate : public AbstractPrivate
{
public:
    DerivedPrivate() : AbstractPrivate(), bar(0) {}
    virtual ~DerivedPrivate() = default;

    int bar;
};

class Derived : public Abstract
{
public:
    Derived() : Abstract(*(new DerivedPrivate)) {}
    Derived(const Derived &other) = default;
    ~Derived() = default;
protected:
    explicit Derived(AbstractPrivate &dd) : Abstract(dd) {}
public:
    void foo() const override { Q_D(const Derived); cout << "foo " << d->var << " " << d->bar << endl; }
    void modifyVar() override { Q_D(Derived); d->var++; d->bar++; }
private:
    Q_DECLARE_PRIVATE(Derived)
};

And testing the copy-on-write mechanism:

int main()
{
    QScopedPointer<Derived> ins(new Derived());
    QScopedPointer<Derived> ins2(new Derived(*ins));
    ins->foo();
    ins->modifyVar();
    ins->foo();
    ins2->foo();
}

But, eh, it’s a compile-time error.

error: reinterpret_cast from type 'const AbstractPrivate*' to type 'AbstractPrivate*' casts away qualifiers
    Q_DECLARE_PRIVATE(Abstract)

Q_D, revisited

So, where does the const removal come from? In qglobal.h, the code related to Q_D is as follows:

template <typename T> inline T *qGetPtrHelper(T *ptr) { return ptr; }
template <typename Ptr> inline auto qGetPtrHelper(const Ptr &ptr) -> decltype(ptr.operator->()) { return ptr.operator->(); }

// The body must be a statement:
#define Q_CAST_IGNORE_ALIGN(body) QT_WARNING_PUSH QT_WARNING_DISABLE_GCC("-Wcast-align") body QT_WARNING_POP
#define Q_DECLARE_PRIVATE(Class) \
inline Class##Private* d_func() \
{ Q_CAST_IGNORE_ALIGN(return reinterpret_cast<Class##Private *>(qGetPtrHelper(d_ptr));) } \
inline const Class##Private* d_func() const \
{ Q_CAST_IGNORE_ALIGN(return reinterpret_cast<const Class##Private *>(qGetPtrHelper(d_ptr));) } \
friend class Class##Private;

#define Q_D(Class) Class##Private * const d = d_func()

It turns out that Q_D will call d_func() which then calls an overload of qGetPtrHelper() that takes const Ptr &ptr. What does ptr.operator->() return? What is the difference between QScopedPointer and QSharedDataPointer here?

QScopedPointer’s operator->() is a const method that returns a non-const pointer to T. QSharedDataPointer, however, has two operator->() overloads – const T* operator->() const and T* operator->() – and they have quite different behaviours: the non-const variant calls detach() (where copy-on-write is implemented), while the const one does not.

qGetPtrHelper() here can only take d_ptr as a const QSharedDataPointer&, never a non-const one; so, no matter which d_func() we are calling, we can only ever get a const AbstractPrivate *. That is exactly the problem here.

To resolve this problem, let’s replace the Q_D macros with the ones we define ourselves:

#define CONST_SHARED_D(Class) const Class##Private *const d = reinterpret_cast<const Class##Private *>(d_ptr.constData())
#define SHARED_D(Class) Class##Private *const d = reinterpret_cast<Class##Private *>(d_ptr.data())

We will then use SHARED_D(Class) in place of Q_D(Class) and CONST_SHARED_D(Class) in place of Q_D(const Class). Since the const and non-const variants really do behave differently, this should help differentiate the two uses. Also, delete Q_DECLARE_PRIVATE, since we do not need it any more:

class AbstractPrivate : public QSharedData
{
public:
AbstractPrivate() : var(0) {}
virtual ~AbstractPrivate() = default;

int var;
};

class Abstract
{
public:
Abstract(const Abstract &other) = default;
~Abstract() = default;
protected:
explicit Abstract(AbstractPrivate &dd) : d_ptr(&dd) {}
public:
virtual void foo() const = 0;
virtual void modifyVar() = 0;
protected:
QSharedDataPointer<AbstractPrivate> d_ptr;
};

class DerivedPrivate : public AbstractPrivate
{
public:
DerivedPrivate() : AbstractPrivate(), bar(0) {}
virtual ~DerivedPrivate() = default;

int bar;
};

class Derived : public Abstract
{
public:
Derived() : Abstract(*(new DerivedPrivate)) {}
Derived(const Derived &other) = default;
~Derived() = default;
protected:
explicit Derived(AbstractPrivate &dd) : Abstract(dd) {}
public:
void foo() const override { CONST_SHARED_D(Derived); cout << "foo " << d->var << " " << d->bar << endl; }
void modifyVar() override { SHARED_D(Derived); d->var++; d->bar++; }
};

With the same main() code, what’s the result?

foo 0 0
foo 1 16606417
foo 0 0

… big whoops, what is that random value there? Well, if we use dynamic_cast in place of reinterpret_cast, the program simply crashes after ins->modifyVar();, indicating that ins’s d_ptr.data() is not a DerivedPrivate at all.

virtual clones

The detach() method of QSharedDataPointer will by default create an instance of AbstractPrivate, regardless of what the instance really is. Fortunately, it is possible to change that behaviour by specializing the clone() method.

First, we need to make a virtual function in AbstractPrivate class:

virtual AbstractPrivate *clone() const = 0;

(make it pure virtual just to force all subclasses to re-implement it; if your base class is not abstract you probably want to implement the clone() method) and then override it in DerivedPrivate:

virtual DerivedPrivate *clone() const { return new DerivedPrivate(*this); }

Then, specialize the template method QSharedDataPointer::clone(). As we will re-use it multiple times (for different base classes), it is better to define a macro:

#define DATA_CLONE_VIRTUAL(Class) template<>                      \
Class##Private *QSharedDataPointer<Class##Private>::clone() \
{ \
return d->clone(); \
}
// after the definition of Abstract
DATA_CLONE_VIRTUAL(Abstract)

It is not necessary to write DATA_CLONE_VIRTUAL(Derived), as we never store a QSharedDataPointer<DerivedPrivate> anywhere in the hierarchy.

Then test the code again:

foo 0 0
foo 1 1
foo 0 0

– Just as expected! It continues to work if we replace Derived with Abstract in QScopedPointer:

QScopedPointer<Abstract> ins(new Derived());
QScopedPointer<Abstract> ins2(new Derived(*dynamic_cast<const Derived *>(ins.data())));

Another problem arises, though: the constructor call for ins2 is ugly and messy. We could, as with the private classes, implement a virtual clone() for this kind of thing, but that is still not elegant, and we cannot use a default copy constructor for any class that contains such QScopedPointers.

What about QSharedPointer, which is copy-constructible? Well, then the copies actually point to the same data structures and no copy-on-write is performed at all. That is still not what we want.

the Descendents of …

Inspired by Sean Parent’s video, I finally came up with the following implementation:

template<typename T>
class Descendent
{
struct concept
{
virtual ~concept() = default;
virtual const T *ptr() const = 0;
virtual T *ptr() = 0;
virtual unique_ptr<concept> clone() const = 0;
};
template<typename U>
struct model : public concept
{
model(U x) : instance(move(x)) {}
const T *ptr() const { return &instance; }
T *ptr() { return &instance; }
// or unique_ptr<model<U> >(new model<U>(U(instance))) if you do not have C++14
unique_ptr<concept> clone() const { return make_unique<model<U> >(U(instance)); }
U instance;
};

unique_ptr<concept> m_d;
public:
template<typename U>
Descendent(U x) : m_d(make_unique<model<U> >(move(x))) {}

Descendent(const Descendent & that) : m_d(move(that.m_d->clone())) {}
Descendent(Descendent && that) : m_d(move(that.m_d)) {}

Descendent & operator=(const Descendent &that) { Descendent t(that); *this = move(t); return *this; }
Descendent & operator=(Descendent && that) { m_d = move(that.m_d); return *this; }

const T *data() const { return m_d->ptr(); }
const T *constData() const { return m_d->ptr(); }
T *data() { return m_d->ptr(); }
const T *operator->() const { return m_d->ptr(); }
T *operator->() { return m_d->ptr(); }
};

This class lets you use Descendent<T> (read as “descendent of T”) to hold an instance of any subclass of T. It is copy-constructible, move-constructible, copy-assignable, and move-assignable.

Test code:

int main()
{
Descendent<Abstract> ins = Derived();
Descendent<Abstract> ins2 = ins;
ins->foo();
ins->modifyVar();
ins->foo();
ins2->foo();
}

It gives just the same results as before, but much neater and nicer – How does it work?

First we define a class concept. We put here what we want our instance to satisfy. We would like to access it as const and non-const, and to clone it as-is. Then we define a template class model<U> where U is a subclass of T, and implement these functionalities.

Next, we store a unique_ptr<concept>. The reason for not using QScopedPointer is that QScopedPointer is not movable, and movability is a feature we will actually want (for sink arguments and return values).

Finally it’s just the constructor, moving and copying operations, and ways to access the wrapped object.

When Descendent<Abstract> ins2 = ins; is called, we will go through the copy constructor of Descendent:

Descendent(const Descendent & that) : m_d(move(that.m_d->clone())) {}

which will then call ins.m_d->clone(). But remember that ins.m_d actually contains a pointer to model<Derived>, whose clone() is return make_unique<model<Derived> >(Derived(instance));. This expression will call the copy constructor of Derived, then make a unique_ptr<model<Derived> >, which calls the constructor of model<Derived>:

model(Derived x) : instance(move(x)) {}

which move-constructs instance. Finally the unique_ptr<model<Derived> > is implicitly converted to unique_ptr<concept>, as per the conversion rule. “If T is a derived class of some base B, then std::unique_ptr<T> is implicitly convertible to std::unique_ptr<B>.”

And from now on, happy hacking — (.>w<.)

Hot on the heels of last week, this week’s Usability & Productivity report continues to overflow with awesomeness. Quite a lot of the work you see featured here is already available to test in the Plasma 5.16 beta, too! But why stop there? Here’s more:

New Features

Bugfixes & Performance Improvements

User Interface Improvements

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a donation to the KDE e.V. foundation.

Hi! I’m Akhil K Gangadharan and I’ve been selected for GSoC this year with Kdenlive. My project is titled ‘Revamping the Titler Tool’ and my work for this summer aims to kickoff the complete revamp of one of the major tools used in video-editing in Kdenlive, called the Titler tool.

Titler Tool?

The Titler tool is used to create, you guessed it, title clips. Title clips are clips that contain text and images that can be composited over videos.

The Titler Tool


Why revamp it?

In Kdenlive, the Titler tool is implemented using QGraphicsView, which has been considered deprecated since the release of Qt 5. This makes the tool prone to upstream bugs that affect its functionality. It has caused issues in the past: popular features like the Typewriter effect had to be dropped because of uncontrollable crashes in QGraphicsView.

How?

Using QML.

Currently the Titler tool uses QPainter, so every property and every animation has to be programmed by hand. QML makes it easy to create powerful animations, since it is a language designed for building UIs; QML scenes can then be rendered to create title clips as needed.

Implementation details - a brief overview

For the summer, I intend to complete work on the backend implementation. The first step is to write and test a complete MLT producer module that can render QML frames, and then to begin test integration of this module with a new Titler tool.

This is how the backend currently looks:

current backend

After the revamp, the backend will look like this:

new backend

Once the backend is done, we will begin integrating it with Kdenlive and evolving the Titler to use the new backend.

A great long challenge lies ahead, and I’m looking forward to this summer and beyond with the community to complete writing the tool - right from the backend to the new UI.

Finally, a big thanks to the Kdenlive community for getting me here and to my college student community, FOSS@Amrita for all the support and love!

May 18, 2019

Are you using Kubuntu 19.04, our current Stable release? Or are you already running our daily development builds?

We currently have Plasma 5.15.90 (Plasma 5.16 Beta)  available in our Beta PPA for Kubuntu 19.04, and in our 19.10 development release daily live ISO images.

For 19.04 Disco Dingo, add the PPA and then upgrade

sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt update && sudo apt full-upgrade -y

Then reboot. If you cannot reboot from the application launcher,

systemctl reboot

from the terminal.

For already installed 19.10 Eoan Ermine development release systems, simply upgrade your system.

Update directly from Discover, or use the command line:

sudo apt update && sudo apt full-upgrade -y

And reboot. If you cannot reboot from the application launcher,

systemctl reboot

from the terminal.

Otherwise, to test or install the live image grab an ISO build from the daily live ISO images link.

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

  • If you believe you might have found a packaging bug, you can post testing feedback to the Kubuntu team; a launchpad.net account is required.
  • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

Please review the changelog.

[Test Case]

* General tests:
– Does plasma desktop start as normal with no apparent regressions over 5.15.5?
– General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend etc.

* Specific tests:
– Check the changelog:
– Identify items with front/user facing changes capable of specific testing. e.g. “clock combobox instead of tri-state checkbox for 12/24 hour display.”
– Test the ‘fixed’ functionality.

Testing involves some technical setup, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

We need your help to get this important beta release in shape for Kubuntu 19.10 as well as added to our backports.

Thanks! Please stop by the Kubuntu-devel IRC channel or Telegram group if you need clarification of any of the steps to follow.

May 17, 2019

Thanks to Nick Richards we've been able to convince Flathub to temporarily accept our old appdata files as still valid. It's a stopgap workaround, but at least it gives us some breathing room. So the updates are coming in as we speak.

I was accepted to Google Summer of Code. I will work with Krita implementing an Animated Vector Brush Read more...

I tried Latte Dock for a week or so, in order to see if this great piece of software can improve my desktop experience. Here is what I think about it.

A week with Latte Dock

Latte Dock is a dock that provides multiple visual effects to improve the experience with icons and applications. I’ve already written about it, mostly in a negative way – not because of the software or its quality, but because I don’t see much excitement in another OS X-style dock bar.
However, in order to give a more accurate review of the experience with Latte, I decided to force myself to use it as the only dock on my main computer.

I don’t know much about the history and goals of the project; I suspect it aims to be a more elegant dock than the default Plasma one, and it is of course inspired by the OS X dock, which provides parabolic zooms and removes the application tray. It is worth noting that there are multiple layouts out there, each one adding one or more features to customize the appearance of the dock, for example adding a top bar or changing the bar layout and size. I used the out-of-the-box layout instead, in order to get a more unbiased impression.

My Plasma Dock

My plasma dock was pretty simple and composed of (from left to right):

  • the plasma menu;
  • the switch desktop applet;
  • the application tray;
  • a deck of my main applications (four);
  • the plasma dashboard icon;
  • the notification area;
  • the (digital) clock;
  • the logout applet.

I tend to keep my panel always laid out the same on all my computers, so that my eyes...

May 16, 2019



Plasma 5.16

KDE Plasma 5.16

Thursday, 16 May 2019.

Today KDE launches the beta release of Plasma 5.16.

In this release, many aspects of Plasma have been polished and rewritten to provide high consistency and bring new features. There is a completely rewritten notification system supporting Do Not Disturb mode, more intelligent history with grouping, critical notifications in fullscreen apps, improved notifications for file transfer jobs, a much more usable System Settings page to configure everything, and many other things. The System and Widget Settings have been refined by porting code to newer Kirigami and Qt technologies and polishing the user interface. And of course the VDG and Plasma team’s effort towards the Usability & Productivity goal continues, getting feedback on all the papercuts in our software that make your life less smooth and fixing them, to ensure an intuitive and consistent workflow for your daily use.

For the first time, the default wallpaper of Plasma 5.16 will be decided by a contest where everyone can participate and submit art. The winner will receive a Slimbook One v2 computer, an eco-friendly, compact machine, measuring only 12.4 x 12.8 x 3.7 cm. It comes with an i5 processor, 8 GB of RAM, and is capable of outputting video in glorious 4K. Naturally, your One will come decked out with the upcoming KDE Plasma 5.16 desktop, your spectacular wallpaper, and a bunch of other great software made by KDE. You can find more information and submitted work on the competition wiki page, and you can submit your own wallpaper in the subforum.

Desktop Management



New Notifications




Theme Engine Fixes for Clock Hands!




Panel Editing Offers Alternatives




Login Screen Theme Improved


  • Completely rewritten notification system supporting Do Not Disturb mode, more intelligent history with grouping, critical notifications in fullscreen apps, improved notifications for file transfer jobs, a much more usable System Settings page to configure everything, and more!
  • Plasma themes are now correctly applied to panels when selecting a new theme.
  • More options for Plasma themes: offset of analog clock hands and toggling blur behind.
  • All widget configuration settings have been modernized and now feature an improved UI. The Color Picker widget has also been improved, now allowing dragging colors from the plasmoid to text editors, the palettes of photo editors, etc.
  • The look and feel of lock, login and logout screen have been improved with new icons, labels, hover behavior, login button layout and more.
  • When an app is recording audio, a microphone icon will now appear in the System Tray which allows for changing and muting the volume using mouse middle click and wheel. The Show Desktop icon is now also present in the panel by default.
  • The Wallpaper Slideshow settings window now displays the images in the selected folders, and allows selecting and deselecting them.
  • The Task Manager features better organized context menus and can now be configured to move a window from a different virtual desktop to the current one on middle click.
  • The default Breeze window and menu shadow color are back to being pure black, which improves visibility of many things especially when using a dark color scheme.
  • The "Show Alternatives..." button is now visible in panel edit mode, use it to quickly change widgets to similar alternatives.
  • Plasma Vaults can now be locked and unlocked directly from Dolphin.


Settings



Color Scheme




Application Style and Appearance Settings


  • There has been a general polish in all pages; the entire Appearance section has been refined, the Look and Feel page has moved to the top level, and improved icons have been added in many pages.
  • The Color Scheme and Window Decorations pages have been redesigned with a more consistent grid view. The Color Scheme page now supports filtering by light and dark themes, drag and drop to install themes, undo deletion and double click to apply.
  • The theme preview of the Login Screen page has been overhauled.
  • The Desktop Session page now features a "Reboot to UEFI Setup" option.
  • There is now full support for configuring touchpads using the Libinput driver on X11.


Window Management



Window Management


  • Initial support for using Wayland with proprietary Nvidia drivers has been added. When using Qt 5.13 with this driver, graphics are also no longer distorted after waking the computer from sleep.
  • Wayland now features drag and drop between XWayland and Wayland native windows.
  • Also on Wayland, the System Settings Libinput touchpad page now allows you to configure the click method, switching between "areas" or "clickfinger".
  • KWin's blur effect now looks more natural and correct to the human eye by not unnecessarily darkening the area between blurred colors.
  • Two new default shortcuts have been added: Meta+L can now be used by default to lock the screen and Meta+D can be used to show and hide the desktop.
  • GTK windows now apply the correct active and inactive colour schemes.


Plasma Network Manager



Plasma Network Manager with Wireguard


  • The Networks widget is now faster to refresh Wi-Fi networks and more reliable at doing so. It also has a button to display a search field to help you find a particular network from among the available choices. Right-clicking on any network will expose a "Configure…" action.
  • WireGuard is now compatible with NetworkManager 1.16.
  • One Time Password (OTP) support in Openconnect VPN plugin has been added.


Discover



Updates in Discover


  • In Discover's Update page, apps and packages now have distinct "downloading" and "installing" sections. When an item has finished installing, it disappears from the view.
  • The task completion indicator now looks better, using a real progress bar. Discover now also displays a busy indicator when checking for updates.
  • Improved support and reliability for AppImages and other apps that come from store.kde.org.
  • Discover now allows you to force quit when installation or update operations are proceeding.
  • The sources menu now shows the version number for each different source for that app.

Read the full announcement

It’s been a great pleasure to be chosen to work with KDE during GSoC this year. I’ll be working on KIOFuse, and hopefully by the end of the coding period it will be well integrated with KIO itself. Development will mainly be coordinated on the #kde-fm channel (IRC nick: feverfew), with fortnightly updates on my blog, so feel free to pop by! Here’s a small snippet of my proposal to give everyone an idea of what I’ll be working on:

KIOSlaves are a powerful feature within the KIO framework, allowing KIO-aware applications
such as Dolphin to interact with services out of the local filesystem over URLs such as fish://
and gdrive:/. However, KIO-unaware applications are unable to interact seamlessly with KIO
Slaves. For example, editing a file in gdrive:/ in LibreOffice will not save changes to your Google Drive. One potential solution is to make use of FUSE, which is an interface provided
by the Linux kernel, which allows userspace processes to provide a filesystem which can be
mounted and accessed by regular applications. KIOFuse is a project by fvogt that makes it possible to mount KIO filesystems in the local system, thereby exposing them to POSIX-compliant applications such as Firefox and LibreOffice.

This project intends to polish KIOFuse such that it is ready to be a KDE project. In particular,
I’ll be focusing on the following four broad goals:
• ​Improving compatibility with KDE and non-KDE applications by extending and improving
supported filesystem operations.
• ​Improving KIO Slave support.
• ​Performance and usability improvements.
• ​Adding a KDE Daemon module to allow the management of KIOFuse mounts and the
translation of KIO URLs to their local path equivalents.

We’re still on track to release Krita 4.2.0 this month! Compared to the alpha release, we have fixed over thirty issues. This release also has a fresh splash screen by Tyson Tan and restores Python support to the Linux AppImage. The Linux AppImage does not have support for sound, and the macOS build does not have support for G’Mic.

Warning: Linux users should be careful with distribution packages. We have a host of patches for Qt queued up, some of which are important for distributions to carry until the patches are merged and released in a new version of Qt.

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

OSX

Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux appimage and the source tarball are signed. You can retrieve the public key over https here:
0x58b9596c722ea3bd.asc
. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

May 15, 2019

The flatpak and flathub developers have changed what they consider a valid appdata file, so our appdata files that were valid last month are not valid anymore and thus we can't build the KDE Applications 19.04.1 packages.

Help testing the Plasma Theme switching now

You like Plasma Themes? You design Plasma Themes even yourself? You want to see switching Plasma Themes working correctly, especially for Plasma panels?

Please get one of the Live images with the latest code from the Plasma developers’ hands (or, if you build manually from the master branches yourself, last night’s code should be fine) and give the switching of Plasma Themes a good test, so we can be sure things will work as expected on the arrival of Plasma 5.16:

If you find glitches, please report them here in the comments, or better on the #plasma IRC channel.

Plasma Themes, to make it more your Plasma

One of the things that makes Plasma so attractive is the officially supported option to customize the style beyond colors and wallpaper, allowing users to personalize the look to their liking. Designers have picked up on that and created a good set of custom designs (store.kde.org lists 470 themes at the time of writing).

And while some regressions in theme support had sneaked into the last few Plasma versions – because most people & developers happily use the default Breeze theme – some theming fixes arrive with Plasma 5.16.

Plasma Theme looking good on first contact with Plasma 5.16

The most annoying pain point has been that, on selecting and applying a new Plasma Theme, the theme data was not correctly picked up, especially by Plasma panels. Only after a restart of Plasma could the theme be fully experienced. This made quick testing of themes, e.g. from store.kde.org, a sad story, given that most themes looked a bit broken – and without knowing more, one would think it was the theme’s fault.

But some evenings & nights have been spent hunting down the reasons, and it seems they have all been found and corrected. So when one clicks the “Apply” button, the Plasma Theme instantly styles your Plasma desktop as it should, especially the panels.

And dressing up your desktop to match your day or week or mood or your shirt with one of those (partially excellent) themes is only a matter of a few mouse clicks. No more restart needed. :)

Coming to your system with Plasma 5.16 and KDE Frameworks 5.59 (both to be released in June).

Make your Plasma Theme take advantage of Plasma 5.16

Theme designers, please study the recently added page on techbase.kde.org about porting themes to the latest Plasma 5. It lists the changes known so far and what to do about them. Please extend the list if something is still missing.
And tell your theme designer friends about this page, so they can improve their themes as well.

This summer will be a little bit interesting, as I have joined the Google Summer of Code (GSoC). The software I will be working on is Krita, a painting application I have been using for more than one year. Since the (pre)release of Krita 4.0, I have used it to paint all my works.

Before using Krita, I used PaintToolSAI, and there are quite a lot of concepts and functionalities in it that I find really useful; after getting involved in the Krita community, I am pretty lucky to be able to introduce these little shiny stars to our community, and even implement some of them.

My project for GSoC is on the undo/redo system in Krita. The system currently works by using an undo stack to store individual changes to the document, and invoking these commands to perform undos and redos. This system is complex and not easy to maintain. As Dmitry suggests, a better solution would be storing the states of the document as shallow copies, since that simplifies the system and makes history brushes possible. It would be a rather huge and fundamental change in the code, so he recommended that I experiment with vector layers first.

Another part of the project, which is not research, is the snapshot docker, which would allow users to temporarily save some states of the document and return to them quickly at a later time. This is an enhancement on the GUI level: as the tile data in paint layers is shallow copied, it is possible to make a clone of the document relatively fast.

I will make more posts on KDE and Krita in the near future. Let’s keep in touch! (.w.)

While discussing data extraction methods for KItinerary earlier I briefly mentioned barcodes as one source of information. It’s a subject that deserves a few more details though, as it’s generally good to know what information you are sharing when your ticket barcode gets scanned.

Why Barcodes?

Barcodes on booking confirmations or tickets serve multiple purposes:

  • Carrying some form of token used for validation. This can be a simple number or actual cryptographic signatures. That token typically does not contain any direct information about you or your booking, but it can act as a key for online lookup of such information, in which case it is even relevant to protect just that token from a privacy point of view.
  • Information about you or your booking. Often this is a machine-readable version of what’s also printed in human readable form on a ticket, such as your name, booking number or details about what you booked. From a privacy point of view even more problematic are cases where the barcode contains additional information not visible on the human readable part.

For data extraction we of course benefit from a machine readable format that doesn’t require fragile text parsing in PDF or HTML files. Additionally, barcodes tend to use systematic identifiers instead of ambiguous and/or localized human readable names, for example for airports or stations. The most well-known such identifier is probably the 3 letter IATA airport code. Such identifiers allow us to easily retrieve additional information about that location from sources like Wikidata.

KDE Itinerary’s nightly Flatpak builds therefore recently got the ZXing-C++ dependency added to make full use of that, and we are working on getting it into the nightly Android builds too. If you are deploying or packaging KDE Itinerary or the KMail integration plug-in by other means, you probably want to make sure ZXing-C++ is available too.

While we are mainly interested in itinerary-related information, we also come in touch with whatever else is in the barcodes. Besides general privacy insights, this also has a very practical impact on how we sanitize our test data. While it’s fairly straightforward to replace your credit card number in a simple ASCII-based code, doing so in a partially understood binary code with cryptographic security features is next to impossible.

Barcode Types

There’s a number of different aspects of the barcodes that are relevant for understanding what is (or can be) encoded in them:

  • The size of the encoded data. That’s a very good indicator of whether there is only a ticket token or also additional booking information. One-dimensional codes can only store short alpha-numeric payloads, which is usually a strong indicator of a token-only code. Two-dimensional codes like QR or Aztec on the other hand can store up to a few hundred bytes.
  • ASCII or binary payloads. Many of the barcode codecs are optimized for alpha-numeric content rather than arbitrary binary data, so this doesn’t necessarily say anything about the amount of data in there. Textual content is however much easier to analyze, since any barcode scanning app can show you the content. Many of those scanners however choke on e.g. null bytes, so even capturing the full binary payload isn’t straightforward.
  • Standardized or proprietary content. In some areas barcode content is standardized to achieve inter-operator compatibility, airline boarding passes being the extreme with a single international standard. Unfortunately, there are few other standards, let alone some with even remotely such wide coverage. So in many cases we encounter vendor-specific codes with little or no public documentation. Those however are often a bit simpler in their structure, while standards tend to be modular and offer support for extensions. Standardization also doesn’t necessarily imply the specification is publicly available, but it makes it at least more likely that it’s findable somewhere on the Internet ;-)
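The aspects above can be turned into a rough first-pass heuristic when staring at an unknown code. A minimal sketch (the 20 byte threshold is an arbitrary assumption for illustration, not a value used by KItinerary):

```python
def classify_payload(data: bytes) -> str:
    """Rough heuristic mirroring the aspects above: short payloads are
    likely just ticket tokens, larger ones may carry booking data."""
    printable = all(0x20 <= b < 0x7F for b in data)
    kind = "text" if printable else "binary"
    if len(data) <= 20:  # assumed threshold, for illustration only
        return f"short {kind} payload - likely a token-only code"
    return f"{len(data)} byte {kind} payload - may contain booking data"
```

In practice the binary/text distinction also tells you which tools you can use: a stock barcode scanner app for textual payloads, a hex dump for binary ones.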

Flights

As mentioned above there is only one relevant barcode type for flights, “IATA Bar Coded Boarding Passes (BCBP)”. It’s a fairly old standard, containing a modular ASCII payload for one to four legs. The set of mandatory fields is very small:

  • Passenger name (as 6-bit ASCII, truncated to 20 characters).
  • Booking reference.
  • Start and destination IATA airport codes.
  • Flight number.
  • Day of flight. This is the number of days since January 1st of the year of the trip. The year however is not encoded at all.
  • Seat number and class.
  • Passenger sequence number (part of the unique identification of a passenger).

Privacy-wise, this is already enough to be problematic, as was shown at 33C3 in 2016. For KItinerary’s data extraction this is almost all the useful information in here; particularly annoying is the lack of a full date, which requires us to guess the year from context.
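As a concrete illustration, here is a minimal sketch of pulling the mandatory fields out of a single-leg BCBP string, using the fixed-width layout of the widely published IATA example. This is a sketch, not KItinerary’s actual parser: a real implementation also has to handle multiple legs and the variable-size optional section.

```python
def parse_bcbp_mandatory(code: str) -> dict:
    """Extract the mandatory fields of a single-leg IATA BCBP barcode.
    The mandatory section is a fixed-width, 60 character ASCII layout."""
    return {
        "legs": int(code[1]),
        "name": code[2:22].rstrip(),            # truncated to 20 characters
        "booking_reference": code[23:30].rstrip(),
        "from": code[30:33],                     # IATA airport codes
        "to": code[33:36],
        "carrier": code[36:39].rstrip(),
        "flight_number": code[39:44].rstrip(),
        "day_of_year": int(code[44:47]),         # day of flight; no year encoded!
        "cabin_class": code[47],
        "seat": code[48:52].lstrip("0"),
        "sequence_number": int(code[52:57]),     # passenger sequence number
    }

# The example string from the public BCBP documentation
sample = "M1DESMARAIS/LUC       EABC123 YULFRAAC 0834 326J001A0025 100"
fields = parse_bcbp_mandatory(sample)
```

Note how `day_of_year` (326 here) is all you get for the date: turning it into a calendar date requires guessing the year from context, such as the date of the document the barcode was found in.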

However, there’s plenty of optional fields that are populated based on the airline and the travel destination. A few noteworthy examples are:

  • Frequent flyer number (which sometimes doubles as a credit card).
  • Baggage tag numbers.
  • A “document id”, which has been seen containing the passport number for flights to the UK for example.
  • A variable length vendor-specific field. This is often seen to be used by Lufthansa-associated airlines, with unknown content.
  • Fields specific to US security requirements (and only used for flights in or to the US).
  • A cryptographic signature of the content, to be specified by “local authorities”. This so far has also only been observed for US destinations.

It would be interesting to explore whether a “privacy mode” for boarding passes in KDE Itinerary would work in practice, that is, presenting only the mandatory fields of the boarding pass and seeing how far you get with that at the airport. It’s unlikely to work for security-related fields or with signatures as used in the US, but fields primarily of commercial interest are probably avoidable in other parts of the world.

Trains

For train tickets the situation is a lot more diverse. The closest thing to an international standard is UIC 918.3, which is the big 50x50mm Aztec code found on European international tickets, as well as on domestic tickets in at least Austria, Denmark, Germany and Switzerland. UIC 918.3 however only defines a container format with a minimal header, cryptographic signatures and a zlib compressed payload.
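The container structure makes the payload easy to get at even without verifying the signature: the remainder after the header and signature block is a plain zlib stream. A hedged sketch, which takes the shortcut of scanning for the zlib marker bytes instead of properly parsing the header and signature block (not what the standard prescribes, but handy for exploration):

```python
import zlib

def extract_uic9183_payload(container: bytes) -> bytes:
    """Decompress the zlib-compressed payload of a UIC 918.3 container.
    Instead of parsing the header and signature block, this simply scans
    for the first plausible zlib stream start (0x78 marker byte)."""
    for i in range(len(container) - 1):
        if container[i] == 0x78 and container[i + 1] in (0x01, 0x9C, 0xDA):
            try:
                return zlib.decompress(container[i:])
            except zlib.error:
                continue  # false positive, keep scanning
    raise ValueError("no zlib payload found")

# Synthetic example: fake header/signature bytes followed by a compressed payload
fake = b"#UT01...signature..." + zlib.compress(b"U_HEAD...")
```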

To get an idea of the variety of payloads we find on train tickets, here’s an overview of what KItinerary supports so far, ordered roughly by complexity and usefulness of the content:

  • Koleje Małopolskie (a local Polish provider): a simple JSON structure containing almost all relevant trip information, even exact times as UNIX timestamps. Very useful and very easy to extract. Contains the passenger name, but at least nothing beyond what’s on the paper ticket. Uses human readable station names rather than station identifiers though.
  • SNCF (French national railway): a simple fixed-length ASCII format encoding one or two legs of a trip. Easy to extract and useful too, privacy-wise this contains the passenger birth date beyond what’s on the paper ticket. There’s still 4 bytes in there with unknown meaning.
  • Trenitalia (Italian national railway): A 67 byte binary blob encoding one leg of a trip. It seems very optimized for size, with numeric values having no alignment at all, so it needs to be looked at as a bit array rather than a byte array. Being entirely undocumented, we had to decode this ourselves. This is ongoing work, the current state can be found in the wiki; about half of the content can be attributed a meaning or is always 0. The data we got out of this so far is quite useful, but it’s still incomplete (date/time values for example are suspected to be in there, but haven’t been decoded successfully yet). With parts of the content still being unknown it’s too early to assess this for privacy concerns.
  • RCT2 (the standard UIC 918.3 payload for European international tickets, and also used by DSB, ÖBB and SBB): There’s at least decent documentation about this. Unfortunately it’s of very limited use for data extraction. RCT2 is essentially an ASCII art representation of the upper part of the corresponding paper ticket, designed for display to a human reader rather than for machine reading. The limited space in there conflicts with the realities of multi-lingual tickets, leading to rather flexible interpretations of the standard. Relevant information for us, like the exact train a ticket is valid for, is in many cases not part of the specified fields but encoded in an operator-specific format in a free text description field. Therefore KItinerary only uses this as a fallback if no other information is available. Being designed as an exact representation of the paper ticket, it has not been seen containing any additional information.
  • Deutsche Bahn (vendor-specific payload for UIC 918.3): That’s another modular hybrid binary/textual structure, wrapped inside the UIC 918.3 container, relatively complicated to decode and unfortunately containing very little useful information for KItinerary. Many fields are related to tariff details, but there’s also the passenger name and, in older versions, also full or partial numbers of the (credit) card used for payment and/or identification. This has meanwhile been fixed though. Tickets with an option for local public transport at the destination contain additional operator-specific payloads; it’s unknown whether those contain useful/sensitive information.
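For codes like the Trenitalia one, where numeric fields are packed without any byte alignment, the first tool you need is a reader that extracts an integer from an arbitrary bit offset. A minimal sketch of such a bit-array reader (the offsets used when calling it would come from reverse engineering; none are shown here as actual Trenitalia field positions):

```python
def read_bits(data: bytes, bit_offset: int, bit_count: int) -> int:
    """Read an unsigned big-endian integer starting at an arbitrary bit
    position, treating the payload as one continuous bit array."""
    value = 0
    for i in range(bit_offset, bit_offset + bit_count):
        byte, bit = divmod(i, 8)
        value = (value << 1) | ((data[byte] >> (7 - bit)) & 1)
    return value

# Example: an 8 bit field starting in the middle of the first byte
field = read_bits(b"\xff\x00", 4, 8)  # bits 4..11 -> 0b11110000
```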

There’s also a few operators we know use barcodes with trip-related content, but that we don’t support yet due to not having enough information or sample data to properly decode their barcodes:

  • VIA Rail (Canadian railway): ASCII payload, structurally probably comparable to SNCF, so this might be fairly easy to support given a sufficient amount of samples.
  • VR (Finnish national railway): A 108 byte binary code with entirely unknown content so far. It looks more complex than the Trenitalia one, with the larger size and more parts of the code changing even between adjacent tickets, but not entirely random, which suggests there is no encryption, compression or other sophisticated encoding.

Other transport operators like SNCB (Belgian national railway) or Flixbus are also using barcodes, but those seem merely to contain ticket tokens. The same is true for all event ticket samples we have so far.

Contribute

One of the easiest ways to help with decoding such barcodes is looking for prior work or documents on the subject in your local language. For Deutsche Bahn I found numerous useful sources online, all in German though. For SNCF some material exists as well, but it required French language skills to find it.

While obviously conflicting with striving for privacy, another very helpful way to contribute is donating test samples, especially for barcodes that are not yet fully understood. Decoding an entirely undocumented binary code requires enough samples so you can look at meaningful differences between partially differing tickets, and enough samples to verify your theories on the semantics of certain bits with sufficient certainty. We are not talking about machine-learning scale amounts here though; for the current understanding of the Trenitalia codes it took about 30 barcodes from about a dozen different bookings.
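Comparing samples mostly means spotting which bytes vary between two tickets that differ in exactly one known property (date, seat, passenger, ...). A trivial helper for that, as a sketch:

```python
def diff_positions(a: bytes, b: bytes) -> list:
    """Return the byte offsets where two equally sized barcode samples
    differ. Positions that change together with exactly one known booking
    property are candidates for the field encoding that property."""
    assert len(a) == len(b), "compare equally sized samples"
    return [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
```

For bit-packed codes the same idea applies at the bit level, which is why collecting many near-identical samples matters so much.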

And of course if you like solving binary puzzles, there are some nice challenges here too ;-)

May 14, 2019

Hello, my name is Sharaf. My nick on IRC is sh_zam.

My project is to port Krita to Android devices. We've been successful in making the APK, but it only works if I build it, as it requires tweaking the Qt libraries a bit. At the moment, my goal is to make the build system fully automatic and spit out signed APKs for different architectures at the end.

Once I do that, I'll move on to UI, events and other fun stuff!

So, there's a lot to do and learn. Now I'll go back to coding and will write more technical stuff in my blogs in future, as I'm not that good at other stuff (-:

So, thank you KDE for choosing me and I hope I'll learn a lot from this community!

Plasma-nm

WireGuard support

We already had WireGuard support in Plasma 5.15, but it existed as a VPN plugin based on a NM WireGuard plugin, which wasn’t really working very well and didn’t utilize many of the already existing NM properties. With the release of NetworkManager 1.16, we have new native WireGuard support which is much more usable. It now exists as a new connection type, so it’s implemented a bit differently compared to other VPNs. This means that we first had to implement support for this connection type and its properties in NetworkManagerQt and then implement a UI on top of that. The UI part of the new WireGuard support, same as the old VPN plugin, was implemented by Bruce Anderson. We are also probably (at this moment) the only ones who provide a UI for WireGuard configuration, so thank you Bruce for such a big contribution.

OTP support in Openconnect VPN plugin

Another big contribution, this time made by Enrique Melendez, is support for one time passwords in the Openconnect VPN plugin. This support was missing for some time, so starting with Plasma 5.16 you should be able to use TOTP/HOTP/RSA/Yubikey tokens for your Openconnect connections.

PAN GlobalProtect VPN

OpenConnect 8.00 introduced support for PAN GlobalProtect VPN protocol. You can now see this new VPN type entry thanks to Alejandro Valdes.

Xdg-desktop-portal-kde

Remote desktop portal

The remote desktop portal brings the possibility to remotely control your Wayland Plasma sessions. It utilizes the screen sharing portal to get the screen content and adds an API for mouse/keyboard/touch control. Unfortunately, at this moment only mouse support is implemented, mainly because I use the KWayland::FakeInput protocol and mouse support is the only one currently implemented there. At this moment there is no Qt/KDE based application using the remote desktop portal (or at least no released one), but I have added support to Krfb, which is currently under review, and I hope to get it merged for KDE Applications 19.08. Alternatively you can use gnome-remote-desktop.

Here is a short demo of remote desktop in action over the VNC protocol. On the server side I’m running Krfb on a Plasma Wayland session and I control it from my second laptop using Krdc.

This is a follow-up post to Qt on CMake Workshop Summary – Feb ’19

Intro

From May 2nd to May 3rd another Qt on CMake workshop was hosted at the KDAB premises in Berlin, where interested stakeholders from both The Qt Company and KDAB gathered to drive the CMake build system in Qt further. Many of KDAB’s customers are using CMake in their Qt projects, so we are keen to see the CMake support for Qt improve and happy to help make it happen. The workshop was open to anyone interested, but we had no external visitors this time. We’d be happy to have more CMake enthusiasts or interested people at these workshops, so be sure to sign up for the next CMake workshop (watch the qt-development mailing list for this)!

This workshop in May was mostly intended to reassess what has happened in the wip/cmake branch of qtbase since the last workshop and to discuss any further work. We spent almost half of the first day just deciding how to approach certain things, such as how the CMake build system port will affect the upcoming Qt6 work, which is currently gaining momentum as well. We had between 8 and 10 people present across the two-day workshop, from KDAB and (mostly) The Qt Company.

Workshop summary

Excerpt of the top-level CMakeLists.txt in qtbase.git

First of all: Thanks to Alexandru Croitor for driving the porting efforts and for organizing sprints and meetings where interested people can keep track of the progress!

The workshop summary notes are also courtesy of Alexandru, let me try to quickly recap the most interesting bits:

CMake config files in Qt6 and beyond

One of the key considerations for the CMake config files installed as part of the upcoming Qt6 was that it should be possible to use CMake targets like Qt::Core (compared to Qt5::Core) and function/macro names like qt_do_x() (instead of qt5_do_x()), allowing most applications to just pull in a Qt version of their choice and then use “versionless” CMake identifiers. This makes upgrading Qt versions easier, without a lot of search-and-replace in CMake code. Note that you can continue to use the version-specific identifiers as before. This is an additional feature.

But on the other hand we’d also like to keep the possibility to mix Qt version X and Qt version Y in the same CMake project. Think about a project where two executables are being built, one depending on Qt6, the other one on a potential Qt7 version. This is not as far-fetched as you’d think; we see a lot of customer projects with this setup. It might as well be the case during a porting project, where old code might still continue to use an older Qt version.

Consider this example (which is not fully implemented yet, but you get the idea):

### Usecase: application wants to mix both Qt5 and Qt6, to allow gradual porting 

set(QT_CREATE_VERSIONLESS_TARGETS OFF) 
find_package(Qt5 COMPONENTS Core Gui Widgets) # Creates only Qt5::Core 
find_package(Qt6 COMPONENTS Core Gui Widgets) # Creates only Qt6::Core 
target_link_libraries(myapp1 Qt5::Core)
target_link_libraries(myapp2 Qt6::Core)

### Usecase: application doesn't mix Qt5 and Qt6, but allows to fully switch to link against either Qt5 or Qt6 

set(MY_APP_QT_MAJOR_VERSION 6) # <- potentially set at command line by application developer
# set(QT_CREATE_VERSIONLESS_TARGETS ON) <- Default, doesn't need to be set 
find_package(Qt${MY_APP_QT_MAJOR_VERSION} COMPONENTS Core Gui Widgets) # Creates Qt5::Core and Qt::Core OR Qt6::Core and Qt::Core, based on the variable
target_link_libraries(myapp Qt::Core) # Just links to whatever Qt major version was requested

More details (and development notes from the past meetings):

After a lot of back and forth we actually found a straightforward way to at least create the two namespaces in the CMake config files easily, see e.g.: https://codereview.qt-project.org/#/c/253169/

QMake will still be around in Qt6

As it stands, existing users of Qt and specifically users of QMake do not have to fear the CMake port too much, for now. The current bias is towards keeping the qmake executable (and the associated mkspec functionality) around for the Qt6 lifetime, as we’d obviously create a lot of additional porting effort for our users. During the Qt6 lifetime it would probably be wise to consider moving your pet project to a CMake build system, but only time will tell.

QMake is currently built via the CMake build system in the wip/cmake branch and is already available for use. Upside-down world, right? Additionally, we’re looking into generating the QMake module .pri files using CMake as well. All this is definitely not witchcraft, but it needs dedicated people to implement all of it.

Further notes

You can find a lot more details on the Wiki in case you are curious; I would not like to duplicate even more of the really comprehensive work log produced here: https://wiki.qt.io/CMake_Port/Development_Notes

If you would like to learn more about CMake, we are offering a one-day Introduction to CMake training at the KDAB training day as part of Qt World Summit in Berlin this year.

If you have comments or if you want to help out, please ideally post feedback on the Qt Project infrastructure. Send a mail to the qt-development mailing list or comment on the wiki page dedicated for the CMake port. Or just join us in the IRC channel #qt-cmake on Freenode!


The post Qt on CMake Workshop Summary – May ’19 appeared first on KDAB.

May 12, 2019

The Kate lsp branch now contains the infrastructure as used by Qt Creator. In addition, clangd is now started in a somewhat working state for the first project opened inside Kate.

For example, if you use the CMake Kate project generator and you compile Kate from the “lsp” branch, clangd should pick up the compile_commands.json for a CMake generated Kate project.

;=) Unfortunately not much more than starting and informing clangd about the open workspaces (for the first opened project) works ATM.

If you press ALT-1 over some identifier, you will get some debug output on the console about found links, like below:

qtc.languageclient.parse: content: "{\"id\":\"{812e04c6-2bca-42e3-a632-d616fdc2f7d4}\",\"jsonrpc\":\"2.0\",\"result\":[{\"range\":{\"end\":{\"character\":20,\"line\":67},\"start\":{\"character\":6,\"line\":67}},\"uri\":\"file:///local/cullmann/kde/src/kate/kate/katemainwindow.h\"}]}"

The current ALT-1 handling is a big hack, as it just adds the current document and triggers the GotoDefinitionRequest. A proper implementation would track the opened/closed documents of the editor.
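For reference, the wire format behind that debug output is plain JSON-RPC with an HTTP-like Content-Length framing, as defined by the Language Server Protocol. A minimal sketch of building a textDocument/definition request like the one behind the ALT-1 handling (the file URI and position are taken from the debug output above, for illustration):

```python
import json

def lsp_message(method: str, params: dict, msg_id: int = 1) -> bytes:
    """Frame a JSON-RPC request the way the Language Server Protocol
    expects it: a Content-Length header, a blank line, then the body."""
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": msg_id,
        "method": method,
        "params": params,
    }).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n%b" % (len(body), body)

# GotoDefinition request for the position under the cursor
request = lsp_message("textDocument/definition", {
    "textDocument": {"uri": "file:///local/cullmann/kde/src/kate/kate/katemainwindow.h"},
    "position": {"line": 67, "character": 6},
})
```

The server answers in the same framing, which is exactly what the qtc.languageclient debug output shows after parsing.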

But at least in principle Kate is now able to start some language server processes and talk a bit with them, all thanks to the nice code borrowed from Qt Creator.

:=) As my spare time is limited, any help in bringing the branch up-to-speed is highly welcome, just drop us a mail to kwrite-devel@kde.org or mail me in private (cullmann@kde.org). A working LSP integration will help to make Kate more attractive for programmers of many languages.




Planet KDE is made from the blogs of KDE's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet KDE with RSS, FOAF or OPML.