
August 18, 2017

The last week of GSoC 2017 is about to begin. My project is in a pretty good state, I would say: I have built a comprehensive solution for Xwayland Present support, one that is firmly integrated rather than bolted onto the main code path as an afterthought. But there are still some issues to sort out; in particular, the correct cleanup of objects is proving difficult. That’s only a problem with sub-surfaces, though. So, if I’m not able to solve these issues in the next few days, I’ll just allow full-window flips. These would still cover all fullscreen windows and, for example, also the Steam client, as it renders its full windows directly without using the compositor.

I still hope to solve the last problems with the sub-surfaces, though, since this would mean that we could use buffer flips in all direct-rendering cases on Wayland, which would be a huge improvement compared to native X.
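The fallback described above amounts to a simple eligibility check. Here is a minimal sketch of that policy (hypothetical code, not the actual Xwayland patch; every name here is invented):

```python
# Hypothetical sketch of the flip policy described above. If the
# sub-surface cleanup issues cannot be solved in time, the flag below
# stays False and only full-window presents get buffer flips.

ALLOW_SUBSURFACE_FLIPS = False  # the open question of this post

def can_flip(covers_whole_window: bool) -> bool:
    """Return True if a presented pixmap may be buffer-flipped."""
    if covers_whole_window:
        # Full-window presents (fullscreen games, or clients like Steam
        # that render their windows directly) can always flip.
        return True
    # Partial presents would require a Wayland sub-surface.
    return ALLOW_SUBSURFACE_FLIPS
```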

In any case, I’ll first send the final patch for the Present extension to the xorg-devel mailing list. This patch adds a separate per-window flip mode to the extension code. After that I’ll send the patches for Xwayland, either with or without sub-surface support.

That’s all for now, as a last update before the end, with a big decision still to be made. In one week I can report back on what I chose and what the final code looks like.


After posting the image of the 5.11 wallpaper, feedback started coming in, and one thing fairly consistently mentioned was how dark/muted it is. Of course, there were mixed opinions on whether the darkness was a good thing or whether it went a little too far, but it was a clear observation, especially compared to the previous wallpapers.

So I took a few minutes to adjust the wallpaper. There were lots of people who liked having something more subtle, so I didn’t stray too far. I adjusted the blues to be more saturated, the browns are lighter towards the bottom to reduce banding, the orange is a bit brighter, and reds on the right were tweaked. I also reduced an “atmosphere” gradient. Lastly, I removed a noise filter used to combat banding.

Overall it’s not that much lighter, but it should be less muddy and washed out. Without the two versions side by side you may not even notice the changes, but hopefully it just feels a bit better.

Here’s the adjusted wallpaper:


Both versions are available on the KDE Store.

This is a good year to be a Qt contributor.

There was Qt Day Italy in June. From what I hear, the event was a success. The talks were great and everything worked. This was the sixth Qt Day Italy, so there is tradition behind this event!

Even though it is not a Qt event, KDE Akademy is worth mentioning. Akademy is the annual world summit of KDE, one of the largest Free Software communities in the world. It is a free, non-commercial event organized by the KDE Community. This year Akademy was held in Almería, Spain, in late July, from the 22nd to the 27th. Over the years KDE has brought many excellent developers to Qt, and it is definitely the biggest open source project using Qt.

Starting now is the QtCon Brasil event in Rio de Janeiro, running from August 18th to 20th. Taking inspiration from last year’s QtCon in Berlin, QtCon Brasil is the first Qt community event in Brazil. The speaker list is impressive, and there is an optional training day before the event. For South American Qt developers, right now the place to be is Rio!

This year Qt Contributors’ Summit is being organised as a pre-event to Qt World Summit, and we are calling it Qt Contributors’ Days. As is tradition, the event will be run in the unconference style, with the twist that topics can be suggested beforehand on the schedule wiki page. Qt Contributors’ Days gathers Qt contributors in the same location to discuss where Qt is and what direction it is heading in. The discussions are technical, with the people implementing the actual features going into the details of their work.

You can register for Qt Contributors’ Days at the Qt World Summit site. The nominal fee is only there to cover the registration expense; if it is an issue, please contact me.

The late summer and autumn are shaping up to be great times for Qt contributors!

The post 2017 for Qt Contributors appeared first on Qt Blog.

It’s been a long time coming ..

Tobias and Raphael pushed the button today to land QtWebEngine in FreeBSD ports. This has been a monumental effort, because the codebase is just .. ugh. Not meant for third-party consumption, let’s say. It takes 76 patches to get it to compile at all. Lots of annoying changes to make, like explaining that pkg-config is not a Linux-only technology. Nor is NSS, or Mesa, while #include <linux/rtnetlink.h> is, in fact, Linux-only. Lots of patches can be shared with the Chromium browser, but it’s a terrible time-sink nonetheless.

This opens the way for some other ports to be updated — ports with QtWebEngine dependencies, like Luminance (an HDR-image-processing application).

The QtWebEngine parts have been in use for quite some time in the plasma5/ branch of Area51, so for users of the unofficial ports, this announcement is hardly news. Konqueror with QtWebEngine underneath is a fine and capable browser; my go-to if I have Plasma5 available at all on FreeBSD.

I have already talked about the Maratón Linuxero and about the fourth rehearsal, held last Sunday, where it was announced that some sponsors had been secured for the event. Well, for a few days now the Maratón Linuxero prizes have been known: they come courtesy of Slimbook, Vant and Pcubuntu. Great news, well worth sharing with all of you.

Not familiar with the Maratón Linuxero yet?

We may sound like a broken record, but I believe it is going to be one of the biggest things we Free Software lovers will have in terms of media impact. On September 3rd, starting at 3 p.m. Spanish peninsular time, a group of podcasters will broadcast online radio continuously, one hour at a time, with the goal of producing the longest Free Software podcast ever made in the Spanish-speaking world.

And if you need more motivation, don't miss the epic promo created by Paco Estrada and Ricardo Ocell:


The Maratón Linuxero prizes: Slimbook, Vant and Pcubuntu

Last Sunday, August 13th, the fourth Maratón Linuxero rehearsal took place, and in its first hour it was announced that three companies had agreed to sponsor the event by contributing prizes, although it was not specified what those would be.
In case you haven't had the chance to listen to that rehearsal, I leave it for you below.

A couple of days later the prizes were revealed; they come courtesy of Slimbook, Vant and Pcubuntu. I would like to take this opportunity to publicly thank all three companies for their interest.

To be more specific, the prizes are the following.

First up is Slimbook, the company that builds those marvellous ultrabooks and the exceptional Slimbook One, and whose Slimbook Pro I recently unboxed. For this Maratón Linuxero, Slimbook will give away 10 packs consisting of a T-shirt or polo shirt plus stickers and a webcam cover.

The Maratón Linuxero prizes

The second batch of prizes comes from Vant, a company known for offering an extensive range of computers, assembled entirely in Spain, that covers the needs of any user. Vant will give away 10 Linux keyboard-and-mouse packs, so that only Tux, and not the Windows key, appears on your desk.

And finally, PCubuntu, an online computer store that is committed to free software and holds an Ubuntu licence, will present the lucky winner with a desktop PC that surely no one will turn down.

What do you think? I think it is fabulous, and I am sure you do too. You can find out how it all works on the Maratón Linuxero website.

By the way, you can get in touch with the Maratón Linuxero in the following ways:

August 17, 2017

Well, it’s that time of the year again where I talk about wallpapers!

For those who watched the livestream of the beach wallpaper, you’ll notice this isn’t what I had been working on. Truth be told, after the stream I hit a few artistic blocks which brought progress to a grinding halt. I plan to finish that wallpaper, but while I decide what to do with it I created something entirely different for this release. I enjoyed this “wireframe” effect, and will probably experiment with it again.

This wallpaper is named “Opal”, specifically after wood opal, which forms when water carrying mineral deposits petrifies the wood it runs across. Wood opal is pretty special stuff, and it can often look straight out of a fairy tale.


The Plasma 5.11 wallpaper “Opal” is available on the KDE Store.

The day the developers had planned for has finally arrived: KDE Applications 17.08, the big August update of the set of programs that make up the KDE ecosystem, has been released. As usual since the jump to KF5 and Qt5, the news follows two tracks: porting applications to those versions, and bringing improvements to the applications already migrated.

KDE Applications 17.08 released, the big August update

KDE Applications 17.08 released

August 17th, 2017 was the date marked on the calendar by the KDE Community for releasing the big August update of its application suite, which naturally comes after the corresponding preview releases.

The official announcement has now been made, and the update will soon be available in your favourite distribution, although that depends on each distribution's maintainers. On KDE Neon, which is what I run on my laptop, I expect to have it by tomorrow at the latest.

As usual, work has proceeded along several lines: fixing bugs, adding new features to existing applications, migrating more programs to KF5/Qt5, and welcoming more applications into the KDE ecosystem.

This time around, a total of 12 more applications have been ported to KF5/Qt5: kmag, kmousetool, kgoldrunner, kigo, konquest, kreversi, ksnakeduel, kspaceduel, ksudoku, kubrick, lskat and umbrello.


What's new in KDE Applications 17.08

There are no spectacular new features on record this time, but among the changes we find:

  • Dolphin, the file manager, can now show the deletion date in the Trash, and “creation date” has been added as a view property.
  • KIO-Extras has improved its support for folders shared via Samba.
  • KAlgebra has an improved Kirigami interface for the desktop.
  • Kdenlive has fixed the problem with the “Freeze effect”.
  • KDEPim:
    • Akonadi transport support has been re-enabled for KMailTransport.
    • The ability to use an external editor has been brought back in the form of a plugin.
    • The Akonadi account import wizard has been improved so it can handle any account type.

As you can see, this update is 100% recommended for all supporters of the KDE project, and it is another small step forward for the Community.

All that remains to be said is…

¡KDE Rocks!


More information: KDE.org


At the recently concluded Akademy 2017 in the incredibly hot but lovely Almería, yours truly went and did something a little silly: submitted a talk (which got accepted) and hosted a BoF, both about Open Collaboration Services and the software stack which KDE builds to support that API in the software we produce. The whole thing was amazing. A great deal of work, very tiring, but all 'round amazing. I even managed to find time to hack a little bit on Calligra Gemini, which was really nice.

This blog entry collects the results from the presentation and the BoF. I realise this is quite long, but i hope that you stick with it. In the BoF rundown, i have highlighted the specific results, so hopefully you'll be able to skim-and-detail-read your specific interest areas ;)

First, A Thank You In Big Letters

Before we get to that, though, i thought i'd quickly touch on something i've seen brought up about what the attendees' social media presence looks like during the event: if you didn't know better, you might imagine we did nothing but eat, party and go on tours. My personal take is that we post those pictures to say thank you to the amazing people who make it possible for us to get together and talk endlessly about all those things we do. We post those pictures, at least in part, because a hundred shots of a group of people in an auditorium get a bit samey; while the discussions are amazing and the talks are great, they don't often make for exciting still photography. Video, however, certainly does, and those, i hear, are under way for the presentations, and the BoF wrapups are here right now :)

Nothing will stop our hackers. And this is before registration and the first presentation!

Presenting Presentations

Firstly, it felt like the presentation went reasonably well, and while i am not able to show you the video, i'll give you a quick run-down of the main topic covered in it. Our very hard working media team is working on the videos at the moment, though, so keep your eyes on the KDE Community YouTube channel to catch those when they're released.

The intention of the presentation was to introduce the idea that just because we are making Free software, that does not mean we can survive without money. Consequently, we need some way to feed funds back to the wildly creative members of our community who produce the content you can find on the KDE Store. To help work out a way of doing this in a fashion that fits in with our ideals, described by the KDE Vision, i laid out what we want to attempt to achieve in five bullet points, tongue-in-cheek called Principia pene Premium, or the principles of almost accidental reward:
  • Financial support for creators
  • Not pay-for-content
  • Easy
  • Predictable
  • Almost (but not quite) accidental
The initial point is the problem itself, that we want the creators of the content on the store to be financially rewarded somehow. The rest are limiting factors on that:

Not pay-for-content alludes to the fact that we don't want to encourage paywalls. The same way we make our software available under Free licenses of various types, we want to encourage the creators of the content used in it to release their work in similarly free ways.

Easy means easy for our creators, as well as for the consumers of the content they produce. We don't want either of them to have to jump through hoops to receive the funds, or to send them.

Predictable means that we want it to be reasonably predictable for those who give funds out to the creators. If we can ensure that there are stable outgoings for them, say some set amount each month or year, then it makes it easier to budget, and not have to worry. Similarly, we want to try and make it reasonably predictable for our creators, and this is where the suggestion about Liberapay made by several audience members comes in, and i will return to this in the next section.

Finally, perhaps the most core concept here is that we want to make it possible to almost (but not quite) accidentally send one of the creators funds. Almost, because of course we don't want to actually do so accidentally. If that were the case, the point of being predictable would fly right out the window. We do, however, want to make it so easy that it is practically automatically done.

All of this put together brings us to the current state of the KDE Store's financial support system: Plings. These are an automatic repayment system, which the store handles for every creator who has added PayPal account information to their profile. It is paid out monthly, and the amount is based on the Pling Factor, which is (at the time of writing) a count of how many downloads the creator has accumulated across all content items over the course of the month, with each download counted as US$0.01.
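The payout rule above is simple arithmetic; a sketch (the function and variable names are mine, and the US$0.01 rate is simply the one quoted above as current at the time of writing):

```python
PLING_RATE_USD = 0.01  # per download, at the time of writing

def monthly_payout(downloads_per_item):
    """Sum a creator's downloads over all content items for the month,
    then multiply by the Pling Factor rate."""
    return sum(downloads_per_item) * PLING_RATE_USD

# e.g. a creator whose three items got 120, 80 and 300 downloads this
# month would receive (120 + 80 + 300) * 0.01 = 5.00 USD
```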

Space-age looking crazy things at the Almería Solar Platform. Amazing place. Wow. So science.

Birds of a Feather Discuss Together

On Wednesday, a little while before lunch, it was time for me to attend my final BoF session of the week (as i would be leaving early Thursday). This one was slightly different, of course, because i was the host. The topic was listed as Open Collaboration Service 1.7 Preparation, but ended up being more of a discussion of what people wanted to be able to achieve with the store integration points we have available.

Most of the items which were identified were points about KNewStuff, our framework designed for easy integration of remote content using either OCS, or static sources (used by e.g. KStars for their star catalogues).

Content from alternate locations was the first item to be mentioned, which suggests a slight misunderstanding about the framework's abilities. The discussion revealed that what was needed was less a question of being able to replace existing sources in various applications, so much as needing the ability to control the access to KNewStuff more granularly. Specifically, being able to enable/disable specific sources was highlighted, perhaps through using Kiosk. It might still make sense to be able to overlay sources - one example given was the ability to overlay the wallpapers source (used in Plasma's Get New Wallpapers) with something pointing to a static archive of wallpapers (so users might be able to get a set of corporate-vetted backgrounds, rather than just one). This exact use case should already be possible, simply by providing a static XML source, and then replacing the wallpapers.knsrc file normally shipped by Plasma with another, pointing to that source.
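To make the static-overlay example concrete, the replacement knsrc file might look roughly like this (a hypothetical sketch; the URL is a placeholder, and the exact set of keys in a stock Plasma wallpapers.knsrc may differ):

```ini
# Hypothetical replacement for Plasma's wallpapers.knsrc, pointing
# Get New Wallpapers at a static, corporate-vetted provider instead
# of the default one.
[KNewStuff3]
ProvidersUrl=https://intranet.example.com/wallpapers/providers.xml
Categories=KDE Wallpaper 1920x1200,KDE Wallpaper 1600x1200
TargetDir=wallpapers
Uncompress=archive
```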

A more complete set of Qt Quick components was requested, and certainly this would be very useful. As it stands, the components are very minimal and really only provide a way to list available items, and install/update/remove them. In particular two things were pointed out: There is no current equivalent of KNS3::Button in the components, and further no Kiosk support, both of which were mentioned as highly desired by the Limux project.

Signing and Security was highlighted as an issue. Currently, KNSCore::Security exists as a class, however it is marked as "Do not use, non-functional, internal and deprecated." However, it has no replacement that i am myself aware of, and needs attention by someone who, well, frankly knows anything of actual value about signing. OCS itself has the information and KNS does consume this and make it available, it simply seems to not be used by the framework. So, if you feel strongly about signing and security issues, and feel like getting into KNewStuff, this is a pretty good place to jump in.

Individual download item install/uninstall was mentioned as well, as something which would be distinctly useful for many things (as a simple example, you might want more than one variant of a wallpaper installed). Right now, Entries are marked as installed when one download item is installed, and uninstalling implicitly uninstalls that download item. There is a task on the KNewStuff workboard which has collected information about how to adapt the framework to support this.

But KNewStuff wasn't the only bit to get some attention. Our server-side software stack had a few comments along the way as well.

One was support for Liberapay, which is a way to distribute monetary wealth between people pretty much automatically, and which fits very nicely into the vision of creator support put forward in my presentation. In short, it would allow supporters to set up small, recurring donations that are then shared out among creators automatically, which also helps with the predictability described above.

One topic which comes up regularly is adding support for the upload part of the OCS API to our server-side stack. Now, the reason for this lack is not that simply adding it is difficult, because it certainly isn't - quite the contrary, the functionality practically exists already. The problem here is much more a case of vetting: how do we ensure that this will not end up abused by spammers? The store already has spam entries to be handled every day, and we really want to avoid opening up a shiny, new vector for those (insert your own choice of colloquialism here) spammers to send us things we do not want on the store. Really this deserves a write-up of its own, on account of the sheer scope of what might be done to alleviate the issues, but what we spoke about essentially came down to the following:

  • Tight control of who can upload, so people have to manually be accepted by an administration/editors team as uploaders before they are given the right to do so through the API. In essence, this would be possible through establishing a network of trust, and through people using the web interface first. As we also want people to approach without necessarily knowing people who know people, a method for putting yourself up for API upload permission approval will also be needed. This might possibly be done through setting a requirement for people who have not yet contributed in other ways to do so (that is, upload some content through the web first, and then request api upload access). Finally, since we already have a process in place for KDE contributors, matching accounts with KDE commit access might also be another way to achieve a short-cut (you already have access to KDE's repositories, ability to publish things on the store would likely not be too far a stretch).
  • Quality control of the content itself. This is something which has been commented on before. Essentially, it has been discussed that having linting tools that people can use locally before uploading things would be useful (for example, to ensure that a kpackage for a Plasma applet is correct, or that a wallpaper is correctly packaged, or that there is correct data in a Krita brush resource bundle, or that an AppImage or Flatpak or Snap is what it says it is, just to mention a few). These tools might then also be used on the server-side, to ensure that uploaded content is correctly packaged. In the case of the API, what might be done is to return the result of such a process in the error message field of a potentially failed OCS content/add or content/edit call, which then in turn would be something useful to present to the user (in place of a basic "sorry, upload failed" message). 

For OCS itself, adding mimetype as an explicit way to search and filter entries and downloaditems was suggested. As it stands, it could arguably be implemented by clients and servers, however having it explicitly stated in the API would seem to make good sense.

The proposal to add tagging support to OCS currently awaiting responses on the OCS Webserver phabricator was brought up. In short, while there are review requests open for adding support for the proposal to Attica and KNewStuff respectively, the web server needs the support added as well, and further the proposal itself needs review by someone who is not me. No-one who attended the BoF felt safe in being able to review this in any sensible way, and so: If you feel like you are able to help with this, please do take part and make comments if you think something is wrong.

Finally, both at the BoF and outside of it, one idea that has been kicked around for a while and got some attention was the idea of being able to easily port and share settings between installations of our software. To be able to store some central settings remotely, such as wallpaper, Plasma theme and so on, and then apply those to a new installation of our software. OCS would be able to do this (using its key/value store), and what is needed here is integration into Plasma. However, as with many such things, this is less a technical issue (we have most of the technology in place already), and more a question of design and messaging. Those of you who have ever moved from one Windows 10 installation to another using a Microsoft account will recognise the slightly chilling feeling of the sudden, seemingly magical appearance of all your previous settings on the machine. As much as the functionality is very nifty, that feeling is certainly not.
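The settings-porting idea needs little more than a key/value round-trip; a rough prototype against an imagined store interface (not Attica's actual API — here the store is just anything dict-like):

```python
# Sketch of round-tripping desktop settings through an OCS-style
# key/value store; `store` stands in for the remote service.

SYNCED_KEYS = ["wallpaper", "plasma-theme", "icon-theme"]

def export_settings(store, local_config):
    """Push the chosen local settings to the remote key/value store."""
    for key in SYNCED_KEYS:
        if key in local_config:
            store[key] = local_config[key]

def import_settings(store, local_config):
    """Apply remote values on a fresh installation, skipping unset keys."""
    for key in SYNCED_KEYS:
        if key in store:
            local_config[key] = store[key]
```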

Solar powered sun-shade platform outside the university building. With fancy steps. And KDE people on top ;)

Another Thank You, and a Wish

Akademy is not the only event KDE hosts, and very soon there is going to be another one, in Randa in the Swiss Alps, this year about accessibility. I will not delve into why this topic is so important, and can only suggest you read the article describing it. It has been my enormous privilege to be a part of that for several years, and while i won't be there this year, i hope you will join in and add your support.

The word of the day is: Aircon. Because the first night at the Residencia Civitas the air conditioning unit in the room i shared with Scarlett Clark did not work, making us very, very happy when it was fixed for the second night ;)

It finally landed! KStars 2.8.1, aka the Hipster release, is out for Windows & macOS!

The highlight for this release is experimental support for HiPS: Hierarchical Progressive Surveys. HiPS provides multi-resolution progressive surveys to be overlayed directly in client applications, such as KStars. It provides an immersive experience as you can explore the night sky dynamically.

With over 200 surveys across the whole electromagnetic spectrum, from radio, infrared, and visible light to gamma rays, the user can pan and zoom progressively deeper into the data visually.
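That progressive zooming works because a HiPS survey is published as a fixed directory tree of HEALPix tiles: per the IVOA HiPS scheme, a tile's relative path is derived from its order and pixel index, and each zoom step subdivides a pixel into four children. A small sketch:

```python
def hips_tile_path(order: int, ipix: int, ext: str = "jpg") -> str:
    """Relative path of a HiPS tile: tiles are grouped into Dir buckets
    of 10000 pixels each within their Norder directory."""
    bucket = (ipix // 10000) * 10000
    return f"Norder{order}/Dir{bucket}/Npix{ipix}.{ext}"

def child_pixels(ipix: int):
    """Zooming in one level subdivides a NESTED HEALPix pixel into four."""
    return [4 * ipix + k for k in range(4)]
```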

HiPS Support in KStars

HiPS support in KStars has been made possible through collaboration with the excellent open source planetarium software SkyTechX. This truly demonstrates the power of open source to accelerate development and broaden access to more users.

Another feature, also imported from SkyTechX, is the Polaris Hour Angle, which is useful for users looking to polar align their mount.

Polaris Hour Angle
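I can't speak for the imported tool's internals, but the standard relation it presumably relies on is simple: an object's hour angle is the local sidereal time minus its right ascension. For Polaris (J2000 RA ≈ 2h 32m; precession shifts it slowly), a sketch:

```python
POLARIS_RA_HOURS = 2.53  # approx. J2000 right ascension of Polaris, in hours

def polaris_hour_angle(lst_hours: float) -> float:
    """Hour angle = local sidereal time - right ascension, wrapped to [0, 24)."""
    return (lst_hours - POLARIS_RA_HOURS) % 24.0
```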

GSoC 2017 student Csaba Kertész continued to merge many code improvements. Moreover, many bug fixes landed in this release. The following are some of the notable fixes and improvements:
  • BUGS:382721 Just use less than and greater than keys for time.
  • BUGS:357671 Patch by Robert Barlow to support custom catalogs in WUT.
  • Improved comet and asteroid position accuracy.
  • Ekos shall fallback to user defined scopes if INDI driver does not provide scope metadata.
  • Fixed command line parsing.
  • Fixed many PHD2 external guider issues.
  • Fixed selection of external guiders in Ekos Equipment Profile.
  • Fixed rotator device infinite loop.
  • Fixed scheduler shutdown behavior.
  • Fixed Ekos Mosaic Position Angle.
  • Fixed issue with resetting Polar Alignment Assistant Tool rotation state.
  • Fixed issue with Ekos Focus star selection timeout behavior.
  • Ekos always switches to CLIENT mode when taking previews.
  • Handle proper removal of devices in case of Multiple-Devices-Per-Driver drivers.
  • Display INDI universal messages for proper error reporting.
  • Better logging with QLoggingCategory.

Ekos Mosaic Tool with HiPS

Kdenlive 17.08 is released, bringing minor fixes and improvements. Highlights include fixing the Freeze effect and resolving inconsistent checkbox displays in the effects panel. Downloaded transition lumas now appear in the interface. It is now possible to assign a keyboard shortcut to the Extract Frame feature, and a file name is now suggested based on the frame number. Navigation of clip markers in the timeline behaves as expected when opening a project. Audio click issues are resolved, although this requires building MLT from git or waiting for its next release. In this cycle we’ve also bumped the Windows version from Alpha to Beta.

We continue steadfastly making progress in the refactoring branch due for the 17.12 release. We will soon make available a package for testing purposes. Stay tuned for the many exciting features coming soon.

Full list of changes

  • Fix audio mix clicks when using recent MLT. Commit. Fixes bug #371849
  • Fix some checkbox displaying inconsistent info. Commit.
  • Fix downloaded lumas do not appear in interface (uninstall/reinstall existing lumas will be required for previously downloaded). Commit. Fixes bug #382451
  • Make it possible to assign shortcut to extract frame feature. Commit. Fixes bug #381325
  • Gardening: fix GCC warnings (7). Commit.
  • Gardening: fix GCC warnings (6). Commit.
  • Gardening: fix GCC warnings (5). Commit.
  • Gardening: fix GCC warnings (4). Commit.
  • Gardening: fix GCC warnings (3). Commit.
  • Gardening: fix GCC warnings (2). Commit.
  • Gardening: fix GCC warnings (1). Commit.
  • Fix clip markers behavior broken on project opening. Commit. Fixes bug #382403
  • Fix freeze effect broken (cannot change frozen frame). Commit.
  • Use QString directly. Commit.
  • Use isEmpty. Commit.
  • Use isEmpty(). Commit.
  • Remove qt module in include. Commit.
  • Use constFirst. Commit.
  • Make it compile. Commit.
  • Use Q_DECL_OVERRIDE. Commit.
  • Use nullptr. Commit.
  • Avoid using #elifdef. Commit.
  • Try harder to set KUrlRequester save mode in the renderwidget. Commit.
  • Make sure that text is not empty. Commit.
  • Use QLatin1Char(…). Commit.
  • Cmake: remove unused FindQJSON.cmake. Commit.
  • Port some foreach to c++ for(…:…). Commit.
  • Fix compiler settings for Clang. Commit.

Last month, I attended KDE's annual conference, Akademy 2017. This year, it was held in Almeria, a small Andalusian city on the south-east coast of Spain.

The name of the conference is no misspelling; it's part of KDE's age-old tradition of naming everything to do with KDE with a 'k'.

It was a collection of amazing, life-changing experiences. It was my first solo trip abroad and it taught me so much about travel, KDE, and getting around a city with only a handful of broken phrases in the local language.


My trip began at the recently renamed Kempegowda International Airport, Bangalore. Though small for an international airport, its size works to its advantage, as it is very easy to get around. Check-in and immigration were a breeze, and I had a couple of hours to check out the loyalty card lounge, where I sipped soda water thinking about what the next seven days had in store.

The first leg of my trip was on an Etihad A320 to Abu Dhabi, a four-hour flight departing at 2210 on 20 July. The A320 isn't particularly special equipment, but then again, it was a rather short leg. The crew on board that flight seemed to be a mix of Asian and European staff.

Economy class in Etihad was much better than any other Economy class product I'd seen before. Ample legroom, very clean and comfortable seats, and an excellent IFE system. I was content looking at the HUD-style map visualisation which showed the AoA, vertical speed, and airspeed of the airplane.

On the way, I scribbled a quick diary entry and started reading part 1 of Sanderson's Stormlight Archive - 'The Way of Kings'.

Descent into Abu Dhabi airport started about 3:30 into the flight. Amber city lights illuminated the desert night sky. Even though it was past midnight, the plane's IFE reported an outside temperature of 35°C. As I disembarked from the plane, the muggy atmosphere hit me after four hours in an air-conditioned composite tube.

The airport was dominated by Etihad aircraft - mainly Airbus A330s, Boeing 787-8s, and Boeing 777-300ERs. There were also a number of other airlines part of the Etihad Airways Partners Alliance such as Alitalia, Jet Airways, and some Air Berlin equipment. As it was a relatively tight connection, I didn't stop to admire the birds for too long. I traversed a long terminal to reach the boarding gate for the connecting flight to Madrid.

The flight to Madrid was another Etihad-operated flight, only this time on the A320's larger sibling - the A330-200. This plane was markedly older than the A320 I had been on for the first leg of the trip. Fortunately, I got the port-side window seat in a 2-5-2 configuration. The plane had a long take-off roll and took off a few minutes after 2am. Once we reached cruising altitude, I opened the window shade. The clear night sky was full of stars, and I must have spent at least five minutes with my face glued to the window.

I tried to sleep, preparing myself for the long day ahead. Soon after waking up, the plane landed at Madrid Barajas Airport and taxied for nearly half an hour to reach the terminal. After clearing immigration, I picked up my suitcase and waited for a bus which would take me to my next stop - the Madrid Atocha Railway Station. Located in the heart of the city, the Atocha station is one of Madrid's largest train stations and connects it to the rest of Spain. My train to Almeria was later that day - at 3:15 in the afternoon.

On reaching Atocha, I got my first good look at Madrid.

My facial expression was quite similar

I was struck by how orderly everything was, starting with the traffic. Cars gave the right of way to pedestrians. People walked on zebra crossings and cyclists stuck to the defined cycling lanes. A trivial detail, but it was a world apart from Bangalore. Shining examples of Baroque and Gothic architecture were scattered among newer establishments.

Having a few hours to kill before I had to catch my train, I roamed around Buen Retiro Park, one of Spain's largest public parks. It was a beautiful day, bright and sunny with the warmth balanced out by a light breeze.

My heavy suitcase compelled me to walk slowly, which let me take in as much as I could. Retiro Park is a popular stomping ground for joggers, skaters, and cyclists alike. Despite it being 11am on a weekday, I saw plenty of people jogging through the park. After this, I trudged through some quaint cobbled neighbourhoods with narrow roads and old apartment buildings dotted with small shops on the ground floor.

Maybe it was the sleep-deprivation or dehydration after a long flight, but everything felt so surreal! I had to pinch myself a few times - to believe that I had come thousands of miles from home and was actually travelling on my own in a foreign land.

I returned to Atocha and waited for my train. By this time, I came to realise that language was going to be a problem for this trip as very few people spoke English and my Spanish was limited to a few basic phrases - notably 'No hables Espanol' and 'Buenos Dias' :P Nevertheless, the kind folks at the station helped me find the train platform.

Trains in Spain are operated by state-owned train companies. In my case, I would be travelling on a Renfe train going to Almeria. The coaches are arranged in a 2-2 seating configuration, quite similar to those in airplanes, albeit with more legroom and charging ports. The speed of these trains is comparable to fast trains in India, with a top speed of about 160 km/h. The 700km journey was scheduled to take about 7 hours. There was plenty of scenery on the way, with sloping mountain ranges and deserted valleys.

Big windows encouraged sightseeing

After seven hours, I reached the Almeria railway station at 10pm. According to Google Maps, the hostel which KDE had booked for all the attendees was only 800m away - well within walking distance. However, unbeknownst to me, I had started walking in the opposite direction (my phone doesn't have a compass!). This kept increasing the Google Maps ETA, and only when I was 2km off track did I realise something was very wrong. Fortunately, I managed to get a taxi to take me to Residencia Civitas - a university hostel where all the Akademy attendees would be staying for the week.

After checking in to Civitas, I made my way to the double room. Judging from the baggage and the shoes in the corner, someone had moved in here before I did. About half an hour later, I found out who - Rahul Yadav, a fourth year student at DTU, Delhi. Exhausted after an eventful day of travel, I switched off the lights and went to sleep.

The Conference

The next day, I got to see the other Akademy attendees over breakfast at Civitas. In all, there were about 60-70 attendees, which I was told was slightly smaller than previous years.

The conference was held at the University of Almería, located a few kilometres from the hostel. KDE had hired a public bus for transport to and from the hostel for all the days of the conference. The University was a stone's throw from the Andalusian coastline. Once we were seated in one of the larger lecture halls, Akademy 2017 was underway.

Konqi! And KDE Neon!

The keynote talk was by Robert Kaye of MetaBrainz, about the story of how MusicBrainz was formed out of the ashes of CDDB. The talk set the tone for the first day of Akademy.

The coffee break after the first two talks was much needed. I was grappling with sleep deprivation and jet lag from the last two days and needed all the caffeine and sugar I could get to keep myself going for the rest of the day. Over coffee, I caught up with some KDE developers I met at QtCon.

Throughout the day, there were a lot of good talks, notably 'A bit on functional programming', and five quick lightning talks on a variety of topics. Soon after this, it was time for my very own talk - 'An Introduction to the KIO Library'.

The audience for my talk consisted of developers with several times my experience. Much to my delight, the maintainer of the KIO Library, David Faure, was in the audience as well!

Here's where I learned another thing about giving presentations - they never quite go as well as they seem to when rehearsed alone. I ended up speaking faster than I planned to, leaving more than enough time for a QA round. Just as I had feared, I was asked some questions about the low-level implementation of KIO, which thankfully David fielded for me. I was perspiring after the presentation, and it wasn't the temperature which was the problem. A thumbs up from David afterwards gave me some confidence that I had done alright.

Following this, I enjoyed David Edmundson's talk about binding properties in QML. The next presentation I attended is where things ended up getting heated. Paul Brown went into detail about everything wrong with Kirigami's TechBase page. This drew some, for lack of a better word, passionate people to retaliate. Though it was only supposed to be a 10-minute lightning talk, the debate raged on between the two schools of thought on how TechBase documentation should be written. All in good taste. The only thing which brought the discussion to an end was the bus back to Civitas leaving sharp at 8pm.

Still having a bit of energy left after the conference, I was ready to explore this Andalusian city. One thing which worked out nicely on this trip was the late sunset in Spain at this time of the year. It is as bright as day even at around 9pm, and the light only starts waning at around 9:30pm. This gave Rahul and me plenty of time to head to the beach, which was about a 20 minute walk from the hostel.

Here, it struck me how much I loved the way of life here.

Unlike Madrid, Almeria is not known as a tourist destination, so most of the people living there were locals. In a span of about half an hour, I watched how an evening unfolds in this city. As the sun started dipping below the horizon, families with kids, college couples, and high-school friends trickled from the beach to the boardwalk for dinner. A typical evening looked delightfully simple and laid-back in this peaceful city.

The boardwalk had plenty of variety on offer - from seafood to Italian cuisine. One place caught my eye, a small cafe with Doner Kebab called 'Taj Mahal'. After a couple of days of eating nothing but bland sandwiches, Rahul and I were game for anything with a hint of spice in it. As I had done in Madrid, I tried ordering Doner Kebab using a mixture of broken Spanish and improvised sign language, only to receive a reply from the owner in Hindi! It turned out that the owner of the restaurant was Pakistani and had migrated to Spain seven years ago. Rahul made a point of asking for more chilli - and the Doner kebabs we got were not lacking in spice. I had more chilli in one kebab than I normally would have in a week. At least it was a change from Spanish food, which I wasn't all that fond of.

View from the boardwalk

The next day was similar to the first, only a lot more fun. I spent a lot of time interacting with the people from KDE India. I also got to know my GSoC mentor, Boudhayan Gupta (@BaloneyGeek). The talks on this day were as good as the first day's, and I got to learn about Neon Docker images, the story behind KDE's Slimbook laptop, and things to look forward to in C++17/20.

The talks were wrapped up with the Akademy Awards 2017.

David Faure and Kevin Ottens

There were still 3 hours of sunlight left after the conference and, not being ones to waste it, we headed straight for the beach. Boudhayan and I made a treacherous excursion out to a rocky pier covered with moss and glistening with seawater. My well-worn sandals were the only thing keeping me from slipping onto a bunch of sharply angled stones jutting out from the waves. Against my better judgement, I managed to reach the end of the pier, only to see a couple of crabs take interest in us. With the tide rising and the sun falling, we couldn't afford to stay much longer, so we headed back to the beach just as we came. Not long after, I couldn't help myself and headed into the water with an enthusiasm I had only known as a child. Probably more so for BaloneyGeek though, who went in headfirst with his three-week-old Moto G5+ in his pocket (spoiler: the phone was irrevocably damaged by half a minute of immersion in saltwater). In the midst of this, we found a bunch of KDE folks hanging out on the beach with several boxes of pizza and bottles of beer. Free food!

Exhausted but exhilarated, we headed back to Civitas to end another very memorable day in Almeria.

Estacion Intermodal, a hub for public transport in Almeria

With the talks completed, Akademy 2017 moved on to its second leg, which consisted more of BoFs (Birds of a Feather) and workshops.

The QML workshop organised by Anu was timely, as my relationship with QML has been hot and cold. I would always go in circles with the QML Getting Started tutorials, as there aren't as many examples of how to use QtQuick 2.x as there are for, say, Qt Widgets. I understood how to integrate JavaScript with the QML GUI, and I will probably get around to making a project with the toolkit when I get the time. Paul Brown held a BoF about writing good user documentation and deconstructed some of the more pretentious descriptions of software, with suggestions on how to avoid falling into the same pitfalls. I sat in on a few more BoFs after this, but most of it went over my head as I wasn't contributing to the projects discussed there.

Feeling a bit weary of the beach, Rahul and I decided to explore the inner parts of the city instead. We planned to go to the Alcazaba of Almeria, a thousand-year-old fortress in the city. On the way, we found a small froyo shop and ordered a scoop with chocolate sauce and lemon sauce. Best couple of euros spent ever! I loved the tart flavour of the froyo and how it complemented the toppings with its texture.

This gastronomic digression aside, we scaled a part of the fort only to find it locked off with a massive iron door. I got the impression that the fort was rarely ever open to begin with. With darkness fast approaching, we found ourselves in a dodgy neighbourhood and we tried to get out as fast as we could without drawing too much attention to ourselves. This brought an end to my fourth night in Almeria.

View from Alcazaba

The BoFs continued throughout the 25th, the last full day of Akademy 2017. I participated in the GSoC BoF where KDE's plans for future GSoCs, SoKs, and GCIs were discussed (isn't that a lot of acronyms!). Finally, this was one topic where I could contribute to the discussion. If there was any takeaway from the discussion for GSoC aspirants, it is to start as early as you can!

I sat in on some other BoFs as well, but most of the discussed topics were out of my scope. The Mycroft and VDG BoF did have some interesting exchanges of ideas for future projects that I might consider working on if I get free time in the future.

Rahul was out in the city that day, so I had the evening to explore Almeria all by myself.

I fired up Google Maps to see if there was anything of interest nearby. To the west of the hostel was a canal I hadn't seen previously, so I thought it would be worth a trip. Unfortunately, because of my poor navigation skills and my phone's lack of a compass, I ended up circling a flyover for a while before ditching the plan. I decided to use the beach as a reference point and explore from there.

What was supposed to be a reference point ended up becoming the destination. There was still plenty of sunlight and the water wasn't too cold. I put one toe in the water, and then a foot.

And then, I ran.

Running barefoot along the coastline was one of the best memories I have of the trip. For once, I didn't think twice about what I was doing. It was pure liberation. I didn't feel the exertion or the pebbles pounding my feet.

Almeria's Beaches

The end of the coastline had a small fort and a dirt trail which I would've loved to cycle on. After watching the sun sink into the sea, I ran to the other end of the boardwalk to find an Italian restaurant serving vegetarian spinach spaghetti. Served with a piece of freshly baked bread, dinner was delicious and capped off yet another amazing day in Almeria.

Dinner time!

Day Trip

The 26th was the final day of the conference. Unlike the other days, the conference was only for half a day with the rest of the day kept aside for a day trip. I cannot comment on how the conference went on this day as I had to rush back to the hostel to retrieve my passport, which was necessary to attend the day trip.

Right around 2 in the afternoon we boarded the bus for the day trip. Our first stop was the Plataforma Solar de Almería, a solar energy research plant in Almeria. It houses some large heliostats for focussing sunlight at a point on a tower. This can be used for heating water and producing electricity.

There was another facility used for testing the tolerance of spacecraft heat shields by subjecting them to temperatures in excess of 2000°C.

Heliostats focus at the top of the tower

The next stop was at the San José village. Though not too far from Almeria, the village is frequented by tourists much more than Almeria is and has a very different vibe. The village is known for its beaches, pristine clear waters, and white buildings. I was told that the village was used in the shooting of some films such as The Good, The Bad, and The Ugly.

Our final stop for the day was at the Rodalquilar Gold Mine. Lost to time, the mine had been shut down in 1966 due to the environmental hazards of using cyanide in the process to sediment gold. The mine wouldn't have looked out of place in a video-game or an action movie, and indeed, it was used in the filming of Indiana Jones and the Last Crusade. There was a short trek from the base of the mine to a trail which wrapped around a hill. After descending from the top we headed back to the hostel.

This concluded my stay in Almeria.


After checking out of the hostel early the next morning, I caught a train to Madrid. I had a day in the city before my flight to Bangalore the next day.

I reached Atocha at about 2 in the afternoon and checked in to a hotel. I spent the entire evening exploring Madrid on foot and an electric bicycle through the BiciMAD rental service.

Photo Dump

The Return

My flight back home was on the following morning, 28 July. The first leg of the return was, yet again, on an Etihad aircraft bound for Abu Dhabi. This time it was an A330-300. It was an emotional 8-hour flight - with the memories of Akademy still fresh in my mind. To top it off, I finished EarthBound (excellent game!) during the flight.

Descent into Abu Dhabi started about half an hour before landing. This time though, I got to see the bizarre Terminal 1 dome of the Abu Dhabi airport. The Middle East has always been a mystical place for me. The prices of food and drink in the terminal were hard to stomach - 500mL of water was an outrageous 8 UAE Dirhams (₹140)! Thankfully it wasn't a very long layover, so I didn't have to spend too much.

Note to self: may cause tripping if stared at for too long

The next leg was a direct flight to Bangalore, on another Etihad A330. Compared to all the travel I had done in the last two days, the four hour flight almost felt too short. I managed to finish 'Your Lie in April' on this leg.

It was a mix of emotions to land in Bangalore - I was glad to have reached home, but bitter that I had to return to college in only two days.

Akademy 2017 was an amazing trip and I am very grateful to KDE for letting me present at Akademy and for giving me the means of reaching there. I hope I can make more trips like these in the future!

Once upon a time, for those who remember the old days of Plasma and Lancelot, there was an experimental applet called Shelf.

The idea behind the Shelf was that sometimes it is useful to have a small applet that just shows your recent files, favourite applications, devices, which you can place on your panel or desktop for quick access.

Now, this post is not about a revival of Lancelot and Shelf (sadly), but it is closely related to them.

Namely, I always disliked the “recent documents” section that is available in almost all launchers in Plasma. The reason is that only one in ten of those documents has a chance to ever be opened again.

The first code that had the aim to fix this landed in Lancelot a long time ago – Lancelot was tracking how you use it so that it could eventually start predicting your behaviour.

This idea was recognized as a good one, and we decided that Plasma as a whole could benefit from this.

This is how the activities manager (KAMD) was born. The aim was to allow the user to separate the workspace based on the project she was working on; and to have the activity manager track which applications, documents etc. the user uses in each of the activities.

The first part – having different widget sets for different activities was baked in Plasma 4 from the start. The second one (which I consider to be much more important) came much later in the form of KAMD.

KAMD, apart from being tasked with managing activities and switching from one to another, also tracks which files you access and which applications you use, so that menus like Kicker and Kickoff can show recent documents and recent applications. And have those recent documents and applications tied to the activity you are currently in.

For example, if you have two activities – one for KDE development, and another for working on your photo collection, Digikam will be shown in the recent applications section of Kicker only for the second activity, since you haven’t used Digikam for KDE development.

Now, this is still only showing the “recent documents”. It does show a list of documents and applications that are relevant to the task you are currently working on, but still, it can be improved.

Since we know how often you open a document or start an application, we do not need to focus only on the last time you did so. We can detect which applications and documents you use often and show them instead. Both Kicker and Kickoff allow you to replace the “recently used” with “often used” in the current version of Plasma.
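The difference between the two orderings can be sketched in a few lines of Python. This is not KAMD's actual data model or scoring code, just an illustration with a made-up event log of (document, timestamp) pairs:

```python
from collections import Counter

# Hypothetical open-event log: (document, timestamp) pairs recorded
# each time a document is opened. Purely illustrative.
events = [
    ("notes.txt", 1), ("photo.png", 2), ("notes.txt", 3),
    ("draft.odt", 4), ("notes.txt", 5), ("photo.png", 6),
]

def recently_used(events, n=3):
    """Most recently opened first, deduplicated."""
    seen, result = set(), []
    for doc, _ in sorted(events, key=lambda e: e[1], reverse=True):
        if doc not in seen:
            seen.add(doc)
            result.append(doc)
    return result[:n]

def often_used(events, n=3):
    """Ranked by how many times each document was opened."""
    counts = Counter(doc for doc, _ in events)
    return [doc for doc, _ in counts.most_common(n)]
```

With this log, `recently_used` puts `photo.png` first (it was opened last), while `often_used` puts `notes.txt` first (it was opened three times) - which is exactly the switch Kicker and Kickoff let you make.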

Documents shelf

Now back to the topic of this post.

Documents Shelf

While working on the KAMD service, I often need to test whether it keeps enough data to be able to deduce which applications and documents are important, and whether the deduction logic performs well.

Most of these tests are small GUI applications that show me the data in a convenient way.

For one of these, I realized it is not only useful for testing and debugging, but that it might also be useful for day-to-day work.

In the screenshot above, you can see an applet that looks quite similar to the Shelf from the Plasma 4 days, which shows the files I use most often in the dummy "test" activity.

One thing that Shelf did not have, and that neither Kicker nor Kickoff has now, is that this applet allows you to pin the documents that are really important to you, so that they never disappear from the list just because the service thinks some other file is more important.

You can think of it as a combination of “often used” and “favourite” documents. It shows your favourite documents – the documents you pinned, and then adds the items that it considers worthy enough to be alongside them.
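That combination can be sketched as a small merging function. The pinned list and scored candidates below are hypothetical; the real scoring lives in kactivities-stats:

```python
def shelf_items(pinned, scored, limit=6):
    """Pinned documents always come first; remaining slots are filled
    with the highest-scored documents that are not already pinned."""
    items = list(pinned)
    for doc, _ in sorted(scored, key=lambda e: e[1], reverse=True):
        if len(items) >= limit:
            break
        if doc not in items:
            items.append(doc)
    return items
```

So a pinned document keeps its place even when its score drops, and the rest of the shelf is filled by whatever the service currently considers worthy.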

This applet is not going to be released with the next Plasma; it needs to evolve a bit more before that happens. But all the backend stuff it uses is released and available now if you want to use it in your project.

The keywords are kde:kactivities, kde:kactivities-stats and kde:kactivitymanagerd.


We are happy to announce the release of Qt Creator 4.4 RC!

For the details on what is new in Qt Creator 4.4, please refer to the Beta blog post. As usual we have been busy with bug fixes and improvements since then, and now would be a good time for you to go get it, and provide final feedback.

Get Qt Creator 4.4 RC

The open source version is available on the Qt download page, and you can find commercially licensed packages on the Qt Account Portal. Please post issues in our bug tracker. You can also find us on IRC on #qt-creator on chat.freenode.net, and on the Qt Creator mailing list.

The post Qt Creator 4.4 RC released appeared first on Qt Blog.

Later than planned, here’s Krita 3.2.0! With the new G’Mic-qt plugin integration, the smart patch tool, finger painting on touch screens, new brush presets and a lot of bug fixes.

Read the full release notes for more information! Here's GDQuest's video introducing 3.2.0:

Note: the gmic-qt plugin is not available on OSX. Krita now ships with a pre-built gmic-qt plugin on Windows and the Linux AppImage. If you have tested the beta or release candidate builds, you might need to reset your configuration.

Changes since the last release candidate:

  • Don’t reset the LUT docker when moving the Krita window between monitors
  • Correctly initialize the exposure display filter in the LUT docker
  • Add the missing pan tool action
  • Improve the “Normal” blending mode performance by 30% (first patch for Krita by Daria Scherbatyuk!)
  • Fix a crash when creating a second view on an image
  • Fix a possible crash when creating a second window
  • Improve finding the gmic-qt plugin: Krita now first looks whether there is one available in the same place as the Krita executable
  • Fix scroll wheel behaviour if Krita is built with Qt 5.7.1 or later
  • Fix panning in gmic-qt when applying gmic-qt to a non-RGBA image
  • Scale channel values correctly when using a non-RGBA image with gmic-qt
  • Fix the default setting for allowing multiple krita instances



    Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.


    (If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

    When it is updated, you can also use the Krita Lime PPA to install Krita 3.2.0 on Ubuntu and derivatives.


    Note: the gmic-qt and pdf plugins are not available on OSX.

    Source code


    For all downloads:


    The Linux appimage and the source tarball are signed. You can retrieve the public key over https here. The signatures are here.

    Support Krita

    Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

August 16, 2017

Hola Amigos!
Akademy 2017 was such a great experience, that I would love to share with you all in this post.

For those who are unaware of Akademy: it's the annual world summit of KDE (please refer to https://akademy.kde.org).
This year, Akademy was held in Almeria, Spain.


It features a 2-day conference with presentations on the latest KDE developments, followed by 4 days of workshops, Birds of a Feather (BoF) and coding sessions.

There were so many interesting talks by some real tech enthusiasts, who are amiable and super cool. It is a great opportunity to meet people in person with whom you have been communicating knowing only their IRC nicknames.

So, as mentioned, it started with some really interesting presentations on Plasma desktops, Plasma Mobile, Ruqola, digiKam, WikiData, GCompris, the KIO library, Calligra, KDE neon, Slimbook, Translating Challenges, and many more. We were fortunate to have Robert Kaye and Antonio Larrosa as our keynote speakers. These presentations were followed by BoF sessions and workshops.

I conducted a workshop on Qt Quick Controls 2, and Paul Brown had some awesome stuff to share on what data should be visible to a viewer who visits your site or uses your product. In parallel, there was a BoF session where the team brainstormed on Plasma Mobile and the Plasma vision.

[ https://twitter.com/anu22mittal ]

It was such a pleasure to meet Valorie, Aleix Pol, Albert, Lydia, John Samuel, David, Timothee, Vasudha, Jonathan, and so many more active contributors of KDE.

[Memories of Akademy 2017]

Now, how is KDE Akademy different from conf.kde.in?
I have been a part of the KDE India conference held in Jaipur in 2016 [conf.kde.in], which looked like this:


When you see this, you find so many students keen to know about FOSS and the KDE community.
All are budding developers with some spark and unexplored ideas in their minds. Conferences like these, held by KDE in India, help those unexplored ideas come into execution through using and developing features in the various software built by KDE developers.

Whereas, when you look at this bunch of developers in this picture:

This is the group photograph of Akademy 2017 attendees. They are the backbone of all the software in KDE.

I would like to thank KDE for giving me this opportunity. It has added great experience and wonderful memories to my journey of FOSS development.

[Hostel, Food, Location @ Akademy]
Also, guys please help make KDE apps and Plasma easier to use, more convenient and accessible. Support #Randa2017.  (To know more about Randa refer: https://dot.kde.org/2017/08/08/randa-meetings-2017-its-all-about-accessibility)

Those who know me, or at least know my history with Krita, know that one of the prime things I personally want to use Krita for is making comics. So back in the day, one of the things I did was make a big forum post discussing the different parts of making a comic and how different software solves them.

One of the things about making a comic is that it is a project. Meaning, it is big and unwieldy, with multiple files and multiple disciplines. You need to be able to write, to draw, to ink, to color. And you need to be able to do this consistently.

The big thing I was missing in Krita was the ability to quickly get to my selection of pages. In real life, I can lay pages down next to one another and always have them in my mind's eye. In Krita, getting to the next or previous page is always a matter of digging through folders and finding the correct page number.

Adding to this, I am also a bit of a perfectionist, so I have been training myself to start drawing scenes or writing as soon as I have an idea, because any idea is way more useful once you've got it down on the page. You can append it to an existing story, or just work it in and reuse the drawings and compositions. And this was also a bit difficult to do, because how does one organise and tag those ideas?

So I spent the last few weeks writing a comics manager for Krita.

Comics Project Management Tools

The comics manager, or more officially, the Comics Project Management Tools, is a set of Python scripts that form a plugin for Krita. They show up in the form of a docker.

Python scripting was recently added to Krita because people were willing to pay and vote for the stretch goal in our previous Kickstarter. It shines in this case, as the majority of the tasks that needed to be done were storing simple information and showing a GUI for editing that information. Being a developer for Krita on the C++ side also meant that making use of PyQt was very natural.

So, I made a docker in Krita. It starts with a “New project”.

Here, the most important part is making sure the artist only has to fill in the most vital information. That is… just the project name and the concept, actually. The location is asked for before this window shows up.

The concept is basically a personal note, like “A comic about vampires fighting robots in a post-nuclear wasteland”, or “A scene where character A teaches character B how to ice skate.”

The project name is actually sorta arbitrary. Usually, any type of writer doesn't know what to title their story until they're halfway through, so the project name is intended to be more of a code name that is used to automate page naming. For that reason I also spent a day writing a "sophisticated" project name generator, of which most of the day was spent filling up the two lists the project name parts are pulled from.
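A toy version of such a generator is only a few lines. The word lists here are hypothetical (and far shorter than the real plug-in's):

```python
import random

# Hypothetical word lists -- the plug-in's actual lists are much longer.
ADJECTIVES = ["crimson", "whispering", "lunar", "rusty", "velvet"]
NOUNS = ["falcon", "harbour", "teacup", "comet", "lantern"]

def generate_project_name(rng=random):
    """Combine one word from each list into a code name like 'Lunar Teacup'."""
    return f"{rng.choice(ADJECTIVES).title()} {rng.choice(NOUNS).title()}"
```

The hard part, as noted above, is not the picking logic but curating word lists that produce names you'd actually want to see as file-name prefixes.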

Next to that are details like the language (which it attempts to guess by asking QLocale for the system language) and the names for the page, export, and template folders. The metadata can also be filled in already, but again, that's not necessary. So basically, the only thing an artist struck with inspiration needs to do is press New Project, pick a location, press "Generate", and then "Finish".

After that, the CPMT will make a project folder (if that was checked), will check for, and if necessary make, folders for the pages, export and templates, and will serialise all the information you put in into a comicConfig.json file.


Originally, this was a yaml file, but I discovered that there's no yaml library in the Python standard library, and I figured that the majority of our users would panic at the idea of having to install a python library on Windows (Krita users are very sweet, but for a good majority of them, computers are magical mystery boxes that will explode when pressing the wrong button. As a more savvy computer user, I can, of course, confirm that this is true, but there's still a massive difference between knowing how to fix the magic mystery box when it goes wrong and being helplessly subjected to its mood swings). Either way, I just wanted to have a config file that was somewhat easily readable by someone coming across it when clearing up their computer for space. This is also the purpose of the concept line, to answer the question: "What in the hell was I trying to do here?"
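Writing and reading such a config with the standard json module takes only a few lines. The key names below are hypothetical, not the plug-in's actual schema:

```python
import json
import os
import tempfile

# A minimal comicConfig.json. These key names are made up for
# illustration -- the real plug-in defines its own schema.
config = {
    "projectName": "Lunar Teacup",
    "concept": "A comic about vampires fighting robots",
    "language": "en",
    "pagesLocation": "pages",
    "exportLocation": "export",
    "templateLocation": "templates",
}

path = os.path.join(tempfile.gettempdir(), "comicConfig.json")
with open(path, "w") as f:
    json.dump(config, f, indent=4)  # indent keeps the file human-readable

# Reading it back needs no third-party library either:
with open(path) as f:
    loaded = json.load(f)
```

With `indent=4`, the resulting file stays legible to anyone who stumbles across it, which was the whole point of choosing a plain-text format.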


Anyway, the user can now navigate to their config file and open it. Then they can press "Add page", which asks them to either create a default template or import one.

As the text may indicate, I am not too happy about the look of this window.

Creating a default template gives an overcomplicated window that will ask for a name, DPI, width and height, and the precise sizes of the margins and bleeds. Margins and bleeds are print terms.

When printing something, theoretically we…

  • Take a large piece of paper.
  • Align it to a printer
  • Print several pages at once on that large piece.
  • Cut the pages.
  • Fold the pages
  • and finally, bind the pages in some way or another.

The steps above have many places where things can go wrong. In particular, one of the first things people learned when using mechanical printing was that aligning text was difficult. So people introduced a margin into their text layout, so that even when the alignment was slightly off, the whole text would still fit on the page.

The same thing goes for cutting, folding, and binding. This is what the bleed is for. Sometimes, you want images that go all the way to the edge. So we create a second border, the bleeds, which indicate where the image will get cut off. The artist will paint over these too, but this method allows for determining where the absolutely important items, like the speech bubbles, go, as well as where the drawing doesn't need to be super detailed, just nice enough to look right.

So the template creation tool allows you to set a margin and a bleed and will create a page with guidelines at those places. An experienced artist will likely already have a collection of such templates, hence “import template”, which copies the chosen template to the templates folder.

CPMT will remember the template chosen for “Add page” and use it to create pages without showing the dialog. “Add page from template” will always show the dialog, listing all the templates in the template folder, and the user can configure the default template for “Add page” in the project settings.

Originally, I had wanted the user to select a page size, margins and bleeds in the project settings. However, those are a lot of things to fill in. Furthermore, there would also need to be a template for spreads (a panel that covers two whole pages), as well as a template for the cover, which is radically different from the regular pages. And these cannot be computed automatically, because there’s often a little margin in the middle or extra bleed to either side for such templates, and this can be unique per printing company. And that doesn’t even begin to cover situations where you are not making a print-style comic.

So I thought about this for a long time and decided that the most important things would be a way to 1) make images from template kra files and 2) make one of those templates a default, easy to add with a single button, while still allowing the others to be added in a simple manner.

So, once you press OK, or “add page” with a default template, it will open the template, resave it in the pages folder as projectname-firstavailablenumber.kra, and show it in the pages list.
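That “first available number” step is easy to get wrong once pages have been deleted in between, so it is worth spelling out. A plain-Python sketch of the idea — the helper name and the zero-padded numbering are my assumptions, not necessarily CPMT’s actual behaviour:

```python
import os

def first_available_page_path(pages_folder, project_name):
    """Find the first page filename not yet used in the pages folder."""
    number = 1
    while True:
        # Zero-padding keeps alphabetical and numerical order in sync.
        candidate = "{}-{:03d}.kra".format(project_name, number)
        path = os.path.join(pages_folder, candidate)
        if not os.path.exists(path):
            return path
        number += 1
```

Scanning from 1 upward means gaps left by deleted pages get reused, which keeps the folder tidy at the cost of page numbers not being stable over time.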

The Pages List

The pages list is probably the core element of the comics project management tool. On a technical level it is a QTableView with a QStandardItemModel, whose first column holds QStandardItems with the page’s “preview.png” (thumbnail) as the icon and the page title as the text, and whose second column holds the image’s subject as the text.


For the user, it is a list of pages, with their icon and title/filename in the first column and the description in the second. The pages can be rearranged by moving the row selector on the right, and the config will then update the order of the pages. Double-clicking a page title will open the file in Krita.


Originally, I wanted the description to be editable from the docker, and was almost successful, except that the Python zipfile library has no mechanism for overwriting or deleting files in an existing zip archive, so I couldn’t edit only the documentinfo.xml. This is a bit of a pity, as editing the description from the docker was very convenient for the half hour that it did work. Now the user has to go to the document info to fill in either the subject or description. I want to keep this kind of information inside the image, as that allows moving the image around irrespective of project, so storing the information in the config file isn’t an option.
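For the record, the standard workaround for zipfile’s missing delete/overwrite is to rewrite the whole archive, copying every member except the one being replaced. A hedged sketch (the function name is mine; a real .kra would additionally want its mimetype entry stored first and uncompressed, which I ignore here):

```python
import os
import shutil
import tempfile
import zipfile

def replace_in_zip(archive_path, member, new_data):
    """Rewrite archive_path, replacing `member` with `new_data` (bytes)."""
    fd, tmp_path = tempfile.mkstemp(suffix=".zip")
    os.close(fd)
    with zipfile.ZipFile(archive_path) as src, \
         zipfile.ZipFile(tmp_path, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            if item.filename == member:
                # Swap in the new content for this one member.
                dst.writestr(member, new_data)
            else:
                # Copy everything else through unchanged.
                dst.writestr(item, src.read(item.filename))
    shutil.move(tmp_path, archive_path)
```

It works, but rewriting a multi-megabyte .kra just to change one description string is exactly the kind of cost that makes the in-docker editing feel less attractive.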

This is also why the templates are stored inside the comics project, so that when moving to a new computer, the whole project should still work. I went through a lot of trouble to ensure that all the paths are relative, and right now this is also the part of the code that probably should be a little cleaner :D

Here’s where I learned that drag and drop reordering doesn’t work currently… *Sigh*

Anyway, the pages list allows reordering pages, accessing pages quickly, removing pages from the list (but not from disk, that’s too dangerous), and adding existing pages. This is all stored in the config, and thus also loaded from it.

Originally, the loading was done by opening the file in Krita and requesting the thumbnail and a title, but that could take up to a minute. Now, I get zipfile to open the kra file (which is a zip), load the “preview.png” into a QImage, and parse the “documentinfo.xml” with ElementTree to get out the title and subject/abstract for display. A part of me wonders if I should add a simple docker that just shows the “mergedimage.png” inside a given kra file for reference.
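The metadata half of that needs no Qt at all. A sketch with only the standard library — note that the real documentinfo.xml is namespaced, which I gloss over here by matching on the end of the tag name, and the function name is my own:

```python
import xml.etree.ElementTree as ET
import zipfile

def read_kra_info(path):
    """Pull title and subject out of a .kra file's documentinfo.xml."""
    with zipfile.ZipFile(path) as kra:
        root = ET.fromstring(kra.read("documentinfo.xml"))
    info = {"title": "", "subject": ""}
    # Walk every element; compare tag suffixes to sidestep XML namespaces.
    for element in root.iter():
        for key in info:
            if element.tag.endswith(key) and element.text:
                info[key] = element.text
    return info
```

Because only two small members of the archive are read, this stays fast even for large files, which is the whole point of skipping Krita for the listing.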

So the pages list is the most important element, but there are of course other elements, the second of which is the meta data.


As indicated in the setup section, writers usually don’t know the initial title of their work, and in general it isn’t too wise to force them to go through all the meta data information at the start. On a similar note, it is often hard to figure out what kind of meta data should go into a given field, and anyone who has attempted to read the actual specs of meta data standards knows that the people making these specs are insane and completely incapable of adding real world examples to their specs.

Personally, I also know that when I am writing, I slowly collect a selection of summaries/titles/keywords that should go into the metadata, so I figured it might be useful to create an organised place to put them.

So, what does comics metadata look like?

Starting with that: what do comic formats look like? As digital distribution is a pretty new thing, and comics in general are a bit slow with providing these things, most comic readers use a guerrilla format that probably got created somewhere around the nineties or early 2000s. It is basically an archive with images inside, ordered and read alphabetically.

I say it was created around that time because there are about four variations of the format. The simplest is CBZ, which is a zip archive with images. Then there’s CBR, which is a rar archive with the images, because around the early 2000s the rar archive was obviously so much better at compressing than zip files. Similarly, there’s CB7, which is of course a 7-zip archive with images. And of course, Linux nuts came up with CBT, a tar archive with images.

As you can tell, none of these have obvious metadata schemes. Luckily for us, a guerrilla format has guerrilla attempts at creating metadata. No less than five different schemes can be found on the internet.

  1. ComicInfo.xml (https://wiki.mobileread.com/wiki/ComicRack) is a file that is added by the windows comic managing software “Comic Rack”. It has no proper specification, but is apparently common enough to be considered relevant.
  2. CoMet.xml (http://www.denvog.com/comet/) is one of the attempts at making an official spec. I haven’t figured out if it is actually common.
  3. ComicBookInfo (https://code.google.com/archive/p/comicbookinfo/wikis/Example.wiki), which unlike the others is a json file stuck into the zip header. The logic being that this is easier to read than those nasty xml files. For the computer, that is; I don’t think most people realise there’s such a thing as a zip file header, or that it can hold information. Anyway, the spec seems to have been devoured by Google Code’s passing, leaving us with only this lone example file. But at least it has an example file.
  4. Comic Book Markup Language(http://dcl.slis.indiana.edu/cbml/) This… is some academic attempt at making comics machine readable. It says it has a spec. It doesn’t. It instead decides upon two extra tags for something or another spec, but that other spec isn’t really a spec either.
  5. ACBF (http://acbf.wikia.com/wiki/Advanced_Comic_Book_Format_Wiki), or Advanced Comic Book Format, is an open source attempt at making a slightly more advanced markup similar to CBML, but unlike CBML it actually has a spec and examples, even if all of that lives on a combo of a Wikia and a Launchpad site, the one full of more scripts than you can shake a stick at (wikia.com) and the other an extremely difficult to navigate website (launchpad). Nonetheless, there are two readers that can read it (sorta). One is ACBF Viewer, a PyGTK based application, and the other is Peruse. Well, Peruse master. Peruse from the KDE neon repository takes your cbz with acbf metadata just fine… and then shows nothing. It is fixed in master, apparently.

So, for the purpose of writing a meta-data editor, we first look at the things these guys have in common. Except CBML because that has no spec as far as I am concerned.

All of them have a title. The title is the title of a work. It’s pretty uncontroversial. Even Dublin Core calls the title of a work “title”, and it is a sentence.

All of them also have… a description. You know, whatever is on the back of the book to entice you to read it. In the English language, we have the following words for this piece of text:

  • Description
  • Blurb
  • Abstract
  • Summary
  • Synopsis (Yes really, often book publishers ask authors for a synopsis, that is the whole plot in a paragraph, so they can determine whether it is worth publishing the book, and often this same synopsis is chucked on the back-cover, leading to many books where the back-cover spoils the whole story.)

And from this you can gather that this is where many meta-data schemes get confusing:

ACBF stores this in the “Annotation” element. ComicRack in the “Summary” element, CoMet in the “Description” element, and ComicBookInfo in the “Comments” element. Dublin Core uses the “Description” element for this.

But still, it is relatively uncontroversial. Just give someone a text area, like QPlainTextEdit, and let them type in something.

Language is a QComboBox with entries pulled from a CSV of the ISO languages and their codes, defaulting to the system locale. Reading direction has to be separate from language because sometimes people write comics in the opposite direction of their language. Still, it is a combobox that defaults to the system language’s reading direction.

Of the four metadata schemes, CoMet and ComicRack define reading direction, but only CoMet calls it by its name, while ComicRack calls it “Manga”. I am not sure what ACBF intends here. CoMet is also a bit annoying in that, out of the four schemes, it is the only one that requests the proper language name instead of the ISO language code.


Then there’s the smaller metadata, like the Genre. All of the specs have genre listed as a separate “Genre” element that can occur several times. Dublin Core has no such element, so it goes into “Subject” instead. ACBF is unique in that it allows only a limited list here. ACBF also allows noting percentages, but that got too complicated for me too quickly.

So, because there are multiple entries, one would think: a QLineEdit with comma separation, since we cannot use a QComboBox, which doesn’t allow multiple entries at once. I also didn’t really want to use checkboxes, because that doesn’t capture what people feel the genre of their work is. For example, a horror writer differentiates between psychological horror and gothic horror, while a fantasy writer differentiates between urban fantasy, sword and sorcery, epic fantasy, etc.

So to give people an indication of which entries are expected, I set up a QCompleter on the QLineEdit. On initialisation, the metadata editor checks the key_genre folder for txt files and uses each line as a string-list entry, which then goes into the QCompleter. However, QCompleter doesn’t handle comma-separated entries by itself. Thankfully, many people on the internet had been attempting to make a QLineEdit with comma-separated autocompletion, so I was able to cobble something together from that. This way the artist can type in entries and be encouraged to pick certain ones. When exporting, we can then check which entries don’t match the known lists and chuck those into the keywords list instead.
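The comma-handling part is independent of Qt and worth spelling out: you complete only the fragment after the last comma, and then splice the chosen completion back into the line. A sketch of that logic in plain Python (the function names are my own, not the plugin’s):

```python
def completion_fragment(text):
    """The part of a comma-separated line that should be autocompleted."""
    return text.rsplit(",", 1)[-1].strip()

def apply_completion(text, completion):
    """Replace the fragment after the last comma with the chosen completion."""
    if "," in text:
        head = text.rsplit(",", 1)[0]
        return head + ", " + completion
    return completion
```

In the Qt version, `completion_fragment` is what you feed the QCompleter as the completion prefix, and `apply_completion` is what runs when the user activates a suggestion.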

I rather liked this approach, as it helps people, but because the data is pulled from outside the code, from a simple format, they can also extend it.

So I decided to reuse it for Characters, Format and Keywords as well.

Characters is also near universal. It was probably inspired by the big overarching universes in American comics. The ComicBookInfo json just puts it in “tags”, but it is a bit unique in that. ACBF has a list of characters with name elements for each character. CoMet and ComicRack have recurring “Character” tags.

Format shows up in both ComicRack and CoMet. I have no idea what the former means by it, and am equally confused by the description of the latter. I suspect it means a format-genre. Either way, it is not Dublin Core format, which means “physical format”.

All of them have a place for extra keywords, which I am thankful for.

Then there’s series, volume and issue. ACBF calls series a “sequence” but otherwise there’s not much confusion here. Just a QLineEdit and two QSpinBoxes with Vol. and No. as prefixes.

Then there’s the content rating. Originally I had this as a line edit that pulled from a text file as well, but as I realised different rating systems use the same letter to mean slightly different things, I decided to switch over to CSVs for this: the first row holds the title of the rating system, and after that, the first column is the letter and the second column the description.


The CSVs result in two comboboxes, the first of which gives the system and the second the letter. By using the combobox’ model we can attach the description as a tooltip to the letter. Both comboboxes are editable, for the person who has no idea they can add their own rating systems but still wishes to rate differently.
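Parsing such a rating CSV into something a combobox model can consume is plain standard-library work. The file layout below follows the description above (system title in the first row, then letter/description pairs); the function name is my own:

```python
import csv
import io

def parse_rating_csv(text):
    """Parse a rating-system CSV: first row is the system title,
    remaining rows are (letter, description) pairs."""
    rows = list(csv.reader(io.StringIO(text)))
    title = rows[0][0]
    # Skip malformed rows rather than crash on a hand-edited file.
    ratings = [(row[0], row[1]) for row in rows[1:] if len(row) >= 2]
    return title, ratings
```

The title feeds the first combobox, the letters the second, and the descriptions become the tooltips on the letter entries.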

Next up is the author. Typically an author is written down like “John Doe”, but for archiving purposes it is usually written like “Doe, John”. Furthermore, in comics there are often multiple creators credited. The latter is most common in American comics, where there’s a separate Writer, Penciller, Inker, Colorist, and Letterer, and often the Editor is also credited. In European comics this is usually just a “Scenario” and an “Artist”, and in manga it is one or two authors and an army of assistants, but the latter are never credited, and one wouldn’t know they existed unless they read the author’s ramblings in the volume-bound releases.

All of the specs have spaces to refer to authors. They of course, do this in wildly different manners.

ComicRack has seperate elements for “Writer”, “Penciller”, “Inker”, “Letterer”, “Colorist”, “CoverArtist”, “Editor” and “Translator”. CoMet is similar, except it calls “CoverArtist” “coverDesigner”. ComicBookInfo and ACBF both instead use a tag to refer to an author and then assign a role to them. ComicBookInfo calls them “Person” and ACBF makes an author element and says only specific roles are valid. The Dublin Core specifies Creator and Contributor tags, which can be refined with a meta tag.

Of these, ACBF is unique in that it actually bothers to separate the different parts of a name, as well as allowing nicknames and contact info.
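Given separated name parts, producing both the reading order and the archival order is a small formatting job. A hedged sketch — the helper names are mine, and real cataloguing rules for name particles like “van” or “de” are subtler than this:

```python
def display_name(first, last, middle="", nickname=""):
    """'John "Anonymous" Doe' style name, for reading."""
    parts = [first]
    if nickname:
        parts.append('"{}"'.format(nickname))
    if middle:
        parts.append(middle)
    parts.append(last)
    return " ".join(p for p in parts if p)

def archival_name(first, last, middle=""):
    """'Doe, John' style name, for cataloguing."""
    given = " ".join(p for p in (first, middle) if p)
    return "{}, {}".format(last, given)
```

Storing the parts separately, as ACBF does, is what makes both renderings possible; schemes that store one flat string force you to guess where the surname starts.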

So… how to make a GUI element for that? At this point I had gotten comfortable enough with Qt model/view programming that I just made a QTableView with a QStandardItemModel, with columns for Nickname/First Name/Middle Name/Last Name/Role/Email/Homepage. For the role, because many of the schemes ask for a limited list, I subclassed QStyledItemDelegate just enough to give a line edit with a QCompleter that takes its autocompletion entries from the txts in the key_author_roles folder. As with the pages list, people can add, remove and rearrange authors. By default, it has an entry for John “Anonymous” Doe, which seemed a sensible default that would demonstrate the GUI, but the first feedback I got was from someone who was not familiar with the meaning of the name “John Doe”, so I am a wee bit worried about translation.


I still want to add buttons to optionally scrape the pages list for authors, as well as the ability to generate a text object in the current open file with the credits nicely outlined.

Then, the final part of the meta data is the publishing meta-data.

All of the schemes have some place to put the Publisher Name and the Publishing Date. ACBF also allows for City, and I am a little confused why the others don’t.

The date is a little more confusing. ComicRack and ComicBookInfo require a separate publishing year, CoMet an ISO date, and ACBF any kind of date, with the ability to specify an ISO date explicitly. QDate and a QDate input to the rescue here.

Then, there’s ISBN. ACBF only accepts ISBN, and CoMet allows for an ISBN or some unique publishing number. The other two don’t have anything.

Then there’s the license. Like the content rating, I am pulling this from a CSV with some examples, and like the content rating, it is editable and defaults to nothing. My reasoning here is that we could have, for example, a teenager making a fancomic, and I think it would kind of suck if they got bothered because their fancomic has a license defined.


Either way, of the four schemes, ACBF asks for a license but no rights holder, CoMet only a rights holder and no license, and ComicBookInfo and ComicRack don’t ask for anything. Dublin Core says that the “Rights” tag should contain anything pertaining to the license and rights holder. I am not quite sure how to help people here either.

And that is all the metadata. So, the idea is that the author just types in some things, then later comes back and types in more, and eventually, somewhere over the course of the project, it ends up with a proper metadata definition.

So, I have been discussing these four metadata formats, does that mean I intend to export to them? Yes.


So, there’s a big fancy export button on the comics management docker. Pressing it does nothing, unless you have set up the export. On the other hand, after you have set up the export, pressing the button is the only thing that needs to be done.

The exporter right now can export to three formats. The first is CBZ, with all four metadata schemes available. The second is EPUB, which uses Dublin Core metadata. Finally there’s TIFF, which is not really an export format so much as an intermediary format for publishing programs like Scribus. While EPUB and CBZ need to be in 8-bit sRGB/grayscale, TIFF can handle multiple colorspaces and high bit depths.


Each of them has a resize menu, which can resize by width, height, percentage or DPI. These options were necessary because it is otherwise too difficult to handle differently sized documents sensibly. (Someone who uses spreads doesn’t want to resize by height, and someone working on a per-panel basis instead of per-page would prefer DPI or percentage resize.) For similar reasons the crop menu allows you to select “crop to outermost guides” or pixels, so that it is easy to define a per-image cropping mechanism.
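All four resize modes boil down to computing one uniform scale factor and applying it to both dimensions. A sketch of the arithmetic, with mode names of my own choosing rather than the exporter’s actual option strings:

```python
def resize_scale(width_px, height_px, dpi, mode, target):
    """Return the uniform scale factor for one of four resize modes."""
    if mode == "width":        # target is the desired width in pixels
        return target / width_px
    if mode == "height":       # target is the desired height in pixels
        return target / height_px
    if mode == "percentage":   # target is e.g. 50 for 50%
        return target / 100.0
    if mode == "dpi":          # target is the desired DPI
        return target / dpi
    raise ValueError("unknown mode: " + mode)
```

This is why per-panel workflows prefer DPI or percentage: those two modes give every image the same scale factor regardless of its pixel dimensions, while width/height modes would distort the relative sizes of differently sized panels.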


The exporter also allows removing layers by colour label, which is useful to make sure sketch layers or commentary layers are removed.

The exporter exports everything to the export folder, both packaged and unpackaged, so that it is easy to get to the right elements and adjust them.


So that is quite easy: you set it up, and then press the export button whenever you feel like producing a CBZ or EPUB that can be read in ACBF Viewer or other readers.

There are still metadata menaces here. I got pretty confused with the ACBF spec, as it asks for a ‘unique identifier for cataloguing purposes’ and I have no idea what that means. Note that it doesn’t say “universally unique id”, nor does it specify what kind of ID this ought to be, and none of the existing ACBF files have anything like it (despite the spec saying it is mandatory), so I decided to just make a QLineEdit and let someone else figure it out.

For EPUB… anyone who has attempted to read that spec knows it is overcomplicated (and yet still easier than CBML). So I just opened Sigil, made an EPUB that sorta looked like I wanted it to, and then reproduced that EPUB in detail. This still took me a full day; imagine if I had actually tried to read the spec.

Closing Thoughts:

So, I wanted a docker to organise and access my comic pages. I ended up with a docker that can (theoretically) support a full-on production process.

What is next?

I already noted here and there that there’s elements I want to improve. On top of those, I also want to…

  • make it possible for people to select a folder with additional meta-data auto-completion keys.
  • when the vector Python API is in, allow people to specify the name of a layer that holds the panel data, so this can be stored in ACBF.
  • when text works and is in the Python API, also give the ACBF export the text layers.
  • General GUI clean up. There’s parts of the editor that are a bit messy.
  • Improve the EPUB look.
  • Improve the other metadata.
  • Fix bugs… like reordering pages doesn’t actually work >_>
  • Krita has gained a beautiful plethora of bugs with saving/loading via Python thanks to async saving being implemented, so those bugs need to be catalogued, put onto Bugzilla, and fixed.

But for now, I am gonna take a break. I also poked someone to do some testing for me, and I might poke some more people for testing, and then fix some bugs. And then worry about python scripting translation support. Maybe then merge.

Something like that at the least.

Hello again and welcome to my blog! In this post I am going to cover what has happened since the first GSoC evaluation and give you an overview of the status of my work.

Since the last post I’ve been working on the implementation of the guitar plugin, along with adjusting the existing piano plugin to better suit the new framework.

As you may remember from my last post, Minuet currently supports multiple plugins to display its exercises. To change from one plugin to another, all you have to do is press the desired instrument name: for now, only “Guitar” and “Piano” are available.



In the past couple of weeks, I’ve been deciphering the guitar note representation and also the guitar chords. I don’t want to discourage anyone from learning how to play the guitar, but man… it was so hard and tiresome. Nevertheless, my previous piano experience helped me better understand the guitar specifics and get up to speed with the theory needed to complete my project.


Then I talked with my mentor on Hangouts and, using http://chordfind.com as a base (which is indeed a great start for beginners who want to learn guitar, piano and many other 4-string instrument chords), we agreed on two specific representations for each chord: Major, Minor, Augmented, Diminished, etc. for chords with the root note in the C-E range or in the F-B range.

Then I started working on the core of the guitar plugin: to keep the piano keyboard functional, I had to implement the exact same methods used by the piano plugin. I won’t go into too much coding detail (the code is available on GitHub on my fork of Minuet, and on the official GSoC branch when completed), but with a little tweak to the current ExerciseView component, I managed to create a guitar plugin that runs Minuet’s chord exercises.

It looks like this:

  • minor and major chords

minor and major chords.gif

  • diminished and augmented chords

diminished and augmented chords.gif

  • minor7 and dominant7 chords

minor7 and dominant7 chords

  • minor9 and major9 chords

minor9 and major9 chords


This article, “4 tricks with Alt+F2 for Plasma”, was born out of a conversation with a good friend. It is simply a post showing how efficient Plasma becomes when we invoke the KRunner application with its famous keyboard shortcut, aimed at newcomers to the system.

4 tricks with Alt+F2 for Plasma

I wrote a very similar article a while ago, but I prefer to keep updating these posts and making them more attractive for newcomers (the amount of information on the net pushes us to do this kind of thing from time to time). Besides, this way I can check whether the features come as standard in my Plasma 5.10, which runs on KDE neon.


For those who don’t know, one of the first things we Linux users get used to is the Alt+F2 key combination to invoke the command launcher. This saves us mouse movements and makes us a little more efficient.

In the Plasma desktop of the KDE Community, pressing Alt+F2 invokes the great KRunner, an incredibly complete command launcher whose features tend to be unknown to the newest users. Visually, it is a small bar that appears at the top of the screen with a text field where we type commands.

So I have put together four tricks with Alt+F2 for Plasma that will surely be very useful to you. Mind you, these are not all of them, so you can go on discovering the rest yourselves.

Running applications

The first feature we usually use is launching applications.

That is, when you press Alt+F2, KRunner appears and you type the application to launch. For example, typing “dolphin” brings up KDE’s magnificent file manager.

Moreover, one of the latest Plasma updates added a new feature: if the application you type is not installed but is available in the repositories (that is, the system’s “app store”), you can automatically open Discover to install it. A simple but powerful feature.

Performing simple numerical calculations

KRunner also has calculator features. When you invoke it with Alt+F2, you can type a simple mathematical operation (addition, subtraction, multiplication, division) and it will automatically show you the result.

In addition, if you use the “=” symbol you can perform more complicated calculations, such as solving quadratic equations like the following: “= solve( x^2 + 4*x - 21 = 0 )”.

Searching for documents on your hard drive

One of the KRunner features I use most on my computers is searching for folders or documents on the hard drive. No more browsing through folders with Dolphin: I simply type part of what I want to find into KRunner, and a list sorted by document type appears in the launcher. A real marvel that optimises the time I spend on my computers.


Converting units

The next feature is very useful when you do calculations in physics, chemistry or any other science. It is also useful for working with currencies from different trade zones.

As you will have guessed, it is a unit converter, used as follows: type “5 euros” and its conversion into different currencies appears; or type “3.2 kp” and it will show its conversion to N or kN. Fabulous and instant.


More information: KDE UserBase Wiki


When we went public with our troubles with the Dutch tax office two weeks ago, the response was overwhelming. The little progress bar on krita.org is still counting, and we’re currently at 37,085 euros and 857 donors. And that excludes the people who sent money to the bank directly. It does include Private Internet Access‘ sponsorship. Thanks to all of you! So many people have supported us that we cannot even manage to send out enough postcards.

So, even though we’re going to get another accountant’s bill of about 4500 euros, we’ve still got quite a surplus! As of this moment, we have €29,657.44 in our savings account!

That means that we don’t need to do a fund raiser in September. Like we said, we’ve still got some features to finish. Dmitry and I are currently working on

  • Make Krita save and autosave in the background (done)
  • Improve animation rendering speed (done)
  • Improve Krita’s brush engine multi-core adaptability (under way)
  • Improve the general concurrency in Krita (under way)
  • Add touch functionality back (under way)
  • Implement the new text tool (under way)
  • Lazy brush: plug in a faster algorithm
  • Stacked brushes: was done, but needs to be redone
  • Replace the reference images docker with a reference images tool (under way)
  • Add patterns and filters to the vector support

All of that should be done before the end of the year. After that, we want to spend 2018 working on stability, polish and performance. So much will have changed that the step from 3.0 to 4.0 is bigger than the step from 2.9 to 3.0, even though that included the port to a new version of Qt! We will be doing new fund raisers in 2018, but we’re still discussing what the best approach would be. Kickstarters with stretch goals are very much feature oriented, and we’ve all decided that it’s time to improve what we have, instead of adding still more features, at least for a while…

In the meantime, we’re working on the 3.2 release. We wanted to have it released yesterday, but we found a regression, which Dmitry is working hard on fixing right now. So it’ll probably be tomorrow.

August 15, 2017

This was the first time I used the new healing clone tool for the image editor, removing dust spots from a famous photo used in many online tutorials of similar programs. The user interface changed from what was initially planned, becoming more user friendly, as the functionality available in the editor itself enabled a friendlier scenario than the one I had in my head when I started coding. I will attach a screenshot of the tool and my attempt to fix the image in this post, and document the journey, with more details about the code and the tool’s usage, in the next few days.

Screenshot from 2017-08-15 22-53-56



The last week was mostly spent creating more tutorial levels, optimising the tutorial dataset, adding the ability to create wires in the playArea for tutorial mode when the level loads, and checking the correctness of the answer provided by the user.

The Dataset

The dataset for the tutorial mode is now updated to the following:

                    inputComponentList: [zero, one],
                    playAreaComponentList: [orGate, digitalLight],
                    determiningComponentsIndex: [1],
                    wires: [ [0, 0, 1, 0] ],
                    playAreaComponentPositionX: [0.4, 0.6],
                    playAreaComponentPositionY: [0.3, 0.3],
                    introMessage: [
                            qsTr("The OR Gate produces an output of 1 when at least one of the input terminal is of value 1"),
                            qsTr("Turn the Digital light on using an OR gate and the inputs provided")
                    ]

  • The inputComponentList denotes the items that are provided to the user and can be used any number of times. It is present in the ListWidget component.
  • playAreaComponentList denotes the items which are provided to the user in the playArea. Their position can be changed, but they cannot be deleted.
  • wires is an array denoting the wires which should be pre-defined in the playArea; each entry is defined in the following manner:
    [from_component_index, from_component_terminal_number, to_component_index, to_component_terminal_number]
    from_component_index: The index of the component from the playAreaComponentList, from which the wire is to be drawn
    from_component_terminal_number: The terminal number of the above component from which the wire is to be drawn
    to_component_index: The index of the component from the playAreaComponentList, to which the wire is to be drawn
    to_component_terminal_number: The terminal number of the above component to which the wire is to be drawn
  • playAreaComponentPositionX/Y: The x/y position in the playArea where the component is to be placed
  • introMessage: The message which is to be shown in the beginning of the tutorial level. If no message is required, it is kept blank.

Making playArea components indestructible

The components in the playArea are made indestructible by adding a boolean destructible property to the electrical components; the MouseArea is enabled via:

MouseArea {
        enabled: destructible
}

This property is disabled for the playArea components, by adding:

"destructible": false

while creating.

Adding wires

Pre-defined wires in the playArea are added by traversing the wires[] array and creating wires accordingly by calling the createWire(from_component, to_component, destructible) method.

        // creating wires
        for (i = 0; i < levelProperties.wires.length; i++) {
            var terminal_number = levelProperties.wires[i][1]
            var outTerminal = components[levelProperties.wires[i][0]].outputTerminals.itemAt(terminal_number)

            terminal_number = levelProperties.wires[i][3]
            var inTerminal = components[levelProperties.wires[i][2]].inputTerminals.itemAt(terminal_number)

            createWire(inTerminal, outTerminal, false)
        }

As with the components, destructible decides whether the wire can be deleted by the “delete” tool or not; it is set to false for playArea wires.

Checking answers

The answer checking process is divided into two parts:

  • The levels which only check whether the bulb is on or not
  • The levels which ask the user to create a circuit so that the bulb glows only under certain conditions

The first case is straightforward: we check the value of the bulb when the OK button is clicked.

        // the answer is correct if the bulb's input terminal carries a 1
        if (determiningComponents[0].inputTerminals.itemAt(0).value == 1)

For the second case, we traverse all possible input scenarios and check whether the output passes each test. If the configuration passes all the tests, the answer is declared correct; otherwise it is marked incorrect:

        var digitalLight = determiningComponents[2]
        var switch1 = determiningComponents[0]
        var switch2 = determiningComponents[1]

        var switch1InitialState = switch1.imgSrc
        var switch2InitialState = switch2.imgSrc

        for (var A = 0; A <= 1; A++) {
            for (var B = 0; B <= 1; B++) {
                switch1.imgSrc = A == 1 ? "switchOn.svg" : "switchOff.svg"
                switch2.imgSrc = B == 1 ? "switchOn.svg" : "switchOff.svg"

                // expected output for this level's operation (XOR here)
                var operationResult = A ^ B

                if (operationResult != digitalLight.inputTerminals.itemAt(0).value) {
                    // a failing combination: restore the switches and reject
                    switch1.imgSrc = switch1InitialState
                    switch2.imgSrc = switch2InitialState
                    return false
                }
            }
        }
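The same exhaustive check can be sketched independently of QML. Assuming the circuit under test is modelled as a plain function from the two inputs to the light's value (a simplification of the terminal lookup above, not the actual GCompris code):

```javascript
// Sketch: verify a candidate circuit against the expected operation
// (XOR here) by trying every input combination, exactly as the nested
// loops above do.
function checkAnswer(circuit) {
    for (var A = 0; A <= 1; A++) {
        for (var B = 0; B <= 1; B++) {
            var expected = A ^ B
            if (circuit(A, B) !== expected)
                return false    // one failing row is enough to reject
        }
    }
    return true                 // all four rows of the truth table passed
}
```

For example, a circuit wired as an AND gate fails the A=0, B=1 row and is rejected immediately.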

Future plans

For the next week, my plans are roughly:

  • Create more levels, with a proper difficulty curve for each of the components provided.
  • Add an option for “hint”, so that the user can seek help when they are stuck.


In our Akademy presentation, Kévin and I showed the importance of a better developer story: being able to work on a KDE module without having to install it. Running unittests and running applications without installing the module at all is possible, it turns out; it just needs a bit of effort to set things up correctly.

Once you require ECM version 5.38 (using find_package(ECM 5.38)), your libraries, plugins and executables will all go to the builddir's "bin" directory, instead of being built in the builddir where they are defined.
Remember to wipe out your builddir first, to avoid running outdated unit tests!
This change helps locating helper binaries, and plugins (depending on how they are loaded).

After doing that, see if this works:

  • make uninstall
  • ctest . (or run the application)

Oops, usually it doesn't work. Here's what you might have to do to fix things.

  • XMLGUI files: since KDE Frameworks 5.4, they can be embedded into a qrc file so that they can be found without being installed.
    The qrc should put the xmlgui file under ":/kxmlgui5/". You can use the script kde-dev-scripts/kf5/bundle_data_files.pl to automate most of this change.
  • Uninstalled plugins can be found at runtime if they are installed into the same subdir of the "bin" dir as they will be in their final destination. For instance, the cmake line install(TARGETS kio_file DESTINATION ${KDE_INSTALL_PLUGINDIR}/kf5/kio) indicates that you want the uninstalled plugin to be in builddir/bin/kf5/kio, which can be done with the following line:
    set_target_properties(kio_file PROPERTIES LIBRARY_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}/bin/kf5/kio")
    Qt uses the executable's current directory as one of the search paths for plugins, so this then works out of the box.
  • If ctest complains that it can't find the unittest executable, the fix is very simple: instead of the old syntax add_test(testname myexec) you want to use the newer syntax add_test(NAME testname COMMAND myexec)
  • Helper binaries for libraries: look for them locally first. Example from KIO:
    QString kioexec = QCoreApplication::applicationDirPath() + "/kioexec";
    if (!QFileInfo::exists(kioexec))
        kioexec = CMAKE_INSTALL_FULL_LIBEXECDIR_KF5 "/kioexec"; // this was the original line of code
  • Helper binaries for unittests: an easy solution is to just change the current directory to the bin dir, so that ./myhelper continues to work. This can be done with QDir::setCurrent(QCoreApplication::applicationDirPath());

There are two issues I didn't solve yet: trader queries that should find uninstalled desktop files, and QML components, like in kirigami. It seems that the only solution for the latter is to reorganize the source dir to have the expected layout "org/kde/kirigami.2/*"?

Update: this howto is now a wiki page.

It is time for foss-gbg to get going again. A week from now Zeeshan will talk about The good kind of Rust. Tickets are free, so if you are in Gothenburg, feel free to drop by for some snacks, the Rust talk and some lightning talks.

foss-gbg is a local group sharing ideas and knowledge around Free and Open Source Software in the Gothenburg area.

I just want to share some links. This is the type of article that makes me follow Raymond Chen.

As he lives outside of the open source space, his blog might be a gem that many have missed.

Hi, this post is general information about telemetry in Krita. I want to clarify some points. Soon we will launch preliminary testing of my branch. If testing is successful, it will go into one of the upcoming releases of Krita (not 3.2). Krita must follow the policy of...

I mentioned in my previous blog that I started with the note names activity. This will be a musical blog covering the different components that we have and some music knowledge :)

I have been fond of music, from playing the piano to the guitar, which is one reason I am working on background music and making the musical activities part of my GSoC. Music is generally represented with a staff. So what is a staff? The staff consists of 5 horizontal lines on which our musical notes lie. Lower pitches are represented lower on the staff and higher pitches are represented higher on the staff.

Repeater {
  model: nbLines
  Rectangle {
    width: staff.width
    height: 5
    border.width: 5
    color: "black"
    x: 0
    y: index * verticalDistanceBetweenLines
  }
}

nbLines = number of horizontal lines = 5

But with a blank staff, can you tell which notes will be played? No, we can't; we use a clef for that. We have two main clefs: the bass clef and the treble clef. More notes can be added to a staff using ledger lines, which extend the staff. We can specify the clef with which the notes are represented.

Repeater {
  id: staves
  model: nbStaves
  Staff {
      id: staff
      clef: multipleStaff.clef
      height: (multipleStaff.height - distanceBetweenStaff * (nbStaves - 1)) / nbStaves
      width: multipleStaff.width
      y: index * (height + distanceBetweenStaff)
      lastPartition: index == nbStaves - 1
      firstNoteX: multipleStaff.firstNoteX
  }
}

We can even have multiple staves by specifying nbStaves in the MultipleStaff component. For note names we have nbStaves = 1, with clef = treble for levels ≤ 10 and clef = bass for levels > 10.

MultipleStaff {
  id: staff
  nbStaves: 1
  clef: bar.level <= 10 ? "treble" : "bass"
  height: background.height / 4
  width: bar.level == 1 || bar.level == 11 ? background.width * 0.8 : background.width / 2
  nbMaxNotesPerStaff: bar.level == 1 || bar.level == 11 ? 8 : 1
  firstNoteX: bar.level == 1 || bar.level == 11 ? width / 5 : width / 2
}

I did various changes and fixes in the last week in note names which include:

  1. Adding highlighting to the options in the levels.
  2. Fixing the keyboard controls, which allow you to navigate between the options using the arrow keys and select the answer using the Enter or Return key.
  3. Adding the initial version of highlighting of the notes on the staff for note names.

In the coming days, I will work on the following things:

  1. Improving the highlight of notes on staff.
  2. Add a drag for the options in levels.
  3. Cleaning the code and other minor fixes :)

Did I tell you that I am also working on more animations for oware? Yes, we have more animations coming up for the movement of the seeds when they are captured to the score houses. I completed this animation pretty quickly compared to the time it took when I implemented the movement animations. That is probably due to everything I learnt while writing those animations, which made me realise that although they took a lot of time (and put me well behind my timeline :D), it was totally worth it. In the end our aim is to provide the best activities for kids with the best experience they can get, not just workable activities, along with clean and maintainable code to make it as easy as possible for new contributors, or anyone, to understand. Well, that's what you learn the most :)

I will share more about the note names activity and the score animation also in my next blog post :)

August 14, 2017


Sidenote: I'm working on Go language support in KDevelop. KDevelop is a cross-platform IDE with awesome plugin support and the possibility to implement support for various build systems and languages. Go is a cross-platform, open-source, statically-typed compiled language which aims to be simple and readable, and mainly targets console apps and network services.

During the last week I continued working on code completion support.
Firstly, I spent time investigating what else could be added to the existing support - and realized that Go channels weren't covered really well. "Channels" in the Go world are something like queues or, maybe more exactly, pipes. They provide the ability to communicate between different goroutines (think of them as lightweight threads) - you can send a value to a channel and receive it on the other side.
So, my first change was related to matching types while passing values to a channel - now it works correctly and suggests matching types with higher priority.
Aside from different value types, channels differ in direction - there are mono-directional and bidirectional channels: in, out, and in/out.
Because of that, my second change was aimed at providing support for matching these different kinds of channels. Now, if a function expects, for example, an in channel, both in and in/out channels will have higher priority than an out channel.

After doing that I began to open various Go files and projects to find remaining bugs, and got a segfault while parsing fmt/print.go. :( After some investigation I realized that in the case of a struct variable declaration with a literal (e.g. initializing struct fields inside a {} block) no context was opened, and that led to a crash later on. Although it took me some time to find where the real problem was and how to fix it, it's fixed now and even the 1142-line fmt/print.go opens successfully.

Besides that, I found that in the case of struct literal initialization the names of fields are not highlighted as usages - I am going to fix that during the next week and spend more time on testing and fixing the remaining issues.

Looking forward to next week!

Recently, we talked about how we’re broadening our offering towards the automation sector. In case you missed it, you can find all relevant information here as well as read our blog post here.

One of the biggest challenges in starting an automation project is to build a suitable communication stack. MQTT has received more and more popularity over the last years for managing telemetry data (i.e. collecting data from sensors, health status of devices etc.). This is why we are now extending our portfolio to further help and simplify the development workflow.

What is MQTT

MQTT describes itself as follows:

“It was designed as an extremely lightweight publish/subscribe messaging transport. It is useful for connections with remote locations where a small code footprint is required and/or network bandwidth is at a premium.”

Using a publisher-subscriber methodology puts the routing responsibility on the server (in this context called the “message broker”), to which all clients connect. Multiple levels of service quality can be specified to guarantee message delivery.

The connection itself usually builds on top of a TCP connection. However, it can use any ordered, lossless and bi-directional communication method.

How QtMqtt Fits Into the Picture

QtMqtt is a client implementation which can be used both for creating devices that send data and for monitoring solutions that receive and manage data. QtMqtt does not focus on the broker side.

One important item to mention is that we aim to have QtMqtt fully specification-compliant, unlike some other solutions. This implies support for:

  • Protocol levels 3.1 and 3.1.1 (commonly referred to as protocol level 4)
  • All QoS levels
  • Wildcards
  • Authentication
  • SSL connections
  • Last Will support

Let's dig a bit deeper and discuss how you actually use QtMqtt in your project.

Publishing data:

QMqttClient publisher;

publisher.publish("sensor_1/dataset/foo", "values", qosLevel);

Receiving data:
QMqttClient subscriber;
// ...
QSharedPointer<QMqttSubscription> sub = subscriber.subscribe("sensor_1/#", qosLevel);
connect(sub.data(), &QMqttSubscription::messageReceived, [&](QMqttMessage msg) {
    qDebug() << "New Message:" << msg.payload();
});
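To illustrate the wildcard rules used in the subscription above ("sensor_1/#"), here is a sketch of MQTT topic-filter matching in plain JavaScript. This is not the QtMqtt API, just the matching logic: '+' matches exactly one topic level, '#' matches any remaining levels.

```javascript
// Sketch: does a published topic match a subscription filter with wildcards?
function topicMatches(filter, topic) {
    var f = filter.split("/")
    var t = topic.split("/")
    for (var i = 0; i < f.length; i++) {
        if (f[i] === "#")          // multi-level wildcard: matches the rest
            return true
        if (i >= t.length)         // filter has more levels than the topic
            return false
        if (f[i] !== "+" && f[i] !== t[i])  // '+' matches any single level
            return false
    }
    return f.length === t.length   // no wildcard left: lengths must agree
}
```

So a subscription to "sensor_1/#" receives a message published to "sensor_1/dataset/foo", while "sensor_1/+" does not, since '+' covers only one level.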


It is crucial for any automation solution today to make sure that all communication is secure and safe. QtMqtt provides two means to achieve this:

  1. Authentication via username and password when a connection is established.
  2. Using SSL/TLS sockets as connection channel.

For the latter case, we can utilize QSslSocket as provided by Qt Network. As a convenience, QMqttClient has another member called connectToHostEncrypted(), which behaves similarly to QSslSocket's method of the same name.

Extending QtMqtt

While MQTT is mostly used via TCP, it isn’t hardwired to it. QtMqtt allows you to specify additional transport methods, which are based on either QIODevice or QAbstractSocket. This implies that you can create your own transport and pass it over to QMqttClient before establishing a connection.

One concrete example is to use MQTT over WebSockets, for which Qt provides a separate module. QWebSocket is not based on QAbstractSocket due to different means of sending and receiving data. However, the specification is very clear on how MQTT data has to be pushed via WebSocket (sent as binary data, must fit in one datagram, etc.). Hence, a convenience class can be implemented. The specific showcase can be found in the examples of the QtMqtt module.

If you found this post interesting, feel free to get in touch with us and get access to a prerelease version.

The post Introducing QtMqtt appeared first on Qt Blog.

August 13, 2017

I read Emmanuele Bassi's very interesting blog post about software distribution this week and thought a lot about it. Emmanuele kind of answers a presentation by Richard Brown (of openSUSE fame). While I haven't seen that presentation, I saw a similar one last year at the openSUSE conference and also talked with Richard about the topic. So I dare to think that I understand Richard's arguments and even agree with most of them.

Nevertheless I want to share some of my thoughts on the topic from the perspective of KWin maintainership. KWin is more part of the OS stack, so distributing through means like Flatpak is not a valid option IMHO. As KWin is close to the OS, we have very good integration with distributions. Plasma (which KWin is part of) has dedicated packager groups in most distributions, we have a direct communication channel to the distros, we know our packagers in large part in person, etc. etc. So within the open source distribution model we are in the best category. We are better positioned than, let's say, a new game which needs to be distributed.

So this is the background of this blog post. What I'm now going to do is describe some of the issues we hit just over the last year with distributing our software through distros. Many of these issues would not have been possible if we distributed the software ourselves without a distro in between (for the record: this includes KDE Neon!). To make it quite clear: I do not think that it would be a good idea for KWin to be distributed outside distributions. While thinking about this topic I came up with a phrase for what we are doing: “distribution management”.

We do lot of work to ensure that distributions don’t destroy our software. We actually manage our distributions.

Latest example: QML runtime dependency

Aleix created a new function in extra-cmake-modules to specify the QML runtime dependencies. Once it was available I immediately went through KWin sources, extracted all QML modules we use and added it to CMake, so that it’s shown when running CMake. Reference: D7273

This is a huge issue in fact. Plasma 5.10 introduced support for virtual keyboard in the lock screen. We had to add special code for handling that this runtime dependency is missing. I sent a mail to our distributions mailing list to remind distros to package and ship this dependency.

Obviously: if we distributed the software by ourselves this would not have been an issue at all. We would just bundle Qt Virtual Keyboard and be done. But as we are distributed through distributions we had to write special handling code for the case it’s missing and send out mails to remind distros to package it.

Incorrect dependencies – too many

Although we specify all required dependencies through CMake, our distributions might add random other dependencies. For example KWin on Debian depended on QtWayland (both client and server), although KWin needs neither. This got fixed after I reported it.

This is of course a huge problem. A common saying about KDE is that it has too many dependencies and that you cannot install software on e.g. GNOME, because it pulls in everything and the kitchen sink. That’s of course true if distros add incorrect additional package dependencies.

Incorrect dependencies – too few

Of course the other side is also possible. One can have too few dependencies. Again the example is Debian, which did not make KWin depend on Breeze resulting in incorrect window decoration being set. That we depend on it at compile time is also following the “distribution management” as distros asked us to add all dependencies through CMake. So we did and made KWin optionally depend on Breeze. Of course such distribution management does not help if distributions don’t check the CMake output.

Also here if we distributed ourselves we would have compiled with all dependencies.

Outdated dependencies

Another very common problem is that distributions do not have the dependencies which we need and want to use. In case of KWin this especially became a problem during the Wayland port. For a very long time we had to keep Wayland compile time optional so that distributions like Slackware [1] could continue to compile KWin. When we turned it into a hard dependency we had to inform distros about it and check with them whether it’s working. Of course just because you inform distros does not mean that it will work out later on.

That we have distros with outdated dependencies is always a sore point. Our users are never running the software which we are running, never getting the same testing. We have to keep workarounds for outdated dependencies (e.g. KWin has a workaround for an XWayland issue fixed in 1.19) and that of course results in many, many bug reports we have to handle, where we have to tell the users that their software is just too old.

Also we face opposition when trying to increase dependencies, resulting in distros considering dropping the software or reverting the changes.

Handling upgrades

Another reoccurring issue is handling updates. Our users are constantly running outdated software and not getting the bug fixes. E.g. Ubuntu 16.04 (LTS) ships with KWin 5.5. We currently support KWin 5.8 and 5.10. We constantly get bug reports for such old software and that’s just because it’s outdated in the distribution. Such bug reports only cause work for us and are a frustrating experience for the users. Because the only feedback they get is: “no longer maintained, please update to a current version”. Which is of course not true, because the distro does not even allow the user to upgrade.

Even when upgrading it’s a problem. We have to inform distros about how to handle the upgrade, of course that does not help, it fails nevertheless. We also never know what are allowed upgrade paths. For us it’s simple: 5.8 upgraded to 5.9, upgraded to 5.10, etc. If we would ship upgrades ourselves we could ensure that. But distros might go from 5.8 to 5.10 without upgrade to 5.9 ever. So we need to handle such situations. This was extremely challenging for me with a lock screen change in Plasma 5.10 to ensure the upgrade works correctly.

Random distro failure

An older example from Debian: during the gcc-5 transition KWin got broken and started to crash on startup without any chance to recover. This issue would of course not have happened if we had distributed KWin ourselves with all dependencies.

The experience was rather bad as users (rightfully) complained about the brokenness of KDE. I had friends asking me in person how it could be that we ship such broken software. Of course we were innocent and this was only in the distro. But still we (KDE) get the full blame for the breakage caused by the distro.

Useless bug reports

Thanks to distributions, all crash reports we get are useless. The debug packages are missing, so the backtraces are pointless. Even if debug packages are installed, the crash point is mostly missing. This is especially a problem with Arch as they don't provide debug packages. In 2017 KWin got 71 bug reports which are marked as NEEDSINFO/BACKTRACE because the reported backtrace is unusable.

Misinformed maintainers

I don’t know how often I got asked this: should we change Qt to be compiled against OpenGL ES? Every time I was close to a heart attack as this would break all Qt applications for all NVIDIA users (at least a few years ago there was no GLES support in the NVIDIA driver). Luckily in most cases the maintainers asked (why me and not Qt?), but I remember one case where they did not ask and just rolled it out. With the expected result.


I could go on with examples for quite some time. But I think it should be obvious what I want to show: even for well integrated software such as KWin the distributions are not able to package and deliver correctly. As an upstream developer one has to spend quite some time on managing the distributions, so that the software is delivered in an acceptable state to the users.

Distros always claim that they provide a level of quality. Looking at the examples above I have to wonder where it is (granted, there are positive exceptions like openSUSE). Distros claim they do a licensing check. That's useful and very much needed, but is it required that openSUSE, Debian and Arch do the same check? Furthermore I personally doubt that distros can do more than a superficial check; they would never find a case where someone copies code which is license-incompatible.

Given the experience I have made with distros over the last decade working on KWin I am not surprised that projects are looking for better ways to distribute software. Ways where they can control the software distribution and ensure it works. Ways where their software is not degraded due to distros doing mistakes.

All that said, I do agree with Richard in most points, I just don't think it works. If everybody used openSUSE, then Richard would be right with almost all points. But the reality is that we have a bazillion distributions doing the same work and repeating the same mistakes. Due to that I think that for software distribution Flatpak & Co. are a very valid approach to improve the overall user experience.

[1] Slackware for a long time did not have Wayland as that would need Weston and Weston depends on PAM which is only the optional fallback for no logind support, which Slackware of course does not have

Last month I had the opportunity to visit Almería, Spain for Akademy 2017, KDE's annual world summit. Akademy makes it possible to meet fellow KDE contributors, some of whom you only know by their IRC nicknames (yes, I am not old enough to know every contributor yet :p). Here are a few things I did at Akademy 2017.

Plasma Mobile

On the 2nd day I presented a talk about Plasma Mobile; slides for that talk are available here. The talk covered various topics:

  • Achievements of Plasma Mobile project for year 2017
  • Project Halium
  • New devices

In addition to all the good things, I also discussed the areas where the Plasma Mobile project needs improvement and where the community can help:

  • Quality Assurance
  • Testing
  • Applications

There are several external factors which also matter for the Plasma Mobile project:

  • Kernel versions are too old, making it hard to maintain security
  • Lack of open devices
  • Devices require a closed-source BSP to function to their full extent

I also talked about the postmarketOS project, which aims for a 10-year life-cycle for phones. postmarketOS currently uses Weston as its reference user interface, and is interested in using Plasma Mobile as the reference user interface instead.

I also talked about various programs which are working towards open devices:

  • Open devices program by Sony
  • Effort to support the devices by mainline kernel
  • Fairphone
  • Purism’s Linux Phone(?)

Video recording for this talk is not available yet, but it will be available at files.kde.org soon.

We also scheduled a Plasma Mobile BoF, where we discussed the Plasma Mobile vision, strategy, convergence and further planning with the rest of the Plasma team.

Plasma BoF

KDE Student Programs BoF

In addition to being the maintainer of the Plasma Mobile project, I am also part of the KDE Student Programs administration team. On Tuesday we organized a BoF session where students, mentors and admins from various programs such as GSoC, GCI, SoK and OPW took part and discussed how we are doing in this year's programs. Good or bad? Where do we need to improve? Overall this was a quite productive discussion.


KDE Neon BoF

I also participated in the KDE Neon BoF, where I mainly discussed the addition of the ARM architecture to the Neon CI and also learned how OpenQA is used on the KDE Neon CI. I plan to eventually use OpenQA for Plasma Mobile images.


Overall this was a very productive Akademy, where we discussed various topics which are typically hard to discuss over communication media like e-mail or IRC. I would like to thank KDE e.V. for covering my flight and accommodation costs to attend Akademy. I will have a chance to attend another event next month: the Randa Meetings 2017. This year's main topic is making KDE more accessible.

However, to make the Randa Meetings possible the KDE community needs your help. Please donate at the Randa Meetings 2017 Fundraising Campaign.

Older blog entries


Planet KDE is made from the blogs of KDE's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet KDE with RSS, FOAF or OPML.