
August 21, 2017

Over at lwn.net, there is an article on the coming WebKitGTK apocalypse. Michael Catanzaro has pointed out the effects of WebKit’s stalled development processes before, in 2016.

So here’s the state of WebKit on FreeBSD, from the KDE-FreeBSD perspective (and a little bit about the GTK ports, too).

  • Qt4 WebKit is doubly unmaintained; Qt4 is past its use-by date, and its WebKit hasn’t been meaningfully updated in years. It is also, unfortunately, the WebKit used in the only officially available KDE ports, so (e.g.) rekonq on FreeBSD is the KDE4 version. Its age shows in, among other things, failing to render a number of current KDE.org sites properly.
  • The Qt5 WebKit port has been updated, just this weekend, to the annulen fork. That’s the “only a year and a half behind” state of WebKit for Qt5. The port has been updated to the alpha2 release of annulen, and should be a drop-in replacement for all ports using WebKit from Qt5. There are only 34 ports using Qt5 WebKit, most of them KDE Applications (e.g. Calligra, which was updated to the KF5 version some time back).
  • Webkit1-gtk2 and -gtk3 look like they are unmaintained.
  • Webkit2-gtk3 looks like it is maintained and was recently updated to 2.16.6 (latest stable release).

So the situation is not particularly good, perhaps even grim for Qt4 / KDE4 users (similar to the situation for KDE4 users on Debian stable). The transition of the KDE FreeBSD ports to Qt5 / Frameworks 5 / Plasma 5 and KDE Applications will improve things considerably, updating QtWebKit and providing QtWebEngine, both of which are far more up-to-date than what they are replacing.

Today I have it easy writing this post: I only need to copy and paste the Maratón Linuxero press release, which was put together in the Telegram group in barely an hour and which shows the great things that can be achieved by working collaboratively. Naturally, I encourage you both to listen to the podcast and to spread this press release on social networks, virtual and real alike.

Maratón Linuxero Press Release


Download the press release in Spanish or English:


The Maratón Linuxero is a project created by GNU/Linux podcasters and listeners who want to put on a live event using Free Software applications and services. On Sunday, September 3, from 15:00 to 24:00 (Spanish time, UTC+2) we will offer 9 hours of broadcasts with Spanish-speaking podcasters.

Its origin was to see whether it was possible to pull off live broadcasts as other organizations have done, but without resorting to proprietary systems, or at least using ones aligned with Free and Open Source Software.

Maratón Linuxero Press Release

At first we considered services such as Mumble or Butt; we are currently using Jit.si together with OBS Studio to broadcast live on YouTube, where both the ease of reaching a large number of users and the feedback its chat provides made us settle on this formula. We have run test broadcasts, available both on YouTube and as audio podcasts, and will do a total of 5 before the September 3 event.

Another aspect we want to highlight is the collaboration of Spanish Linux companies. Both PCUbuntuVant and Slimbook did not hesitate to back this project and join it, offering GNU/Linux products for giveaways that we will hold on the day itself and during the last test broadcast (August 27 from 15:00).

Giveaways at the Maratón Linuxero

The proposed live schedule is as follows (2 weeks before the event we will add each slot’s topic):

Time Podcaster Topic
15:00-16:00 Podcast Linux
16:00-17:00 Eduardo Collado
17:00-18:00 Yoyo Fernández
18:00-19:00 José GDF y DJ Mao Mix
19:00-20:00 KDE España Podcast
20:00-21:00 Ubuntu y otras hierbas
21:00-22:00 Ugeek y Mosquetero Web
22:00-23:00 Neoranger y Enderhe
23:00-00:00 Paco Estrada

Each hour will be hosted by one or more Spanish-speaking communicators and/or podcasters widely recognized by the GNU/Linux communities.

Not only podcasters are collaborating, but also system administrators, developers, designers and artists, producing the project’s blog, services, posters, promos and videos.

Posters, Promos and Videos

At the moment the Telegram channel serves to organize us and make decisions, and it is a source of knowledge and experience for the community we have built around the Maratón Linuxero: https://t.me/maratonlinuxero

For more information, here are the ways to contact us:

I recently followed the advice of @sehurlburt to offer help to other developers. As I work with Qt and embedded Linux on a daily basis, I offered to help. (You should do the same!)

As it is easy to run out of words on Twitter, here comes a slightly more lengthy explanation of how I build the latest and greatest Qt for my Debian machine. Notice that there are easier ways to get Qt: you can install it from packages, or use the installer provided by The Qt Company. But if you want to build it yourself for whatever reason, this is how I do it.

The first step is to get the build dependencies onto your system. This might feel tricky, but apt-get can help you here. To get the dependencies for Qt 5, simply run sudo apt-get build-dep libqt5core5a and you are set.

The next step is to get the Qt source tarball. You get it by going to https://www.qt.io/download/, selecting the open source version (unless you hold a commercial license) and then clicking the tiny View All Downloads link under the large Your download section. There you can find source packages for both Qt and Qt Creator.

Having downloaded and extracted the Qt tarball, enter the directory and configure the build. I usually do something like
./configure -prefix /home/e8johan/work/qt/5.9.0/inst -nomake examples -nomake tests. That builds everything but skips the examples and tests (you can build these later if you want to). The prefix should point to someplace in your home directory. The prefix handling has had some peculiar behaviour in the past, so I make sure not to leave a trailing slash after the path. When configuration has run, you can look at the config.summary file (or a bit higher up in the console output) for a nice summary of what you are about to build. If this list looks odd, you need to look into the dependencies manually. Once you are happy, simply run make. If you want to speed things up, use the -j option with the highest number you dare (usually the number of CPU cores plus one). This will parallelize the build.

Once the build is done (this takes a lot of time, expect at least 45 minutes with a decent machine), you need to install Qt. Run make install to do so. As you install Qt to someplace in your home directory, you do not need to use sudo.

The entry point to all of Qt is the qmake tool produced by your build (i.e. prefix/bin/qmake). If you run qmake -query you can see that it knows its version and installation point. This is why you cannot move a Qt installation around to random locations without hacking qmake. I tend to create a link (using ln -s) to this binary somewhere in my path so that I can run qmake-5.9.0 or qmake-5.6.1 or whatnot to invoke a specific qmake version (caveat: when changing the Qt version in a build tree, run qmake-version -recursive from the project root to regenerate all Makefiles for the correct Qt version, otherwise you will get very “interesting” results).

Armed with this knowledge, we can go ahead and build QtCreator. It should be a matter of extracting the tarball, running the appropriate qmake in the root of the extracted code, followed by make. QtCreator does not have to be installed; instead, just create a link to the qtcreator binary in the bin/ subdirectory.

Running QtCreator, you can add Qt builds under Tools -> Options… -> Build & Run. Here, add a version by pointing at its qmake file, e.g. the qmake-5.9.0 link you just created. Then it is just a matter of picking Qt version for your project and build away.

Disclaimer! This is how I do things, but it might not be the recommended or even the right way to do it.

August 20, 2017

Focus on the ImageViewer

  • The major focus for this period was the ImageViewer, which shows a single image.
  • Made the ImageViewer full-screen capable
    • The ImageViewer can show the image in windowed mode as well as full-screen mode
    • Extra controls do not occupy space used to show the image, so there is less distraction
    • The user can toggle the viewing mode by pressing the F key

      Windowed

      Full Screen

  • Added a “No Images found” label to the AlbumView screen when there are no images matching a particular filter (selectable from the sidebar), so that the user does not assume the application is running slowly or still loading images

      No Images found

  • When performing a bulk delete operation, the view is updated after a fixed interval (200 ms)
    • Earlier the view was updated after every single delete operation, which made the application unresponsive
    • Now the view is updated on that interval, regardless of the number of images
  • Added two actions to the ImageViewer
    • back action
      • Takes the user back to the AlbumView grid
    • share action
      • This action is used to share the image, and the shared URL is also copied to the clipboard
      • The action uses KDE’s Purpose framework, which identifies the MIME type of a file and shows the sharing options for it
      • Since we are just sharing images, the sharing options are
        • “Imgur” - share the image on imgur
        • “Send to Device” - uses KDE Connect to share the image with a connected device
        • “Send to contact”
        • “Save as” - save the image to the local filesystem
        • “Twitter” - shares the image on Twitter with accompanying text. You have to set up your Twitter account in the settings first

          Share action

  • Added a contextDrawer to the ImageViewer to show options for editing the image
    • To implement this we had to use Kirigami’s new layer concept, which lets us add layers to a column of Kirigami’s rowStack. Because the ImageViewer is separate from the other columns in the row, it had to be implemented this way
    • The actions in the contextDrawer for now are Rotate left and Rotate right

    Context Drawer

  • Editing of the image is handled in the C++ class ImageDocument
    • This gives us better control over image editing, as the image being edited is an instance of QImage
  • Currently working on some more editing actions such as “Brightness”, “Saturation”, etc.

Sidenote: I'm working on Go language support in KDevelop. KDevelop is a cross-platform IDE with awesome plugin support and the possibility to implement support for various build systems and languages. Go is a cross-platform, open-source, compiled, statically-typed language which aims to be simple and readable, and mainly targets console apps and network services.

During the last week I worked on polishing existing features.

Firstly, I spent time improving declaration handling: inside a "short variable declaration" via ":=", all of the variables on the left side were always redeclared. But Go's behaviour here is that if a variable was previously declared in the same scope, it isn't redeclared but simply reused. So, after my change, in lines such as:
result1, err := DoWork()
result2, err := DoWork()
err is no longer declared twice and is handled as a single variable.
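For reference, this language rule can be seen in a small self-contained program (doWork here is a hypothetical stand-in for the DoWork call above):

```go
package main

import (
	"errors"
	"fmt"
)

// doWork is a hypothetical stand-in for DoWork: it returns a value
// and a (non-nil) error.
func doWork() (string, error) {
	return "ok", errors.New("boom")
}

func main() {
	// The first := declares both result1 and err.
	result1, err := doWork()
	// The second := declares result2 but merely reassigns the existing
	// err: a short variable declaration only requires at least one new
	// variable on its left-hand side.
	result2, err := doWork()
	fmt.Println(result1, result2, err != nil) // ok ok true
}
```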

Secondly, I worked on making field initialization inside a struct literal appear as a use of those fields.
Consider this example:
type TestStruct struct {
	Name string
}

var TestVariable = TestStruct{Name: "Works"}
Previously, "Name" inside the TestVariable declaration wasn't highlighted at all: you weren't able to check its type, etc. After my change a "use" is added in that place, and it shows all the info related to the Name field of TestStruct.

Aside from that, I improved the handling of embedded structs (first mentioned in my 4th weekly post): embedding now works through a pointer too (as stated in the Go specification). So, if a struct TestStruct contains an anonymous *AnotherStruct pointer, all members of AnotherStruct are available on TestStruct too.

Looking forward to next week!

Since 2015 I and other people have been talking about Evolving KDE – meaning reflecting on where we are, where we want to go and how we will get there. We have made great strides with defining our vision and mission since then. It has not been an easy exercise but a necessary one because it gives us focus and clarity about our purpose.

Our vision is: “A world in which everyone has control over their digital life and enjoys freedom and privacy.” We stand behind this. We want to fill this vision with life now. We came together at Akademy to discuss how to do that. How can we give the whole KDE community the opportunity to express what they think we should all be working on right now? How can we find all the creative ideas and ambitions that are hidden in so many of our community members? And how can we talk about them all together? I believe we have found a way.

Starting today, all of KDE is invited to propose goals to work on for the next 3 to 4 years. We will then discuss and refine them. Finally, we will vote for the goals we should pursue together. Goals can be about anything you consider important; they don’t have to be about writing code. The top 3 proposals will be supported in various ways, for example with sponsorship of a sprint and presentation slots at next year’s Akademy. This way we will shine a spotlight on the most important things we are working on and together support that work in the best way we can. The plan is to do this every year and add one or two goals to the mix every time.

The timeline looks as follows:

  • Today until beginning of October: work on the proposals
  • All of October: talk about the proposals
  • First two weeks of November: vote on the proposals by everyone with a KDE contributor account
  • Middle of November: publish the results

To make it more concrete, here are some examples of potential goals that could come out of this:

  • Improving the Developer Story: a new contributor should be able to create his first patch to any KDE application in 15 minutes or less;
  • Big in Asia: users in Asia should be able to write in their writing system in any of the software produced by KDE;
  • Appeal to All our Senses: a visually impaired user should be able to use all the software produced by KDE;
  • Virtual Reality Painting: artists should be able to paint in 3D straight from a VR world using software produced by KDE;
  • Speaking Your Language: 90% of the computer users worldwide should be able to use the software produced by KDE in a language in which they are fluent.

Do you have an idea for a goal for KDE? Get a small group of people together and propose it today by adding it here.

Thank you to Kévin Ottens, Mirko Boehm, David Faure, Frederik Gladhorn and everyone who helped flesh this idea out.

At some point, the KDE4-era KDM is going to end up unmaintained. The preferred display or login manager for KDE Plasma 5 is SDDM, which is Qt-based, and QML-themeable. In Area51, the unofficial KDE-on-FreeBSD ports repository, we’ve been working on Plasma 5 and modern KDE Applications for quite some time. One of the parts of that is, naturally, SDDM.

There’s x11/sddm in the plasma5/ branch right now, with a half-dozen code patches which I’ll have to look into for upstreaming. I decided to try building it against current official ports — that is, current Qt5 on FreeBSD — and using it to log in to my daily FreeBSD workstation. One that runs KDE4. That is immediately a good test, I think, of SDDM’s support for not-the-obvious-X11-environment.

So the good news: it compiles, installs, and runs. We’ve added a few tweaks, and still need to figure out which packages should be responsible for adding .desktop files for each environment. SDDM now installs an xinitrc.desktop, for the most-basic case. Switching kdm_enable to sddm_enable in rc.conf is all you need.

The bad: some things aren’t there yet, like shutdown support. And, at least with my nVidia card, fonts can vanish from the login screen after switching VTs.

The ugly: we really need to spend an hour or two on doing some branding material for SDDM / Plasma 5 / KDE Applications on FreeBSD. Basically slapping Beastie on everything. Graham has created some graphics bits, so we’ve got something, just not packaged nicely yet.

Speaking of the Good, the Bad, and the Ugly, I re-watched it recently, but now with the added interest of figuring out where in Tabernas or Rodalquilar it was; turns out the beach past the airport is also part of the desert scenes. And SDDM can soon be part of the FreeBSD scene.

Although KDE Blog is a personal blog, it is always open to contributions from other people. Such is the case with this new article by the writer Edith Gómez, editor at Gananci, passionate about digital marketing, specialized in online communication, and becoming a regular on the blog. On this occasion she presents “5 steps to become a freelance web developer”, which are obviously not the only ones but can serve as a reference.

5 steps to become a freelance web developer

The truth is that becoming a freelance web developer has its benefits. Those who are just starting out can do it from practically anywhere in the world (with internet, of course), or from the comfort of their own home.

You can set your own hours, control your workflow and set your own rates; in short, achieve your own financial freedom. According to Gananci, these are some steps to achieve financial freedom.

Web developers are among the profiles most in demand in almost any industry. And since demand keeps growing, your chances of finding good clients and building a rather lucrative career are within reach.

Now you’re probably wondering: how do I become a freelance web developer?

Here are the steps:

  1. Learn about technologies and languages

Web development projects require knowledge of more than one programming language. This means that the more technologies and languages you know, the more jobs you will be able to accept.

As a first step, you can begin with languages that are versatile yet in high demand, such as Python.

  2. Set up your business

The next step is to set up all the logistics of your business. In fact, there are several steps you should follow when starting your freelance business:

  • Make an appointment with an accountant who can help you determine the best structure for your business. This information will be quite useful for the following steps.

  • Register your business name in the state where you live or work.

  • Get up to speed on the whole tax process.

  • If applicable, take out liability insurance.

  • Buy the software and all the equipment needed to set up your home office.

  • Create a business plan. This document is extremely important for giving you a vision and a guide for your business, since it lets you set goals and measure achievements.

  3. Build your freelancer website

Perhaps one of the hardest parts of entering the freelance world is having a portfolio that shows all your completed work.

For this reason, your website should act as a kind of showcase. So take your time to create an interactive, attractive site that uses the latest design trends.

Also, make sure everyone can see that you were the one who designed the site and that you can do the same for your clients.

Include a section where you talk about yourself, your passion for the field, how you can help them, and what they will gain by choosing you.

It would also be a good idea to let visitors interact with you through links to your various social networks, or perhaps to add a chat. Don’t forget to create a blog where you can write about your experience with the latest web development trends and techniques.

  4. Market your services

Before you start marketing your services as a freelancer, ask yourself: what is my target market?

If you have no idea, here are some examples:

  • Small local businesses that don’t yet have an online presence.

  • Non-profit organizations whose websites are not practical.

  • Retail companies that don’t offer an online store.

  • You can also focus on a single industry.

Once you determine your target market and set your rates, you can start marketing your services.

In most cases, your first clients won’t come through your website; you’ll have to seek them out yourself and offer your services in person. If you want to land clients, you can learn these tips for finding the best clients.

Create business cards and carry them with you everywhere; you never know where or when you might run into a potential client.

  5. Stay up to date

Don’t settle for what you already know; keep studying the latest technologies and languages as they appear.

When you’re not working, use that time to earn more certifications, whether online or in person. This will let you stay current as a developer and grow your client base.

Becoming a freelance web developer requires a large investment of time and effort. The result will be a career that gives you more freedom, flexibility, peace of mind and income.

Keep learning and working on your skills so you can offer your clients the best possible service.

Are you ready to start your freelance business? Do you know any other steps that could extend this list? Tell us!

A lot has happened since the last release, so let me bring you up to speed on what is cooking for the 0.4 release.
We’ve been mostly focusing on ironing out UX problems all over the place. It turns out that when writing desktop applications using QtQuick, you end up with a lot of details to figure out for yourself.

Kube Components

We noticed that we end up modifying most components (buttons, listviews, treeviews, …), so we ended up “subclassing” (as far as that exists in QML) most components. In some cases this is just to consistently set some default options which we would otherwise have to duplicate; in some cases it’s about styling, where we have to replace the default styling either for purely visual reasons (to make it pretty) or for additional functionality (proper focus indicators).
In some cases it’s even behavioral, as in the scrolling case you’ll see later on.

In any case, it’s very well worth it to create your own components as soon as you realize you can’t live with the defaults (or rely on the defaults of a framework like Kirigami), because you’ll have a much easier time at maintaining consistency, improving existing components and generally just end up with much cleaner code.


One of the first issues to tackle was the scrolling behavior. Scrolling is mostly implemented in Flickable, though e.g. QtQuick.Controls.ScrollView overrides its behavior to provide a more desktop-like scroll feeling. The problem is that Flickable’s flicking behavior is absolutely unusable on a desktop system. It depends a lot on your input devices; with some high-precision trackpads it apparently ends up doing alright, but in general it’s just designed for touch interaction.

Problems include:

  • Way too fast scrolling speed.
  • The flicking lasts way too long and is only stoppable by scrolling in the opposite direction (at least with my trackpad and mouse).
  • Difficulty in fine positioning, e.g. in a listview; scrolling is generally already too fast and sometimes the view just dashes off.

These problems are unfortunately not solvable by somehow configuring the Flickable (believe me, I’ve tried), so what we ended up doing is overriding its behavior. This is done using a MouseArea that we overlay on the flickable (ScrollHelper.qml) to manually control the scrolling position.

This is a very similar approach to what also QtQuick.Controls.ScrollView does and what Kirigami does as well for some of its components.

It’s not perfect and apparently doesn’t yet play nicely with some mice, as fine tuning is difficult across input devices. There is a variety of high/low-precision devices, some of which give pixel deltas (absolute positioning), some of which give angle deltas (which are some sort of ticks), and some of which give both and don’t tell you which to use. What seems to work best is converting both into absolute pixel deltas and then just using either of the values (preferably the pixel delta one, it seems). This gives you about the behavior you get in e.g. a browser, so that works nicely IMO.
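As a rough sketch of that normalization idea (in Go for illustration, not Kube’s actual QML/C++; the 120-units-per-notch figure follows Qt’s angleDelta convention of eighths of a degree with 15 degrees per notch, and the line constants are assumed tuning values, not from any real API):

```go
package main

import "fmt"

// normalizeWheelDelta reduces a wheel event to an absolute pixel delta:
// prefer the high-precision pixel delta when the device reports one,
// otherwise derive pixels from the angle delta (120 units per notch on
// a classic wheel). linesPerNotch and lineHeight are assumed constants.
func normalizeWheelDelta(pixelDelta, angleDelta int) int {
	if pixelDelta != 0 {
		return pixelDelta // high-precision device: trust it directly
	}
	const linesPerNotch = 3 // lines scrolled per wheel notch
	const lineHeight = 20   // assumed pixel height of one text line
	notches := float64(angleDelta) / 120.0
	return int(notches * linesPerNotch * lineHeight)
}

func main() {
	fmt.Println(normalizeWheelDelta(0, 120))   // one notch of a classic wheel -> 60
	fmt.Println(normalizeWheelDelta(-33, 240)) // pixel delta wins -> -33
}
```

This mirrors the behavior described above: either delta alone is enough to produce a usable scroll distance, with the pixel delta taking precedence.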

For most components this was fortunately easy to add since we already had custom components for them, so we could just add the ScrollHelper there.
For others like the TreeView it was a bit more involved. The reason is that the TreeView itself is already a ScrollView, which not only implements a different scrolling behavior, but also brings its own scrollbars which look different from what we’re using everywhere else. The solution ended up being to wrap it with another Flickable so we can use our usual approach. Not pretty, but the fewer components we have that implement the same thing yet again in a different way the better.

Focus visualization

As I started to look into keyboard navigation the first thing I noticed was that the focus visualization was severely lacking. If you move around in the UI by keyboard only you always need to be able to follow the currently focused item, but many of our components didn’t differentiate between having keyboard focus or being selected and sometimes lacked a focus visualization altogether. The result was that the focus would randomly vanish as you for instance focused an already selected element in a listview, or you couldn’t differentiate if you have now moved the focus to another list-item or already selected it.

The result of it all is a highlighting scheme that we have now applied fairly consistently:

  • We have a highlight for selected items
  • We have a lighter highlight for focus
  • …and we draw a border around items that have focus but are somehow not suitable for the light highlight. This is typically either because it’s e.g. text content (where a highlight would be distracting), or because it’s an item that is already selected (highlight over highlight doesn’t really work).

Once again we were only able to implement this because we had the necessary components in place.

Keyboard navigation

Next up came keyboard navigation. I had already taken a couple of stabs at this, so I was determined to solve it for good this time. Alas, it wasn’t exactly trivial. The most important thing to remember is that you will need a lot of FocusScopes. FocusScopes are used to split the UI into focusable areas that can then have focusable subareas and so on. This allows your custom-built component, which typically consists of a couple of items, to deal with focus in its own little domain without worrying about the rest of the application. It’s quite a bit of manual work with a lot of experimenting, so it’s best done early in the development process.

The rest is then about juggling the focus and focusOnTab properties to direct the focus to the correct places.

Of course arrow-key navigation still needs to be implemented separately, which is done for all list- and treeviews.

The result of this is that it’s now possible to completely (I think?) navigate kube by keyboard.

There are some rough edges, like the webview stealing the focus every time it loads something (something we can only fix with Qt 5.9, which is taking its sweet time to become available on my distro), and there is work to be done on shortcuts, but the basics are in place now.


While working on accessibility stuff we figured it’s about time we prepared translations as well. We’ll be using Qt-based translations because they seem to be good enough and the QML integration of ki18n comes with some unwelcome dependencies. Nothing unsolvable of course, but the mantra is definitely not to have dependencies we can’t justify.

Anyways, Michael went through the codebase and converted all strings to translatable, and we have a Messages.sh script, so that should be pretty much good to go now. I don’t think we’ll have translations for 0.4 already, but it’s good to have the infrastructure in place.

Copyable labels

Another interesting little challenge was when we noticed that it’s sometimes convenient to copy some text you see on your screen. It’s actually pretty annoying if you have to manually type off the address you just looked up in the address book. However, if you’re just using QtQuick.Controls2.Label, that’s exactly what you’re going to have to do.

Cursor based selection, as we’re used to from most desktop applications, has a couple of challenges.

  • If you have to implement that cursor/selection stuff yourself it’s actually rather complicated.
  • The text you want to copy is more often than not distributed over a couple of labels that are somehow positioned relative to each other, which makes implementing cursor based selection even more complicated.
  • Because you’re copying visually randomly distributed labels and end up with a single blob of text it’s not trivial to turn that into usable plaintext. We all know the moment you paste something from a website into a text document and it just ends up being an unrecognizable mess.
  • Cursor based selection is not going to be great with touch interaction (which we’ll want eventually).


The solution we settled on instead is that of selectable items. In essence, a selectable item is a manual grouping of a couple of labels that can be copied at once using a shortcut or a context menu action. This allows the programmer to prepare useful chunks of copyable information (say, an address in an addressbook) and to make sure it also ends up in sane formatting, no matter how it’s displayed in the view itself.

The downside of this is of course that you can no longer just copy random bits of a text you see, it’s all or nothing. But since you’re going to paste it into a text editor anyways that shouldn’t be a big deal. The benefit of it, and I think this is a genuine improvement, is that you can just quickly copy something and you always get the same result, and you don’t have to deal with finicky cursor positions that just missed that one letter again.


The flatpak now actually works! Still not perfect (you need to use --devel), but try for yourself: Instructions
Thanks to Aleix Pol we should have nightly builds available as well =)

Other changes include:

  • The threading index now merges subthreads once all messages become available. This is necessary to correctly build the threading index if messages are not delivered in order (so there is a missing link between messages). Because we build a persistent threading index when receiving the messages (so we can avoid doing that in memory on every load), we have to detect that case and merge the two subthreads that exist before the missing link becomes available.
  • The conversation view was ported away from QtQuick’s ListView. The ListView was only usable with non-uniformly sized items through a couple of hacks, and never played well with positioning at the last mail in the conversation. We’re now using a custom solution based on a Flickable + Column + Repeater, which works much better. This means we’re always rendering all mails in a thread, but we had to do that before anyway (otherwise scrolling became impossible), and with the new solution we could also improve it by only rendering the currently visible mails (at the cost of losing an accurate scrollbar).
  • The email parsing was moved into its own threads. Gpgme is dead slow, so (email-)threads containing signatures would visibly stutter (without a signature the parsing takes ~1ms, with one ~150ms; with encryption we can easily go up to ~1s). With the new code this no longer blocks the view, and multiple mails are parsed in parallel, which makes it nice and snappy.
  • Lots of cleanup and porting to QtQuick.Controls2.
  • Lots of fixes big and small.
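The subthread merging described in the first item can be sketched roughly like this. This is a toy model, not Kube's actual code; the `ThreadIndex` class and its fields are invented for illustration, and the real implementation persists the index in storage:

```python
class ThreadIndex:
    """Toy threading index: maps each message to its thread root."""

    def __init__(self):
        self.thread_of = {}  # message id -> thread root id
        self.pending = {}    # missing parent id -> child ids waiting for it

    def add(self, msg_id, parent_id):
        if parent_id is not None and parent_id in self.thread_of:
            # Parent already known: join its thread.
            self.thread_of[msg_id] = self.thread_of[parent_id]
        else:
            # Missing link: start a new subthread rooted here and
            # remember that we are still waiting for the parent.
            self.thread_of[msg_id] = msg_id
            if parent_id is not None:
                self.pending.setdefault(parent_id, []).append(msg_id)
        # If this message was the missing link between two subthreads,
        # merge the subthreads that were waiting for it.
        for child in self.pending.pop(msg_id, []):
            self._merge(self.thread_of[child], self.thread_of[msg_id])

    def _merge(self, old_root, new_root):
        # Re-root every message of the old subthread onto the new one.
        for mid, root in self.thread_of.items():
            if root == old_root:
                self.thread_of[mid] = new_root
```

With out-of-order delivery (child "c" arrives before its parent "b"), the index first holds two subthreads, and adding "b" merges them into one.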

It’s been a busy couple of weeks.


The annual Randa meeting is coming up and it needs your support! Randa will give us another week of concentrated effort to work on Kube’s keyboard navigation, translation and other interaction issues we still have. These sprints are very valuable for us, and we are in dire need of support to finance the whole endeavour, so any help would be more than welcome: https://www.kde.org/fundraisers/randameetings2017/


August 19, 2017

Accessibility (a11y for short) seems like a niche concern to many people. I was thinking about this recently on a hot morning in Spain, walking to the bus station with my wheeled luggage. The sidewalks are thoughtfully cut out for wheelchairs -- and those with luggage! And the kids riding skateboards, and... the rest of us.

When websites and program output can be parsed by a screen-reader, it is great for blind folks. It is also great for the busy person working and listening, and even for the reader who doesn't have to ignore popup menus and other distractions. In other words, all of us.

There are many more examples, but my point is -- a11y helps everyone. So please - everyone - help KDE focus on accessible software at Randa! Fundraiser is ongoing! Don't pass it by because you think this is niche. Accessible software is better for all.

Stop by https://www.kde.org/fundraisers/randameetings2017/ and give generously, or read more about it first: Randa Meetings 2017: It's All About Accessibility.

Once upon a time, I started to use Craft, an amazing tool inside KDE that does almost all the hard work of compiling KDE Applications on Windows and macOS. Thanks to the great work of Hannah since last year's Randa Meetings, Craft is becoming a great tool. Using all the power of Python, I started [...]

Fortunately, Krita is in the news again for purely technical reasons rather than financial ones, the latter having been resolved in record time thanks to the Community. A new version has just been released with juicy new features. Among the most notable is the return of finger painting in Krita, a feature that was lost and has now been recovered.

Painting with your fingers in Krita is a reality again

Finger painting back in Krita

“Girl to protect the sleep” by ComamitsuZaki, made with Krita

With the legal problems around the Krita Foundation resolved, the Community behind this extraordinary artistic drawing application, which has digital artists enamoured with its power and versatility, is back to business.

Among the highlights:

  • The return of touch support (the Touch Painting feature), that is, the ability to paint with your fingers. Note that this functionality was lost in the jump to Qt5.
  • A new version of the gmic-qt plugin, an interface for G’MIC that lets us apply filters and effects to images.
  • A new tool for cleaning up and removing unwanted elements
  • New Radian smart brushes
  • Performance improvements
  • Improved mouse wheel behaviour
  • An improved animation editor

These new features can be seen in the traditional release video produced by the Krita Community, which you can watch below:

And, of course, dozens of bugs have been fixed, so the application gains stability for making wonders like the following:

Finally, note that Krita is usually available in the repositories of the main GNU/Linux distributions, and that its latest editions can be tried via an AppImage: you only need to download the application, give it execute permission, and click.


More information: Krita | Muy Linux | Linux Adictos | La mirada del replicante

This is the fourth post in my GSoC series. You can read the third one here.

An introductory example

When building web apps (or internet-dependent apps in general), like I am doing for my Google Summer of Code project, you are most likely fetching data from one or more external APIs (APIs that either you, your team or an external service developed).

Suppose you are building a web app for a wiki-like website (like I am), and that you are developing the home view for it. To render the home page you need to fetch some data: let’s say recent edits, new users, the main categories your website is divided into, the “page of the day” and a lot of other information.

This means you would have to make many different calls to many different API endpoints before your page is fully rendered. It works, but it is not ideal, especially if your user is, for example, on a mobile device with a mobile connection. Latency is high (and painful for the user), and even if the requests are performed in parallel, many seconds can pass before the page is rendered.

Sometimes you can’t even make these requests in parallel: a request might depend on the result of another request, again resulting in longer loading times.

Merging the requests server-side

One possible solution to the potential slowdown is to create an additional API service, developed and maintained by you (or by the team developing the app), that externally exposes “complex”/elaborate endpoints.

Think for example of an /api/homepage endpoint. When you request this endpoint your service would request all the dependencies for the homepage (/api/users, /api/page-of-the-day, /api/tags, …), add them to the same response object and send them back to you.
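The shape of such an aggregating endpoint can be sketched with `asyncio`. The fetcher functions here are invented stand-ins for the real sub-requests, which would be HTTP calls to the underlying endpoints:

```python
import asyncio

# Hypothetical stand-ins for the real sub-requests
# (/api/users, /api/page-of-the-day, /api/tags, ...).
async def fetch_users():
    return ["alice", "bob"]

async def fetch_page_of_the_day():
    return {"title": "Welcome"}

async def fetch_tags():
    return ["linux", "kde"]

async def homepage():
    """Perform all sub-requests in parallel and merge them into one response."""
    users, page, tags = await asyncio.gather(
        fetch_users(), fetch_page_of_the_day(), fetch_tags()
    )
    return {"users": users, "page_of_the_day": page, "tags": tags}

response = asyncio.run(homepage())
```

The client makes a single request to /api/homepage and receives one merged object instead of paying round-trip latency three times.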

The first advantage of this solution is that the number of requests is drastically reduced: many requests get “compressed” into one. Of course the response will be bigger, but this cuts down on latency.

Another advantage is that, since our additional service takes care of all the requests composing the complex request (let’s call them sub-requests), these sub-requests are performed faster and more reliably (the service has access to a far better connection than your mobile phone).

Your service can also provide additional logic that the API endpoints you depend on may not provide. For example, you could define caching rules so that sub-requests are performed only every once in a while instead of at every request, returning cached results whenever you decide that is the best thing to do. Again, this results in faster requests from your app.
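One simple way to picture such a caching rule is a small time-to-live cache wrapped around each sub-request. This is a minimal sketch with invented names; a real service would key the cache per endpoint and tune the TTL per resource:

```python
import time

class TTLCache:
    """Serve a cached result while it is fresh; re-fetch once it expires."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (timestamp, value)

    def get_or_fetch(self, key, fetch):
        entry = self.store.get(key)
        now = time.monotonic()
        if entry is not None and now - entry[0] < self.ttl:
            return entry[1]      # still fresh: skip the sub-request
        value = fetch()          # stale or missing: perform it
        self.store[key] = (now, value)
        return value
```

Within the TTL window, repeated requests for the same sub-resource never hit the upstream API.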

There are some disadvantages, practical and otherwise, to this proposed solution.

First of all, it goes a bit against RESTful ideals: instead of single, self-contained resources (/api/users, /api/page-of-the-day, /api/tags) you have one aggregate object (/api/homepage).

Secondly, this adds a layer of complexity. You would need to develop an additional server that performs the sub-requests, which is not ideal if you are developing a project by yourself, but it might be worth it for the performance increase.

The Backend for Frontend pattern

Another possibility is to implement the Backend for Frontend pattern. The article gives a nice overview of the pattern, but I will also try to give some insights on it.

This pattern is especially useful when you are building mobile applications together with web applications, both sharing a common API.

Usually, on the mobile application, you might want to show the same elements you show on the website, but with less detail, because it might be impossible to fit all the information the full API provides onto small screens. For example, if we are displaying the recent users on a website we might want to show all the possible details, but on a mobile app we would hide, say, their registration time or their full name.

You could of course keep using the same API you use for the web, but this is not ideal because it means that you are downloading data that is partially not being used by your client.

The Backend for Frontend pattern tries to solve this issue. Instead of having only one API endpoint for all your devices, you would implement two (or more) API endpoints, each dealing with a different kind of device and application and each providing only the needed information (nothing more and nothing less).

For example instead of /api/pages you would have /api/desktop/pages and /api/mobile/pages.
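A minimal sketch of the idea, assuming both endpoints share the same backing records and differ only in which fields they expose (all field names here are invented for the example):

```python
# Field sets exposed by each backend-for-frontend endpoint.
FULL_FIELDS = {"username", "full_name", "registered_at", "avatar", "bio"}
MOBILE_FIELDS = {"username", "avatar"}

def trim(record, fields):
    """Keep only the fields this frontend actually needs."""
    return {k: v for k, v in record.items() if k in fields}

def desktop_users(records):
    # Served from something like /api/desktop/users
    return [trim(r, FULL_FIELDS) for r in records]

def mobile_users(records):
    # Served from something like /api/mobile/users
    return [trim(r, MOBILE_FIELDS) for r in records]
```

The mobile client downloads only the fields it will render, instead of the full record.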

Again you would need to develop an additional service, but this way you might save some precious time (and data) for your API clients.

You could combine this pattern with the previous one I presented to get the best of both worlds: save data and save requests.

Summing up

If you are building a small, personal project you don’t have to worry about the issues and solutions presented here. These ideas matter when you are building APIs and software for hundreds of thousands of users, but I decided to investigate them for learning purposes. Let me know what you think in the comments.

GSoC updates

In this last month of GSoC I was busy researching and testing various libraries for managing user authentication (think of login, logout and registration functionality) that work well with Vue.js. In the end, since we are going to use Keycloak for authenticating users in the WikiToLearn backend, I had to choose a library to integrate with Keycloak, and, since there were none that integrated with Vue.js, I had to create one more or less from scratch. The last few days of GSoC will be spent completing and polishing this library, which I plan to keep working on even after GSoC. It is an “external” project, but it is closely related to the project I am developing.

August 18, 2017

The last week of GSoC 2017 is about to begin. My project is in a pretty good state I would say: I have created a big solution for the Xwayland Present support, which is integrated firmly and not just attached to the main code path like an afterthought. But there are still some issues to sort out. Especially the correct cleanup of objects is difficult. That’s only a problem with sub-surfaces though. So, if I’m not able to solve these issues in the next few days I’ll just allow full window flips. This would still include all full screen windows and for example also the Steam client as it’s directly rendering its full windows without usage of the compositor.

I still hope, though, to solve the last problems with the sub-surfaces, since this would mean that we can use buffer flips in all direct rendering cases on Wayland, which would be a huge improvement compared to native X.

In any case at first I’ll send the final patch for the Present extension to the xorg-devel mailing list. This patch will add a separate mode for flips per window to the extension code. After that I’ll send the patches for Xwayland, either with or without sub-surface support.

That’s all for now as a last update before the end, with a big decision still to be made. In one week I can report back on what I chose and what the final code looks like.


After posting the image of the 5.11 wallpaper feedback started coming in and one thing fairly consistently mentioned is how dark/muted it is. Of course, there were mixed opinions on whether it was good to be dark or if it was a little too far, but it was a clear observation, especially compared to the previous wallpapers.

So I took a few minutes to adjust the wallpaper. There were lots of people who liked having something more subtle, so I didn’t stray too far. I adjusted the blues to be more saturated, the browns are lighter towards the bottom to reduce banding, the orange is a bit brighter, and reds on the right were tweaked. I also reduced an “atmosphere” gradient. Lastly, I removed a noise filter used to combat banding.

Overall it’s not that much lighter, but it should be less muddy and washed out. Without seeing them side by side you might not even notice the changes, but hopefully it just feels a bit better.

Here’s the adjusted wallpaper:


Both versions are available on the KDE Store.

This is a good year to be a Qt contributor.

There was Qt Day Italy in June. From what I hear, the event was a success. The talks were great and everything worked. This was the sixth Qt Day Italy, so there is tradition behind this event!

Even though it is not a Qt event, KDE Akademy is worth mentioning. Akademy is the annual world summit of KDE, one of the largest Free Software communities in the world. It is a free, non-commercial event organized by the KDE Community. This year Akademy was held in Almería, Spain, in late July, from the 22nd to the 27th. KDE has over the years brought many excellent developers to Qt, and it is definitely the biggest open source project using Qt.

Starting now is the QtCon Brasil event in Rio de Janeiro, on August 18th to 20th. Taking inspiration from last year’s QtCon in Berlin, QtCon Brasil is the first Qt community event in Brasil. The speaker list is impressive, and there is an optional training day before the event. For South American Qt developers, Rio is the place to be right now!

This year Qt Contributors’ Summit is being organised as a pre-event to Qt World Summit, and we are calling it Qt Contributors’ Days. As is tradition, the event will be run in the unconference style with the twist that topics can be suggested beforehand on the schedule wiki page. Qt Contributors’ Days gathers Qt contributors to the same location to discuss where Qt is and what direction it is heading in. The discussions are technical with the people implementing the actual features going into details of their work.

You can register for Qt Contributors’ Days at the Qt World Summit site. The nominal fee is to cover the registration expense; if it is an issue, please contact me.

The late summer and autumn are shaping up to be great times for Qt contributors!

The post 2017 for Qt Contributors appeared first on Qt Blog.

It’s been a long time coming ..

Tobias and Raphael pushed the button today to push QtWebEngine into FreeBSD ports. This has been a monumental effort, because the codebase is just .. ugh. Not meant for third-party consumption, let’s say. There are 76 patches needed to get it to compile at all. Lots of annoying changes to make, like explaining that pkg-config is not a Linux-only technology. Nor is NSS, or Mesa, while #include <linux/rtnetlink.h> is, in fact, Linux-only. Lots of patches can be shared with the Chromium browser, but it’s a terrible time-sink nonetheless.

This opens the way for some other ports to be updated — ports with QtWebEngine dependencies, like Luminance (an HDR-image-processing application).

The QtWebEngine parts have been in use for quite some time in the plasma5/ branch of Area51, so for users of the unofficial ports, this announcement is hardly news. Konqueror with QtWebEngine underneath is a fine and capable browser; my go-to if I have Plasma5 available at all on FreeBSD.

August 17, 2017

Well, it’s that time of the year again where I talk about wallpapers!

For those who watched the livestream of the beach wallpaper, you’ll notice this isn’t what I had been working on. Truth be told after the stream I hit a few artistic blocks which brought progress to a grinding halt. I plan to finish that wallpaper, but for this release I created something entirely different while I decide what to do with it. I enjoyed this “wireframe” effect, and will probably experiment with it again.

This wallpaper is named “Opal”, specifically after wood opal, which forms when water carrying mineral deposits petrifies the wood it runs across. Wood opal is pretty special stuff, and it can often look straight out of a fairy tale.


The Plasma 5.11 wallpaper “Opal” is available on the KDE Store.

At the recently concluded Akademy 2017 in the incredibly hot but lovely Almería, yours truly went and did something a little silly: Submitted both a talk (which got accepted) and hosted a BoF, both about Open Collaboration Services, and the software stack which KDE builds to support that API in the software we produce. The whole thing was amazing. A great deal of work, very tiring, but all 'round amazing. I even managed to find time to hack a little bit on Calligra Gemini, which was really nice.

This blog entry collects the results from the presentation and the BoF. I realise this is quite long, but i hope that you stick with it. In the BoF rundown, i have highlighted the specific results, so hopefully you'll be able to skim-and-detail-read your specific interest areas ;)

First, A Thank You In Big Letters

Before we get to that, though, i thought i'd quickly touch on something which i've seen brought up about what the social media presence of the attendees looks like during the event: if you didn't know better, you might imagine we did nothing but eat, party and go on tours. My personal take is that we post those pictures to say thank you to the amazing people who make it possible for us to get together and talk endlessly about all those things we do. We post those pictures, at least in part, because a hundred shots of a group of people in an auditorium get a bit samey, and while the discussions are amazing and the talks are great, they don't often make for exciting still photography. Video, however, certainly does, and those, i hear, are under way for the presentations; the BoF wrap-ups are here right now :)

Nothing will stop our hackers. And this is before registration and the first presentation!

Presenting Presentations

Firstly, it felt like the presentation went reasonably well, and while i am not able to show you the video, i'll give you a quick run-down of the main topic covered in it. Our very hard working media team is working on the videos at the moment, though, so keep your eyes on the KDE Community YouTube channel to catch those when they're released.

The intention of the presentation was to introduce the idea that just because we are making Free software, that does not mean we can survive without money. Consequently, we need some way to feed funds back to the wildly creative members of our community who produce the content you can find on the KDE Store. To help work out a way of doing this in a fashion that fits in with our ideals, described by the KDE Vision, i laid out what we want to attempt to achieve in five bullet points, tongue-in-cheek called Principia pene Premium, or the principles of almost accidental reward:
  • Financial support for creators
  • Not pay-for-content
  • Easy
  • Predictable
  • Almost (but not quite) accidental
The initial point is the problem itself, that we want the creators of the content on the store to be financially rewarded somehow. The rest are limiting factors on that:

Not pay-for-content alludes to the fact that we don't want to encourage paywalls. The same way we make our software available under Free licenses of various types, we want to encourage the creators of the content used in it to release their work in similarly free ways.

Easy means easy for our creators, as well as for the consumers of the content they produce. We don't want either of them to have to jump through hoops to receive the funds, or to send them.

Predictable means that we want it to be reasonably predictable for those who give funds out to the creators. If we can ensure that there are stable outgoings for them, say some set amount each month or year, then it makes it easier to budget, and not have to worry. Similarly, we want to try and make it reasonably predictable for our creators, and this is where the suggestion about Liberapay made by several audience members comes in, and i will return to this in the next section.

Finally, perhaps the most core concept here is that we want to make it possible to almost (but not quite) accidentally send one of the creators funds. Almost, because of course we don't want to actually do so accidentally. If that were the case, the point of being predictable would fly right out the window. We do, however, want to make it so easy that it is practically automatically done.

All of this put together brings us to the current state of the KDE Store's financial support system: Plings. These are an automatic repayment system, which the store handles for every creator who has added PayPal account information to their profile. It is paid out monthly, and the amount is based on the Pling Factor, which is (at the time of writing) a count of how many downloads the creator has accumulated across all content items over the course of the month, with each download counted as $0.01 USD.
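Under the Pling Factor as described, a creator's monthly payout is simply the month's total downloads times one US cent. A tiny sketch (the function name is mine, not the store's code):

```python
# One US cent per download, per the Pling Factor described above.
PLING_PER_DOWNLOAD_USD = 0.01

def monthly_payout(downloads_per_item):
    """Sum a creator's downloads across all content items for the month."""
    return round(sum(downloads_per_item) * PLING_PER_DOWNLOAD_USD, 2)
```

So a creator with three items downloaded 1200, 300 and 0 times in a month would receive $15.00.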

Space-age looking crazy things at the Almería Solar Platform. Amazing place. Wow. So science.

Birds of a Feather Discuss Together

On Wednesday, a little while before lunch, it was time for me to attend my final BoF session of the week (as i would be leaving early Thursday). This one was slightly different, of course, because i was the host. The topic was listed as Open Collaboration Service 1.7 Preparation, but ended up being more of a discussion of what people wanted to be able to achieve with the store integration points we have available.

Most of the items which were identified were points about KNewStuff, our framework designed for easy integration of remote content using either OCS, or static sources (used by e.g. KStars for their star catalogues).

Content from alternate locations was the first item to be mentioned, which suggests a slight misunderstanding about the framework's abilities. The discussion revealed that what was needed was less a question of being able to replace existing sources in various applications, so much as needing the ability to control the access to KNewStuff more granularly. Specifically, being able to enable/disable specific sources was highlighted, perhaps through using Kiosk. It might still make sense to be able to overlay sources - one example given was the ability to overlay the wallpapers source (used in Plasma's Get New Wallpapers) with something pointing to a static archive of wallpapers (so users might be able to get a set of corporate-vetted backgrounds, rather than just one). This exact use case should already be possible, simply by providing a static XML source, and then replacing the wallpapers.knsrc file normally shipped by Plasma with another, pointing to that source.

A more complete set of Qt Quick components was requested, and certainly this would be very useful. As it stands, the components are very minimal and really only provide a way to list available items, and install/update/remove them. In particular two things were pointed out: There is no current equivalent of KNS3::Button in the components, and further no Kiosk support, both of which were mentioned as highly desired by the Limux project.

Signing and Security was highlighted as an issue. Currently, KNSCore::Security exists as a class, however it is marked as "Do not use, non-functional, internal and deprecated." However, it has no replacement that i am myself aware of, and needs attention by someone who, well, frankly knows anything of actual value about signing. OCS itself has the information and KNS does consume this and make it available, it simply seems to not be used by the framework. So, if you feel strongly about signing and security issues, and feel like getting into KNewStuff, this is a pretty good place to jump in.

Individual download item install/uninstall was mentioned as well, as something which would be distinctly useful for many things (as a simple example, you might want more than one variant of a wallpaper installed). Right now, Entries are marked as installed when one download item is installed, and uninstalling implicitly uninstalls that download item. There is a task on the KNewStuff workboard which has collected information about how to adapt the framework to support this.

But KNewStuff wasn't the only bit to get some attention. Our server-side software stack had a few comments along the way as well.

One was support for Liberapay, which is a way to distribute monetary wealth between people pretty much automatically, and which fits very nicely into the vision of creator support put forward in my presentation.

One topic which comes up regularly is adding support for the upload part of the OCS API to our server-side stack. Now, the reason for this lack is not that simply adding it is difficult, because it certainly isn't; quite the contrary, the functionality practically exists already. The problem is much more a case of vetting: how do we ensure that this will not end up abused by spammers? The store already has spam entries to be handled every day, and we really want to avoid opening up a shiny, new vector for those (insert your own choice of colloquialism here) spammers to send us things we want to not have on the store. Really this deserves a write-up of its own, on account of the sheer scope of what might be done to alleviate the issues, but what we spoke about essentially came down to the following:

  • Tight control of who can upload, so people have to manually be accepted by an administration/editors team as uploaders before they are given the right to do so through the API. In essence, this would be possible through establishing a network of trust, and through people using the web interface first. As we also want people to approach without necessarily knowing people who know people, a method for putting yourself up for API upload permission approval will also be needed. This might possibly be done through setting a requirement for people who have not yet contributed in other ways to do so (that is, upload some content through the web first, and then request api upload access). Finally, since we already have a process in place for KDE contributors, matching accounts with KDE commit access might also be another way to achieve a short-cut (you already have access to KDE's repositories, ability to publish things on the store would likely not be too far a stretch).
  • Quality control of the content itself. This is something which has been commented on before. Essentially, it has been discussed that having linting tools that people can use locally before uploading things would be useful (for example, to ensure that a kpackage for a Plasma applet is correct, or that a wallpaper is correctly packaged, or that there is correct data in a Krita brush resource bundle, or that an AppImage or Flatpak or Snap is what it says it is, just to mention a few). These tools might then also be used on the server-side, to ensure that uploaded content is correctly packaged. In the case of the API, what might be done is to return the result of such a process in the error message field of a potentially failed OCS content/add or content/edit call, which then in turn would be something useful to present to the user (in place of a basic "sorry, upload failed" message). 

For OCS itself, adding mimetype as an explicit way to search and filter entries and downloaditems was suggested. As it stands, it could arguably be implemented by clients and servers, however having it explicitly stated in the API would seem to make good sense.

The proposal to add tagging support to OCS currently awaiting responses on the OCS Webserver phabricator was brought up. In short, while there are review requests open for adding support for the proposal to Attica and KNewStuff respectively, the web server needs the support added as well, and further the proposal itself needs review by someone who is not me. No-one who attended the BoF felt safe in being able to review this in any sensible way, and so: If you feel like you are able to help with this, please do take part and make comments if you think something is wrong.

Finally, both at the BoF and outside of it, one idea that has been kicked around for a while and got some attention was the idea of being able to easily port and share settings between installations of our software. To be able to store some central settings remotely, such as wallpaper, Plasma theme and so on, and then apply those to a new installation of our software. OCS would be able to do this (using its key/value store), and what is needed here is integration into Plasma. However, as with many such things, this is less a technical issue (we have most of the technology in place already), and more a question of design and messaging. Those of you who have ever moved from one Windows 10 installation to another using a Microsoft account will recognise the slightly chilling feeling of the sudden, seemingly magical appearance of all your previous settings on the machine. As much as the functionality is very nifty, that feeling is certainly not.

Solar powered sun-shade platform outside the university building. With fancy steps. And KDE people on top ;)

Another Thank You, and a Wish

Akademy is not the only event KDE hosts, and very soon there is going to be another one, in Randa in the Swiss Alps, this year about accessibility. I will not delve into why this topic is so important, and can only suggest you read the article describing it. It has been my enormous privilege to be a part of that for several years, and while i won't be there this year, i hope you will join in and add your support.

The word of the day is: Aircon. Because the first night at the Residencia Civitas the air conditioning unit in the room i shared with Scarlett Clark did not work, making us very, very happy when it was fixed for the second night ;)
It finally landed! KStars 2.8.1 aka Hipster release is out for Windows & MacOS!

The highlight of this release is experimental support for HiPS: Hierarchical Progressive Surveys. HiPS provides multi-resolution progressive surveys that can be overlaid directly in client applications such as KStars. It makes for an immersive experience, as you can explore the night sky dynamically.

With 200+ surveys across the whole electromagnetic spectrum, from radio, infrared and visual to even gamma rays, the user can pan and zoom progressively deeper into the data visually.

HiPS Support in KStars

HiPS support in KStars has been made possible through collaboration with the excellent open source planetarium software SkyTechX. This truly demonstrates the power of open source to accelerate development and reach more users.

Another feature, also imported from SkyTechX, is the Polaris Hour Angle, which is useful for users looking to polar align their mount.

Polaris Hour Angle

GSoC 2017 student Csaba Kertész continued to merge many code improvements. Moreover, many bug fixes landed in this release. The following are some of the notable fixes and improvements:
  • BUGS:382721 Just use less than and greater than keys for time.
  • BUGS:357671 Patch by Robert Barlow to support custom catalogs in WUT.
  • Improved comet and asteroid position accuracy.
  • Ekos shall fallback to user defined scopes if INDI driver does not provide scope metadata.
  • Fixed command line parsing.
  • Fixed many PHD2 external guider issues.
  • Fixed selection of external guiders in Ekos Equipment Profile.
  • Fixed rotator device infinite loop.
  • Fixed scheduler shutdown behavior.
  • Fixed Ekos Mosaic Position Angle.
  • Fixed issue with resetting Polar Alignment Assistant Tool rotation state.
  • Fixed issue with Ekos Focus star selection timeout behavior.
  • Ekos always switches to CLIENT mode when taking previews.
  • Handle proper removal of devices in case of Multiple-Devices-Per-Driver drivers
  • Display INDI universal messages for proper error reporting.
  • Better logging with QLoggingCategory.

Ekos Mosaic Tool with HiPS

Kdenlive 17.08 is released, bringing minor fixes and improvements. Some of the highlights include fixing the Freeze effect and resolving inconsistent checkbox displays in the effects panel. Downloaded transition lumas now appear in the interface. It is now possible to assign a keyboard shortcut for the Extract Frame feature, and a name is now suggested based on the frame number. Navigation of clip markers in the timeline behaves as expected upon opening a project. Audio click issues are resolved, although this requires building MLT from git or waiting for a release. In this cycle we’ve also bumped the Windows version from Alpha to Beta.

We continue steadfastly making progress in the refactoring branch due for the 17.12 release. We will soon make available a package for testing purposes. Stay tuned for the many exciting features coming soon.

Full list of changes

  • Fix audio mix clicks when using recent MLT. Commit. Fixes bug #371849
  • Fix some checkbox displaying inconsistent info. Commit.
  • Fix downloaded lumas do not appear in interface (uninstall/reinstall existing lumas will be required for previously downloaded). Commit. Fixes bug #382451
  • Make it possible to assign shortcut to extract frame feature. Commit. Fixes bug #381325
  • Gardening: fix GCC warnings (7). Commit.
  • Gardening: fix GCC warnings (6). Commit.
  • Gardening: fix GCC warnings (5). Commit.
  • Gardening: fix GCC warnings (4). Commit.
  • Gardening: fix GCC warnings (3). Commit.
  • Gardening: fix GCC warnings (2). Commit.
  • Gardening: fix GCC warnings (1). Commit.
  • Fix clip markers behavior broken on project opening. Commit. Fixes bug #382403
  • Fix freeze effect broken (cannot change frozen frame). Commit.
  • Use QString directly. Commit.
  • Use isEmpty. Commit.
  • Use isEmpty(). Commit.
  • Remove qt module in include. Commit.
  • Use constFirst. Commit.
  • Make it compile. Commit.
  • Use Q_DECL_OVERRIDE. Commit.
  • Use nullptr. Commit.
  • Avoid using #elifdef. Commit.
  • Try harder to set KUrlRequester save mode in the renderwidget. Commit.
  • Make sure that text is not empty. Commit.
  • Use QLatin1Char(…). Commit.
  • Cmake: remove unused FindQJSON.cmake. Commit.
  • Port some foreach to c++ for(…:…). Commit.
  • Fix compiler settings for Clang. Commit.

Last month, I attended KDE's annual conference, Akademy 2017. This year, it was held in Almeria, a small Andalusian city on the south-east coast of Spain.

The name of the conference is no misspelling; it's part of KDE's age-old tradition of naming everything to do with KDE with a 'k'.

It was a collection of amazing, life-changing experiences. It was my first solo trip abroad and it taught me so much about travel, KDE, and getting around a city with only a handful of broken phrases in the local language.


My trip began at the recently renamed Kempegowda International Airport, Bangalore. Though small for an international airport, its size works to its advantage as it is very easy to get around. Check-in and immigration were a breeze, and I had a couple of hours to check out the loyalty card lounge, where I sipped soda water thinking about what the next seven days had in store.

The first leg of my trip was on an Etihad A320 to Abu Dhabi, a four hour flight departing at 2210 on 20 July. The A320 isn't particularly unique equipment, but then again, it was a rather short leg. The crew on board that flight seemed to be a mix of Asian and European staff.

Economy class in Etihad was much better than any other Economy class product I'd seen before. Ample legroom, very clean and comfortable seats, and an excellent IFE system. I was content looking at the HUD-style map visualisation which showed the AoA, vertical speed, and airspeed of the airplane.

On the way, I scribbled a quick diary entry and started reading part 1 of Sanderson's Stormlight Archive - 'The Way of Kings'.

Descent to Abu Dhabi airport started about 3:30 into the flight. Amber city lights illuminated the desert night sky. Even though it was past midnight, the plane's IFE reported an outside temperature of 35C. Disembarking from the plane, the muggy atmosphere hit me after spending four hours in an air-conditioned composite tube.

The airport was dominated by Etihad aircraft - mainly Airbus A330s, Boeing 787-8s, and Boeing 777-300ERs. There were also a number of other airlines part of the Etihad Airways Partners Alliance such as Alitalia, Jet Airways, and some Air Berlin equipment. As it was a relatively tight connection, I didn't stop to admire the birds for too long. I traversed a long terminal to reach the boarding gate for the connecting flight to Madrid.

The flight to Madrid was another Etihad operated flight, only this time on the A320's larger brethren - the A330-200. This plane was markedly older than the A320 I had been on the first leg of the trip. Fortunately, I got the port side window seat in a 2-5-2 configuration. The plane had a long take-off roll and took off a few minutes after 2am. Once we reached cruising altitude, I opened the window shade. The clear night sky was full of stars and I must have spent at least five minutes with my face glued to the window.

I tried to sleep, preparing myself for the long day ahead. Soon after I woke up, the plane landed at Madrid Barajas Airport and taxied for nearly half an hour to reach the terminal. After clearing immigration, I picked up my suitcase and waited for a bus which would take me to my next stop - the Madrid Atocha Railway Station. Located in the heart of the city, Atocha is one of Madrid's largest train stations and connects it to the rest of Spain. My train to Almeria was later that day - at 3:15 in the afternoon.

On reaching Atocha, I got my first good look at Madrid.

My facial expression was quite similar

I was struck by how orderly everything was, starting with the traffic. Cars gave the right of way to pedestrians. People walked on zebra crossings and cyclists stuck to the defined cycling lanes. A trivial detail, but it was a world apart from Bangalore. Shining examples of Baroque and Gothic architecture were scattered among newer establishments.

Having a few hours to kill before I had to catch my train, I roamed around Buen Retiro Park, one of Spain's largest public parks. It was a beautiful day, bright and sunny with the warmth balanced out by a light breeze.

My heavy suitcase compelled me to walk slowly, which let me take in as much as I could. Retiro Park is a popular stomping ground for joggers, skaters, and cyclists alike. Despite it being 11am on a weekday, I saw plenty of people jogging through the park. After this, I trudged through some quaint cobbled neighbourhoods with narrow roads and old apartment buildings dotted with small shops on the ground floor.

Maybe it was the sleep-deprivation or dehydration after a long flight, but everything felt so surreal! I had to pinch myself a few times - to believe that I had come thousands of miles from home and was actually travelling on my own in a foreign land.

I returned to Atocha and waited for my train. By this time, I came to realise that language was going to be a problem for this trip as very few people spoke English and my Spanish was limited to a few basic phrases - notably 'No hables Espanol' and 'Buenos Dias' :P Nevertheless, the kind folks at the station helped me find the train platform.

Trains in Spain are operated by state-owned train companies. In my case, I would be travelling on a Renfe train to Almeria. The coaches are arranged in a 2-2 seating configuration, quite similar to those in airplanes, albeit with more legroom and charging ports. The speed of these trains is comparable to fast trains in India, with a top speed of about 160km/hr. The 700km journey was scheduled to take about 7 hours. There was plenty of scenery on the way, with sloping mountain ranges and deserted valleys.

Big windows encouraged sightseeing

After seven hours, I reached the Almeria railway station at 10pm. According to Google Maps, the hostel which KDE had booked for all the attendees was only 800m away - well within walking distance. However, unbeknownst to me, I started walking in the opposite direction (my phone doesn't have a compass!). This kept increasing the Google Maps ETA, and only when I was 2km off track did I realise something was very wrong. Fortunately, I managed to get a taxi to take me to Residencia Civitas - a university hostel where all the Akademy attendees would be staying for the week.

After checking in to Civitas, I made my way to the double room. Judging from the baggage and the shoes in the corner, someone had moved in here before I did. About half an hour later, I found out who - Rahul Yadav, a fourth year student at DTU, Delhi. Exhausted after an eventful day of travel, I switched off the lights and went to sleep.

The Conference

The next day, I got to see the other Akademy attendees over breakfast at Civitas. In all, there were about 60-70 attendees, which I was told was slightly smaller than in previous years.

The conference was held at University of Almería, located a few kilometres from the hostel. KDE had hired a public bus for transport to and from the hostel for all the days of the conference. The University was a stone's throw from the Andalusian coastline. After being seated in one of the larger lecture halls, Akademy 2017 was underway.

Konqi! And KDE Neon!

The keynote talk was by Robert Kaye of MetaBrainz, about the story of how MusicBrainz was formed out of the ashes of CDDB. The talk set the tone for the first day of Akademy.

The coffee break after the first two talks was much needed. I was grappling with sleep deprivation and jet lag from the last two days and needed all the caffeine and sugar I could get to keep myself going for the rest of the day. Over coffee, I caught up with some KDE developers I met at QtCon.

Throughout the day, there were a lot of good talks, notably 'A bit on functional programming', and five quick lightning talks on a variety of topics. Soon after this, it was time for my very own talk - 'An Introduction to the KIO Library'.

The audience for my talk consisted of developers with several times my experience. Much to my delight, the maintainer of the KIO Library, David Faure, was in the audience as well!

Here's where I learned another thing about giving presentations - they never go quite as well as they seem to when rehearsed alone. I ended up speaking faster than I had planned to, leaving more than enough time for a QA round. Just as I had been wary of, I was asked some questions about the low-level implementation of KIO, which thankfully David fielded for me. I was perspiring after the presentation, and it wasn't the temperature that was the problem. A thumbs up from David afterwards gave me some confidence that I had done alright.

Following this, I enjoyed David Edmundson's talk about binding properties in QML. The next presentation I attended is where things ended up getting heated. Paul Brown went into detail about everything wrong with Kirigami's TechBase page. This drew some, for lack of a better word, passionate people to retaliate. Though it was only supposed to be a 10 minute lightning talk, the debate raged on between the two schools of thought on how TechBase documentation should be written. All in good taste. The only thing which brought the discussion to an end was the bus back to Civitas leaving at 8pm sharp.

Still having a bit of energy left after the conference, I was ready to explore this Andalusian city. One thing which worked out nicely on this trip is the late sunset in Spain at this time of the year. It is as bright as day even at around 9pm, and the light only starts waning at around 9:30pm. This gave Rahul and me plenty of time to head to the beach, which was about a 20 minute walk from the hostel.

Here, it struck me how much I loved the way of life here.

Unlike Madrid, Almeria is not known as a tourist destination, so most of the people living there were locals. In the span of about half an hour, I watched how an evening unfolds in this city. As the sun started dipping below the horizon, families with kids, college couples, and high-school friends trickled from the beach to the boardwalk for dinner. A typical evening looked delightfully simple and laid-back in this peaceful city.

The boardwalk had plenty of variety on offer - from seafood to Italian cuisine. One place caught my eye, a small cafe with Doner Kebab called 'Taj Mahal'. After a couple of days of eating nothing but bland sandwiches, Rahul and I were game for anything with a hint of spice in it. As I had done in Madrid, I tried ordering Doner Kebab using a mixture of broken Spanish and improvised sign language, only to receive a reply from the owner in Hindi! It turned out that the owner of the restaurant was Pakistani and had migrated to Spain seven years ago. Rahul made a point of asking for more chilli - and the Doner kebabs we got were not lacking in spice. I had more chilli in one kebab than I normally would have had in a week. At least it was a change from Spanish food, which I wasn't all that fond of.

View from the boardwalk

The next day was similar to the first, only a lot more fun. I spent a lot of time interacting with the people from KDE India. I also got to know my GSoC mentor, Boudhayan Gupta (@BaloneyGeek). The talks on this day were as good as the ones the day before, and I got to learn about Neon Docker images, the story behind KDE's Slimbook laptop, and things to look forward to in C++17/20.

The talks were wrapped up with the Akademy Awards 2017.

David Faure and Kevin Ottens

There were still 3 hours of sunlight left after the conference and, not being ones to waste it, we headed straight for the beach. Boudhayan and I made a treacherous excursion out to a rocky pier covered with moss and glistening with seawater. My well-worn sandals were the only thing keeping me from slipping onto a bunch of sharply angled stones jutting out from the waves. Against my better judgement, I managed to reach the end of the pier, only to see a couple of crabs take an interest in us. With the tide rising and the sun falling, we couldn't afford to stay much longer, so we headed back to the beach the way we came. Not long after, I couldn't help myself and headed into the water with an enthusiasm I had only known as a child. Probably more so for BaloneyGeek though, who went in headfirst with his three-week-old Moto G5+ in his pocket (spoiler: the phone was irrevocably damaged by half a minute of immersion in saltwater). In the midst of this, we found a bunch of KDE folks hanging out on the beach with several boxes of pizza and bottles of beer. Free food!

Exhausted but exhilarated, we headed back to Civitas to end another very memorable day in Almeria.

Estacion Intermodal, a hub for public transport in Almeria

With the talks completed, Akademy 2017 moved on to its second leg, which consisted more of BoFs (Birds of a Feather) and workshops.

The QML workshop organised by Anu was timely, as my relationship with QML has been hot and cold. I would always go in circles with the QML Getting Started tutorials, as there aren't as many examples of how to use QtQuick 2.x as there are for, say, Qt Widgets. I understood how to integrate JavaScript with the QML GUI, and I will probably get around to making a project with the toolkit when I get the time. Paul Brown held a BoF about writing good user documentation and deconstructed some of the more pretentious descriptions of software, with suggestions on how to avoid falling into the same pitfalls. I sat in on a few more BoFs after this, but most of it went over my head as I wasn't contributing to the projects discussed there.

Feeling a bit weary of the beach, Rahul and I decided to explore the inner parts of the city instead. We planned to go to the Alcazaba of Almeria, a thousand-year-old fortress in the city. On the way, we found a small froyo shop and ordered a scoop with chocolate sauce and lemon sauce. Best couple of euros spent ever! I loved the tart flavour of the froyo and how it complemented the toppings with its texture.

This gastronomic digression aside, we scaled a part of the fort only to find it locked off with a massive iron door. I got the impression that the fort was rarely ever open to begin with. With darkness fast approaching, we found ourselves in a dodgy neighbourhood and we tried to get out as fast as we could without drawing too much attention to ourselves. This brought an end to my fourth night in Almeria.

View from Alcazaba

The BoFs continued throughout the 25th, the last full day of Akademy 2017. I participated in the GSoC BoF where KDE's plans for future GSoCs, SoKs, and GCIs were discussed (isn't that a lot of acronyms!). Finally, this was one topic where I could contribute to the discussion. If there was any takeaway from the discussion for GSoC aspirants, it is to start as early as you can!

I sat in on some other BoFs as well, but most of the topics discussed were out of my scope. The Mycroft and VDG BoF did have some interesting exchanges of ideas for future projects that I might consider working on if I get free time in the future.

Rahul was out in the city that day, so I had the evening to explore Almeria all by myself.

I fired up Google Maps to see if there was anything of interest nearby. To the west of the hostel was a canal I hadn't seen previously, so I thought it would be worth a trip. Unfortunately, because of my poor navigation skills and my phone's lack of a compass, I ended up circling a flyover for a while before ditching the plan. I decided to go to the beach as a reference point and explore from there.

What was supposed to be a reference point ended up becoming the destination. There was still plenty of sunlight and the water wasn't too cold. I put one toe in the water, and then a foot.

And then, I ran.

Running barefoot along the coastline was one of the best memories I have of the trip. For once, I didn't think twice about what I was doing. It was pure liberation. I didn't feel the exertion or the pebbles pounding my feet.

Almeria's Beaches

The end of the coastline had a small fort and a dirt trail which I would've very much wanted to cycle on. After watching the sun sink into the sea, I ran till the other end of the boardwalk to find an Italian restaurant with vegetarian spinach spaghetti. Served with a piece of freshly baked bread, dinner was delicious and capped off yet another amazing day in Almeria.

Dinner time!

Day Trip

The 26th was the final day of the conference. Unlike the other days, the conference was only for half a day with the rest of the day kept aside for a day trip. I cannot comment on how the conference went on this day as I had to rush back to the hostel to retrieve my passport, which was necessary to attend the day trip.

Right around 2 in the afternoon we boarded the bus for the day trip. Our first stop was the Plataforma Solar de Almería, a solar energy research plant in Almeria. It houses some large heliostats for focussing sunlight at a point on a tower. This can be used for heating water and producing electricity.

There was another facility used for testing the tolerance of spacecraft heat shields by subjecting them to temperatures in excess of 2000C.

Heliostats focus at the top of the tower

The next stop was at the San José village. Though not too far from Almeria, the village is frequented by tourists much more than Almeria is and has a very different vibe. The village is known for its beaches, pristine clear waters, and white buildings. I was told that the village was used in the shooting of some films such as The Good, The Bad, and The Ugly.

Our final stop for the day was at the Rodalquilar Gold Mine. Lost to time, the mine had been shut down in 1966 due to the environmental hazards of using cyanide to extract the gold. The mine wouldn't have looked out of place in a video-game or an action movie, and indeed, it was used in the filming of Indiana Jones and the Last Crusade. There was a short trek from the base of the mine to a trail which wrapped around a hill. After descending from the top we headed back to the hostel.

This concluded my stay in Almeria.


After checking out of the hostel early the next morning, I caught a train to Madrid. I had a day in the city before my flight to Bangalore the next day.

I reached Atocha at about 2 in the afternoon and checked in to a hotel. I spent the entire evening exploring Madrid on foot and an electric bicycle through the BiciMAD rental service.

Photo Dump

The Return

My flight back home was on the following morning, 28 July. The first leg of the return was, yet again, on an Etihad aircraft bound for Abu Dhabi; this time it was an A330-300. It was an emotional 8-hour flight - with the memories of Akademy still fresh in my mind. To top it off, I finished EarthBound (excellent game!) during the flight.

Descent into Abu Dhabi started about half an hour before landing. This time though, I got to see the bizarre Terminal 1 dome of the Abu Dhabi airport. The Middle East has always been a mystical place for me. The prices of food and drink in the terminal were hard to stomach - 500mL of water was an outrageous 8 UAE Dirhams (₹140)! Thankfully it wasn't a very long layover, so I didn't have to spend too much.

Note to self: may cause tripping if stared at for too long

The next leg was a direct flight to Bangalore, on another Etihad A330. Compared to all the travel I had done in the last two days, the four hour flight almost felt too short. I managed to finish 'Your Lie in April' on this leg.

It was a mix of emotions to land in Bangalore - I was glad to have reached home, but bitter that I had to return to college in only two days.

Akademy 2017 was an amazing trip and I am very grateful to KDE for letting me present at Akademy and for giving me the means of reaching there. I hope I can make more trips like these in the future!

Once upon a time, for those who remember the old days of Plasma and Lancelot, there was an experimental applet called Shelf.

The idea behind the Shelf was that sometimes it is useful to have a small applet that just shows your recent files, favourite applications, devices, which you can place on your panel or desktop for quick access.

Now, this post is not about a revival of Lancelot and Shelf (sadly), but it is closely related to them.

Namely, I always disliked the “recent documents” section that is available in almost all launchers in Plasma. The reason is that only one in ten of those documents has a chance to ever be opened again.

The first code aimed at fixing this landed in Lancelot a long time ago – Lancelot was tracking how you used it so that it could eventually start predicting your behaviour.

This idea was recognized as a good one, and we decided that Plasma as a whole could benefit from this.

This is how the activities manager (KAMD) was born. The aim was to allow the user to separate the workspace based on the project she was working on; and to have the activity manager track which applications, documents etc. the user uses in each of the activities.

The first part – having different widget sets for different activities – was baked into Plasma 4 from the start. The second one (which I consider to be much more important) came much later in the form of KAMD.

KAMD, apart from being tasked to manage activities and switch from one to another, also tracks which files you access and which applications you use, so that menus like Kicker and Kickoff can show recent documents and recent applications – and have those recent documents and applications tied to the activity you are currently in.

For example, if you have two activities – one for KDE development, and another for working on your photo collection, Digikam will be shown in the recent applications section of Kicker only for the second activity, since you haven’t used Digikam for KDE development.
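
The bookkeeping behind this can be pictured as a usage log keyed by activity. This is only a toy sketch of the idea (the class and names here are made up for illustration; the real KAMD stores scored events in a database):

```python
from collections import defaultdict

class ActivityTracker:
    """Toy model of per-activity resource tracking (illustrative only)."""

    def __init__(self):
        # activity name -> resource -> how many times it was used there
        self.usage = defaultdict(lambda: defaultdict(int))

    def record(self, activity, resource):
        self.usage[activity][resource] += 1

    def used_in(self, activity):
        # only resources actually touched while this activity was current
        return set(self.usage[activity])

tracker = ActivityTracker()
tracker.record("photo-collection", "digikam")
tracker.record("kde-development", "kate")

# Digikam shows up for the photo activity, but not for KDE development
assert "digikam" in tracker.used_in("photo-collection")
assert "digikam" not in tracker.used_in("kde-development")
```

Because every usage event is recorded under the activity that was current at the time, a launcher only ever needs to ask for the resources of the current activity.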

Now, this is still only showing the “recent documents”. It does show a list of documents and applications that are relevant to the task you are currently working on, but still, it can be improved.

Since we know how often you open a document or start an application, we do not need to focus only on the last time you did so. We can detect which applications and documents you use often and show them instead. Both Kicker and Kickoff allow you to replace the “recently used” with “often used” in the current version of Plasma.
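
The difference between the two orderings fits in a few lines of Python. This is a simplification (KAMD's real scoring also decays old events over time rather than just counting raw frequencies):

```python
# Each entry: (document, open_count, last_opened_timestamp) -- made-up data
history = [
    ("notes.txt",  12, 100),
    ("draft.odt",   1, 300),   # opened only once, but most recently
    ("budget.ods",  7, 200),
]

# "Recently used": newest first, regardless of how often it was opened
recently_used = [doc for doc, _, ts in sorted(history, key=lambda e: -e[2])]

# "Often used": highest open count first
often_used = [doc for doc, n, _ in sorted(history, key=lambda e: -e[1])]

assert recently_used[0] == "draft.odt"  # a one-off file tops the recent list
assert often_used[0] == "notes.txt"     # frequency surfaces the real workhorse
```

The one-off file dominating the "recent" list is exactly the problem described above; sorting by frequency instead surfaces the documents you actually keep coming back to.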

Documents shelf

Now back to the topic of this post.

Documents Shelf

While working on the KAMD service, I often need to test whether it keeps enough data to be able to deduce which are the important applications and documents, and whether the deduction logic performs well.

Most of these tests are small GUI applications that show me the data in a convenient way.

For one of these, I realized it is not only useful for testing and debugging, but that it might also be useful for day-to-day work.

In the screenshot above, you can see an applet that looks quite similar to the Shelf from the Plasma 4 days, which shows the files I use most often in the dummy “test” activity.

One thing that Shelf did not have, and that neither Kicker nor Kickoff has now, is that this applet allows you to pin the documents that are really important to you, so that they never disappear from the list just because the service thinks some other file is more important.

You can think of it as a combination of “often used” and “favourite” documents. It shows your favourite documents – the documents you pinned, and then adds the items that it considers worthy enough to be alongside them.
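
That combination can be expressed as: pinned items always come first, and the remaining slots are filled with the highest-scored unpinned documents. This is a hypothetical sketch of the behaviour, not the applet's actual code:

```python
def shelf_items(pinned, scored, limit=5):
    """Pinned documents always stay; the rest of the list is filled with
    the top-scored documents that are not already pinned."""
    suggestions = [doc for doc, _ in sorted(scored, key=lambda e: -e[1])
                   if doc not in pinned]
    return list(pinned) + suggestions[: max(0, limit - len(pinned))]

pinned = ["thesis.tex", "todo.md"]                     # user's favourites
scored = [("thesis.tex", 0.9), ("cat.jpg", 0.7),       # service's scores
          ("invoice.pdf", 0.4), ("song.ogg", 0.2)]

items = shelf_items(pinned, scored, limit=4)
assert items[:2] == ["thesis.tex", "todo.md"]      # pinned never disappear
assert items[2:] == ["cat.jpg", "invoice.pdf"]     # filled with top suggestions
```

However the scores shift, the pinned documents keep their place at the top; only the suggestion slots churn.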

This applet is not going to be released with the next Plasma, it needs to evolve a bit more before that happens. But all the backend stuff it uses is released and available now if you want to use it in your project.

The keywords are kde:kactivities, kde:kactivities-stats and kde:kactivitymanagerd.


We are happy to announce the release of Qt Creator 4.4 RC!

For the details on what is new in Qt Creator 4.4, please refer to the Beta blog post. As usual we have been busy with bug fixes and improvements since then, and now would be a good time for you to go get it, and provide final feedback.

Get Qt Creator 4.4 RC

The open source version is available on the Qt download page, and you can find commercially licensed packages on the Qt Account Portal. Please post issues in our bug tracker. You can also find us on IRC on #qt-creator on chat.freenode.net, and on the Qt Creator mailing list.

The post Qt Creator 4.4 RC released appeared first on Qt Blog.

Later than planned, here’s Krita 3.2.0! With the new G’Mic-qt plugin integration, the smart patch tool, finger painting on touch screens, new brush presets and a lot of bug fixes.

Read the full release notes for more information! Here’s GDQuest’s video introducing 3.2.0:

Note: the gmic-qt plugin is not available on OSX. Krita now ships with a pre-built gmic-qt plugin on Windows and the Linux AppImage. If you have tested the beta or release candidate builds, you might need to reset your configuration.

Changes since the last release candidate:

  • Don’t reset the LUT docker when moving the Krita window between monitors
  • Correctly initialize the exposure display filter in the LUT docker
  • Add the missing pan tool action
  • Improve the “Normal” blending mode performance by 30% (first patch for Krita by Daria Scherbatyuk!)
  • Fix a crash when creating a second view on an image
  • Fix a possible crash when creating a second window
  • Improve finding the gmic-qt plugin: Krita now first looks whether there is one available in the same place as the Krita executable
  • Fix scroll wheel behaviour if Krita is built with Qt 5.7.1. or later
  • Fix panning in gmic-qt when applying gmic-qt to a non-RGBA image
  • Scale channel values correctly when using a non-RGBA image with gmic-qt
  • Fix the default setting for allowing multiple krita instances



    Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.



    When it is updated, you can also use the Krita Lime PPA to install Krita 3.2.0 on Ubuntu and derivatives.



    Support Krita

    Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

August 16, 2017

Hola Amigos!
Akademy 2017 was such a great experience, that I would love to share with you all in this post.

For those who are unaware of Akademy, it's the annual world summit of KDE (see https://akademy.kde.org).
This year, Akademy was held in Almeria, Spain.


It features a 2-day conference with presentations on the latest KDE developments, followed by 4 days of workshops, Birds of a Feather (BoF) and coding sessions.

There are so many interesting talks given by some real tech enthusiasts, who are amiable and super cool. It is a great opportunity to meet in person the people you have been communicating with, knowing only their IRC nicknames.

So, as mentioned, it started with some really interesting presentations on Plasma desktops, Plasma Mobile, Ruqola, digiKam, WikiData, GCompris, the KIO library, Calligra, KDE neon, Slimbook, translation challenges, and many more. We were fortunate to have Robert Kaye and Antonio Larrosa as our keynote speakers. These presentations were followed by BoF sessions and workshops.

I conducted a workshop on Qt Quick Controls 2, and Paul Brown had some awesome stuff to share on what data should be visible to a viewer who visits your site or uses your product. In parallel, there was a BoF session where the team brainstormed on Plasma Mobile and the Plasma vision.

[ https://twitter.com/anu22mittal ]

It was such a pleasure to meet Valorie, Aleix Pol, Albert, Lydia, John Samuel, David, Timothee, Vasudha, Jonathan and so many more active contributors to KDE.

[Memories of Akademy 2017]

Now, how is KDE Akademy different from conf.kde.in?
I have been a part of the KDE India conference held in Jaipur in 2016 [conf.kde.in], which looked like this:


When you see this, you find so many students keen to know about FOSS and the KDE community.
All are budding developers with some spark and unexplored ideas in their minds. Conferences like this, held by KDE in India, help those unexplored ideas come to execution through using and developing features in the various software built by KDE developers.

Whereas, when you look at this bunch of developers in this picture:

This is the group photograph of the Akademy 2017 attendees. They are the backbone of all the software in KDE.

I would like to thank KDE for giving me this opportunity. It has added great experience and wonderful memories to my journey of FOSS development.

[Hostel, Food, Location @ Akademy]
Also, guys, please help make KDE apps and Plasma easier to use, more convenient and accessible. Support #Randa2017. (To know more about Randa, see: https://dot.kde.org/2017/08/08/randa-meetings-2017-its-all-about-accessibility)

Those who know me, or at least know my history with Krita, know that one of the prime things I personally want to use Krita for is making comics. So back in the day, one of the things I did was make a big forum post discussing the different parts of making a comic and how different software solves them.

One of the things about making a comic is that it is a project. Meaning, it is big and unwieldy, with multiple files and multiple disciplines. You need to be able to write, to draw, to ink, to color. And you need to be able to do this consistently.

The big thing I was missing in Krita was the ability to quickly get to my selection of pages. In real life, I can lay pages down next to one another and always have them in my mind's eye. In Krita, getting to the next or previous page is always a matter of digging through folders and finding the correct page number.

Adding to this, I am also a bit of a perfectionist, so I have been training myself to start drawing scenes or writing as soon as I have an idea, because any idea is way more useful once you've got it down on the page. You can append it to an existing story, or just work it in and reuse the drawings and compositions. And this was also a bit difficult to do, because how does one organise and tag those ideas?

Hence I spent the last few weeks writing a comics manager for Krita.

Comics Project Management Tools

The comics manager, or more officially, the Comics Project Management Tools, are a set of Python scripts that form a plug-in to Krita. They show up in the form of a docker.

comics_manager_01

Python scripting was recently added to Krita because people were willing to pay and vote for the stretch goal in our previous Kickstarter. It shines in this case, as the majority of the tasks that needed doing were storing simple information and showing a GUI for editing that information. Being a developer for Krita on the C++ side also meant that making use of PyQt was very natural.

So, I made a docker in Krita. It starts with a “New project”.

comics_manager_02

Here, the most important part is making sure the artist only has to fill in the most vital information. That is… just the project name and the concept, actually. The location is asked before this window shows up.

The concept is basically a personal note, like “A comic about vampires fighting robots in a post-nuclear wasteland”, or “A scene where character A teaches character B how to ice skate.”

The project name is actually sorta arbitrary. Most writers don't know what to title their story until they're halfway through, so the project name is intended to be more of a code name that is used to automate page naming. For that reason I also spent a day writing a “sophisticated” project name generator, most of which was spent filling up the two lists the project name parts are pulled from.
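A minimal sketch of what such a generator can look like, with made-up word lists (the real plug-in of course pulls from far longer lists):

```python
import random

# Made-up word lists; the real generator ships far longer ones.
ADJECTIVES = ["crowned", "hidden", "iron", "silent", "scarlet"]
NOUNS = ["otter", "lantern", "comet", "harbour", "thistle"]

def generate_project_name(rng=random):
    # Produces a code name like "scarlet-comet", used to automate page naming.
    return "{}-{}".format(rng.choice(ADJECTIVES), rng.choice(NOUNS))
```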

comics_manager_03

Next to that are details like the language (which it attempts to guess by asking QLocale for the system language), and the names for the pages, export and template folders. The metadata can also be filled in already, but again, that's not necessary. So basically, the only thing an artist struck with inspiration needs to do is press New Project, pick a location, press “Generate” and then “Finish”.

After that, the CPMT will make a project folder (if that was checked), will check for, and if necessary create, folders for the pages, export and templates, and will serialise all the information you put in into a comicConfig.json file.
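Serialising that information with the standard json module is then trivial; the field names below are illustrative guesses, not necessarily the keys the actual comicConfig.json uses:

```python
import json

# Illustrative field names; the real comicConfig.json may use different keys.
config = {
    "projectName": "crowned-otter",
    "concept": "A comic about vampires fighting robots in a post-nuclear wasteland",
    "language": "en",
    "pagesLocation": "pages",
    "exportLocation": "export",
    "templateLocation": "templates",
    "pages": [],
}

def save_config(path, config):
    # indent=4 keeps the file readable for anyone who stumbles over it,
    # which was the whole point of picking a plain-text format.
    with open(path, "w", encoding="utf-8") as f:
        json.dump(config, f, indent=4)
```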


Originally, this was a YAML file, but I discovered that there's no YAML library in the Python standard library, and I figured that the majority of our users would panic at the idea of having to install a Python library on Windows. (Krita users are very sweet, but for a good majority of them, computers are magical mystery boxes that will explode when pressing the wrong button. As a more savvy computer user, I can of course confirm that this is true, but there's still a massive difference between knowing how to fix the magic mystery box when it goes wrong, and being helplessly subjected to its mood swings.) Either way, I just wanted to have a config file that was somewhat easily readable by someone coming across it when clearing up their computer for space. This is also the purpose of the concept line: to answer the question “What in the hell was I trying to do here?”


comics_manager_05

Anyway, the user can now navigate to their config file and open it. Then they can press “Add page”, which asks them to either create a default template or import one.

comics_manager_06

As the text may indicate, I am not too happy about the look of this window.

Creating a default template gives an overcomplicated window that asks for a name, DPI, width and height, and the precise sizes of the margins and bleeds. Margins and bleeds are print terms.

When printing something, theoretically we…

  • Take a large piece of paper.
  • Align it to a printer
  • Print several pages at once on that large piece.
  • Cut the pages.
  • Fold the pages
  • and finally, bind the pages in some way or another.

Those steps have many places where things can go wrong. In particular, one of the first things people learned when using mechanical printing was that aligning text is difficult. So people introduced a margin to their text layout, so that even when the alignment was slightly off, the whole text would still fit on the page.

The same thing goes for cutting, folding and binding. This is what the bleed is for. Sometimes, you want to have images that go all the way to the edge. So we create a second border, the bleeds, which indicate where the image will get cut off. The artist will paint over these too, but this method allows for determining where the absolutely important items, like the speech bubbles, go, as well as determining where the drawing doesn’t need to be super detailed, just nice enough to look right.

So the template creation tool allows you to set a margin and a bleed and will create a page with guidelines at those places. An experienced artist will likely have a collection of such templates already, hence “Import template”, which copies the chosen template to the templates folder.

CPMT will remember the template chosen for “Add Page” and use it to create pages without showing the dialog. “Add page from template” will always show the dialog, listing all the templates in the template folder, and the user can configure the default template for “Add Page” in the project settings.

Originally, I had wanted to tell the user to select a page size, margin and bleeds in the project settings. However, those are a lot of things to fill in. Furthermore, there would also be a need for a template for spreads (that's a panel that covers two whole pages), as well as a template for the cover, which is radically different from the regular pages. And these cannot be determined automatically, because often there's a little margin in the middle or extra bleed to either side for such templates, and this can be unique per printing company. And that doesn't even begin to cover situations where you are not making a print-style comic.

So I thought about this for a long time and decided that the most important thing would be a way to 1) make images from template kra files and 2) make one of those templates a default template, easy to add with a single button, while still allowing the others to be added in a simple manner.

So, once you press OK, or add a page with a default template, it will open the template, resave it in the pages folder as projectname-firstavailablenumber.kra, and show it in the pages list.
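The “first available number” part can be sketched roughly like this plain-Python guess at the behaviour (the helper name is hypothetical):

```python
import os

def first_available_page_name(pages_folder, project_name, digits=3):
    """Return the first projectname-number.kra name not yet on disk."""
    number = 1
    while True:
        name = "{}-{}.kra".format(project_name, str(number).zfill(digits))
        if not os.path.exists(os.path.join(pages_folder, name)):
            return name
        number += 1
```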

The Pages List

The pages list is probably the core element of the Comics Project Management Tools. On a technical level it is a QTableView with a QStandardItemModel, which in the first column shows QStandardItems with the page's “preview.png” (thumbnail) as the icon and the page title as the text, and in the second column the image subject as the text.


For the user, it is a list of pages, with their icon and title/filename in the first column, and the description in the second. The pages can be rearranged by moving the row selector on the right, and the config will then update the way the pages are arranged. Double clicking the page title will open the file in Krita.


Originally, I wanted the description to be editable from the docker, and was almost successful, except that the Python ZipFile library has no mechanism for overwriting or deleting files in an existing zip archive, so I couldn't edit only the documentinfo.xml. This is a bit of a pity, as editing the description from the docker was very convenient for the half hour that it did work. Now the user has to go to the document info to fill in either the subject or description. I want to keep this kind of information inside the image, as that allows for moving the image around irrespective of project, so storing the information in the config file isn't an option.
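For the curious: the only real workaround for ZipFile's missing delete/overwrite is to rewrite the whole archive, roughly like this sketch (note that a real .kra also wants its “mimetype” entry stored first and uncompressed, which this simplification ignores):

```python
import os
import zipfile

def replace_in_zip(zip_path, filename, new_data):
    """Replace one entry in a zip by rewriting the whole archive,
    since ZipFile offers no delete or overwrite."""
    tmp_path = zip_path + ".tmp"
    with zipfile.ZipFile(zip_path, "r") as src, \
         zipfile.ZipFile(tmp_path, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            if item.filename != filename:
                # Copy every other entry verbatim.
                dst.writestr(item, src.read(item.filename))
        dst.writestr(filename, new_data)
    os.replace(tmp_path, zip_path)
```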

This is also why the templates are stored inside the comics project, so that when moving to a new computer, the whole project should still work. I went through a lot of trouble to ensure that all the paths are relative, and right now this is also where all the code probably should be a little cleaner :D

comics_manager_09

Here's where I learned that drag and drop reordering doesn't currently work… *Sigh*

Anyway, the pages list allows reordering pages, accessing pages quickly, removing pages from the list (but not from disk, too dangerous), and adding existing pages. This is all stored in the config, and thus also loaded from it.

Originally, the loading was done by opening the file in Krita and requesting the thumbnail and a title, but that could take up to a minute. Now, I get ZipFile to open the kra file (which is a zip), load the “preview.png” into a QImage, parse the “documentinfo.xml” with ElementTree, and get out the title and subject/abstract for showing that information. A part of me is wondering if I should allow a simple docker that just shows the “mergedimage.png” inside a given kra file for reference.
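Pulling the thumbnail and document info out of a .kra without Krita boils down to a few lines. The XML namespace below is my assumption (the Calligra document-info one); adjust it if your files differ:

```python
import zipfile
import xml.etree.ElementTree as ET

# Namespace assumed from the Calligra document-info DTD; adjust if needed.
DOCINFO_NS = {"di": "http://www.calligra.org/DTD/document-info"}

def read_kra_info(kra_path):
    """Fetch the thumbnail bytes, title and subject from a .kra file
    without opening it in Krita. A .kra file is a zip archive."""
    with zipfile.ZipFile(kra_path, "r") as kra:
        thumbnail = kra.read("preview.png")
        root = ET.fromstring(kra.read("documentinfo.xml"))
        title = root.findtext("di:about/di:title", default="", namespaces=DOCINFO_NS)
        subject = root.findtext("di:about/di:subject", default="", namespaces=DOCINFO_NS)
    return thumbnail, title, subject
```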

So the pages list is the most important element, but there are of course other elements, the second of which is the meta data.


As indicated in the setup section, writers usually don’t know the initial title of their work, and in general it isn’t too wise to force them to go through all the meta data information at the start. On a similar note, it is often hard to figure out what kind of meta data should go into a given field, and anyone who has attempted to read the actual specs of meta data standards knows that the people making these specs are insane and completely incapable of adding real world examples to their specs.

Personally, I also know that when I am writing, I slowly collect a selection of summaries/titles/keywords that should be going into the metadata, so I figured it might be useful to create an organised place to put them.

So, what does comics metadata look like?

Starting with that, what do comics formats look like? As digital distribution is a pretty new thing, and comics in general are a bit slow with providing these things, most comic readers use a guerrilla format that probably got created somewhere around the nineties or early 2000s. It is basically an archive with images inside, ordered and read alphabetically.

I say it was created around that time because there are like four variations of the format. The simplest is CBZ, which is a zip archive with images. Then there's CBR, which is a rar archive with the images, because around the early 2000s the rar archive was obviously so much better at compressing than zip files. Similarly, there's CB7, which is of course a 7-zip archive with images. And of course, Linux nuts came up with CBT, a tar archive with images.
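For illustration, building a minimal CBZ really is this simple (a zipfile sketch; ZIP_STORED because the images are already compressed, and the only real job is giving the entries alphabetically sortable names):

```python
import os
import zipfile

def make_cbz(cbz_path, image_paths):
    """A CBZ is just a zip of images read alphabetically, so the only
    real job is giving the entries sortable names."""
    with zipfile.ZipFile(cbz_path, "w", zipfile.ZIP_STORED) as cbz:
        for number, image in enumerate(sorted(image_paths)):
            extension = os.path.splitext(image)[1]
            cbz.write(image, "page{}{}".format(str(number).zfill(3), extension))
```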

As you can tell, none of these have obvious metadata schemes. Luckily for us, a guerrilla format has guerrilla attempts at creating metadata. No fewer than five different schemes can be found on the internet.

  1. ComicInfo.xml (https://wiki.mobileread.com/wiki/ComicRack) is a file that is added by the Windows comic managing software “ComicRack”. It has no proper specification, but is apparently common enough to be considered relevant.
  2. CoMet.xml (http://www.denvog.com/comet/) is one of the attempts at making an official spec. I haven’t figured out if it is actually common.
  3. ComicBookInfo (https://code.google.com/archive/p/comicbookinfo/wikis/Example.wiki), which, unlike the others, is a JSON file stuck into the zip archive's comment field. The logic being that this is easier to read than those nasty XML files. For the computer, that is; I don't think most people realise there's such a thing as a zip file header and that it can have information in there. Anyway, the spec seems to have been devoured by Google Code's passing, leaving us only with this lone example file. But at least it has an example file.
  4. Comic Book Markup Language (http://dcl.slis.indiana.edu/cbml/). This… is some academic attempt at making comics machine readable. It says it has a spec. It doesn't. Instead it decides upon two extra tags on top of some other spec, but that other spec isn't really a spec either.
  5. ACBF (http://acbf.wikia.com/wiki/Advanced_Comic_Book_Format_Wiki) or Advanced Comic Book Format is an open source attempt at making a slightly more advanced markup similar to CBML, but unlike CBML it actually has a spec and examples, even if all of that lives on a combination of a Wikia and a Launchpad site, the one (wikia.com) full of more scripts than you can shake a stick at, and the other (Launchpad) an extremely difficult to navigate website. Nonetheless, there are two readers that can read it (sorta). One is ACBF Viewer, a PyGTK based application, and the other is Peruse. Well, Peruse master: Peruse from the KDE neon repository takes your CBZ with ACBF metadata just fine… and then shows nothing. It is fixed in master, apparently.

So, for the purpose of writing a meta-data editor, we first look at the things these guys have in common. Except CBML because that has no spec as far as I am concerned.

All of them have a title. The title is the title of a work. It's pretty uncontroversial. Even Dublin Core calls the title of a work “title”, and it is a sentence.

All of them also have… a description. You know whatever is on the back of the book to entice you to read it. In the English language, we have the following words to talk about this piece of text:

  • Description
  • Blurb
  • Abstract
  • Summary
  • Synopsis (Yes really, often book publishers ask authors for a synopsis, that is the whole plot in a paragraph, so they can determine whether it is worth publishing the book, and often this same synopsis is chucked on the back-cover, leading to many books where the back-cover spoils the whole story.)

And from this you can gather that this is where many meta-data schemes get confusing:

ACBF stores this in the “Annotation” element. ComicRack in the “Summary” element, CoMet in the “Description” element, and ComicBookInfo in the “Comments” element. Dublin Core uses the “Description” element for this.

But still, it is relatively uncontroversial. Just give someone a text area, like QPlainTextEdit, and let them type in something.

Language is a QComboBox with entries pulled from a CSV holding the ISO languages and their codes, and it defaults to the system locale. Reading direction has to be separate from language, because sometimes you have people writing comics in the opposite direction of their language. Still, it is a combobox that defaults to the system language's reading direction.

Of the four metadata schemes, CoMet and ComicRack define reading direction, but only CoMet calls it by its name, while ComicRack calls it “Manga”. I am not sure what ACBF intends here. CoMet is also a bit annoying in that, out of the four schemes, it is the only one that requests the proper language name instead of the ISO language code.


Then there's the smaller metadata, like the genre. All of the specs have genre listed as a separate “Genre” element that can occur several times. Dublin Core has no such element, so it goes into “Subject” instead. ACBF is unique in that it allows only a limited list here. ACBF also allows noting percentages, but that got complicated too quickly for me.

So, because there are multiple entries, one would think: QLineEdit, with comma separation. We cannot use a QComboBox, as that doesn't allow multiple entries at once. I also didn't really want to use checkboxes, because those don't allow expressing what people feel the genre of their work is. For example, a horror writer differentiates between psychological horror and gothic horror, while a fantasy writer differentiates between urban fantasy, swords and sorcery, epic fantasy, etc.

So to give people an indication of which types are acceptable, I set up a QCompleter on the QLineEdit. On initialisation the metadata editor checks the key_genre folder for txt files, and uses each line as a string list entry, which then goes into the QCompleter. However, QCompleter doesn't handle comma separated entries by itself. Thankfully, many people on the internet had attempted to make a QLineEdit with comma separated autocompletion, so I was able to cobble something together from that. This way the artist can type in entries and be encouraged to pick certain ones. When exporting we can then check which entries don't match and chuck those into the keywords list.
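The tricky part of such a comma separated completer is deciding which fragment to complete and how to splice the chosen completion back in. Stripped of the Qt plumbing (in the real code this sort of logic would back a QCompleter subclass's splitPath() and activated handler), it looks something like:

```python
def completion_prefix(text, cursor_pos):
    """The fragment the completer should match: everything between the
    last comma before the cursor and the cursor itself."""
    before = text[:cursor_pos]
    last_comma = before.rfind(",")
    return before[last_comma + 1:].lstrip()

def insert_completion(text, cursor_pos, completion):
    """Replace the current fragment with the chosen completion."""
    before = text[:cursor_pos]
    prefix_start = before.rfind(",") + 1
    # Keep the space after a comma if there was one.
    while prefix_start < len(before) and before[prefix_start] == " ":
        prefix_start += 1
    return text[:prefix_start] + completion + text[cursor_pos:]
```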

I rather liked this approach, as it helps people, but because the data is pulled from outside the code, from a simple format, they can also extend it.

So I decided to reuse it for Characters, Format and Keywords as well.

Characters is also near universal. It is probably inspired by the big overarching universes in American comics. The ComicBookInfo JSON just puts it in “tags”, but is a bit unique in that. ACBF has a characters list with a name element for each character. CoMet and ComicRack have recurring “Character” tags.

Format shows up in both ComicRack and CoMet. I have no idea what the former means by it, and am equally confused by the description of the latter. I suspect it means a format-genre. Either way, it is not Dublin Core format, which means “physical format”.

All of them have a place for extra keywords, which I am thankful for.

Then there’s series, volume and issue. ACBF calls series a “sequence” but otherwise there’s not much confusion here. Just a QLineEdit and two QSpinBoxes with Vol. and No. as prefixes.

Then there's the content rating. Originally I had this as a line edit that pulled from a text file as well, but as I realised different rating systems use the same letter to mean slightly different things, I decided to switch over to CSVs for this. The first row holds the title of the rating system; after that, the first column is the letter and the second column the description.


The CSVs result in two comboboxes, the first of which gives the system, and the second the letter. By using the combobox's model we can attach the description to the letter as a tooltip. Both comboboxes are editable, for the person who has no idea they can add their own rating systems but still wishes to rate differently.
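Parsing such a rating CSV is a few lines with the csv module; the exact layout here is my reading of the description above, so treat it as a sketch:

```python
import csv
import io

def load_rating_system(csv_text):
    """First row: the rating system's title; each following row pairs a
    letter with its description (layout as described in the post)."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    system_name = rows[0][0]
    ratings = {row[0]: row[1] for row in rows[1:]}
    return system_name, ratings
```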

Next up is the author. Typically an author is written down like “John Doe”, but for archiving purposes it is usually written like “Doe, John”. Furthermore, in comics there are often multiple creators credited. The latter is most common in American comics, where there's a separate Writer, Penciller, Inker, Colorist and Letterer, and often the Editor is also credited. In European comics this is usually just a “Scenario” and an “Artist”, and in manga it is one or two authors and an army of assistants, but the latter are never credited, and one wouldn't know they existed unless they read the author's ramblings in the volume bound releases.

All of the specs have spaces to refer to authors. They of course, do this in wildly different manners.

ComicRack has separate elements for “Writer”, “Penciller”, “Inker”, “Letterer”, “Colorist”, “CoverArtist”, “Editor” and “Translator”. CoMet is similar, except it calls “CoverArtist” “coverDesigner”. ComicBookInfo and ACBF both instead use a tag to refer to an author and then assign a role to them. ComicBookInfo calls them “Person”, and ACBF makes an author element and says only specific roles are valid. Dublin Core specifies Creator and Contributor tags, which can be refined with a meta tag.

Of these, ACBF is unique in that it actually bothers to separate the different parts of a name, as well as allow nicknames and contact info.

So… how to make a GUI element for that? At this point I had gotten comfortable enough with Qt model/view programming that I just made a QTableView with a QStandardItemModel and columns for Nickname/First Name/Middle Name/Last Name/Role/Email/Homepage. For the role, because many of the schemes ask for a limited list, I subclassed QStyledItemDelegate just enough to give a line edit with a QCompleter that takes its autocompletion entries from the txt files in the key_author_roles folder. Like with the page list, people can add, remove and rearrange authors. By default, it has an entry for John “Anonymous” Doe, which seemed a sensible default that would demonstrate the GUI, but the first feedback I got was from someone who was not familiar with the meaning of the name “John Doe”, so I am a wee bit worried about translation.


I still want to add buttons to optionally scrape the pages list for authors, as well as the ability to generate a text object in the current open file with the credits nicely outlined.

Then, the final part of the meta data is the publishing meta-data.

All of the schemes have some place to put the Publisher Name and the Publishing Date. ACBF also allows for City, and I am a little confused why the others don’t.

The date is a little bit more confusing. ComicRack and ComicBookInfo require a separate publication year, CoMet an ISO date, and ACBF any kind of date, with the ability to specify an ISO date explicitly. QDate and a date input widget to the rescue here.

Then, there’s ISBN. ACBF only accepts ISBN, and CoMet allows for an ISBN or some unique publishing number. The other two don’t have anything.

Then there's the license. Like the content rating, I am pulling this from a CSV with some examples, and like the content rating, I have it editable, defaulting to nothing. My reasoning here is that we could have, for example, a teenager making a fancomic, and I think it would kind of suck if they got bothered because their fancomic has a license defined.


Either way, of the four schemes, ACBF asks for a license but no rights holder, CoMet only a rights holder and no license, and ComicBookInfo and ComicRack don't ask for anything. Dublin Core says that the “Rights” tag should contain anything pertaining to the license and the rights holder. I am not quite sure how to help people here either.

And that is all the meta data. So, the idea is that the author just types in some things, and then later comes back and types in more. And eventually, somewhere over the course of the project it has a proper meta-data definition.

So, I have been discussing these four metadata formats, does that mean I intend to export to them? Yes.


So, there’s a big fancy export button on the comics management docker. Pressing it does nothing, unless you have set up the export. On the other hand, after you have set up the export, pressing the button is the only thing that needs to be done.

The exporter right now can export to three formats. The first is CBZ, with all four metadata schemes available. The second is EPUB, which uses Dublin Core metadata. Finally there's TIFF, which is not really an export format so much as an intermediary format for publishing programs like Scribus. While EPUB and CBZ need to be in 8-bit sRGB/grayscale, TIFF can handle multiple colorspaces and high bit depths.


Each of them has a resize menu, which can resize by width, height, percentage or DPI. These options were necessary because it is otherwise too difficult to determine how to handle differently sized documents sensibly. (Someone who uses spreads doesn't want to resize by height, and someone working on a per-panel basis instead of per-page would prefer a DPI or percentage resize.) For similar reasons the crop menu allows you to select “crop to outermost guides” and pixels, so that it is easy to define a per-image cropping mechanism.
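All of those resize modes reduce to computing one scale factor. A sketch of that arithmetic, assuming the aspect ratio is always preserved (which the post doesn't state explicitly):

```python
def resized_dimensions(width, height, dpi, mode, value):
    """Compute new pixel dimensions for the export resize options.
    mode is one of 'width', 'height', 'percentage' or 'dpi'."""
    if mode == "width":
        scale = value / width
    elif mode == "height":
        scale = value / height
    elif mode == "percentage":
        scale = value / 100.0
    elif mode == "dpi":
        scale = value / dpi
    else:
        raise ValueError("unknown resize mode: " + mode)
    return round(width * scale), round(height * scale)
```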


The exporter also allows removing layers by color label, which is useful to make sure sketch layers or commentary layers are removed.

The exporter exports everything to the export folder, both packaged and unpackaged, so that it is easy to get to the right elements and adjust them.


So that is quite easy: you set it up, and then press the export button whenever you feel like making a resulting CBZ or EPUB that can be read in ACBF Viewer or other readers.

There are still metadata menaces here. I got pretty confused with the ACBF spec, as it asks for a ‘unique identifier for cataloguing purposes’ and I have no idea what that means. Note that it doesn't say “universally unique id”, nor does it specify what kind of ID this ought to be, and none of the existing ACBF files have anything like it (despite the spec saying it is mandatory), so I decided to just make a QLineEdit and let someone else figure it out.

For EPUB… anyone who has attempted to read that spec knows it is overcomplicated (and yet still easier than CBML). So I just opened Sigil, made an EPUB that sorta looked like I wanted it to, and then reproduced that EPUB in detail. This still took me a full day; imagine how long it would have taken if I had actually tried to read the spec.

Closing Thoughts:

So, I wanted a docker to organise and access my comics pages. I ended up with a docker that can (theoretically) support a full-on production process.

What is next?

I already noted here and there that there’s elements I want to improve. On top of those, I also want to…

  • make it possible for people to select a folder with additional metadata auto-completion keys.
  • when the vector Python API is in, allow people to specify the name of a layer that holds the panel data, so this can be stored in ACBF.
  • when text works and is in the Python API, also give the ACBF export the text layers.
  • general GUI cleanup. There are parts of the editor that are a bit messy.
  • improve the EPUB look.
  • improve the other metadata.
  • fix bugs… like page reordering not actually working >_>
  • Krita has gained a beautiful plethora of bugs with saving/loading via Python thanks to async saving being implemented, so those bugs need to be catalogued, put on Bugzilla and fixed.

But for now, I am gonna take a break. I also poked someone to do some testing for me, and I might poke some more people for testing, and then fix some bugs. And then worry about python scripting translation support. Maybe then merge.

Something like that at the least.

Hello again and welcome to my blog! In this post I am going to cover what has happened since the first GSoC evaluation and give you an overview of the status of my work.

Since the last post I've been working on the implementation of the guitar plugin, along with adjusting the existing piano plugin to better suit the new framework.

As you may remember from my last post, Minuet currently supports multiple plugins to display its exercises. To change from one plugin to another, all you have to do is press the desired instrument name: for now, only “Guitar” and “Piano” are available.



In the past couple of weeks, I've been deciphering the guitar note representation and also the guitar chords. I don't want to discourage anyone from learning how to play the guitar, but man… it was so hard and tiresome. Nevertheless, my previous piano experience helped me better understand the guitar specifics and get up to speed with the theory needed to complete my project.

keyChords.PNG


Then I talked with my mentor on Hangouts and, using http://chordfind.com as a base (which is indeed a great start for beginners who want to learn guitar, piano and many other four-string instruments' chords), we agreed on two specific representations for each chord: Major, Minor, Augmented, Diminished, etc., for chords with the root note in the C-E range or in the F-B range.

Then I started working on the core of the guitar plugin: to keep the piano keyboard functional, I had to implement the exact same methods used by the piano plugin. I won't go into too much coding detail (the code is available on GitHub on my fork of Minuet, and on the official GSoC branch when completed), but with a little tweak to the current ExerciseView component, I managed to create a guitar plugin that runs Minuet's chord exercises.

It looks like this:

  • minor and major chords

minor and major chords.gif

  • diminished and augmented chords

diminished and augmented chords.gif

  • minor7 and dominant7 chords

minor7 and dominant7 chords

  • minor9 and major9 chords

minor9 and major9 chords


When we went public with our troubles with the Dutch tax office two weeks ago, the response was overwhelming. The little progress bar on krita.org is still counting, and we're currently at 37,085 euros from 857 donors. And that excludes the people who sent money to the bank directly. It does include Private Internet Access' sponsorship. Thanks to all of you! So many people have supported us that we cannot even manage to send out enough postcards.

So, even though we’re going to get another accountant’s bill of about 4500 euros, we’ve still got quite a surplus! As of this moment, we have €29,657.44 in our savings account!

That means that we don’t need to do a fund raiser in September. Like we said, we’ve still got some features to finish. Dmitry and I are currently working on

  • Make Krita save and autosave in the background (done)
  • Improved animation rendering speed (done)
  • Improve Krita’s brush engine multi-core adaptability (under way)
  • Improve the general concurrency in Krita (under way)
  • Add touch functionality back (under way)
  • Implement the new text tool (under way)
  • Lazy brush: plug in a faster algorithm
  • Stacked brushes: was done, but needs to be redone
  • Replace the reference images docker with a reference images tool (under way)
  • Add patterns and filters to the vector support

All of that should be done before the end of the year. After that, we want to spend 2018 working on stability, polish and performance. So much will have changed that from 3.0 to 4.0 is a bigger step than from 2.9 to 3.0, even though that included the port to a new version of Qt! We will be doing new fund raisers in 2018, but we’re still discussing what the best approach would be. Kickstarters with stretch goals are very much feature oriented, and we’ve all decided that it’s time to improve what we have, instead of adding still more features, at least, for a while…

In the meantime, we’re working on the 3.2 release. We wanted to have it released yesterday, but we found a regression, which Dmitry is working hard on fixing right now. So it’ll probably be tomorrow.

August 15, 2017

This was the first time I used the new healing clone tool in the image editor to remove dust spots from a famous photo used in many online tutorials of similar programs. The user interface changed from what was initially planned, to be more user friendly: the functionality already available in the editor enabled a friendlier scenario than the one I had in my head when I started coding. I will attach a screenshot of the tool and my attempt to fix the image in this post, and document the journey, with more details about the code and the tool usage, in the next few days.

Screenshot from 2017-08-15 22-53-56




Planet KDE is made from the blogs of KDE's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet KDE with RSS, FOAF or OPML.