August 17, 2019

It is a great idea to encrypt files on the client side before uploading them to an ownCloud server if the server is not running in a controlled environment, or if one just wants to act defensively and minimize risk.

Some people think it is a great idea to include the functionality in the sync client.

I don't agree, because it combines two very complex topics in one code base and makes the code difficult to maintain. The risk of ending up with a code base that nobody is able to maintain properly any more is high. So let's avoid that for ownCloud and look for alternatives.

A good way is to use a so-called encrypted overlay filesystem and let ownCloud sync the encrypted files. The downside is that you cannot use the encrypted files in the web interface, because it cannot easily decrypt the files. To me, that is not overly important, because I want to sync files between different clients, which is probably the most common use case.

Encrypted overlay filesystems put the encrypted data in one directory called the cipher directory. A decrypted representation of the data is mounted to a different directory, in which the user works.

That is easy to set up and use, and in principle also a good fit for file sync software like ownCloud, because it does not store the files in one huge container file that needs to be synced whenever one bit changes, as other solutions do.

To use it, the cipher directory must be configured as the local sync dir of the client. If a file is changed in the mounted dir, the overlay filesystem changes the crypto files in the cipher dir, and these are synced by the ownCloud client.
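
As a concrete sketch with CryFS, one of the filesystems discussed below (the paths are made-up examples; the cipher directory is the one the ownCloud client is configured to sync):

# mount: encrypted blocks live in ~/ownCloud/cipher,
# the decrypted view appears in ~/Documents/private
cryfs ~/ownCloud/cipher ~/Documents/private

You then work in ~/Documents/private, while the ownCloud client watches ~/ownCloud/cipher and uploads only the changed encrypted blocks.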

One of the solutions I tried is CryFS. It works nicely in general, but is unfortunately very slow together with ownCloud sync.

The reason is that CryFS chunks all files in the cipher dir into 16 kB blocks, which are spread over a set of directories. That is very beneficial because file names and sizes cannot be reconstructed from the cipher dir, but it hits one of the weak spots of ownCloud sync: ownCloud is traditionally a bit slow with many small files spread over many directories. That shows dramatically in a test with CryFS: adding eleven new files with an overall size of around 45 MB to a CryFS filesystem directory keeps the ownCloud client uploading for 6:30 minutes.

Adding another four files with a total size of a bit over 1 MB results in an upload of 130 files and directories, with an overall size of 1.1 MB.

A typical change use case, like editing an existing office text document locally, is not that bad. Here, CryFS splits an 8.2 kB LibreOffice text document into three 16 kB files in three directories. When one word gets inserted, CryFS needs to create three new dirs in the cipher dir and uploads four new 16 kB blocks.

My personal conclusion: CryFS is an interesting project. It has nice integration into the KDE desktop with Plasma Vault. Splitting files into equally sized blocks is good because it does not allow guessing data based on names and sizes. However, for syncing with ownCloud, it is not the best partner.

If there is a way to improve the situation, I would be eager to learn about it. Maybe the block size can be increased, or the number of directories limited?
Also, the upcoming ownCloud sync client version 2.6.0 again brings optimizations in the discovery and propagation of changes; I am sure that improves the situation.

Let’s see what other alternatives can be found.

I won't say that I am done with Magnetic Lasso yet, but to be honest the results are a lot better now. Take a look at one of the tests that I did.

A couple of days ago came the official release announcement, yesterday I presented the improvements in Dolphin, Gwenview, Okular and Kate, and today it is time to talk about more of the new features in KDE Applications 19.08: Konsole, Spectacle, Kontact and Kdenlive.

More new features in KDE Applications 19.08

August 15, 2019 was the date chosen by the KDE developers to release KDE Applications 19.08 and to prepare a great post explaining its new features, which served as the official announcement.

It is time to review them in depth on the blog, in the second part of the article dedicated to them. If yesterday we talked about Dolphin, Gwenview, Okular and Kate, today it is the turn of Konsole, Spectacle, Kontact and Kdenlive.

Konsole

KDE's terminal emulator, Konsole, brings few new features, but very eye-catching ones:

  • The main console view can now be split both vertically and horizontally, and these subdivisions can be split again as many times as you want.
  • In addition, you can drag the consoles around to rearrange them so they better fit your way of working.
  • Finally, the Settings window has been revised to make it simpler and easier to use.


Spectacle

Spectacle is KDE's screenshot application, which little by little has been surpassing (if it hasn't already) the classic KSnapshot, KDE's previous screen capture tool. Its main new features are:

  • When taking a screenshot with a delay, a countdown with the remaining time is shown, both in the window title and in its Task Manager icon.
  • Also for delayed screenshots, a progress bar is shown in the task manager, and the “Take a new screenshot” button turns into a “Cancel” button to stop the countdown.
  • When saving a screenshot, a message appears that lets us open the image or the folder containing it.


Kontact

The personal information manager, which includes email, calendar and contacts, comes loaded with new functionality:

  • New colorful Unicode emoji.
  • Markdown support in the email editor.
  • Integration with grammar checkers such as LanguageTool and Grammalecte, which will help us review and correct our texts.
  • When scheduling events from an email invitation in KMail, the email is no longer deleted after replying.
  • It is now possible to move an event from one calendar to another from the KOrganizer event editor.
  • KAddressBook now lets us send SMS messages to contacts through KDE Connect, improving the integration between your desktop and your mobile devices.

Kdenlive

To finish the round-up of the eight applications that received the most new features, it is time to talk about Kdenlive, KDE's video editing application, which offers few visible novelties this time, since the work has been under the hood.

  • A new set of keyboard and mouse combinations has been added that will help us be more productive.
  • Usability has been improved so that 3-point editing operations are consistent with other video editors, which will ease the transition for those coming from another editor.

Not bad for one of the three development branches of the KDE project. Moreover, these are only the new features highlighted by the developers; under the hood there are many more. I invite you to read Nathan's posts, or their translation by Victorchk, where you can follow all the weekly news about the KDE applications, the Plasma desktop and KDE Frameworks.

More information: KDE


August 16, 2019

If yesterday brought the official announcement, today it is time to talk about the new features of KDE Applications 19.08, which are not just cosmetic. Today we will review what's new in Dolphin, Gwenview, Okular and Kate.

New features in KDE Applications 19.08

Yesterday, August 15, 2019, was the date chosen by the KDE developers to release KDE Applications 19.08 and to prepare a great post explaining its new features, which served as the official announcement.

It is time to review some of them, complementing the video we saw yesterday. By the way, I have split the article in two because I don't really like long articles. Tomorrow, the other part.


Dolphin

The file and folder explorer, in my opinion a killer app, arrives loaded with new features:

  • A new global keyboard shortcut, Meta + E, with which we can launch it at any time.
  • If an application launches Dolphin and it is already running on our desktop, a new tab opens in the existing window, reducing the number of windows on our desktop.
  • Improvements in the information panel, which can now be configured without opening any additional window.


Gwenview

KDE's simplest image viewer also receives its share of improvements:

  • A new “Low resource usage” mode has been added, which loads low-resolution thumbnails (whenever possible), gaining a lot of speed when working with JPEG images and RAW files.
  • When Gwenview cannot generate a thumbnail for an image, it now shows a placeholder image instead of reusing the thumbnail of the previous image.
  • The problems Gwenview had displaying thumbnails from Sony and Canon cameras have been solved.
  • A new “Share” menu has been added, which allows sending images to various places.
  • Also, using KIO, images in remote locations are now loaded and displayed correctly.
  • RAW images can now show their EXIF metadata.


Okular

The universal document viewer, one of the applications that won my heart when I discovered KDE, also introduces improvements:

  • Improvements in the annotation configuration dialogs.
  • The ability to add visual decorations to annotations.
  • All annotations can now be expanded or collapsed at the same time.
  • Improvements in ePub support, with which Okular optimizes its performance.
  • The page borders and the presentation-mode Marker tool have also been improved.

Kate

One of the applications most loved by developers, the advanced text editor Kate offers the following new features:

  • When a new document is opened from another application, Kate comes to the foreground.
  • The “Quick Open” feature sorts items by most recent use and preselects the top item.
  • The “Recent Documents” option no longer saves the settings of each individual window.

Not bad for one of the three development branches of the KDE project. Moreover, these are only the new features highlighted by the developers; under the hood there are many more. I invite you to read Nathan's posts, or their translation by Victorchk, where you can follow all the weekly news about the KDE applications, the Plasma desktop and KDE Frameworks.

More information: KDE


August 15, 2019

Since last year, development in Cantor has kept up quite a good momentum. After the many new features and the stabilization work done in the 18.12 release (see this blog post for an overview), we continued to improve the application in 19.04. Today the release of KDE Applications 19.08, and with it of Cantor 19.08, was announced. In this release we again concentrated mostly on improving the usability of Cantor and stabilizing the application. See the ChangeLog file for the full list of changes.

Among the new features targeting usability, we want to mention the improved handling of the “backends”. As you know, Cantor serves as a front end to different open-source computer algebra systems and programming languages, and requires these backends for the actual computation. The communication with the backends is handled via different plugins that are installed and loaded on demand. In the past, if a plugin for a specific backend failed to initialize (e.g. because the backend executable was not found), we didn't show it in the “Choose a Backend” dialog at all, and the user was completely lost. Now we still don't allow creating a worksheet for such a backend, but we show the entry in the dialog together with a message explaining why the plugin is disabled, as in the example below asking the user to check the executable path in the settings:


Select Backend Dialog

The same applies to cases where the plugin, as compiled and provided by the Linux distribution, doesn't match the version of the backend that the user installed manually on the system and asked Cantor to use. Here we clearly inform the user about the version mismatch and also advise what to do:

Select Backend Dialog

Speaking of custom installations of backends: in 19.08 we allow setting a custom path to the Julia interpreter, similarly to what is already possible for some other backends.

The handling of Markdown and LaTeX entries has become more comfortable. You can now quickly switch from the rendered result to the original code with a mouse double-click; switching back happens, as usual, via the evaluation of the entry. Furthermore, the results of such rendered Markdown and LaTeX entries are now saved as part of the project. This makes it possible to consume projects containing Markdown and LaTeX entries on systems without support for the Markdown and LaTeX rendering process, and it also decreases project loading times, since the ready-to-use results can be used directly.

In 19.08 we added a “Recent Files” menu allowing quick access to recently opened projects:

Recent Files Menu

Among the important bug fixes, we want to mention those that improved the communication with the external processes “hosting” embedded interpreters like Python and Julia. Cantor now reacts much better to errors and crashes in those external processes. For Python, the interruption of running commands was improved.

While working on making 19.08 more usable and stable, we also worked on some bigger new features in parallel. This development is being done as part of a Google Summer of Code project whose goal is to add support for Jupyter notebooks in Cantor. The idea behind this project and its progress are covered in a series of blog posts (here, here and here). The code is in quite good shape and has already been merged to master. This is how such a Jupyter notebook looks in Cantor:

We plan to release this in the upcoming release of KDE Applications 19.12. Stay tuned!

I'm writing this while sitting in the Moscow airport, in a state which is a mix of tiredness, anger and astonishment. At this time I should be already at home, in Saint Petersburg, but something went very wrong and I feel the need to vent my frustration here. Please bear with me :-)

The story goes like this: we booked a flight from Venice to Saint Petersburg, with a change in Moscow. The time between flights was 1 hour and 20 minutes — that's not plenty of time, but it's still much longer than other changes I've had in the past in other airports. And I should stress that this flight combination was suggested to me by the Aeroflot company's website, so this time should be enough for the transfer. At least, this was my assumption.

And it was wrong: in spite of the flight from Venice to Moscow landing in time, in spite of us not losing a minute and performing all the passport and security checks without delays (we didn't even let the kids visit the toilet!), in spite of us running along the corridors as soon as we realized that we might be late, we arrived at the gate just in time to see it closing in front of us. And no, rules are rules, so they wouldn't let us board.

The guy at the gate comforted us by saying that planes from Moscow to Saint Petersburg fly every hour, so we could just take the next one, for free. We tried to protest, but to no avail; we went to the Aeroflot ticket desk, explained the situation, and they told us that we could take the flight 6 hours later. And no, it didn't matter that we had kids; the employee at the desk told us that we should be grateful to them that we could fly for free, as they were making an exception for us.

Yes, you've read it correctly: they asked us to fly 6 hours later, and instead of giving us any compensation (as I once got from Finnair, for example, when their flight was late), we had to be grateful for not having to buy the tickets again!

And I want to make one thing clear: there was no announcement about the transfer time being tight, neither on the plane nor at the airport; no one came forward asking for passengers transferring to St. Petersburg and inviting us to skip the lines for the various controls, nothing. Our names were never called. All that is the norm in all other similar situations I've experienced. If that does not happen, it means (but maybe I'm being too naive) that the second plane is waiting, or that there's still plenty of time to board it.

Luckily they found that an earlier flight, which was only three hours later than our initially expected departure, had three available seats, so my wife and kids are boarding that plane right now while I'm writing this. I'll take the other flight in three hours, hoping that there won't be any more surprises.

Lesson of the day: always fly from Finland (if you live in Saint Petersburg, like me, that's close), and in any case avoid Aeroflot.

In the middle of summer, the KDE Project's development team doesn't rest, and has just announced that KDE Applications 19.08 is available. As always, it offers improvements and optimizations in numerous applications, and marks another waypoint in a development effort that never stops and that keeps making KDE the Killer Desktop Experience (I read that in this reddit post, which may cause some confusion, and I liked it).

KDE Applications 19.08 available: as always, more and better

After the necessary beta and release candidate, August 15, 2019 was the date chosen by the KDE developers to release KDE Applications 19.08 and to prepare a great post explaining its new features.

KDE Applications 19.08 available #KDE

The official announcement reads:

The KDE community is happy to announce the release of KDE Applications 19.08.

This release is part of KDE's commitment to continually provide improved versions of the programs we offer our users. New versions of Applications bring more features and better-designed software that increases the usability and stability of applications like Dolphin, Konsole, Kate, Okular and all your other favorite utilities. Our goal is to make sure you remain productive, and to make KDE software easier and more pleasant to use.

We hope you enjoy all the new improvements you will find in 19.08!

And as has become customary, this release comes with a presentation video showing some of its new features:

Spread the word


Tomorrow it will be time to talk about its new features (the article is in the oven), so today all that remains is to encourage you to spread the good news, as a small and simple way of paying back for this incredible set of applications that we are offered for free.

KDE, and yours truly, encourage you to spread the word on social networks: submit articles to websites, and use channels like Delicious, Digg, Reddit, Twitter, Mastodon, etc.

Also upload screenshots to services like Instagram or Facebook (or wherever) and post them in the appropriate groups.

And don't forget to create screencasts and upload them to YouTube, Blip.tv, Vimeo and the like.

And, most importantly, tag your posts and uploaded material with “KDE”. This makes them easy to find, and gives the KDE promo team a way to analyze the coverage of these KDE software releases.

Finally, if you want to get in touch with the KDE press team, you can send an email to press@kde.org.

More information: KDE.org

KTouch, an application to learn and practice touch typing, has received a considerable update with today's release of KDE Applications 19.08.0. It includes a complete redesign of the home screen by me; the home screen is where you select the lesson to train on.

New home screen of KTouch

There is now a new sidebar offering all the courses KTouch provides, for a total of 34 different keyboard layouts. Previous versions of KTouch presented only the courses matching the current keyboard layout; now it is much more obvious how to train on keyboard layouts other than the current one.

Other improvements in this release include:

  • Tab focus now works as expected throughout the application, allowing you to train without ever touching the mouse.
  • Access to training statistics for individual lessons has been added to the home screen.
  • KTouch now supports rendering on HiDPI screens.

KTouch 19.08.0 is available on the Snap Store and is coming to your Linux distribution.

Can you believe we've already passed the half-year mark? That means it's just the right time for a new release of KDE Applications! Our developers have worked hard on resolving bugs and introducing features that will help you be more productive as you get back to school, or return to work after your summer vacation.

The KDE Applications 19.08 release brings several improvements that truly elevate your favorite KDE apps to the next level. Take Konsole, our powerful terminal emulator, which has seen major improvements to its tiling abilities. We've made tiling a bit more advanced, so now you can split your tabs as many times as you want, both horizontally and vertically. The layout is completely customizable, so feel free to drag and drop the panes inside Konsole to achieve the perfect workspace for your needs.

Dolphin, KDE's file explorer, introduces features that will help you step up your file management game. Let's start with bookmarks, a feature that allows you to create a quick-access link to a folder, or save a group of specific tabs for future reference. We've also made tab management smarter to help you declutter your desktop. Dolphin will now automatically open folders from other apps in new tabs of an existing window, instead of in their own separate windows. Other improvements include a more usable information panel and a new global shortcut for launching Dolphin - press Meta + E to try it out!

Okular, our document viewer, continues with a steady stream of usability improvements for your document-viewing pleasure. In Applications 19.08, we have made annotations easier to configure, customize, and manage. Okular's ePub support has also greatly improved in this release, so Okular is now more stable and works better when previewing large files.

All this sounds exciting for those who read and sort through documents, but what about those who write a lot of text or emails? They will be glad to hear we've made Kate, our advanced text editor, better at sorting recent files. Similar to Dolphin, Kate will now focus on an existing window when opening files from other apps. Your email-writing experience also receives a boost with the new version of Kontact, or more specifically, KMail. After updating to Applications 19.08, you'll be able to write your emails in Markdown, and insert - wait for it - emoji into them! The new integration with grammar-checking tools like LanguageTool and Grammalecte will help you prevent embarrassing mistakes and typos that always seem to creep into the most important business emails.

Photographers and other creatives will appreciate the changes to Gwenview, KDE's image viewer. Gwenview can now display extended EXIF metadata for RAW images, share photos and access remote files more easily, and generate better thumbnails. If you are pressed for system resources, Gwenview has your back with the "Low resource usage" mode that you can enable at will. In the video-editing department, Kdenlive shines with a new set of keyboard+mouse combinations and improved 3-point editing operations.

We should also mention Spectacle, KDE's screenshot application. The new version lets you open the screenshot (or its containing folder) right after you've saved it. Our developers introduced a number of nice and useful touches to the Delay functionality. For example, you may notice a progress bar in the panel, indicating the remaining time until the screenshot is done. Sometimes, it's the small details that make using KDE Applications and Plasma so enjoyable.

Speaking of details, to find out more about other changes in KDE Applications 19.08, make sure to read the official announcement.

Happy updating!


If you happen to be in or close to Milan, Italy this September, come and join us at Akademy, our annual conference. It's a great opportunity to meet the creators of your favorite KDE apps in person, and get an early sneak peek at all the things we have in store for the future of KDE.

August 14, 2019

As Lars mentioned in his Technical Vision for Qt 6 blog post, we have been researching how we could have a deeper integration between 3D and Qt Quick. As a result we have created a new project, called Qt Quick 3D, which provides a high-level API for creating 3D content for user interfaces from Qt Quick. Rather than using an external engine which can lead to animation synchronization issues and several layers of abstraction, we are providing extensions to the Qt Quick Scenegraph for 3D content, and a renderer for those extended scene graph nodes.

Does that mean we wrote yet another 3D Solution for Qt?  Not exactly, because the core spatial renderer is derived from the Qt 3D Studio renderer. This renderer was ported to use Qt for its platform abstraction and refactored to meet the Qt project's coding style.


“San Miguel” test scene running in Qt Quick 3D

What are our Goals?  Why another 3D Solution?

Unified Graphics Story

The single most important goal is that we want to unify our graphics story. Currently we are offering two comprehensive solutions for creating fluid user interfaces, each having its own corresponding tooling.  One of these solutions is Qt Quick, for 2D, the other is Qt 3D Studio, for 3D.  If you limit yourself to using either one or the other, things usually work out quite fine.  However, what we found is that users typically ended up needing to mix and match the two, which leads to many pitfalls both in run-time performance and in developer/designer experience.

Therefore, and for simplicity’s sake, we aim to have one runtime (Qt Quick), one common scene graph (Qt Quick Scenegraph), and one design tool (Qt Design Studio).  This should present no compromises in features, performance or the developer/designer experience. This way we do not need to further split our development focus between more products, and we can deliver more features and fixes faster.

Intuitive and Easy to Use API

The next goal for Qt Quick 3D is to provide an API for defining 3D content, an API that is approachable and usable by developers without the need to understand the finer details of the modern graphics pipeline.  After all, the majority of users do not need to create specialized 3D graphics renderers for each of their applications, but rather just want to show some 3D content, often alongside 2D.  So we have been developing Qt Quick 3D with this perspective in mind.

That being said, we will be exposing more and more of the rendering API over time which will make more advanced use cases, needed by power-users, possible.

At the time of writing of this post we are only providing a QML API, but the goal in the future is to provide a public C++ API as well.

Unified Tooling for Qt Quick

Qt Quick 3D is intended to be the successor to Qt 3D Studio.  For the time being Qt 3D Studio will still continue to be developed, but long-term will be replaced by Qt Quick and Qt Design Studio.

Here we intend to take the best parts of Qt 3D Studio and roll them into Qt Quick and Qt Design Studio.  So rather than needing a separate tool for Qt Quick or 3D, it will be possible to just do both from within Qt Design Studio.  We are working on the details of this now and hope to have a preview of this available soon.

For existing users of Qt 3D Studio, we have been working on a porting tool to convert projects to Qt Quick 3D. More on that later.

First Class Asset Conditioning Pipeline

When dealing with 3D scenes, asset conditioning becomes more important because now there are more types of assets being used, and they tend to be much bigger overall.  So as part of the Qt Quick 3D development effort we have been looking at how we can make it as easy as possible to import your content and bake it into efficient runtime formats for Qt Quick.

For example, at design time you will want to specify the assets you are using based on what your asset creation tools generate (like FBX files from Maya for 3D models, or PSD files from Photoshop for textures), but at runtime you would not want the engine to use those formats.  Instead, you will want to convert the assets into some efficient runtime format, and have them updated each time the source assets change.  We want this to be an automated process as much as possible, and so want to build this into the build system and tooling of Qt.

Cross-platform Performance and Compatibility

Another of our goals is to support multiple native graphics APIs, using the new Rendering Hardware Interface being added to Qt. Currently, Qt Quick 3D only supports rendering using OpenGL, like many other components in Qt. However, in Qt 6 we will be using the QtRHI as our graphics abstraction and there we will be able to support rendering via Vulkan, Metal and Direct3D as well, in addition to OpenGL.

What is Qt Quick 3D? (and what it is not)

Qt Quick 3D is not a replacement for Qt 3D, but rather an extension of Qt Quick’s functionality to render 3D content using a high-level API.

Here is what a very simple project with some helpful comments looks like:

import QtQuick 2.12
import QtQuick.Window 2.12
import QtQuick3D 1.0
 
Window {
  id: window
  visible: true
  width: 1280
  height: 720
     
  // Viewport for 3D content
  View3D {
    id: view
         
    anchors.fill: parent
    // Scene to view
    Node {
      id: scene
          
      Light {
             
        id: directionalLight
               
      }

      Camera {
        id: camera
        // It's important that your camera is not inside 
        // your model so move it back along the z axis
        // The Camera is implicitly facing up the z axis,
        // so we should be looking towards (0, 0, 0)
        z: -600
      }

      Model {
        id: cubeModel
        // #Cube is one of the "built-in" primitive meshes
        // Other Options are:
        // #Cone, #Sphere, #Cylinder, #Rectangle
        source: "#Cube"
                 
        // When using a Model, it is not enough to have a 
        // mesh source (ie "#Cube")
        // You also need to define what material to shade
        // the mesh with. A Model can be built up of 
        // multiple sub-meshes, so each mesh needs its own
        // material. Materials are defined in an array, 
        // and order reflects which mesh to shade
                 
        // All of the default primitive meshes contain one
        // sub-mesh, so you only need 1 material.
                 
        materials: [
                     
          DefaultMaterial {
                         
            // We are using the DefaultMaterial which 
            // dynamically generates a shader based on what
            // properties are set. This means you don't 
            // need to write any shader code yourself.  
            // In this case we just want the cube to have
            // a red diffuse color.
            id: cubeMaterial
            diffuseColor: "red"
          }
        ]
      }
    }
  }
}

The idea is that defining 3D content should be as easy as 2D.  There are a few extra things you need, like the concepts of Lights, Cameras, and Materials, but all of these are high-level scene concepts, rather than implementation details of the graphics pipeline.

This simple API comes at the cost of less power, of course.  While it may be possible to customize materials and the content of the scene, it is not possible to completely customize how the scene is rendered, unlike in Qt 3D with its customizable framegraph.  Instead, for now there is a fixed forward renderer, and you can define how things are rendered via properties in the scene.  This is like other existing engines, which typically have a few possible rendering pipelines to choose from, and those then render the logical scene.


A Camera orbiting around a Car Model in a Skybox with Axis and Gridlines (note: the stutter is from the 12 FPS GIF)

What Can You Do with Qt Quick 3D?

Well, it can do many things, but these are built up using the following scene primitives:

Node

Node is the base component for any node in the 3D scene.  It represents a transformation in 3D space, but is itself non-visual.  It works similarly to how the Item type works in Qt Quick.

Camera

Camera represents how a scene is projected to a 2D surface. A camera has a position in 3D space (as it is a Node subclass) and a projection.  To render a scene, you need to have at least one Camera.

Light

The Light component defines a source of lighting in the scene, at least for materials that consider lighting.  Right now, there are 3 types of lights: Directional (default), Point and Area.

Model

The Model component is the one visual component in the scene.  It represents a combination of geometry (from a mesh) and one or more materials.

The source property of the Model component expects a .mesh file, which is the runtime format used by Qt Quick 3D.  To get mesh files, you need to convert 3D models using the asset import tool.  There are also a few built-in primitives. These can be used by setting the following values on the source property: #Cube, #Cylinder, #Sphere, #Cone, or #Rectangle.

We will also be adding a programmatic way to define your own geometry at runtime, but that is not yet available in the preview.

Before a Model can be rendered, it must also have a Material. This defines how the mesh is shaded.

DefaultMaterial and Custom Materials

The DefaultMaterial component is an easy-to-use, built-in material.  All you need to do is create this material, set the properties you want to define, and under the hood all necessary shader code will be generated automatically for you.  All the other properties you set on the scene are taken into consideration as well. There is no need to write any graphics shader code (such as vertex or fragment shaders) yourself.

It is also possible to define so-called CustomMaterials, where you do provide your own shader code.  We also provide a library of pre-defined CustomMaterials you can try out by just adding the following to your QML imports:

import QtQuick3D.MaterialLibrary 1.0

Texture

The Texture component represents a texture in the 3D scene, as well as how it is mapped to a mesh.  The source for a texture can either be an image file, or a QML Component.

A Sample of the Features Available

3D Views inside of Qt Quick

To view 3D content inside of Qt Quick, it is necessary to flatten it to a 2D surface.  To do this, you use the View3D component.  View3D is the only QQuickItem-based component in the whole API.  You can either define the scene as a child of the View3D, or reference an existing scene by setting the scene property to the root Node of the scene you want to render.

If you have more than one camera, you can also set which camera you want to use to render the scene.  By default, it will just use the first active camera defined in the scene.
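
As a small sketch of the two properties just described (the id names are made up, and the assumption is that the camera is exposed through a property of that name), pointing a View3D at an existing scene and a specific camera could look roughly like this:

View3D {
    anchors.fill: parent
    scene: sceneRoot    // root Node of a scene defined elsewhere
    camera: topCamera   // render from this Camera instead of the first active one
}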

Also, it is worth noting that View3D items do not necessarily need to be rendered to an off-screen texture first.  It is possible to set one of the following 4 render modes to define how the 3D content is rendered:

  1. Texture: View3D is a Qt Quick texture provider and renders content to a texture via an FBO
  2. Underlay: View3D is rendered before Qt Quick’s 2D content is rendered, directly to the window (3D is always under 2D)
  3. Overlay: View3D is rendered after Qt Quick’s 2D content is rendered, directly to the window (3D is always over 2D)
  4. RenderNode: View3D is rendered in-line with the Qt Quick 2D content.  This can however lead to some quirks due to how Qt Quick 2D uses the depth buffer in Qt 5.


2D Views inside of 3D

It could be that you also want to render Qt Quick content inside of a 3D scene.  To do so, anywhere a Texture is accepted as a property value (for example, in the diffuseMap property of the default material), you can use a Texture with its sourceItem property set, instead of just specifying a file in the source property. This way the referenced Qt Quick item is automatically rendered and used as a texture.
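
A minimal sketch of that, reusing the diffuseMap property mentioned above (the Rectangle contents are just an example):

Model {
    source: "#Cube"
    materials: [
        DefaultMaterial {
            diffuseMap: Texture {
                // any 2D Qt Quick item tree can act as a live texture
                sourceItem: Rectangle {
                    width: 256; height: 256
                    color: "orange"
                    Text { anchors.centerIn: parent; text: "Hello from 2D" }
                }
            }
        }
    ]
}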


The diffuse color textures being mapped to the cubes are animated Qt Quick 2D items.

3D QML Components

Because Qt Quick 3D is built on QML, it is possible to create reusable components for 3D as well.  For example, if you create a Car model consisting of several Models, just save it to Car.qml. You can then instantiate multiple instances of Car by simply reusing it, like any other QML type, as the sketch below shows. This is very important because this way 2D and 3D scenes can be created using the same component model, instead of having to deal with different approaches for the 2D and 3D scenes.
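
As a sketch of the idea (the file and mesh names are hypothetical), Car.qml could contain something like:

// Car.qml
import QtQuick 2.12
import QtQuick3D 1.0

Node {
    Model { source: "body.mesh"; materials: [ DefaultMaterial { diffuseColor: "blue" } ] }
    Model { source: "wheels.mesh"; materials: [ DefaultMaterial { diffuseColor: "black" } ] }
}

Any scene can then instantiate it several times, just like a 2D component:

Car { x: -200 }
Car { x: 200 }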

Multiple Views of the Same Scene

Because scene definitions can exist anywhere in a Qt Quick project, it's possible to reference them from multiple View3Ds.  If you had multiple cameras in a scene, you could even render from each one to a different View3D.


4 views of the same Teapot scene. Also changing between 3 Cameras in the Perspective view.

Shadows

Any Light component can specify that it is casting shadows.  When this is enabled, shadows are automatically rendered in the scene.  Depending on what you are doing though, rendering shadows can be quite expensive, so you can fine-tune which Model components cast and receive shadows by setting additional properties on the Model.
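
A rough sketch of what that could look like; note that the property names used here (castsShadow on the light, castsShadows and receivesShadows on the model) are assumptions based on the module's documentation and may differ in the preview:

Light {
    castsShadow: true        // this light now casts shadows (assumed name)
}

Model {
    source: "#Cube"
    castsShadows: false      // per-model fine-tuning (assumed names)
    receivesShadows: true
    materials: [ DefaultMaterial { diffuseColor: "red" } ]
}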

Image Based Lighting

In addition to the standard Light components, it's possible to light your scene by defining an HDRI map. This Texture can be set either for the whole View3D via its SceneEnvironment, or on individual Materials.
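
A hedged sketch of the View3D case (the environment and lightProbe property names are assumptions based on the module's documentation):

View3D {
    environment: SceneEnvironment {
        // an HDR image used to light the whole scene (assumed property names)
        lightProbe: Texture { source: "studio.hdr" }
    }
    // ... scene contents ...
}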

Animations

Animations in Qt Quick 3D use the same animation system as Qt Quick.  You can bind any property to an animator and it will be animated and updated as expected. Using the QtQuickTimeline module it is also possible to use keyframe-based animations.
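
For example, a standard Qt Quick NumberAnimation can drive a 3D node's position directly; here is a small sketch based on the cube example above:

Model {
    source: "#Cube"
    materials: [ DefaultMaterial { diffuseColor: "red" } ]

    // bob the cube up and down using the regular Qt Quick animation types
    NumberAnimation on y {
        from: 0; to: 100
        duration: 2000
        loops: Animation.Infinite
        easing.type: Easing.InOutQuad
    }
}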

Like the component model, this is another important step in reducing the gap between 2D and 3D scenes, as no separate, potentially conflicting animation systems are used here.

Currently there is no support for rigged animations, but that is planned in the future.

How Can You Try it Out?

The intention is to release Qt Quick 3D as a technical preview along with the release of Qt 5.14.  In the meantime it is already possible to use it now, against Qt 5.12 and higher.

To get the code, you just need to build the QtQuick3D module which is located here:

https://git.qt.io/annichol/qtquick3d

What About Tooling?

The goal is that it should be possible via Qt Design Studio to do everything you need to set up a 3D scene. That means being able to visually lay out the scene, import 3D assets like meshes, materials, and textures, and convert those assets into efficient runtime formats used by the engine.


A demonstration of early Qt Design Studio integration for Qt Quick 3D

Importing 3D Scenes to QML Components

Qt Quick 3D can also be used by writing QML code manually. Therefore, we also have some stand-alone utilities for converting assets.  One such tool is the balsam asset conditioning tool.  Right now it is possible to feed this utility an asset from a 3D asset creation tool like Blender, Maya, or 3DS Max, and it will generate a QML component representing the scene, as well as any textures, meshes, and materials it uses.  Currently this tool supports generating scenes from the following formats:

  • FBX
  • Collada (dae)
  • OBJ
  • Blender (blend)
  • GLTF2

To convert the file myTestScene.fbx you would run:

./balsam -o ~/exportDirectory myTestScene.fbx

This would generate a file called MyTestScene.qml together with any assets needed. Then you can just use it like any other Component in your scene:

import QtQuick 2.12
import QtQuick.Window 2.12
import QtQuick3D 1.0

Window {
    width: 1920
    height: 1080
    visible: true
    color: "black"

    Node {
        id: sceneRoot
        Light {
        }
        Camera {
            z: -100
        }
        MyTestScene {
        }
    }

    View3D {
        anchors.fill: parent
        scene: sceneRoot
    }
}

We are working to improve the assets generated by this tool, so expect improvements in the coming months.

Converting Qt 3D Studio Projects

In addition to being able to generate 3D QML components from 3D asset creation tools, we have also created a plugin for our asset import tool to convert existing Qt 3D Studio projects.  If you have used Qt 3D Studio before, you will know it generates projects in XML format to define the scene.  If you give the balsam tool a UIP or UIA project generated by Qt 3D Studio, it will also generate a Qt Quick 3D project based on that.  Note however that since the runtime used by Qt 3D Studio is different from Qt Quick 3D, not everything will be converted. It should nonetheless give a good approximation or starting point for converting an existing project.  We hope to continue improving support for this path to smooth the transition for existing Qt 3D Studio users.


Qt 3D Studio example application ported using Qt Quick 3D’s import tool. (it’s not perfect yet)

What About Qt 3D?

The first question I expect to get is why not just use Qt 3D?  This is the same question we have been exploring the last couple of years.

One natural assumption is that we could just build all of Qt Quick on top of Qt 3D if we want to mix 2D and 3D.  We intended to, and started to do this with the 2.3 release of Qt 3D Studio.  Qt 3D’s powerful API provided a good abstraction for implementing a rendering engine to re-create the behavior expected by Qt Quick and Qt 3D Studio. However, Qt 3D’s architecture makes it difficult to get the performance we needed on entry-level embedded hardware. Qt 3D also comes with a certain overhead from its own limited runtime as well as from being yet another level of abstraction between Qt Quick and the graphics hardware.  In its current form, Qt 3D is not ideal to build on if we want to reach a fully unified graphics story while ensuring continued good support for a wide variety of platforms and devices, ranging from low end to high end.

At the same time, we already had a rendering engine in Qt 3D Studio that did exactly what we needed, and was a good basis for building additional functionality.  This comes with the downside that we no longer have the powerful APIs that come with Qt 3D, but in practice once you start building a runtime on top of Qt 3D, you already end up making decisions about how things should work, leading to a limited ability to customize the framegraph anyway. In the end the most practical decision was to use the existing Qt 3D Studio rendering engine as our base, and build off of that.

What is the Plan Moving Forward?

This release is just a preview of what is to come.  The plan is to provide Qt Quick 3D as a fully supported module along with the Qt 5.15 LTS.  In the meantime we are working on further developing Qt Quick 3D for release as a Tech Preview with Qt 5.14.

For the Qt 5 series we are limited in how deeply we can combine 2D and 3D because of binary compatibility promises.  With the release of Qt 6 we are planning an even deeper integration of Qt Quick 3D into Qt Quick to provide an even smoother experience.

The goal here is that we want to be as efficient as possible when mixing 2D and 3D content, without introducing any additional overhead for users who do not use any 3D content at all.  We will not be doing anything drastic like forcing all Qt Quick apps to go through the new renderer; only the ones that mix 2D and 3D will.

In Qt 6 we will also be using the Qt Rendering Hardware Interface to render Qt Quick (including 3D) scenes which should eliminate many of the current issues we have today with deployment of OpenGL applications (by using DirectX on Windows, Metal on macOS, etc.).

We also want to make it possible for end users to use the C++ Rendering API we have created more generically, without Qt Quick.  The code is there now as private API, but we are waiting until the Qt 6 time-frame (and the RHI porting) before we make the compatibility promises that come with public APIs.

Feedback is Very Welcome!

This is a tech preview, so much of what you see now is subject to change.  For example, the API is a bit rough around the edges now, so we would like to know what we are missing, what doesn’t make sense, what works, and what doesn’t. The best way to provide this feedback is through the Qt Bug Tracker.  Just remember to use the Qt Quick: 3D component when filing your bugs/suggestions.

The post Introducing Qt Quick 3D: A high-level 3D API for Qt Quick appeared first on Qt Blog.

I am in the Netherlands, where I came for the Krita Sprint, and I have made a lot of progress on my Animated Brush for Google Summer of Code. Read More...

August 13, 2019

In Calamares there is a debug window; it shows some panes of information, and one of them is a tree view of some internal data in the application. The data itself isn't stored as a model, though; it is stored in one big QVariantMap. So to display that map as a tree, the code needs to provide a Qt model, so that the regular Qt views can do their thing.

Screenshot of treeview

Each key in the map is a node in the tree to be shown; if the value is a map or a list, then sub-nodes are created for the items in the map or the list, and otherwise it’s a leaf that displays the string associated with the key. In the screenshot you can see the branding key which is a map, and that map contains a bunch of string values.

Historically, the way this map was presented as a model was as follows:

  • A JSON document representing the contents of the map is made,
  • The JSON document is rendered to text,
  • A model is created from the JSON text using dridk’s QJsonModel,
  • That model is displayed.

This struck me as the long way around. Even if there are only a few dozen items overall in the tree, that is a lot of copying and buffer management going on. The code where all this happens, though, is only a few lines – it looks harmless enough.

I decided that I wanted to re-do this bit of code – dropping the third-party code in the process, and so simplifying Calamares a little – by using the data from the QVariant directly, with only a “light weight” amount of extra data. If I was smart, I would consult more closely with Marek Krajewski’s Hands-On High Performance Programming with Qt 5, but .. this was a case of “I feel this is more efficient” more than doing the smart thing.

I give you VariantModel.

This is strongly oriented towards the key-value display of a QVariantMap as a tree, but it could possibly be massaged into another form. It also is pushy in smashing everything into string form. It could probably use data from the map more directly (e.g. pixmaps) and be even more fancy that way.

Most of my software development is pretty “plain”. It is straightforward code. This was one of the rare occasions that I took out pencil and paper and sketched a data structure before coding (or more accurate: I did a bunch of hacking, got nowhere, and realised I’d have to do some thinking before I’d get anywhere – cue tea and chocolate).

What I ended up with was a QVector of quintptrs (since a QModelIndex can use that quintptr as internal data). The length of the vector is equal to the number of nodes in the tree; each node is assigned an index in the tree (I used depth-first traversal along whatever arbitrary yet consistent order Qt gives me the keys, enumerating each node as it is encountered). In the vector, I store the parent index of each node, at the index of the node itself. The root is index 0, and has a special parent.

Tree Representation

The image shows how a tree with nine nodes can be enumerated into a vector, and then how the vector is populated with parents. The root gets index 0, with a special parent. The first child of the root gets index 1, parent 0. The first child of that node gets index 2, parent 1; since it is a leaf node, its sibling gets index 3, parent 1 .. the whole list of nine parents looks like this:

-1, 0, 1, 1, 0, 0, 5, 5, 5

For QModelIndex purposes, this vector of numbers lets us do two things:

  • the number of children of node n is the number of entries in this vector with n as parent (e.g. a simple QVector::count()).
  • given a node n, we can find out its parent node (it's at index n in the vector) and also which row it occupies (in QModelIndex terms) by counting how many other nodes with the same parent occur before it – see the sketch after this list.
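
A minimal sketch of those two lookups (simplified, not the actual VariantModel code), using the nine-node example above:

#include <QVector>

// parents[i] holds the parent index of node i; the root's "special" parent is -1.
static const QVector<quintptr> parents { quintptr(-1), 0, 1, 1, 0, 0, 5, 5, 5 };

// Number of children of node n: count the entries that name n as their parent.
int childCount(quintptr n)
{
    return parents.count(n);
}

// Row of node n under its parent: count earlier nodes with the same parent.
int rowOf(quintptr n)
{
    const quintptr parent = parents.at(int(n));
    int row = 0;
    for (int i = 0; i < int(n); ++i)
        if (parents.at(i) == parent)
            ++row;
    return row;
}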

In order to get the data from the QVariant, we have to walk the tree, which requires a bunch of parent lookups and recursively descending though the tree once the parents are all found.

Changing the underlying map isn't immediately fatal, but changing the number of nodes (especially in intermediate levels) will do very strange things. There is a reload() method to re-build the list and parent indexes if the underlying data changes – in that sense it's not a very robust model. It might make sense to memoize the data as well while walking the tree – again, I need to read more of Marek's work.

I’m kinda pleased with the way it turned out; the consumers of the model have become slightly simpler, even if the actual model code (from QJsonModel to VariantModel) isn’t much smaller. There’s a couple of places in Calamares that might benefit from this model besides the debug window, so it is likely to get some revisions as I use it more.

So, we had a Krita sprint last week, a gathering of contributors of Krita. I’ve been at all sprints since 2015, which was roughly the year I became a Krita contributor. This is in part because I don’t have to go abroad, but also because I tend to do a lot of administrative side things.

This sprint was interesting in that it was an attempt to have as many artists as developers there, if not more. The idea was that where the previous sprint was very much focused on bugfixing and getting new contributors familiar with the code base (we fixed 40 bugs back then), this sprint would be more about investigating workflow issues, figuring out future goals, and general non-technical things like how to help people, how to engage people, and how to make people feel part of the community.

Unfortunately, it seems I am not really built for sprints. I was already somewhat tired when I arrived, and was eventually only able to do half days most of the time because there were just too many people …

So, what did I do this sprint?

Investigate LibSai (Tuesday)

So, Paint Tool Sai is a 2D painting program with a simple interface that was the hottest thing around 2006 or so, because at the time you had PaintShop Pro (a good image editing program, but otherwise…), Photoshop CS2 (you could paint with this, but it was rather clunky), GIMP, OpenCanvas (very weird interface) and a bunch of natural-media-simulating programs (Corel, very buggy, and some others I don't remember). Paint Tool Sai was special in that it had a stabilizer, mirroring/rotating of the viewport, a color-mixing brush, variable-width vector curves, and a couple of cool dockers. Mind you, it only had like 3 filters, but all those other things were HUGE back then. So everyone and their grandmother pirated it, until the author actually made an English version, at which point like 90% of people still pirated it. Then the author proceeded to not update it for like… 8 years?, with Paint Tool Sai 2 being in beta for a small eternity.

The lowdown is that nowadays many people (mostly teens, so I don't really want to judge them too much) are still using Paint Tool Sai 1, pirated, and it's so old it won't work on Windows 10 computers anymore. One of the things that had always bothered me is that no program outside of Sai could handle opening the Sai file format. It was such a popular program, yet no one seemed to have tried?

So, it turns out someone has tried, and even made a library out of it. If you look at the readme, the reason no one besides Wunkolo has tried to support it is that Sai files aren't just encoded, no, they're encrypted. This would be fine if they were video game saves, but a painting program is not a video game, and I was slightly horrified, as it is a technological mechanism put in place to keep people from getting at the artwork they made, rather than just the most efficient way to store it on the computer. So I now feel even more compelled to have Krita be able to open these files, so people can actually access them. I sat down with Boudewijn to figure out how much needs to be done to add it to Krita, and we got some build errors, so we're delaying this for a bit. Made a Phabricator task instead:

T11330 – Implement LibSai for importing PaintTool Sai files

Pressure Calibration Widget (Tuesday)

This was a slightly selfish thing. I had been having trouble with my pen deciding it had a different pressure curve every so often, and adjusting the global tablet curve was, while perfectly possible, getting a bit annoying. I had seen pressure calibration widgets in some Android programs, and I figured that the way I try to figure out my pressure curve (messing with the tablet tester and checking the values) is a little bit too technical for most people, so I decided to gather up all my focus and program a little widget to help with that.

Right now, it asks for a soft stroke, a medium one and a heavy one, and then calculates the desired pressure curve from that. It's a bit fiddly though, and I want to make it less fiddly but still friendly in how it guides you to provide the values it needs.

MR 104 – Initial Prototype Pressure Callibration Widget

HDR master class (Tuesday)

(Not really)

So, the sprint was also the first opportunity to test Krita on exciting new hardware. One of these was Krita on an Android device (more on that later), the other was the big HDR setup provided by Intel. I had already played with it back in January, and I have been playing with making HDR/scene-linear images in Krita since the beginning of Krita's LUT docker support. Thus, when Raghu started painting, I ended up pointing at things to use and explaining some peculiarities (HDR is not just bright, but also wide gamut and linear, so you are really painting with white).

Then Stefan joined in and started asking questions, and I had to start my talk again. Then later that day Dmitry bothered David till he tried it out, and I explained everything again!

Generally, you don't need an HDR setup to paint HDR images, but it does require a good idea of how to use the color management systems, and some wrapping of your head around them. It seems that the latter was a really big hurdle, because artists who had come across as scared of it over IRC were, now that they could see the wide gamut colors, a lot more positive about it.

Animation cycles were shown off as well, but I had run too low on juice to really appreciate them.

Later that evening we went to the Italian restaurant on the corner, which miraculously made ordering à la carte work for a group of 25 people. I ended up translating the whole menu for Tusooaa and Tiar, who repaid me by making fun of my habit of pulling apart the nougat candy I got with the coffee. *shakes fist in a not terribly serious way* Later, during the walk, there were also discussions about burning out and mental health, a rather relevant topic for freelancers.

Open Lucht Museum Arnhem (Wednesday)

I did make a windmill, but… I really should've stayed in Deventer and caught up on sleep. Being Dutch, this was the 5th or 7th time I had seen this particular open-air museum (there's another one, Bokrijk, in Belgium, where I have been just as many times), but I had wanted to see how other people would react to it. In part because it shows all this old Dutch architecture, and in part because these are actual houses from those periods: they get carefully disassembled and then carefully rebuilt, brick by brick, beam by beam, on the museum grounds, which in itself is quite special.

But all I could think while there was 'oh man, I want to sleep'. Next time I just need to stay in Deventer, I guess.

I did get to take a peek at other people’s sketchbooks and see, to my satisfaction, that I am not the only person who writes notes and todo lists in their sketchbook as well.

At dinner, we mostly talked about the weather, and confusion over the lack of spiciness in the otherwise spicy cuisine of Indonesia (which was later revealed to be caused by the waiters not understanding we had wanted a mix of spicy and non-spicy dishes). And also that box-beds are really weird.

Taking Notes (Thursday Afternoon)

Given the bad decision I had made the day before by going to the museum, I decided to be less dumb and tell everyone I’d sleep during the morning.

And then when I joined everyone, it turned out there had been half a meeting during the morning. Hellozee was kind of hinting that I should definitely take the notes for the afternoon (looking at his notes and his own description of that day, taking notes had been a bit too intense for him). Later he ranted at me about the text tool, and I told him: ‘Don’t worry, we know it is clunky as hell. Identifying that isn’t what is necessary to fix it.’ (We have several problems: the first is the actual font stack itself, so we can have shaping for scripts like those for Arabic and Hindi; then there’s the layout, so we can have word wrap and vertical layout for CJK scripts; and only after those are solved can we even start thinking of improving the UI.)

Anyhow, the second part of the meeting was about Instagram, marketing, and just having a bit of fun with getting people to show off the art they made with Krita. The thing is, of course, that if you want it to be a little bit of fun, you need to be very thorough in how you handle it. A contest could end up feeling dog-eat-dog, and we also need to make it really clear how we deal with the usual ethics around artists and ‘working for exposure’. Sara Tepes is the one who wants to start up the Krita account for Instagram, which I am really thankful for. She also began the discussion on this, and I feel a little bad because I pointed out the ethical aspect by making fun of ‘working for exposure’, and I later realized that she was new to the sprint, and maybe that had been a bit too forward.

And then I didn’t get the chance to talk to her afterwards, so I couldn’t apologize. I did get some comments from others that they were glad I brought it up, but still it could’ve been nicer. orz

In the end we came to a compromise that people seemed comfortable with: A cycle of several images, selected from things people explicitly tag to be included on social media, for a short period, and a general page that explains how the whole process works so that there’s no confusion on what and why and how, and that it can just be a fun thing for people to do.

Android Testing (Friday)

I had poked at the Android version before, but last time it did not yet have graphics acceleration support, so it was really slow. This time I could really sit down and test it. It’s definitely gotten a lot better, and I can see myself or other artists using it as an alternative to a sketchbook, something to sit down and doodle on while the computer is for the bigger intensive work.

It was also a little funny: when I showed someone that it was pressure sensitive, all the other artists present walked over one by one to try poking the screen with a stylus. I guess we generally have so much trouble getting pressure to work on desktop devices that it’s a little unbelievable it would just work on a mobile device.

T11355 – General feedback Android version.

That evening, discussions were mostly about language, Photoshop’s magnetic lasso tool crashing, and the fact that Europeans talk about language a lot.

Saturday

On Saturday I read an academic book of some 300 pages, something I had really needed after all this. I felt a lot more clear-headed afterwards. I had attempted to help Boudewijn with bug triaging, which is something we usually do at a sprint, but I just couldn’t concentrate.

We were all too tired to talk much on Saturday. I can only remember eating.

Sunday

On Sunday I spent some time with Boudewijn going through the meeting notes and turning them into the sprint report. Boudewijn then spent five attempts trying to explain the current release schedule to me, and now I have my automated mails set up so people get warned about backporting their fixes and about the upcoming monthly release schedule.

In the evening I read through the Animator’s Survival Kit. We have a bit of an issue where it seems Krita’s animation tools are so intuitive that when it comes to the unintuitive things inherent to big projects themselves (RAM usage, planning, pipeline), people get utterly confused.

We’ve already been doing a lot of things in that area: making it more obvious when you are running out of RAM, making the render dialog a bit better, making onion skins a bit more guiding. But now I am also rewriting the animation page and trying to convey to aspiring animators that they cannot do a one-hour 60 fps film in a single Krita file, and that they will need to do things like planning. The Animator’s Survival Kit is a book that’s largely about planning, a topic that is talked about very little, hence why it is suggested to aspiring animators a lot, and I was reading it through to make sure I wasn’t about to suggest nonsense.

We had, after all the Indians had left, gone to an Indian restaurant. Discussions were about spicy food, language and Europe.

Monday

On Monday I stuck around for the IRC meeting and afterwards went home.

It was lovely to meet everyone individually, and every single conversation I had was lovely, but this is really one of those situations where I need to learn to take more breaks and not be too angry at myself for doing so. I hope to meet everyone again in the future in a less crowded setting, so I can have all the fun of meeting fellow contributors and none of the exhausting parts. The todo list we’ve accumulated is a bit daunting, but hopefully we’ll get through it together.

Last week we had a huge Krita Sprint in Deventer. A detailed report was written by Boudewijn here; I will concentrate on the Animation and Workflow discussion we had on Tuesday, when Boudewijn was away, meeting and managing arriving people. The discussion was centered around Steven and his workflow, but other people joined in during the discussion: Noemie, Scott, Raghavendra and Jouni.

(Eternal) Eraser problem

Steven brought up the point that the current brush options "Eraser Switch Size" and "Eraser Switch Opacity" are buggy, which wound up stirring an old topic again. These options were always considered a workaround for people who need a distinct eraser tool/brush tip, and they were always difficult to maintain.

After a long discussion with a broader circle of people we concluded that the "Ten Brushes Plugin" can be used as an alternative to a separate eraser tool. One should just assign some eraser-behaving preset to the 'E' key using this plugin. So we decided that we need the following steps:

Proposed solution:

  1. Ten Brushes Plugin should have some eraser preset configured by default
  2. This eraser preset should be assigned to "Shift+E" by default. So when people ask about "Eraser Tool" we could just tell them "please use Shift+E".
  3. [BUG] Ten brushes plugin doesn't reset back to a normal brush when the user picks/changes painting color, like normal eraser mode does.
  4. [BUG] Brush slot numbering is done in 1,2,3,...,0 order, which is not obvious. It should be 0,1,2,...,9 instead.
  5. [BUG] It is not possible to set up a shortcut to the brush preset right in the Ten Brushes Plugin itself. The user should go to the settings dialog.

Stabilizer workflow issues

In Krita stabilizer settings are global. That is, they are applied to whatever brush preset you use at the moment. That is very inconvenient, e.g. when you do precise line art. If you switch to a big eraser to fix up the line, you don't need the same stabilization as in the liner.

Proposed solution:

  1. The stabilizer settings stay in the Tool Options docker; we don't move them into the Brush Settings (because sometimes you need them to be global?)
  2. A brush preset should have a checkbox "Save Stabilizer Settings" that will load/save the stabilizer settings when the preset is selected/deselected.
  3. The editing of these (basically) brush-based settings will happen in the tool options.

Questions:

  • I'm not sure if the last point is sane. Technically, we can move the stabilizer settings into the brush preset. And if the user wants to use the same stabilizer settings in different presets, they can just lock the corresponding brush settings (we have a special lock icon for that). So should we move the stabilizer settings into the brush preset editor, or keep them in the tool options?

Cut Brush feature

Sometimes painters need a lot of stamps for often-used objects, e.g. a head or a leg of an animation character. A lot of painters use the brush preset selector as storage for that. That is, if you need a copy of a head on another frame, you just select the preset and click at the proper position. We already have stamp brushes and they work quite well; we just need to streamline the workflow a bit.

Proposed solution:

  1. Add a shortcut for converting the current selection into a brush. It should in particular:
    • create a brush from the current selection, give it a default name and create an icon from the selection itself
    • deselect the current selection, to ensure that the user can paint right after pressing this shortcut
  2. There should be shortcuts to rotate and scale the current brush
  3. There should be a shortcut for switching to the previous/next dab of an animated brush
  4. The brush needs a special outline mode in which it paints not an outline, but a full colorful preview. It should be activated by some modifier (that is, press+hold).
  5. Ideally, if multiple frames are selected, the created brush should become animated. That would allow people to create a "walking brush" or a "raining brush".

Multiframe editing mode

One of the major things Krita's animation tools still lack is a multiframe editing mode, that is, the ability to transform/edit multiple frames at once. We discussed it and ended up with a list of requirements.

Proposed solution:

  1. By default all the editing tools transform the current frame only
  2. The only exception is "Image" operations, which operate on the entire image, e.g. scale, rotate, change color space. These operations work on all existing frames.
  3. If there is more than one frame selected in the timeline, then the operation/tool should be applied to these frames only.
  4. We need a shortcut/action in the frame's (or timeline layer's) context menu: "Select all frames"
  5. Tools/Actions that should support multiframe operations:
    • Brush Tool (low-priority)
    • Move Tool
    • Transform Tool
    • Fill Tool (may be efficiently used on multiple frames with erase-mode-trick)
    • Filters
    • Copy-Paste selection (now we can only copy-paste frames, not selections)
    • Fill with Color/Pattern/Clear

BUGS

There is also a set of unsorted bugs that we found during the discussion:
  1. On Windows, multiple main windows don't have unique identifiers, so they are not distinguishable from OBS.
  2. The animated brush spits out a lot of dabs at the beginning of the stroke
  3. Show in Timeline should be default for all the new layers
  4. Fill Tool is broken with Onion Skins (BUG:405753)
  5. Transform Tool is broken with Onion Skins (BUG:408152)
  6. Move Tool is broken with Onion Skins (BUG:392557)
  7. When copy-pasting frames on the timeline, in-betweens should override the destination (and technically remove everything that was in the destination position). Right now source and destination keyframes are merged, which is not what animators expect.
  8. Changing "End" of animation in "Animation" docker doesn't update timeline's scroll area. You need to create a new layer to update it.
  9. The Delayed Save dialog doesn't show the name of the stroke that delays it (and sometimes not the progress bar either). It used to work, but is now broken.
  10. [WISH] We need "Insert pasted frames", which will not override destination, but just offset it to the right.
  11. [WISH] Filters need better progress reporting
  12. [WISH] Auto-change the background of the Text Edit dialog when the text color is too similar to it.

In conclusion, it was very nice to be at the sprint and to be able to talk to real painters! Face-to-face meetings are really important for getting such detailed lists of the new features we need to implement. If we had held this discussion on Phabricator, we would have spent weeks on it :)

I’ve updated the kde.org/applications site, so KDE now has web pages listing the applications we produce.

In the update this week it’s gained Console apps and Addons.

Some exciting console apps we have include Clazy, kdesrc-build, KDebug Settings (a GUI app but has no menu entry) and KDialog (another GUI app but called from the command line).

This KDialog example takes on a whole new meaning after watching the Chernobyl telly drama.

And for addon projects we have stuff like File Stash, Latte Dock and KDevelop’s addons for PHP and Python.

At KDE we want to be a great home for your project, and this is an important part of that.

 

August 12, 2019

KDevelop 5.4.1 released

Today we provide a stabilization and bugfix release with version 5.4.1. This is a bugfix-only release, which introduces no new features, and as such it is a safe and recommended update for everyone currently using KDevelop 5.4.0.

You can find the updated Linux AppImage as well as the source code archives on our download page.

ChangeLog

kdevelop

  • Fix crash: add missing Q_INTERFACES to OktetaDocument for IDocument. (commit. fixes bug #410820)
  • Shell: do not show bogus error about repo urls on DnD of normal files. (commit)
  • [Grepview] Use the correct icons. (commit)
  • Fix calculation of commit age in annotation side bar for < 1 year. (commit)
  • Appdata: add entry. (commit)
  • Fix registry path inside kdevelop-msvc.bat. (commit)

kdev-python

No user-relevant changes.

kdev-php

  • Update phpfunctions.php to phpdoc revision 347831. (commit)
  • Switch few http URLs to https. (commit)

Officially, on Friday the 2019 Krita Sprint was over. However, most people stayed until Saturday… It’s been a huge sprint! Almost a complete convention, a meeting of developers and artists.

All the sprinters together

The sprinters, artists, contributors and artists/contributors, all together. Photo taken by Krzyś.

Monday

On Monday, people started arriving. It was pretty great to meet again so many people who hadn’t seen each other for a long time, and to see so many people who hadn’t been to any Krita sprint before. We had rigged an HDR test system in the sprint area, the 12th-century cellar underneath the Krita maintainer’s house in the town centre of Deventer (probably for the last time, since it’s getting too small). Wolthera was kept busy all week giving an introduction to painting in HDR: she is, after all, the first person in the world to have actually done creative painting in HDR.

There was other hardware to test as well, like the Android tablet with Sharaf Zaman’s Android port of Krita. Sharaf couldn’t come to the sprint; his visa was denied, probably because the Dutch authorities were informed beforehand of the intention of the Indian government to cancel Kashmir’s special status. With a blanket shutdown of internet, mobile telephony and landline telephony, it was impossible to be in touch with Sharaf. We did some thorough testing, and we hope contact with Sharaf will be restored soon.

Tuesday

Since we still weren’t complete, we postponed the meeting until Thursday, so this day was a day for hacking and discussions. We had many more artists joining us than previously, so the discussions were lively and the meetings were good — but there were more bugs reported than bugs fixed.

Wednesday

On Wednesday, we went on an expedition to the Openlucht Museum. With twenty-three people attending, it was more efficient to simply rent a bus for the lot of us. The idea behind the outing was to make sure people who had never been at a sprint and people who had been at sprints before would mingle and get to know each other. That worked fine!

We had a somewhat disappointing guided tour. I had asked for a solid introduction to the social history of the Netherlands, but the guide still felt he needed to make endless inane and borderline sexist jokes that all fell very flat. Oh well, the buildings were great and the people inside the buildings gave quite interesting information, with the rosmolen being a high point:


From David Revoy’s sketchbook

And as you can see, it gave the artists and developer/artists amongst us a chance to do some analog painting (although at least one sprint attendee tried to paint with Krita on a Surface tablet, unfortunately cut short by battery problems):

We followed up with dinner at an Indonesian restaurant, and went home tired but satisfied. There was still hacking and painting going on, though, until midnight.

Thursday

Today we really had the core of our sprint. Some sprints are for coding, but this sprint was for bringing together people and gathering ideas. On Thursday, we discussed the future of Krita in quite a bit of detail.

In 2018/2019 the focus was fully on fixing bugs. Compared to this time last year, there are now two more full-time developers working on fixing bugs and improving stability, and both Boudewijn and Dmitry have dedicated all their coding time to fixing bugs as well. Weirdly enough, that doesn’t seem to make much of a dent in our number of open bugs:

Bugs, open, new and closed, since April.

One reason is that we manage to introduce too many regressions. That’s partly explained by our new hackers needing to learn the codebase, partly by an enormous increase in our user base (we’re on track to break 2,500,000 downloads a year in 2019), but mostly by our changes not getting enough testing before releasing. So, taking things out of the order we discussed them at the meeting, let’s report on our Bugs and Stability discussion first.

Bugs and Stability

As David Revoy reports, the 4.2 releases don’t feel as stable as the 4.1 releases. As noted above, this is not unexpected, since we have two new full-time developers working on Krita who aren’t that deep into the codebase yet. Another reason we have so much trouble with the 4.2 releases is that we updated to Qt 5.12, which seems to come with many regressions we either have to fix in Qt (and we do submit patches upstream, and those are getting accepted) or work around in Krita itself. On the other hand, we are merging bug fixes into our release branch until the last minute before the release, so those fixes get barely any testing: the lack of testing isn’t something we can blame on our users, it’s to a large extent our own fault.

Yet Raghukamath and Deevad noted that they both don’t actually test master or the stable branch from day to day anymore because they are too busy actually using Krita, and the same goes for the other artists present. It’s clear that the developers cannot do regression testing, that our extensive set of unittests (although most are more like integration tests, technically) doesn’t catch the regressions — we have to find a better way.

Coincidentally (or not…) Anna (who was not present) had started a discussion about this on Phabricator some time ago: T11021: Enhancements to quality assurance. There are many parts to that discussion, but one thing we concluded, based on discussions during the sprint, is that we will try the following:

  • We will release once a month, at the end of the month (we already try to do that…)
  • We will merge bug fixes to the stable branch until the middle of the month: the merge window thus is two weeks, while master is always open to bug fixes.
  • We will publish an article on krita.org telling our users what landed in the next stable version. That article will show up in all our users’ welcome screen news widget, right inside Krita. There will be links to download a portable version of Krita for every OS.
  • We will add a link to a survey (not bugzilla) in the welcome screen of those builds, and in the survey ask people for the results of testing the changes noted in the release article.
  • And then, two weeks later, we will release the next stable version of Krita, with only fixes merged during the test period for noted regressions.

Slightly related: we also want to do a monthly update article on changes for master, but without the survey mechanism. However, that would make two development updates a month, which might be a bit much to digest, so we’re starting slowly, with the stable release system.

Development focus for 2019/2020

In October we will release a hopefully super-stable Krita 4.3, with a bunch of new features as well, but still focused on stability. Boudewijn is still working on fixing the resource handling system, but that is going really slowly, and is really hard. It’s also hard for the maintainer of the whole project to find time to work on big coding projects, and it’s getting harder the more management-like tasks there are.

Everyone in the meeting agreed that the text tool still needs much more work, maybe even another rewrite to get rid of the dependency on Qt’s internal rich text editor: the conversion between QTextDocument and SVG is lossy, and gives problems. We were all aware of the missing bits and the problems and bugs, so we didn’t need to discuss this in detail. So one focus is:

  • Text Tool: it is still not usable for the primary goal, namely comic book text balloons. We need to make it suitable. We know what to do, we just don’t have the time while fighting incoming bugs all the time.

So it’s clear that we still need to work on…

  • Stability and performance. During the discussion some particular issues were noted:
    • Raghukamath reported that the move tool has become very slow. This definitely is a regression.
    • One year ago, at the 2018 Krita Sprint, we made a list of Unfinished Stuff, ranging from missing features after the vector rewrite to unimplemented layer style options, a half-implemented animated transform mask, and missing undo/redo support in the scripting layer. Most of that is still relevant. See the original sprint report.
    • Dmitry noted he had made some experiments showing that we could make our brushes much faster by using new versions of AVX, but this would only help people with newer laptops. Boudewijn wondered whether the brush engines are the true bottleneck — if the improvement only shows in benchmarks, users won’t notice much difference. We might want to do another round of measuring using Intel’s VTune, if we can get another license for a year.
    • Resource handling is still being rewritten. That means that when Steven noted that he cannot update a workspace, Boudewijn replied that this is part of the bigger problem with the current resource system. Boud is rewriting that, and has been working on it for two years now, leading to a huge merge request — big enough to almost bring gitlab to its knees. The rewrite might be too big to actually finish, and it’s hard to distribute the work over multiple developers.

Since we had so many artists around whose views we had never before been able to canvass, we decided that digging into workflow issues might be the best thing to do: it would make a good theme for the next fundraiser, too. So:

    • Workflow issues
      • Stefan brought in multiple issues with animation, like editing on multiple frames at the same time or finally getting the clones feature done. Most issues are already in bugzilla as wishes, but unfortunately, we don’t have someone to work on animation full-time at the moment. Some progress was made and demonstrated during the sprint, though!
      • In the nineties and oughties, all desktop applications looked the same and followed more or less the same guidelines. That made sure that users knew they could investigate the contents of the application menus; it would be the first place to look for something relevant for their task at hand. That barely seems to happen anymore. So, discoverability of features in Krita is a problem that is getting worse. We made a list of things that were hard to find:
      • Our dockers are overcrowded and the contents hard to find; a docker hidden behind another docker isn’t going to be discovered by many users.
      • If there are tabs in a single docker, like with the transform tool, and some of those tabs aren’t visible because there are too many of them, like the liquify transform functionality, that functionality might as well not exist, it won’t be found.
      • Deevad suggested making each transform tool option into a separate tool; however, as Raghukamath noted, the objection to that is that we don’t want six new tools crowding the toolbox, nor having the weird pop-out tool selection buttons Photoshop has. This is a perennial discussion, and it’s next to impossible to come to a conclusion here.
    • Related to that is the question of the tool options docker. Right now, users can choose between putting the tool options in a docker, or in a popup button on the toolbar. Another option would be to make a toolbar out of the tool options, with some pop-ups, like Corel Painter and Photoshop did. But none of these options is a real solution. We might want to do a survey, but from experience, users want to have at least the overview, tool options, color selector, layer docker and brush preset docker visible, as well as some others, and on most displays there just isn’t the space for that…
    • Actually figuring out which dockers we have is pretty hard, too. Most people don’t seem to find the dockers submenu in the settings menu, or in the right-click menu on the docker titlebars, and if they find it, the list is too intimidating.
    • So, one thing we decided to do is create a tool to search through Krita’s functionality. Other applications are apparently facing the same problem, and this is an easy and cheap solution. Of course, people started asking for this tool to also search e.g. layer names. That led to the thought that this is starting to look a bit like QtCreator’s locator widget…
    • A workflow improvement Steven suggested was to autosave the current session (that is, open windows, views, images) and restore it on restarting Krita.
    • Mariya said that the combination of clone layers and transform masks doesn’t work as well for her as Photoshop’s smart objects. After some discussion, it seems that we might want to rethink showing masks in the hierarchy if there’s only one mask of a certain type: it’s easier to show them as a toggle in the layer’s row in the layerbox.
    • Sara asked whether Krita has a screen recorder. This should record only the canvas, stroke by stroke, and export to PNG or JPG. The old screen recorder could do this, but it was very broken and was removed, never having been intentionally released. This would be a biggish project, GSoC-sized, and needs to build on first finishing the porting to the generic strokes system, and then extending that system with recording. Another option would be to add a timer to save incremental versions, plus an option to export incrementally in addition to saving incrementally.
    • Emmett and Eoin discussed working with painting assistants: it would be interesting to make a hierarchy with grouped assistants. The conclusion was that we might want to show the assistants in the layer docker on top of the layers; other suggestions were a separate docker or putting the treeview in the tool option widget. If the first solution is chosen, it would be useful to also show reference images and maybe even guides in the layer^Wimage structure docker.

Some of the workflow issues mentioned already sound like new features, and then there were a number of discussions about what really would be new features:

    • It would help a lot with user support if a statusbar indicator would show whether the stabilizer, canvas acceleration, instant preview (and others) are on or off. A screenshot would then immediately answer the questions we most often ask users who need support.
    • After so many months of bug fixes, Dmitry really wants to work on one or two new brush engines: the first has thin bristles that could each have their own color. Sara mentions she really misses a brush like this. The other engine would be more like a calligraphy tool. Dmitry estimates needing two months per engine, which is quite a bit of time.
    • Eoin wants to have a tool, or a brush engine, or at least, something that would make it very easy to paste images or designs and transform them before pasting the next one. The images should come from an ordered or random collection. Currently, people use pipe brushes for that, but that is not convenient. A new tool is suggested.
    • Steven notes that it would be much more convenient to create a pipe brush from an animation on the timeline than the current system based on layers. This is true – we just never thought of it.
    • Mariya really wants to have a font selector where she can mark a number of fonts as her favourites, and created a wish bug for it. It turns out that Calligra doesn’t have a widget like that, and the standard Qt font combobox doesn’t support it either, nor does Scribus: it might be that such a thing has never been written in Qt.

Another thing that was discussed briefly was telemetry (we tried that; the project failed).

Marketing and Outreach

Our presence on Twitter, Mastodon, Tumblr, DeviantArt and Reddit is fine. On Facebook, non-official groups are more used than Krita’s own account (which is because Boud is the last maintainer standing, and he cannot stand Facebook). YouTube needs improvement, and we are absent on Instagram.

Instagram

Sara Tepes volunteers to handle Krita’s Instagram account (which we don’t have yet). Sara also wants to run “competitions” on Instagram, with the prize being a number of selected images shown on either Krita’s splash screen or Krita’s welcome screen.

The splash screen is our main branding location, so we shouldn’t put images in there other than the holiday jokes.

We could redesign the welcome screen to include an image location; it needs a redesign in any case because it’s too drab right now. We wanted it to be not in-your-face, but right now it’s a bit too much the opposite.

Once we have the image location, getting and selecting images can be a problem, as it was for the art book and the main release announcements. Sara notes that Instagram gives easy tools to select images from a larger set; other platforms are not so good.

In any case, selecting images will be quite a bit of work, and we do need to make sure we’re not playing favorites or forgetting where we come from: free software, open culture.

Conclusion: we are going to try to run the competition on all social networks for which we have a maintainer (Instagram, Twitter, Mastodon, Reddit). We can always extend this later to other places. Each maintainer can propose two images + attribution info + links, which will be shown in rotation for a month.

The system for doing this should be ready for the 4.3 release in October.

Note: we have to make a page with a very clear text explaining the rules: we don’t take ownership of the images, the images will be shown in Krita, there will be no licensing requirements for the images, certain kinds of images cannot be used.

Note 2: Scott should ask Ben Cooksley how we can get the welcome screen news widget traffic information on a regular basis.

YouTube

We have already started improving our presence on YouTube. We feature existing Krita-related channels, and we are working with Ramon to provide interesting videos. We could do more, but let’s give Ramon a chance to build up some momentum first.

Development Fund and Fundraiser

Financially, Krita is doing okay. We do get between 2000 and 2700 euros a month in donations: that translates to one full time developer (yes, we’re not getting rich from working on Krita, these are not commercial fees). Windows Store + Steam bring in enough for three to four extra full-time developers. It would be good to become less dependent on the Windows Store, though, since Microsoft is getting more and more aggressive in promoting getting applications from the Windows Store.

Enter the Krita Development Fund. Like in most things, we try to look at what Blender is doing, and then try to find out whether that works for us. Often it does. We already have a notional Development Fund, but it’s basically a monthly PayPal subscription or recurring bank transfer. We don’t have any feedback or extras for the subscribers, and the subscribers have no way to manage their subscription, or to reach us other than in the usual way. We tried to implement a CiviCRM system for this, but that was way too complex for us to manage.

We need to reboot our Development Fund and migrate existing subscribers to the new fund. A basic list of requirements is:

      • A place on the website where people can subscribe and unsubscribe
      • A place where the names of people who want that are shown
      • A way to tell people what we are doing with the money and what we will be doing
      • Make sure companies and sponsors will also be able to join

And no doubt there will be other considerations and requirements. We should check Blender’s dev fund website, of course. We created a Phabricator task to track this, and it’s something we really want help with!

What wasn’t discussed

Interestingly, the new gitlab workflow seems to work for everyone. Gitlab’s UI is even less predictable and discoverable than Krita’s, but we didn’t need to discuss anything, people can work with it without much trouble.

Steam wasn’t much of a discussion item either: Windows is doing fine on Steam, our macOS version of Krita still has too many niggles to make it worthwhile to put on Steam (or the Apple Store, even if that were possible license-wise), and the Linux market share is still too small to make it worth the time investment. Still, Emmett promised to contact Valve to see how we can get the AppImage into Steam. At first glance, the problem seems to be the version of libc required, which might mean we’ll have to figure out a way to build Qt 5.12 on older versions of Ubuntu or CentOS. But let’s wait and see, first.

Tangentially, we did discuss how to get more people involved in user support; Agata already has plans for giving people who are already trying to help others in places like Reddit and the forum more recognition. It was late, and the discussion degenerated into hilarity pretty soon — still, this is something to work on, since the core development team just doesn’t have the capacity anymore to help with every new Krita user’s teething problems.

Tasks

To create: a task for rethinking what goes into dockers, and what goes somewhere else.

Friday

Friday was the real hacking day. Some people already started leaving, but many people were staying around and started hacking on the issues identified during the meeting, like the action search widget. Bugs were being fixed, regressions identified and blogs posted. And even later on, on Saturday and Sunday, there was still hacking, like on the detached canvas feature.

Last week I finished writing all the new examples for Rocs, each with a little description in a comment at the beginning of the code. The following examples were implemented:

  • Breadth First Search;
  • Depth First Search;
  • Topological Sorting Algorithm;
  • Kruskal Algorithm;
  • Prim Algorithm;
  • Dijkstra Algorithm;
  • Bellman-Ford Algorithm;
  • Floyd-Warshall Algorithm;
  • Hopcroft-Karp Bipartite Matching Algorithm.

It is good to note that while the Prim algorithm and BFS were already in Rocs, they were broken and could not be run. The following image shows an example of a simple description of an algorithm:

Description

Regarding step-by-step execution, I am considering our possibilities. My first idea was to take a look at the debugger for the QScriptEngine class, the QScriptEngineDebugger class. An instance of this class can be attached to our script engine, and it provides the programmer with an interface with all the necessary tools.

Although useful, I personally think Rocs doesn't need all these tools (though they could be provided separately). There are three ways to stop the code execution using this debugger:

  • By an unhandled exception inside the JavaScript code;
  • By a call to the debugger instruction, which automatically invokes the debugger interface;
  • By a breakpoint, which can be put on any line of the JavaScript code.

The first one is not really useful for us, as it halts the code execution. The second and the third can be really useful coupled with a Continue command. But the second invokes the full debugger interface, which we don't really want.

Debugger

So, by using the third one, we can stop the execution at any line of the JavaScript code and create a step button with the Continue command to continue executing the code. The only problem is how to add the breakpoints, as there is no direct function to add them; usually the programmer has to use the ConsoleWidget interface or the BreakpointsWidget to do this. The following image shows the Continue button, which is already working:

Continue

But the challenge of adding the breakpoints still remains. One of my ideas is to modify the code editor to accept a click on the line number bar, which triggers a signal to add/remove a breakpoint on that line. This seems like a clean alternative to me. But for that I have to check whether KTextEditor has this type of signal, and I need to find a way to add breakpoints to the code programmatically.
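For reference, a minimal sketch of the attachment part (assumed wiring based on the Qt Script Tools API, not the actual Rocs code): the debugger is attached without showing its standard window, and its Continue action is borrowed for our own UI.

#include <QAction>
#include <QApplication>
#include <QScriptEngine>
#include <QScriptEngineDebugger>

int main(int argc, char **argv)
{
    QApplication app(argc, argv);

    QScriptEngine engine;
    QScriptEngineDebugger debugger;

    // Don't pop up the full standard debugger window; we only want to
    // borrow individual actions from it.
    debugger.setAutoShowStandardWindow(false);
    debugger.attachTo(&engine);

    // The Continue action resumes a script suspended by a breakpoint or a
    // "debugger;" statement; it can be placed on a toolbar as a step button.
    QAction *continueAction = debugger.action(QScriptEngineDebugger::ContinueAction);
    continueAction->setText(QObject::tr("Step"));

    return 0;
}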


August 11, 2019

Some considerable time ago I wrote up instructions on how to set up a FreeBSD machine with the latest KDE Plasma Desktop. Those instructions, while fairly short (set up X, install the KDE meta-port, .. and that’s it), are a bit fiddly.

So – prompted slightly by a Twitter exchange recently – I’ve started a mini-sub-project to script the installation of a desktop environment and the bits needed to support it. To give it at least a modicum of UI, dialog(1) is used to ask for an environment to install and a display manager.

The tricky bit – pointed out to me after I started – is hardware support, although a best effort is better than having nothing, I think.

In any case, in a VBox guest it’s now down to running a single script and picking Plasma and SDDM to get a usable system for me. Other combinations have not been tested, nor has system-hardware-setup. I’ll probably maintain it for a while, and if I have time and energy it’ll be tried with nVidia (those work quite well on FreeBSD) and AMD (not so much, in my experience) graphics cards when I shuffle some machines around.

Here is the script in my GitHub repository with notes-for-myself.

Installing FreeBSD is not like installing a Linux distribution. A Linux distro hands you something that will provide an operating system and more, generally with a selection of pre-installed packages and configurations. Take a look at ArcoLinux, which offers 14(?) different distribution ISOs depending on your preference for installed software.

FreeBSD doesn’t give you that – you end up with a text-mode prompt (there are FreeBSD distributions, though, that do some extra bits, but those are outside my normal field of view). So it’s not really expected that you have a complete desktop experience post-installation (nor, for that matter, a complete GitLab server, or postfix mail relay, or any of the other specialised purposes for which you can use FreeBSD).

I could vaguely imagine bunging this into bsdinstall as a post-installation option, but that’s certainly not my call. Not to mention, I think there’s an effort ongoing to update the FreeBSD installer anyway, led by Devin Teske.

So to sum up: to install a FreeBSD machine with KDE Plasma, download script and run it; other desktop environments might work as well.

There is one week left of the call for papers for the foss-north IoT and Security Day. The conference takes place on October 21 at WTC in Stockholm.

We’ve already confirmed three awesome speakers and will fill the day with more content in the weeks following the closing of the CfP, so make sure to get your submission in.

Patricia Aas

The first confirmed speaker is Patricia Aas, who will speak about election security – how to ensure transparency and reliability in the election system so that it can be trusted by all, including a less technologically versed public.

Also, this is the first stage in our test of the new foss-north conference administration infrastructure, and it seems to have worked this far :-). Big thanks goes to Magnus for helping out.

This week in KDE’s Usability & Productivity initiative is massive, and I want to start by announcing a big feature: GTK3 apps with client-side decorations and headerbars using the Breeze GTK theme now respect the active KDE color scheme!

Pretty cool, huh!? This feature was written by Carson Black, our new Breeze GTK theme maintainer, and will be available in Plasma 5.17. Thanks Carson!

As you can see, the Gedit window still doesn’t display shadows, at least not on X11. Shadows are displayed on Wayland, but on X11 it’s a tricky problem to solve. However, I will say that anything’s possible!

Beyond that, it’s also been a humongously enormous week for a plethora of other things too:

New Features

Bugfixes & Performance Improvements

User Interface Improvements

If you’re getting the sense that KDE’s momentum is accelerating, you’re right. More and more new people are appearing all the time, and I am constantly blown away by their passion and technical abilities. We are truly blessed by… you! This couldn’t happen without the KDE community–both our contributors for making this warp factor 9 level of progress possible, and our users for providing feedback, encouragement, and being the very reason for the project to exist. And of course, the overlap between the two allows for good channels of communication to make sure we’re on the right track.

Many of those users will go on to become contributors, just like I did once. In fact, next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If you find KDE software useful, consider making a tax-deductible donation to the KDE e.V. foundation.

August 10, 2019

As part of migrating this blog from a defunct hosting company and a WordPress installation to a non-defunct hosting company and Jekyll, I’m re-visiting a lot of old posts. Assuming the RSS generator is OK, that won’t bother the feed aggregators (the KDE planet in particular). The archives are slowly being filled in, and one entry from 2004 struck me:

Ok, my new machine is installed (an amd64 running FreeBSD -CURRENT, which puts me firmly at the forefront of things-unstable).

Not much has changed in 15 years, except maybe the “unstable” part. Oh, and I tend to run -STABLE now, because that’s more convenient for packaging.

Something else I spotted: in 2004 I was working on KPilot as a hobby project (alongside my PhD and whatever else was paying the bills then), so there’s lots of links to the old site.

Problem is, I let the domain registration expire long ago, when Palm, Inc., the Palm Pilot, and KDE 4 ceased to be going concerns. So that domain has been hijacked, or squatted, or whatever, with techno bla-bla-bla and recognizable scraps of text from the ancient website. Presumably downloading anything from there that pretends to be KPilot will saddle you with plenty of malware.

In any case it’s a reminder that links from (very) old blog posts are not particularly to be trusted. Since the archives are being updated (from old WordPress backups, and from the Internet Archive) I’ll try to fix links or point them somewhere harmless if I spot something, but no guarantees.

The sprint officially ended yesterday and most of the participants have already left, except me, Ivan, Wolthera and Jouni. Well, I would also have left as planned, but I had read my flight timings wrong: the flight actually leaves three hours after what I thought was the departure time.

The default configuration for the Kate LSP client now supports more than just C/C++ and Python out of the box.

In addition to the recently added Rust support we now support Go and LaTeX/BibTeX, too.

Configuration

The default supported servers are configured via a JSON settings file we embed in our plugin resources.

Currently this looks like:

{
    "servers": {
        "bibtex": {
            "use": "latex"
        },
        "c": {
            "command": ["clangd", "-log=error", "--background-index"],
            "commandDebug": ["clangd", "-log=verbose", "--background-index"],
            "url": "https://clang.llvm.org/extra/clangd/"
        },
        "cpp": {
            "use": "c"
        },
        "latex": {
            "command": ["texlab"],
            "url": "https://texlab.netlify.com/"
        },
        "go": {
            "command": ["go-langserver"],
            "commandDebug": ["go-langserver", "-trace"],
            "url": "https://github.com/sourcegraph/go-langserver"
        },
        "python": {
            "command": ["python3", "-m", "pyls", "--check-parent-process"],
            "url": "https://github.com/palantir/python-language-server"
        },
        "rust": {
            "command": ["rls"],
            "rootIndicationFileNames": ["Cargo.lock", "Cargo.toml"],
            "url": "https://github.com/rust-lang/rls"
        }
    }
}

The file is located at kate.git/addons/lspclient/settings.json. Merge requests to add additional languages are welcome.

I assume we still need to improve what can be specified in the configuration.

Currently supported configuration keys

At the moment, the following keys inside the per-language object are supported:

use

Tell the LSP client to use the LSP server configured for the given other language for this one, too. Useful to dispatch stuff to a server supporting multiple languages, like clangd for C and C++.
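One conceivable way to resolve such entries (an assumed sketch, not necessarily how the plugin implements it internally) is to follow the "use" chain until an entry with its own command is reached:

#include <QJsonObject>
#include <QString>

// Follow "use" references in the "servers" object, e.g. "bibtex" -> "latex"
// ends up at the texlab entry. The depth guard protects against accidental
// cycles in the settings file.
QJsonObject resolveServerEntry(const QJsonObject &servers, QString languageId)
{
    QJsonObject entry = servers.value(languageId).toObject();
    for (int depth = 0; entry.contains(QStringLiteral("use")) && depth < 10; ++depth) {
        languageId = entry.value(QStringLiteral("use")).toString();
        entry = servers.value(languageId).toObject();
    }
    return entry;
}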

command

Command line to start the LSP server.

commandDebug

Command line to start the LSP server in debug mode. This is used by Kate if the LSPCLIENT_DEBUG environment variable is set to 1. If this variable is set, the LSP client itself will output debug information on stdout/stderr, and the commandDebug command line should try to trigger the same for the LSP server, e.g. by using -log=verbose for clangd.

rootIndicationFileNames

For the Rust rls LSP server we added the possibility to specify a list of file names that will indicate which folder is the root for the language server. Our client will search upwards for the given file names based on the file path of the document you edit. For Rust that means we first try to locate some Cargo.lock, if that failed, we do the same for Cargo.toml.
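Expressed in Qt terms, the upward search could look roughly like this (a sketch of the behavior described above, not the plugin's exact code):

#include <QDir>
#include <QFileInfo>
#include <QString>
#include <QStringList>

// Walk from the edited document's directory towards the filesystem root,
// trying each marker in order: only if e.g. Cargo.lock is found nowhere on
// the path do we repeat the walk looking for Cargo.toml.
QString findProjectRoot(const QString &documentPath, const QStringList &markers)
{
    for (const QString &marker : markers) {
        QDir dir = QFileInfo(documentPath).absoluteDir();
        while (true) {
            if (dir.exists(marker))
                return dir.absolutePath();
            if (!dir.cdUp())
                break; // reached the filesystem root without a match
        }
    }
    return QString(); // no marker found; fall back to some default
}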

url

URL of the home page of the LSP server implementation. At the moment it is not used internally; later it should be shown in the UI to give people hints about where to find further documentation for the matching LSP server (and how to install it).

Current State

For C/C++ with clangd, the experience is already good enough for day-to-day work. What is possible can be seen in one of my previous posts, video included. I and some colleagues use the master version of Kate at work for daily coding. Sometimes Kate confuses clangd when saving files, but otherwise no larger hiccups occur.

For Rust with rls, many things work, too. We now discover the root directory for it more easily thanks to the hints to look for the Cargo files. We also adapted the client to support the Hover message type rls emits.

For the other languages: besides some initial experiments confirming that the servers start and you get some completion and so on, not much work went into them. Help is welcome to improve their configuration and our client code to get a better experience.

Just give Kate from the master branch a test drive; here is our build-it how-to. We are open to feedback on kwrite-devel@kde.org or directly via patches on invent.kde.org.

By the way, if you think our how-to or other stuff on this website is lacking, patches are welcome for that, too! The complete page is available via our GitHub instance; to try changes locally, see our README.md.

August 09, 2019

If you want an Akademy 2019 t-shirt you have until Monday 12th Aug at 1100CEST (i.e. in 2 days and a bit) to order it.

Head over to https://akademy.kde.org/2019/akademy-2019-t-shirt and get yourself one of the exclusive t-shirts with Jen's awesome design :)

Previously: 1st GSoC post, 2nd GSoC post, 3rd GSoC post, 4th GSoC post. In this GSoC entry I’ll mention two things implemented since the last blog post: syncing of scaling and NumLock settings. Aside from that, I’ll reflect on syncing of locally-installed files. Even though I thought scaling would require changes on the SDDM side…

So far, most of my blog postings that appeared on Planet KDE were release announcements for KBibTeX. Still, I had always planned to write more about what happens on the development side of KBibTeX. Well, here comes my first try to shed light on KBibTeX's internal workings…

Active development of KBibTeX happens in its master branch. There are other branches created from time to time, mostly for bug fixing, i.e. allowing bug reporters to compile and test a bug fix before the change is merged into master or a release branch. Speaking of release branches, those get forked from master every one to three years. At the time of writing, the most recent release branch is kbibtex/0.9. Actual releases, including alpha or beta releases, are tagged on those release branches.

KBibTeX is developed on Linux; personally I use the master branch on Gentoo Linux and Arch Linux. KBibTeX compiles and runs on Windows with the help of Craft (master better than kbibtex/0.9). It is on my mental TODO list to configure a free Windows-based continuous integration service to build binary packages and installers for Windows; suggestions and support are welcome. Craft supports macOS, too, to some extent, so I gave KBibTeX a shot on this operating system (I happen to have access to an old Mac from time to time). Running Craft and installing packages caused some trouble, as macOS is the least tested platform for Craft. Also, it seems to be more difficult to find documentation on how to solve compilation or linking problems on macOS than it is for Windows (let alone Linux). However, with the help of the residents in #kde-craft and related IRC channels, I was eventually able to start compiling KBibTeX on macOS (big thanks!).

The main issue that came up when crafting KBibTeX on macOS was the problem of linking against ICU (International Components for Unicode). This library is shipped on macOS as it is used in many other projects, but seemingly even if you install Xcode, you don't get any headers or other development files. Installing a different ICU version via Craft doesn't seem to work either. However, I am no macOS expert, so I may have gotten the details wrong …

Discussing in Craft's IRC channel how to get KBibTeX installed on macOS despite its dependency on ICU, I got asked why KBibTeX needs to use ICU in the first place, given that Qt ships QTextCodec, which covers most text encoding needs. My particular need is to transliterate a given Unicode text like ‘äåツ’ into a 7-bit ASCII representation. This is used, among other things, to rewrite identifiers for BibTeX entries from whatever the user wrote or an imported BibTeX file contained into an as-close-as-possible 7-bit ASCII representation (which is usually the lowest common denominator supported on all systems), in order to reduce issues if the file is fed into an ancient bibtex or shared with people using a different encoding or keyboard layout.

Such a transliteration is also useful in other scenarios, such as when filenames are supposed to be based on a person's name but still must be transcribed into ASCII to be accessible on any filesystem and for any user irrespective of keyboard layout. For example, if a filename needs to bear some resemblance to the Scandinavian name ‘Ångström’, the name's transliteration could be ‘Angstrom’, and thus a file could be named Angstrom.txt.

So, if ICU is not available, what are the alternatives? Before I adopted ICU for the transliteration task, I had used iconv. Now, my first plan to avoid hard-depending on ICU was to test for both ICU and iconv during the configuration phase (i.e. when cmake runs), use ICU if available, and fall back to iconv if no ICU was available. Depending on the chosen alternative, paths and defines (to enable or disable specific code via #ifdefs) were set.
See commit 2726f14ee9afd525c4b4998c2497ca34d30d4d9f for the implementation.

However, using iconv has some disadvantages which motivated my original move to ICU:

  1. There are different iconv implementations out there and not all support transliteration.
  2. The result of a transliteration may depend on the current locale. For example, ‘ä’ may get transliterated to either ‘a’ or ‘ae’.
  3. Typical iconv implementations know fewer Unicode symbols than ICU. Results are acceptable for European or Latin-based scripts, but for everything else you far too often get ‘?’ back.

Is there a third option? Actually, yes. Qt's Unicode support covers only the first 2^16 symbols anyway, so it is technically feasible to maintain a mapping from Unicode character (essentially a number between 0 and 65535) to a short ASCII string like AE for ‘Æ’ (0x00C6). This mapping can be built offline with the help of a small program that does link against ICU, queries this library for a transliteration of every Unicode code point from 0 to 65535, and prints out a C/C++ source code fragment containing the mapping (almost like in the good old days with X PixMaps). This source code fragment can be included into KBibTeX to enable transliteration without requiring or depending on either ICU or iconv on the machines where KBibTeX is compiled or run. Disadvantages include the need to drag this mapping along, as well as to update it from time to time in order to keep up with updates in ICU's own transliteration mappings.
See commit 82e15e3e2856317bde0471836143e6971ef260a9 where the mapping got introduced as the third option.
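A rough sketch of what such an offline generator could look like (my reconstruction of the approach, not the exact tool behind that commit; a real generator would additionally deduplicate the strings and emit the packed tables described below):

#include <cstdio>
#include <string>
#include <unicode/translit.h>
#include <unicode/unistr.h>

int main()
{
    UErrorCode status = U_ZERO_ERROR;
    // "Any-Latin; Latin-ASCII" first romanizes the input, then reduces the
    // result to plain ASCII where ICU knows a mapping.
    icu::Transliterator *trans = icu::Transliterator::createInstance(
        "Any-Latin; Latin-ASCII", UTRANS_FORWARD, status);
    if (U_FAILURE(status) || trans == nullptr)
        return 1;

    for (UChar32 c = 0; c <= 0xFFFF; ++c) {
        icu::UnicodeString s(c);
        trans->transliterate(s); // in-place transliteration

        std::string ascii;
        s.toUTF8String(ascii);
        // Print the raw mapping; a real generator would instead append the
        // string to unidecode_text (deduplicated) and emit
        // (position << 5) | length into unidecode_pos.
        std::printf("0x%04X -> %s\n", static_cast<unsigned>(c), ascii.c_str());
    }

    delete trans;
    return 0;
}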

The solution I eventually settled on is to still test for ICU during the configuration phase and make use of it in KBibTeX as I did before. However, in case no ICU is available, the offline-generated mapping will be used to offer essentially the same functionality. Switching between both alternatives is a compile-time thing; both code paths are separated by #ifdefs.

Support for iconv has been dropped as it became the least complete solution (see commit 47485312293de32595146637c96784f83f01111e).

Now, how does this generated mapping look? In order to minimize the data structure's size I came up with the following approach: First, there is a string called const char *unidecode_text that contains every occurring plain ASCII representation once, for example only one single a that can be used for ‘a’, ‘ä’, ‘å’, etc. This string is about 28800 characters long for 65536 Unicode code points, where a code point's ASCII representation may be several characters long. So, quite efficient.

Second, there is an array const unsigned int unidecode_pos[] that holds a number for every one of the 65536 Unicode code points. Each number encodes both a position and a length, telling which substring to extract from unidecode_text to get the ASCII representation. As the observed ASCII representations' lengths never exceed 31, the array's unsigned ints contain the representations' lengths in their lower (least significant) five bits; the remaining, more significant bits contain the positions. For example, to get the ASCII representation for ‘Ä’, use the following approach:

#include <string.h> // for strndup

const char16_t unicode = 0x00C4; ///< 'A' with two dots above (diaeresis)
const int pos = unidecode_pos[unicode] >> 5; // upper bits: offset into unidecode_text
const int len = unidecode_pos[unicode] & 31; // lower five bits: length of the ASCII representation
const char *ascii = strndup(unidecode_text + pos, len);

If you want to create a QString object, use this instead of the last line above:

const QString ascii = QString::fromLatin1(unidecode_text + pos, len);

If you would go through this code step-by-step with a debugger, you would see that unidecode_pos[unicode] has value 876481 (this value may change if the generated source code changes). Thus, pos becomes 27390 and len becomes 1. Indeed and not surprisingly, in unidecode_text at this position is the character A. BTW, value 876481 is not just used for ‘Ä’, but also for ‘À’ or ‘Â’, for example.

The above solution can easily be adjusted to work with plain C99 or modern C++. It is in no way specific to Qt or KDE, so it could be offered to musl (a libc implementation) as a potential way to implement a //TRANSLIT feature in its iconv implementation (I have not checked their code to see whether that is possible at all).
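
As an illustration of how little code the plain-C variant would need, a lookup helper over the same two generated tables might look like this (the function unidecode() is my own sketch, not code from KBibTeX or musl):

#include <string.h>

extern const char *unidecode_text;         /* generated, as described above */
extern const unsigned int unidecode_pos[]; /* generated, 65536 entries */

/* Copies the ASCII transliteration of a BMP code point into buf (which
   must hold at least 32 bytes, as lengths never exceed 31) and returns
   the length; code points outside the mapped range yield an empty string. */
size_t unidecode(unsigned int codepoint, char *buf)
{
    if (codepoint > 0xFFFFu) {
        buf[0] = '\0';
        return 0;
    }
    const size_t pos = unidecode_pos[codepoint] >> 5;  /* upper bits: position */
    const size_t len = unidecode_pos[codepoint] & 31u; /* lower 5 bits: length */
    memcpy(buf, unidecode_text + pos, len);
    buf[len] = '\0';
    return len;
}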




As you may have been made aware by various news articles, blogs, and social media posts, a vulnerability in the KDE Plasma desktop was recently disclosed publicly. This occurred without the KDE developers/security team or distributions being informed of the discovered vulnerability, or being given any advance notice of the disclosure.

KDE have responded quickly and responsibly and have now issued an advisory with a ‘fix’ [1].

Kubuntu is now working on applying this fix to our packages.

Packages in the Ubuntu main archive are having updates prepared [2], which will require a period of review before being released.

Consequently, if users wish to get fixed packages sooner, packages with the patches applied have been made available in our PPAs.

Users of Xenial (out of support, but we have provided a patched package anyway), Bionic and Disco can get the updates as follows:

If you have our backports PPA [3] enabled:

The fixed packages are now in that PPA, so all that is required is to update your system by your normal preferred method.

If you do NOT have our backports PPA enabled:

The fixed packages are provided in our UPDATES PPA [4].

sudo add-apt-repository ppa:kubuntu-ppa/ppa
sudo apt update
sudo apt full-upgrade

As a precaution to ensure that the update is picked up by all KDE processes, after updating their system, users should at the very least log out and back in again to restart their entire desktop session.

Regards

Kubuntu Team

[1] – https://kde.org/info/security/advisory-20190807-1.txt
[2] – https://bugs.launchpad.net/ubuntu/+source/kconfig/+bug/1839432
[3] – https://launchpad.net/~kubuntu-ppa/+archive/ubuntu/backports
[4] – https://launchpad.net/~kubuntu-ppa/+archive/ubuntu/ppa

August 08, 2019

This is a short description of a workflow I apply in git repositories that I “own”; it mostly gets applied to Calamares, the Linux installer framework, because I spend most of my development hours on that. But it also goes into ARPA2 projects and home experiments.

It’s a variation on “always summer in master”, and I call it the Git Alligator because when you draw the resulting tree in ASCII-art, horizontally (I realise that’s a pretty niche artform), you get something like this:

    /-o-o-\   /-o-o-o-\ /-o-\
o--o-------o-o---------o-----o--o

To me, that looks like the bumps on an alligator’s back. If I were a bigger fan of Antoine de Saint-Exupéry, I would probably see it as a python that has eaten multiple elephants.

Anyway, the idea is twofold:

  • master is always in a good state
  • I work on (roughly) one thing at a time

For each thing that I work on, I make a branch; if it’s attached to a Calamares issue, I’ll name it after the issue number. If it’s a different bit of work, I’ll name it more creatively. The branch is branched off of master (which is always in a good state). Then I go and work on the branch – commit early, commit often – until the issue is resolved or the feature implemented or whatever.

In a codebase where I’m the only contributor, or where I’m the gatekeeper so that I know master remains unchanged, a merge can go in painlessly. In a codebase with more contributors, I might merge upstream master into my branch right at the end as a sanity check (right at the end because most of these branches are short-lived, a day or two at most for any given issue).

The alligator effect comes in when merging back to master: I always use --no-ff and I try to write an additional summary description of the branch in the merge commit.
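
In command form, one “bump” on the alligator’s back looks roughly like this (the branch name is made up for the example):

git checkout -b issue-1234 master    # branch off an always-good master
# ... work on the branch: commit early, commit often ...
git checkout master
git merge --no-ff issue-1234         # forces a merge commit; write the branch summary there
git branch -d issue-1234             # the bump stays visible in history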

Here’s a screenshot of Calamares history, from qgit, turned on its side like an alligator crawling to the right (cropped a little so you don’t see where I don’t follow my own precepts, and annotated with branch names).

Calamares Alligator History

Aside from the twofold ideas of “always summer in master” and “focus on one thing” I see a couple of other benefits:

  • History if desired; this approach preserves history (all the little steps; although I do rebase, fixup, and amend things as I go along, I don’t materially squash them).
  • Conciseness when needed; having all the history is nice, but if you follow the “alligator’s tummy branch” (that is, master, along the bottom of the diagrams) you get only merge nodes with a completed bugfix or feature and a little summary: in other words, following that line of commits gives you a squashed view of what happened.
  • Visual progress; each “bump” on the alligator’s back is a unit of progress. If I were to merge without --no-ff the whole thing would be smooth like a garter snake, and then it’s much harder to see the “things” that I’ve done. Instead I’d need to look at the log and untangle commit messages to see what I was working on. This has a “positivity” benefit: I can point and say “I did a thing!”

I won’t claim this approach works for everybody, or for larger teams, but it keeps me happy most days of the week, and as a side benefit I get to think about ol’ Albert the Alligator.

…and I’ve made a new wallpaper!

Yes, finally I’m back on my favourite application, Inkscape.


I hope this is a cool presentation.

I called this wallpaper Mountain, because … well, there are mountains with a sun made from the KDE Neon logo. I hope you like it!

You can find it HERE

See you soon with other wallpapers …
