June 28, 2017

Once upon a time, I started an internship. During that internship, I needed to program in Python and also write tests. Based on my short experience with Python, I already hated writing tests. I didn't have enough of a grasp of abstraction to write useful tests. After a really long and painful process of learning, I [...]


As scheduled on the developers' calendar, on June 27 the KDE Community announced that the third update to Plasma 5.10 has been released. Although expected and predictable, this news is tangible proof of the Community's strong commitment to the continuous improvement of this great piece of Free Software.

Third update to Plasma 5.10 released

No software created by humankind is free of bugs. That is an undeniable fact, and its only remedy is updates. That is why the development cycle of the software created by the KDE Community always includes scheduled update dates.

Accordingly, the third update to Plasma 5.10 was released on June 27. It brings only (which is no small thing) fixes for the bugs found during these first weeks of the desktop's life and improvements to the translations.

It is therefore a 100% recommended update for everyone already running Plasma 5.10.

What's new in Plasma 5.10

Plasma 5.10 was released this past May 30, a version that keeps earning good reviews and receiving its scheduled updates.

Some of the new features in Plasma 5.10 are the following:

  • Folder View is now the default desktop mode. This has meant intensive work to improve and optimize the Folder View plasmoid so that this way of using the desktop background is as smooth as possible.

  • Firefox and Chrome (Chromium) integration with Plasma!
  • Improvements to the task manager, such as interacting with it via middle click to group and ungroup applications, and configuration options when the bar is used vertically.
  • Media controls on the lock screen.
  • Music is paused when the machine is suspended.
  • KRunner suggests installing applications when you search for ones that are not installed on the machine (Linux Mint style).
  • Improvements to the management of audio output devices.
  • Multiple improvements for touch screens: virtual keyboards on the lock screens, swipe actions on the screen edges, etc.
  • A new module for adding splash screens (Plymouth).
  • Experimental Flatpak and Snappy support in Discover.
  • Continued improvements to Wayland, which is getting ever closer to becoming the default graphics server.

For all of this, and surely much more that I am leaving out, we welcome Plasma 5.10, which I am convinced will bring us many joys.

KDE Rocks!

June 27, 2017

I've contributed to KDevelop in the past, on the Python plugin, and so far, while working on the Rust plugin, my impressions from back then have proven pretty much spot-on. KDevelop has one of the most well-thought-out codebases I've seen. Specifically, KDevPlatform abstracts over different programming languages incredibly well and makes writing a new language plugin a very pleasant experience.

The Declaration-Use Chain

The Declaration-Use Chain stands at the core of this. It's a simple enough concept: there are things in the language which are declared somewhere and used in other places. Declarations are a generalization of programming language constructs: structs, struct fields, functions, methods, locals, etc. Declarations can open contexts, and any declaration made in a child context is valid only within that context. For example, functions open a new context for declarations inside the function body. Uses are, well, what you'd expect: instances where these language constructs are used.
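
As a quick illustration (my own example, written in C++, KDevelop's home language, rather than Rust), here is how the DU chain would view a tiny snippet:

int counter = 0;                // declaration of "counter" in the top-level context

int increment(int step)         // declaration of "increment"; its body opens a child
{                               // context in which "step" is declared
    counter += step;            // uses of "counter" and "step"
    return counter;             // another use of "counter"
}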

Obviously, there are some details I skipped here, but the beauty in all of this is that KDevelop can figure out a lot of things for you if it simply has access to this information.

The Rust Compiler

The Rust compiler is also really well designed in that it allows accessing the internals such as the syntax parsing library. It's not necessarily the nicest thing, as the internals are intentionally marked unstable, but given that a lot of other tools for working with Rust source code like RLS also depend on compiler-internal libraries, I think this is an acceptable compromise.

These will likely stabilise over time, especially given the amount of effort being put into the save-analysis API. Speaking of which, one thing missing that would be very useful is to be able to get the full span of declarations that have an internal context (e.g. the span that covers the body of a function).

My main effort on the Rust side is trying to expose a C API to the compiler structures that is more independent from the compiler internals themselves. Think libclang. I think something like this would make adding support in other IDEs easier. I'm currently looking into hooking into the various stages of the compiler in order to get both the pre- and post-expansion AST, as well as type information inferred by the compiler (though I'll likely try to implement this myself first for the learning experience :)).

Building the DU Chain for Rust code

And now the fun part: how these fit together. Here is a before and after:

 

Much better. 

Going forward

There are things still missing here; as you may have noticed, uses are not highlighted. I have to admit, I fell about a week and a half behind schedule, due both to a university group project that was more time-consuming than I initially expected and to my attempt to think through how best to expose certain things as a C API from Rust before writing a significant amount of code. Thankfully, I accounted for some delays like this in my original timeline, so this should be recoverable.

Right now, I'm aiming to finish adding the missing elements of the DU Chain by the end of this week and then I'll start looking into code completion. 

There is no better application for managing your images than digiKam, and it never stops evolving. digiKam 5.6 was released this past June 21, bringing back old features that many users had been missing. digiKam thus keeps improving version after version, even if at times that seems impossible.

digiKam 5.6 released, bringing back old features

After digiKam's big jump to Qt5/KF5, its sixth major revision is now with us. It stands out for the work the developers have put into bringing back features lost in that jump, improvements to the database, better support for grouping items, geolocation improvements and, of course, a good number of bugs fixed.


To summarize, the main new features are:

  • HTML gallery creation tool: with it you can create image pages for your website from your image collection. There are many themes to customize it, and it includes JavaScript support.
  • Video Slideshow tool: with it you can easily create videos from your images. The format, codec, resolution and transitions can all be adjusted.
  • Database integrity check tool: it safely removes obsolete data from the databases and helps reduce their size. It is also now possible to use MySQL databases.
  • Improvements to item grouping, with the ability to create your own image collections.
  • The return of geolocation bookmarks in the packaged builds (AppImage, macOS and Windows).

All in all, this shows a great deal of work for the application's present and, above all, for its future.

Oh! And note also that this release brings a good number of bug fixes, specifically 81 bugs resolved in Bugzilla.

You can download the digiKam software collection source code, as well as the installers for OSX (>= 10.8) and for Windows 32/64 bit, from this repository.

More information: digiKam

Hi everyone, how is it going? Fine, I hope.

Today, I will talk about my work on Krita during the week 3-4 of the coding period.

Overview

I implemented two new plugins: a canvas size plugin and a filter manager (applier) plugin. The first changes the image size (without scaling) and can also adjust the top and left edges of the selected documents. The second applies filters to a document or its nodes: you select a node or the whole document and apply a filter to it.

Canvas Size Plugin

GUI Mockup

I drew a simple mockup to give my implementation some direction. As you can see, it's close to the current implementation, but without a canvas preview; instead, we have a list of the open documents.


C++ and SIP implementation

I had to add offset properties (xOffset, yOffset) to the document API and update the resizeImage method to take the new parameters. I implemented these properties and their respective SIP signatures. I also wrote documentation for each implemented method in the .h files. Click here to take a look.
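
To give a rough idea of what that involves (this is only my sketch of what the post describes, not the actual patch), the additions on the C++ side of the libkis Document class could look roughly like this, with matching SIP signatures generated for Python:

// Hypothetical sketch of the Document additions; names taken from the post.
class Document : public QObject
{
    Q_OBJECT
public Q_SLOTS:
    int xOffset() const;              // horizontal offset of the image
    void setXOffset(int x);
    int yOffset() const;              // vertical offset of the image
    void setYOffset(int y);

    // resizeImage now also takes the new top-left corner, not just width and height.
    void resizeImage(int x, int y, int w, int h);
};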

GUI implementation with PyQt

Finally, I implemented the GUI with all its interactions and validations. As you can see below, there is a confirm button that resizes the image in the selected documents.


Testing

Below, you can see the plugin working.

Before After


Filter Manager Plugin

GUI Mockup

I didn't draw a GUI mockup, but the main idea here is to have a tree showing the documents and their nodes, and a list of the application's filters.

C++ and SIP implementation

I didn't change the C++ code, though I did fix some bugs in the API.

GUI implementation with PyQt

You can see the final result of my work below. The tree was implemented using the Model-View pattern. You can apply filters from the list to a specific node or to the whole document (all top-level nodes).


Testing

Below, you can see the image before and after the application of the "desaturate" filter.

Before After

What will I do in the next week?

  • Simple scripts for the previous plugins
  • Plugin that takes a list of images (an input list) and applies rotate/scale
  • Script to export all the layers (batch)
  • Script to duplicate an image
  • Script to export to JPG at X% quality

That’s it for now, until the next week. See ya!!

June 26, 2017

Image of Beastie + Ninja

There’s quite a lot of software that uses CMake as a (meta-)buildsystem. A quick count in the FreeBSD ports tree shows me 1110 ports (over a thousand) that use it. CMake generates buildsystem files which then direct the actual build — it doesn’t do the building itself.

There are multiple buildsystem-backends available: in regular usage, CMake generates Makefiles (and does a reasonable job of producing Makefiles that work for GNU Make and for BSD Make). But it can generate Ninja, or Visual Studio, and other buildsystem files. It’s quite flexible in this regard.

Recently, the KDE-FreeBSD team has been working on Qt WebEngine, which is horrible. It contains a complete Chromium and who knows what else. Rebuilding it takes forever.

But Tobias (KDE-FreeBSD) and Koos (GNOME-FreeBSD) noticed that building things with the Ninja backend was considerably faster for some packages (e.g. Qt WebEngine, and Evolution data-thingy). Tobias wanted to try to extend the build-time improvements to all of the CMake-based ports in FreeBSD, and over the past few days, this has been a success.

Ports builds using CMake now default to using Ninja as buildsystem-backend.

Here’s a bitty table of build-times. These are one-off build times, so hardly scientifically accurate — but suggestive of a slight improvement in build time.

Name Size GMake Ninja
liblxt 50kB 0:32 0:31
llvm38 1655kB * 19:43
musescore 47590kB 4:00 3:54
webkit2-gtk3 14652kB 44:29 37:40

Or here’s a much more thorough table of results from tcberner@, who did 5 builds of each with and without ninja. I’ve cut out the raw data, here are just the average-of-five results, showing usually a slight improvement in build time with Ninja.

Name avg make avg ninja Delta Delta %
compiler-rt 00:08 00:07 -00:01 -14%
openjpeg 00:06 00:07 +00:01 +17%
marble 01:57 01:43 -00:14 -11%
uhd 01:49 01:34 -00:15 -13%
opencascade 04:08 03:23 -00:45 -18%
avidemux 03:01 02:49 -00:12 -6%
kdevelop 01:43 01:33 -00:10 -9%
ring-libclient 00:58 00:53 -00:05 -8%

Not everything builds properly with Ninja. This is usually due to missing dependencies that CMake does not discover; this shows up when foo depends on bar but no rule is generated for it. Depending on build order and speed, bar may be there already by the time foo gets around to being built. Doxygen showed this, where builds on 1 CPU core were all fine, but 8 cores would blow up occasionally.

In many cases, we’ve gone and fixed the missing implicit dependencies in ports and upstreams. But some things are intractable, or just really need GNU Make. For this, the FreeBSD ports infrastructure now has a knob attached to CMake for switching a port build to GNU Make.

  • Normal: USES=cmake
  • Out-of-source: USES=cmake:outsource
  • GNU Make: USES=cmake:noninja gmake
  • OoS, GMake: USES=cmake:outsource,noninja gmake
  • Bad: USES=cmake gmake

For the majority of users, this has no effect, but for our package-building clusters, and for KDE-FreeBSD developers who build a lot of CMake-buildsystem software in a day it may add up to an extra coffee break. So I’ll raise a shot of espresso to friendship between daemons and ninjas.

(Image credits: Beastie by Marshall Kirk McKusick on FreeBSD.org, Ninja by irkeninvaderkit on deviantart)

openMandriva Lx 3.02 was released this past June 21: a new version of a distribution derived from the legendary Mandriva, which died a while ago, leaving the project's future, once again, in the hands of the Community. It is not that we talk much about this distribution on the blog (anyone from the Community want to lend me a hand?), but I think its most important releases deserve at least my full attention.

OpenMandriva Lx 3.02 released

After months of work, the OpenMandriva development team is pleased to present OpenMandriva Lx 3.02, the new version of its beloved distribution.

Naturally, the OpenMandriva Community hopes it will be well received, and to that end it promises the latest software, a fast boot and snappy operation.


Some of the highlights of this release are the following:

Regarding the KDE environment, we find its latest versions:

  • Frameworks 5.33.0
  • Plasma 5.9.5
  • Applications 17.04
  • Qt 5.8.0

Regarding the display stack we have:

  • Xorg 1.19.3
  • Wayland 1.12.0
  • Mesa 17.1.1

Regarding the base system:

  • Kernel 4.11.3
  • systemd 233
  • LLVM/clang 4.0.1
  • gcc 6.3.1_2017.02
  • glibc 2.25

As for multimedia, OpenMandriva Lx 3.02 ships the latest versions of the most capable players: SMPlayer, VLC and mpv. It also provides graphics applications such as Krita, Showfoto and digiKam.


As other software of interest to the end user, we find the LibreOffice 5.3.3 office suite and the Qupzilla 2.1.2 and Firefox 53.0 browsers.

More information: OpenMandriva | La mirada del replicante

What is OpenMandriva?

As usual, the best way to define a project is to leave it in the words of its creators:

OpenMandriva Lx is an exciting desktop operating system that aims to attract and interest both new and advanced users. It has the scope and depth of an extremely complete system for every kind of need, yet it is designed to be simple and easy to use.

What is the OpenMandriva Association?

The OpenMandriva Association was established on December 12, 2012, under the French 1901 law. It represents people who are passionate about free software.

The association embodies the paradigm: from the Community, for the Community, with passion, fun and dedication. Its members strive to offer a pleasant environment for daily tasks and a secure, reliable system for production use.

More information: openMandriva Blog

Heading towards the first evaluation:

Even with the exams and graduation project work this month, I am heading towards the first evaluation on schedule, with the planned UI ready for further development. I started my research by examining a similar tool in GIMP, and I gained more insight into how the “Image Editor” works at a deeper level and into how I should work to reach my next milestone, cloning of static parts with support for a variable radius, within the next 11 days.

tool_ui


June 25, 2017

During this week, I decided to spend more time on language support: code completion, highlighting and so on. This part is provided by the DU-Chain. DU-Chain stands for Definition-Use chain, which consists of various contexts, declarations in these contexts, and usages of these declarations.

The first change improved the declaration of variables in the parameters of anonymous functions. In Go, it is possible to define an anonymous function and assign it to a variable, pass it as a parameter (people use this for various callbacks, for example) or simply call it. Before my change, parameters of anonymous functions were treated as declarations only when the function was assigned to a variable. Thus, if, for example, you typed this example of Gin web framework usage:
package main

import "gopkg.in/gin-gonic/gin.v1"

func main() {
    r := gin.Default()
    r.GET("/ping", func(c *gin.Context) {
        c.JSON(200, gin.H{
            "message": "pong",
        })
    })
    r.Run() // listen and serve on 0.0.0.0:8080
}
you would end up with “c” not being highlighted / treated as a variable. After my change, parameters of anonymous functions are treated as variable declarations in all three cases: assigning, passing and calling (see screenshots).
Before

After



The second change is under review and is aimed at adding code completion for embedded structs. In Go, there is no such thing as inheritance: composition is preferred over it. Composition often has a drawback: we need to forward all calls to the "base" methods, so there is a lot of boilerplate code. In Go this problem is solved by "embedding" structs, so their fields and methods are added to the top-level struct. For example, if struct A has a method Work and struct B embeds struct A, then both B.Work() and B.A.Work() are correct. Because of that, we need to traverse the whole embedding tree to retrieve all possible completions; this is what my second change is aimed at.
Before
After


The third change added parsing errors as problems in the “Problems” view.

The fourth change fixed usage information being misplaced in the wrong context. Before that change, information about variable usages was placed in the top context, which broke semantic highlighting: variable declarations had different colors, but usages all looked the same. Therefore, while improving the overall correctness of the generated DU-Chain, I also fixed that issue, and now variable usages are colored too (see screenshots)!
Before 
After


Apart from the DU-Chain improvements, I got a basic project manager plugin merged, which offers a template for a simple console Go application and makes building Go projects easier.



Looking forward to next week!

For the last month most of my time went into exams, so I did not do much for my project. Nevertheless, I implemented the basic primitives and tested them.

Let me tell you about them.

Wet map.

Water is the main ingredient in watercolors. That’s why I started with it.

The wet map contains two types of information: a water value and a speed vector. While the first parameter is clear, the second one needs explanation. The speed vector is needed for rewetting our splats (keep it in mind; I will explain what this means later).

The wet map stores all these values in a KisPaintDevice:

KisPaintDeviceSP m_wetMap;

rgb16 was chosen as the color space:

m_wetMap = new KisPaintDevice(KoColorSpaceRegistry::instance()->rgb16());

The information about the water value and the speed vector is stored in the pixel data.
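
As a rough illustration of the idea (my own assumption about the layout, not the plugin's actual code), one wet-map pixel can pack the water amount and the two components of the speed vector into the 16-bit channels of an rgb16 pixel:

#include <QtGlobal>

// Hypothetical layout of a single wet-map pixel.
struct WetPixel {
    quint16 water;   // how much water sits on this cell
    quint16 speedX;  // horizontal component of the flow/speed vector
    quint16 speedY;  // vertical component of the flow/speed vector
};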

But in this form the paint device can’t visualize the wet map correctly:

So I transform the paint device to visualize the wet map correctly (because it will be important for artists, I think). And now it looks like this:

Splat

My implementation is based on a procedural brush. Every brush stamp is a union of dynamic splats. Here you can see the behavior of splats:

I also tested rewetting (when a splat goes from the fixed state back to the flowing state):


And as a final test I made a splat generator for simulating strokes:


What next?

It’s high time to get splats working in Krita. So I’m going to finish my plugin and test the splats’ behavior. But it will be primitive:

  1. Clear canvas for updating splats
  2. No undo/redo
  3. Stupid singleton for splat storage

I was able to make major improvements to the build system of KStars. I think more and more open-source projects should pick these low-hanging fruits with CMake and Clang:

- CCache: Speeds up development and working with git branches by caching the compiled object files.
- Unity Build: This simple "hack" can reduce the build time dramatically by compiling temporary C++ meta files in which the normal C++ files are included (see the sketch below). The build time can be sped up by 2x-3x for bigger projects.
- Clang Sanitizers: Use the undefined behavior and address sanitizers to hunt down memory handling errors. Although the executable must be recompiled with Clang and special compiler flags, the resulting binary will run with minimal slowdown. It is not a complete replacement, but these sanitizers can catch most of the problems found by Valgrind during normal runtime.
- Clang Format: Format the source code with a real compiler engine.
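
As a rough illustration of the unity build idea (the file names are just examples, not the actual KStars meta files), a generated meta file is simply a C++ file that #includes several ordinary translation units so they are compiled in one go:

// unity_kstars_1.cpp -- hypothetical generated meta file.
// Compiling this single file builds all the included sources at once,
// so headers are parsed and templates instantiated only once for the group.
#include "skyobject.cpp"
#include "skypoint.cpp"
#include "skymap.cpp"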


More details are on our wiki page:
https://techbase.kde.org/Projects/Edu/KStars/C%2B%2B_developer_tools_with_KStars_on_Linux

June 24, 2017

Before writing about my actual Summer of Code experiences, I wanted to briefly share what I worked on before the official coding start.

Plasma Toolicons

Plasma toolicons

Can you see these ridiculously small toolicons next to the Media frame plasmoid? I’m talking about the icons which allow you to Resize, Rotate and Remove the Plasmoid. Compared with other icons, these are clearly too small.

So, as preparation for GSoC, I wanted to know why this happens, and what would be required to make them bigger. Thus my journey into the (Plasma) rabbithole began…

Tracking down where the containment is defined was relatively easy, and shortly after I found the code responsible for the ActionButton:

PlasmaCore.ToolTipArea {
    id: button

    location: PlasmaCore.Types.LeftEdge
    mainText: action !== undefined ? action.text : ""
    mainItem: toolTipDelegate

    //API
    property QtObject svg
    property alias elementId: icon.elementId
    property QtObject action
    property bool backgroundVisible: false
    property int iconSize: 32
[...]

Huh, the iconSize is 32px? Well, that was easy to fix, surely this should be set to units.iconSizes.small and this problem is gone…

… or so I thought. No, this didn’t improve the situation, back to square one.


Is it overwritten by the look and feel theme? plasma-workspace/lookandfeel/contents/components/ActionButton.qml at least doesn’t overwrite it - and the problem also happens with the style set to Oxygen.


While looking at this, I also noticed that units.iconSizes.small returned 16px on my system. This seemed odd, because the scale factor was set to 1.8x, so I would have expected bigger icons.

Where is this icon size calculated? Ah yes, in the file units.cpp, method Units::devicePixelIconSize.

int Units::devicePixelIconSize(const int size) const
{
    /* in kiconloader.h
    enum StdSizes {
        SizeSmall=16,
        SizeSmallMedium=22,
        SizeMedium=32,
[...]
     };
    */
    // Scale the icon sizes up using the devicePixelRatio
    // This function returns the next stepping icon size
    // and multiplies the global settings with the dpi ratio.
    const qreal ratio = devicePixelRatio();

    if (ratio < 1.5) {
        return size;
     } else if (ratio < 2.0) {
         return size * 1.5;
     } else if (ratio < 2.5) {
[...]
}

Ok, my devicePixelRatio is 1.8 and therefore the icon size of a small pixmap gets multiplied by 1.5 and a request for a small (16px) pixmap should return a 24px pixmap.

But it doesn’t…

Debugging suggested that my devicePixelRatio is NOT 1.8, but rather around 1.4. How did that happen, isn’t the scale factor from the KDE settings used?


Oh, the comment in updateDevicePixelRatio() mentions that QGuiApplication::devicePixelRatio() is really not used:

void Units::updateDevicePixelRatio()
{
    // Using QGuiApplication::devicePixelRatio() gives too coarse values,
    // i.e. it directly jumps from 1.0 to 2.0. We want tighter control on
    // sizing, so we compute the exact ratio and use that.
    // TODO: make it possible to adapt to the dpi for the current screen dpi
    //  instead of assuming that all of them use the same dpi which applies for
    //  X11 but not for other systems.
    QScreen *primary = QGuiApplication::primaryScreen();
    if (!primary) {
       return;
    }
    const qreal dpi = primary->logicalDotsPerInchX();
    // Usual "default" is 96 dpi
    // that magic ratio follows the definition of "device independent pixel" by Microsoft
    m_devicePixelRatio = (qreal)dpi / (qreal)96;

Hmm, yes, that was the case in earlier Qt versions when devicePixelRatio still returned an integer - but nowadays the value is a real number.

So, instead of calculating dpi / 96 I just changed it to return primary->devicePixelRatio().

Which now, finally, should return a devicePixelRatio of 1.8 and therefore result in bigger pixmaps.

Compiled it, and confident of victory restarted plasmashell… only to notice that it still didn’t work.


What could there still go wrong?

So I got back to debugging… only to notice that primary->devicePixelRatio() returns a scale factor of 1.0. Huh? Isn’t this supposed to just use the QT_SCREEN_SCALE_FACTORS environment variable, which gets set to the value of the “Scale Display” dialog in the System Settings? If you want to know, the code for setting the environment variable is located in plasma-workspace/startkde/startplasmacompositor.cmake.

But why isn’t the problem gone, is there something in Plasma that overwrites this value?


Yes, of course there is!

The only way this value can be overwritten is due to the Qt attribute Qt::AA_DisableHighDpiScaling.

Grep’ing for that one pointed me to plasma-workspace/shell/main.cpp - the base file for plasmashell:

int main(int argc, char *argv[])
{
//    Devive pixel ratio has some problems in plasmashell currently.
//     - dialog continually expands (347951)
//     - Text element text is screwed (QTBUG-42606)
//     - Panel struts (350614)
//  This variable should possibly be removed when all are fixed

   qunsetenv("QT_DEVICE_PIXEL_RATIO");
   QCoreApplication::setAttribute(Qt::AA_DisableHighDpiScaling);

I looked into the mentioned bugs. What should I do now? Re-enable the HighDpiScaling flag so the real devicePixelRatio is returned by Qt, so that I can use this value to calculate the sizes icons should have and finally get the bigger Plasma ToolIcons? At least QTBUG-42606 seems to be fixed…

Oh boy, what have I gotten into now…

It was time to talk to my mentor!


David Edmundson quickly noticed that there should be no mismatch with the dpi / 96 calculation. Something fishy seems to be going on here…

What is this dpi value anyway? This is the one reported by xrdb -query |grep Xft.dpi and managed in the file $HOME/.config/kcmfonts.

And that value - as you probably can guess by now - did not make any sense. It didn’t match the expectation of being scaleFactor * 96, the value it should have been set to.


On we go to the location where the scaling configuration is set - scalingconfig.cpp in the KScreen KConfig Module.

void ScalingConfig::accept()
{
[...]
   fontConfigGroup.writeEntry("forceFontDPI", scaleDPI());
[...]
}

qreal ScalingConfig::scaleDPI() const
{
    return scaleFactor() * 96.0;
}

This scaleDPI is then applied with xrdb -quiet -merge -nocpp on startup.

So Xft.dpi is set to 1.8 * 96.0, or 172.8.

Have you spotted what is going wrong?

I did not, but David noticed…

The X server can only handle integer values here - and therefore 172.8 is simply discarded!

A few moments later this patch was ready…

Plasma Toolicons

… and I was finally able to enjoy my icons in all their scaled glory! And you can too, because this patch is already in Plasma 5.10.



Hello again...

Because there are users who requested a way to support Latte even further and donate to the project, we created a donation page at Pledgie: https://pledgie.com/campaigns/34116

We also added a donate button on our main page at: https://github.com/psifidotos/Latte-Dock/






To cheer you up a bit about the upcoming 0.7 version, which is scheduled for the end of August or maybe earlier ;) depending on the effort...

Here are some of the features already implemented and some that are going to land:

  • support for layouts that can be switched with one click (we extended our export/import configuration mechanism into layouts; we already provide 5 predefined layouts, and the user can add as many as they want)
  • two layouts for the configuration window (Basic/Advanced)
  • set the dock transparency per dock and enable/disable its panel shadows
  • global shortcuts: Super+Number (Unity style), Super+Ctrl+Number (new instance), Super+` (raise the hidden dock)
  • support, through libtaskmanager, for the libunity interface in order to show progress indicators and item counters such as unread e-mails, etc. (this needs libunity9 installed); we also independently expose our own simpler D-Bus interface to show counters for tasks, for programmers out there who don't want to use the libunity way
  • task audio stream indicator (increase/decrease the volume with the scroll wheel and mute by clicking it)
  • the new Places exposer for your tasks/launchers that Plasma 5.10 offers
  • dynamic dock background (show the background only for maximized windows, and also choose whether the dock shadow should be shown in that case)
  • copy your docks easily with one click (extremely useful in multi-screen environments where you want the same dock on different screens)
  • sync your launchers between all your docks (global launchers; they do not erase your per-dock launchers if you disable them)
  • Wayland tech preview (code for this is already implemented and has landed in our master)
  • support for fillWidth/Height applets like the plasma panel
  • replace the Latte plasmoid with your favourite Plasma task manager (having implemented proper fillWidth/Height for applets, we can now provide this)
  • support for separators everywhere, working correctly with the parabolic effect
  • various improvements to launcher startup, animations, etc.

Latte 0.7 will be compatible with Plasma >= 5.9 and Qt >= 5.7


thanks everyone once more for your support...







I got an opportunity to represent KDE in FOSSASIA 2017 held in mid-March at Science Center, Singapore. There were many communities showcasing their hardware, designs, graphics, and software.

 Science Center, Singapore
I talked about KDE, what it aims at, the various programs that KDE organizes to help budding developers, and how these are mentored. I walked through everything needed to start contributing to KDE by introducing the audience to KDE Bugzilla, the IRC channels, the various application domains, and the SoK (Season of KDE) proposal format.

                                  


Then I shared my journey in KDE and gave a brief overview of my projects under Season of KDE and Google Summer of Code. The audience was really enthusiastic and curious to start contributing to KDE. I sincerely thank FOSSASIA for giving me this wonderful opportunity.

Overall, working in KDE has been a very enriching experience. I wish to continue contributing to KDE and to share my experiences to help budding developers get started.

June 23, 2017

Week 2 of GSoC’s coding period was pretty dope :D. After all the hard work from last week, I got my downloader to pull data from the website share.krita.org. In Week #1’s work status update, we discussed which classes and functions were required to get this running. I was able to get it done, and the downloader started downloading the data from the website.

PS: To get my project up and running we need KNewStuff framework version 5.29+. The KNS team has done a lot of work in this area to make things move along nicely. (They have separated KNSCore from KNS3 since then.)

Before I proceed, I would love to mention the immense support and help given to me by Leinir in understanding how KNS and KNSCore work. If he hadn’t noticed the blog post I published at the start of my project and the official coding period, I would be lost at every single point of my project :P. The same goes for my Krita community people.

Beyond the different classes I created for the project, we used certain core KDE frameworks/APIs to complete the GUI and get things working as planned.

Some of them are listed below.

  • KConfig

The KConfig framework offers functionality around reading and writing configuration. KConfig consists of two parts, KConfigCore and KConfigGui. KConfigCore offers an abstract API to configuration files. It allows grouping and transparent cascading and (de-)serialization. KConfigGui offers, on top of KConfigCore, shortcuts, and configuration for some common cases, such as session and window restoration.
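
As a small, hedged example of the kind of KConfigCore usage this enables (the file name, group and keys below are placeholders, not the ones used in the project):

#include <KConfig>
#include <KConfigGroup>
#include <QString>

// Remember the last category the user browsed in a hypothetical rc file.
QString rememberCategory(const QString &newCategory)
{
    KConfig config(QStringLiteral("exampledownloaderrc"));        // placeholder file name
    KConfigGroup group = config.group(QStringLiteral("General")); // entries are grouped
    const QString previous = group.readEntry("LastCategory", QStringLiteral("All"));
    group.writeEntry("LastCategory", newCategory);
    config.sync();                                                // flush changes to disk
    return previous;
}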

  • KWidgetsAddons

KWidgetsAddons contains higher-level user interface elements for common tasks, with widgets for the following areas:

  • Keyboard accelerators
  • Action menus and selections
  • Capacity indicator
  • Character selection
  • Color selection
  • Drag decorators
  • Fonts
  • Message boxes
  • Passwords
  • Paging of e.g. wizards
  • Popups and other dialogs
  • Rating
  • Ruler
  • Separators
  • Squeezed labels
  • Titles
  • URL line edits with drag and drop
  • View state serialization
  • KRatingWidget

This class is part of KWidgetsAddons. It displays a rating value as a row of pixmaps: a range of stars or other arbitrary pixmaps, and it allows the user to select a certain number of them with the mouse.
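
A minimal usage sketch (my own example, assuming the widget is placed inside some parent widget):

#include <KRatingWidget>
#include <QWidget>

// Show a star rating for an item and react when the user picks a new value.
void addRatingTo(QWidget *parent)
{
    auto *rating = new KRatingWidget(parent);
    rating->setMaxRating(10);   // ratings are commonly stored out of 10
    rating->setRating(8);       // e.g. four of five stars in half-star steps
    QObject::connect(rating, qOverload<int>(&KRatingWidget::ratingChanged),
                     parent, [](int newRating) {
                         Q_UNUSED(newRating);  // forward the user's vote here
                     });
}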

So far I have implemented order-by functionality to sort the data items, including sorting by categories, where the categories are populated from the knsrc file. Users can rate each item with stars according to their liking and view the expanded details of each item. I also reworked the way items can be viewed in different modes, such as icon mode and list mode, and the search function across items is working just fine.

To see all the changes and test them visually, I created a test UI that shows how things work and how data is pulled in from the site. I will attach it here:

Content downloader with the basic test UI, which looks like the existing KNewStuff dialog. The next step is to change it to our own customizable UI.

Plans for Week #3

Start creating the UI for the resource downloader, which will be customizable from then on. We just need to tweak the existing UI to our needs.

Here is what we actually need.

This week is followed by the first evaluation of our work, and I have mostly done my part well, completing the required tasks in time. So, after the first-phase evaluation is over, I will be doing the following:

  1. Give the work done till now a test run with the new and revised GUI for the content downloader.
  2. Fix any bugs found or noticed during the testing phase of the content downloader, and fix some of the bugs that might exist in the Resource Manager after discussing it with the Krita community.
  3. Meanwhile, while these are going through, I will be documenting the functions and classes that were created or changed.

Here is my branch where all the work I have done is going:

https://cgit.kde.org/krita.git/?h=Aniketh%2FT6108-Integrate-with-share.krita.org

Will be back with more updates later next week.

Cheers.


In this week’s article for my ongoing Google Summer of Code (GSoC) project I planned on writing about the basic idea behind the project, but I reconsidered and decided to first give an overview of how Xwayland functions on a high level, and next week take a look at its inner workings in detail. The reason is that there is not much Xwayland documentation available right now, so these two articles are meant to fill this void and give interested beginners a helping hand. And in two weeks I’ll catch up on explaining the project’s idea.

As we go high level this week, the first question is: what is Xwayland supposed to achieve at all? You may know this already. It’s something in a Wayland session that ensures that applications which don’t support Wayland, but only the old Xserver, still function normally, i.e. it ensures backwards compatibility. But how does it do this? Before we go into that, there is one more thing to talk about, since I only called Xwayland “something” just now. What is Xwayland exactly? How does it look on your Linux system? We’ll see next week that this is not as easy to answer as the following simple explanation makes it appear, but for now this is enough: it’s a single binary containing an Xserver with a special backend written to communicate with the Wayland compositor active on your system - for example with KWin in a Plasma Wayland session.

To make it more tangible let’s take a look at Debian: There is a package called Xwayland and it consists of basically only the aforementioned binary file. This binary gets copied to /usr/bin/Xwayland. Compare this to the normal Xserver provided by X.org, which in Debian you can find in the package xserver-xorg-core. The respective binary gets put into /usr/bin/Xorg together with a symlink /usr/bin/X pointing to it.

While the latter is the central building block in an X session and therefore gets launched before anything else with graphical output, the Xserver in the Xwayland binary works differently: it is embedded in a Wayland session. And in a Wayland session the Wayland compositor is the central building block. This means in particular that the Wayland compositor also takes up the role of the server, which talks to Wayland-native applications with graphical output as its clients. They send requests to it in order to present their painted content on the screen. The Xserver in the Xwayland binary is only a necessary link between applications that can only speak to an Xserver and the Wayland compositor/server. Therefore the Xwayland binary gets launched later on by the compositor or some other process in the workspace. In Plasma it’s launched by KWin after the compositor has initialized the rendering pipeline. You find the relevant code here.

Although in this case KWin also establishes some communication channels with the newly created Xwayland process, in general the communication between Xwayland and a Wayland server is done through the normal Wayland protocol, in the same way other native Wayland applications talk to the compositor/server. This means the windows requested by possibly several X-based applications and provided by Xwayland acting as an Xserver are translated at the same time by Xwayland into Wayland-compatible objects and, with Xwayland acting as a native Wayland client, sent to the Wayland compositor via the Wayland protocol. These windows look to the Wayland compositor just like the windows - in Wayland terminology, surfaces - of every other Wayland-native application. When reading this, keep in mind that an application in Wayland is not limited to using only one window/surface but can create multiple at the same time, so Xwayland as a native Wayland client can do the same for all the windows created for all of its X clients.

In the second part next week we’ll have a close look at the Xwayland code to see how Xwayland fills its role as an Xserver in regards to its X based clients and at the same time acts as a Wayland client when facing the Wayland compositor.

Look what we got today by snail mail:

It’s a children’s nonfiction book, nice for adults too, by Jeremy Hyman (text) and Haude Levesque (art). All the art was made with Krita!

Jeremy:

One of my favorite illustrations is the singing White-throated sparrow (page 24-25). The details of the wing feathers, the boldness of the black and white stripes, and the shine in the eye all make the bird leap off the page.

I love the picture of the long tailed manakins (page 32-33). I think this illustration captures the velvety black of the body plumage, and the soft texture of the blue cape, and the shining red of the cap. I also like the way the unfocused background makes the birds in the foreground seem so crisp. It reminds me of seeing these birds in Costa Rica – in dark and misty tropical forests, the world often seems a bit out of focus until a bright bird, flower, or butterfly focuses your attention.

I also love the picture of the red-knobbed hornbill (page 68-69). You can see the texture and detail of the feathers, even in the dark black feathers of the wings and back. The illustration combines the crispness and texture of the branches, leaves and fruits in the foreground, with the softer focus on leaves in the background and a clear blue sky. Something about this illustration reminds me of the bird dioramas at the American Museum of Natural History – a place I visited many times with my grandfather (to whom the book is dedicated). The realism of those dioramas made me fantasize about seeing those birds and those landscapes someday. Hopefully, good illustrations will similarly inspire some children to see the birds of the world.

Haude:

My name is Haude Levesque and I am a scientific illustrator, writer and fish biologist. I have always been interested in both animal sciences and art, and it has been hard to choose between both careers, until I started illustrating books as a side job, about ten years ago while doing my post doc. My first illustration job was a book about insect behavior (Bug Butts), which I did digitally after taking an illustration class at the University of Minnesota. Since then, I have been both teaching biology, illustrating and writing books, while raising my two kids. The book “Bird Brains” belongs to a series with two other books that I illustrated, and I wanted to have illustrations that look similar, which means full double-page illustrations of a main animal in its natural habitat. I started using Krita only a year ago when illustrating “Bird Brains”, upon a suggestion from my husband, who is a software engineer and into open source software. I was getting frustrated with the software I had used previously, because it did not allow me to render life-like drawings, and required too many steps and time to do what I wanted. I also wanted my drawings to look like real paintings and to get the feeling that I am painting, and Krita’s brushes do just that. It is hard for me to choose a favorite illustration in “Bird Brains”, I like them all and I know how many hours I spent on each. But, if I had to, I would say the superb lyrebird, page 28 and 29. I like how this bird is walking and singing at the same time and I like how I could render its plumage while giving him a real life posture.

I also like the striated heron, page 60 and 61. Herons are my favorite birds and I like the contrast between the pink and the green of the lilypads. Overall I am very happy with the illustrations in this book and I am planning on doing more scientific books for kids and possibly try fiction as well.

You can get it here from Amazon or here from Book Depository.

June 22, 2017

ISO Image Writer is a tool I’m working on which writes .iso files onto a USB disk ready for installing your lovely new operating system.  Surprisingly many distros don’t have very slick recommendations for how to do this but they’re all welcome to try this.

It’s based on ROSA Image Writer which has served KDE neon and other projects well for some time.  This adds ISO verification to automatically check the digital signatures or checksums; currently supported are KDE neon, Kubuntu and Netrunner.  It also uses KAuth so it doesn’t run the UI as root, only a simple helper binary to do the writing.  And it uses KDE Frameworks goodness so the UI feels nice.

First alpha 0.1 is out now.

Download from https://download.kde.org/unstable/isoimagewriter/

Signed by release manager Jonathan Riddell with 0xEC94D18F7F05997E. Git tags are also signed by the same key.

It’s in KDE Git at kde:isoimagewriter and on bugs.kde.org, please do try it out and report any issues.  If you’d like a distro added to the verification please let me know and/or submit a patch. (The code to do this is a bit verbose currently; it needs tidying up.)

I’d like to work out how to make AppImages, Windows and Mac installs for this but for now it’s in KDE neon developer editions and available as source.

 


One of my preferred developer tools is a web application called Compiler Explorer. The tool itself is excellent and useful when trying to optimize your code.
The author of the tool describes it in the Github repository as:

Compiler Explorer is an interactive compiler. The left-hand pane shows editable C/C++/Rust/Go/D/Haskell code. The right, the assembly output of having compiled the code with a given compiler and settings. Multiple compilers are supported, and the UI layout is configurable (the Golden Layout library is used for this). There is also an ispc compiler for a C variant with extensions for SPMD.

The main problem I found with the tool is that it does not let you write Qt code. I needed to remove all the Qt includes and modify or remove a lot of code…

So I decided to modify the tool to be able to find the Qt headers. To do that, first of all we need to clone the source code:

git clone git@github.com:mattgodbolt/compiler-explorer.git

The application is written using node.js, so make sure you have it installed before starting.

The next step is to modify the options line in etc/config/c++.defaults.properties:

-fPIC -std=c++14 -isystem /opt/qtbase_dev/include -isystem /opt/qtbase_dev/include/QtCore

You need to replace /opt/qtbase_dev with your own Qt build path.

Then simply call make in the root folder, and the application starts running on port 10240 (by default).
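
With the Qt include paths in place, a small QtCore-only snippet like this one (my own example, not from the post) can be pasted into the left-hand pane to inspect the assembly the compiler generates for Qt types:

#include <QString>
#include <QVector>

// Sum the lengths of a list of names; handy for watching what the
// optimizer does with Qt's implicitly shared containers.
int totalLength(const QVector<QString> &names)
{
    int total = 0;
    for (const QString &name : names)
        total += name.size();
    return total;
}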

And the mandatory screenshots:

screenshot_20170622_152005

screenshot_20170622_152744

The post Using Compiler Explorer with Qt appeared first on Qt Blog.

June 21, 2017

In this post, I am going to discuss how a submarine works and my thought process in implementing the three basic features of a submarine in the “Pilot a Submarine” activity for the Qt version of GCompris, which are:

  • The Engine
  • The Ballast tanks and
  • The Diving Planes

The Engine

The engines of most submarines are either nuclear powered or diesel-electric, and they drive an electric motor which in turn powers the submarine's propellers. In this implementation, we will have two buttons: one for increasing and another for decreasing the power generated by the submarine.

Ballast Tanks

The ballast tanks are spaces in the submarine that can be filled with either water or air. They help the submarine dive and resurface, using the concept of buoyancy: if the tanks are filled with water, the submarine dives underwater, and if they are filled with air, it resurfaces.

Diving Planes

Once underwater, the diving planes of a submarine help to accurately control its depth. These are very similar to the fins on a shark's body, which help it swim and dive. When the planes are pointed downwards, the water flowing above the planes generates more pressure on the top surface than on the bottom surface, forcing the submarine to dive deeper. This allows the pilot to control the depth and the angle of the submarine.

Implementation

In this section I will be going through how I implemented the submarine using QML. For handling physics, I used Box2D.

The Submarine

The submarine is a QML Item element, designed as follows:

Item {
    id: submarine

    z: 1

    property point initialPosition: Qt.point(0,0)
    property bool isHit: false
    property int terminalVelocityIndex: 100
    property int resetVerticalSpeed: 500

    /* Maximum depth the submarine can dive when ballast tank is full */
    property real maximumDepthOnFullTanks: (background.height * 0.6) / 2

    /* Engine properties */
    property point velocity
    property int maximumXVelocity: 5

    /* Wings property */
    property int wingsAngle
    property int initialWingsAngle: 0
    property int maxWingsAngle: 2
    property int minWingsAngle: -2

    function destroySubmarine() {
        isHit = true
        submarineImage.broken()
    }

    function resetSubmarine() {
        isHit = false
        submarineImage.reset()

        leftBallastTank.resetBallastTanks()
        rightBallastTank.resetBallastTanks()
        centralBallastTank.resetBallastTanks()

        x = initialPosition.x
        y = initialPosition.y

        velocity = Qt.point(0,0)
        wingsAngle = initialWingsAngle
    }

	function increaseHorizontalVelocity(amt) {
        if (submarine.velocity.x + amt <= submarine.maximumXVelocity) {
            submarine.velocity.x += amt
        }
    }

    function decreaseHorizontalVelocity(amt) {
        if (submarine.velocity.x - amt >= 0) {
            submarine.velocity.x -= amt
        }
    }

    function increaseWingsAngle(amt) {
        if (wingsAngle + amt <= maxWingsAngle) {
            wingsAngle += amt
        } else {
            wingsAngle = maxWingsAngle
        }
    }

    function decreaseWingsAngle(amt) {
        if (wingsAngle - amt >= minWingsAngle) {
            wingsAngle -= amt
        } else {
            wingsAngle = minWingsAngle
        }
    }

    function changeVerticalVelocity() {
        /*
         * Movement due to planes
         * Movement is affected only when the submarine is moving forward
         * When the submarine is on the surface, the planes cannot be used
         */
        if (submarineImage.y > 0) {
            submarine.velocity.y = (submarine.velocity.x) > 0 ? wingsAngle : 0
        } else {
            submarine.velocity.y = 0
        }
        /* Movement due to Ballast tanks */
        if (wingsAngle == 0 || submarine.velocity.x == 0) {
            var yPosition = submarineImage.currentWaterLevel / submarineImage.totalWaterLevel * submarine.maximumDepthOnFullTanks

            speed.duration = submarine.terminalVelocityIndex * Math.abs(submarineImage.y - yPosition) // terminal velocity
            submarineImage.y = yPosition
        }
    }

    BallastTank {
        id: leftBallastTank

        initialWaterLevel: 0
        maxWaterLevel: 500
    }

    BallastTank {
        id: rightBallastTank

        initialWaterLevel: 0
        maxWaterLevel: 500
    }

    BallastTank {
        id: centralBallastTank

        initialWaterLevel: 0
        maxWaterLevel: 500
    }

    Image {
        id: submarineImage
        source: url + "submarine.png"

        property int currentWaterLevel: bar.level < 7 ? centralBallastTank.waterLevel : leftBallastTank.waterLevel + centralBallastTank.waterLevel + rightBallastTank.waterLevel
        property int totalWaterLevel: bar.level < 7 ? centralBallastTank.maxWaterLevel : leftBallastTank.maxWaterLevel + centralBallastTank.maxWaterLevel + rightBallastTank.maxWaterLevel

        width: background.width / 9
        height: background.height / 9

        function broken() {
            source = url + "submarine-broken.png"
        }

        function reset() {
            source = url + "submarine.png"
            speed.duration = submarine.resetVerticalSpeed
            x = submarine.initialPosition.x
            y = submarine.initialPosition.y
        }

        Behavior on y {
            NumberAnimation {
                id: speed
                duration: 500
            }
        }

        onXChanged: {
            if (submarineImage.x >= background.width) {
                Activity.finishLevel(true)
            }
        }
    }

    Body {
        id: submarineBody
        target: submarineImage
        bodyType: Body.Dynamic
        fixedRotation: true
        linearDamping: 0
        linearVelocity: submarine.isHit ? Qt.point(0,0) : submarine.velocity

        fixtures: Box {
            id: submarineFixer
            width: submarineImage.width
            height: submarineImage.height
            categories: items.submarineCategory
            collidesWith: Fixture.All
            density: 1
            friction: 0
            restitution: 0
            onBeginContact: {
                var collidedObject = other.getBody().target

                if (collidedObject == whale) {
                    whale.hit()
                }
                if (collidedObject == crown) {
                    crown.captureCrown()
                } else {
                    Activity.finishLevel(false)
                }
            }
        }
    }
    Timer {
        id: updateVerticalVelocity
        interval: 50
        running: true
        repeat: true

        onTriggered: submarine.changeVerticalVelocity()
    }
}

The Item is a parent object that holds all the different components of the submarine (the Image, the BallastTank and the Box2D components). It also contains the functions and variables that are global to the submarine.

The Engine

The engine is a very straightforward implementation via the linearVelocity component of the Box2D element. We have two variables global to the submarine for handling the engine component, defined as follows:

property point velocity
property int maximumXVelocity: 5

These are pretty much self-explanatory: velocity holds the current velocity of the submarine, both horizontal and vertical, and maximumXVelocity holds the maximum horizontal speed the submarine can achieve.

For increasing or decreasing the velocity of the submarine, we have two functions global to the submarine, as follows:

function increaseHorizontalVelocity(amt) {
    if (submarine.velocity.x + amt <= submarine.maximumXVelocity) {
        submarine.velocity.x += amt
    }
}

function decreaseHorizontalVelocity(amt) {
    if (submarine.velocity.x - amt >= 0) {
        submarine.velocity.x -= amt
    }
}

These take the amount by which the velocity.x component needs to be increased or decreased, check whether the result stays within range, and make the change accordingly.

Applying the velocity is very straightforward and takes place in the Body component of the submarine as follows:

Body {
	...
	linearVelocity: submarine.isHit ? Qt.point(0,0) : submarine.velocity
	...

The submarine.isHit property, as the name suggests, holds whether the submarine has been hit by any object (except the pickups). If so, the velocity is reset to (0,0).

Thus, for increasing or decreasing the engine power, we just have to call one of the two functions from anywhere in the code:

submarine.increaseHorizontalVelocity(1); /* For increasing H velocity */
submarine.decreaseHorizontalVelocity(1); /* For decreasing H velocity */

The Ballast Tanks

The Ballast Tanks are implemented separately in BallastTank.qml, since it will be implemented more that once. It looks like the following:

Item {
    property int initialWaterLevel
    property int waterLevel: 0
    property int maxWaterLevel
    property int waterRate: 10
    property bool waterFilling: false
    property bool waterFlushing: false

    function fillBallastTanks() {
        waterFilling = !waterFilling

        if (waterFilling) {
            fillBallastTanks.start()
        } else {
            fillBallastTanks.stop()
        }
    }

    function flushBallastTanks() {
        waterFlushing = !waterFlushing

        if (waterFlushing) {
            flushBallastTanks.start()
        } else {
            flushBallastTanks.stop()
        }
    }

    function updateWaterLevel(isInflow) {
        if (isInflow) {
            if (waterLevel < maxWaterLevel) {
                waterLevel += waterRate

            }
        } else {
            if (waterLevel > 0) {
                waterLevel -= waterRate
            }
        }

        if (waterLevel > maxWaterLevel) {
            waterLevel = maxWaterLevel
        }

        if (waterLevel < 0) {
            waterLevel = 0
        }
    }

    function resetBallastTanks() {
        waterFilling = false
        waterFlushing = false

        waterLevel = initialWaterLevel

        fillBallastTanks.stop()
        flushBallastTanks.stop()
    }

    Timer {
        id: fillBallastTanks
        interval: 500
        running: false
        repeat: true

        onTriggered: updateWaterLevel(true)
    }

    Timer {
        id: flushBallastTanks
        interval: 500
        running: false
        repeat: true

        onTriggered: updateWaterLevel(false)
    }
}

What these functions essentially do is:

  • fillBallastTanks: Fills up the ballast tank up to maxWaterLevel. It toggles the flag waterFilling; when the flag becomes true the timer fillBallastTanks is started, increasing the water level in the tank every 500 milliseconds, and when it becomes false the timer is stopped (see the worked example after this list).
  • flushBallastTanks: Flushes the ballast tank down to 0. It toggles the flag waterFlushing; when the flag becomes true the timer flushBallastTanks is started, decreasing the water level in the tank every 500 milliseconds, and when it becomes false the timer is stopped.
  • resetBallastTanks: Resets the water level in the ballast tank to its initial value.
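As a quick worked example of the timing (using the values from the snippets in this post: the default waterRate of 10, the 500 ms timer interval, and the maxWaterLevel of 500 used by the instances below): each tick changes the water level by 10 units, so completely filling or flushing a tank takes 500 / 10 = 50 ticks, i.e. roughly 25 seconds.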

In the Submarine Item, we just use three instances of the BallastTank object, for the left, right and central ballast tanks, setting up their initial and maximum water levels.

BallastTank {
    id: leftBallastTank

    initialWaterLevel: 0
    maxWaterLevel: 500
}

BallastTank {
    id: rightBallastTank

    initialWaterLevel: 0
    maxWaterLevel: 500
}

BallastTank {
    id: centralBallastTank

    initialWaterLevel: 0
    maxWaterLevel: 500
}

For filling up or flushing a ballast tank (centralBallastTank in this case), we just have to call either of the following two functions:

centralBallastTank.fillBallastTanks() /* For filling */
centralBallastTank.flushBallastTanks() /* For flushing */

I will discuss how the depth is maintained using the ballast tanks in the next section.

The Diving Planes

The diving planes are used to control the depth of the submarine once it is moving underwater, keeping in mind that they need to be integrated effectively with the ballast tanks. This is implemented in the changeVerticalVelocity() function, discussed below:

/*
 * Movement due to planes
 * Movement is affected only when the submarine is moving forward
 * When the submarine is on the surface, the planes cannot be used
 */
if (submarineImage.y > 0) {
    submarine.velocity.y = (submarine.velocity.x) > 0 ? wingsAngle : 0
} else {
    submarine.velocity.y = 0
}

However, when either of the following conditions holds:

  • the angle of the planes is reduced to 0, or
  • the horizontal velocity of the submarine is 0,

the ballast tanks take over. This is implemented as:

/* Movement due to Ballast tanks */
if (wingsAngle == 0 || submarine.velocity.x == 0) {
    var yPosition = submarineImage.currentWaterLevel / submarineImage.totalWaterLevel * submarine.maximumDepthOnFullTanks

    speed.duration = submarine.terminalVelocityIndex * Math.abs(submarineImage.y - yPosition) // terminal velocity
    submarineImage.y = yPosition
}

yPosition is computed from the percentage of the tank that is filled with water, which in turn determines the depth to which the submarine will dive. speed.duration is the duration of the transition animation; it depends directly on how much distance the submarine has to cover along the Y axis, to avoid a steep rise or fall of the submarine.
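Putting the two snippets together, changeVerticalVelocity() ends up looking roughly like this (a sketch assembled from the fragments above; the exact function body in the activity may differ slightly):

function changeVerticalVelocity() {
    /*
     * Movement due to planes
     * Movement is affected only when the submarine is moving forward
     * When the submarine is on the surface, the planes cannot be used
     */
    if (submarineImage.y > 0) {
        submarine.velocity.y = (submarine.velocity.x > 0) ? wingsAngle : 0
    } else {
        submarine.velocity.y = 0
    }

    /* Movement due to Ballast tanks */
    if (wingsAngle == 0 || submarine.velocity.x == 0) {
        var yPosition = submarineImage.currentWaterLevel / submarineImage.totalWaterLevel * submarine.maximumDepthOnFullTanks

        speed.duration = submarine.terminalVelocityIndex * Math.abs(submarineImage.y - yPosition) // terminal velocity
        submarineImage.y = yPosition
    }
}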

For increasing or decreasing the angle of the diving planes, we just need to call either of the following two functions:

submarine.increaseWingsAngle(1) /* For increasing */
submarine.decreaseWingsAngle(1) /* For decreasing */
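The post does not show these two functions, but they would presumably mirror the horizontal velocity helpers; a plausible sketch could look like this (maxWingsAngle and minWingsAngle are assumed names and bounds, not taken from the post):

property int wingsAngle: 0
property int maxWingsAngle: 2    /* assumed upper bound, not from the post */
property int minWingsAngle: -2   /* assumed lower bound, not from the post */

function increaseWingsAngle(amt) {
    if (wingsAngle + amt <= maxWingsAngle) {
        wingsAngle += amt
    }
}

function decreaseWingsAngle(amt) {
    if (wingsAngle - amt >= minWingsAngle) {
        wingsAngle -= amt
    }
}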



That’s it for now! The two major goals to be completed next are the rotation of the submarine (in case more than one tank is used and they are unequally filled) and the UI for controlling the submarine. I will provide an update once it is completed.

It has been a long time since I posted on my blog and frankly, I missed it. I’ve been busy with school: courses, tons of homework, projects and presentations.

Since I had a great experience with GCompris and KDE in general last year, I decided to apply to this year’s GSoC as well; only this time, I chose another project from KDE: Minuet.


Minuet is part of KDE-edu and its goal is to help both novice and experienced teachers and students teach and, respectively, learn and exercise their music skills. It is primarily focused on ear-training exercises; other areas will become available soon.

Minuet includes a virtual piano keyboard, displayed at the bottom of the screen, on which users can visualize the exercises. Using a piano keyboard is a good starting point for anyone who wants to learn the basics of music theory: intervals, chords, scales, etc. Minuet is currently based on the piano keyboard for all its ear-training exercises. While this is a great feature, some may find it not quite suited to their musical instrument.


My project aims to deliver to the user a framework which will support the implementation of multiple instrument views as Minuet plugins. Furthermore, apart from the piano keyboard, I will implement another instrument for playing the exercise questions and user’s answers.

This mechanism should allow new instruments to be integrated as Minuet plugins. After downloading the preferred instrument plugin, the user would then be able to switch between instruments. It will allow him to enhance his musical knowledge by training his skills using that particular instrument.

By the end of summer, I intend to have changed the current architecture into a multiple-instrument visualization framework and to have refactored the piano keyboard view as a separate plugin. I also intend to have implemented a plugin for at least one new instrument: a guitar.

A mock-up of the new guitar is shown below.

I hope it will be a great summer for me, my mentor, and the users of Minuet, to whom I want to offer a better experience through my work.


Sounds like déjà vu? You are right! We used to have Facebook Event sync in KOrganizer back in KDE 4 days thanks to Martin Klapetek. The Facebook Akonadi resource, unfortunately, did not survive through Facebook API changes and our switch to KF5/Qt5.

I’m using a Facebook event sync app on my Android phone, which is very convenient as I get to see all events I am attending, interested in or just invited to directly in my phone’s calendar, and I can schedule my other events with those in mind. Now I finally grew tired of having to check my phone or Facebook whenever I wanted to schedule an event through KOrganizer, and I spent a few evenings writing a brand new Facebook Event resource.

Inspired by the Android app, the new resource creates several calendars: for events you are attending, events you are interested in, events you have declined and invitations you have not responded to yet. You can configure whether you want to receive reminders for each of those.

Additionally, the resource fetches a list of all your friends’ birthdays (at least of those who have their birthday visible to their friends) and puts them into a Birthday calendar. You can also configure reminders for those separately.

The Facebook Sync resource will be available in the next KDE Applications feature release in August.

Hello readers

I’m glad to share that I have been selected twice for a Google Summer of Code project under KDE. It’s my second consecutive year working with the digiKam team.

digiKam is an advanced digital photo management application which enables users to view, manage, edit, organise, tag and share photographs on Linux systems. digiKam has a feature to search items by similarity. This requires computing image fingerprints, which are stored in the main database. These data can take up disk space, especially with huge collections, bloat the main database a lot, and increase the complexity of backing up the main database, which includes all the main information for each registered item, such as tags, labels, comments, etc.

The goal of this proposal is to store the similarity fingerprints in a dedicated database. This would be a big relief for end users, as image fingerprints amount to a few KB of raw data for each image; storing all of them in the main database takes up a lot of disk space and increases time latency for huge collections.

Thus, to overcome all the above issues, a new DB interface will be created (this work has already been done for thumbnails and face fingerprints). Also, from a backup point of view, it’s easier to have separate files to optimise.

I’ll keep you guys updated with my work in upcoming posts.

Till then, I encourage you to use the software. It’s easy to install and use. (You can find a cheat sheet to build digiKam in my previous post!)

Happy digiKaming!


Following the fifth release, 5.5.0, published in March 2017, the digiKam team is proud to announce the new release 5.6.0 of the digiKam Software Collection. With this version, the HTML gallery and video slideshow tools are back, database shrinking (e.g. purging stale thumbnails) is also supported on MySQL, the item grouping feature has been improved, support for custom sidecar MIME types has been added, the geolocation bookmarks have been fixed to be fully functional with the bundles, and of course lots of bugs have been fixed.

HTML Gallery Tool

The HTML gallery tool is accessible through the Tools menu in the main bar of both digiKam and showFoto. It allows you to create a web gallery from a selection of photos or a set of albums, which you can open in any web browser. There are many themes to select from, and you can create your own as well. JavaScript support is also available.

Video Slideshow Tool

The video slideshow tool is also accessible through the Tools menu in the main bar of both digiKam and showFoto. It allows you to create a video slideshow from a selection of photos or albums. The generated video file can be viewed in any media player or device, such as phones, tablets, Blu-ray players, etc. There are many settings to customize the format, the codec, the resolution, and the transition (such as the famous Ken Burns effect).

Database Integrity Tool

Already in the 5.5.0 release, the tool dedicated to testing for database integrity and obsolete information had been improved. Besides obvious data safety improvements, this can free up quite a lot of space in the digiKam databases. For technical reasons, only SQLite databases could be shrunk to this smaller size in the 5.5.0 release. With 5.6.0, this is now also possible for MySQL databases.

Items Grouping Features

Earlier changes to the grouping behaviour proved that digiKam users have quite diverse workflows, so with the current change we try to accommodate that diversity.

Originally grouped items were basically hidden away. Due to requests to include grouped items in certain operations, this was changed entirely to include grouped items in (almost) all operations. Needless to say, this wasn’t such a good idea either. So now you can choose which operations should be performed on all images in a group or just the first one.

The corresponding settings live in the configuration wizard under Miscellaneous in the Grouping tab. By default all operations are set to Ask, which will open a dialog whenever you perform this operation and grouped items are involved.

Extra Sidecars Support

Another new capability is recognising additional sidecars. Under the new Sidecars tab in the Metadata part of the configuration wizard, you can specify any additional extension that you want digiKam to recognise as a sidecar. These files will neither be read from nor written to, but they will be moved/renamed/deleted/… together with the item that they belong to.

Geolocation Bookmarks

Another important change done for this new version is to restore the geolocation bookmarks feature which did not work with bundle versions of digiKam (AppImage, MacOS, and Windows). The new bookmarker has been fully re-written and is still compatible with previous geolocation bookmarks settings. It is now able to display the bookmark GPS information over a map for a better usability while editing your collection.

Google Summer of Code 2017 Students

This summer the team is proud to assist 4 students to work on separate projects:

Swati Lodha is back on the team. As in 2016, she will work on improving the database interface. After having fixed and improved the MySQL support in digiKam, her task this year is to isolate all the database contents dedicated to managing the similarity fingerprints. As with thumbnails and face recognition, these elements will be stored in a new dedicated database. The goal is to reduce the core database size, simplify maintenance and decrease core database latencies.

Yingjie Liu is a Chinese student, mainly specialized in math and algorithms, who will add a new, efficient face recognition algorithm and will try to introduce an AI solution to simplify the face tagging workflow.

Ahmed Fathi is an Egyptian student who will work on restoring and improving DLNA support in digiKam, to be able to stream collection contents over the network to compatible UPnP devices such as smart TVs, tablets or phones.

Shaza Ismail is another Egyptian student who will work on an ambitious project: an image editor tool for healing image stains by painting one part of the image over another, mainly tested on dust spots, but usable for hiding other artifacts as well.

Final Words

The next main digiKam version 6.0.0 is planned for the end of this year, when all Google Summer of Code projects will be ready to be backported for a beta release. In September, we will release a maintenance version 5.7.0 with a set of bugfixes as usual.

For further information about 5.6.0, take a look at the list of more than 81 issues closed in Bugzilla.

digiKam 5.6.0 Software collection source code tarball, Linux 32/64 bits AppImage bundles, MacOS package, and Windows 32/64 bits installers can be downloaded from this repository.

Happy digiKaming this summer!

June 20, 2017

As my first subject for this animation blog series, we will be taking a look at Animation curves.

Curves, or better, easing curves, are one of the first concepts we are exposed to when dealing with the subject of animation in the QML space.

What are they?

Well, in simplistic terms, they are a description of an X position over a time axis that starts at (0, 0) and ends at (1, 1). These curves are …

The post A tale of 2 curves appeared first on KDAB.
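To make the concept the post introduces a bit more concrete, here is a minimal QML sketch (my own illustration, not taken from the KDAB article) in which an easing curve shapes how a property moves from its start value to its end value over time:

import QtQuick 2.7

Rectangle {
    width: 400; height: 100

    Rectangle {
        id: box
        width: 50; height: 50
        color: "steelblue"

        // The easing curve maps normalized time (0..1) to normalized progress (0..1);
        // InOutQuad starts slowly, speeds up in the middle, then slows down again.
        NumberAnimation on x {
            from: 0
            to: 350
            duration: 1000
            easing.type: Easing.InOutQuad
            loops: Animation.Infinite
        }
    }
}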


We are very happy to announce the first AppImage of the next generation Kdenlive. We have been working since the first days of 2017 to clean up and improve the architecture of Kdenlive’s code to make it more robust and maintainable. This also marked a move to QML for the display of the timeline.

This first AppImage is only provided for testing purposes. It crashes a lot because many features have yet to be ported to the new code, but you can already get a glimpse of the new timeline, move clips and compositions, group items and add some effects. This first Appimage can be downloaded from the KDE servers. Just download the Appimage, make the file executable and run it. This version is not appropriate for production use and due to file format changes, will not properly open previous Kdenlive project files. We are hoping to provide reliable nightly build AppImages so that our users can follow the development and provide feedback before the final release.

Today is also our 18th Kdenlive Café, so you can meet us tonight, the 20th of June, at 9pm (CEST) in the #kdenlive channel to discuss the evolution of and issues around Kdenlive.

I will also be presenting the progress of this Kdenlive version this summer (22nd of July) at Akademy in Almería, Spain, so feel free to come and visit the KDE Community at this great event.

Set up the arcanist for Koko

  • It was pretty easy to install. On my Arch Linux system, the command below did the work for me.

    yaourt -S arcanist-git

  • Then I had to add a .arcconfig to the Koko repository so that arc points to the URL where it should publish the changes:

    { "phabricator.uri": "https://phabricator.kde.org/" }

  • The only problem is with the SSL certificates, as the university campus wireless network uses its own self-signed certificate. This creates problems accessing SSL-encrypted web content, which is pretty much everything related to development :P
  • Also, the university campus network does not allow SSH over its network. This prevents me from pushing changes to the git repository.
  • Hence, to use arcanist, I will have to check curl.cainfo in /etc/php/php.ini every time and set/unset the GIT_SSL_CAINFO environment variable depending on the network I am using.

June 19, 2017

KRuler, in case you don't know it, is a simple software ruler to measure lengths on your desktop. It is one of the oldest KDE tools, its first commit dating from November 4th, 2000. Yes, it's almost old enough to vote.

I am a long time KRuler user. It gets the job done, but I have often found myself saying "one day I'll fix this or that". And never doing it.

HiDPI screens really hurt the poor app, so I finally decided to do something about it and spend some time on it during my daily commute.

This is what it looked like on my screen when I started working on it:

KRuler Before

As any developer would, I expected it would not be more than a week of work... Of course it took way longer than that, because there was always something odd here and there preventing me from pushing a patch.

I started by making KRuler draw scale numbers less often to avoid ugly overlapping texts. I then made it draw ticks on both sides, to go from 4 orientations (North, South, West, East) to 2: vertical or horizontal.

The optional rotation and buttons were getting in the way though: the symmetric ticks required the scale numbers to be vertically centered, so the buttons were overlapping them. I decided to remove them (they were already off by default). With only two orientations it is less useful to have rotation buttons anyway: it is simple enough to use the context menu, middle-click the ruler, or the R shortcut to change the orientation. Closing is accessible through the context menu as well.

One of the reasons (I think) for the 4 orientations was the color picker feature. It makes little sense to me to have a color picker in a ruler: it is more natural to use KColorChooser to pick colors. I removed the color picker, allowing me to remove the oddly shaped mouse cursor and refresh the appearance of the length indicator to something a bit nicer.

I then made it easier to adjust the length of the ruler by dragging its edges instead of having to pick the appropriate length from a sub-menu of the context menu. This made it possible to remove this sub-menu.

This is what KRuler looks like now:

KRuler after

That is only part 1 though. I originally had 2 smaller patches to add, but Jonathan Riddell, who kindly reviewed the monster patch, requested another small fix, so that makes 3 patches to go. I need to set up and figure out how to use Arcanist to submit them to Phabricator, as I have been told Review Board is old school these days :)

Or: Tying loose ends where some are slightly too short yet.

When

  • you favour offline documentation (not only due to the nice integration with IDEs like KDevelop),
  • you develop code using KDE Frameworks or other Qt-based libraries,
  • you know all the KF5 libraries have seen many people taking care of API documentation in the code over the years,
  • you have read about doxygen’s capability to create API dox in QCH format,
  • and you want your Linux distribution package management to automatically deliver the latest version of the documentation (resp. QCH files) together with the KDE Frameworks libraries and headers (and ideally the same for other Qt-based libraries),

the idea is easily derived to just extend the libraries’ build system to also spit out QCH files during the package builds.

It’s all prepared, can ship next week, latest!!1

Which would just be a simple additional target and command, invoking doxygen with a proper configuration file. Right? So simple, you wonder why no one had done it yet.

Some initial challenge seemed quickly handled, which was even more encouraging:
for proper documentation one also wants cross-linking to the documentation of things used in the API which come from other libraries, e.g. base classes and types. This requires passing to doxygen the list of those other documentations together with a set of parameters, to generate proper qthelp:// URLs or to copy over documentation for things like inherited methods.
Such a listing gets very long, especially for KDE Frameworks libraries in tier 3. And with indirect dependencies pulled into the API, the list might become incomplete on changes. The same goes for any other changes to the parameters of those other documentations.
So it is basically a similar situation to linking code libraries, which suggests giving it a similar handling: placing the needed information with the CMake config files of the targeted library, so whoever cross-links to the QCH file of that library can fetch the up-to-date information from there.

Things seemed to work okay on first tests, so last September a pull request was made to add some respective macro module to Extra-CMake-Modules to get things going and a blog post “Adding API dox generation to the build by CMake macros” was written.

This… works. You just need to prepare this. And ignore that.

Just, looking closer, lots of glitches popped up on the scene. Worse, even show stoppers made their introduction, at both ends of the process pipe:
At the generation side, doxygen turned out to have bitrotted for QCH creation, possibly due to lack of use? Time to sacrifice to the Powers of FLOSS, git clone the sources and poke around to see what is broken and how to fix it. Some time and an accepted pull request later, the biggest issue (some content missed to be added to the QCH file) was initially handled; it just also needed to get out as a released version (which it now has been for some months).
At the consumption side, Qt Assistant and Qt Creator turned out to no longer be able to properly show QCH files with JavaScript and other HTML5 content, due to QtWebKit having been deprecated/dropped and both apps in many distributions now only using QTextBrowser for rendering the documentation pages. And not everyone is using KDevelop and its documentation browser, which uses QtWebKit or, in the master branch, favours QtWebEngine if present.
Which means an investment into QCH files from doxygen would only be interesting to a small audience. Myself, currently without the resources and interest to mess around with the Qt help engine sources, I look with hope at the resurrection of QtWebKit as well as the patch for a QtWebEngine-based help engine (if you are Qt-involved, please help and push that patch some more!)

Finally kicking off the production cycle

Not properly working tools, nothing trying to use the tools on a bigger scale… a classical self-blocking state. So it was time to break this up and get some momentum into it, by tying first things together where possible and enabling the generation of QCH files during the builds of the KDE Frameworks libraries.

And thus in the current master branches (which will become v5.36 in July), the new module ECMAddQch has been added to Extra-CMake-Modules, and all of the KDE Frameworks libraries with public C++ API have gained the option to generate QCH files with the API documentation, by passing -DBUILD_QCH=ON to cmake. If you also pass -DKDE_INSTALL_USE_QT_SYS_PATHS=ON (or install to the same prefix as Qt), the generated QCH files will be installed to places where Qt Assistant and Qt Creator even automatically pick them up and include them as expected:

Qt Assistant with lots of KF5 API dox

KDevelop picks them up as well, but needs some manual reconfiguration to do so.

(And of course ECMAddQch is designed to be useful for non-KF5 libraries as well, give it a try once you got hold of it!)

You and getting rid of the remaining obstacles

So while for some setups the generated QCH files of the KDE Frameworks are already useful (I have been using them for a few weeks for e.g. KDevelop development, in KDevelop), for many setups they still have to become so. That will take some more time and ideally contributions from others as well, including the Doxygen and Qt help engine maintainers.

Here a list of related reported Doxygen bugs:

  • 773693 – Generated QCH files are missing dynsections.js & jquery.js, result in broken display (fixed for v1.8.13 by patch)
  • 773715 – Enabling QCH files without any JavaScript, for viewers without such support
  • 783759 – PERL_PATH config option: when is this needed? Still used?
  • 783762 – QCH files: “Namespaces” or “Files” in the navigation tree get “The page could not be found” (proposed patch)
  • 783768 – QCH files: classes & their constructors get conflicting keyword handling (proposed patch)
  • YETTOFILE – doxygen tag files contain origin paths for “file”, leaking info and perhaps is an issue with reproducible builds

And a related reported Qt issue:

There is also one related reported CMake issue:

  • 16990 – Wanted: Import support for custom targets (extra bonus: also export support)

And again, it would also be good to see the patch for a QtWebEngine-based help engine getting more feedback and pushing from qualified people, and to have distributions making efforts to provide Qt Assistant and Qt Creator with *Web*-based documentation engines (see e.g. the bug filed with openSUSE).

May the future be bright^Wdocumented

I am happy to see that Gentoo & FreeBSD packagers have already started to look into extending their KDE Frameworks packaging with generated API dox QCH files for the upcoming 5.36.0 release in July, with other packagers planning to do so soon as well.

So perhaps one not too distant day it will be just normal business to have QCH files with API documentation provided by your distribution, not just for the Qt libraries themselves, but also for every library based on them. After all, documentation has been one of the things making Qt so attractive. As a developer of Qt-based software, I very much look forward to that day.

Next stop then: QML API documentation :/



I'm glad to announce that a first stable version of Brooklyn is released!
What's new? Well:

  • Telegram and IRC APIs are fully supported;
  • it manages attachments (even Telegram's video notes), also on text-only protocols through a web server;
  • it has an anti-flood feature on IRC (e.g. it doesn't notify other channels if a user logs out without writing any message). For this I have to say "thank you" to Cristian Baldi, a W2L developer who had this fabulous idea;
  • it provides support for edited messages;
  • SASL login mechanism is implemented;
  • map locations are supported through OpenStreetMap
  • you can see a list of other channels' members by typing "botName users" on IRC or using the "/users" command on Telegram;
  • if someone writes a private message to the bot instead of in a public channel, it sends him the license message "This software is released under the GNU AGPL license. https://phabricator.kde.org/source/brooklyn/";
As you may have already noticed, after talking with my mentor I decided to modify the GSoC timeline. We decided to wait until the Rocket.Chat REST APIs are more stable and, in the meantime, to provide a fully working IRC/Telegram bridge.
This helped me provide a more stable and useful piece of software for the first evaluation.
We are also considering writing a custom wrapper for the REST APIs because current solutions don't fit our needs.

The last post reached over 600 people and that's awesome!
As always I will appreciate every single suggestion.
Have you tried the application? Do you have any plans to do so? Tell me everything in the comments section down below!

