January 23, 2019

A new event is on the horizon. This time it is the "Conferencia Software Libre Libertad, Privacidad y Ética" (Free Software: Freedom, Privacy and Ethics conference), to be held next Friday, January 25, from 18:00 to 20:00 in the Sala Multiusos of San Antonio de Benagéber (Valencia).

Conferencia Software Libre Libertad, Privacidad y Ética

I am spreading the word about a new event taking place next Friday, January 25, thanks to ValenciaTech, who have decided to offer an outreach talk on Free Software with the support of Open Xarxes Cooperativa and GNU/Linux Valencia.

It is therefore a great opportunity to learn more about the software you are using on your PC, and why you should replace it with Free Software. One of the best choices you can make in your life.

Conferencia Software Libre Libertad, Privacidad y Ética

If you attend the talk you will learn to distinguish between free software, open source and proprietary software. You will learn about the dangers and injustice of continuing to use proprietary software, and you will understand why more and more people work every day on improving the free alternative.

You will also get a deeper look at how proprietary operating systems such as Microsoft Windows or Mac OS work internally.

The talk will not overlook the lack of privacy and the abuse of users systematically committed by companies such as Facebook, Amazon or Apple, and alternatives will be offered to all this software that uses its users.

The programme is as follows:

Arrival: doors open at 17:45
Conference: 18:00 to ~19:30
Q&A: 19:30 to ~20:00
Exhibition: laptops running GNU/Linux
Refreshments: water, crisps and soft drinks.

I don't want to end this post without thanking the collaborators for their involvement in the talk:

In short, a great opportunity to get started with Free Software in a world in which we increasingly depend on electronic devices for our work and leisure.

More information: ValenciaTech

January 22, 2019

gcompris

Hi,
Good news for Mac users: we finally have a new version of GCompris for OSX!

The last version of GCompris for OSX was 0.52, from 3 years ago. Since then, no one on the team could update it because of the lack of hardware. Thanks to Boudewijn from the Krita Foundation, we now have a little Mac mini, old but good enough to build our packages. It took me several days of dedicated work to learn this new platform and update the build system to produce a distributable package.

This package was built and tested on OSX 10.13. If you can try it on a different OSX version, please let us know if it works.

The .dmg installer is available on the download page.

This time we decided to stop distributing it through the App Store. As for the iOS version, it will take some more time before I can look at it, and we still don't have any device to test on.

Notes: this package is based on GCompris version 0.95. It is built with the latest version of Qt (5.12.0) which introduced some regressions. We fixed all of those issues, but some of those fixes are only in the development version for now. A few activities are broken (checkers, braille, braille_alphabet, algorithm), but we will make a new release next month that will also address those little issues.

Thank you all,
Timothée – GCompris team

The fifth podcast of the fifth season of the KDE España podcasts will be devoted to improving the experience with the KDE Plasma desktop. As always, it will be broadcast live, but we will leave it recorded for posterity: next January 24 at 22:00 CEST.

Improving the experience with the KDE Plasma desktop, the next KDE España podcast

Improving the experience with the KDE Plasma desktop

Next Thursday, January 24, 2019, at 22:00, KDE España will hold its usual weekly podcast, the fifth of this season.

The topic of this episode is improving the user experience of the KDE Plasma desktop: we will talk about how, through the extensions known as plasmoids, the use of Activities, or KWin effects, users can improve their interaction with the desktop.

As always, to let you enjoy the podcast live we will keep using YouTube's live event service, and we will answer your questions live if we can.

For lovers of metapodcasting: this will be the first podcast of 2019, and with it we reach the fifth of the fifth season. Not bad, right?

 

We hope to see you on Thursday, January 24 at 22:00!

The KDE España podcasts

Help us decide the topic. In an effort to get closer to all KDE supporters, some time ago we started making podcasts, in which several members of the KDE community in Spain get together to talk a bit about the various projects.

We have covered many topics, such as Akademy, KDE Connect, Plasma Mobile, Kirigami, KDE and the business world, and a long list of others. Of course, I encourage you to help us by proposing topics in the comments of this post, in the Cañas y Bravas Telegram group, or in the special section of the bug website we created for the occasion.

You can follow us on the KDE España YouTube channel or on Ivoox, where we are gradually uploading the recorded audio. We hope you like them.

Nathan Lovato, the author of the Brushes for Illustrators and Concept Artists Krita brush preset pack and many Krita tutorials, is running a Kickstarter to create a new course on creating games with free software. He's also sponsoring a developer to work on Krita.

Here’s what Nathan says:

“I’ve been working with Photoshop for years, back when I worked as a game artist and designer. I then used Krita for painting and Affinity Designer for graphic design, side-by-side. Work is always busy, so I have to produce pictures quickly. For a long time, as it’s not its focus, Krita wasn’t the most productive option for graphic designers.”

“It kept getting better over the past years and, since January, it’s my main art program. Game assets, banners, YouTube thumbnails, or some photo editing. I do the bulk of my work with it, and it’s going well!”

“With its rich non-destructive feature set and its great color and layer management tools, it’s the most versatile Free Software to do graphic design for me. If you need to draw vector shapes, it plays very well with Inkscape, which I still use to create text and copy to Krita at the moment.”

“All the thumbnails you can see on our channel lately were made in Krita. I use File Layers to include reusable graphics, layer styles to add shadows, to give some 3d shape to text layers, and filter layers so I can tweak the value and color contrast of my compositions.”

“Right now, layer styles can slow down the program quite a bit, so you want to add them at the end, or to flatten the layers as you work.”

“They’re powerful! With the ability to reuse styles from other layers in one click, or to save libraries of styles to reuse across documents, they can save you a considerable amount of time.”

“For graphic design work, or game assets in particular, it is already productive if you set up a good workflow. I will be honest: I do miss a few features for faster and more precise work. Mainly:”

“1. Better text editing tools”
“2. Snapping support for the pixel layers’ bounding boxes”
“3. Faster layer styles”
“4. The ability to arrange and distribute pixel layers”

“The developers are aware of all of these though, and Scott Petrovic has been working on automated text updates, among many other user experience improvements.”

“I should add some flexible batch layer export to the list, but we’re on this! Razvan has been working on a Free plugin to batch export layers in a highly configurable way, based on their name. Creating this add-on led Razvan to make some small contributions to Krita itself. Also, we’re looking to do more in the future.”

“The plugin is available in alpha: https://github.com/GDquest/GDquest-art-tools”

KDE Project:

Yesterday KEXI 3.2 Beta shipped, the result of improvements made throughout 2018. Full info is in the wiki.

It's the best KEXI to date! Pun intended, because among other things one is especially worth mentioning: an entirely new and final date/time grammar for the user's SQL.

Once we had polished type-safety in SQL handling in 3.1 (the KEXISQL dialect "to bind them all"), date/time values written as strings such as "2019-01-22" immediately stopped working as constants. So one of the solutions was to reuse/invent a new grammar. Here's the result of reusing as many good practices as possible: link. It was also a fun opportunity to use GNU Flex conditions.

What's next in this area? Time zones are a possible extension. But first: date/time-related SQL functions, as documented here.

Advertisement: programmers who would like to move away from 100% raw SQL (QtSQL) in their recipe, CD collection or banking apps (wink, wink) will find plenty of database goodies in the KDb framework, which is used, for example, in KEXI. There's free support too! For the list of KDb changes, look here. KDb is _the_ place where the above features are provided, not KEXI.


WHERE condition with date constants.

There was a snag in the KookBook 0.2.0 release, so 0.2.1 is now available.

Get it here: kookbook-0.2.1.tar.xz

By the way, can anyone tell me the purpose of

git archive --prefix=foo

compared to

git archive --prefix=foo/

When would anyone use the former?

There’s a nice post by Ryou about the projections feature of Eric Niebler’s ranges that we’re going to get in C++20.

I’ll try to provide a more general look at what projections are – not only in the context of ranges. I recommend reading Ryou’s post before or after this one for completeness.

Tables have turned

Imagine the following – you have a collection of records where each record has several fields. One example of this can be a list of files where each file has a name, size and maybe something else.

Collections like these are often presented to the user as a table which can be sorted on an arbitrary column.

Tables are everywhere

By default, std::sort uses operator< to compare items in the collection while sorting, which is not ideal for our current problem – we want to be able to sort on an arbitrary column.

The usual solution is to use a custom comparator function that compares just the field we want to sort on (I’m going to omit namespaces for the sake of readability, and I’m not going to write pairs of iterators – just the collection name as is customary with ranges):

sort(files, [] (const auto& left, const auto& right) {
                return left.name < right.name;
            });

There is a lot of boilerplate in this snippet for a really simple operation of sorting on a specific field.

Projections allow us to specify this in a much simpler way:

sort(files, less, &file_t::name);

What this does is quite simple. It uses less-than for comparison (less), just as we have with normal std::sort but it does not call the operator< of the file_t objects in order to compare the files while sorting. Instead, it only compares the strings stored in the name member variable of those files.

That is the point of a projection – instead of an algorithm testing the value itself, it tests some representation of that value – a projection. This representation can be anything – from a member variable, to some more complex transformation.

One step back

Let’s take one step back and investigate what a projection is.

Instead of sorting, we’re going to take a look at transform. The transform algorithm with an added projection would look like this:

transform(files, destination,
          [] (auto size) { return size / 1024; }, &file_t::size);

We’re transforming a collection of files with a &file_t::size projection. This means that the transform algorithm will invoke the provided lambda with the file sizes instead of file_t objects themselves.

The transformation on each file_t value in the files collection is performed in two steps:

  • First, for a file f, extract f.size
  • Second, divide the result of the previous step by 1024

This is nothing more than a function composition. One function gets a file and returns its size, and the second function gets a number and divides it by 1024.

If C++ had a mechanism for function composition, we would be able to achieve the same effect like so:

transform(files, destination,
    compose_fn(
        [] (auto size) { return size / 1024; },
        &file_t::size
    ));

Note: compose_fn can be easily implemented, we’ll leave it for some other time.
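To make the note concrete, here is one possible minimal sketch of such a compose_fn (the name and exact shape are our assumption; the post deliberately leaves the implementation out). It returns a callable that applies g first and feeds the result to f; going through std::invoke lets g be a plain callable or a member pointer like &file_t::size:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <string>
#include <utility>

// Hypothetical compose_fn: compose_fn(f, g)(x...) behaves like f(g(x...)).
template <typename F, typename G>
auto compose_fn(F f, G g)
{
    return [f = std::move(f), g = std::move(g)](auto&&... args) -> decltype(auto) {
        return std::invoke(f,
                   std::invoke(g, std::forward<decltype(args)>(args)...));
    };
}

// A minimal file_t, assumed for illustration.
struct file_t {
    std::string name;
    std::size_t size;
};
```

With this, compose_fn([](auto size) { return size / 1024; }, &file_t::size) behaves exactly like the projected transform above: extract the size, then divide it.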

Moving on from transform to all_of. We might want to check that all files are non-empty:

all_of(files, [] (auto size) { return size > 0; }, &file_t::size);

Just like in the previous example, the all_of algorithm is not going to work directly on files, but only on their sizes. For each file, the size will be read and the provided predicate called on it.

And just like in the previous example, this can be achieved by simple function composition without any projections:

all_of(files,
    compose_fn(
        [] (auto size) { return size > 0; },
        &file_t::size
    ));

Almost composition

The question that arises is: can all projections be replaced by ordinary function composition?

From the previous two examples, it looks like the answer to this question is “yes” – the function composition allows the algorithm to “look” at an arbitrary representation of a value just like the projections do. It doesn’t matter whether this representation function is passed in as a projection and then the algorithm passes on the projected value to the lambda, or if we compose that lambda with the representation function.

Unary function composition

Things stop being this simple when an algorithm requires a function with more than one argument, as sort or accumulate do.

With normal function composition, the first function we apply can have as many arguments as we want, but since it returns only a single value, the second function needs to be unary.

For example, we might want to have a function size_in which returns a file size in some unit like kB, KiB, MB, MiB, etc. This function would have two arguments – the file we want to get the size of and the unit. We could compose this function with the previously defined lambda which checks whether the file size is greater than zero.

N-ary function composition

sort needs this to be the other way round. It needs a function of two arguments where both arguments have to be projected. The representation (projection) function needs to be unary, and the resulting function needs to be binary.

Composition needed for sorting

Universal projections

So, we need to compose the functions a bit differently. The representation function should be applied to all arguments of an n-ary function before it is called.

As usual, we’re going to create a function object that stores both the projection and the main function.

template <typename Function, typename Projection>
class projecting_fn {
public:
    projecting_fn(Function function, Projection projection)
        : m_function{std::move(function)}
        , m_projection{std::move(projection)}
    {
    }

    // :::

private:
    Function m_function;
    Projection m_projection;
};

The main part is the call operator. It needs to project all the passed arguments and then call the main function with the results:

template <typename... Args>
decltype(auto) operator() (Args&&... args) const
{
    return std::invoke(
               m_function,
               std::invoke(m_projection, std::forward<Args>(args))...);
}

This is quite trivial – it calls m_projection for each of the arguments (variadic template parameter pack expansion) and then calls m_function with the results. And it is not only trivial to implement, but it is also trivial for the compiler to optimize.

Now, we can use projections even with old-school algorithms from STL and in all other places where we can pass in arbitrary function objects.

To continue with the file sorting example, the following code sorts a vector of files, and then prints the file names to the standard output all uppercase, like we're back in 90s DOS:

    std::sort(files.begin(), files.end(),
              projecting_fn { std::less{}, &file_t::name });

    std::transform(
              files.cbegin(), files.cend(),
              std::ostream_iterator<std::string>(std::cout, " "),
              projecting_fn {
                  string_to_upper,
                  &file_t::name
              });
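For readers who want to compile along, here is the projecting_fn class from above gathered into one self-contained sketch, with the forwarding spelled out as std::forward and a minimal file_t assumed for illustration:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// projecting_fn, as developed above: projects every argument
// with m_projection, then calls m_function on the results.
template <typename Function, typename Projection>
class projecting_fn {
public:
    projecting_fn(Function function, Projection projection)
        : m_function{std::move(function)}
        , m_projection{std::move(projection)}
    {
    }

    template <typename... Args>
    decltype(auto) operator() (Args&&... args) const
    {
        return std::invoke(
                   m_function,
                   std::invoke(m_projection, std::forward<Args>(args))...);
    }

private:
    Function m_function;
    Projection m_projection;
};

// A minimal file_t, assumed for illustration.
struct file_t {
    std::string name;
    std::size_t size;
};
```

Class template argument deduction (C++17) makes projecting_fn{std::less{}, &file_t::name} work without spelling out the template arguments.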

Are we there yet?

So, we have created the projecting_fn function object that we can use in situations where all function arguments need to be projected.

This works for most STL algorithms, but it fails for the coolest (and most powerful) algorithm – std::accumulate. The std::accumulate algorithm expects a function of two arguments, just like std::sort does, but only the second argument to that function comes from the input collection. The first argument is the previously calculated accumulator.

Composition for accumulation

This means that, for accumulate, we must not project all arguments – but only the last one. While this seems easier to do than projecting all arguments, the implementation is a tad more involved.

Let’s first write a helper function that checks whether we are processing the last argument or not, and if we are, it applies m_projection to it:

template <
    size_t Total,
    size_t Current,
    typename Type
    >
decltype(auto) project_impl(Type&& arg) const
{
    if constexpr (Total == Current + 1) {
        return std::invoke(m_projection, std::forward<Type>(arg));
    } else {
        return std::forward<Type>(arg);
    }
}

Note two important template parameters Total – the total number of arguments; and Current – the index of the current argument. We perform the projection only on the last argument (when Total == Current + 1).

Now we can abuse std::tuple and std::index_sequence to provide us with argument indices so that we can call the project_impl function.

template <
    typename Tuple,
    std::size_t... Idx
    >
decltype(auto) call_operator_impl(Tuple&& args,
                                  std::index_sequence<Idx...>) const
{
    return std::invoke(
            m_function,
            project_impl
                <sizeof...(Idx), Idx>
                (std::get<Idx>(std::forward<Tuple>(args)))...);
}

The call_operator_impl function gets all arguments as a tuple and an index sequence to be used to access the items in that tuple. It calls the previously defined project_impl and passes it the total number of arguments (sizeof...(Idx)), the index of the current argument (Idx) and the value of the current argument.

The call operator just needs to call this function and nothing more:

template <typename... Args>
decltype(auto) operator() (Args&&... args) const
{
    return call_operator_impl(std::make_tuple(std::forward<Args>(args)...),
                              std::index_sequence_for<Args...>());
}

With this we are ready to use projections with the std::accumulate algorithm:

std::accumulate(files.cbegin(), files.cend(), 0,
    composed_fn {
        std::plus{},
        &file_t::size
    });
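For completeness, the pieces above can be assembled into one class. This is a sketch under the same assumptions, reusing the composed_fn name from the snippet above and a minimal file_t for illustration:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <numeric>
#include <string>
#include <tuple>
#include <utility>
#include <vector>

// composed_fn: projects only the LAST argument before calling the
// main function -- the shape std::accumulate needs.
template <typename Function, typename Projection>
class composed_fn {
public:
    composed_fn(Function function, Projection projection)
        : m_function{std::move(function)}
        , m_projection{std::move(projection)}
    {
    }

    template <typename... Args>
    decltype(auto) operator() (Args&&... args) const
    {
        return call_operator_impl(
                   std::make_tuple(std::forward<Args>(args)...),
                   std::index_sequence_for<Args...>());
    }

private:
    // Apply the projection only when this is the last argument.
    template <std::size_t Total, std::size_t Current, typename Type>
    decltype(auto) project_impl(Type&& arg) const
    {
        if constexpr (Total == Current + 1) {
            return std::invoke(m_projection, std::forward<Type>(arg));
        } else {
            return std::forward<Type>(arg);
        }
    }

    template <typename Tuple, std::size_t... Idx>
    decltype(auto) call_operator_impl(Tuple&& args,
                                      std::index_sequence<Idx...>) const
    {
        return std::invoke(
                   m_function,
                   project_impl<sizeof...(Idx), Idx>(
                       std::get<Idx>(std::forward<Tuple>(args)))...);
    }

    Function m_function;
    Projection m_projection;
};

// A minimal file_t, assumed for illustration.
struct file_t {
    std::string name;
    std::size_t size;
};
```

With this in place, the accumulate call above sums the sizes: the accumulator is passed through untouched, and each file_t is projected to its size member.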

Epilogue

We will have projections in C++20 for ranges, but projections can be useful outside of the ranges library, and outside of the standard library. For those situations, we can roll our own efficient implementations.

These even have some advantages compared to the built-in projections of the ranges library. The main benefit is that they are reusable.

Also (at least for me) the increased verbosity they bring actually serves a purpose – it better communicates what the code does.

Just compare

sort(files, less, &file_t::name);
accumulate(files, 0, plus, &file_t::size);

and

sort(files, projecting_fn { less, &file_t::name })
accumulate(files, 0, composed_fn { plus, &file_t::size });

We could also create some cooler syntax like the pipe syntax for function transformation and have syntax like this:

sort(files, ascending | on_member(&file_t::name))

… but that is out of the scope of this post :)


You can support my work, or you can get my book Functional Programming in C++, if you're into that sort of thing.

January 21, 2019

Professor Igor Steinmacher, from Northern Arizona University, is a prominent researcher on several social dynamics in open source communities, such as support for newcomers, gender bias, open-sourcing proprietary software, and more. Some of his papers can be found on his website.

Currently, Prof. Igor is inviting mentors from open source communities to answer a survey about task assignment in projects. See below for a description of the survey, and take some time to answer the questions – the knowledge gained here can be very interesting for all of us.

Hello,

My name is Igor Steinmacher, and I am a professor at Northern Arizona University.

Along with some other researchers, we are currently studying the strategies that mentors use to assign tasks to newcomers in Free/Open Source projects. Your experience is very important to us, given the limited number of people who mentor or guide newcomers in FOSS projects.

You are, therefore, a perfect person to provide feedback for our research.

We would really appreciate it if you could spare about 5 minutes of your time to answer a brief survey about your experiences.
The survey is here: https://goo.gl/forms/qCzgoG3Uc4O0w9da2

I would like to emphasize that, if shared, your insights will play a prominent role in creating a better understanding of the mentors' strategies for assigning tasks to newcomers, serving as input for heuristics and helping other mentors. Thank you very much in advance for your time, and please contact me if you have any questions.

Regards,

Igor Steinmacher

The Qt VS Tools version 2.3.1 has now been released to the Visual Studio Marketplace.
Important changes include:

For details of what is included in this version, please refer to the change log.

The installation package can be installed directly from within Visual Studio (through the ‘Tools > Extensions and Updates…’ menu). Alternatively, it can be downloaded from this page.

QML Debugging

Contrary to what was mentioned in a previous post, the new QML debugging feature will not be enabled by default. It must be explicitly enabled by opening the Qt project settings and setting the “QML Debug” option to “Enabled”.

The post Qt Visual Studio Tools 2.3.1 Released appeared first on Qt Blog.

Could you tell us something about yourself?

My name is Edgar Tadeo, or Ed for short. I'm a freelance comic book artist for Marvel, and an aspiring animator … if I'm lucky. I live in the islands of the Philippines; there's nothing to do but draw.

Do you paint professionally, as a hobby artist, or both?

I paint professionally in a way. I draw comic book commissions using watercolors. I sometimes do it digitally.

What genre(s) do you work in?

Mainly I draw comics, but lately I've been drawing animations.

Whose work inspires you most — who are your role models as an artist?

There are Alex Ross and Jim Lee, who are both my heroes. They worked for Marvel and DC, which I'm still dreaming of working with.

How and when did you get to try digital painting for the first time?

I started way back around the year 2000 (maybe earlier), when I started as a colorist for Marvel Comics. I couldn't avoid digital painting, as I colored pages of comics for Marvel.

What makes you choose digital over traditional painting?

I still use traditional painting. There's a big pleasure in seeing your work in full detail. I still draw on paper. I only use digital for animation.

How did you find out about Krita?

I can't remember. I guess I read about it somewhere, when someone mentioned the program. I looked it up and found out that it can do animation. I've been wanting to use a free animation paint program that is easy to use.

What was your first impression?

My first impression was that it was very complicated. I only tried it out because of the animation capabilities; I never thought of doing digital painting with it. Eventually, it turned out to be easy to use.

What do you love about Krita?

What I love about Krita is the animation. I hope the developers can make it even better.

What do you think needs improvement in Krita? Is there anything that really annoys you?

So far I get fewer crashes. The annoying thing, though, is the audio sync in animation. I wish they'd fix that soon.

What sets Krita apart from the other tools that you use?

Compared to Photoshop, I think Krita can produce digital paintings that look like they were made with a real brush. PS is not a paint program, however; Krita's advantage is its brushes.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I am so new to Krita, and I can't tell, since I use it only for animation. Maybe this one:

What techniques and brushes did you use in it?

I used just the basic inking tools. For the background, various brushes.

Where can people see more of your work?

They can visit my website for online portfolio: www.edtadeo.com, or
my YouTube account: www.youtube.com/EdTadeo.

The potential for improving Free Software is truly infinite. Since every user is a potential developer, we have incredible ways to optimize the use of Dolphin, the Plasma 5 file manager. Today I want to introduce Mount ISO image, a Service Menu for Plasma 5 that will make our lives a little easier.

Mount ISO image – a Service Menu for KDE

The idea is simple: it adds to Dolphin's right-click menu the option to mount an ISO image, that is, to make the contents of a CD or DVD image available on our hard drive.

This is what Mount ISO image achieves, and it lets us mount the image either in the current folder or wherever we choose.

Mount ISO image

This Service Menu is a creation of Alex-L, who would surely love you to give it a "like" on the KDE Store and tell him how it works for you. Remember that you can also help the development of Free Software simply by saying thank you; it helps much more than you might imagine. Recall the I love Free Software Day 2017 campaign of the Free Software Foundation, which reminded us of this very simple way of collaborating with the great Free Software project, and to which we devoted an article on this blog.

More information: KDE Store

What are Dolphin Service Menus?

The customizability of KDE and Plasma is more than proven, and proof of this are the Dolphin Service Menus, which are simply an auxiliary menu in the Dolphin file manager (or in Konqueror) activated with the right mouse button.
With them you get new actions such as:

And many more, as we have explained on several occasions on the blog. You can find these services in the Dolphin Service Menu section of the KDE Store.

Are you using Kubuntu 18.10, our current Stable release? Or are you already running our daily development builds?

We currently have Plasma 5.14.90 (Plasma 5.15 Beta) available in our Beta PPA for Kubuntu 18.10 and in our daily Disco ISO images.

First, upgrade your development release

Update directly from Discover, or use the command line:

sudo apt update && sudo apt full-upgrade -y

And reboot. If you cannot reboot from the application launcher,

systemctl reboot

from the terminal.

Add the PPA to Kubuntu 18.10 then upgrade

sudo add-apt-repository ppa:kubuntu-ppa/beta
sudo apt update && sudo apt full-upgrade -y

Then reboot. If you cannot reboot from the application launcher,

systemctl reboot

from the terminal.

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

  • If you believe you might have found a packaging bug, you can post testing feedback to the Kubuntu team; a launchpad.net account is required.
  • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

Please review the changelog.

[Test Case]

* General tests:
– Does plasma desktop start as normal with no apparent regressions over 5.14.5?
– General workflow – testers should carry out their normal tasks, using the Plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend, etc.

* Specific tests:
– Check the changelog:
– Identify items with front/user facing changes capable of specific testing. e.g. “weather plasmoid fetches BBC weather data.”
– Test the ‘fixed’ functionality.

Testing involves some technical setup, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

We need your help to get this important beta release in shape for Kubuntu 19.04 as well as added to our backports.

Thanks! Please stop by the Kubuntu-devel IRC channel or Telegram group if you need clarification of any of the steps to follow.

January 20, 2019

I got a bit of traction on KookBook and decided to fix a few bugs, mostly in the touch client, and to add some features.

Get it here: kookbook-0.2.0.tar.xz

KookBook is now also available in Debian, thanks to Stuart Prescott.

KRecipe converter
Some people have a large recipe collection in KRecipe and would like to try out KookBook. I wrote a conversion Python script, which is now available. It works in the current directory: it reads the KRecipe database from there and outputs KookBook markdown files in the same directory. It has so far been tried on one database.

Bug fixes

  • Fix install of touch client
  • Fixes in desktop files
  • Fixes in touch client’s file open dialog
  • Touch client now shows images in recipes
  • You could end up with no dock widgets, no toolbar, and no way to recover in the desktop client. This is now fixed
  • Build fixes

Future
Some people have started talking about possibly translating the interface. I might look into that in the future.

And I wouldn't be sad if some icon artist provided me with an icon slightly better than the knife I drew. Feel free to contact me if that's the case.

Happy kooking!

In the last few days I've been writing a simple website for Imaginario. I'm a terrible site designer, and I can't really say that I enjoy writing websites, but it's something that from time to time people might need to do. While the PhotoTeleport website is built with Jekyll, this time I decided to try some other static site generator, in order to figure out if Jekyll is indeed the best for me, or if there are better alternatives for my (rather basic) needs.

I set out trying a couple of Python-based generators, Pelican and Nikola. Here is a brief review of them (and of Jekyll), in case it helps someone else make their own choice.

Jekyll

I've been using it for several months for the PhotoTeleport website, which features a news section and a handful of static pages. It does the job very well and I don't have any major complaints. It's very popular and there are plenty of plugins to customize its behaviour or add new functionality. The documentation is sufficient for basic usage of the site, and information on how to solve more specific issues can easily be found on the internet.

My only issue is that it's not totally intuitive to use, and in order to customize the interactions for your own needs you need to write your own scripts – at least, I didn't find a ready-made solution to create a new post, or to deploy the generated content to my site.

Pelican

My first impression of Pelican was extremely positive: it's very easy to set up and start a blog. It's also quite popular, even though not as much as Jekyll, and there are many themes for it. By looking at the themes, though, I quickly realized that Pelican is meant to be used for blogs, and not for simple static sites. I'm almost sure that there must be a way to use it to create a static site, maybe with some tweaking, but I couldn't find information about this in its documentation. A quick search on the internet didn't help either, so I gave up and moved on to the next one.

If I had to write a blog I'd certainly consider it, though.

Nikola

Nikola is definitely less popular than Jekyll or Pelican, at least if we trust the number of stars and forks on GitHub, but it's still a popular and maintained project, with many plugins. Like Jekyll, it can handle both blogs and sites, or a combination of the two. It's well documented, the people in the forum are helpful, and its command line interface is simpler and more intuitive than Jekyll's. Also, the live preview functionality seems to be more advanced than Jekyll's, in that the browser is told to automatically reload the page whenever the site is rebuilt.

You can see my progress with the Imaginario website by inspecting the commits in its repository; you'll see how easy it was to set it up, and hopefully following my steps you'll save some time should you decide to create your own site with Nikola.

Overall, I'd rate Jekyll and Nikola on the same level: Jekyll wins for the wider community and number of available plugins, while Nikola wins for the better command line interactions, and the fact that it's in Python gives me more confidence should I ever need to modify it deeply (though, admittedly, the latter is just a personal preference — Ruby developers will say the opposite).

This week in KDE’s Usability & Productivity initiative, something big landed: virtual desktop support on Wayland, accompanied by a shiny new user interface for the X11 version too. Eike Hein, Marco Martin, and the rest of the Plasma hackers have been working on this literally for months and I think they deserve a round of applause! It was a truly enormous amount of work, but now we can benefit from it for years to come.

We’ve also kicked off the Plasma 5.15 beta period. Here’s how you can test it with KDE Neon or Kubuntu.

Bug reports have already started to come in and we’re fixing them as fast as possible to ensure a smooth release next month on February 12th!

New Features

Bugfixes & Performance Improvements

User Interface Improvements

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If my efforts to perform, guide, and document this work seem useful and you’d like to see more of them, then consider becoming a patron on Patreon, LiberaPay, or PayPal. Also consider making a donation to the KDE e.V. foundation.

January 19, 2019

I recently configured Travis CI to build Nanonote, my minimalist note-taking application. We use Jenkins a lot at work, and despite the fact that I dislike the tool itself, it has proven invaluable in helping us catch errors early. So I strongly believe in the value of Continuous Integration.

When it comes to CI setup, I believe it is important to keep your distance from the tool you are using by keeping as much setup as possible in tool-agnostic scripts, versioned in your repository, and making the CI server use those scripts.

Ensuring your build scripts are independent of your CI server gives you a few advantages:

  • Your setup is easier to extend and debug, since you can run the build scripts on your machine. This is lighter than running an instance of your CI server on your local machine (nobody takes the time to do that anyway) and more efficient than committing changes to a temporary branch and then waiting for your CI server to build them to see if you got it right (everybody does that).

  • It keeps the build instructions next to your code, instead of being stored in, say, a Jenkins XML file. This ensures that you can add dependencies and adjust the build script in one commit. It also ensures that if your build script evolves, you can still build old branches on the CI server (for example because you have a fix to do on a released version).

  • If your CI server is Jenkins or something similar, you spend less time cursing at the slow web-based UI (yes, I know about Jenkins Pipelines, but those have other problems...).

  • It is easier to switch to another CI server.

With this in mind, here is how I configured Nanonote CI.

Create a Build environment using Docker

The first element is to create a stable build environment. To do this I created a Docker image with the necessary build components. Here is its Dockerfile, stored in the ci directory of the repository:

FROM ubuntu:18.04

RUN apt-get update \
    && apt-get install -y -qq --no-install-recommends \
        cmake \
        dpkg-dev \
        file \
        g++ \
        make \
        ninja-build \
        python3 \
        python3-pip \
        python3-setuptools \
        qt5-default \
        qtbase5-dev \
        qttools5-dev \
        rpm \
        xvfb

COPY requirements.txt /tmp

RUN pip3 install -r /tmp/requirements.txt

ENTRYPOINT ["/bin/bash"]

Nothing really complicated here, but there are a few interesting things to point out nevertheless.

It installs dpkg-dev and rpm packages, so that CPack can build .deb and .rpm packages.

It also installs the xvfb package, to be able to run tests which require an X server.

Finally, it copies a requirements.txt file and pip-installs it. This installs qpropgen's dependencies. This requirements.txt lives in 3rdparty/qpropgen, which Docker cannot reach (because it only sees files inside the ci directory), so I created a simple ci/build-docker script to build the Docker image:

#!/bin/sh
set -ev
cd "$(dirname "$0")"
cp ../3rdparty/qpropgen/requirements.txt .
docker build -t nanonote:1 .

This gives us a clean build environment; now let's create a build script.

The build script

This script is ci/build-app. Its job is to:

  1. Create a source tarball
  2. Build and run tests from this source tarball
  3. Build .rpm and .deb packages

You may wonder why the script creates a source tarball, since GitHub automatically generates them when one creates a tag. There are two reasons for this:

  1. GitHub tarballs do not contain repository submodules, making them useless for Nanonote.
  2. I prefer to rely on my own build script to generate the source tarball, as it makes the project less dependent on GitHub facilities, should I decide to move to another git hosting service.

Reason #1 also explains why the script builds from the source tarball instead of using the git repository source tree: it ensures the tarball is not missing any file necessary to build the app.
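That tarball-then-build principle can be sketched in a few lines of plain shell (all file and project names below are made up for illustration; this is not the real build script):

```shell
#!/bin/sh
# Hypothetical sketch of the tarball-first idea: build from the unpacked
# source tarball, so a file missing from the tarball fails the CI build.
set -e

WORKDIR=$(mktemp -d)
SRC="$WORKDIR/src"
DIST="$WORKDIR/dist"
BUILD="$WORKDIR/build"
mkdir -p "$SRC" "$DIST" "$BUILD"

# Stand-in for the checked-out source tree
echo 'int main() { return 0; }' > "$SRC/main.cpp"

# 1. Create the source tarball
tar -C "$WORKDIR" -czf "$DIST/myapp-1.0.0.tar.gz" src

# 2. Unpack it into a clean directory and build from *that* tree
tar -C "$BUILD" -xzf "$DIST/myapp-1.0.0.tar.gz"
test -f "$BUILD/src/main.cpp" && echo "tarball build tree is complete"
```

If a file needed by the build is never added to the tarball, step 2 fails on the CI server instead of in a user's hands.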

I am not going to include the script here, but you can read it on GitHub.

Travis setup

Now that we have a build script and a build environment, we can make Travis use them. Here is Nanonote's .travis.yml file. As you can see, it is just a few lines:

dist: xenial
language: minimal

services:
- docker

install:
- ci/build-docker

script:
- docker run -v $PWD:/root/nanonote nanonote:1 /root/nanonote/ci/build-app

Not much to say here:

  • We tell Travis to use an Ubuntu Xenial (16.04) distribution and Docker.
  • The "install" step builds the Docker image.
  • The "script" step mounts the source tree inside the Docker image and runs the build script.

Travis runs this on all branches pushed to GitHub. I configured GitHub to refuse pushes to the master branch if the commits have not been validated by Travis. This rule applies to all project contributors, including me. Since there is not (for now?) a large community of contributors to the project, I don't open pull requests: I just push commits to the dev branch and, once Travis has checked them, I merge dev into master.

Releases

When it's time to create a release, I just do what Travis does: rebuild the Docker image then run the build script inside it.

Since the source tree is mounted inside the Docker image, I get the source and binary packages in the dist directory of the repository, so I can test them and publish them.

Travis has a publication system to automatically attach build artefacts to GitHub releases when building from a tag, but I prefer to build them myself, because that gives me the opportunity to test the build artefacts before tagging, and it prevents me from becoming too dependent on the Travis service.
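For reference, that Travis publication system is configured through a deploy section; this is a hedged sketch of what it could look like (the token variable and artefact location are assumptions, and Nanonote deliberately does not use this):

```yaml
deploy:
  provider: releases     # attach build artefacts to a GitHub release
  api_key: $GITHUB_TOKEN # assumed secret environment variable
  file_glob: true
  file: dist/*           # assumed location of the generated packages
  skip_cleanup: true     # keep the artefacts produced by the build
  on:
    tags: true           # only publish when building from a tag
```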

Conclusion

That's it for this description of Nanonote CI setup. It's still young so I might refine it, but it is already useful. I am probably going to create similar setups for my other C++ Qt projects.

I hope it helped you as well.

January 17, 2019

TL;DR, I’ll be switching to releasing new wallpapers every second Plasma release, on even-numbered versions.

This is just a post to refer to for those who have asked me about Plasma 5.15 and a new wallpaper. Since I started working on Plasma 5 wallpapers, there has always been a number of factors determining how exactly I made them. After some agonising debate I’ve decided to slow the wallpaper release pace, because as time has gone on a number of things have changed since I started contributing them:

Bugs & Releases. One of the early goals for wallpapers was to have one for each release so developers could identify versions of Plasma in screenshots of bug reports. This has become far less important, as issues have gone from “the bug causing everyone’s machines to catch fire while eating kittens” to things like “maybe stealing your left sock from the dryer”. Back in the day most distros didn’t offer rolling release options, so users would be reporting the bugs and sharing screenshots of old buggy versions. That, too, has changed; not only are rolling release options more plentiful, but standard release distros are well past the dark days of immature Plasma 5. All said and done, we just don’t need wallpapers for developers to identify problem releases anymore; the bugs are far less severe and people are more up-to-date.

LTS Plasma versions & quality. While it may seem irrelevant to wallpapers, LTS stands out as the place where we really need to pour love and care into our designs. With each new wallpaper I’m pushing things a bit harder and a bit further, which means taking more time to create them, and I’m realising that at the level of quality I want for LTS wallpapers, it might take 3 to 5 dedicated days to produce a final product. That’s not including post-reveal tweaks I do after receiving feedback, or the wallpapers I discard during the creation process (for each wallpaper released, it’s likely I got halfway through 2 other designs). In other words, it’s becoming less sustainable.

The wallpapers aren’t crap anymore. It’s no secret, my first wallpapers were rough. When a new wallpaper was finished there were real quality incentives for me to take the lessons learned and turn around a better wallpaper. Nowadays though most new wallpapers are visually pleasing and people don’t mind if they stick around for a bit longer. I know a lot of people even go back to previous wallpapers. Adding to this, it’s gotten easy to get older wallpapers; OpenDesktop and GetHotNewStuff both serve as easy access, and we now have some of the most popular default wallpapers in the extended wallpapers package. While new wallpapers are always nice to have, it’s no longer bad to keep what we’ve got.

Those three big points bring me to moving the wallpaper cycle to every second Plasma release. New wallpapers will fall on even-numbered Plasma releases, landing squarely on the LTS releases and on the feature release directly between LTSes. That said, I hope that future wallpapers will show the quality that the additional time affords me.

KDE’s flagship project Plasma has a new beta out. There are now three weeks to sort out the bugs and make the release a work of perfection. We need your help.

Plasma has a new testing release out with a final release due in three weeks. We need your help in testing it and reporting problems.

KDE neon Developer Git-Stable Edition now has Plasma 5.15 beta and can be used for testing.

You can either download an ISO and install it or run it on a virtual machine. https://neon.kde.org/download

Or you can run the Docker image which should work on any Linux distro. https://community.kde.org/Neon/Docker

Please have a look over the new features and give them a try https://www.kde.org/announcements/plasma-5.14.90.php

You can report success or failure on the forum thread for Plasma 5.15 beta here or directly on the bug tracker at https://bugs.kde.org

Plasma 5.15 Beta in Virtualbox

Plasma 5.15 beta in Docker



Plasma 5.15 Beta

KDE Plasma 5.15 Beta

Today KDE launches the beta release of Plasma 5.15.

For the first release of 2019, the Plasma team has embraced KDE's Usability & Productivity goal. We have teamed up with the VDG (Visual Design Group) contributors to get feedback on all the papercuts in our software that make your life less smooth, and fixed them to ensure an intuitive and consistent workflow for your daily use.

Plasma 5.15 brings a number of changes to our configuration interfaces, including more options for complex network configurations. Many icons have been added or redesigned. Our integration with third-party technologies like GTK and Firefox has been made even more complete. Discover, our software and add-on installer, has received a metric tonne of improvements to help you stay up-to-date and find the tools you need to get your tasks done.

Please test this beta release and send us bug reports and feedback. The final release will be available in three weeks' time.

Browse the full Plasma 5.15 Beta changelog to learn more about other tweaks and bug fixes included in this release:


New in Plasma 5.15 Beta

Plasma Widgets



    Bluetooth Battery Status
  • Bluetooth devices now show their battery status in the power widget. Note that this cutting-edge feature requires the latest versions of the upower and bluez packages.
  • It’s now possible to download and install new wallpaper plugins straight from the wallpaper configuration dialog.
  • Filenames on desktop icons now have enough horizontal space to be legible even when their icons are very tiny, and are easier to read when the wallpaper is very light-colored or visually busy.
  • Visually impaired users can now read the icons on the desktop thanks to the newly-implemented screen reader support for desktop icons.
  • The Notes widget now has a 'Transparent with light text' theme.
  • It's now possible to configure whether scrolling over the virtual desktop Pager widget will “wrap around” when reaching the end of the virtual desktop list.
  • The padding and appearance of notification pop-ups have been improved.
  • KRunner has received several usability improvements. It now handles duplicates much better, no longer showing duplicate bookmarks from Firefox or duplicate entries when the same file is available in multiple categories. Additionally, the layout of the standalone search widget now matches KRunner's appearance.
  • The Devices Notifier is now much smarter. When it's configured to display all disks instead of just removable ones, it will recognize when you try to unmount the root partition and prevent you from doing so.


Settings



    Redesigned Virtual Desktop Settings
  • The System Settings Virtual Desktops page has been redesigned and rewritten to support Wayland, and now sports greater usability and visual consistency.
  • The user interface and layout for the Digital Clock and Folder View settings pages have been improved to better match the common style.
  • Many System Settings pages have been tweaked with the goal of standardizing the icons, wording, and placement of the bottom buttons, most notably the “Get New [thing]…” buttons.
  • New desktop effects freshly installed from store.kde.org now appear in the list on the System Settings Desktop Effects page.
  • The native display resolution is now indicated with a star icon in the System Settings Displays page.
  • The System Settings Login Screen page received plenty of visual improvements. The image preview of the default Breeze theme now reflects its current appearance, the background color of the preview matches the active color scheme, and the sizes and margins were adjusted to ensure that everything fits without being cut off.
  • The System Settings Desktop Effects page has been ported to QtQuickControls 2. This fixes a number of issues such as bad fractional scaling appearance, ugly dropdown menu checkboxes, and the window size being too small when opened as a standalone app.


Cross-Platform Integration



    Firefox with native KDE open/save dialogs
  • Firefox 64 can now optionally use native KDE open/save dialogs. This is an optional, bleeding-edge functionality that is not yet included in any distribution. However, it can be enabled by installing the xdg-desktop-portal and xdg-desktop-portal-kde packages and setting GTK_USE_PORTAL=1 in Firefox's .desktop file.
  • Integration modules xdg-desktop-portal-kde and plasma-integration now support the Settings portal. This allows sandboxed Flatpak and Snap applications to respect your Plasma configuration - including fonts, icons, widget themes, and color schemes - without requiring read permissions to the kdeglobals configuration file.
  • The global scale factor used by high-DPI screens is now respected by GTK and GNOME apps when it’s an integer.
  • A wide variety of issues with the Breeze-GTK theme have been resolved, including the inconsistencies between the light and dark variants. We have also made the theme more maintainable, so future improvements will be much easier.
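To illustrate the Firefox portal bullet above, the change amounts to prefixing the Exec line of Firefox's .desktop file with the environment variable (the file path varies per distribution and is an assumption here):

```ini
# Hypothetical excerpt of /usr/share/applications/firefox.desktop
[Desktop Entry]
Exec=env GTK_USE_PORTAL=1 firefox %u
```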


Discover



    Distro Release Upgrade Notification
  • Options for upgrading your distribution are now included in Discover’s Update Notifier widget. The widget will also display a “Restart” button if a restart is recommended after applying all updates, but the user hasn’t actually restarted yet.
  • On Discover’s Updates page, it’s now possible to uncheck and re-check all available updates to make it easier to pick and choose the ones you want to apply.
  • Discover’s Settings page has been renamed to “Sources” and now has pushbuttons instead of hamburger menus.
  • Distribution repository management in Discover is now more practical and usable, especially when it comes to Ubuntu-based distros.
  • Discover now supports app extensions offered with Flatpak packages, and lets you choose which ones to install.
  • Handling for local packages has been improved, so Discover can now indicate the dependencies and will show a 'Launch' button after installation.
  • When performing a search from the Featured page, Discover now only returns apps in the search results. Add-ons will appear in search results only when a search is initiated from an add-on category.
  • Discover’s search on the Installed Apps page now works properly when the Snap backend is installed.
  • Handling and presentation of errors arising from misconfigured add-on repos has also been improved.
  • Discover now respects your locale preferences when displaying dates and times.
  • The “What’s New” section is no longer displayed on app pages when it doesn't contain any relevant information.
  • App and Plasma add-ons are now listed in a separate category on Discover’s Updates page.


Window Management

  • The Alt+Tab window switcher now supports screen readers for improved accessibility, and allows you to use the keyboard to switch between items.
  • The KWin window manager no longer crashes when a window is minimized via a script.
  • Window closing effects are now applied to dialog boxes with a parent window (e.g. an app’s Settings window, or an open/save dialog).
  • Plasma configuration windows now raise themselves to the front when they get focus.


Wayland

  • More work has been done on the foundations - the protocols XdgStable, XdgPopups and XdgDecoration are now fully implemented.
  • Wayland now supports virtual desktops, and they work in a more fine-grained way than on X11. Users can place a window on any subset of virtual desktops, rather than just on one or all of them.
  • Touch drag-and-drop is now supported in Wayland.


Network Management



    WireGuard VPN Tunnels

  • Plasma now offers support for WireGuard VPN tunnels when the appropriate Network Manager plugin is installed.
  • It’s now possible to mark a network connection as “metered”.


Breeze icons

Breeze Icons are released with KDE Frameworks but are extensively used throughout Plasma, so here's a highlight of some of the improvements made over the last three months.



    Icon Emblems in Breeze
  • A variety of Breeze device and preference icons have been improved, including the multimedia icons and all icons that depict a stylized version of a Plasma wallpaper.
  • The Breeze emblem and package icons have been entirely redesigned, resulting in a better and more consistent visual style, plus better contrast against the icon they’re drawn on top of.
  • In new installs, the Places panel now displays a better icon for the Network place.
  • The Plasma Vault icon now looks much better when using the Breeze Dark theme.
  • Python bytecode files now get their own icons.


Other



    KSysGuard’s optional menu bar
  • It’s now possible to hide KSysGuard’s menu bar — and it reminds you how to get it back, just like Kate and Gwenview do.
  • The plasma-workspace-wallpapers package now includes some of the best recent Plasma wallpapers.



Live Images

The easiest way to try out Plasma 5.15 beta is with a live image booted off a USB disk. Docker images also provide a quick and easy way to test Plasma.

Download live images with Plasma 5
Download Docker images with Plasma 5

Package Downloads

Distributions have created, or are in the process of creating, packages listed on our wiki page.

Package download wiki page

Source Downloads

You can install Plasma 5 directly from source.

Community instructions to compile it
Source Info Page

Feedback

Discuss Plasma 5 on the KDE Forums Plasma 5 board.

You can provide feedback direct to the developers via the #Plasma IRC channel, Plasma-devel mailing list or report issues via bugzilla. If you like what the team is doing, please let them know!

Your feedback is greatly appreciated.

January 16, 2019

After many years of successful Google Code-in participation, this year we did it again! KDE attracted a number of students with exciting tasks for their eager young minds.

Google Code-in is a program for pre-university students aged from 13 to 17 and sponsored by Google Open Source. KDE has always worked to get new people involved in Free and open source (FOSS) projects with the aim of making the world a better place.

This year was no different. Our students worked very hard, and some of them already have their contributions committed to the KDE codebase!

We designed tasks in a way that made them exciting for all students. Students who were not skilled in programming took on tasks of writing blogs or documentation. To help students who had no experience with FOSS or with the community, we set up introductory tasks for IRC and mailing lists, both of which are essential in FOSS as communication channels.

The students who had some prior programming experience received tutorial tasks to get a better understanding of how KDE software works. Those types of tasks also helped them become familiar with the Qt framework on which all KDE software is based. Finally, students good at programming were put to work contributing to on-going KDE projects. They created new features or solved known bugs and wrote unit tests.

We’re happy that some really enthusiastic and persistent students joined us this year. Thanks to their passion for programming, they completed many tasks and delivered quality code we merged into our project repositories.

It wasn’t easy for the mentors to select winners, as every student had accomplished great things. Still, we finally settled on pranav and triplequantum (their GCI names). Finalists were TURX, TUX, UA and waleko.

KDE would like to congratulate all the winners and finalists, and we warm-heartedly welcome all our new contributors!

Author: Pranam Lashkari

January 13, 2019

Facebook’s AccountKit is an authentication service that can use your email or phone number to log in to your services. It doesn’t require the user to have a Facebook account, just a valid email or phone number.

The cool thing about it is that it sends SMS for free, and although sending SMSs is cheap, being free of charge is something you might want to look at when creating a new App; in fact, here in Brazil some big Apps do make use of it.

So, long story short, I wanted to add this to my Qt Android App.

Thanks to the great help of Kai Uwe Broulik, who had given me some tips in the past on how to call Java code, I started with this same approach:

  • Create a Java file
  • Put the Java code of AccountKit there
  • Call the static function of Java from C++

And… it did work, BUT I was unable to get the result. This is because the Java code starts a new Activity and returns the result in a method that must be implemented in the code that Qt creates for starting your activity. This means I’d need to change the XML manifest to call my subclass instead of what Qt provides, and would need to do a bunch of other stuff myself; basically, I would get more maintenance work to do.

So another option would be to do all of this from C++, which Kai said from the start would be better, but doing a bunch of JNI scared me a bit; also, I hadn’t found the reference page for their API (which he found after the work was done, lol). There’s a tool called javap that can dump the Java signatures of what is inside the package you got, so with its help we went on porting all the calls to the AccountKit API to C++/Qt. This way we can call QtAndroid::startActivity() passing a pointer to a class that will handle the result of the new Activity, all in C++.

There were some initial issues with the Enums used (apparently some new Java 7 stuff), but the code now doesn’t require a single line of Java, which is great when integrating in Qt Android Apps.

The result is on GitHub, so if you need this just copy the integration class; it has IFDEFs, so if you are building for !Android it will still compile fine. (You still need to follow the Gradle and manifest integration that’s on their dev site.) Feel free to make a PR for fixes and features.

https://github.com/ceciletti/third-party-qt-utils

As a follow-up to my previous post, I've finally succeeded in building Imaginario as an AppImage package.

I've smoke-tested the package on Ubuntu Xenial and Trusty, and while it appears to be working, I'd be happy if someone else could also download and test it on their machines and let me know how it goes. Just please keep in mind that this is a very beta release and, while I'm not aware of any major bugs that could corrupt your photos, I wouldn't recommend importing your photo archive into it unless you back it up first.
Also, the application needs quite a bit more polishing before being ready to be publicly advertised to non-developers (I'm also considering finding a new icon for it), so the next step after getting a nicely working AppImage will be cleaning up the user interface and making sure that all the (few) features work as advertised. And making a Windows version. And a macOS one. Oh, I'd better stop thinking about it, or I'll start crying.

I totally missed that last week marked the one-year anniversary of my documentation and guidance of KDE’s Usability & Productivity initiative. I think we’ve achieved a lot over the course of that year!

Note that this is NOT an exhaustive log of everything that happened this week in the entire KDE community, or even in all of Plasma. The actual number of commits and improvements is always vast and enormous–too much to comprehend, really. The KDE Community is staggeringly productive.

Rather, this is always a curated list of only the user-facing improvements I believe are directly relevant to the Usability & Productivity initiative. And speaking of it, this week we got an interesting assortment of new features, bugfixes, and UI improvements–many of which I didn’t mention but will ultimately be appreciated when taken together. Check it out:

New Features

Bugfixes & Performance Improvements

User Interface Improvements

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If my efforts to perform, guide, and document this work seem useful and you’d like to see more of them, then consider becoming a patron on Patreon, LiberaPay, or PayPal. Also consider making a donation to the KDE e.V. foundation.

January 12, 2019

The first release of Nanonote, my minimalist note-taking app, was a bit rushed: I broke indentation shortly before tagging version 1.0.0... meh.

So here is version 1.0.1. It fixes the indentation and adds the ability to indent or unindent whole lines with Tab and Shift+Tab, in addition to the existing Ctrl+I and Ctrl+U shortcuts.

In addition to these changes, the build system can now generate Debian and RPM packages, making the application easier to install.

These packages are generated by CPack inside an Ubuntu 18.04 Docker container. This means they work on my machine, but they do not have the same level of quality as packages crafted by real packagers. I am especially looking for feedback on the RPM packages, which I haven't tested.

You can find them on the project release page.

There aren't any other end-user changes in this release, but I worked a bit on infrastructure: I added unit tests and set up Travis CI for continuous integration. I am probably going to write an article about this next. In the meantime, enjoy Nanonote 1.0.1!

TL;DR: Get your tickets here!

For the fourth year running, foss-north is taking place. Now bigger than ever.

It all started as a one-day conference in a room with too many people in it. We gathered ten speakers and started something that continues to this day.

110 people in a room for 110 people.

Back then, the three of us organizers (Jeremiah, Mikael and myself) joked over beers that we should have trainings, a conference, community rooms and much more. A moderated FOSDEM was a crude description of what we wanted to build. But this was just us dreaming away.

During the past years we’ve tried different venues. We’ve gone from one day, one track to two days and two tracks. This year we decided to go for it all: four days, trainings, a community day and the conference.

Organizing a conference means managing a chicken-and-egg type of problem. You need speakers to get sponsors, and you need sponsors to get speakers to the venue. The same applies to the audience – visitors want speakers, and speakers want visitors. This is why it takes time to establish a conference.

Last year we felt that we reached a tipping point – the call for papers was so full that we had to extend the conference with an additional day. We simply could not pick the right contents for a single day. This means that we feel that the conference part is established. If you want to speak, the call for papers is still open.

That takes us to the next steps. The community day consists of various projects and groups organizing workshops, hackathons, install fests, development sprints and whatnot throughout the city. We find venues (usually conference rooms) and projects and hope that people will come visit the various events. Again, starting from zero projects, zero venues and no real idea how many visitors to expect, we are trying to put this together. At the time of writing, it looks great. We have 7 projects and 5 venues fixed, but we are still looking for both projects and venues. If you want to join in, look at our call for projects.

The same logic applies to the training. Now we have training contents, all we need are visitors. The great thing is that our teachers, Michael Kerrisk and Chris Simmonds, are great to work with and understand our situation. Now we just have to work hard to make sure that we find students for them.

The final piece of the puzzle, which is not always visible to speakers and visitors, is the hunt for sponsors. Venues do not come for free, and we believe in compensating our speakers for their costs, so we need sponsors. We offer the opportunity to host a booth during the conference days and the chance to meet our audience. We also believe that helping a conference focused on free and open source is a way to contribute to the free and open source movement. For this we have a network of sponsors that we’ve worked with in the past (thank you all!), but as the conference grows, we need more help. If you want to join in, have a look at our call for sponsors.

I’ve written a lot about speakers, sponsors and projects. Now all we need are visitors – lots and lots of visitors. So bring your friends to Gothenburg and join us at foss-north. The early bird tickets are available now. Get yours here!

With the ongoing work on realtime data access in KDE Itinerary we need a way to show notifications in case of delays or other trip changes. That’s what KF5Notifications is for, which unfortunately isn’t supported on Android yet. Since an Android-specific code path in KDE Itinerary for that would be quite ugly, I looked into adding Android support for KF5Notifications. How hard can it be? ;)

KF5Notifications on Android

C++/Java Integration

How to do notifications with the official Android Java API is widely documented, and also matches the model of KF5Notifications well enough. This would however be the first KF5 framework using Android Java API for its backend, so I’ll focus on these integration bits here.

On the code level this means we need to be able to call Java code from C++ (to trigger a notification), and vice versa (to handle user interaction with the notification), and we need to be able to pass data back and forth.

The mechanism for this is the Java Native Interface (JNI), of which Qt abstracts some parts via QAndroidJniObject. Qt itself contains a number of examples of how to use this, and you can look at the KF5Notifications code in D17851. The JNI signature syntax takes a bit of getting used to due to its built-in traps (like a different way of writing fully qualified class names compared to Java code), but at least you usually get helpful debug output at runtime in case of a mistake.
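To illustrate that signature syntax (this is an illustration, not code from the actual patch): JNI encodes primitives as single letters (Z = boolean, I = int, V = void, …), object types as "L" plus the fully qualified class name with '/' instead of '.' separators, terminated by ';', and a method signature as the parameter descriptors in parentheses followed by the return type descriptor. A minimal sketch:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Build a JNI method signature string from parameter and return type
// descriptors. Object types are written "Ljava/lang/String;" -- note the
// '/' separators, unlike the '.' you would write in Java source code.
std::string jniSignature(const std::vector<std::string> &params,
                         const std::string &ret)
{
    std::string sig = "(";
    for (const std::string &p : params)
        sig += p;
    sig += ")";
    sig += ret;
    return sig;
}
```

For a hypothetical void notify(String channel, int id) method this yields "(Ljava/lang/String;I)V", which is the kind of string you pass alongside the method name when calling into Java via QAndroidJniObject.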

Building and Deployment

The build system integration turned out to be the most challenging part, as the framework consists of a Java part and a native part now. The general idea is that the framework builds and installs a Java library next to the native library, and provides a meta data file that tells androiddeployqt to also integrate the Java part when linking against the native part.
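For illustration, such a meta data file (installed next to the native library) roughly follows the dependency-rules shape that androiddeployqt reads for Qt's own modules; the element contents and file name here are written from memory and hypothetical, not copied from the patch:

```xml
<!-- e.g. libmyframework-android-dependencies.xml (name is illustrative) -->
<rules version="1">
    <dependencies>
        <lib name="myframework">
            <depends>
                <!-- tell androiddeployqt to bundle the Java part too -->
                <jar file="jar/myframework.jar" />
            </depends>
        </lib>
    </dependencies>
</rules>
```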

This is done in several Qt modules, but using qmake rather than CMake. Also, Qt only does this for Java code using the official Android Java API, not for code using the Android support libraries, which we need for notifications in order to simultaneously provide compatibility with older Android versions and to comply with the requirements of very recent Android versions and the Google Play Store.

Building JARs

The first attempt was to follow closely what Qt does, just implemented with CMake. You’ll find the code in D17851. There’s already basic Java support in CMake that makes this pretty straightforward; it essentially just builds a JAR library from a given set of Java source files, using the Java compiler directly.
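The CMake Java support in question is the UseJava module with its add_jar and install_jar commands. A sketch of that first approach, with target, file and path names invented for illustration (the real code is in D17851):

```cmake
# Sketch only: names and paths here are hypothetical.
find_package(Java COMPONENTS Development REQUIRED)
include(UseJava)

# android.jar from the SDK provides the Android types at compile time.
set(CMAKE_JAVA_INCLUDE_PATH
    ${ANDROID_SDK_ROOT}/platforms/android-28/android.jar)

# Build a plain JAR from the framework's Java sources using javac ...
add_jar(myframework-java SOURCES src/org/example/MyFrameworkBackend.java)

# ... and install it next to the native library.
install_jar(myframework-java ${CMAKE_INSTALL_PREFIX}/jar)
```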

This approach works as long as you just use the basic Android API and don’t need any dependency outside of the normal Android SDK. But as that’s not enough for notifications, we needed another way.

Building AARs

The Android support library (and presumably other higher-level dependencies) breaks this approach in two places:

  • The canonical way of accessing dependencies on Android is via Maven repositories. Neither CMake nor the raw Java compiler supports that easily.
  • Those libraries are not regular Java libraries (JAR files) but Android libraries (AAR files). That’s essentially a ZIP file bundling the JAR file as well as resources, manifest elements, and other things you might find in an Android APK. However, this is also not directly supported by the Java compiler.

In order to consume AAR files we use the default Android way, building with Gradle (which is also what androiddeployqt does when building APKs). Obtaining and running Gradle is hidden behind a small CMake macro for easy integration, but how to build the Java side is now specified in a build.gradle file rather than in CMake files. This means all Android development tools and resources become applicable to us. Besides automatic dependency handling and support for both consuming and producing AAR libraries, things like using Kotlin should also be possible. This approach has been implemented in D17986.
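As an illustration of what such a build.gradle can look like (plugin, dependency coordinates and SDK levels here are assumptions typical for that era, not copied from D17986):

```gradle
apply plugin: 'com.android.library'

// Dependencies are resolved from Maven repositories, which is exactly
// what plain javac/CMake could not easily do for us.
repositories {
    google()
    jcenter()
}

dependencies {
    // The Android support library ships as an AAR, not a JAR.
    implementation 'com.android.support:support-compat:28.0.0'
}

android {
    compileSdkVersion 28
    defaultConfig {
        minSdkVersion 16
    }
}
```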

Consuming the result in the application is almost identical to the JAR case; we only needed a minor tweak in the androiddeployqt template for this.

Conclusion

Starting with KF5 5.55 we will have basic support for notifications, notification interaction and notification actions on Android. There are probably still a number of features and flags in KF5Notifications that can be mapped better to Android’s native system, and there’s still work to be done to improve compatibility with a wider range of Android versions, but it’s a good start. Maybe even more importantly, we now have a template for integrating Android Java code into KF5 frameworks.

January 11, 2019

The Qt Company has been running benchmarks like QMLBench for a long time to help us know when a change creates a performance regression, but it’s also important to see how Qt performs at a higher level, allowing components to interact in ways that granular tests like QMLBench can’t show. In this blog post, we’ll walk through new application startup testing results from a more real-world QML benchmark application.


The benchmark

For these tests, a relatively simple QML application was developed that utilizes many areas of QtDeclarative and QtGraphicalEffects. The application code was written as a casual developer might write it, and has not been tuned for optimal startup, memory consumption, or performance. Because we’re benchmarking, the application does not make use of interactive elements or user input. The application is of low complexity, without divergent logic, so that results are as consistent as possible between test runs. Though no benchmark will ever truly simulate real-world performance with user interaction, the test discussed here aims to represent a real-world QML workload more accurately than QMLBench or the QtQuickControls “Gallery” example.

QML Benchmark

The benchmark application. It combines textures, animations, QML shapes, repeaters, complex text, particle effects, and GL shaders to simulate a heavier, more real-world application than other QML benchmarks like QMLBench.

Download the benchmark source code here.

Lars has previously written about The Qt Company’s commitment to improving the performance of Qt, and with the recent release of Qt 5.12 LTS, the efforts made are really showing, especially for QML. Among the improvements, a good number have gone towards startup performance. Of the platforms tested, the greatest startup improvement was seen on the lowest-powered device we tested, a Toradex Apalis i.MX6. Let’s explore that.

Startup Performance

overview-chart

The chart above shows how the features in Qt 5.12 LTS really cut down on startup time, dropping time-to-first-frame from 5912ms in Qt 5.6 to only 1258ms in Qt 5.12.0, a 79% reduction! This is thanks to a number of new features that can be stacked to improve startup performance. Let’s walk through each.

  1. The Shader Cache – Introduced in Qt 5.9 LTS

    The shader cache saves compiled OpenGL shaders to disk where possible to avoid recompiling GL shaders on each execution.

    Pros: Lowers startup time and avoids application lag when a new shader is encountered if the shader is already in the cache.
    Cons: Systems with small storage can occasionally clear shader caches. If your application uses very complex shaders and runs on a low-power device where compiling the shader may produce undesirable startup times, it may be recommended to use pre-compiled shaders to avoid caching issues. There is no performance difference between cached shaders and pre-compiled shaders.
    Difficulty to adopt: None! This process is automatic and does not need to be manually implemented.

  2. Compiled QML

    Without use of the Qt Quick Compiler detailed below, QML applications built on Qt versions prior to 5.9 LTS would always be compiled at runtime, on each and every run of the application. Depending on the application’s size and the host’s processing capabilities, this could lead to undesirably long load times. Two advancements in Qt now make it possible to greatly speed up the startup of complex QML applications, both of which provide the same startup performance boost. They are:

    Qt Quick Cache – Introduced in Qt 5.9 LTS

    The Qt Quick Cache saves runtime-compiled QML to disk in a temporary location, so that after the first run, when the QML gets compiled, it can be loaded directly on subsequent executions instead of running costly compiles every time.

    Pros: Can greatly speed up complex applications with many QML files.
    Cons: If your device has a very small storage device, the operating system may clear caches automatically, leading to occasional unexpected long startup times.
    Difficulty to adopt: None! This process is automatic and does not need to be manually implemented.

    Pre-generated QML (Qt Quick Compiler) – Introduced in Qt 5.3 for commercial licensees, both commercial and open source in Qt 5.11

    The Quick Compiler allows a QML application to be packaged and shipped with pre-compiled QML. Initially available under commercial license from Qt 5.3 onwards, it is available for both commercial and open-source users from Qt 5.11 onwards.

    Pros: Using the Quick Compiler has the advantage of not needing to rely on the runtime-generated QML cache, so you never need to worry about a sudden, unexpectedly long startup time after a given application host clears its temporary files.
    Cons: None!
    Difficulty to adopt: Low. See the linked documentation. It’s often as simple as adding “qtquickcompiler” to CONFIG in your project’s .pro file!

  3. Distance Fields – Introduced in Qt 5.12 LTS

    Though Qt has been using Distance Fields in font rendering for a long time in order to have cleaner, crisper, animatable fonts, Qt 5.12 introduces a method for pre-computing the distance fields. Learn more about Distance Fields and implementation in this blog post by Eskil.

    Pros: Using pre-generated Distance Field fonts can drastically reduce start-up time when using complex fonts like decorative Latin fonts, Chinese, Japanese, or Sanskrit. If your application uses a lot of text, multiple fonts, or complex fonts, pre-generating your distance fields can knock a huge chunk of time off startup.
    Cons: Generated distance field font files will be marginally larger on disk than standard fonts. This can be optimized by selecting only the glyphs that will appear in your application when using the Distance Field Generator tool. Non-selected glyphs will be calculated as-needed at runtime.
    Difficulty to adopt: Low. See the linked documentation. No additional code is necessary, and generating the distance fields for your font takes seconds.

  4. Compressed textures – Introduced in Qt 5.11

    Providing OpenGL with compressed textures, ready to be uploaded to video memory right out of the gate, saves Qt from needing to prepare other file types (jpg, png, etc…) for upload.

    Pros: Using compressed textures provides faster startup and a decrease in memory usage. It may even provide a bit of a performance boost, depending on how heavy your texture use is and how strong a compression you choose.
    Cons: While the compression algorithms used for textures inherently trade off some visual fidelity, all but the most extreme compression schemes will usually show no visible fidelity loss. Choosing the right compression scheme for your application’s use case is an important consideration.
    Difficulty to adopt: Low+. See this blog post by Eirik for implementation details. Almost no coding is required; you only need to change texture file extensions in your Qt code. Easy-to-use tools for texture compression are available, like the “texture-compressor” package for Node.
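To make the adoption steps above concrete: enabling the Qt Quick Compiler (item 2) is a one-line change in a qmake project, exactly as the documentation linked above describes. A sketch, with the project file name invented for illustration:

```qmake
# myapp.pro: ship pre-compiled QML instead of compiling at runtime
CONFIG += qtquickcompiler
```

For compressed textures (item 4), the change is similarly small on the QML side: instead of pointing an Image at a png or jpg, you point it at the compressed file, e.g. source: "background.ktx" (assuming here a KTX container, one of the formats the compressed-texture support handles).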


Conclusions

The i.MX6 is a great representation of mid-tier embedded hardware, and the performance improvements included in Qt 5.12 LTS really shine in this realm. Stack all the improvements together and you can really cut down on the startup time required in low power devices.

With these latest test results for low-power hardware, Qt 5.12 could lend a hand to your development by greatly decreasing startup times, particularly when running on low- and mid-tier embedded devices. These new performance improvements are easy to adopt, requiring only the most minor of changes to your codebase, so there’s very little reason not to start using Qt 5.12 right away, especially if your project is cramming heavy QML applications into a fingernail-sized SoC. The chart below is a reminder of what’s possible with Qt 5.12 LTS, and faster start-up times make for happier customers.

chart-2

The post Qt 5.12 LTS – The road to faster QML application startup appeared first on Qt Blog.

A bit more than a year ago, the KDE community decided to focus on a few goals. One of those goals (the most important one as far as I’m concerned) is to increase the users’ control over their private data.

KDE developers and users have always been a privacy-minded bunch. But due to all the fun things that have happened in recent years, we had to shift into the next gear.

We have seen new projects like KDE Itinerary (by Volker), Plasma Vault (by yours truly), Plasma Mycroft (by Yuri and Aditya), etc. There has also been a lot of work to improve our existing projects like KMail.

Now, this post is not about any of these.

It is about a KDE Privacy developer sprint organized by Sandro Knauß.

The sprint will be held in Leipzig (Germany) from 22 to 26 March, and all privacy-minded contributors are invited to join.

Leipzig

You can support my work on , or you can get my book Functional Programming in C++ at if you're into that sort of thing.

January 10, 2019

As of a few minutes ago, I merged the code from Chinmoy Ranjan Pradhan's GSoC project to support showing PDF signatures and certificates in Okular.



Signature handling is a big step for us, but it's also very complex, so I expect it to have bugs and things that can be improved, so testers are more than welcome.

Compiling it is a bit "hard" since it requires poppler 0.73, which was released a few days ago.

But thanks to flatpak, there's no need to compile it; you can run the KDE Okular Nightly on your system to try it:

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak remote-add --if-not-exists kdeapps --from https://distribute.kde.org/kdeapps.flatpakrepo
flatpak install kdeapps org.kde.okular

Note: if you have okular installed from another flatpak repo (for example flathub) this will switch you to the KDE Nightlies, you may want to switch back after testing.

And then you can try the Adobe sample PDF:
flatpak run --share=network org.kde.okular https://blogs.adobe.com/security/SampleSignedPDFDocument.pdf

And you should get stuff like this

It’s the second week of 2019 already, which makes me curious what Nate is going to do with his series This week in usability: reset the numbering from week 1? That series is a great read for keeping up with all the little things that change in KDE source each week, aside from the release notes.

For the big ticket items of KDE on FreeBSD, you should read this blog instead.

In ports this week (mostly KDE, some unrelated):

  • KDE Plasma has been updated to the latest release, 5.14.5.
  • KDE Applications 18.12.1 were released today, so we’re right on top of them.
  • Marble was fixed for FreeBSD-running-on-Power9.
  • Musescore caught up on 18 months of releases.
  • Phonon updated to 4.10.1, along with its backends.

And in development, Qt WebEngine 5.12 has been prepared in the incongruously-named plasma-5.13 branch in Area51; that does contain all the latest bits described above, as well.


Older blog entries


Planet KDE is made from the blogs of KDE's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet KDE with RSS, FOAF or OPML.