July 19, 2018

Even though Plasma 5 is well stocked with clocks, it never hurts to have alternatives for customizing our work environment down to almost obsessive levels of detail. Today I present Plasma Inline Clock, a simple clock for the task bar that stands out because it can show the time and the date on a single line. With this we reach plasmoid number 85 covered on the blog.

Plasma Inline Clock – KDE Plasmoids (85)

If you want to customize your desktop to the fullest, it never hurts to have every last option available for it. And in that, the Plasma desktop from the KDE Community has no rival.

Today I present Plasma Inline Clock, a plasmoid created by Marianarlt that, while simple, can help give your desktop the touch it was missing. Basically, the plasmoid is used inside a task bar and gives you a clock that can show the date on the same line as the time, which is not the usual layout.

Plasma inline clock

 

And as I always say, if you like the plasmoid you can “pay” for it in many ways on the new KDE Store page, which I am sure the developer will appreciate: rate it positively, leave a comment on its page, or make a donation. Supporting the development of Free Software can also be done simply by saying thanks; it helps much more than you might imagine. Remember the I love Free Software Day 2017 campaign of the Free Software Foundation, which reminded us of this very simple way of collaborating with the great Free Software project, and to which we dedicated an article on the blog.

 

 

More information: KDE Store

What are plasmoids?

For those new to the blog, the word plasmoid may sound a bit strange, but it is nothing more than the name given to widgets for KDE's Plasma desktop.

In other words, plasmoids are just small applications that, placed on the desktop or on one of its task bars, extend its functionality or simply decorate it.

We are happy to announce version 1.12.0 of the Qbs build tool.

What’s new?

Generating Interfaces for Qbs and pkg-config

When distributing software components such as libraries, you’d like to make it as simple as possible for other projects to make use of them. To this end, we have added two new modules: The Exporter.qbs module creates a Qbs module from a product, while the Exporter.pkgconfig module generates a .pc file.

For example:

DynamicLibrary {
    name: "mylib"
    version: "1.0"
    Depends { name: "cpp" }
    Depends { name: "Exporter.qbs" }
    Depends { name: "Exporter.pkgconfig" }
    files: "mylib.cpp"
    Group {
        fileTagsFilter: "dynamiclibrary"
        qbs.install: true
        qbs.installDir: "lib"
    }
    Group {
        fileTagsFilter: "Exporter.qbs.module"
        qbs.install: true
        qbs.installDir: "lib/qbs/modules/mylib"
    }
    Group {
        fileTagsFilter: "Exporter.pkgconfig.pc"
        qbs.install: true
        qbs.installDir: "lib/pkgconfig"
    }
}

When building this project, a Qbs module file mylib.qbs and a pkg-config file mylib.pc are generated. They contain the information necessary to build against this library with the respective tools. The mylib.qbs file might look like this (the exact content depends on the target platform):

Module {
    Group {
        filesAreTargets: true
        fileTags: "dynamiclibrary"
        files: "../../../libmylib.so.1.0.0"
    }
}

As you can see, the library file is specified using relative paths in order to make the installation relocatable.

Now anyone who wants to make use of the mylib library in their Qbs project can simply do so by declaring a dependency on it: Depends { name: "mylib" }.

System-level Settings

Until now, Qbs settings were always per-user. However, some settings should be shared between all users, for instance global search paths. Therefore, Qbs now supports system-level settings as well. These are considered in addition to the user-level ones, which take precedence in the case of conflicts. System-level settings can be written using the new --system option of the qbs-config tool. This operation usually requires administrator rights.
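A sketch of how that could look on the command line (the settings key and path here are just examples; check the key names against your setup):

```shell
# Write a global search path into the system-level settings (usually needs root)
sudo qbs-config --system preferences.qbsSearchPaths /usr/local/share/qbs

# A user-level value for the same key would take precedence on conflict
qbs-config preferences.qbsSearchPaths "$HOME/qbs-modules"
```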

Language Improvements

We have added a new property type varList for lists of objects. You could already have those by using var properties, but the new type has proper list semantics, that is, values from different modules accumulate.
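For illustration, a module declaring such a property might look like this sketch (module and property names are made up):

```qml
// mymodule/module.qbs (hypothetical)
Module {
    // varList has proper list semantics: if several modules in the
    // dependency chain contribute values, they accumulate into one list
    // instead of one assignment overriding the other.
    property varList extraMappings: [{ key: "from-mymodule", enabled: true }]
}
```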

The FileInfo extension has two new functions, suffix and completeSuffix.
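Assuming they mirror the semantics of QFileInfo's functions of the same name, they split a file name like this sketch:

```qml
import qbs.FileInfo

Product {
    // suffix: everything after the last dot; completeSuffix: everything
    // after the first dot (expected: "gz" and "tar.gz" respectively).
    property string shortSuffix: FileInfo.suffix("archive.tar.gz")
    property string fullSuffix: FileInfo.completeSuffix("archive.tar.gz")
}
```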

Two changes have been made to the Rule item:

C/C++ Support

The cLanguageVersion and cxxLanguageVersion properties are now arrays. If they contain more than one value, then the one corresponding to the highest version of the respective language standard is chosen. This allows different modules to declare different minimum requirements.
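For example, if product code needs C++11 but a depended-upon module already demands C++14, the effective language flag corresponds to the highest entry. A minimal sketch:

```qml
Product {
    Depends { name: "cpp" }
    // Two minimum requirements declared as a list; the highest one
    // (c++14) determines the compiler flag.
    cpp.cxxLanguageVersion: ["c++11", "c++14"]
    files: "main.cpp"
}
```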

Autotest Support

The AutotestRunner item has a new property auxiliaryInputs that can help ensure that additional resources needed for autotest execution (such as helper applications) are built before the autotests run.

The working directory of an autotest is now the directory in which the respective test executable is located or AutotestRunner.workingDirectory, if it is specified. In the future, it will also be possible to set this directory per test executable.

Various things

All command descriptions now list the product name to which the generated artifact belongs. This is particularly helpful for larger projects where several products contain files of the same name, or even use the same source file.

The vcs module no longer requires a repository to create the header file. If the project is not in a repository, then the VCS_REPO_STATE macro will evaluate to a placeholder string.

It is now possible to generate Makefiles from Qbs projects. While it is unlikely that complex Qbs projects are completely representable in the Makefile format, this feature might still be helpful for debugging purposes.

Try It!

The Open Source version is available on the download page, and you can find commercially licensed packages on the Qt Account Portal. Please post issues in our bug tracker. You can also find us on IRC in #qbs on chat.freenode.net, and on the mailing list. The documentation and wiki are also good places to get started.

Qbs is also available on a number of packaging systems (Chocolatey, MacPorts, Homebrew) and updated on each release by the Qbs development team. It can also be installed through the native package management system on a number of Linux distributions including but not limited to Debian, Ubuntu, Fedora, and Arch Linux.

Qbs 1.12.0 is also included in Qt Creator 4.7.0, which was released this week as well.

The post qbs 1.12 released appeared first on Qt Blog.

July 18, 2018

Last Tuesday, July 17, a new podcast of the fourth season of KDE España was broadcast and recorded live. An episode titled KDE Neon, past, present and future, in which this more than interesting project of the KDE Community is analyzed.

KDE Neon, past, present and future, KDE España podcast 04×10

New month, new podcast. The folks at KDE España offered us a new edition of their monthly chat about the KDE world, with two important topics on the table. This time the protagonist was KDE Neon.

KDE Neon, past, present and future, a KDE España podcast

Over the course of just one hour (and with many technical problems) we had with us Jonathan Riddell, KDE Neon developer, accompanied by the usual participants:

  • Ruben Gómez Antolí, member of KDE España and of the HackLab de Almería.
  • Adrián Chaves, vice president of KDE España and KDE developer.
  • Baltasar Ortega @baltolkien, member of KDE España and creator and editor of www.kdeblog.com bortega@kde-espana.org

In addition, these news items were discussed:

A good podcast you can't miss, one that combines current news with one of the most important projects of the KDE Community, and which I share with all of you:

 

I hope you liked it; if so, you know the drill: thumbs up, share, and don't forget to visit and subscribe to KDE España's YouTube channel.

As always, we await your comments, which I assure you are very valuable to the developers, even constructive criticism (the other kind is never good for anyone). Likewise, we would also like to know which topics you would like us to talk about in upcoming podcasts.

I'll take the opportunity to invite you to subscribe to the Ivoox channel of the KDE España podcasts, which will soon be up to date.

Today we’re releasing Krita 4.1.1, the first bug fix release for Krita 4.1.0.

  • Fix loading PyKrita when using PyQt 5.11 (patch by Antonio Rojas, thanks!) (BUG:396381)
  • Fix possible crashes with vector objects (BUG:396145)
  • Fix an issue when resizing pixel brushes in the brush editor (BUG:396136)
  • Fix loading the system language on macOS if more than one language is enabled in macOS
  • Don’t show the unimplemented color picker button in the vector object tool properties docker (BUG:389525)
  • Fix activation of the autosave time after a modify, save, modify cycle (BUG:393266)
  • Fix out-of-range lookups in the cross-channel curve filter (BUG:396244)
  • Fix an assert when pressing PageUp into the reference images layer
  • Avoid a crash when merging layers in isolated mode (BUG:395981)
  • Fix loading files with a transformation mask that uses the box transformation filter (BUG:395979)
  • Fix activating the transform tool if the Box transformation filter was selected (BUG:395979)
  • Warn the user when using an unsupported version of Windows
  • Fix a crash when hiding the last visible channel (BUG:395301)
  • Make it possible to load non-conforming GPL palettes like https://lospec.com/palette-list/endesga-16
  • Simplify display of the warp transformation grid
  • Re-add the Invert Selection menu entry (BUG:395764)
  • Use KFormat to show formatted numbers (Patch by Pino Toscano, thanks!)
  • Hide the color sliders config page
  • Don’t pick colors from fully transparent reference images (BUG:396358)
  • Fix a crash when embedding a reference image
  • Fix some problems when saving and loading reference images (BUG:396143)
  • Fix the color picker tool not working on reference images (BUG:396144)
  • Extend the panning range to include any reference images

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.1.1 on Ubuntu and derivatives. We are working on an updated snap.

OSX

Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux AppImage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

July 17, 2018

The truth is I thought I had already announced this event. In fact, I was going to post a reminder, but apparently this is the first time it appears on the blog… I must be getting old. So I am pleased to share with you a new edition of the Maratón Linuxero, specifically next July 21. Are you going to miss it?

New edition of the Maratón Linuxero: Migrate your home to GNU/Linux

New edition of the Maratón Linuxero: Migrate your home to GNU/Linux: Almost a year has passed since the first Maratón Linuxero in September 2017. Over this year several mini-marathons have been held, not comparable in length but certainly in quality.

Four days before a new edition, I realized I still had not promoted it, so I quickly set out to fix that.

The official announcement reads:

“On Saturday, July 21 at 17:00 (UTC) we will have the chance to enjoy another Maratón Linuxero event. This time we will focus on using GNU/Linux and other open source tools to take full control of our devices and become ever freer!

Migrate your home to GNU/Linux will be its title, and we have the following topics prepared for you:
17:10 (UTC) – Migrating our desktop PC
18:00 (UTC) – Liberating our router
19:00 (UTC) – Our own media center or do-it-all PC
20:00 (UTC) – GNU/Linux and ML news – Closing.

Topics of interest that will surely bring us free knowledge in abundance and new ways to be a little freer.

I am not going to miss it. Are you?

Get in touch with the Maratón Linuxero

And as always when I write Maratón Linuxero posts, I echo the invitation to participate through the usual channels with your questions and suggestions:

Cutelyst, a C++ web framework based on Qt, got a new release. This release has some important bug fixes, so upgrading is strongly recommended.

Most of this release's fixes came from a side project I started called Cloudlyst. I did some work on the NextCloud client, and through that I became interested in how the WebDAV protocol works, so Cloudlyst is a server implementation of WebDAV; it also passes all litmus tests. The WebDAV protocol makes heavy use of REST concepts, and although it uses XML instead of JSON, that is actually a good choice, since XML can be parsed progressively, which is important for large directories.
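As an illustration of that progressive-parsing point, here is what a standard WebDAV listing request looks like (plain RFC 4918, not specific to Cloudlyst): a PROPFIND with Depth: 1 asks for properties of a collection and its direct children, and the multistatus XML response can be consumed incrementally as it streams in:

```http
PROPFIND /files/ HTTP/1.1
Host: example.com
Depth: 1
Content-Type: application/xml; charset=utf-8

<?xml version="1.0" encoding="utf-8"?>
<d:propfind xmlns:d="DAV:">
  <d:prop>
    <d:displayname/>
    <d:getcontentlength/>
    <d:getlastmodified/>
    <d:resourcetype/>
  </d:prop>
</d:propfind>
```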

Since the path URL now has to deal with file paths, it is very important that it handles special characters well, and sadly it did not. I had tried to optimize percent-encoding decoding using a single QString instead of going back and forth with toLatin1() and then fromUtf8(), and this wasn't working at all. To fix this properly, the URL is now parsed a single time, all at once, so the QString path() is fully decoded, which is a little faster and avoids allocations. And this is now unit tested.

Besides that there was:

  • Fix for regression of auto-reloading apps in cutelyst-wsgi
  • Fix csrf token for multipart/form-data (Sebastian Held)
  • Allow compiling WSGI module when Qt was not built with SSL support

The last one, plus another commit, fixes some build issues I had with buildroot, for which I also created a package, so soon you will be able to select Cutelyst from the buildroot menu.

Have fun: https://github.com/cutelyst/cutelyst/releases/tag/v2.5.0

July 16, 2018

Make sure you commit anything you want to end up in the KDE Applications 18.08 release to the right branches :)

We're already past the dependency freeze.

The Freeze and Beta is this Thursday, July 19.

More interesting dates
August 2: KDE Applications 18.08 RC (18.07.90) Tagging and Release
August 9: KDE Applications 18.08 Tagging
August 16: KDE Applications 18.08 Release

https://community.kde.org/Schedules/Applications/18.08_Release_Schedule

July 15, 2018

At the company I’m working at, we’re employing Qt WebChannel for remote access to some of our software. Qt WebChannel was originally designed for interfacing with JavaScript clients, but it’s actually very well suited to interface with any kind of dynamic language.

We’ve created client libraries for a few important languages with as few dependencies as possible: pywebchannel (Python, no dependencies), webchannel.net (.NET/C#, depends on JSON.NET) and webchannel++ (header-only C++14, depends on Niels Lohmann’s JSON library).

Python and .NET are a pretty good match: Their dynamic language features make it possible to use remote methods and properties like they were native. Due to being completely statically typed, C++ makes the interface a little more clunky, although variadic templates help a lot to make it easier to use.

As with the original Qt WebChannel C++ classes, transports are completely user defined. When sensible, a default implementation of a transport is provided.

Long story short, here’s an example of how to use the Python client. It’s designed to talk with the chatserver example of the Qt WebChannel module over a WebSocket. It even supports the asyncio features of Python 3! Relevant excerpt without some of the boilerplate:

async def run(webchannel):
    # Wait for initialized
    await webchannel
    print("Connected.")

    chatserver = webchannel.objects["chatserver"]

    username = None

    async def login():
        nonlocal username
        username = input("Enter your name: ")
        return await chatserver.login(username)

    # Loop until we get a valid username
    while not await login():
        print("Username already taken. Please enter a new one.")

    # Keep the username alive
    chatserver.keepAlive.connect(lambda *args: chatserver.keepAliveResponse(username))

    # Connect to chat signals
    chatserver.newMessage.connect(print_newmessage)
    chatserver.userListChanged.connect(lambda *args: print_newusers(chatserver))

    # Read and send input
    while True:
        msg = await ainput()
        chatserver.sendMessage(username, msg)


print("Connecting...")
loop = asyncio.get_event_loop()
proto = loop.run_until_complete(websockets.client.connect(CHATSERVER_URL, create_protocol=QWebChannelWebSocketProtocol))
loop.run_until_complete(run(proto.webchannel))

pywebchannel can also be used without the async/await syntax and should also be compatible with Python 2.

I would also really like to push the code upstream, but I don’t know when I’ll have the time to spare. Then there’s also the question of how to build and deploy the libraries. Would the qtwebchannel module install to $PYTHONPREFIX? Would it depend on a C# compiler (for which support would have to be added to qmake)?

In any case, I think the client libraries can come in handy and expand the spectrum of application of Qt WebChannel.

I am very happy that the third coding phase has begun. The following work was done in the eighth and ninth weeks:

  • UserScript API
  • Ability to register external QtObject
  • ExtensionScheme API

UserScript API

This is the API for interacting with browser userscripts. It enables a plugin to create, register, remove, and get all the userscripts loaded in the browser. Scripts registered by a plugin are also automatically unregistered when the plugin unloads.

Ability to register external QtObject

With the UserScript API developed, the next step was to build a channel between the qml-plugin and the web engine. The ExternalJsObject type enables a QtObject to be registered as an external JS object with Falkon.ExternalJsObject.registerExtraObject({id, object}), and Falkon.ExternalJsObject.unregisterExtraObject({object}) unregisters the object.
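Based on those names, registering an object from a QML extension might look like this sketch (the import URI and the surrounding scaffolding are assumptions, not Falkon's documented API):

```qml
import QtQml 2.0
import org.kde.falkon 1.0 as Falkon  // hypothetical import URI

QtObject {
    id: helper
    function greet() { return "hello from the plugin"; }

    Component.onCompleted: {
        // Expose 'helper' to page scripts under the given id
        Falkon.ExternalJsObject.registerExtraObject({id: "helper", object: helper});
    }
    Component.onDestruction: {
        Falkon.ExternalJsObject.unregisterExtraObject({object: helper});
    }
}
```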

ExtensionScheme API

Now that both the UserScript and ExternalJsObject APIs are developed, the next step, as suggested by my mentor, was to implement an extension:// page for qml-plugins.

Unfortunately, I have yet to add documentation and tests for these APIs, but I will complete that soon.

Happy Monsoon :-)

Latte Dock v0.8 released!!! The third stable release has just landed!

Go get it from download.kde.org*!
- youtube presentation -

Features in Video
  • Multiple Layouts simultaneously
  • Smart Dynamic Background
  • Unify Global Shortcuts for applets and tasks 
  • User Set Backgrounds
  • Download Community Layouts
  • Smooth Animations


Latte v0.8


If you don't want to build it yourself, you can wait a few days for it to land in your distro's repositories!

Note: Latte v0.8 is compatible only with:

  • Plasma >= 5.12
  • KDE Frameworks >= 5.38
  • Qt >= 5.9

Apart from the features shown in the video, what else can you find in this release, you may ask...
multiple launchers and their separators


Multiple Tasks Separators

Drag and drop the separator widget onto tasks in order to distinguish your launchers

The Unity layout is read-only and its maximized windows do not have borders

New Layout Settings


Lock/Unlock your layouts in order to make them read-only or writable.

Borderless maximized windows according to the current layout

three different dolphin instances

Don't group tasks

The user can choose not to group tasks through the Tasks configuration tab

the active indicator will be a line and all indicators will have a glow with opacity 55% and 3D style

New Appearance Settings

Active Indicator and Glow sections were added so you can adjust them to your preference

Dock mode

Panel/Dock mode

Change between Panel and Dock mode with one click

some "Unity" and "Dock with TopBar" layouts
Community Layouts

Download community-provided layouts from store.kde.org to customize your Latte

telegram with 679 unread messages

Bigger Badges

A badge can now show values of up to 9,999 in order to improve accessibility

access Latte layouts from the plasma taskmanager

Improve Plasma Taskmanagers Experience

Access your Latte options through the context menu when you are using plasma taskmanagers

load default layout, access installed layouts etc.

Command Line Options

Use your command prompt in order to handle Latte startup
Fixes / Improvements
  • Various Wayland improvements. I use it daily on my system with Plasma 5.13 and it provides a fantastic experience with excellent painting.
  • Smoother parabolic animation
  • Support KWin edges when hiding the dock or panel by default
  • New improved splitter icons in Justify (Edit Mode)
  • Improved the entire experience in the Layouts/Latte Settings window
  • Filter windows by launchers: show only windows for which a launcher is already present in the current running activity
  • Vastly improved experience in non-compositing environments: no more 1px line showing at the screen edge when the dock is hidden
  • New global shortcuts to open/hide dock settings and Latte settings (Meta+A, Meta+W, Meta+E)
  • New KWin script to trigger the application menu from a corner-edge
  • Hide the audio badge when no audio is coming from a pulseaudio stream
  • Various fixes for RTL languages
  • New, more robust animations all over the place
  • Plenty of bug fixes and improvements

For new Latte users it might be a good idea to also read/watch Latte Dock v0.7 - "...a tornado is coming..."

-------------------
Icons
      "Papirus" from Alexey Varfolomeev
Plasma Theme
      "Materia" from Alexey Varfolomeev
Wallpaper
      "Sea, Ocean, Water and Wave" from Josh Withers
Layout
      "Dock And TopBar" at store.kde.org
Music
      1. Scott Holmes - "Our Big Adventure" at freemusicarchive.org
      2. Scott Holmes - "Ukulele Whistle" at freemusicarchive.org
      3. Scott Holmes - "Kiss The Sky" at freemusicarchive.org

* Archive Signature:
       gpg key: 325E 97C3 2E60 1F5D 4EAD CF3A 5599 9050 A2D9 110E
Get ready for a humongous week for KDE’s Usability and Productivity initiative! KDE developers and contributors squashed a truly impressive number of bugs this week, all the while adding features and polishing the user interface.

New Features

Bugfixes

UI Polish & Improvement

There are still more bugs I’d like to get fixed ASAP, though, including a number of high-profile regressions with the mouse and touchpad settings in Plasma 5.13. I hope very much that we can fix these soon. If anyone reading this feels like they could help out, please do so! We’re happy to lend a hand when first getting started as a KDE contributor. There are lots of other ways to get involved, too.

If my efforts to perform, guide, and document this work seem useful and you’d like to see more of them, then consider becoming a patron on Patreon, LiberaPay, or PayPal. Also consider making a donation to the KDE e.V. foundation.

July 14, 2018

Hi! I've passed the second evaluation of Google Summer of Code 2018. I am ready for the third phase, but before that I'll give some updates on how my progress with RAID on kpmcore is going. This post will explain how RAID management works on Linux. Linux and RAID devices First of all, you … Continue reading GSoC 2018 – Coding Period (June 26th to July 15th): RAID on Linux

July 13, 2018

Hi! I am going to Akademy this year. It will take place in Vienna, Austria, between August 11th and August 17th. I will talk there about my experiences during Season of KDE 2018 and Google Summer of Code 2018, explaining my work and progress on KDE Partition Manager, kpmcore and Calamares. This will be a great … Continue reading Going to Akademy 2018

One of the things every old application suffers from is old code. It's easier to keep something that works than to move to something new, even if the final result is better. Take a look at the current Tabbar + Buttons of Konsole.

 

Now compare with the one I’m working on:

Everything is working (of course I still need to test many things, but opening, closing, rearranging and detaching all work), it's prettier, and it's around 1.5k fewer lines to maintain.

The Elisa team is happy to announce our new bugfix release, version 0.2.1.

Elisa is a music player developed by the KDE community that strives to be simple and nice to use. We also recognize that we need a flexible product to account for the different workflows and use-cases of our users.

We focus on a very good integration with the Plasma desktop of the KDE community without compromising the support for other platforms (other Linux desktop environments, Windows and Android).

We are creating a reliable product that is a joy to use and respects our users privacy. As such, we will prefer to support online services where users are in control of their data.

In parallel to the development of the next stable release, we have been pushing a few fixes to the current stable version.

0.2.1 version of Elisa

We have fixed the layout of the playlist in the Windows build. It should now be identical to the Linux builds. Here we are using the ability to tune some parts of the interface by providing platform-specific QML components.

We fixed issues that could lead to wrong data shown for albums with multiple discs.

The buttons in the header on top of the views are now aligned with the top and bottom of the big icon.

Improvements in the views header

We will continue to fix issues in the stable version for some weeks and may produce a second bugfix release in one month.

Get Involved

The team would like to thank everyone who contributed to the development of Elisa, including code contributions, testing, and bug reporting.

We are already working on new features for the next release. If you enjoy using Elisa, please consider becoming a contributor yourself. We are happy for any contribution!

We have tagged some tasks as junior jobs. They are a perfect way to start contributing to Elisa (Elisa Workboard).

The Flathub Elisa package provides an easy way to test this new release.

Elisa source code tarball is available here.

0.2.1 release tarball

A Windows installer is also available thanks to the Craft and binary-factory teams.

 

Less than a month left until KDE Akademy 2018. As part of the local organization team, this is going to be a busy time, but having Akademy in such a great city as Vienna is going to be awesome.

Over the next weeks you will find many more “I’m going to Akademy” posts on Planet KDE detailing other people’s Akademy plans. So in this post I don’t want to look forward, but back, and tell you the story of the (in retrospect quite long) process by which a few people from Vienna decided to put in a bid to organize Akademy 2018.

The story starts 5 years ago, around this time, in the beautiful city of Bilbao at Akademy 2013. There I met Joseph from Vienna and Kevin from Graz, the first two Austrian KDE contributors I had met. It was probably sometime during that week that Joseph and I talked about organizing an Akademy in Vienna. Joseph worked at TU Wien at that time and I was a student there, so we already had good connections to the university.

Later that year Joseph and I met at TU to talk further about a possible Akademy in Vienna; it was clear at that time that this would happen in 2015 at the earliest, also because the Call for Locations was sent out quite late (in autumn, after Akademy).

Not much happened until Akademy 2014, where we had the “KDE (in) Austria BoF”, about which I wrote a few years ago. There the plan was to have more KDE/Qt talks and a bigger KDE presence at the Linuxwochen Wien, thus organizing a sort of “Akademy AT”. Unfortunately we didn’t do this, and Joseph and I also did not really pursue our plan to organize Akademy in Vienna. But while those things never happened, the Austrian KDE community has met up more or less regularly since then.

In the following years I always attended Akademy and also volunteered at QtCon 2016, already with the intent of getting a bit of insight into how Akademy is organized.

In April 2017, surprisingly early, the Call for Locations for Akademy 2018 was published, and I knew: now or never. While still being a student with contacts at TU Wien, I was already working part-time as a freelancer and knew I could manage to dedicate time to the organization. So discussions on the kde-at mailing list followed, and some first draft content for the proposal was gathered.

And it nearly got put on the pile of unfinished projects again, if it weren't for a students' BBQ at TU Wien, where I met Lukas, who in 2017 was a GSoC student. I already knew him from university, where he had attended a course I organized, and he was going to his first Akademy in Almería. We talked about the idea of bringing Akademy 2018 to Vienna, and the later the evening got and the more beer we had, the surer we became: we were going to do this.

So by the time I came home from the BBQ it was already the 2nd of June (interest should have been announced by the 1st of June), but I sat down and wrote a mail saying that Vienna would be interested in hosting Akademy.

Two intensive weeks of research, meetings with FSINF (the TU Wien computer science students' council), and proposal writing followed, until the full proposal was sent half an hour before the deadline.

Afterwards a long waiting period followed (at that time I wasn't yet a member of KDE e.V., so I couldn't even see the discussions on the members' mailing list or the other proposals).

That lasted until Akademy 2017, where Lukas sent me near-real-time updates about the status of our Akademy 2018 proposal. On Sunday he got the news that the Vienna proposal had been selected and would be announced that day in the closing session. He even had to get up on stage, during his first Akademy, and give a first spoiler for Akademy 2018.

So this is the story of the long journey from the idea to the realization of an Akademy in Vienna. I hope you enjoyed it, and I look forward to seeing you all at Akademy 2018.

See you in Vienna!

In about a month I’ll be in the beautiful city of Vienna, giving a talk on the weird stuff I make using ImageMagick, Kdenlive, Synfig and FFmpeg so I can construct videos so bad and campy you could almost confuse them for being ironic…

Almost.

The Performance Analyzer

You may have heard about the Performance Analyzer (called “CPU Usage Analyzer” in Qt Creator 4.6 and earlier). It is all about profiling applications using the excellent “perf” tool on Linux. You can use it locally on a Linux-based desktop system or on various embedded devices. perf can record a variety of events that may occur in your application. Among these are cache misses, memory loads, context switches, or the most common one, CPU cycles, which periodically records a stack sample after a number of CPU cycles have passed. The resulting profile shows you what functions in your application take the most CPU cycles. This is the Performance Analyzer’s most prominent use case, at least so far.

Creating trace points

With Qt Creator 4.7 you can also record events for trace points, and if your trace points follow a certain naming convention Qt Creator will know they signify resource allocations or releases. Therefore, by setting trace points on malloc, free, and friends you can trace your application’s heap usage. To help you set up trace points for this use case, Qt Creator ships a shell script and prompts you to run it. First, open your project and choose the run configuration you want to examine. Then just select the “Create trace points …” button on the Analyzer title bar and you get:

Memory Profiling: Creating trace points

How does it work?

In order for non-privileged users to be able to use the trace points, the script has to make the kernel debug and tracing file systems available to all users of the system. You should only do this in controlled environments. The script will generally work for 32-bit ARM and 64-bit x86 systems. 64-bit ARM systems can only accept the trace points if you are using a Linux kernel of version 4.10 or greater. In order to set trace points on 32-bit x86 systems you need to have debug symbols for your standard C library available.

The script will try to create trace points for any binary called libc.so.6 it finds in /lib. If you have a 64-bit system with additional 32-bit libraries installed, it will try to create trace points for both sub-architectures. It may only succeed for one of them. This is not a problem if your application targets the sub-architecture for which the script succeeded in setting the trace points.

Troubleshooting

If the trace point script fails, you may want to check that your kernel was compiled with the CONFIG_UPROBE_EVENT option enabled. Without this option the kernel does not support user trace points at all. All 32-bit ARM images shipped with Qt for Device Creation have this option enabled from version 5.11 on. Most Linux distributions intended for desktop use enable CONFIG_UPROBE_EVENT by default.

Using trace points for profiling

After creating the trace points, you need to tell Qt Creator to use them for profiling. There is a convenient shortcut for this in the Performance Analyzer settings. You can access the settings either for your specific project in the “Run” settings in “Projects” mode, or globally from the “Options” in the “Tools” menu. Just select “Use Trace Points”. Then Qt Creator will replace your current event setup with any trace points it finds on the target system, and make sure to record a sample each time a trace point is hit.

Memory Profiling: Adding trace events to Qt Creator

After this, you only need to press the “Start” button in the profiler tool bar to profile your application. After the application terminates, Qt Creator collects the profile data and displays it.

Interpreting the data

The easiest way to figure out which pieces of code are wasting a lot of memory is by looking at the flame graph view. In order to get the most meaningful results, choose the “Peak Usage” mode in the top right. This will show you a flame graph sorted by the accumulated amount of memory allocated by the given call chains. Consider this example:

Memory Profiling: Flame Graph of peak usage

Findings

What you see here is a profile of Qt Creator loading a large QML trace into the QML Profiler. The QML profiler uses a lot of memory when showing large traces. This profile tells us some details about the usage. Among other things this flame graph tells us that:

  • The models for Timeline, Statistics, and Flame Graph views consume about 43% of the peak memory usage. TimelineTraceManager::appendEvent(…) dispatches the events to the various models and causes the allocations.
  • Of these, the largest part, 18.9%, is for the Timeline range models. The JavaScript, Bindings, and Signal Handling categories are range models. They keep a vector of extra data, with an entry for each such range. You can see the QArrayData::allocate(…) that allocates memory for these vectors.
  • Rendering the Timeline consumes most of the memory not allocated for the basic models. In particular Timeline::NodeUpdater::run() shows up in all of the other stack traces. This function is responsible for populating the geometry used for rendering the timeline categories. Therefore, QSGGeometry::allocate(…) is what we see as direct cause for the allocations. This also tells us why the QML profiler needs a graphics card with multiple gigabytes of memory to display such traces.

Possible Optimizations

From here, it’s easy to come up with ideas for optimizing the offending functions. We might reconsider if we actually need all the data stored in the various models, or we might temporarily save it to disk while we don’t need it. The overwhelming amount of geometry allocated here also tells us that the threshold for coalescing adjacent events in dense traces might be too low. Finally, we might be able to release the geometry in main memory once we have uploaded it to the GPU.

Tracing Overhead

Profiling each and every malloc() and free() call in your application will result in considerable overhead. The kernel will most likely not be able to keep up and will therefore drop some of the samples. Depending on your specific workload the resulting profile can still give you relevant insights, though. In other words: If your application allocates a huge amount of memory in only a handful of calls to malloc(), while also allocating and releasing small amounts at a high frequency, you might miss the malloc() calls you are interested in because the kernel might drop them. However, if the problematic malloc() calls form a larger percentage of the total number of calls, you are likely to catch at least some of them.

In any case, Qt Creator will present you with absolute numbers for allocations, releases, and peak memory usage. These numbers refer to the samples perf actually reported, and therefore are not totally accurate. Other tools will report different numbers.

Special allocation functions

Furthermore, there are memory allocation functions you cannot use for profiling this way. In particular posix_memalign() does not return the resulting pointer on the stack or in a register. Therefore, we cannot record it with a trace point. Also, custom memory allocators you may use for your application are not handled by the default trace points. For example, the JavaScript heap allocator used by QML will not show up in the profile. For this particular case you can use the QML Profiler, though. Also, there are various drop-in replacements for the standard C allocation functions, for example jemalloc or tcmalloc. If you want to track these, you need to define custom trace points.

Conclusion

Profiling memory usage with Qt Creator’s Performance Analyzer is an easy and fast way to gain important insights about your application’s memory usage. It works out of the box for any Linux targets supported by Qt Creator. You can immediately browse the resulting profile data in an easily accessible GUI, without any further processing or data transfer. Other tools can produce more accurate data. However, for a quick overview of your application’s memory usage the Performance Analyzer is often the best tool.

The post Profiling memory usage on Linux with Qt Creator 4.7 appeared first on Qt Blog.

Hi Everyone,

I am working on the GSoC project Verifying signatures of pdf files, and since the last blog post I have made a number of improvements. They are listed below.

1. Signature Properties Dialog

This is an improved version with better layout and messages.

Signature Properties Dialog

2. Signature Panel

This is a sidebar widget in Okular that lists all digital signatures present in a document. In the future, I plan to add a context menu to allow verifying a single signature as well as all signatures at once, and viewing the signed revision.

Signature Panel

3. Revision Viewer

This is a dialog similar to the print preview dialog, but instead of previewing what is about to be printed it loads the data covered by a signature in a read-only KPart. In its current state this dialog is PDF-specific. This is problematic since Okular is a universal document viewer, so I plan to make it a bit more generic.

Revision Viewer

4. API changes

Poppler does not provide any details of the signing certificate. So I’ve filed two bugs (107055 and 107056) with patches attached for the said task.

Things I dropped

  • Revision manager

    It turned out to be a mishmash of signature panel and revision viewer. So I dropped it.

  • Signature Summary Dialog

    Initially I liked the idea but now I don’t.

Finally, the animation below sums up my progress.

Phase2 GIF

Thanks for reading :)

Forgot to post last week…

In this 2 weeks, I have been adding features.

The first step was to change the underlying data structure for the palettes to enable modification of entries based on their rows and columns. As mentioned before, the data structure is now a vector of maps (red-black trees). I’m considering adding panning to this docker, which means that, horizontally, the view needs to be able to show columns with relatively large indices. If panning turns out to be achievable, I’ll try to make the model able to handle larger row numbers.

Then there’s the GUI. The docker structure hasn’t changed too much. All elements are still there, with some minor changes in the structure of the interface. The view now stretches to fill the horizontally available space, which I personally feel is easier for users to handle than before. Right-clicking the swatches now gives users options to modify the palette. The palette now shows empty entries, as designed before.

The new palette list widget is now functioning. Instead of placing the GUI used to add a new palette in the widget itself, these GUI elements are now placed in a dialog. I feel this will help users keep better track of what they have done; I hope others feel the same. New “export” and “edit” buttons have been added, too. The latter should help users change the name and number of columns of a palette; of course, if panning is done, the number of columns will be handled by the MVC system itself.

I wonder if the palettes still need the tag system. All right, a question to ask in the next meeting.

These 2 weeks have been great for me, because I had a chance to really get familiarized with the Qt MVC system. I believe I’ll be confident when I need to use it in future projects.

The next step is to make Krita store the palettes used in a painting in its .kra file. There seems to be some annoying dependency stuff, but I should be able to handle it.

July 12, 2018

I’m pleased to announce the immediate availability of Kube 0.7.0

Over the past year or so we’ve done a lot of work building and maturing Kube and its underlying platform Sink.
Since the last publicly announced release, 0.3.0, there have been 413 commits to Sink and 851 to Kube. Since that diff is rather large I’ll spare you the changelog and do a quick recap of what we have instead:

  • A conversation view that allows you to read through conversations in chronological order.
  • A conversation list that bundles all messages of a conversation (thread) together.
  • A simple composer that supports drafts and has autocompletion (assisted by the addressbook) for all recipients.
  • GPG support for reading and writing messages (signing and encryption).
  • Automatic attachment of own public key.
  • Opening and saving of attachments.
  • Rendering of embedded messages.
  • A read-only addressbook via CardDAV.
  • Full keyboard navigation.
  • Fulltext search for all locally available messages.
  • An unintrusive new mail hint in the form of a highlighted folder.
  • Kube is completely configuration free apart from the account setup.
  • The account setup can be fully scripted through the sinksh commandline interface.
  • Available for Mac OS.
  • Builds on Windows (but sadly doesn’t completely work yet).
  • The dependency chain has been reduced to the necessary minimum.

While things still change rapidly and we have in no way reached the end of our ever growing roadmap, Kube has already become my favorite email client. YMMV.

Outlook

Turns out we’re not done yet. Among the next plans we have:

  • A calendar via CalDAV (A first iteration is already complete).
  • Creation of new addressbook entries.
  • A dedicated search view.

While we remain committed to building a first class email experience we’re starting to venture a little beyond that with calendaring, while keeping our eyes focused on the grander vision of a tool that isn’t just yet another email client, but an assistant that helps you manage communication, time and tasks.

Tarballs

Get It!

Of course the release is already outdated, so you may want to try a flatpak or some distro provided package instead:

https://kube.kde.org/getit.html

“Kube is a modern communication and collaboration client built with QtQuick on top of a high performance, low resource usage core. It provides online and offline access to all your mail, contacts, calendars, notes, todo’s and more. With a strong focus on usability, the team works with designers and UX experts from the ground up, to build a product that is not only visually appealing but also a joy to use.”

For more info, head over to: kube.kde.org

The story

As I described in the introductory post, KDE has been working towards a trinity of goals and I have been responsible for pushing forward the Streamlined onboarding of new contributors one. Half a year has passed since my initial blog post and with Akademy, KDE’s annual conference, coming up in a month this is a great time to post a quick update on related developments.

Progress so far

Over the past months I tried to organize some key objectives related to this goal, which sounded ambitious to begin with.


The Debian community meets at Debconf 6 in Mexico. Photo by Joey Hess, licensed under CC By 4.0.

Since the KDE Advisory Board was created in 2016, we have been encouraging more and more organizations to join it, either as patrons or as non-profit partner organizations. With Ubuntu (via Canonical) and openSUSE (via SUSE) we already had two popular Linux distributions represented in the Advisory board. They are now joined by one of the biggest and oldest purely community-driven distributions: Debian.

KDE has a long-standing and friendly relationship with Debian, and we are happy to formalize it now. Having Debian on our Advisory Board will allow us to learn from them, share our experience with them, and deepen our collaboration even further.

As is tradition, we will now hand over the stage to the Debian Project Leader, Chris Lamb, who will tell you a bit about Debian and why he is happy to accept our invitation to the Advisory Board:


Chris Lamb.
Debian is a stable, free and popular computer operating system trusted by millions of people across the globe, from solo backpackers, to astronauts on the International Space Station, and from small companies, to huge organisations.

Founded in 1993, Debian has since grown into a volunteer organisation of over 2,000 developers from more than 70 countries worldwide collaborating every day via the Internet.

The KDE Plasma desktop environment is fully-supported within Debian and thus the Debian Project is extremely excited to be formally recognising the relationship between itself and KDE, especially how that will greatly increase and facilitate our communication and collaboration.

July 11, 2018

There is an ongoing debate about freedom and fairness on the web. I'm coming from the free and open source software community. From this perspective it's very clear that the freedoms to use, share, and modify software are the cornerstones of sustainable software development. They create the common base on which we can all build and unleash the value of software, which is said to be eating the world. And the world seems to agree with that more and more.

But what does this look like with software we don't run ourselves, with software which is provided as a service? How does this apply to Facebook, to Google, to Salesforce, to all the others who run web services? The question of freedom becomes much more complicated there, because the software is not distributed, so the mechanisms through which free and open source software became successful don't apply anymore.

The scandal around data from Facebook being abused shows that there are new moral questions. The European General Data Protection Regulation has brought wide attention to the question of privacy in the context of web services. The sale of GitHub to Microsoft has stirred discussions in the open source community which relies a lot on GitHub as kind of a home for open source software. What does that mean to the freedoms of users, the freedoms of people?

I have talked about the topic of freedom and web services a lot, and one result is the Fair Web Services project, which is supposed to give some definitions of how freedom and fairness can be preserved in a world of software services. It's an ongoing project and I hope we can create a specification for what a fair web service is as a result.

I would like to invite you to follow this project and the discussions around this topic by subscribing to the weekly Fair Web Services Newsletter, which I have maintained for about a year now. Look at the archive to get some history, and sign up to get the latest posts fresh to your inbox.

The opportunities we have with the web are mind-boggling. We can do a lot of great things there. Let's make sure we make use of these opportunities in a responsible way.

I’ve been working lately on a command line application called Bard, which is a music manager for your local music collection. Bard does acoustic fingerprinting of your songs (using acoustid) and stores all song metadata in an sqlite database. With this, you can do queries and find song duplicates easily even if the songs are not correctly tagged. I’ll talk more about Bard and its features in another post, but here I wanted to talk about the algorithm used to find song duplicates and how I optimized it to run around 8000 times faster.

The algorithm

To find out if two songs are similar, you have to compare their acoustic fingerprints. That seems easy (and in fact, it is), but it’s not as straightforward as it seems. A fingerprint (as acoustid calculates it) is not just a number, but an array of numbers, or better said, an array of bits, so you can’t just compare the numbers themselves, but you have to compare the bits in those numbers. If all bits are exactly the same, the songs are considered the same, if 99% of bits are the same then that means there’s a 99% chance it’s the same tune, maybe differing because of encoding issues (like one song being encoded as mp3 with 192 kbits/s and the other with 128 kbits/s).

But there are more things to have in mind when comparing songs. Sometimes they have different silence lengths at the beginning or end, so the bits that we compare are not correctly aligned and they don’t match as they are calculated, but could match if we shifted one of the fingerprints a bit.

This means that to compare two songs, we don’t just compare the two fingerprints once; we have to simulate increasing/decreasing silence lengths at the beginning of a song by shifting its fingerprint, and calculate the match level for each shift to see if it improves/worsens the similarity percentage. Right now, Bard shifts the array by 100 positions in one direction and then another 100 positions in the other direction, which means that for each pair of songs we compare the two fingerprints 200 times.

Then, if we want to compare all of the songs in a collection to find duplicates, we need to compare songs 1 and 2, then compare song 3 with songs 1 and 2, and in general, each song with all previous ones. This means for a collection with 1000 songs we need to compare 1000*999/2 = 499500 song pairs (or 99,900,000 fingerprint comparisons).

The initial python implementation

Bard is written in python, so my first implementation used a python list to store the array of integers from the fingerprint. For each iteration in which I have to shift the array numbers, I prepend a 0 to one fingerprint array and then iterate over both arrays, comparing each element. To do this comparison, I xor the two elements and then use a standard, well-known algorithm to count the bits which are set in an integer:

def count_bits_set(i):
    # classic 32-bit SWAR popcount
    i = i - ((i >> 1) & 0x55555555)
    i = (i & 0x33333333) + ((i >> 2) & 0x33333333)
    return (((i + (i >> 4) & 0xF0F0F0F) * 0x1010101) & 0xffffffff) >> 24

Let’s use the speed of this implementation as a reference and call it 1x.
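For reference, the same SWAR (“SIMD within a register”) trick translates directly to C++ (this is the classic 32-bit bit-counting idiom, shown here because the later sections move to C++):

```cpp
#include <cstdint>

// Classic SWAR popcount for a 32-bit integer: sum bits in pairs, then
// in nibbles, then let the multiply accumulate all byte counts into the
// top byte. Same idea as the Python version above.
uint32_t count_bits_set(uint32_t i)
{
    i = i - ((i >> 1) & 0x55555555);                 // 2-bit sums
    i = (i & 0x33333333) + ((i >> 2) & 0x33333333);  // 4-bit sums
    return (((i + (i >> 4)) & 0x0F0F0F0F) * 0x01010101) >> 24;
}
```

In practice you would rather use a compiler builtin such as gcc’s __builtin_popcount, which compiles down to a single instruction on modern CPUs.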

First improvement

As a first improvement I tried replacing that bit counting algorithm with gmpy.popcount which was faster and also improved the algorithm by introducing a canceling threshold. This new algorithm stops comparing two fingerprints as soon as it’s mathematically impossible to have a match over the canceling threshold. So for example, if we are iterating the fingerprints and we calculate that even if all remaining bits would match we wouldn’t get at least a 55% match between songs, we just return “different songs” (but we still need to shift songs and try again, just in case).
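A minimal sketch of that early-cancellation idea (written in C++ like the rest of the examples here, although Bard at this stage was still Python; the function name is made up for illustration, and __builtin_popcount is a gcc/clang builtin):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Compare two fingerprints and bail out as soon as the requested
// similarity threshold becomes mathematically unreachable, even if
// every remaining bit were to match. Returns -1.0 on cancellation.
double similarity_or_cancel(const std::vector<uint32_t>& a,
                            const std::vector<uint32_t>& b,
                            double threshold)
{
    const std::size_t n = std::min(a.size(), b.size());
    const double total_bits = n * 32.0;
    std::size_t equal = 0;
    for (std::size_t i = 0; i < n; ++i) {
        equal += 32 - __builtin_popcount(a[i] ^ b[i]);
        const std::size_t remaining = (n - i - 1) * 32;
        if (equal + remaining < threshold * total_bits)
            return -1.0;  // "different songs": threshold is out of reach
    }
    return total_bits > 0 ? equal / total_bits : 0.0;
}
```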

With these improvements (commit) the comparisons ran at nearly an exact 2x speed.

Enter C++17

At that time, I thought that this code wouldn’t scale nicely to a large music collection, so I decided Bard needed a much better implementation. Modifying memory is slow, and C/C++ allows for much more fine-grained low-level optimizations, but I didn’t want to rewrite the application in C++, so I used Boost.Python to implement just this algorithm in C++ and call it from the python application. I should say that I found it really easy to integrate C++ methods with Python, and I absolutely recommend Boost.Python.

With the new implementation in C++ (commit) I used an STL vector to store the fingerprints, and I add the maximum offsets in advance so I don’t need to modify the vector elements during the algorithm; instead I access elements by applying the shifting offsets on the fly. I also use an STL map to store all fingerprints indexed by song ID. Finally, I added another important optimization: using the CPU’s instructions to count set bits, via gcc’s __builtin_popcount.
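The shift-without-copy idea can be sketched like this (a simplified illustration with made-up names, not Bard’s actual code; the offset stands in for added leading silence):

```cpp
#include <cstdint>
#include <vector>

// Compare fingerprint `b` shifted by `offset` words against `a`,
// without prepending zeros or copying anything: the offset is applied
// when indexing, so both vectors stay untouched.
double similarity_at_offset(const std::vector<uint32_t>& a,
                            const std::vector<uint32_t>& b,
                            std::size_t offset)
{
    std::size_t total = 0, equal = 0;
    for (std::size_t i = 0; i < a.size() && i + offset < b.size(); ++i) {
        equal += 32 - __builtin_popcount(a[i] ^ b[i + offset]);
        total += 32;
    }
    return total ? static_cast<double>(equal) / total : 0.0;
}
```

An outer loop would then call this for each offset from 0 to 100 (and with the roles of a and b swapped for the opposite direction) and keep the best score.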

The best part of this algorithm is that during the comparison itself no fingerprint is copied or modified, which translated into a speed of 126.47x. At this point I started calculating another measure: songs compared per second (remember we’re doing 200 fingerprint comparisons for each pair of songs). This algorithm gave an average speed of 580 songs/second. Put another way, comparing our example collection of 1000 songs would take 14 min 22 sec (note that the original implementation in Python would take approximately 1 day, 6 hours, 16 minutes and 57 seconds).

First try to parallelize the algorithm

I use an i7 CPU to run Bard and I always thought it was a pity that it only used one core. Since the algorithm that compares two songs no longer modifies their data, I thought it might be interesting to parallelize it so it could run on all 8 cores at the same time, and just coalesce the results of the independent iterations. So I wondered how to do it and noticed that when I compare each song with all previous ones, this is done using a for loop that iterates over a std::map containing all songs that were already processed. Wouldn’t it be nice to have a for-each loop implementation that runs each iteration on a different thread? Well, there is! std::for_each in C++17 allows you to specify an ExecutionPolicy which tells it to run the iterations in different threads. Now the bad news: this part of the standard is not completely supported by gcc yet.
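For completeness, this is what the standard C++17 form looks like on a toolchain that supports it (on gcc that later meant gcc 9 with TBB; this is a toy stand-in for the pairwise song comparison, and the counter is atomic because iterations may run concurrently):

```cpp
#include <algorithm>
#include <atomic>
#include <execution>
#include <numeric>
#include <vector>

// Run one iteration per element, potentially on different threads, and
// aggregate the results through an atomic counter.
long parallel_count_multiples_of_7(int n)
{
    std::vector<int> ids(n);
    std::iota(ids.begin(), ids.end(), 0);  // 0, 1, ..., n-1
    std::atomic<long> hits{0};
    std::for_each(std::execution::par, ids.begin(), ids.end(),
                  [&hits](int id) { if (id % 7 == 0) ++hits; });
    return hits.load();
}
```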

So I searched for a for_each implementation and found this stackoverflow question which included one. The problem is that the question mentions the implementation was copied from the C++ Concurrency in Action book and I’m not sure of the license of that code, so I can’t just copy it into Bard. But I can make some tests with it just for measurement purposes.

This increased the speed to 1897x or ~8700 songs/second (1000 songs would be processed in 57 seconds). Impressive, isn’t it? Well… keep reading.

Second parallelization try

So I needed to find a parallelized for_each version I could use. Fortunately, I kept looking and found out that gcc includes an experimental parallel implementation of some of the C++ standard library algorithms, including __gnu_parallel::for_each (there are more parallelized algorithms documented on that page). You just need to link against the OpenMP library.

So I rushed to change the code to use it and hit a problem: I used __gnu_parallel::for_each, but every time I tested, it only ran sequentially! It took me a while to find out what was happening, but after reading the gcc implementation of __gnu_parallel::for_each I noticed it requires a random access iterator, while I was iterating over a std::map, and maps have bidirectional iterators, not random-access ones.

So I changed the code (commit) to first copy the fingerprints from the std::map<int, std::vector<int>> to a std::vector<std::pair<int,std::vector<int>>> and with just that, the same __gnu_parallel::for_each line ran using a pool of 8 threads.
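The conversion is straightforward; a sketch (with hypothetical names) of the copy that gives __gnu_parallel::for_each the random-access iterators it wants:

```cpp
#include <map>
#include <utility>
#include <vector>

using Fingerprint = std::vector<int>;

// std::map only provides bidirectional iterators, so copy the (song ID,
// fingerprint) pairs into a vector, whose iterators are random-access.
// reserve() makes the copy a single allocation.
std::vector<std::pair<int, Fingerprint>>
to_vector(const std::map<int, Fingerprint>& songs)
{
    std::vector<std::pair<int, Fingerprint>> out;
    out.reserve(songs.size());
    for (const auto& entry : songs)
        out.push_back(entry);
    return out;
}
```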

The gcc implementation proved to be faster than the one from the stackoverflow question, with a speed of 2442x, ~11200 songs/second and 44 seconds.

The obvious but important improvement I forgot about

While watching the compiler build Bard I noticed I wasn’t using compiler flags to optimize for speed! So I tried adding -Ofast -march=native -mtune=native -funroll-loops to the compiler invocation (commit). Just that. Guess what happened…

The speed rose to a very nice 6552x, ~30050 songs/second and 16 seconds.

The Tumbleweed improvement I got for free

I’m running openSUSE Tumbleweed on the system I use for development, which, as you probably know, is a (very nice) rolling release distribution. One day, while doing these tests, Tumbleweed updated the default compiler from gcc 7.3 to gcc 8.1. So I thought that deserved another measurement.

Just changing the compiler to the newer version increased the speed to 7714x, 35380 songs/second and 14 seconds.

The final optimization

An obvious improvement I hadn’t done yet was replacing the map with a vector so I don’t have to convert it before each for_each call. Also, vectors allow reserving space in advance, and since I know the final size the vector will have at the end of the whole algorithm, I changed the code to use reserve wisely.

This commit gave the last increase of speed, to 7998x, 36680 songs/second and would fully process a music collection of 1000 songs in just 13 seconds.

Conclusion

Just some notes I think are important to keep in mind from this experience:

  • Spend some time thinking how to optimize your code. It’s worth it.
  • If you use C++ and can afford to use a modern compiler, use C++17, it allows you to make MUCH better/nicer code, more efficiently. Lambdas, structured bindings, constexpr, etc. are really worth the time spent reading about them.
  • Allow the compiler to do stuff for you. It can optimize your code without any effort from your side.
  • Copy/move data as little as possible. It’s slow, and many times it can be avoided just by thinking a bit about data structures before starting to develop.
  • Use threads whenever possible.
  • And probably the most important note: Measure everything. You can’t improve what you can’t measure (well, technically you can, but you won’t know for sure).

EDIT: Thanks to a tip in the comments, Qt 5.6 support is restored for now. Something broke on Fedora 27, though. Once Dolphin adopts Qt 5.9-specific features, I’ll proceed as originally mentioned below.

Original post:
A day after I formally announced my game console emulator repository, the Dolphin Emulator guys decided to merge a patch that makes Qt 5.9 mandatory. That means Dolphin is no longer compatible with openSUSE Leap 42.3 which comes with Qt 5.6.

I take pride in delivering a high-quality product, even if it’s just free video game stuff. Therefore, instead of simply disabling 42.3 and calling it a day, my plan is this:

I’ll pick the last commit before that patch and build that Dolphin revision. Then I’ll disable the 42.3 target and build the most recent version for the other distributions. That way the last 42.3-compatible binaries stay on the download server until I remove the 42.3 target entirely which will be either when Leap 15.1 gets released or maybe even earlier.

I don’t think the impact will be that big, though. For gaming it’s important to migrate to a new base OS anyway, because of all the performance improvements that come with new kernel and Mesa versions, but for now the 42.3 users are covered.

July 10, 2018

KDE Project:

Hi,

there is a new book on CMake (there are not many):
Professional CMake - A practical guide.

I haven't read it yet, so I cannot say more about it, but I guess it should be useful.

Hi, it’s been a while since I last wrote a blog post, and many things have happened in the WikiToLearn ecosystem. The course editor mode is almost finished: now you can add, remove and edit chapters in a course, with new revamped Dialog and Modal components for confirming and editing views. You can see it in action below.

This work needed an update to the API backend and some discussion with mentors, so it took some time to implement (because, you know, _etags…).
I introduced a feature not present in the previous Modal and Dialog components: they are dismissed when clicking outside their view. This is important because Modals which don’t have “confirm” or “dismiss” buttons couldn’t be closed before.

Another important piece of work has been making the typography fluid. When resizing the window the typography should change little by little, and media queries aren’t enough for that. This has been done using viewport units, so font sizes change smoothly (check https://www.youtube.com/watch?v=Wb5xDcUNq48&t=2s for more information).

Now WTLIcons are clickable, more styles have been fixed, and the PWA banner is displaying again!

Now we can work on the page editor and the course history viewer!

The post WikiToLearn web app course editor almost done appeared first on Blogs from WikiToLearn.

The Kubuntu Community is pleased to announce that KDE Plasma 5.12.6, the latest bugfix release for Plasma 5.12, has been made available for Kubuntu 18.04 LTS (the Bionic Beaver) users via normal updates.

The full changelog for 5.12.6 contains scores of fixes, including fixes and polish for Discover and the desktop.

These fixes should be immediately available through normal updates.

The Kubuntu team wishes users a happy experience with the excellent 5.12 LTS desktop, and thanks the KDE/Plasma team for such a wonderful desktop to package.

I already posted the Seven Lessons of Open Source Governance from my talk at FOSS Backstage. Another part of the talk was about a project to map open source governance models. The idea is to have a machine-readable collection of data about how different projects implement their governance and a web page showing that as an overview. This should help with learning from what others have done and provide a resource for further analysis. It's meant as a map, not a navigation system. You still will have to think about what is the right thing to do for your project.


The project is up on GitHub right now. For each project there is a YAML file collecting data such as project name, founding date, links to web sites, governance documents, statistics, or maintainer lists. It's interesting to look into the different implementations of governance there. There is a lot of good material, especially if you look at the mature and well-established foundations such as The Apache Foundation or the Eclipse Foundation. I'm also looking into syncing with some other sources which have similar data such as Choose A Foundation or Wikidata.

The web site is minimalistic for now. We'll have to see what proves to be useful and adapt it to serve those needs. Having access to the data of different projects is useful, but maybe it would also be useful to have a list of codes of conduct, a comparison of organisation types, or other overview pages.


If you would like to contribute some data about the governance of an open source project which is not listed there, or you have more details about one which is already listed, please don't hesitate to contribute. Create a pull request or open an issue and I'll get the information added.

This is a nice small fun project. SUSE Hack Week gives me a bit of time to work on it. If you would like to join, please get in touch.



Planet KDE is made from the blogs of KDE's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet KDE with RSS, FOAF or OPML.