June 18, 2018

I’ve learned that IBM Travelstar 40GB drives use glass platters. I learned this the fun way, by bending one in a vice, with a big set of pliers. It went snap, tinkle — a different sound from other drives. And after that the bendy drive was usable as a maraca!

So why was I bending drives in the first place?

Well, I volunteer some of my time at a local second-hand place called Stichting Overal. It is an “idealistic” organisation that uses the revenue from second-hand sales to support various projects (generally small-scale development, like funding the construction of sanitation in schools). Like most second-hand stores, there’s clothes and ancient kitchen appliances and books and used bicycles .. and also an IT corner.

I help out Tom, a local school kid who has run the IT corner for some time. From time to time a PC, a monitor, or some random IT crap is dropped off for re-use, and we apply triage. Yes, that is a lovely 486DX2, but it is still going into the bin. For various reasons, there’s a mismatch between supply and demand of hard drives: we end up with piles of small ATA-33 drives, and very few 80GB-or-more SATA drives.

Machines that show up and are not immediately consigned to the bin are thoroughly cleaned (’cause, eww). Some machines are cannibalized for parts for others. Working, usable hard drives are wiped, and then re-triaged. Since we don’t want to leak whatever data is on the drives (even after wiping, and customers aren’t always all that careful about what they bring in either), leftover drives are destroyed.

So that’s why I was contorting a laptop drive. Here’s a Christmas ornament I have made out of a desktop 3.5″ drive.

Machines that get through this gauntlet are dd’ed with zeroes, then installed with some flavor of GNU/Linux. Even if there’s a valid Windows license attached to the machine, getting that installed correctly and legally is way more effort for us than doing something we know is right (and Free). Lately it’s been Fedora 27 with rpmfusion, and a KDE Plasma 5 desktop (I didn’t do the choosing, so this was a pleasant surprise). Frankly, I’m not convinced we’re doing a really good job in delivering the Linux desktop PC experience to the buyers, since there’s no Linux / Fedora documentation included (now that I write that down, I realise we should probably check whether there are licensing obligations we need to follow up on). What it kinda needs is an OEM installer to do some post-sale configuration like setting up a user (I can think of at least one).

For strange reasons, I have almost never published the Plasma release schedule. I broke that trend with Plasma 5.13, and it is time to keep correcting that mistake. It is time to keep documenting the continuous evolution of the KDE Community and its commitment to constancy and continuous improvement. It is time to publish the Plasma 5.14 release schedule and not lose the habit for future releases.

Having a pre-established work plan is fundamental for teams to function. This schedule must answer two very explicit questions: what needs to be done, and when it must be done. In addition, the team's internal workflow answers another question that is just as important: who is going to do it.

The KDE developers have this working methodology perfectly clear and established: as usual, they not only mark it in their own calendars, they also make it public.

Plasma 5.14 release schedule

If you have a calendar at hand and are interested in KDE releases, I advise you to note down the main Plasma 5.14 release dates in it. It is worth pointing out that this time the process has been simplified considerably for the sake of clarity and effectiveness.

So we have:

  • Thursday, 30 August 2018: Plasma 5.14 freeze
  • Thursday, 13 September 2018: Beta release
  • Tuesday, 9 October 2018: Plasma 5.14 release
  • Tuesday, 16 October 2018: Plasma 5.14.1 release
  • Tuesday, 23 October 2018: Plasma 5.14.2 release
  • Tuesday, 6 November 2018: Plasma 5.14.3 release
  • Tuesday, 27 November 2018: Plasma 5.14.4 release
  • Tuesday, 1 January 2019: Plasma 5.14.5 release

In short, a tireless team that offers us the most useful, integrated and functional collection of applications for the most beautiful, functional and dynamic free desktop that can live on your PC or laptop.

More information: KDE Techbase


I have worked on Atelier together with Chris, Lays and Patrick for quite a while, but I was basically the “guardian angel” of the project, invoked when anything happened or when they did not know how to proceed (are you a guardian angel of a project? We have many projects that need one).

For instance, I’ve done the skeleton for the plugin system, the build system and some of the modules in the interface, but nothing major, as I really lacked the time and also lacked a printer.

Now I’ve got my first 3D printer, and for the first time in two years I can actually test the program that I’ve been helping to build for so long. And peeps, I can confirm that we have a working 3D printer host in KDE, and it’s really good.

It has some quirks, obviously, but it’s really good.


With all the advances being made in Qt 3D, we wanted to create some new examples showing some of what it can do. To get us started, we decided to use an existing learning framework, so we followed the open source Tower Defence course, which you can find at CGCookie. Being a game, it allows an interactive view of everything at work, which is very useful.

We found it to be so diverse that we are now implementing Parts 2 and 3 of the game in Qt 3D. However, you don’t have to wait for that; you can start now by following the steps we took.

The setup

These instructions will help you set up for Qt 5.11.0.

To start, turn to your QtCreator and create a new Qt Console Application, set to run on your Qt 5.11.0 kit.

A Qt Console Application doesn’t come with too much ‘plumbing’. A lot of the other options will attempt to give you starting files that aren’t required or in some cases, the wrong type entirely.

Let’s edit it to fit our needs by opening up the .pro file and adding the following:

First remove the QT += core and QT -= gui lines if they are present.

QT += 3dcore 3drender 3dinput 3dquick 3dquickextras qml quick

Then, if the lines CONFIG += c++11 console and CONFIG -= app_bundle are present, remove them too. Now back in the main.cpp file, we need to edit our “includes” from the Qt 3D library.

Replace the #include <QCoreApplication> with #include <QGuiApplication> and add these lines:

#include <Qt3DQuick/QQmlAspectEngine>
#include <Qt3DQuickExtras/Qt3DQuickWindow>
#include <QtQml>

Within the main block we now have to edit QCoreApplication a(argc, argv); to mirror our include change. So change it to:

QGuiApplication a(argc, argv);

Before the first build / run we should add something to look at. Adding the following block of code before the return statement will provide us with a window:

Qt3DExtras::Quick::Qt3DQuickWindow view;
view.setSource(QUrl("qrc:/main.qml"));
view.show();

Commenting out the line referring to main.qml will allow you to build and run what you have already. If everything has gone to plan, a white window will appear. Now you can uncomment the line and continue onwards!
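For reference, main.cpp at this stage should look roughly like the sketch below (assuming the default Qt Console Application template; your generated file may differ slightly in the details):

#include <QGuiApplication>

#include <Qt3DQuick/QQmlAspectEngine>
#include <Qt3DQuickExtras/Qt3DQuickWindow>
#include <QtQml>

int main(int argc, char *argv[])
{
    QGuiApplication a(argc, argv);

    Qt3DExtras::Quick::Qt3DQuickWindow view;
    // Comment this line out for the first build to get the plain white window.
    view.setSource(QUrl("qrc:/main.qml"));
    view.show();

    return a.exec();
}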

QRC creation

Okay, let’s get rid of the boring white scene and get something in there. Right-click the ‘Sources’ folder and select ‘Add New…’. From here select the Qt > QML File (Qt Quick 2) option. We’ve gone and named it main so that after clicking next till the end you should now have a main.qml and a main.cpp.

This QML file is now going to hold our scene, but to do that we need some resources. We will achieve this by adding a Qt Resource File, just as we did for main.qml – assuming you have an obj with accompanying textures placed in an assets folder within the project.

So this time right-click on the project folder and select ‘Add New…’. From the Qt menu, select ‘Qt Resource File’ and name it something fitting. When this opens it will look noticeably different to the qml and cpp files. At the bottom you will see the self-descriptive Add, Remove and Remove Missing Files buttons. Click the ‘Add’ button and select ‘Add Prefix’. Now remove everything from the Prefix: text input, leaving just the ‘/‘. Click the ‘Add’ button again, this time selecting the ‘Add Files’ option.

Navigate to your obj and texture files and add them all to the qrc, save and close it. If everything went to plan, a ‘Resources’ folder will now be visible in the Projects window on the left.

Follow this again and add main.qml to the qrc in the same way.

One last thing we need before playing with the scene is a skymap. With the files placed in your assets folder, go ahead and add the skymap to the qrc file.

Gotcha

We use three dds files for our skymaps: irradiance, radiance and specular. If you are trying this on a Mac, you will have to uncompress them or they will not work. Keep the names similar to their compressed versions. For example we simply added ‘-16f’ to the filename, so our files would be ‘wobbly_bridge_4k_cube_irradiance’ vs ‘wobbly_bridge_4k-16f_cube_irradiance’ respectively.

The necessities

Back to the QML file now, rename the Item { } to be an Entity { } and give it the id: scene. Entity is not recognised because we are missing some imports. Hitting F1 with Entity selected shows us that we need to import Qt3D.Core 2.0, so add this to the imports at the top of the file.

There are certain components that a 3D scene must have, a camera and Render settings being two of those. For this example, we’ll throw in a camera controller too so we can move around the scene.

components: [
    RenderSettings {
        activeFrameGraph: ForwardRenderer {
            camera: mainCamera
            clearColor: Qt.rgba(0.1, 0.1, 0.1, 1.0)
        }
    },
    // Event Source will be set by the Qt3DQuickWindow
    InputSettings { }
]

Camera {
    id: mainCamera
    position: Qt.vector3d(30, 30, 30)
    viewCenter: Qt.vector3d(0, 0, 0)
}

FirstPersonCameraController {
    camera: mainCamera
    linearSpeed: 10
    lookSpeed: 50
}

Here we see that Camera is not recognised, so let’s get the missing import.

Gotcha

If you select Camera and hit F1 to find the import, you will in fact be shown the import for the non-Qt3D Camera. The one you will want is: import Qt3D.Render 2.9

The sky is the limit

Let’s put that skymap to use now. Back in the main.cpp file, we need to add code to check whether we’re on a Mac or not. If you remember, this was due to macOS not supporting the compressed files and needing its own versions. After the QGuiApplication line, put in the following:

#if defined(Q_OS_MAC)
    const QString envmapFormat = QLatin1String("-16f");
#else
    const QString envmapFormat = QLatin1String("");
#endif

Then after the Qt3DExtras line, add the following:

auto context = view.engine()->qmlEngine()->rootContext();
context->setContextProperty(QLatin1String("_envmapFormat"), envmapFormat);
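Putting the pieces together, the C++ side is now complete. A sketch of how the whole main.cpp might look at this point (setting the context property before setSource so that _envmapFormat is already available when the QML loads):

#include <QGuiApplication>

#include <Qt3DQuick/QQmlAspectEngine>
#include <Qt3DQuickExtras/Qt3DQuickWindow>
#include <QtQml>

int main(int argc, char *argv[])
{
    QGuiApplication a(argc, argv);

#if defined(Q_OS_MAC)
    const QString envmapFormat = QLatin1String("-16f");
#else
    const QString envmapFormat = QLatin1String("");
#endif

    Qt3DExtras::Quick::Qt3DQuickWindow view;

    // Expose the platform-dependent suffix to QML so the skymap file names resolve.
    auto context = view.engine()->qmlEngine()->rootContext();
    context->setContextProperty(QLatin1String("_envmapFormat"), envmapFormat);

    view.setSource(QUrl("qrc:/main.qml"));
    view.show();

    return a.exec();
}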

If you try to build at this point, you will notice various imports missing: one for FirstPersonCameraController, one for InputSettings and one for TexturedMetalRoughMaterial. Hitting F1 on FirstPersonCameraController will give you import Qt3D.Extras 2.0 and F1 on InputSettings will give you import Qt3D.Input 2.0, but then later you’ll hit a snag. TexturedMetalRoughMaterial may not turn up any documentation, but we’ll be kind enough to give you the answer… edit the Qt3D.Extras 2.0 to be 2.9 instead. If this now works you will get a dark grey window.

Barrel of laughs

The final part will be our mesh (we chose a barrel) and the skymap for it to reflect (although this might not be visible).

In main.qml after the InputSettings{}, throw in the following:

EnvironmentLight {
    id: envLight
    irradiance: TextureLoader {
        source: "qrc:/path/to/your/file" + _envmapFormat + "_cube_irradiance.dds"

        minificationFilter: Texture.LinearMipMapLinear
        magnificationFilter: Texture.Linear
        wrapMode {
            x: WrapMode.ClampToEdge
            y: WrapMode.ClampToEdge
        }
        generateMipMaps: false
    }
    specular: TextureLoader {
        source: "qrc:/path/to/your/file" + _envmapFormat + "_cube_specular.dds"
                
        minificationFilter: Texture.LinearMipMapLinear
        magnificationFilter: Texture.Linear
        wrapMode {
            x: WrapMode.ClampToEdge
            y: WrapMode.ClampToEdge
        }
        generateMipMaps: false
    }
}

You can hit build now to check it’s working, but the scene will still be pretty boring. Throw in your obj to get some eye candy. Here is the code we used after EnvironmentLight:

Mesh {
    source: "qrc:/your/model.obj"
},
Transform {
    translation: Qt.vector3d(4, 0, 2)
},
TexturedMetalRoughMaterial {
    baseColor: TextureLoader {
        format: Texture.SRGB8_Alpha8
        source: "qrc:/path/to/your/Base_Color.png"
    }
    metalness: TextureLoader { source: "qrc:/path/to/your/Metallic.png" }
    roughness: TextureLoader { source: "qrc:/path/to/your/Roughness.png" }
    normal: TextureLoader { source: "qrc:/path/to/your/Normal_OpenGL.png" }
    ambientOcclusion: TextureLoader { source: "qrc:/path/to/your/Mixed_AO.png" }
}

Finally, hit build and then run.

Rendered Barrel

The barrel viewed at the end of What A Mesh pt1

The post What a mesh! appeared first on KDAB.

Perhaps if Windows wasn’t such a PITA there would be more progress.

  • The Conversation view received some vim-style keyboard bindings (because who uses a mouse anyways).
  • The INBOX is now automatically selected when Kube is started, so we show something useful immediately.
  • Progress on Kube for Windows. Everything builds, but there are still a couple of remaining issues to sort out.
  • Ported from QGpgME to plain old GpgME. This was a necessary measure to build Kube on Windows, but also generally reduced complexity while removing the dependency on two large libraries that do nothing but wrapping the C interface.
  • Ported away from readline to cpp-linenoise, which is a much simpler and much more portable replacement for readline.
  • Rémi implemented the first steps for range queries, which will allow us to retrieve only the events that we require to e.g. render a week in the calendar.
  • The storage layer got another round of fixes, fixing a race condition that could happen when creating the database for the first time (Blogpost on how to use LMDB).
  • The IMAP resource no longer repeatedly tries to upload messages that don’t conform to the protocol (Not that we should ever end up in that situation, but bugs…).
  • The CalDAV/CardDAV backends are now fully functional and support change-replay to the server (Rémi).
  • The CalDAV backend gained support for tasks.
  • Icons are now shipped and loaded from a resource file after running into too many problems otherwise on Windows.
  • A ton of other fixes for Windows compatibility.
  • A bunch of mail rendering fixes (also related to autocrypt among others).
  • Work on date range queries for efficient retrieval of events has been started.

Kube Commits, Sink Commits

Previous updates

More information on the Kolab Now blog!

“Kube is a modern communication and collaboration client built with QtQuick on top of a high performance, low resource usage core. It provides online and offline access to all your mail, contacts, calendars, notes, todo’s and more. With a strong focus on usability, the team works with designers and UX experts from the ground up, to build a product that is not only visually appealing but also a joy to use.”

For more info, head over to: kube.kde.org

The last two weeks were mainly dedicated to reviews and testing, and thanks to my mentors I passed the first evaluation, with good work so far. Some significant changes were made to the code during the last two weeks after discussion with my mentors, along with some new features. Added E3, F3, D6, E6, F6 keys …

June 17, 2018

I started to hack in Konsole, and first I was afraid, I was petrified. You know, touching those hardcore apps that are the center of the KDE Software Collection.

I started touching it mostly because some easy to fix bugs weren’t fixed, and as every cool user knows, this is free software. So I could pay someone to fix my bugs, or I could download the source code and try to figure out what the hell was wrong with it. I chose the second approach.

If you have something for me to improve in Konsole please poke me, as I have landed around 25 commits this past week and I plan to continue that.

I would also like to thank my employer for letting me stay late hacking in KDE related software.

I am happy to announce the release of Kraft version 0.81. Kraft is a Qt based desktop application that helps you to handle documents like quotes and invoices in your small business.

Version 0.81 is a bugfix release for the previous version 0.80, which was the first stable release based on Qt5 and KDE Frameworks5. Even though it came with way more new features than just the port, its first release has proven its stability in day-to-day business for a few months now.

Kraft 0.81 mainly fixes building with Qt 5.11, plus a few other installation and AppStream metadata glitches. The only user-visible fix is that documents no longer show the block about individual taxes on the PDF output if the document only uses one tax rate.

Thanks for any suggestions and opinions you might have about Kraft!

Howdy folks! This has been a bit of a light week for KDE’s Usability and Productivity initiative, probably because everyone’s basking in the warm glow of a well-received release: KDE Plasma 5.13 came out on Tuesday and is getting great reviews!

Don’t worry, we’ve got lots of great stuff queued up though.

Bugfixes

UI Polish & Improvement

  • The Libinput-backend Mouse and Touchpad System Settings pages received a visual and usability overhaul (Furkan Tokac, KDE Plasma 5.14):

  • Discover’s Updates page is now clearer about what version is being upgraded (me: Nate Graham, KDE Plasma 5.14):
  • Scrollbars in Konsole are now overlay style and disappear entirely when the view isn’t scrollable, e.g. with a full-screen CLI program like top (Tomaz Canabrava and Alex Nemeth, KDE Applications 18.08.0)

See all the names of people who worked hard to make the computing world a better place? That could be you next week! Getting involved isn’t all that tough, and there’s lots of support available. Give it a try today! It’s easy and fun and important.

If my efforts to perform, guide, and document this work seem useful and you’d like to see more of them, then consider becoming a patron on Patreon, LiberaPay, or PayPal.


The week was entirely devoted to developing the GUI for QML plugins. The following APIs were developed:

  • BrowserAction API: to add a GUI popup button to the navigation toolbar and status bar
  • SideBar API: to add a side bar widget to the browser

Below are the screenshots of the Hello QML plugin from my working branch.

BrowserAction Button

The Browser Action Button can be added to either the Navigation Tool Bar or the Status Bar, or both. Browser_Action_Button

BrowserAction Popup

The popup for the Browser Action Button is in the form of a QML Window. This part took a lot of time to develop because:

  • To show the GUI, QQuickWidget (QWidget) or QQuickWindow (QWindow) can be used.
  • The popup property is a QQmlComponent, thus the source URL is not known. QQuickWidget requires a URL, so the only solution left is to use QQuickWindow (by casting the object created using QQmlComponent::create).
  • To show the popup, the Qt::Popup flag is needed on the QQuickWindow.

Everything was fine up to this point, until I found that it didn't work. My mentor (David Rosca) explained that this is because the QWindow was not grabbing mouse and keyboard events - which means that the window was not activated - so I added QWindow::requestActivate and it works like a charm! Browser_Action_Popup
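The idea, roughly, looks like the sketch below (this is not the actual Falkon code; the function name and surrounding plumbing are made up for illustration):

#include <QQmlComponent>
#include <QQuickWindow>

QQuickWindow *createPopupWindow(QQmlComponent *popupComponent)
{
    // The plugin only hands us a QQmlComponent, so there is no source URL to
    // feed a QQuickWidget; instantiate the component and cast the root object.
    QObject *object = popupComponent->create();
    auto *window = qobject_cast<QQuickWindow *>(object);
    if (!window) {
        delete object;
        return nullptr;
    }

    // Qt::Popup gives it popup-like behaviour...
    window->setFlags(Qt::Popup);
    window->show();
    // ...but it still has to be activated explicitly, otherwise it never
    // receives mouse and keyboard events.
    window->requestActivate();
    return window;
}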

SideBar Menu

SideBar_Menu

SideBar

Again, the SideBar menu is a QQuickWindow, which is embedded into the browser using QWidget::createWindowContainer. SideBar
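Again only a sketch of the idea rather than the real code (the names are hypothetical): the QML side bar window gets wrapped into a QWidget so it can live inside the QWidget-based browser window.

#include <QQuickWindow>
#include <QWidget>

QWidget *createSideBarWidget(QQuickWindow *sideBarWindow, QWidget *browserWindow)
{
    // createWindowContainer re-parents the QWindow into a QWidget container
    // that can be placed in the browser's layout like any other widget.
    QWidget *container = QWidget::createWindowContainer(sideBarWindow, browserWindow);
    container->setMinimumWidth(200); // arbitrary size, for illustration only
    return container;
}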


Again, a big thanks to my mentor David Rosca for helping me develop the APIs for the GUI.

Happy Father’s Day!

A bit rushed, and with one talk already given, I am pleased to announce that tomorrow Richard Stallman will be in Barcelona performing a basic function in the development of Free Software: promoting its philosophy. Following Víctorhck's article, it never hurts to promote this kind of event.

Tomorrow Richard Stallman will be in Barcelona

Letter from Richard Stallman, by DominusHatred

The truth is that Richard Stallman was already in Barcelona yesterday, Saturday (and I suppose today as well), and gave a talk at the Maker Faire together with Francesca Bria from the Barcelona city council, where they spoke about cities, freedoms and digital privacy. But I forgot to share that one with you. Such is life.

So, one of the most important promoters of the GNU/Linux movement will be giving a talk tomorrow at the UPC (Edifici Vèrtex), Universitat Politècnica de Catalunya. On this occasion he will talk about free software in ethics and in practice. It is a talk for everyone, since it will focus on the non-technical aspects of the movement. More information: https://www.fsf.org/events/rms-20180618-barcelona

I will take the opportunity to repeat the advice Victorhck gives us for attending one of Richard's talks, since he has been lucky enough to attend one and even has an autograph:

  • If you have to speak with him, do so loudly and enunciate so that he can understand you.
  • Bring money to donate to the FSF, to buy some geek merchandise, or to bid on the “adorable GNU” that is raffled off for the benefit of the FSF.
  • Don't take photos of him with an iPhone.
  • And say GNU/Linux.

By the way, on the 25th he will be in Jaén. If you can, don't miss it.

For those to whom the name doesn't ring a bell: Richard Stallman is considered the father of free software. He is a developer of GNU and of Emacs, a promoter of the free licenses under which thousands of software packages are published, an advocate of Free Software in education, and much more.


June 16, 2018

As you might have heard I decided to step down from my maintainer positions in KDE, especially KWin. Unfortunately I had to read very weird things about it and so I think it’s time to write about what it means that I am no longer maintainer of KWin.

First of all: I’m not leaving KDE. I’m still contributing in form of code, bug management and reviews. And I intend to continue to do this.

Second of all: I did not step down as maintainer because of the VDG or the usability group. I understand that my mail read like this, but it’s not the case. That I would step down as maintainer was inevitable and I’m sure it didn’t come as a general surprise to my fellow Plasma and KWin hackers. Personally, I decided years ago that I would step down as maintainer once the Wayland port was finished. In my opinion KWin reached that state about two years ago. I continued to be maintainer to prepare for a good hand over. I deliberately reduced my involvement and passed responsibility to others. This was a long process and worked great in my opinion. As an example I want to point out the new and awesome blur effect introduced in 5.13. My first comment on the phabricator code review was that I’m not going to review it, but leave it to others. I think the result is great and I’m very happy with how this worked out.

Over the last year I thought a lot about passing on the torch of maintainership. I realized that I contribute less and less code but am at the same time blocking many changes, either by reviewing the code and giving a nak, or through inactivity by just not reviewing the code at all. In KDE we have a saying: “Those who do, decide”. I realized I’m not doing enough anymore to decide. This explains the timing of me stepping down: I once again nak’ed a change and afterwards realized that I cannot do this. Either I need to actively veto a change I consider wrong and by that anger those who do, or step down as maintainer. I decided that I don’t want to be the grumpy old conservative who is against progress, and thus drew the only logical conclusion. It was inevitable; I would have stepped down as maintainer in the next half year for personal reasons anyway, it was just a little bit sooner to help those who are currently working on improving our products.

For KWin this means a significant improvement. A maintainer who is not responsive to reviews is not helpful to the project. By stepping down I give others the possibility to accept changes and nobody needs to wait for me to acknowledge changes. This is especially important for new contributors who we want to integrate better. Also for me personally it is a great improvement, as it takes away a lot of the burden of not reviewing the code. I now feel way more relaxed about doing code changes I’m interested in and chiming in to reviews where I feel like I want to say something. And at the same time I can ignore other review requests as I know there will be a good review on them and it won’t depend on me. KWin is also currently in a great state for a maintainer to step down. We have more developers working on KWin than we have had for years. KWin is in great shape and I’m very positive about the future.

I read a few comments where users expressed the fear that the quality of KWin would suffer from me stepping down. I feel honored that users think I contributed positively to the quality. Personally I am quite certain that the quality of KWin won’t suffer. As an example I present the 5.13 release, with more user-visible changes in KWin than we have had for years, and me hardly contributing anything to them. Also, KWin has an awesome test suite which would catch regressions.

On the other hand I read some disturbing comments about NVIDIA support getting improved by me stepping down as maintainer. Let me assure you that I never blocked any change which would be NVIDIA specific. In fact I encouraged NVIDIA to implement the changes required to get EGL stream working in KWin. Unfortunately NVIDIA has not contributed such patches.

Now a few words on how I maintained KWin. My aim as maintainer was to hand over the code to the next maintainer in a better shape than it was in when I became maintainer. I hope that I could contribute to this aim and many of my decisions as maintainer were motivated by it. I learned what went well in KWin’s past and tried to apply the lessons from it. I considered KWin as a standalone product inside KDE Plasma and judged changes from the perspective of a window manager. One of my highest rules was: no workarounds! No workarounds for broken applications, no workarounds for broken drivers and no workarounds for Plasma. No matter how many users a piece of software has, KWin won’t add workarounds. This applies to software such as Spotify, Chromium and even GTK. If the applications or toolkits are broken, they need to be fixed, so that they work in all window managers and not just in KWin. Over time I had found old workarounds in the code, e.g. for Netscape Navigator. Of course such workarounds no longer make sense, but they clutter the code and negatively affect all users. Or there were workarounds for kdesktop and kicker (the KDE 3 panel, not the Plasma 5 menu). KWin is older than Plasma and I expected that Plasma would evolve and change (which did happen with Plasma 5, Plasma Active and Plasma mobile). KWin needs to be flexible enough to handle such evolution without having to rely on workarounds. Thus if something was needed we did the proper solution instead of finding fast workarounds. I think that the refusal to add workarounds helped the product KWin achieve the level of quality we have today. Of course it results in disappointed users – as an example the NVIDIA users who would like to have a better experience – but in the long term all users benefit from the strict and hard line I used to maintain KWin.

I also learned that the number of options is a problem in KWin. We have optional features with even more optional features. Over the years I noticed that most of the breakage is in such areas. By trying to do too much we degraded the usability and quality of KWin. I reacted to this by making KWin more flexible and allowing users to influence how they want KWin to behave without KWin having to carry the code. The result is KWin scripting, scripted effects, QML Alt+Tab switcher themes and in general moving all of the UI elements to QML. This allowed us to streamline KWin and provide high quality for the areas we offer, but at the same time give users the full flexibility to adjust KWin as they need. Simple by default, powerful when needed.

On the other hand this of course did not go over well with all users. Many users requested features and my general response was: no. Any addition to KWin should go through scripts and be maintained by users. Of course not every user understood why they should fork an effect just to get another option, or why I said no to a contributed patch. But in the end I think this helped to keep the quality high and keep KWin in a maintainable state. In that sense I understand that not every user appreciated how I maintained KWin. The hard rules I applied in bug reports unfortunately created tension and the community working group had to step in more than once. For those users hoping that they can get their pet bugs resolved now that I have stepped down as maintainer: I’m sorry to disappoint you. I still go through the bug reports and will continue to manage them the way I did as maintainer, unless my successor decides to apply different rules for maintaining KWin.

Last but not least, a few words on my criticism of the VDG and usability project. First of all I need to apologize for mixing this into my mail about stepping down as maintainer. I should have raised my concerns at a different time, as this was unfortunately received as if I had stepped down due to conflicts. I’m somebody who speaks up when he feels things go wrong. Given the way I maintained KWin, as explained above, this creates tension with the usability project. My aim is to not implement every feature request and to move the responsibility to users, while the usability project, in my humble opinion, tries to implement everything which makes users happy. This is obviously a clash of cultures and due to that I very often had to take an opposite position to what the usability project tried to achieve. I raised my concerns quite often. For me it was personally difficult to hold up such a position over a long time. Due to that, as explained in my mail, I lost motivation to review changes. My mail was mostly to explain why I lost motivation, and I think to those in Plasma who know how I maintained KWin and what my aims are this was understandable. I was very unhappy with how this got communicated in some news postings and social media, as all that internal information was missing. I urge users to keep project-internal discussions within the projects.

What I want to point out is that I really appreciate the work the usability project and also the VDG are doing. They are super enthusiastic and try to bring our software to the next level. I do not disagree with their work. The criticism I expressed in my mail was focused on the process and the transparency on how decisions are made. That’s where I personally see the need for improvements. What’s quite important to me is to point out again that those two projects are not responsible for me stepping down as maintainer.

Thank you.

This week the latest version of the KDE Community's desktop was released, and it didn't take the great elav long to make a video explaining how to configure Plasma 5.13 step by step. Yet another demonstration of the slogan “simple by default, powerful when needed”.

How to configure Plasma 5.13 step by step

This isn't the first time I've used one of elav's videos on the blog, both to promote it and to have a visual reference of the great possibilities of KDE's Plasma 5 desktop. A few months ago I presented “8 buenos motivos para usar Plasma Desktop”, “Personalizando Plasma 5 en ArchLinux” and “Personalizar Plasma 5.12 en ArchLinux – Vídeo”.

How to configure Plasma 5.13 step by step

Today I present another similar video, this time about the newly released Plasma 5.13 installed on KDE Neon, the non-distribution (according to its creators), which in my view is the most suitable choice for supporters of the KDE project.

Elav starts by talking about how well done the new lock screen is, and then goes on to comment on the new wallpaper. Next he covers the new features of Plasma 5.13, starting with the changes in web browser integration. Elav explains how to add that integration (via “sudo apt install plasma-browser-integration”, a system restart and the addition of an add-on).

Once this big new feature has been explained, Elav goes on to change dozens of configuration and personalization options of the environment, with particular emphasis on the blur effect and the transparency of context menus. That said, I think the best thing is for you to grab some popcorn and watch the full video, which runs over an hour. I'm sure you will learn a lot.



What's new in Plasma 5.13

I don't want to end the article without a quick run-through of some of the new features of Plasma 5.13.

  • Improvements in web browser integration.
  • Better resource optimization for low-powered machines.
  • New visual effects such as blur.
  • Improvements in Wayland.
  • Tweaks to System Settings.
  • New lock and login screens.
  • Improvements in the Discover application manager.
  • A beautiful new wallpaper.

June 15, 2018

I found this construct some time ago. It took some reading to understand why it worked. I’m still not sure if it is actually legal, or just works only because m_derivedData is not accessed in Base::Base.

struct Base {
    std::string& m_derivedData;
    Base(std::string& data) : m_derivedData(data) {
    }
};

struct Derived : public Base {
    std::string m_data;
    Derived() : Base(m_data), m_data("foo") {
    }
};
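For what it's worth, my current reading (hedged, since I'm not certain about the exact standard wording either): base classes are initialized before non-static data members, so Base::Base runs while m_data is still raw storage, and binding a reference to it seems to hold together as long as nothing reads through that reference before m_data is constructed. A tiny sketch of how it behaves in practice:

#include <cassert>
#include <string>

struct Base {
    std::string& m_derivedData;
    // Only stores the reference; reading m_derivedData in here would be a bug,
    // because the referenced member has not been constructed yet.
    Base(std::string& data) : m_derivedData(data) {}
};

struct Derived : public Base {
    std::string m_data;
    // Base is initialized first, then m_data.
    Derived() : Base(m_data), m_data("foo") {}
};

int main() {
    Derived d;
    // After construction, both names refer to the same fully constructed string.
    assert(&d.m_derivedData == &d.m_data);
    assert(d.m_derivedData == "foo");
    return 0;
}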

When you write code in QML, ListModel is a handy class to quickly populate a list with data. It has a serious limitation though: the values of its elements cannot be the result of a function. This means you cannot write this:

import QtQuick 2.9
import QtQuick.Window 2.2

Window {
    visible: true

    ListModel {
        id: speedModel
        ListElement {
            name: "Turtle"
            speed: slowSpeed()
        }
        ListElement {
            name: "Rabbit"
            speed: highSpeed()
        }
    }

    Column {
        Repeater {
            model: speedModel
            Text {
                text: model.name + " " + model.speed
            }
        }
    }

    function slowSpeed() {
        return 12;
    }

    function highSpeed() {
        return 42;
    }
}

Running this will fail with that error message: "ListElement: cannot use script for property value".

A first workaround: use a JavaScript array

The first workaround to this limitation I came up with was to replace the ListModel with a JavaScript array, like this:

import QtQuick 2.9
import QtQuick.Window 2.2

Window {
    visible: true

    property var speedModel: [
        {
            name: "Turtle",
            speed: slowSpeed()
        },
        {
            name: "Rabbit",
            speed: highSpeed()
        }
    ]

    Column {
        Repeater {
            model: speedModel
            Text {
                property var element: speedModel[model.index]
                text: element.name + " " + element.speed
            }
        }
    }

    function slowSpeed() {
        return 12;
    }

    function highSpeed() {
        return 42;
    }
}

This works fine, but has two limitations.

First limitation: you can't use the model as usual in your delegate because model.index is the only piece of information available, hence the need for the element property (Actually, I wish the variable representing the data inside a delegate were always called element instead of model, but that's another story...)

Second limitation: you can't manipulate the data afterwards. If you add an element to the array, the view is not going to display it.

Second workaround, initialize ListModel from a JavaScript array

Instead of passing a JavaScript array to our view, this workaround uses a ListModel, but initializes it using JavaScript:

import QtQuick 2.9
import QtQuick.Window 2.2
import QtQuick.Controls 2.2

Window {
    visible: true

    ListModel {
        id: speedModel
        Component.onCompleted: {
            [
                {
                    name: "Turtle",
                    speed: slowSpeed()
                },
                {
                    name: "Rabbit",
                    speed: highSpeed()
                }
            ].forEach(function(e) { append(e); });
        }
    }

    Column {
        Repeater {
            model: speedModel
            Text {
                text: model.name + " " + model.speed
            }
        }
        Button {
            text: "Add"
            onClicked: {
                speedModel.append({name: "Bird", speed: 60});
            }
        }
    }

    function slowSpeed() {
        return 12;
    }

    function highSpeed() {
        return 42;
    }
}

This approach avoids the two limitations of the previous workaround: the view can use the model as usual, which is nice especially if the model and the view are not defined in the same file. And we can modify the model as we usually do.

I tried to simplify the forEach() call to forEach(append) but hit another error: "QML ListModel: append: value is not an object". I don't know why this happens; if you have the answer I would love to hear it.

You can make the declaration less verbose by representing elements using arrays instead of objects in Component.onCompleted, like this:

Component.onCompleted: {
    [
        ["Turtle", slowSpeed()],
        ["Rabbit", highSpeed()],
    ].forEach(function(element) {
        append({
            name: element[0],
            speed: element[1]
        });
    });
}

It's an interesting approach if you have many rows and few columns. You still get named fields in the view, so you don't lose any readability.

That's it for this article, I hope it was useful! Here are the source files if you want to play with them: fail.qml, jsarray.qml and listmodel-js-init.qml.

Recently I had to write some scripts to automate some of my daily tasks. So I had to think about which scripting language to use. You’re probably not surprised when I say I went for C++. After trying several hacky approaches, I decided to try out Cling – a Clang-based C++ interpreter created by CERN.

Cling allows developers to write scripts using C and C++. Since it uses the Clang compiler, it supports the latest versions of the C++ standard.
If you execute the interpreter directly, you’ll have a live environment where you can start writing C++ code. As a part of the standard C/C++ syntax, you will find some other commands beginning with ‘.’ (dot).
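A few of the dot-commands I find most useful (this list is from memory, so double-check it against .help in your own build):

// At the cling prompt (interpreter metacommands, not C++):
// .help              list the available metacommands
// .L file_or_lib     load a source file or shared library
// .x script.cpp      load a file and run the function named after it
// .q                 quit the interpreter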

When you use the interactive interpreter, you can write code like:

#include <stdio.h>
printf("hello world\n");

As you can see, there is no need to worry about scopes; you can just call a function.

If you plan to use Cling as an interpreter for creating your scripts, you need to wrap everything inside a function. The entry point of the script is, by default, the same as the file name; it can be customized to call another function. So, the previous example would turn into something like:

#include <stdio.h>                                                                               
                                                                                                       
void _01_hello_world() {                                                                               
    printf("foo\n");                                                                                   
}

…or the C++ version:

#include <iostream>                                                                               

void _02_hello_world()
{
    std::cout << "Hello world" << std::endl;
}

The examples are quite simple, but they show you how to start.


What about Qt?

#include <QtWidgets/qapplication.h>                                                                    
#include <QtWidgets/qpushbutton.h>                                                                     
                                                                                                       
void _03_basic_qt()                                                                                    
{                                                                                                      
    int argc = 0;                                                                                      
    QApplication app(argc, nullptr);                                                                   
                                                                                                       
    QPushButton button("Hello world");                                                                 
    QObject::connect(&button, &QPushButton::pressed, &app, &QApplication::quit);                       
    button.show();                                                                                     
                                                                                                       
    app.exec();                                                                                        
}

But the previous code won’t work out of the box – you need to pass some custom parameters to cling:

cling -I/usr/include/x86_64-linux-gnu/qt5 -fPIC -lQt5Widgets 03_basic_qt.cpp

You can wrap your cling invocation in a custom script with whatever flags suit your needs.

You can also load Cling as a library in your applications to use C++ as a scripting language. I’ll show you how to do this in one of my next blog posts. Cheers!

The post Scripting In C++ appeared first on Qt Blog.

There has been a lot of back and forth around the use of Free Software in public administration. One of the latest initiatives in this area was started by the Free Software Foundation Europe, FSFE. It focuses on the slogan: Public Money – Public Code. There are various usage scenarios for Free Software in public administration. They range from the use of backend technology, through user-facing software such as LibreOffice, up to providing a whole free desktop for the administrative staff in a public service entity such as a city council. In this article we will focus on the latter.

When the desktops in an administration are migrated to Linux, the administration becomes a distribution provider. An example of this is the LiMux desktop, which has powered the administration of the city of Munich since 2012.

LiMux is a distribution, maintained by the central IT department of the City of Munich. Technically, it builds upon Kubuntu. It provides specific patches, a modified user experience and an automatic distribution system, so all desktops in all departments of the city can be easily administered and offer a consistent user experience.

Distributions in the Free Software ecosystem have different roles, one of them surely being to provide the finishing touches, especially to software that is important to their own users. Obviously public administration has special demands. Workflows and documents, for example, have a totally different importance than they do for the average Kubuntu user.

In Munich for example, architects in one department complained that Okular, the LiMux and KDE pdf reader, would freeze when they tried to open large construction plans. When the city investigated this issue further, they found out that actually Okular wouldn’t freeze, but loading these large maps would simply occupy Okular for quite a while, making the user think it crashed.

Naturally the City of Munich wanted this issue fixed. But this use case is rather rare for the volunteer Free Software developer, as only a few user groups, like architects, are actually in the situation of having to deal with such large files, so it was unlikely that this bug would be fixed on a voluntary basis.

Since the city does not have enough developer power to fix all such issues themselves, they looked to KDAB for external support. With our KDE and Qt expertise, we were well-positioned to help. Together we identified that, instead of the busy indicator the City of Munich had suggested for Okular, progressive loading would be an even better solution. It allows the user to visually follow the loading process and makes it possible to interact with large files as early as possible. And as the City did not want to maintain a patch that fixes this issue for them locally, we also helped them get all the patches upstream.

This way all efforts are made available for the architects at the City of Munich and also for the general public. In the same spirit we have fixed numerous issues for the City of Munich all around KDE and Qt. As another example: we brought back extended settings to the Qt print dialogue. This allows the City of Munich to make use of all of the functionalities of their high-tech printer, like sorting, stapling and so on. You can read more about KDAB’s work on this here.

Becoming a distribution implies a lot of responsibility for a public administration and KDAB is proud to offer reliable backup for any administration that decides to follow the Linux, Qt and KDE path.


Afternote: Recent developments mean that the City has decided to revert the migration and move back to Microsoft by 2020. The good news is that most changes and adjustments they have made are available upstream, and other administrations can build their own solutions upon them.


The post The LiMux desktop and the City of Munich appeared first on KDAB.

Hi all, I am Chinmoy and I am working on the GSoC project Verifying signatures of pdf files. This is my very first post and in it I intend to report on the progress I have made since May 14.

Due to some unforeseen problems I had to deviate from my proposed timeline. Initially my plan was to implement all non-graphical components in the first half of the coding period and the graphical components in the second half. But while coding RevisionManager (this would have made it possible to view a signed version of the document before an incremental update, like Adobe Reader does) I ran into some issues while designing its API. So I postponed my work on RevisionManager and started working on the graphical components. As a result I was able to add the basic GUI support needed to verify signed PDFs. The patches are listed in T8704. After applying every patch from the dependency stack of any differential, the following changes can be observed:

  1. Okular shows a notification saying that a signature form field is present. Signature Field Notification

  2. “Validate All Signatures” validates all signatures and shows their status. Signature Status

  3. “Show Forms” shows the form widget which is basically a rectangle. Signature Widget

  4. Clicking on the widget validates that particular signature and shows its status. Signature Summary

  5. A context menu is also available. Signature Menu

  6. Signature properties will be available only after verifying a signature. They can be accessed either from the signature status dialog or the context menu. Signature Property

Putting everything together: Signature Gif

As you can see the UI is crude and there's a LOT to be improved. So I would really appreciate any feedback on what to show and what not to show.

You can build my okular clone and try out the changes on this PDF.

Thanks for reading :)

June 14, 2018

Hi everybody, it has been a month since I started working on the WikiToLearn PWA for the Google Summer of Code program and many things have happened!
The WTL frontend needed some improvements in terms of usability and functionality. Courses needed a way to update their metadata: the title and chapter order, for example.
So I implemented a work-in-progress EDIT MODE, as you can see below. You can drag chapters, insert new ones and/or modify the course title.

Now that structure editing works, I have started to write some unit and integration tests. I had to change the current setup to make it work: PhantomJS is no longer maintained, so I replaced it with Chrome Headless for now.
New components have been added: a Snackbar for displaying messages, and the Error component has been reviewed and is now available globally by setting the “error” variable from any component.

One problem regarding OAuth2 authentication was that the token was expiring without being refreshed automatically. Now the PWA can detect if a token is about to expire (thanks to @crisbal and his vue-keycloak package) and refresh it.
So far it has been a great experience: I learned a lot using VueJS, and my mentors helped me a lot by keeping me focused on what needs to be done now and what can be postponed until later. Thanks a lot, everyone.

The post GSoC 2018: First period summary appeared first on Blogs from WikiToLearn.

Hi! It’s been quite a while since the first blog post. I’ve been working on the new redesign of the Keyboard KCM, and in this post I’m going to show you the progress I’ve made so far.

Since last time, I’ve been mainly focusing on improving the infrastructure. One of the goals of this project was to make configuring input methods (like fcitx, ibus, …) in System Settings easier. I decided to start with fcitx, since we know its developer (Xuetian Weng), which makes it easier to ask when there is a question or problem.

I made the KCM detect the presence of fcitx on the system and show the user an updated list of options with fcitx in mind. Then, if somehow fcitx goes away, it falls back to xkb (on X11) and shows you that some of the IMs have gone missing.

When fcitx dies.

And then, to tie this UI change to actual functionality, the keybinding used for switching xkb layouts now switches between fcitx’s layouts whenever fcitx is enabled. This is done in the Kded (KDE Daemon) part of the KCM.

One of the problems with doing this is that many CJK (Chinese, Japanese, and Korean) users need to frequently switch to typing Latin while typing in their language. Weng, the developer of fcitx, said that fcitx allows this by assigning a special meaning to the first keyboard layout configured in the layout list: when the user presses a designated switch key, the keyboard switches to that first layout. However, we think this method is overcomplicated, and because our goal is to make users’ lives easier, I decided to make this model a bit simpler by abstracting fcitx’s model away behind the kded module. At the moment, the model I’m thinking of is to add a “Keyboard Layout to use while in Latin Mode” option to the Configure dialog that pops up when the user clicks the configure button for each IM in the list, and then add another shortcut to switch between the Latin mode and the IM’s mode.

Fortunately, Weng says a better model is coming up in the new release, fcitx5! But for now we are stuck with fcitx4 and have to deal with its issues the way I described above.

On another note, last time Alexander Browne mentioned that the settings in the Advanced section are important and should not be removed. I fully agree, and I’m thinking of moving those options to each layout’s specific settings. Many of the options don’t make sense as layout-universal settings anyway. This will allow more flexibility for each layout, and switching will make more sense.

To finish off, I’ll talk about the code that was deleted. The legacy tray icon indicator is removed from the daemon code now, and will be replaced by a plasmoid-based indicator icon.

June 13, 2018

It might sound a bit weird that I’m now talking about something that took place two years ago, but I just realized that while the call to participate in the survey for the KDE Mission was published on the Dot, the results have so far not received their own article.

People who have participated in the survey but don’t read the Community list might have missed the results, which would be a pity. Therefore, I’d like to offer a bit of a retrospective on how the survey came to be and what came out of it.

The Backstory

To recap a bit on what led to the survey: After we had finally arrived at a shared Vision statement for KDE in April 2016, the next step was to distill the more high-level Vision into a more concrete Mission statement. We started brainstorming content for the Mission in our Wiki, but soon realized that there were diverging viewpoints on some issues and the relatively small group discussing them directly on the mailing list wasn’t sure which viewpoint represented the majority of the KDE community.

We also wanted to know what our users care about the most. Although in the end it’s the community who defines our Mission, we don’t make software just for ourselves, so we hoped that our goals would align with those of our users.

The Survey

To find out what is most important to the community at large as well as our users, I applied one of the standard tools in my belt: An online survey, in which participants indicated their perceived importance of goals that were brought up in the brainstorming, as well as the perceived usefulness of several measures towards achieving each goal. They also gave their opinion on a few of the contending viewpoints from the brainstorming, as well as on the importance of certain target audiences and platforms.

A few demographic questions, such as whether participants identified as users of or contributors to KDE software, how long they had been using and contributing to KDE software, as well as in which area they contribute or whether they do it as part of their job, aimed at making sure that our sample isn’t skewed toward certain groups.

We invited participants via the aforementioned Dot article, several big KDE mailing lists as well as Google+ (where we have a pretty lively community).

Analysis

I ran a series of one-sample t-tests (with Bonferroni correction) to check whether the sample averages for the importance ratings were significantly different from the scale mid-point (4 on a 7-point scale). I did that instead of a within-subjects ANOVA because we cared about whether each goal or means was above- or below-averagely important to the KDE community more than about how they compared to each other.

Furthermore I compared the two variables representing both sides in each case of contending viewpoints using a paired-sample t-test.

Results

Sample

We had 201 currently or formerly active KDE contributors participating in the survey, as well as 1184 interested users.

Demographics showed that no particular group of people dominated our sample in any of our demographic variables (other than users vs. contributors, which were analyzed separately anyway).

Some result highlights

I will only report a few interesting highlights in this post; please see the full report here for all results. If you want to do your own analysis, you can find all the data here.


Chart: Perceived importance of several proposed goals

The importance of the goals turned out to be quite similar between contributors and users, with creating software products which give users control, freedom and privacy, as well as providing users with excellent user experience and quality being the most important for both groups. Contributors rated all goals’ importance as significantly above the scale midpoint. The biggest difference between users and contributors is on  whether we should reach as many users as possible, with users rating it much lower than contributors. This is to be expected, as current users naturally don’t care much about whether we try to reach other users.

Chart: Agreement with contending viewpoints

Regarding the contending viewpoints within the brainstorming group, the survey showed a statistically significant preference among the KDE community at large for one of the viewpoints in three out of four cases: On average, the community shows a preference for focusing on applications covering our users’ most common tasks, for covering GUI applications as well as non-GUI applications, and for focusing on Qt. The community seems largely indifferent about whether we should strive for consistency between KDE GUIs across platforms, or for adherence to specific platform guidelines.

Chart showing relative importance of different target operating systems

Relative importance of different target operating systems (click to enlarge)

When it comes to which operating systems we should target, GNU/Linux is still preferred by a large margin, followed by Android according to our contributors, and by *BSD and other Free operating systems according to our users. There was a big difference in the perceived importance of Windows, OS X (now macOS) and iOS, showing that KDE contributors are on average far more interested in supporting proprietary operating systems than our current users are. One possible interpretation is that the KDE community takes a more pragmatic approach to OS support, whereas our current users take a more ideological standpoint. A different interpretation is that our users care about what they currently use: while Windows and macOS have seen some support from our applications, the user base on those platforms is likely comparatively small.

Chart showing relative importance of different target audiences.

Relative importance of different target audiences (click to enlarge)

An interesting finding was the very close match between contributors and users when it comes to the importance of different target audiences. This shows that apparently, KDE has so far reached pretty much exactly the users we aimed for.

Overall, the results confirm that the ideas coming out of the brainstorming are mostly shared by the wider community as well as our users. They also show that the KDE community has some ambition to expand our userbase and target platforms beyond what we serve today, but still wants to stay true to our roots such as Qt.

What Happened Next

The full results were presented at an Akademy BoF session to discuss the KDE Mission, as well as to the Community mailing list. They were used to guide the further discussion that eventually led to KDE’s Mission and Strategy statements.

Hi! GSoC student here :]. These first weeks of coding for Krita have been so busy that I forgot to write about them. So I’ll start summing everything up in short posts about each step of the project implementation process.

First Steps, setting up a dev environment

I followed the steps in the 3rdparty instructions to compile the base Krita system on OSX. These easy-to-follow instructions helped me get a basic Krita installation in a short time. However, not everything worked quite so easily for me, and most tests did not work or run at all on OSX, failing with the message:

QFATAL : FreehandStrokeBenchmark::testDefaultTip() Cannot calculate the bundle path from the app path

After some digging I found out that on OSX no program that uses a GUI can run outside of an app bundle. So, while not future proof, to start working on the code I made a quick script that installs the tests I’m interested in inside the Krita.app folder, so that they can run. By default all tests are linked against the libraries in the build dir, but because this won’t work on OSX, one approach would be to also install the tests in the bundle and link them against the installed libraries; another approach could be to generate an app bundle for each test.

In any case, the tests could now run, so it was time to start working on the unit test.

Implementing Mask Similarity Test

Phabricator task: Base unit test kis_mask_similarity_test
The intention of this unit test is to check the correctness of the new vectorized mask rendering by comparing it to a mask produced with the same settings by the previous engine. The new version has to be as close to identical as possible, to ensure that the painting effects the user expects do not change between engines. (The user can’t choose how the mask is produced; we use the scalar version for smaller brush dab sizes.)

To that end, the test has to create a mask generator, duplicate it, and make one use the vectorized engine and the other the scalar version. In short, this is done with the following code.

  QRect bounds(0,0,500,500);
  KisCircleMaskGenerator circVectr(500, 1.0, 0.5, 0.5, 2, true);
  KisCircleMaskGenerator circScalar(circVectr);

  // Force usage of scalar backend
  circScalar.resetMaskApplicator(true);
  KisMaskSimilarityTester(circScalar.applicator(),
                          circVectr.applicator(), bounds);

For this we had to implement the method resetMaskApplicator, which resets the mask applicator engine to scalar if the input boolean is true. The method itself is just a wrapper that creates the class with the scalar implementation if the flag is true; if not, it uses the old function.

  template<class FactoryType>
  typename FactoryType::ReturnType
  createOptimizedClass(typename FactoryType::ParamType param,
                       bool forceScalarImplemetation)
  {
    if(forceScalarImplemetation){
      return FactoryType::template create<Vc::ScalarImpl>(param);
    }
    return createOptimizedClass<FactoryType>(param);
  }

This ensures the mask created with both engines is generated from the same parameters and reduces the variability only to the output of each engine.

Comparing masks

Mask rendering problem


After generating the masks we need a way to compare them, and because the mask is in essence a map for color application, the best way to compare the effective similarity of both implementations is to generate an image from the mask. After that we can compare the images to look for differences.

At first I decided to allow a very small percentage of the image to be different (about 0.03%), but this turned out to be a problem. While in theory it sounds OK, since all values are generated from similar logic, in practice it was a disaster. Accepting even a one-pixel difference could let a mask with a completely masked value in the middle pass as OK (which happened). So a better strategy is to allow a small per-pixel deviation, but not to allow any difference beyond that deviation.
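In code, the rule boils down to something like this (a rough sketch of the idea, not the actual TestUtil implementation, and ignoring the separate “allowed differences” counter for simplicity):

  #include <QImage>
  #include <cstdlib>

  // Compare only the alpha channel, where the mask lives: each pixel may
  // deviate by at most fuzzyAlpha, and no pixel may deviate beyond that.
  bool masksAreSimilar(const QImage &a, const QImage &b, int fuzzyAlpha)
  {
      if (a.size() != b.size())
          return false;

      for (int y = 0; y < a.height(); ++y) {
          for (int x = 0; x < a.width(); ++x) {
              const int diff = std::abs(qAlpha(a.pixel(x, y)) - qAlpha(b.pixel(x, y)));
              if (diff > fuzzyAlpha)
                  return false; // zero tolerance outside the allowed deviation
          }
      }
      return true;
  }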

Mask minimal difference: even one pixel ruins the mask

To test for differences in the images produced by the masks, we used the function compareQImages. The three numerical arguments at the end determine how the test will behave. The first two numbers define the fuzziness for considering something similar or not, corresponding to the color channel and the alpha channel; because the mask information is stored in the alpha channel, we only set the alpha fuzziness. The last argument determines the number of differences allowed.

QVERIFY(
    TestUtil::compareQImages(tmpPt,scalarImage,vectorImage,
                             0, 2, 0));

Things to improve

The test works OK for testing the similarity of one particular mask variation, which is fine for the initial implementation. However, we will need a way to test more mask shape variations, specifically changing the softness and mask ratio.

Things to consider:
+ Produce reference images to compare to: images would have to be generated beforehand and shipped with the code. This protects against future modifications altering the look of the mask. The static mask would need to be compared against both models, the vectorized and the scalar one.
+ Mask deformations: masks can be generated with different proportions and softness values, and some combinations could expose differences if the implementation is wrong. We could iterate over every single combination, which is thorough but costly; another possibility is to identify which values are most likely to introduce errors and only test those cases (see the sketch below).
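As a hedged sketch of what such an iteration could look like, reusing the fragment from above (the parameter values are picked arbitrarily, and I’m assuming the second constructor argument is the ratio and the next two control the softness/fade):

  // Hypothetical extension: run the similarity check over several
  // ratio/softness combinations instead of a single fixed mask.
  const QRect bounds(0, 0, 500, 500);

  for (qreal ratio : {1.0, 0.5, 0.2}) {
      for (qreal softness : {1.0, 0.5, 0.1}) {
          KisCircleMaskGenerator circVectr(500, ratio, softness, softness, 2, true);
          KisCircleMaskGenerator circScalar(circVectr);

          circScalar.resetMaskApplicator(true); // force the scalar backend

          KisMaskSimilarityTester(circScalar.applicator(),
                                  circVectr.applicator(), bounds);
      }
  }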

Next: Implementing circular Gauss Mask optimization!

With the basics of the test ready, it’s time to begin implementing a mask generator. We will start with the Gauss mask, as I have some code I wrote for the proposal, and we can use it to see that the test is working properly in this case.

********* Start testing of KisMaskSimilarityTest *********
Config: Using QtTest library 5.10.0, Qt 5.10.0 (x86_64-little_endian-lp64 shared (dynamic) release build; by Clang 9.0.0 (clang-900.0.39.2) (Apple))
PASS   : KisMaskSimilarityTest::initTestCase()
PASS   : KisMaskSimilarityTest::testCircleMask()
QDEBUG : KisMaskSimilarityTest::testGaussCircleMask()  Different at QPoint(50,13) source 0 0 0 169 dest 0 0 0 171 fuzzy 0 fuzzyAlpha 1 ( 1 of 4 allowed )
QDEBUG : KisMaskSimilarityTest::testGaussCircleMask()  Different at QPoint(33,15) source 0 0 0 84 dest 0 0 0 86 fuzzy 0 fuzzyAlpha 1 ( 2 of 4 allowed )
QDEBUG : KisMaskSimilarityTest::testGaussCircleMask()  Different at QPoint(67,15) source 0 0 0 84 dest 0 0 0 86 fuzzy 0 fuzzyAlpha 1 ( 3 of 4 allowed )
QDEBUG : KisMaskSimilarityTest::testGaussCircleMask()  Different at QPoint(33,85) source 0 0 0 84 dest 0 0 0 86 fuzzy 0 fuzzyAlpha 1 ( 4 of 4 allowed )
QDEBUG : KisMaskSimilarityTest::testGaussCircleMask()  Different at QPoint(67,85) source 0 0 0 84 dest 0 0 0 86 fuzzy 0 fuzzyAlpha 1 ( 5 of 4 allowed )
FAIL!  : KisMaskSimilarityTest::testGaussCircleMask()
'TestUtil::compareQImages(tmpPt,scalarImage, vectorImage, 0, 1, 4)' returned FALSE. ()

The differences are so big that even a quick look can tell that both masks are different!

In the next post I will show how I implemented the Gauss mask generator on Vc, what problems we found and how we solved them.

Until next time!

The newly released and extremely elegant Plasma 5.13 is now available in KDE neon User Edition. We’ve also gone ahead and included Qt 5.11 and KDE Frameworks 5.47 to get a billion bugs fixed and improve printing support.

Enjoy!

Today the Krita team releases Krita 4.0.4, a bug fix release of Krita 4.0.0. This is the last bugfix release for Krita 4.0.

Here is the list of bug fixes in Krita 4.0.4:

  • OpenColorIO now works on macOS
  • Fix artefacts when painting with a pixel brush on a transparency mask (BUG:394438)
  • Fix a race condition when using generator layers
  • Fix a crash when editing a transform mask (BUG:395224)
  • Add preset memory to the Ten Brushes Script, to make switching back and forth between brush presets smoother.
  • Improve the performance of the stroke layer style (BUG:361130, BUG:390985)
  • Do not allow nesting of .kra files: using a .kra file with embedded file layers as a file layer would break on loading.
  • Keep the alpha channel when applying the threshold filter (BUG:394235)
  • Do not use the name of the bundle file as a tag automatically (BUG:394345)
  • Fix selecting colors when using the python palette docker script (BUG:394705)
  • Restore the last used colors on starting Krita, not when creating a new view (BUG:394816)
  • Allow creating a layer group if the currently selected node is a mask (BUG:394832)
  • Show the correct opacity in the segment gradient editor (BUG:394887)
  • Remove the obsolete shortcuts for the old text and artistic text tool (BUG:393508)
  • Allow setting the multibrush angle in fractions
  • Improve performance of the OpenGL canvas, especially on macOS
  • Fix painting of pass-through group layers in isolated mode (BUG:394437)
  • Improve performance of loading OpenEXR files (patch by Jeroen Hoolmans)
  • Autosaving will now happen even if Krita is kept very busy
  • Improve loading of the default language
  • Fix color picking when double-clicking (BUG:394396)
  • Fix inconsistent frame numbering when calling FFMpeg (BUG:389045)
  • Fix channel swizzling problem on macOS, where in 16 and 32 bits floating point channel depths red and blue would be swapped
  • Fix accepting touch events with recent Qt versions
  • Fix integration with the Breeze theme: Krita no longer tries to create widgets in threads (BUG:392190)
  • Fix the batch mode flag when loading images from Python
  • Load the system color profiles on Windows and macOS.
  • Fix a crash on macOS (BUG:394068)

Download

Windows

Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.

Linux

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)

When it is updated, you can also use the Krita Lime PPA to install Krita 4.0.4 on Ubuntu and derivatives. We are working on an updated snap.

OSX

Note: the touch docker, gmic-qt and python plugins are not available on OSX.

Source code

md5sum

For all downloads:

Key

The Linux AppImage and the source tarball are signed. You can retrieve the public key over https here: 0x58b9596c722ea3bd.asc. The signatures are here (filenames ending in .sig).

Support Krita

Krita is a free and open source project. Please consider supporting the project with donations or by buying training videos or the artbook! With your support, we can keep the core team working on Krita full-time.

QtCS

Qt Contributors’ Summit 2018 is over. Two days of presentations and a lot of discussions during presentations, talk of Qt over coffee and lunch and in restaurants in the evening.

A hundred people gathered to think about where Qt is heading and where it is now. Oslo showed its best with warm and sunny weather. The Nordic light is something to see, though it does wake people at awkward hours of the morning. I’ve never had as much company at early breakfast before rushing to the event venue for last-minute checks.

The major topics of the event included the first early ideas for Qt 6. The first markings on the whiteboard put Qt 6 still securely in the future, several releases out, maybe after Qt 5.14.

Bugreports has a list of suggested changes. If you have something that you would like to see changed the next time there is an ABI break, take a look and see if you need to add to the list.

The C++ version for Qt 6 raised only a mild discussion. This is most likely due to things being a bit open in C++ development itself. It seems like C++17 would make the most sense, as staying with an older release might tie the project down too much, while going with C++20 seems aggressive, as it will most likely not be completely stable when Qt 6 needs to be in heavy development. However, there are a lot of open questions around how compilers for different platforms implement the new features coming to the language.

Session

The tools of the project got several sessions, and upcoming improvements and changes to Gerrit, Jira, Coin and the testing tools were discussed. Every area will see changes and improvements going forward. So if your Jira boards look strange one morning, it means that the tools have been updated and the process streamlined.

The sessions included Qt for Python, which is now in tech preview and officially supported. It builds on the PySide project and has a robust system for making Python bindings for Qt, but it can also be used for any C++ project. Check out Qt for Python now. It is still under development, but usable at a tech preview level, and it will see new features arriving all the time.

The above and all the other sessions can be found on the event program page. People will be adding notes to the session descriptions, posting notes to the development list and also adding actionable items to Bugreports. That makes following up on everything much easier and more visible.

coffee

It was a good event, with a lot of things cleared up and moved forward. It is always good to have the contributors come together and see each other; it helps the project far more than is apparent on the surface.

Lastly, it is again time to thank the sponsors for making Qt Contributors’ Summits possible!

KDAB, Viking Software, Froglogic, Intel and Luxoft


The post Qt Contributors’ Summit 2018 wrap-up appeared first on Qt Blog.

Optimized and less resource-hungry, Plasma 5.13 can run smoothly on under-powered ARM laptops, high-end gaming PCs, and everything in between.


Control playback, rewind and volume even if your browser is not visible.

Feature-wise, Plasma 5.13 comes with Browser Integration. This means both Chrome/Chromium and Firefox web browsers can be monitored and controlled using your desktop widgets. For example, downloads are displayed in the Plasma notification popup, so even if your browser is minimized or not visible, you can monitor the download progress. Likewise with media playing in a tab: you can use Plasma's media controls to stop, pause and silence videos and audio playing in any tab – even the hidden ones. This is a perfect solution for those annoying videos that auto-start without your permission. Another browser integration feature is that links can now be opened from Plasma's overhead launcher (KRunner), and you can also send links directly to your phone using KDE Connect.

Talking of KDE Connect, the Media Control widget has been redesigned and its support for the MPRIS specification has been much improved. This means more media players can now be controlled from the media controls in the desktop tray, or from your phone using KDE Connect.
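To give a feel for what MPRIS support means in practice, here is a hedged sketch of how any MPRIS-capable player can be toggled over D-Bus (the service name is just an example; each player registers its own org.mpris.MediaPlayer2.* name):

  #include <QCoreApplication>
  #include <QtDBus/QDBusInterface>

  int main(int argc, char *argv[])
  {
      QCoreApplication app(argc, argv);

      // Every MPRIS-compliant player exposes org.mpris.MediaPlayer2.Player
      // at /org/mpris/MediaPlayer2 on the session bus, so a generic remote
      // control does not need to know anything player-specific.
      QDBusInterface player(QStringLiteral("org.mpris.MediaPlayer2.vlc"), // example service name
                            QStringLiteral("/org/mpris/MediaPlayer2"),
                            QStringLiteral("org.mpris.MediaPlayer2.Player"));
      player.call(QStringLiteral("PlayPause")); // also: Next, Previous, Stop, ...

      return 0;
  }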


Blurred backgrounds bring an extra level of coolness to Plasma 5.13.

Plasma 5.13 is also visually more appealing. The redesigned pages in 5.13 include theming tools for desktops, icons and cursors, and you can download new splash screens from the KDE Store directly from the splash screen page. The desktop provides a new and efficient blur effect that can be used for widgets, the dashboard menu and even the terminal window, giving them an elegant and modern look. Another eye-catching feature is that the login and lock screens now display the wallpaper of the current Plasma release, and the lock screen incorporates a slick fade-to-blur transition to show the controls, allowing it to be easily used as a screensaver.

Discover, Plasma's graphical software manager, improves the user experience with list and category pages that replace header images with interactive toolbars. You can sort lists, and they also show star ratings of applications. App pages and app icons use your local icon theme to better match your desktop settings.

Vaults, Plasma's storage encryption utility, includes a new CryFS backend, better error reporting, a more polished interface, and the ability to remotely open and close vaults via KDE Connect.

Connecting external monitors has become much more user-friendly. Now, when you plug in a new external monitor, a dialog pops up and lets you easily control its position relative to your primary one.

Want to try Plasma 5.13? ISO images for KDE neon will probably be available tomorrow or on Friday. Check out our page with links to Live images to download the latest.

We look forward to hearing your comments on Plasma 5.13 - let us know how it works for you!


Full announcement.

June 12, 2018

Hi everyone, phase one of the coding period is now complete and I’m done with the initial implementation of the typewriter annotation tool in Okular, along with writing the integration tests for it. I have created the revision on Phabricator and it is currently under review. Some review comments by my mentor are still … 

The post GSoC :: Coding Period – Phase One (May 14th to June 12th): Initial implementation of typewriter annotation tool in Okular appeared first on Dileep Sankhla.

A feature frequently requested by Qt customers is the possibility to access, view and use a Qt-made UI remotely.

However, in contrast to web applications, Qt applications do not offer remote access by nature, as communication with the backend usually happens via direct function calls and not over socket-based protocols like HTTP or WebSockets.

But the good news is that with the right system architecture, a strong decoupling of frontend and backend, and the functionality of the Qt framework, it is possible to achieve it!

If you want the embedded performance of a Qt application together with zero-installation remote access for your solution, you might consider the following bits of advice and technologies.

Remote access via WebGL Streaming or VNC

If you have a headless device, or an embedded device with a simple QML-made UI that only needs to be accessed remotely via web browser by a small number of users, WebGL streaming is the right thing for you. In WebGL streaming, the GL commands used to render the UI are serialized and sent from the web server to the web browser, which interprets and renders them. Here’s how Bosch did it:

On a headless device, you can simply start your application with these command line arguments: -platform webgl.

This enables the WebGL streaming platform plugin, making the UI accessible from a web browser while the app runs locally on the device.
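Nothing WebGL-specific is needed in the application code itself; a perfectly ordinary Qt Quick main is enough (a minimal sketch, assuming a main.qml in the resources):

  #include <QGuiApplication>
  #include <QQmlApplicationEngine>

  int main(int argc, char *argv[])
  {
      QGuiApplication app(argc, argv);

      QQmlApplicationEngine engine;
      engine.load(QUrl(QStringLiteral("qrc:/main.qml"))); // assumed resource path

      // The same binary that renders locally can instead stream its UI to a
      // browser when started with:  ./myapp -platform webgl
      return app.exec();
  }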

For widget-based applications, you might consider the VNC based functionality, but this requires more bandwidth since the UI is rendered into pixel buffers which are sent over the network.

The drawback of both platform plugin approaches, VNC and WebGL, is that within one process you can run your application either remotely or locally, but not both.

If your device has a touchscreen and you still want to have remote access, you need to run at least two processes: one for the local and one for the remote UI.

The data between both processes is shared via the Qt RemoteObjects library.
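A minimal sketch of how that sharing could look with Qt Remote Objects (the channel and object names are made up for illustration):

  #include <QRemoteObjectHost>
  #include <QRemoteObjectNode>
  #include <QRemoteObjectDynamicReplica>

  // Process that owns the state (e.g. the local, on-device UI): expose a
  // QObject so its properties and signals become remotely accessible.
  void exposeSharedState(QRemoteObjectHost *host, QObject *sharedState)
  {
      host->setHostUrl(QUrl(QStringLiteral("local:uiState"))); // made-up channel name
      host->enableRemoting(sharedState, QStringLiteral("UiState"));
  }

  // Process that drives the remote UI: acquire a replica mirroring that object.
  QRemoteObjectDynamicReplica *acquireSharedState(QRemoteObjectNode *node)
  {
      node->connectToNode(QUrl(QStringLiteral("local:uiState")));
      return node->acquireDynamic(QStringLiteral("UiState"));
  }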

Example use cases are remote training or remote maintenance for technicians in an industrial scenario, where from the browser you can remotely show and control the mouse pointer on an embedded HMI device.

Have a look at the previous blog posts:

http://blog.qt.io/blog/2017/02/22/qt-quick-webgl-streaming/

http://blog.qt.io/blog/2017/07/07/qt-webgl-streaming-merged/

http://blog.qt.io/blog/2017/11/14/qt-webgl-cinematic-experience/

 

WebAssembly

WebGL streaming and VNC are rather suited for a limited number of users accessing the UI at the same time.

For example, you might have an application that needs to be accessed by a large number of users simultaneously and should not require installation. This could be the case when the application runs as Software-as-a-Service (SaaS) in the cloud. Fortunately, there is another technology that might suit your needs: Qt for WebAssembly.

While WebAssembly itself is not a Remote UI technology like WebGL or VNC, it is an open bytecode format for the web and is standardized by W3C.

Here’s a short video of our sensortag demo running on WebAssembly:

With Qt for WebAssembly, we are able to cross-compile Qt applications into WebAssembly bytecode. The generated WASM files can be served from any web server and run in any modern web browser. For remote access and distributed applications, a separate data channel needs to be opened to the device. Here it needs to be considered that a WebAssembly application runs in a sandbox; thus, the only way to communicate with the web server is via HTTP requests or web sockets.

However, this means that in terms of web server communication, Qt applications then behave exactly like web applications! Of course, you could still compile and deploy the application directly into another platform-specific format with Qt and use Qt Remote Objects for client-server communication. But only with Qt for WebAssembly do the zero-installation feature and sandboxing come for free.
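For example, a hedged sketch of opening such a data channel from the WebAssembly-built client over a web socket (the URL and payload are placeholders):

  #include <QCoreApplication>
  #include <QDebug>
  #include <QtWebSockets/QWebSocket>

  int main(int argc, char *argv[])
  {
      QCoreApplication app(argc, argv);

      // Inside the browser sandbox there is no raw TCP, but web sockets work,
      // so the application talks to its backend the same way a web app would.
      QWebSocket socket;
      QObject::connect(&socket, &QWebSocket::connected, [&socket]() {
          socket.sendTextMessage(QStringLiteral("hello from the client")); // placeholder payload
      });
      QObject::connect(&socket, &QWebSocket::textMessageReceived,
                       [](const QString &message) { qDebug() << "server:" << message; });

      socket.open(QUrl(QStringLiteral("wss://example.com/data"))); // placeholder URL
      return app.exec();
  }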

Exemplary use case scenarios are SaaS applications deployed in the cloud and running in the browser, multi-terminal UIs, UIs for gateways or headless devices without installation.

Have a look at our previous blog posts on Qt for WebAssembly:
https://blog.qt.io/blog/2018/04/23/beta-qt-webassembly-technology-preview/
https://blog.qt.io/blog/2018/05/22/qt-for-webassembly/

That’s it for today! Please check back soon for our final installment of our automation mini-blog-series, where we will look at cloud integration. In the meantime, have a look at our website for more information, or at Lars’ blog post for an overview of our blog series on Qt for Automation 2.0.

The post Remote UIs with WebGL and WebAssembly appeared first on Qt Blog.

June 11, 2018

Launchers in an OS have become the central point of access and interaction with system content. They are the main way most people interact with applications and files. In recent years, other OSes have become increasingly interested in beefing up their application menus. Plasma currently has three launchers integrated. Users are asked to select one or another by right-clicking the “start” menu button and switching to a different launcher. The interaction is somewhat quirky, but it is effective.

I wanted to contrast our current iteration with something that might be more interactive and more straightforward, and that could help users find the desired content faster. Here is an idea along those lines.

Please note that these mockups are just images. The layout, color and interaction is what I am trying to discuss. Is there merit to this convergence idea? Does it solve any issues?

Launchers

Plasma currently has three launcher styles. I believe all of them can be converged or made to look similar. However, given user requests, we have to keep all three, and they currently don’t seem to correlate visually. We have the dashboard launcher (WIP), Kickoff and Kicker. Each of them brings a different kind of interactivity and space savings for the users.

I would like to propose a visual merge of these three applications so that they look more cohesive, tight and closer to a Plasma style.

Things to note:

  1. Icons are not meant to be final. They are just a representation
  2. Spacing is relative
  3. Current elements on the screen are meant to be there

Introductions

Fullscreen button: resizes from Kicker to the dashboard launcher, and from Kickoff to the fullscreen launcher. This removes the need to right-click the menu by offering visual controls instead.

Kickoff menu: to provide consistency, I preserved the Kickoff bottom menu bar throughout the different iterations. This takes care of the cluttered category menu present in the dashboard launcher.

Things to Consider

  1. I don’t know how feasible this is code-wise. I welcome feedback.

Mockups: small, medium and fullscreen launcher variants

Last time, I was doing a recipe manager. This time I’ve been doing a game with JavaScript and Qt Quick, and for the first time dipping my toes into the Kirigami framework.

I’ve named the game Kolorfill, because it is about filling colors. It looks like this:

Kolorfill

The end goal is to make the whole board a single color in as few steps as possible. You do that with a “paint bucket” tool that fills from the top-left corner with a color of your choice.
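To make the mechanic concrete, here is a rough, purely illustrative flood-fill sketch in C++ (Kolorfill itself is QML/JavaScript, so this is not its actual code):

  #include <QPoint>
  #include <QStack>
  #include <QVector>

  // "Paint bucket" from the top-left cell: every cell connected to (0, 0)
  // that has the old color is repainted with the newly chosen color.
  void fill(QVector<QVector<int>> &board, int newColor)
  {
      const int oldColor = board[0][0];
      if (oldColor == newColor)
          return;

      QStack<QPoint> todo;
      todo.push(QPoint(0, 0));

      while (!todo.isEmpty()) {
          const QPoint p = todo.pop();
          if (p.y() < 0 || p.y() >= board.size() ||
              p.x() < 0 || p.x() >= board[p.y()].size())
              continue;
          if (board[p.y()][p.x()] != oldColor)
              continue;

          board[p.y()][p.x()] = newColor;
          todo.push(QPoint(p.x() + 1, p.y()));
          todo.push(QPoint(p.x() - 1, p.y()));
          todo.push(QPoint(p.x(), p.y() + 1));
          todo.push(QPoint(p.x(), p.y() - 1));
      }
  }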

But enough talk. Let’s see the actual code:
https://cgit.kde.org/scratch/sune/kolorfill.git/

And of course, there are some QML tests for the curious.
A major todo item is saving the high score and getting that to work. Patches are welcome, as are pointers to which QML components could help me with that.

