November 16, 2018

One of the KDE applications most clearly aimed at developers is KDevelop. Not for nothing is it an application that lets them create applications, so any improvements made to it will help improve the rest of the applications. That is why it fills me with satisfaction to announce that KDevelop 5.3 has been released, loaded with new features.

KDevelop 5.3, KDE's development environment, has been released

For those who don't know it, KDevelop is an integrated development environment for GNU/Linux and other Unix systems, published under the GPL license, aimed at use under the KDE graphical environment, although it also works with other environments, such as GNOME. (Via: Wikipedia)
It had been a long time since I last talked about this application (about four years), even though I usually keep it in mind because it is one of the darlings of one of the KDE developers I am closest to.

Since that distant 2014, KDevelop has made the jump from now-obsolete technologies (software technology advances very quickly) such as Qt 4 to KDE Frameworks 5 and Qt 5, and has released two versions in the 5.x branch, offering notable improvements version after version.

That is why this November 2018 I am pleased to share with you that KDevelop 5.3 has been released, an update that brings us new features such as:

  • Improvements to the code analyzers, such as the inclusion of the Clazy plugin.
  • Optimization of internal code, taking advantage of its own code analyzers.
  • Improvements in C++ support.
  • Optimized support for the PHP language.
  • Improved support for Python.
  • Support for other proprietary platforms.

As we can see, KDevelop continues along the path of optimization with the firm goal of becoming a must-have for developers. Something that ultimately benefits all applications and, in this way, all users of free systems.

More information: KDevelop

I have been invited by Kisio Digital to present the work we have been doing around KDE Itinerary at the Paris Open Transport Meetup next week. The meetup is near Gare de Lyon and starts on Thursday at 19:00. Feel free to come by; I'm looking forward to discussing ideas on how to move KDE Itinerary forward.

While in Paris I'm also going to meet the team behind Navitia, an open source platform for processing and querying static and real-time ground transport data, in particular for local and nation-wide railway networks. That's obviously very interesting for KDE Itinerary: as mentioned earlier, access to dynamic information such as real-time traffic data is probably the biggest challenge for us.

And yes, I of course made sure to pick travel options that don’t have enough test coverage in KDE Itinerary yet ;-)

Linux packaging was a nightmare for years. But recently serious contenders came up claiming to solve the challenge: first, containers changed for good how code is deployed on servers. And now a solution for the desktop is within reach. Meet Flatpak!

Preface

To begin with, I should probably admit that over the years I came to identify packaging as a fundamental problem within the Linux ecosystem. It prevented wider adoption of Linux in general, but especially on the desktop. I was kind of obsessed with the topic.

The general arguments were/are:

  • Due to a missing standard it was not easy enough for developers to package software. If they used one of the formats out there they could only target a subset of distributions. This led to lower adoption of the software on Linux, making it a less attractive platform.
  • Since Linux was less attractive for developers, fewer applications were created on or ported to Linux. This led to a smaller ecosystem. Thus it was less attractive to users, since they could not find appealing or helpful applications.
  • Due to missing packages in an easily accessible format, installing software was a challenge as soon as it was not packaged for the distribution in use. So Linux was a lot less attractive for users because little software was available.

History, server side

In hindsight I must say the situation was not as bad as I thought at the server level: Linux in the data center grew and grew. Packaging simply did not matter that much, because admins were used to problems deploying applications on servers anyway, and they had the proper knowledge (and time) to tackle challenges.

Additionally, the recent rise of container technologies like Docker had a massive impact: it made deploying apps much easier and added other benefits like sandboxing, detailed access permissions, clearer responsibilities (especially with dev and ops teams involved), and fewer dependency-hell problems. Together with Kubernetes, it seems as if an actual standard is evolving for how software is deployed on Linux servers.

To summarize: in the server ecosystem things were never as bad, and they are quite good these days. Given that Azure serves more Linux servers than Windows servers, there are reasons to believe that Linux is these days the dominant server platform and that Windows is more and more becoming a niche platform.

History, client side

On the desktop side things were bad right from the start. Distribution-specific packaging made compatibility a serious problem, and the incompatible RPM and DEB packaging formats made it worse. One reason why no package format ever won was probably that neither offered real benefits over the other. Measured against today's solutions for packaging software, RPM and DEB lack major advantages like sandboxing and permission systems. They are hopelessly outdated; I question whether they are suited for software packaging at all today.

There were attempts to solve the problem. There were attempts at standardization, for example via the LSB, but they did not gain enough traction. There were platform-agnostic packaging solutions. Most notable is Klik, which started some 15 years ago and was later renamed to AppImage. But despite the good intentions and the ease of use, it never gained serious attention over the years.

But with the arrival of Docker things changed: people saw the benefits of container formats, and the technology for such approaches was widely available. So people gave the idea another try: Flatpak.

Flatpak

Flatpak is a “technology for building and distributing desktop applications on Linux”. It is an attempt to establish an application container format for Linux-based desktops and make it easily consumable.

According to the history of Flatpak the initial idea goes way back. Real work started in 2014, and the first release was in 2015. It was developed initially in the ecosystem of Fedora and Red Hat, but soon got attention from other distributions as well.

Many features look somewhat similar to the typical features associated with container tools like Docker:

  • Build for every distro
  • Consistent environments
  • Full control over dependencies
  • Easy-to-use build tools
  • Future-proof builds
  • Distribution of packages made easy

Additionally it features a sandboxing environment and a permissions system.

The most appealing aspects for end users are that packages are simple to install and that there are many packages available, because developers only have to build them once to support a huge range of distributions.

By using Flatpak the software version is also not tied to the distribution update cycle. Flatpak can update all installed packages centrally as well.

Flathub

One thing I like about Flatpak is that it was built with repositories (“shops”) baked right in. There is a large repository called flathub.org where developers can submit their applications to be found and consumed by users:

The interface is simple but reasonably well designed. Each application features screenshots and a summary. The apps themselves are grouped by categories. The constantly changing list of new and updated apps shows that the catalog keeps growing. A list of the two dozen most popular apps is available as well.

I am a total fan of Open Source, but I do like the fact that there are multiple closed source apps listed in the store. It shows that the format can be used for such use cases, which is a sign of a healthy ecosystem. Also, there are quite a few games, which is always good.

Of course there is lots of room for improvement: at the time of writing there is no way to change or filter the sorting order of the lists. There is no popularity rating visible and no way to rate applications or leave comments.

Last but not least, there is currently little support from external vendors. While you find many closed source applications in Flathub, hardly any of them were provided by the software vendor; they were created by the community and are not affiliated with the vendors. For broader acceptance of Flatpak the support of software vendors is crucial, and this needs to be highlighted on the web page as well (“verified vendor” or similar).

Hosting your own hub

As mentioned, Flatpak has repositories baked in, and it is well documented: it is easy to generate your own repository for your own flatpaks. This is especially appealing to projects or vendors who want to host their applications themselves.

While today it is more or less common to use a central market (Android, iOS, etc.), some still prefer to keep their code in their own hands. It sometimes makes it easier to provide testing and development versions. Other use cases are software that is developed and used only in-house, or the mirroring of existing repositories for security or offline reasons: such use cases require local hubs, and it is no problem at all to bring them up with Flatpak.
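In practice this means exporting your builds into a repository and pointing clients at it. A minimal sketch with the standard Flatpak tooling, where the application ID, paths and URL are all placeholders:

flatpak-builder --repo=my-repo build-dir org.example.MyApp.json
flatpak remote-add --no-gpg-verify my-hub https://example.com/my-repo
flatpak install my-hub org.example.MyApp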

Flatpak, distribution support

Flatpak is currently supported on most distributions. Many of them have the support built in right from the start; others, most notably Ubuntu, need to install some software first. But in general it is quite easy to get started – and once you have, there are hundreds of applications you can use.
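On Ubuntu, for instance, getting started looks roughly like this (a sketch following the commands commonly documented by Flathub, with GIMP as an arbitrary example application):

sudo apt install flatpak
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.gimp.GIMP
flatpak update

The last command is also how all installed Flatpak applications are updated centrally, as mentioned above.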

What about the other solutions?

Of course Flatpak is not the only solution out there. After all, this is the open source world we are talking about, so of course there are other solutions.

Snap & Snapcraft

Snapcraft is a way to “deliver and update your app on any Linux distribution – for desktop, cloud, and Internet of Things.” The concept and idea behind it is somewhat similar to Flatpak, with a few notable differences:

  • Snapcraft also targets servers, while Flatpak only targets desktops
  • Beyond servers, Snapcraft additionally has IoT devices as a specific target group
  • Snapcraft does not support additional repositories; there is only one central marketplace everyone needs to use, and there is no real way to change that

Some more technical differences lie in the way packages are built, how the sandbox works and so on, but we will not focus on those in this post.

The Snapcraft marketplace, snapcraft.io, provides lists of applications and is much more mature than Flathub: it has vendor testimonials, it features verified accounts, multiple versions like beta or development can be picked from within the market, there are case stories, additional blog posts are listed for each app, there is integration with social accounts, and you can even see the distribution of installs by country and Linux flavor.

And as you can see, Snapcraft is endorsed and supported by multiple companies today which are listed on the web page and which maintain their applications in the market.

Flathub has a lot to learn before it reaches the same level of maturity. However, while I'd say that snapcraft.io is much more mature than Flathub, it also lacks the possibility to rate packages, or to simply list them by popularity. Am I the only one who wants that?

The main disadvantage I see is the monopoly. snapcraft.io is tightly controlled by a single company (not a foundation or similar). It is of course Canonical’s full right to do so, and the company and many others argue that this is not different from what Apple does with iOS. However, the Linux ecosystem is not the Apple ecosystem, and in the Linux ecosystem there are often strong opinions about monopolies, closed source solutions and related topics which might lead to acceptance problems in the long term.

Also, technically it is not possible to launch your own central server, for example for in-house development, for hosting a local mirror, or to support offline environments. To me this is particularly surprising given that Snapcraft specifically targets IoT devices, and I would run IoT devices in a closed network wherever I can – thus being unable to connect to snapcraft.io. The only solution I was able to identify was running an HTTP proxy, which is far from optimal.

So while Snapcraft has a mature marketplace, targets many more use cases and provides more packages to date, I do wonder how it will turn out in the long run, given that we are talking about the Linux ecosystem here. And while Canonical has quite some experience developing its own solutions apart from the “rest” of the community, those attempts have seldom worked out.

AppImage

I’ve already mentioned AppImage above, and I’ve written about it in the past when it was still called Klik. AppImage is a “way for upstream developers to provide native binaries for Linux”. The result is basically a single file that contains your entire application and which you can copy everywhere. It has existed for more than a dozen years now.

The thing probably most worth mentioning about it is that it never caught on. Even a long time ago it already provided many impressive features and made it possible to install software across distributions. Many applications were also available as AppImages – and yet I never saw wider adoption. It seems to me that it only got traction recently because Snapcraft and Flatpak entered the market and kind of dragged it along with them.

I’d love to understand why that is the case, to have an answer to the “why”. I only have a few ideas, and those are just ideas, not explanations of why AppImage, in all these years, never managed to become the Docker of the Linux desktop.

Maybe one problem was that it never featured a proper store: today we know from multiple examples on multiple platforms that a store can make the difference. A central place for users to browse, get a first impression of an app, leave comments and rate applications. Docker has a central “store”, Android and iOS have one, Flatpak and Snapcraft have one. AppImage never put a focus on that, and I do wonder if this was a missed opportunity. And no, appimage.github.io/apps is not a store.

Another difference from the other tools is that AppImage always focused on Open Source tools. Don't get me wrong, I appreciate that – but open source tools like Digikam were available on every distribution anyway. If AppImage had focused on reaching out to closed source software vendors as well, together with marketing this aggressively, maybe things would have turned out differently. You not only need to make software easily available to users, you also need to make available the software people want.

Last but not least, AppImage always tried to provide as many features as possible, while it might have benefited from focusing on a few and marketing them more strongly. As an example, AppImage advertises that it can run with and without sandboxing – yet sandboxing is a large part of the benefit of using such a solution in the first place. Another thing is integrated updates: there is a way to automatically update all AppImages on a system, but it is not built in. If both had been defaults rather than options, things might have been different.

But again, these are just ideas, attempts to find explanations. I’d be happy if someone has better ideas.

Disadvantages of the Flatpak approach

There are some disadvantages with the Flatpak approach – or the Snapcraft one, or in general with any container approach. Most notably: libraries and dependencies.

The basic argument here is: all dependencies are kept in each package. This means:

  • Multiple copies of the same libraries on the system, leading to larger disk consumption
  • Multiple copies of the same library in the RAM during execution, leading to larger memory consumption
  • And probably most important: if a library has a security problem, each and every package has to be updated

Especially the last part is crucial: in case of a serious library security problem, the user has to rely on each and every package vendor to update the library in their package and release an updated version. With a dependency-based system this problem does not exist: a single update of the shared library fixes all applications at once.

People often compare this problem to the Windows or Java world, where a similar situation exists. However, while the underlying problem is real and serious, Flatpak at least provides a sandbox and a permission system, something that was not the case in former Windows versions.

A trade-off has to be made between the added security of permissions and sandboxing and the risk of having outdated libraries inside those packages. That trade-off is not easily made.

But do we even need something like Flatpak?

This question might seem strange, given the needs I identified in the past and my obvious enthusiasm for the topic. However, these days more and more apps are created as web applications – the importance of the desktop is shrinking. The dominant platforms for users these days are mobile phones and tablets anyway. I would even go so far as to say that in the future desktops will still be there, but mainly to launch a web browser.

But we are not there yet, and today there is still a need for easy consumption of software on Linux desktops. I only would have hoped to see this technology, with this much traction and this much distribution and vendor support, 10 years ago.

Conclusion

Well – as I mentioned early on, I can get somewhat obsessed with this topic. And this much-too-long blog post surely shows it.

But as a conclusion I'd say that the days of difficult-to-install software on Linux desktops are over. I am not sure whether Snapcraft or Flatpak will “win” the race; we will have to see.

At the same time we have to face the fact that desktops in general are just not that important anymore. But until they fade away, I am very happy that it has become so much easier for me to install certain pieces of software in up-to-date versions on my machine.

We are happy to announce the release of Qt Creator 4.8.0 Beta2!

This release comes with the many fixes we have made since our first Beta release.

Additionally, we upgraded the LLVM used for the Clang code model to version 7.0, and our binary packages to the Qt 5.12 prerelease.

Get Qt Creator 4.8 Beta2

The open source version is available on the Qt download page, and you can find commercially licensed packages on the Qt Account Portal. Qt Creator 4.8 Beta2 is also available under Preview > Qt Creator 4.8.0-beta2 in the online installer. Please post issues in our bug tracker. You can also find us on IRC in #qt-creator on chat.freenode.net, and on the Qt Creator mailing list.

The post Qt Creator 4.8 Beta2 released appeared first on Qt Blog.

Hi! I had the opportunity to participate in QtCon Brasil 2018 as a speaker last weekend. It took place in São Paulo, a city I hadn't visited in a long time. My talk was about the integration of Qt applications and Computer Vision, especially focused on the mobile environment with QtQuick … Continue reading Talking about Qt and Computer Vision at QtCon Brasil 2018

November 15, 2018

The development of Free Software depends on developers and users alike, even more so when the latter become aware of the important job of hunting bugs. So next November 17, help Okular improve by taking part in its Okular Bug Day to reduce the annoying programming errors that any computer program inherently has.

Help Okular improve: Okular Bug Day 2018

If you know Okular you will understand why it is one of the great applications of the KDE ecosystem. If you don't, you have no idea what you are missing.

Okular, the magnificent viewer of a good number of formats such as PDF, PostScript, DjVu, CHM, XPS, ePub and some other less famous ones, wants to keep evolving, and one of its main ways of doing so is polishing its code.

That is why next November 17, 2018 the Okular Bug Day will be held, an all-day event whose aim is to triage a good number of bugs so that developers can invest their precious time in fixing them.

It works as follows, and applies throughout the whole day of the hunt:

  1. Take a look at the Bug Triaging guide to learn how to confirm and triage bugs.
  2. Log into KDE Phabricator and join the Bugsquad!
  3. Join the #kde-bugs IRC channel on Freenode to chat with the developers in real time as you go through the list.
  4. Open the shared document for this event (use your KDE Identity credentials) to select your block of bugs and go after them.

Obviously, if you need any help or clarification, don't hesitate to contact Andrew Crouthamel, the organizer of the Okular Bug Day!

A golden opportunity to be able to say with pride that you too take an active part in the development of free applications.

These past couple weekends were a blast! On the weekend of November 3 and 4, the first Maker Faire of Latin America took place in Rio de Janeiro, and I was able to give a talk about Atelier and the current status of our project. The event hosted more than 1,500 people on the first... Continue Reading →

With Qt for Python released, it’s time to look at the powerful capabilities of these two technologies. This article details one solopreneur’s experiences.

The task

Back in 2016, I started developing a cross-platform file manager called fman. Its goal: To let you work with files more efficiently than Explorer on Windows or Finder on Mac. If you know Total Commander and Sublime Text, a combination of the two is what I am going for. Here’s what it looks like:

fman screenshot

Picking technologies for a desktop app

The biggest question in the beginning was which GUI framework to use. fman's choice needed to tick the following boxes:

  • Cross-platform (Windows, Mac, Linux)
  • Good performance
  • Support for custom styles
  • Fast development

At the time, Electron was the most-hyped project. It has a lot of advantages: You can use proven web technologies. It runs on all platforms. Customizing the UI of your application is trivial by means of CSS. Overall, it’s a very powerful technology.

The big problem with Electron is performance. In particular, the startup time was too high for a file manager: On an admittedly old machine from 2010, simply launching Electron took five seconds.

I admit that my personal distaste for JavaScript also made it easier to discount Electron. Before I go off on a rant, let me give you just one detail that I find symptomatic: Do you know how JavaScript sorts numbers? Alphabetically. ’nuff said.

After considering a few technologies, I settled on Qt. It's cross-platform, has great performance and supports custom styles. What's more, you can use it from Python. This makes me, at least, orders of magnitude more productive than I would be with the default C++.

Writing a GUI with Python and Qt

Controlling Qt from Python is very easy. You simply install Python and use its package manager to fetch the Qt bindings. Gone are the days when you had to set up gigabytes of software, or even Qt itself. It literally takes two minutes. I wrote a Python Qt tutorial that shows the exact steps and sample code if you're interested.
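As a hedged illustration only (PyQt5 is used here simply because it is the binding fman settled on, as described below), fetching the bindings and showing a first window amounts to roughly this:

pip install PyQt5

and then, in a small .py file:

from PyQt5.QtWidgets import QApplication, QLabel

app = QApplication([])                       # every Qt app needs exactly one QApplication
label = QLabel('Hello from Python and Qt!')  # a widget with no parent becomes a window
label.show()                                 # widgets are hidden until shown
app.exec_()                                  # enter the Qt event loop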

You can choose between two libraries for using Qt from Python:

  • PyQt is mature but requires you to purchase a license for commercial projects.
  • Qt for Python is a recent push by Qt the company to officially support Python. It’s offered under the LGPL and can thus often be used for free.

From a code perspective, it makes no difference because the APIs are virtually identical. fman uses PyQt because it is more mature. But given that Qt for Python is now officially supported, I hope to switch to it eventually.

Deploying to multiple platforms

Once you’ve written an application with Python and Qt, you want to bring it into the hands of your users. This turns out to be surprisingly hard.

It starts with packaging. That is, turning your source code into a standalone executable. Special Python libraries exist for solving this task. But they never “just work”. You may have to manually add DLLs that are only required on some Windows versions. Or avoid shipping shared libraries that are incompatible with those already present on the users’ systems. Or add special handling for one of your dependencies. Etc.

Then come installers. On Windows, most users expect a file such as AppSetup.exe. On Mac, it's usually App.dmg. Whereas on Linux, the most common file formats are .deb, .rpm and .pkg.tar.xz. Each of these technologies takes days to learn and set up. In total, I have spent weeks just creating fman installers for the various platforms.

Code signing

By default, operating systems show a warning when a user downloads and runs your app:

To prevent this, you need to code sign your application. This creates an electronic signature that lets the OS verify that the app really comes from you and has not been tampered with. Code signing requires a certificate, which acts like a private encryption key.

Obtaining a certificate differs across platforms. It’s easiest on Linux, where everyone can generate their own (GPG) key. On Mac, you purchase a Developer Certificate from Apple. Windows is the most bureaucratic: Certificate vendors need to verify your identity, your address etc. The cheapest and easiest way I found was to obtain a Comodo certificate through a reseller, then have them certify my business – not me as an individual.

Once you have a certificate, code signing involves a few more subtleties: On Windows and Ubuntu, you want to ensure that SHA256 is used for maximum compatibility. On Linux, unattended builds can be tricky because of the need to inject the passphrase for the GPG key. Overall, however, it’s doable.
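On Windows, for example, enforcing SHA256 usually comes down to the file-digest flags of Microsoft's signtool; a sketch only, with the certificate file, password and timestamp URL as placeholders:

signtool sign /fd SHA256 /tr http://timestamp.example.com /td SHA256 /f certificate.pfx /p PASSWORD AppSetup.exe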

Automatic updates

Like many modern applications, fman updates itself automatically. This enables a very rapid development cycle: I can release multiple new versions per day and get immediate feedback. Bugs are fixable in less than an hour. All users receive the fix immediately, so I don’t get support requests for outdated problems.

While automatic updates are a blessing, they have also been very expensive to set up. The tasks we have encountered so far – packaging, installers and code signing – took a month or two. Automatic updates alone required the same investment again.

The most difficult platform for automatic updates was Windows. fman uses a Google technology called Omaha, which is for instance used to update Chrome. I wrote a tutorial that explains how it works.

Automatic updates on Mac were much easier thanks to the Sparkle update framework. The main challenge was to interface with it via Objective-C from Python. Here too I wrote an article that shows you how to do it.

Linux was “cleanest” for automatic updates. This is because modern distributions all have a built-in package manager such as apt or pacman. The steps for enabling automatic updates were typically:

  • Host a “repository” for the respective package manager. This is just a collection of static files such as .deb installers and metafiles that describe them.
  • Upload the installer to the repository, regenerating the metafiles.
  • Create a “repository file”: It imports the repo into the user’s package manager.

Because fman supports four Linux distributions, this still required learning quite a few tools. But once you’ve done it for two package managers, the others are pretty similar.
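On Debian-based distributions, for instance, the “repository file” from the last step boils down to a one-line apt source entry dropped into /etc/apt/sources.list.d/; the URL here is hypothetical:

deb https://updates.example.com/fman/deb stable main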

OS Versions

There aren’t just different operating systems, there are also different versions of each OS. To support them, you must package your application on the earliest version. For instance, if you want your app to run on macOS 10.10+, then you need to build it on 10.10, not 10.14. Virtual machines are very useful for this. I show fman’s setup in a video.

fman’s build system

So how does fman cope with all this complexity? The answer is simple: a Python-based build script. It does everything from creating standalone executables to uploading to an update server. I feel strongly that releasing a desktop app should not take months, so I am making it available as open source. Check it out if you are interested!

Conclusion

Python and Qt are great for writing desktop apps: Qt gives you speed, cross-platform support and stylability while Python makes you more productive. On the other hand, deployment is hard: Packaging, code signing, installers, and automatic updates require months of work. A new open source library called fbs aims to bring this down to minutes.

If you liked this post, you may also be interested in a webinar Michael is giving on the above topics. Sign up or watch it here: Americas / EMEA

The post Python and Qt: 3,000 hours of developer insight appeared first on Qt Blog.

November 14, 2018

We continue promoting the activities of Linux Center, this time with the Linux Essentials course, with which you can prepare to obtain a certificate that officially attests to your Linux knowledge.

Linux Essentials course at Linux Center

Although Linux Center courses are normally free of charge (like the Inkscape one), today I would like to announce the first preparation course for the official “Linux Essentials” exam, based on the official Linux Professional Institute book. After completing it you will be ready to take the official exam, which is taken online and grants you the certification.

The course lasts 20 hours: it starts on Monday, November 26 and finishes on Friday, November 30, running through that week from 16:00 to 20:00.

As we can read on the official course page:

“The open source job market is booming, with employers offering above-average salaries and perks to hire and retain those willing to learn and get certified.

Open source hiring will grow more than any other kind of employment opportunity, and 65 percent of employers have increased incentives to retain their current open source employees.”


The instructor is Javier Sepúlveda, technical director of ValenciaTech, who has more than 15 years of teaching experience. Like an airline pilot, Javier has logged more than ten thousand hours of classroom training, spread across occupational and continuing education courses as an expert instructor for the Valencian Employment and Training Service, as well as courses taught for other public and private entities.

The course has 15 places, and the first 10 people to register will get a discount thanks to SLIMBOOK.

The price is therefore as follows:

  • Discounted price (first 10 places): €195
  • Regular price (remaining places): €295

To save time, bringing a laptop is recommended.

More information: Linux Center

Thank you to everyone who participated in the last Bug Day! We had a turnout of about six people, who worked through about half of the existing REPORTED (unconfirmed) Konsole bugs. Lots of good discussion occurred on #kde-bugs as well; thank you for joining the channel and being part of the team!

We will be holding a Bug Day on November 17th, 2018, focusing on Okular. Join at any time, the event will be occurring all day long!

This is a great opportunity for anyone, especially non-developers to get involved!

  1. Check out our Bug Triaging guide for a primer on how to go about confirming and triaging bugs.
  2. Log into KDE Phabricator and join the Bugsquad!
  3. Join the #kde-bugs IRC channel on Freenode to chat with us in real-time as we go through the list.
  4. Open the shared Etherpad for this event (use your KDE Identity login) to select your block of bugs and cross them off.

If you need any help, contact me!

We promised Digital Atelier would be available in the Krita shop after the successful finish of the fundraiser. A bit later than expected, we've updated the shop with the Digital Atelier brush preset bundle and tutorial download:

There are fifty great brush presets, more than thirty brush tips, twenty paper textures and almost two hours of in-depth video tutorial, walking you through the process of creating new brush presets.

Digital Atelier sells for 39,95 euros, ex VAT.


And we’ve also created a new USB-card, with the newest stable version of Krita for all OSes. Includes Comics with Krita, Muses, Secrets of Krita and Animate with Krita tutorial packs.

It’s in the credit-card format again, which makes it easy to carry Krita with you wherever you go! Besides, this format lets us use the new Krita 4 splash screen by Tyson Tan.

The usb-card is available in two versions: as-is, for €29,95 and (manually) updated to the latest version of Krita for €34,95.

KDevelop 5.3 released

A little less than a year after the release of KDevelop 5.2 and a little more than 20 years after KDevelop's first official release, we are happy to announce the availability of KDevelop 5.3 today. Below is a summary of the significant changes.

We plan to do a 5.3.1 stabilization release soon, should any major issues show up.

Analyzers

With 5.1, KDevelop got a new menu entry Analyzer, which features a set of actions to work with analyzer-like plugins. For 5.2, the runtime analyzer Heaptrack and the static analyzer cppcheck were added. During the 5.3 development phase, we added another analyzer plugin which is now shipped to you out of the box:

Clazy

Clazy is a clang analyzer plugin specialized in Qt-using code, and it can now also be run from within KDevelop by default, showing issues inline.

Screenshot of KDevelop with dialog for project-specific Clazy settings, with clazy running in the background

The KDevelop plugin for Clang-Tidy support had been developed and released independently until after the feature freeze of KDevelop 5.3. It will be released as part of KDevelop starting with version 5.4.

Internal changes

With all the analyzer integration available, KDevelop's own codebase has been subjected to it as well. Lots of code has been optimized and, where indicated by the analyzers, stabilized. At the same time, modernization to the new standards of C++ and Qt 5 has continued with the analyzers' aid, so that only the copyright headers still give away that KDevelop was founded in 1998.

Improved C++ support

A lot of work was done on stabilizing and improving our clang-based C++ language support. Notable fixes include:

  • Clang: include tooltips: fix range check. (commit. code review D14865)
  • Allow overriding the path to the builtin clang compiler headers. (commit. See bug #393779)
  • Always use the clang builtin headers for the libclang version we use. (commit. fixes bug #387005. code review D12331)
  • Group completion requests and only handle the last one. (commit. code review D12298)
  • Fix Template (Class/Function) Signatures in Clang Code Completion. (commit. fixes bug #368544. fixes bug #377397. code review D10277)
  • Workaround: find declarations for constructor argument hints. (commit. code review D9745)
  • Clang: Improve argument hint code completion. (commit. code review D9725)

Improved PHP language support

Thanks to Heinz Wiesinger we've got many improvements for the PHP language support.

  • Much improved support for PHP Namespaces
  • Added support for Generators and Generator delegation
  • Updated and expanded the integrated documentation of PHP internals
  • Added support for PHP 7's context sensitive lexer
  • Install the parser as a library so it can be used by other projects (currently, umbrello can use it) (Ralf Habacker)
  • Improved type detection of object properties
  • Added support for the object typehint
  • Better support for ClassNameReferences (instanceof)
  • Expression syntax support improvements, particularly around 'print'
  • Allow optional function parameters before non-optional ones (Matthijs Tijink)
  • Added support for magic constants __DIR__ and __TRAIT__

Improved Python language support

The developers have been concentrating on fixing bugs, and those fixes have already gone into the 5.2 series.

There are a couple of improved features in 5.3:

Support for other platforms

KDevelop is written with portability in mind, and relies on Qt for solving the big part of that, so next to the original "unixoid" platforms like Linux distributions and the BSD derivatives, other platforms with Qt coverage are within good reach as well, if people do the final pushing. So far Microsoft Windows has been another supported platform, and there is some experimental, but maintainer-seeking, support for macOS. Some porters of Haiku, the BeOS-inspired open source operating system, have done porting work as well, building on the work done for other Qt-based software. For KDevelop 5.3 the small patch still needed has now been applied to KDevelop itself, so the Haiku ports recipe for KDevelop no longer needs it.

KDevelop is already in the HaikuDepot, currently still at version 5.2.2. It will be updated to 5.3.0 once the release has happened.

KDevelop 5.3 Beta on Haiku

Note to packagers

The Clazy support mentioned above has a recommended optional runtime dependency: clazy, more specifically the clazy-standalone binary. Currently clazy is only packaged and made available to users by a few distributions, e.g. Arch Linux, openSUSE Tumbleweed or OpenMandriva.

If your distribution has not yet looked into packaging clazy, please consider doing so. Next to enabling the Clazy support feature in KDevelop, it allows developers to easily fix and optimize their Qt-based software, resulting in less buggy and more performant software for you.

You can find more information in the release announcement of the currently latest clazy release, 1.4.

Get it

Together with the source code, we again provide a prebuilt one-file-executable for 64-bit Linux (AppImage), as well as binary installers for 32- and 64-bit Microsoft Windows. You can find them on our download page.

The 5.3.0 source code and signatures can be downloaded from here.

Should you find any issues in KDevelop 5.3, please let us know in the bug tracker.

kossebau Wed, 2018/11/14 - 10:00

November 13, 2018

Elisa (product page, release announcements blog) is a music player designed for excellent integration into the KDE Plasma desktop (but of course it runs everywhere, including some non-Free platforms). I had used it a few times, but had not gotten around to packaging it. So today I threw together a FreeBSD port of Elisa, and you'll be able to install it from official packages whenever the package cluster gets around to it.

Screenshot of pkg info for Elisa

I say “threw together” because it took me only half an hour (most of that was just building it a half-dozen times in different ways). The Elisa code itself is very straightforward and nice. It doesn't even spit out a single warning with Clang 6 on FreeBSD. That's an indicator (for me, anyway) of good code. So if you want more music on FreeBSD, it's there! (And the multimedia controls work from the SDDM lock screen, too.)

November 12, 2018

Last month French publisher D-Booker released the 2nd edition of Timothée Giet’s book “Dessin et peinture numérique avec Krita”.

The first edition was written for Krita 2.9.11, almost three years ago. A lot of things have changed since then! So Timothée has completely updated this new edition for Krita version 4.1. There are also a number of notes about the new features in Krita 4.

Moreover, D-Booker worked again on updating and improving the French translation of Krita! Thanks again to D-Booker for their contribution.

You can order this book directly from the publisher’s website. There is both a digital edition (pdf or epub) as well as a paper edition.

Could you tell us something about yourself?

I’m Brigette, but I mainly go by my online handle of HoldXtoRevive. I'm from the UK and mostly known as a fan artist.

Do you paint professionally, as a hobby artist, or both?

I have had a few commissions but outside that I would call myself a hobbyist. I would love to work professionally at some point.

What genre(s) do you work in?

I do semi-realistic sci-fi art. Most recently I have been drawing character portraits inspired by the Art Nouveau style; the majority of them have been fan art of a few different sci-fi games.

Whose work inspires you most — who are your role models as an artist?

It's hard to list them all really. Top of the list would be my other half, RedSkittlez, who is an amazing concept and character artist, as well as my friends Blazbaros, SilverBones and many more who would cause this to go on for too long.

Outside of my friends I would say Charles Walton, Pete Mohrbacher and Valentina Remenar to name a few.

How and when did you get to try digital painting for the first time?

About 4 years ago I downloaded GIMP as I wanted to get back into art after not drawing for about 15 years. I got a simple drawing tablet soon after and things just progressed from there.

What makes you choose digital over traditional painting?

The flexibility and practicality of it. Whilst I would love to try traditional painting, acquiring, maintaining and storing supplies is not easy for me.

How did you find out about Krita?

My partner was looking at alternatives to Photoshop and came across it via a YouTube video. He recommended that I try it out.

What was your first impression?

How clean the UI is and how easy all of the tools were to find, and the fun I had messing with the brushes.

What do you love about Krita?

The fact it was really easy to get to grips with, yet I can tell there is more I can get from it. Also the autosave.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I would like a brightness/contrast slider alongside the curve for ease of use. It would also be nice if the adjustment windows would not close when the autosave kicks in.

What sets Krita apart from the other tools that you use?

I had not used many programs before I came across Krita. But the thing that jumped out at me was the ease of use, and it had everything I wanted in an art program. I know that if I want to try animation I do not need to go and find another program.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

That is hard to say, but in a pinch I would say the one titled “Saladin's White Wolf”. I was really happy with how the background came out; it was also the one picked out and promoted by Bungie on their Twitter.

What techniques and brushes did you use in it?

For the most part I use a multiply layer over flats for shading. My main brushes are just the basic tip (gaussian), basic wet soft and the soft smudge brush.

Where can people see more of your work?

Over on my DeviantArt page: https://www.deviantart.com/holdxtorevive
And my twitter: https://twitter.com/HoldX2ReviveART

Anything else you’d like to share?

I’m bad with words but I want to show appreciation to the Krita crew for making this wonderful program and to everyone who has supported and encouraged me.

November 11, 2018

Getting started – clang-tidy AST Matchers

Over the last few weeks I published some blogs on the Visual C++ blog about Clang AST Matchers. The series can be found here:

I am not aware of any similar series existing which covers creation of clang-tidy checks, and use of clang-query to inspect the Clang AST and assist in the construction of AST Matcher expressions. I hope the series is useful to anyone attempting to write clang-tidy checks. Several people have reported to me that they have previously tried and failed to create clang-tidy extensions, due to various issues, including lack of information tying it all together.

Other issues with clang-tidy include the fact that it relies on the “mental model” a compiler has of C++ source code, which might differ from the “mental model” of regular C++ developers. The compiler needs to have a very exact representation of the code, and needs to have a consistent design for the class hierarchy representing each standard-required feature. This leads to many classes and class hierarchies, and a difficulty in discovering what is relevant to a particular problem to be solved.

I noted several problems in those blog posts, namely:

  • clang-query does not show AST dumps and diagnostics at the same time
  • Code completion does not work with clang-query on Windows
  • AST Matchers which are appropriate to use in contexts are difficult to discover
  • There is no tooling available to assist in discovery of source locations of AST nodes

Last week at code::dive in Wroclaw, I demonstrated tooling solutions to all of these problems. I look forward to video of that talk (and videos from the rest of the conference!) becoming available.

Meanwhile, I’ll publish some blog posts here showing the same new features in clang-query and clang-tidy.

clang-query in Compiler Explorer

Recent work by the Compiler Explorer maintainers adds the possibility to use source code tooling with the website. The compiler explorer contains new entries in a menu to enable a clang-tidy pane.

clang-tidy in Compiler Explorer

I demonstrated use of Compiler Explorer to run the clang-query tool at the code::dive conference, building upon the recent work by the Compiler Explorer developers. This feature will land upstream in time, but can be used with my own AWS instance for now. It is suitable for exploring the effect that changing source code has on match results and, orthogonally, the effect that changing the AST Matcher has on the match results. It is also accessible via cqe.steveire.com.

It is important to remember that Compiler Explorer is running clang-query in script mode, so it can process multiple let and match calls, for example. The new command set print-matcher true helps distinguish the output from the matcher which causes it. The help command is also available, listing the new features.
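For example, a small clang-query script (the matcher is chosen arbitrarily) could look like this:

set print-matcher true
let funcs functionDecl(isDefinition())
match funcs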

The issue of clang-query not printing both diagnostic information and AST information at the same time means that users of the tool need to alternate between writing

set output diag

and

set output dump

to access the different content. Recently, I committed a change to make it possible to enable both dump and diag output from clang-query at the same time. The new commands follow the same structure as the set output command:

enable output dump
disable output dump

The set output <feature> command remains as an “exclusive” setting to enable only one output feature and disable all others.

Dumping possible AST Matchers

This command design also enables the possibility of extending the features which clang-query can output. Up to now, developers of clang-tidy extensions had to inspect the AST corresponding to their source code using clang-query and then use that understanding of the AST to create an AST Matcher expression.

That mapping to and from the AST “mental model” is not necessary. New features I am in the process of upstreaming to clang-query enable the output of AST Matchers which may be used with existing bound AST nodes. The command

enable output matcher

causes clang-query to print out all matcher expressions which can be combined with the bound node. This cuts out the requirement to dump the AST in such cases.

Inspecting the AST is still useful as a technique to discover possible AST Matchers and how they correspond to source code. For example, if the functionDecl() matcher is already known and understood, an AST dump shows that function calls are represented by CallExpr in the Clang AST. Using the callExpr() AST Matcher and dumping possible matchers to use with it leads to the discovery that callee(functionDecl()) can be used to determine particulars of the function being called. Such discoveries are not possible by only reading the AST output of clang-query.
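To illustrate how such a discovered matcher composes in practice (the function name is hypothetical):

match callExpr(callee(functionDecl(hasName("doSomething"))))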

Dumping possible Source Locations

The other important discovery space in creation of clang-tidy extensions is that of Source Locations and Source Ranges. Developers creating extensions must currently rely on the documentation of the Clang AST to discover available source locations which might be relevant. Usually though, developers have the opposite problem. They have source code, and they want to know how to access a source location from the AST node which corresponds semantically to that line and column in the source.

It is important to use a semantically relevant source location in order to build reliable tools that refactor at scale and without human intervention. For example, a cursory inspection of the locations available from a FunctionDecl AST node might lead to the belief that the return type is available at the getBeginLoc() of the node.

However, this is immediately challenged by the C++11 trailing return type feature, where the actual return type is located at the end. For a semantically correct location, you must currently use

getTypeSourceInfo()->getTypeLoc().getAs<FunctionTypeLoc>().getReturnLoc().getBeginLoc()

It should be possible to use getReturnTypeSourceRange(), but a bug in clang prevents that, as it does not account for the trailing return type feature.
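A short C++ sketch of why getBeginLoc() is not a reliable way to find the return type:

// Two declarations of the same function; the return type is written
// in different places in the source:
int add(int a, int b);          // return type at the very beginning
auto add(int a, int b) -> int;  // trailing return type: 'int' is at the end

// getBeginLoc() of the FunctionDecl points at 'int' in the first case,
// but at 'auto' in the second, so it does not locate the return type.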

Once again, my new output feature of clang-query presents a solution to this discovery problem. The command

enable output srcloc

causes clang-query to output the source locations by accessor and caret corresponding to the source code for each of the bound nodes. By inspecting that output, developers of clang-tidy extensions can discover the correct expression (usually via the clang::TypeLoc hierarchy) corresponding to the source code location they are interested in refactoring.

Next Steps

I have made many more modifications to clang-query which I am in the process of upstreaming. My Compiler Explorer instance is listed as the ‘clang-query-future’ tool, while the clang-query-trunk tool runs the current trunk version of clang-query. Both can be enabled for side-by-side comparison of the future clang-query with the existing one.

You have probably read a lot about Akademy 2018 recently, and how great it was.
For me it was a great experience too and this year I met a lot of KDE people, both old and new. This is always nice.
I arrived on Thursday so I had one day to set everything up and had a little bit of time to get to know the city.
On Friday evening I enjoyed the "Welcoming evening", but I was very surprised when Volker told me that I would be on stage the next day, talking about privacy.
He told me that someone should have informed me several days before. The scheduled speaker, Sebastian, couldn't make it to Akademy.

That was really a pity. If I had known earlier, I would have prepared a proper presentation, as I think I have a lot to say on the topic. I care a lot about good encryption support in KDE PIM and I have also been busy with Tails, a distribution strongly focused on privacy and anonymity.

So the next day I rambled a bit about encrypting mail headers (also known as Memory Hole), but it wasn't a great talk. It was also very stressful, as we started with a lot of delay: setting up the audio took much longer than expected, so all talks started late. As I was in such a hurry, I forgot a lot of what I wanted to say and failed to paint a bigger picture of the privacy goal.

To compensate, let me say it here. Privacy starts with encrypting everything, beginning with the content of the connection. This is mostly done via TLS. Here KDE is lacking some bits, as we cannot display the encryption methods in use, so we are sometimes forced to downgrade to old and insecure encryption methods without visual feedback for the user. But at least we support TLS everywhere! Unfortunately, metadata is often enough to track where people are. To hide this information, we need TOR/VPN support within the applications. For most applications you can use the TOR network by running

torsocks <cmd>

but this is not really user-friendly.
It also has the disadvantage that you can't see whether DNS requests are transmitted via the TOR tunnel. And then there are daemons, which you don't normally run from the command line. Furthermore, GnuPG, for example, has its own option to enable TOR support and does not like being run under torsocks.
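For instance, GnuPG's network component, dirmngr, can be told to route its traffic over Tor via its configuration file instead; a minimal sketch:

# ~/.gnupg/dirmngr.conf
use-tor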

We need more control over the TLS certificates in use and a way to display the TLS parameters, like accepted ciphers, protocol versions etc., because at the moment KDE just asks you about unknown certificates, but you can't view or control the other important details of a TLS connection to harden your system. Sure, an average user can't decide whether something is good or not, and a user may need to access some rotten company site anyway. But we can provide a rating, as SSL Labs does, to show when a TLS connection is not good.

Another topic is good support for TOR/VPN for applications, and KDE in total, in a user-friendly way, and being sure that everything within these applications really uses the tunnel. I am thinking of use cases like: I want TOR for everything except this one application. After we have good solutions for those, we can focus on all the individual applications. I don't have a good overview of what each application should additionally do with regard to the Privacy goal.
As I'm busy within KDE PIM, I see some things there:

  • Encrypting mail headers to expose less metadata to surveillance (T742).
  • Making it possible to search in encrypted mails. Until now, the encrypted data is just a binary blob to the search index. Search is important for finding mails again, and its absence is often a reason why users do not use encrypted messages (T8447).
  • Protecting the Akregator web viewer against ad trackers (T7528).
  • Adding GnuPG TOFU (Trust On First Use) trust model support to KMail. With TOFU you get more security without exchanging GPG fingerprints. To be clear, exchanging fingerprints gives more security than TOFU, but it is exhausting, and only a small number of users do it. So many users would profit from the extra security of the TOFU trust model being supported by KMail (see the sketch below).
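GnuPG already implements this trust model on the command line; what a KMail integration would build on is, roughly (the file name is hypothetical):

gpg --trust-model tofu+pgp --verify signed-mail.asc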

To move forward in reaching the privacy goal, I volunteered to organize a Sprint together with Louise Galathea. It will hopefully take place next Spring.

But back to Akademy: two days of many talks, which was pretty overwhelming, in a good way. I really enjoyed the mix of non-technical and technical talks. My highlight was a talk about Kontact by Markus Feilner. It is interesting to see how users master issues around Kontact and what solutions they find. The talk would have benefited from Markus talking to KDE PIM beforehand, as some of his suggestions weren't correct. But it was nice to see that there are happy users who love all the bells and whistles of Kontact. In the days after the talk I encountered another user who gave Kontact a try after years, because of that talk.

The BoF sessions started on Monday. I started Monday by joining the Promo BoF to get some insight into how Promo works and how KDE PIM can get better promotion to attract new contributors. Together we created a rough plan to focus first on the new website and then create several blog posts to promote KDE PIM. Internally we started to create several junior jobs and are discussing who can mentor new people when they come along. This is still ongoing, and hopefully new people will join the new spirit of KDE PIM and won't be overwhelmed by the large number of repositories.

After the AGM I joined the Distros BoF and was impressed by how many distros have a KDE focus. We started to talk about issues regarding KDE and downstream. Especially if distros are time-based, they need a lot of manpower to validate whether bugs have already been fixed upstream, or whether it is a downstream bug. The fast release cycle of the different KDE bundles (Plasma, Frameworks & Applications) makes it even more difficult to follow. Maybe this BoF helps the distros talk about their issues directly on the distributions@kde.org mailing list, so solutions can be found for common issues.

The next day I attended the KConfig BoF and talked about issues regarding config migration and how to store config. Another very common issue is that you need to get updates of config values in your application. KDE PIM has some examples of bad decisions regarding config handling, as you can configure a lot of stuff and everything is separated into several files. But the separation was not well designed from the beginning, an issue that has grown over the years. It is also written down as a task to do cleanups and make the configs easier to consume, because a clearer separation also helps users who want to move their KDE PIM setup from one computer to another.

In the afternoon, the KDE PIM BoF took place, where we followed up on what has been going on since the last Akademy and what new things we want to do. We discussed a lot about how we can improve the onboarding for KDE PIM.
As we want to make it easier for other applications to use PIM-related data, we have a long-standing goal to move individual parts of KDE PIM to Frameworks. For KDE Applications 18.12 we want to move syndication into the Frameworks release. If we had more manpower, we could move these parts to Frameworks faster. If you are interested, or currently use some parts of KDE PIM within your application, it is a good idea to help us get things cleaned up and moved to Frameworks, so that a bigger audience can consume PIM data more easily.

In the evening I joined the LGBT & Queer dinner, and it was nice to cook together and chat.

Over the next days I mostly sat down and tried to get some smaller things fixed, like the issue with the suspend icon in the logout applet. It was very helpful to sit next to Kai-Uwe, who could see the issue and propose workarounds directly. In the end he found the hard-coded color in the theme, and the issue is solved in the next release.

Since there were many open discussions I did not manage to write a line of code during these days - I was busy talking. Additionally I wanted to see the newest KDE PIM within Debian, so this also took some time.

On Friday I took a train back to Germany. As Volker is always interested in live delay data for improving KItinerary, he wished me some delays. And I had a lot of delay - I arrived in Berlin 12 hours later than expected. I had to sleep in Dresden, as the last train from Prague towards Berlin leaves at 18:07. Since I missed that one, I took a local train to Dresden, and from there no train to Berlin was running anymore. And it wasn't really late - just midnight. I never thought Dresden would be so badly connected to the rest of Germany...
But it was nice, as I stayed at a friend's place, a friend I wouldn't have met otherwise. As far as improving KItinerary goes, the trip was not a success: I only got one update from Deutsche Bahn, at 23:25, telling me that the train from Leipzig to Berlin would use another track. The genuinely useful information, namely that I would never reach that train, never arrived.

Week 44 in Usability & Productivity is coming right up! This week was a bit lighter in terms of the number of bullet points, but we got some really great new features, and there’s a lot of cool stuff in progress that I hope to be able to blog about next week! In the meantime, take a look at this week’s progress:

New Features

Bugfixes

UI Polish & Improvement

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If my efforts to perform, guide, and document this work seem useful and you’d like to see more of them, then consider becoming a patron on Patreon, LiberaPay, or PayPal. Also consider making a donation to the KDE e.V. foundation.

November 10, 2018

C++ comparison operators are usually fairly straightforward to implement. Writing them by hand can however be quite error prone if there are many member variables to consider. Missing a single one of them will still compile and mostly work fine, apart from some hard to debug corner cases, such as misbehaving or crashing algorithms and containers, or data loss. Can we do better?

Background

In KItinerary we have a number of classes representing schema.org types. Those are fairly straightforward value types with a number of properties, to be consumed both statically (by C++ code) and dynamically (by QML, Grantlee and JSON-LD de/serialization).

As implementing getters, setters, Q_PROPERTY statements, member variables, etc for about a hundred or so properties by hand would result in an unreasonable amount of boilerplate code, this is all done by a macro (example).

So far so good, but now we’d like to add comparison operators for those types. Specifically we needed operator== for optimizing away some memory allocations in case of non-changing write operations (similar to the common pattern of not emitting change signals in setter methods on non-changes). But the thoughts below are of course also applicable to any other comparison function, not just equality.
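
For reference, this is the common setter pattern alluded to above (a generic sketch with hypothetical names, not KItinerary code):

#include <QObject>
#include <QString>

class Thing : public QObject
{
    Q_OBJECT
public:
    QString name() const { return m_name; }
    void setName(const QString &name)
    {
        if (m_name == name)
            return; // value unchanged: skip the write and the change signal
        m_name = name;
        emit nameChanged(m_name);
    }
Q_SIGNALS:
    void nameChanged(const QString &name);
private:
    QString m_name;
};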

Implementation

Ideally we’d find an alternative to writing the comparison functions by hand that would either be impossible to get wrong or would at least fail loudly rather than silently (e.g. by producing compiler errors).

QMetaProperty

One idea would be to implement the comparison functions entirely generically by leveraging the property iteration support in QMetaObject. That is, we iterate over all properties with the STORED flag and compare their values (a sketch follows the list below). This gets the job done, but has two drawbacks:

  • Comparison happens via QVariant, which means we have to register comparison operators for all types with the meta type system. That might actually be nice to have anyway, but it is limited to less-than and equal.
  • All property values are passed through QVariant, which in some situations can have a relevant performance impact, in particular when using types that are too large for inline storage inside QVariant (causing allocations).
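
A minimal sketch of this generic approach, assuming Q_GADGET-based value types (illustrative only, not the actual KItinerary code):

#include <QMetaObject>
#include <QMetaProperty>
#include <QVariant>

template <typename T>
bool propertiesEqual(const T &lhs, const T &rhs)
{
    const QMetaObject &mo = T::staticMetaObject;
    for (int i = 0; i < mo.propertyCount(); ++i) {
        const QMetaProperty prop = mo.property(i);
        if (!prop.isStored())
            continue;
        // Both values take a detour through QVariant; custom types need their
        // equality comparison registered via QMetaType::registerEqualsComparator().
        if (prop.readOnGadget(&lhs) != prop.readOnGadget(&rhs))
            return false;
    }
    return true;
}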

More preprocessor magic

Another idea could be to use more elaborate preprocessor constructs that allow for iteration over all properties. The Boost.Preprocessor library has the building blocks for this. From experience in an old pre-C++11 project attempting to catch SQL errors at compile time this works but doesn’t really lead to a nice syntax nor easily maintainable code.

A clever overloading trick from Verdigris

The solution I ended up using is inspired by Woboq’s Verdigris. The basic idea is that each property macro generates only a part of the comparison function: the comparison for its own property, plus a call to the comparison function for the previous property. The chaining is done by overloading on a template type that essentially describes a numerical value by inheritance. A little constexpr helper function allows us to determine the index of a property at compile time. Olivier describes this in more detail in his blog post about Verdigris implementation details.
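
A heavily simplified, hand-expanded illustration of the chaining (hypothetical Person class; in the real code the overloads are emitted by the property macros and the indices come from a constexpr counter):

#include <QString>

template <int N> struct Num : Num<N - 1> {};
template <> struct Num<0> {};

class Person
{
public:
    QString name;
    int age = 0;

    bool operator==(const Person &other) const
    { return membersEqual(other, Num<2>()); }

private:
    // Terminating overload: no properties below index 0.
    bool membersEqual(const Person &, Num<0>) const { return true; }
    // One overload per property; each compares its member and chains downwards.
    bool membersEqual(const Person &other, Num<1>) const
    { return name == other.name && membersEqual(other, Num<0>()); }
    bool membersEqual(const Person &other, Num<2>) const
    { return age == other.age && membersEqual(other, Num<1>()); }
};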

The resulting code for KItinerary can be found here. It’s worth noting that while this might look inefficient due to the many function calls, it is all inline code in a single translation unit, so in an optimized build it ends up essentially identical to what the hand-written comparison function would look like.

Language support?

Would it make sense to have C++ support something like bool operator==(const T&) const = default, that is, let the compiler generate the implementation for us, as it can for a number of other member functions? Proposals for such a language extension exist.

There’s a bigger conceptual problem though, one that you run into immediately once the comparison operator is implemented: the semantics of comparing things. Here are a few examples (see the snippet after the list):

  • Floating point numbers: how close together is “equal” depends on what those numbers represent; in my case of geo coordinates anything within a few meters is certainly close enough. And there is the little detail that NaN does not compare equal to itself, which isn’t what KItinerary needed either, as NaN is its indicator for “value not set”.
  • QString: here the default equal comparison doesn’t distinguish between null and empty strings. That’s probably very often what you’d expect, unless you put special semantics on that distinction, as in the NaN case above.
  • Date/time values: two QDateTime instances compare equal if they refer to the same point in time, not if they represent exactly the same information (e.g. a time specified with a UTC offset vs a full IANA timezone). For KItinerary timezones are a crucial piece of information, so the default semantics don’t cut it there.
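
These default semantics are easy to demonstrate in a small standalone snippet (illustration only):

#include <QDateTime>
#include <QString>
#include <QTimeZone>
#include <cmath>

int main()
{
    // NaN never compares equal to itself:
    const double notSet = NAN;
    Q_ASSERT(!(notSet == notSet));

    // Null and empty QStrings compare equal by default:
    Q_ASSERT(QString() == QLatin1String(""));

    // The same instant compares equal even if the time zone representation differs:
    const QDateTime utc = QDateTime::fromString(QStringLiteral("2018-11-10T10:00:00Z"), Qt::ISODate);
    Q_ASSERT(utc == utc.toTimeZone(QTimeZone("Europe/Berlin")));
    return 0;
}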

An all-or-nothing approach for compiler-generated comparison operator implementations means it’s not a viable option in case one needs more control over the semantics. That of course does not mean it is useless, but it does mean the alternative implementation techniques remain valid either way.

November 09, 2018

Make sure you commit anything you want to end up in the 18.12 release to the release branches.

We're already past the dependency freeze.

The Freeze and Beta is this Thursday, 15 November.

More interesting dates
November 29: KDE Applications 18.12 RC (18.11.90) Tagging and Release
December 6: KDE Applications 18.12 Tagging
December 13: KDE Applications 18.12 Release

https://community.kde.org/Schedules/Applications/18.12_Release_Schedule


Photo of a KDE Slimbook and a pint

Slimbook and a pint of Cider

The weekend of 3 and 4 November Dave and I went to staff the KDE booth at Freenode#live, in Bristol. I had never been in that corner of England before; it turns out to have hills, and a river, and tides. Often an event brings me to a city, and then out, without seeing much of it. This time I traveled in early and left late the day after the event, so I had some time to wander around, and it was quite worthwhile.

Turns out there is quite a lot of cider available, and the barman gave me an extensive education on the history of cider and a bit on apple cultivation when I asked about it. Sitting down with a Slimbook and a pint can be quite productive; I got some Calamares fixes done before the conference.

Photo of table with KDE logos

Our stand, before visitors arrived

We (as in the KDE community) have invested in getting booth materials ready for this kind of event, so we quickly had Konqi peeking at visitors and Plasma running on a range of devices (no Power9 this week, though). Dave had 3D-printed a KDE logo that we stuck to the top of the monitor, and we had a pile of leaflets and stickers (KDE, neon, Krita, GCompris, and some WikiToLearn) to hand out.

Next to our stand was Minetest, with a demo, and they sat hacking at the code of the game for most of the weekend. On the other side, Freenode themselves were handing out IRC.horse stickers and other goodies.

Photo of Dave talking with people

The hallway track

A conference booth isn’t just about giving away goodies, though, and we spent the weekend, 9-to-5, talking with people about the KDE community and its software products. Plasma could be seen — and played with — on the machines we had at hand, and we did some small application demos. As usual, people told us they used i3 — and our response, as usual, is that Free Software on the desktop is better than non-free software. Some people had last used KDE in 2010 or so, so we could show how the different parts now operate independently and more flexibly — and how Plasma is now pretty darn lightweight.

Here in the photo is Dave extolling the virtues of something. (I’ve edited the photo to make some people in the background un-recognizable by pasting Dave’s eyeball over their faces — I forgot to ask permission to post)

Photo of stage with presentation

Conference track

… and a conference isn’t just about the booths and vendor stands, either. It’s about the talks. Here’s one I sat in on about business models and how licensing affects the available models (in particular with reference to recent license changes in some projects). The KDE community is interesting because it’s not one-community-one-company, like a lot of projects I see. We have a collection of small and medium companies building on, and building with, the KDE community’s software products. I think that’s healthy and generally happy. No licensing gotchas for us when using Qt under the LGPL version 2.1 and KDE’s Frameworks also under the LGPL.

Two talks I would like to single out are those by Leslie Hawthorn and VM Brasseur (those are YouTube links). Neither is a technical talk; both are social talks about what we (as Free Software contributors) do and why, and the social issues we all face — and how to bring Software Freedom to more people.

A concept I didn’t know existed was platform shaming. See above: there’s lots of ways to use Free Software from the KDE community, in combination with i3, or on Windows, and on Free Software operating systems. Those are all OK.

Photo of Dave with laptop

Bugfixing Dave

I’d like to thank Dave for being a great booth- room- and dinner-mate, and Christel and the Freenode#live crew for a wonderful well-organized weekend. Not to mention the speakers and attendees who made it a happy and instructive time.

Kdenlive 18.08.3 is out with updated build scripts as well as some compilation fixes. All work is focused on the refactoring branch, so there is nothing major in this release. On the Windows front, however, some major breakthroughs were made, like the fix for the play/pause lag as well as the ability to build Kdenlive directly on Windows. The next milestone is to kill the running process on exit, making Kdenlive almost as stable as the Linux version.

In other news, we are organizing a bug squash day in the first days of December. If you are interested in participating, this is a great opportunity, since we have prepared a list of low-hanging bugs to fix. See you!

Bugfixes

  • Fix finding MLT data in build-time specified path. Commit.
  • Fix play/pause on Windows. Commit.
  • Try catching application initialization crashes. Commit.
  • Fix MinGW build script misses. Commit.
  • Backport some Shotcut GLwidget updates. Commit.
  • Fix MinGW build. Commit.
  • Install doc files. Commit.
  • Build scripts for Linux & Windows. Commit.
  • Backport packaging scripts. Commit.
  • Fix MinGW build. Commit.
  • Backport fix for incorrect bin rename. Commit. See bug #368206

Have you ever wanted to test if your application works on ARM, but don’t want to make an image and launch a real device, or wanted to test an architecture you don’t even have hardware for? I have been following a set of instructions gathered from various online sources, and have been teaching others how to do the same, but I want to collect it all and publish it here for you to follow as well. While writing this I will be following the instructions myself for an architecture I haven’t done this for before.

The secret is of course QEMU, but to make testing easier I will set it up so that QEMU doesn’t simulate an entire system, but runs foreign applications as if they were native inside my existing X11 desktop. This is possible through the qemu-user binaries, which can run statically linked executables or dynamically linked ones if you are in a properly set up chroot.

Setting up chroot

So how do you get this properly set up chroot? There are various ways depending on the Linux distro. I personally use Debian, and the instructions here will be almost identical to what is done on Ubuntu. If you use SUSE or Red Hat, I know you can find official guidelines on setting up a chroot for QEMU.

First, get the dependencies (as root or sudo) :
apt-get install binfmt-support qemu qemu-user-static debootstrap

Most of these are just the tools needed, while binfmt-support is the configuration glue that will make Linux launch ELF executables of non-native architectures using QEMU, which is another piece of the magic puzzle.
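
If you want to sanity-check that the QEMU handlers were registered (an optional step; the exact handler names depend on the installed QEMU targets), binfmt-support can list them:
update-binfmts --display | grep qemu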

For the example I am testing while writing this, I will be targeting the IBM mainframe architecture S/390x, and will also need the cross-building tools for it. I can get all of those on Debian by just pulling in the right cross-building g++ (as root or sudo) :
apt-get install g++-s390x-linux-gnu

S/390x is interesting for me because Debian dropped official support for big endian PPC64, so I need a new way of testing that Qt works on 64bit big-endian platforms.

Now go to where you want your sysroots installed, and run debootstrap. Debootstrap is the Debian tool for downloading and installing a Debian image into a sub-directory
(as root or sudo) :
debootstrap --arch s390x stable debian-s390x http://deb.debian.org/debian/

Next prepare it for being a chroot system (as root or sudo) :
mount --rbind /dev debian-s390x/dev/
mount --rbind /sys debian-s390x/sys/
mount --rbind /proc debian-s390x/proc/

Enter the chroot (as root or sudo) :
chroot debian-s390x

If you run ‘uname -a’ here you will now get something like:
Linux hostname 4.x.x-x-amd64 #1 SMP Debian 4.x.x-x (2018-x-x) s390x GNU/Linux

In other words, a native AMD64 kernel with s390x userland! Every command you execute here will be using s390x binaries in the chroot, but using the native kernel for system calls. If you launch an X11 app, it will be using the X11 server already running natively on your machine.

Prepare chroot to be a sysroot

To build Qt, we also need all the Qt dependencies inside the chroot. In Debian we can do that using ‘apt-get build-dep’.

Edit debian-s390x/etc/apt/sources.list and add a deb-src line, such as:
deb-src http://deb.debian.org/debian stable main

Install the Qt dependencies (inside chroot as root):
apt-get update
apt-get build-dep qtbase-opensource-src

Finally, to complete the setup we need to make sure it works as a sysroot for cross-building, by replacing absolute links to system libraries with relative ones that work both inside and outside the chroot (inside chroot as root):
apt-get install symlinks
cd /usr/lib/
symlinks -rc .

Building Qt

We can now build Qt for this sysroot. If you want to test your own applications, you could also install an official prebuilt Qt version from Debian, and build against that.

Configuring Qt for cross-building to a platform like that looks like this on Debian, and only needs a few changes to work on other platforms:
$QTSRC/configure -prefix /opt/qt-s390x -release -force-debug-info \
-system-freetype -fontconfig -no-opengl \
-device linux-generic-g++ -sysroot /sysroots/debian-s390x \
-device-option CROSS_COMPILE=/usr/bin/s390x-linux-gnu-

You can remove the -no-opengl argument if you want to test OpenGL using the mesa software implementation, but otherwise I don’t recommend using OpenGL inside the chroot.

If everything is set up correctly you don’t need to specify ‘-system-freetype -fontconfig’. It is just there to catch failures in pkg-config, so we don’t end up building a Qt version that requires manually bundled fonts.

In general cross-compiling with Qt always looks similar to the above, but note that if you are cross-building to ARM32, it looks a little bit different:
$QTSRC/configure -prefix /opt/qt-armhf -release -force-debug-info \
-system-freetype -fontconfig -no-opengl \
-device linux-arm-generic-g++ -sysroot /sysroots/debian-armhf \
-device-option DISTRO_OPTS=hard-float \
-device-option COMPILER_FLAGS='-march=armv7-a -mfpu=neon' \
-device-option CROSS_COMPILE=/usr/bin/arm-linux-gnueabihf-

Notice that ARM32 has a specific device name (linux-arm-generic-g++ instead of linux-generic-g++) and needs to have a floating-point mode set. Additionally, for older embedded architectures, it is generally a good idea to specify COMPILER_FLAGS to select a specific architecture. You need to pass the compiler flags as a device-option, because the normal way of specifying QMAKE_CFLAGS and QMAKE_CXXFLAGS after the configure options sets the host compiler flags, not the target compiler flags. If you need more complicated options, you should make your own qmake device target. You can always base it on linux-generic-g++ for good defaults.

With Qt configured like this, you can build it as usual and run make install to have the libraries installed into the /opt directory under the chroot.

One issue I encountered: as I built qtbase, I hit a linking error:
qt5/qtbase/src/corelib/global/qrandom.cpp:143: error: undefined reference to 'getentropy'

The problem turns out to be that my cross-building tools have libc6 2.27 while the sysroot has libc6 2.24. The best thing to do would be to use the same libc version on both sides. However, as they are fully compatible, apart from newer features in one, I chose to just disable the affected feature in Qt. I accomplished this by calling the configure command again, this time with -no-feature-getentropy added.
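
For completeness, that is the same configure line as before with the feature disabled:
$QTSRC/configure -prefix /opt/qt-s390x -release -force-debug-info \
-system-freetype -fontconfig -no-opengl -no-feature-getentropy \
-device linux-generic-g++ -sysroot /sysroots/debian-s390x \
-device-option CROSS_COMPILE=/usr/bin/s390x-linux-gnu-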

With this change, everything I tried building built and installed without further issues.

Run a Qt application

To get some examples to test, go to the folder of the example in the build tree and run ‘make install’. For instance, my standard example for widgets:
cd qtbase/examples/widgets/richtext/textedit
make install

Then (inside chroot as root, assuming you have created a testuser):
su - testuser
export DISPLAY=:0
export QT_XCB_NO_MITSHM=1
/opt/qt-s390x/examples/widgets/richtext/textedit/textedit -platform xcb

And it just works…
The -platform xcb argument is needed because the default for embedded builds is using the eglfs QPA. I am assuming you are developing on X11 and want the jaw-dropping experience of having a foreign architecture Qt application just launch like a normal Qt app.

In this setup, the applications running in QEMU will be much faster than under a full system emulation, because the kernel and the X11 server are still the native ones, and thus run at normal speed. For X11, however, it means sometimes having a few oddities, as there can be a client-server mismatch in endianness. While this is supposed to be supported, it is such an edge case that it often has issues. This is the reason QT_XCB_NO_MITSHM=1 is set: otherwise we would hit an issue with passing file descriptors between client and server, which is likely a bug in libxcb. If we were testing a little-endian architecture like arm64 or riscv instead, the environment variable would not be needed.

So that is it: quite a few steps, but most of them are only done once. And as I wrote this, I followed the same instructions for a rather uncommon and not officially supported platform, and only hit two issues, both with easy workarounds.

The post Testing Qt on Emulated Architectures Using QEMU User Mode appeared first on Qt Blog.

November 08, 2018


Last month was released the 2nd edition of my book “Dessin et peinture numérique avec Krita”. I just received a few copies, so now is time to write a little about it.

I wrote the first edition for Krita 2.9.11, almost three years ago. A lot of things have changed since, so I updated this second edition for Krita 4.1.1 and added a few notes about some new features.

Also, my publisher worked again on updating and improving the French translation of Krita. Thanks again to the D-Booker editions for their contribution.

You can order this book directly from the publisher’s website, in printed or digital edition.

November 07, 2018

Jings, no wonder people find computer programming scary when the most easily accessible language, JavaScript, is also the messiest one.

Occasionally people would mention to me that the categories on Planet KDE didn’t work, and eventually I looked into it: it mostly worked, but also sometimes maybe it didn’t. Turns out we were checking for no cookies being set, and if none were set we’d set some defaults for the categories. But sometimes the CDN would set a cookie first and ours would not get set at all. This was hard to recreate, as it didn’t happen when working locally, of course. And then our JavaScript had at least three different ways to run the initial-setup code, but there’s no easy way to just read a cookie, madness I tell you. Anyway, it should be fixed now and categories are set by default, but only if none have been set before, so you may still have to manually choose which you read.

In the Configure Feed menu at the top you can select to read blogs in different languages. By default it shows only blogs in English, as well as Dot News, Project News and any user blogs whose owners have asked for them to be added (only two are listed in our config). You can also show blogs in Chinese (also only two listed), Italian (none listed), Polish (one), Portuguese (two), Spanish (four, but kdeblog by Jose is especially prolific) or French (none). Work to be done includes figuring out how to make this apply to the RSS feed.


We are pleased to announce that the 3rd bugfix release of Plasma 5.14, 5.14.3, is now available in our backports PPA for Cosmic 18.10.

The full changelog for 5.14.3 can be found here.

Already released in the PPA is an update to KDE Frameworks 5.51.

To upgrade:

Add the following repository to your software sources list:

ppa:kubuntu-ppa/backports

or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt update
sudo apt full-upgrade


IMPORTANT

Please note that more bugfix releases are scheduled by KDE for Plasma 5.14, so while we feel these backports will be beneficial to enthusiastic adopters, users wanting to use a Plasma release with more stabilisation/bugfixes ‘baked in’ may find it advisable to stay with Plasma 5.13.5 as included in the original 18.10 Cosmic release.

Should any issues occur, please provide feedback on our mailing list [1], IRC [2], and/or file a bug against our PPA packages [3].

1. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net
3. Kubuntu ppa bugs: https://bugs.launchpad.net/kubuntu-ppa

November 06, 2018

Some years ago I added an embedded Twitter feed to the side of Planet KDE. This replaced the earlier manually curated feeds from identi.ca and Twitter, which people had added but which had since died out (in the case of identi.ca) or been blocked (in the case of Twitter). That embedded Twitter feed used the #KDE tag, and while there was the odd off-topic or abusive post, for the most part it was an interesting way to browse what the people of the internet were saying about us. However, Twitter shut that off a few months ago, which you could well argue is what happens with closed proprietary services.

We do now have a Mastodon account, but my limited knowledge and web searching on the subject haven’t turned up a way to embed a hashtag feed, and the critical mass doesn’t seem to be there yet; maybe it never will be, due to the federated-with-permissions model just creating more silos.

So now I’ve added a manually curated Twitter feed back to Planet KDE with KDE people and projects.  This may not give us an insight into what the wider internet community is thinking but it might be an easy way to engage more about KDE people and projects as a community.  Or it might not, I haven’t decided yet and I’m happy to take feedback on whether it should stay.

In the meantime, ping me to be added to the list, or file a bug on bugs.kde.org to request that you, someone you know, or your project be added (or removed). Volunteers are also wanted to help curate the feed; ping me to help out.


Welcome to Part 4 of my Blog series about Sharing Files to and from mobile Qt Apps.

Part 1  was about sharing Files or Content from your Qt App with native Android or iOS Apps,
Part 2 explained how to share Files with your Qt App from other native Android Apps,
Part 3 covered sharing Files with your Qt App from other native iOS Apps.
Part 4 implements FileProvider to share Files from your Qt App with native Android  Apps.

01_blog_overview


FileProvider Content URIs

Starting with Android 7 (SDK 24) you cannot use File URIs anymore to share Files to other Android Apps – you must use a FileProvider to share Files using a Content URI.

To avoid this you could use Target SDK 23 as I did for the first parts of this blog.

But starting November 1, 2018, Google requires updates to Apps and new Apps on Google Play to target Android 8 (Oreo, API level 26) or higher, so there’s no way around it: to share Files on Android you must use a FileProvider.

Here’s the workflow to update your sharing App from File URI to FileProvider with Content URI.

 Android Manifest

As a first step we have to add some info to AndroidManifest.xml :

<application ...>
    <provider
        android:name="android.support.v4.content.FileProvider"
        android:authorities="org.ekkescorner.examples.sharex.fileprovider"
        android:grantUriPermissions="true"
        android:exported="false">
        <meta-data
            android:name="android.support.FILE_PROVIDER_PATHS"
            android:resource="@xml/filepaths"/>
    </provider>
</application>

The Provider needs the com.android.support.v4 libraries. To include them, please add com.android.support to your build.gradle file:

dependencies {
    ...
    compile 'com.android.support:support-v4:25.3.1'
}

This will only work if you have installed the Android Support Repository.

In your QtCreator Preferences → Devices → Android, you get easy access to Android SDK Manager. Please check that under Extras the Android Support Repository is installed:

02__sdk_manager_support_lib

In your Android Manifest you see a reference to the resource @xml/filepaths. You have to add a new file filepaths.xml under android/res/xml:

03_filepaths_xml

The content of filepaths.xml:

<paths xmlns:android="http://schemas.android.com/apk/res/android">
<files-path name="my_shared_files" path="share_example_x_files/" />
</paths>

This is the mapping to the folder where Android will find your File to be shared. This folder is inside your AppData location. Remember: in blog part 1 we used a shared Documents location to provide Files to other apps. Now for Android we don’t use a shared location, but a path inside our AppData location. When starting the app I always check if the folder exists:


#if defined (Q_OS_IOS)
QString docLocationRoot = QStandardPaths::standardLocations(QStandardPaths::DocumentsLocation).value(0);
#endif
#if defined(Q_OS_ANDROID)
QString docLocationRoot = QStandardPaths::standardLocations(QStandardPaths::AppDataLocation).value(0);
#endif
mDocumentsWorkPath = docLocationRoot.append("/share_example_x_files");
if (!QDir(mDocumentsWorkPath).exists()) {
    // create the folder, including any missing parent directories
    QDir().mkpath(mDocumentsWorkPath);
}


The last important part in the Android Manifest specifies an Authority:

android:authorities="org.ekkescorner.examples.sharex.fileprovider"

This name must be unique – so it’s a good idea to use your package name and append .fileprovider.

 QShareUtils Java Code

As a last step you must do some small changes in QShareUtils.java.

Import FileProvider and ShareCompat from android.support:

import android.support.v4.content.FileProvider;

import android.support.v4.app.ShareCompat;

Reference the Authority as defined in AndroidManifest:

private static String AUTHORITY="org.ekkescorner.examples.sharex.fileprovider";

You must also create your Intents in a different way:

// the old way: Intent sendIntent = new Intent();

Intent sendIntent = ShareCompat.IntentBuilder.from(QtNative.activity()).getIntent();

You’re done :)

As a positive side effect, it’s now much safer to share Files, because they are located inside your App sandbox and not in a publicly shared folder.

Overview

Here’s a short overview of the Android FileProvider implementation:

04_fileprovider_overview


Some issues when sharing Files to other Android Apps

Android: Sharing to Google Fotos App

If you try to edit an Image in Google Fotos App using the FileProvider, the Fotos App appears in the list of available Apps, but then reports that the File cannot be edited. Viewing works as before.

The only current workaround is to use another Foto editing App.

Thanks

Thanks again to Thomas K. Fischer @taskfabric for his valuable input on this topic.

Have Fun

In the first part of this blog series, I told you that I needed sharing between Apps on Android and iOS for a DropBox-like customer App. This App has been running very well for some months as an in-house App at the customer’s site, so it’s not publicly available. But my example App is Open Source, and now is a good time to download the current version from Github, build it, and run the Sharing Example App.

This was the last part of this blog series, but stay tuned for some other blogs about „Qt mobile Apps in the Enterprise“ and „QML Camera on Android and iOS“.

The post Sharing Files on Android or iOS from or with your Qt App – Part 4 appeared first on Qt Blog.

This year’s North American stop on the Qt World Summit world tour was in Boston, held during the Red Sox’s World Series win. Most of us were glad to be flying home before celebration parades closed the streets! The Qt community has reason to celebrate too, as there’s an unprecedented level of adoption and support for our favorite UX framework. If you didn’t get a chance to attend, you missed the best Qt conference on this continent. We had a whole host of KDABians onsite, running training sessions and delivering great talks alongside all the other excellent content. For those of you who missed it, don’t worry – there’s another opportunity coming up! The next stop on Qt’s European tour will be with Qt World Summit Berlin December 5-6. Be sure to sign up for one of the training sessions now before they’re sold out.

Meanwhile, let me give you a taste of what the Boston Qt World Summit had to offer.

Strong Community

One thing was clear: after 23 years, Qt is still going strong. The Qt Company and the greater Qt community have contributed 29,000 commits and fixed 5000 bugs between Qt 5.6.3 and the current head branch. Very kindly and much appreciated, Lars Knoll, CTO of the Qt Company, recognized KDAB as the largest and longest external contributor to the ever-growing Qt codebase in his opening keynote. All of this effort – and most of the things discussed in this blog – come to fruition in Qt 5.12 LTS, with an expected release date at the end of November.

Qt is Everywhere

Training day @ QtWS18

Training day @ QtWS18

Another observation is that Qt is everywhere. Lars shared that it has now been downloaded five million times, a huge increase from three and a half million just two years ago. The more than 400 attendees at Qt World Summit Boston came from over 30 industries, representing academia, automotive, bioscience, defense, design, geo imaging, industrial automation, energy and utilities, and plenty more. All the training sessions and the talks were well attended, most to full capacity with eager attendees standing at the back.


Cross-platform Qt

Is there a reason for this upsurge in Qt interest? Multi-platform is now a critical strategy for nearly every product today. Since Qt provides comprehensive support for mobiles, desktops, and embedded, it has become a top choice for broad cross-platform UX development. It’s not enough to release a cool, innovative product – if it doesn’t link to your desktop or provide a mobile-compatible connection, it often can’t get off the ground.

Qt for Microcontrollers

Qt for Microcontrollers

While there weren’t too many new platforms announced that Qt doesn’t already run on, there was invigorated discussion and demoing at the low end of the embedded spectrum. Qt on Microcontrollers is a new initiative, trimming Qt down to run on smaller MMU-less chips like the Arm Cortex-M7. While still needing some optimization and trimming to get to the smallest configurations, Qt on Microcontrollers currently takes 3-10MB of RAM and 6-13MB of ROM/Flash, making it an attractive option to create one code base that can address a range of products regardless of form factor.

Broad Offering

Another reason behind Qt’s increasing prominence is the breadth of the Qt offering. From 3D through design to web, there are many new improvements and components that make Qt a safe choice for teams that need to continually expand their product’s capabilities. While there have been a host of new additions to the Qt portfolio, here are a few of the more notable ones.

Qt 3D Studio

In a programmer’s worldview, design is often a nice-to-have, yet not a must-have. I guarantee that perspective will change for anyone who had the good fortune of attending Beyond the UX Tipping Point, the keynote from Jared Spool, the so-called Maker of Awesome from Center Centre.

Kuesa preview

Kuesa preview

If you weren’t able to attend the conference, I recommend you watch Jared’s video. Once you do, you’ll be glad that the Qt Company just released Qt Design Studio to help build great interfaces. KDABian Mike Krus also gave a tantalizing preview of Kuesa, a new KDAB tool architected to marry the worlds of design and development. We’ve got some info on our Kuesa webpage (and a screenshot here) – keep an eye out for more to come soon.

Qt for Python

As an aficionado of the language, I’ve always found Python seriously missing UX support – there are solutions but none that are ideal. With the release of Qt for Python however, Python finally has a capable, regularly maintained UI tool with a huge community. This includes integration into the Qt Creator IDE; it will now go from a technical preview to officially supported in Qt 5.12 and beyond. I predict that this will soon make Qt the UI framework of choice for Python – happy news for the many folks who prefer C++ be neatly tucked under the covers.

WebAssembly

There are some clear benefits to running code within a browser recognized by cloud companies – lightweight (and often no) installation required, access everywhere to centralized, managed, and backed-up data stores, along with immediate and hassle-free updates. Those same benefits have now come to Qt with the support of WebAssembly. This lets you compile your Qt application (and with C++ too, not just QML/Qt Quick, mind you) into a WebAssembly compatible package to be deployed on a web server. This technology is now in tech preview, but it’s an active project that will be able to expand the reach of Qt into the cloud realm.

Building Bridges

Frances and Ann #color4cancer

Maybe all this flurry of Qt activity is also because it provides a common platform that can build a bridge between disparate disciplines like art, technology, and research. An example of this is nanoQuill, a collaboration between KDAB, the Qt Company, a biotech company (Quantitative Imaging Systems, or Qi), and Oregon Health & Science University, which crowd-sources electron microscope images of cancer cells for people to artfully color, thus #color4cancer.

Based on Qi’s vision of merging programming and computer graphics with microscopy, nanoQuill built an application that uses machine learning on these human-colored images to discern cell microstructures, which in turn helps us understand how tumour cells develop resistance to escape cancer-targeting drugs.

Final #color4cancer cell

Final #color4cancer cell

Currently, nanoQuill allows anyone to #color4cancer and post their artwork via the website gallery and a coloring book, nanoQuill: The Coloring Book of Life, published in December 2017; an app will follow soon. We had a large nanoQuill poster in our booth that people colored throughout the event, allowing Qt World Summit attendees to truthfully say they were not only learning about the latest Qt developments, they were helping fight cancer with art.

CNC Craftspeople

Shaper Origin in action

Shaper Origin in action

Another great merger of art and technology could be seen at our KDAB customer showcase from a company called Shaper Tools. If you thought you saw a table full of dominoes at the edge of our booth, we weren’t passing the time playing – those “dominoes” were actually optical registration tapes that allow Shaper’s hand-held CNC machine to perfectly cut complex shapes. Look at some of the amazing projects that woodworkers have created using the Shaper Origin, and you’ll see that advanced technology doesn’t have to replace master craftsmanship – it can enable it.

Summary

Whether you attended Qt World Summit 2018 to learn about the details of C++ and Qt or the industry trends in medical, automotive, and IoT, there was lots in Boston for everyone. If you care to share in the comments, we’d love to hear what your favourite session was and the best thing you learned.

The post Qt World Summit 2018 Boston appeared first on KDAB.

November 05, 2018

Chrys took up the role of coordinator, fixer and new master of KDE accessibility, which I think is just fantastic. We have been working on what he decided was most important, mostly chrys fixing issues to make things work with Plasma and screen readers. After getting Orca to read desktop icons, he spent quite some time improving the various start menus.

With so much fresh energy around I started poking at KWin, which was a bit scary, to be honest. It was fun to read code I hadn’t looked at before. In the end, after I spent a while working on a huge workaround, it turned out that we could enable the task switcher to work with relatively little added code. The main issue was that KWin really does not want to give focus to the task switcher. My first attempt was to write sub-classes of QAccessibleInterface for everything in KWin. That started to work, but during some debugging I realized that KWin was actually creating the regular accessible representations for its UI; it was just not properly announcing them to Orca. Thus I threw away my almost complete prototype. At least I verified that it’s possible to create an entire Qt UI for screen readers only, disconnected from the actual UI. Thanks to QAccessible::installFactory it is nowadays pretty easy to instantiate custom representations (subclasses of QAccessibleInterface); a sketch of the mechanism follows below. Once that was done, I started over and made the task switcher think it’s an active window without letting X11 or Wayland know. That way we get the right D-Bus messages sent to Orca when alt-tabbing through the windows. Thanks David, Martin, Roman and Vlad for your input :)
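
For readers unfamiliar with the mechanism, here is a rough sketch of such a factory (hypothetical names and a deliberately minimal interface, not the actual KWin code):

#include <QAccessible>
#include <QAccessibleObject>

class TabBoxAccessible : public QAccessibleObject
{
public:
    explicit TabBoxAccessible(QObject *obj) : QAccessibleObject(obj) {}
    QAccessibleInterface *parent() const override { return nullptr; }
    QAccessibleInterface *child(int) const override { return nullptr; }
    int childCount() const override { return 0; }
    int indexOfChild(const QAccessibleInterface *) const override { return -1; }
    QString text(QAccessible::Text t) const override
    { return t == QAccessible::Name ? QStringLiteral("Task Switcher") : QString(); }
    QAccessible::Role role() const override { return QAccessible::Window; }
    QAccessible::State state() const override
    {
        QAccessible::State s;
        s.active = true; // pretend to be the active window so the screen reader picks it up
        return s;
    }
};

static QAccessibleInterface *tabBoxFactory(const QString &classname, QObject *object)
{
    if (object && classname == QLatin1String("TabBoxHandler"))
        return new TabBoxAccessible(object);
    return nullptr; // let Qt's default factories handle everything else
}

// Installed once at startup:
// QAccessible::installFactory(tabBoxFactory);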

More work is needed on Plasma when it comes to keyboard usability – getting the focus onto the panel and the various notification areas. I’m sure some great ideas are already being worked on, but I can see this easily as a blocker going forward. Configuring the network, checking the battery status and other tasks are really important after all.

I wrote the basics for Kate to work with screen readers in 2014, and it seems like some improvements are needed when working with Orca; luckily we got some help from Joanie (who maintains Orca) to figure out what goes wrong when navigating by lines. Now it’s just a matter of verifying the findings and getting the issues sorted out.

If you have a bit of time and want to help, we need someone with good English skills to help clean up the wiki. Good written instructions are really helpful when getting blind users to try Plasma for the first time. We’d also be happy if more people joined in testing and improving things. Join the KDE Accessibility mailing list to join the fun.


Older blog entries


Planet KDE is made from the blogs of KDE's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet KDE with RSS, FOAF or OPML.