This is the story of how I started contributing to the KDE project a bit more than a month ago.
About a month ago, I found a task on KDE’s Phabricator instance about the deplorable state of the KDE UserBase wiki. The wiki contains a lot of screenshots dating back to the KDE 4 era, and some are even from the KDE 3 era. That’s a problem, because the wiki is an important part of the user experience and can be really useful for new users and experienced ones alike.
Luckily for us, even though Plasma and the KDE applications have changed a lot in the last few years, most of the changes are new features and UI/UX improvements, so most of the information is still up-to-date. That means most of the work is just updating screenshots. But up-to-date screenshots also matter, because when users see old screenshots, they may assume the instructions are outdated too.
So I started updating the screenshots one after the other. (Honestly, when I started I didn’t think it would take so long, not because the process was slow or difficult, but because of the sheer amount of outdated screenshots.)
But I also learned a lot about KDE doing this. For example, did you know that Blink (the Chrome web engine) is a fork of WebKit (the Safari web engine), and that WebKit is a fork of KHTML (the Konqueror web engine)? I also learned about the existence of lesser-known KDE apps, for example Kile (a LaTeX IDE), Calligra (an office suite), KFloppy (a floppy disk formatter), …
As a non-native English speaker, I found that updating screenshots and quickly checking whether the information is up-to-date is easier than I first thought. There aren’t many requirements: you only need a Phabricator account and the default Breeze theme installed. The Phabricator account is easy to create, and the default theme should already be installed.
Then, for each wiki entry, you only need to download the software, find all outdated screenshots in the entry, take a new screenshot for each old one, and upload the new ones.
For the icons, I quickly generated a PNG from the SVG file with the following command:
convert -density 1200 -resize 128x128 -background transparent /usr/share/icons/breeze/apps/48/okular.svg okular.png
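When several icons need converting, the same command can be wrapped in a small shell loop (a sketch; the icon names here are only examples and may not all exist in the Breeze icon set):

for icon in okular dolphin kate; do
    convert -density 1200 -resize 128x128 -background transparent "/usr/share/icons/breeze/apps/48/$icon.svg" "$icon.png"
done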
It’s not finished, there are still a lot of outdated screenshots in the wiki, but every day the amount decreases. :)
And you, dear reader, can also help. Like I said: this job doesn’t need any programming skills or perfect English, just a bit of motivation. If you need help, there are some instructions available to get started editing the wiki: Start Contributing, Markup help, Quick Start. You can also contact me: on the fediverse (@firstname.lastname@example.org) or on Reddit (/u/ognarb1).
Thanks to XYQuadrat for proofreading this blog post. :D
Hi, I’ve been asked to make a new release of libqaccessibilityclient, which seemed like a good idea. So here we go: https://download.kde.org/stable/libqaccessibilityclient/ – version 0.3.0 is now available. I’d like to say thanks to the KDE sysadmins for being super fast.
Now, if I weren’t involved with the accessibility project, I’d have no clue what this is about… so what is libqaccessibilityclient?
It’s a small library that helps to understand and use the accessibility information available on DBus. It could be used to write assistive applications such as screen readers. Right now my main purpose for it is to understand what’s going on, so I use it as a debugging helper. There are now two small helper applications. One has been there before: it shows a complete tree of accessibility objects, i.e. the representation of applications as screen readers see them. The second one is new; it just dumps the same tree on the command line. I used this to find out KWin’s state, since doing anything while pressing Alt+Tab is hard. I could run it on the command line with a sleep and then see what KWin reported while I pressed Alt+Tab.
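Something along these lines, where the binary name is only a placeholder for the new command-line dumper:

sleep 5 && ./tree-dump > kwin-alt-tab.txt   # placeholder name; press Alt+Tab during the sleep, inspect the file afterwards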
By the way, we’ve been organizing our work on a Phabricator board here; feel free to comment and help out with some of the tasks, especially when it comes to Plasma keyboard handling.
Editing videos for foss-gbg and foss-north has turned into something that I do on an almost monthly basis. I’ve tried a few workflows, but landed on using kdenlive and, when needed, Audacity. I’m not a very advanced audio person, so if kdenlive incorporated basic noise reduction and a compressor, I could stay within one tool.
Before I describe the actual process, I want to mention something about the hardware involved. There are so many things you can do when producing this type of content. However, every piece you add to the puzzle is another point of failure. The motto is KISS – Keep It Simple, Stupid. Hence, we use a single video camera with an integrated microphone. This is either an action cam or a JVC video camera. In most cases this just works. In some cases the person talking has a microphone, and then we try to place the camera close to a speaker. It has happened that we’ve recorded someone whispering right next to the camera…
As we don’t have a dedicated microphone for the speaker, we get an audio stream that includes the reaction of the audience. That is, in my opinion, a good thing: it captures the mood of the event. However, we also get quite a lot of background noise, which is bad. For this, I rely on this workflow from Rich Bowen. Basically, I extract the audio stream from the recording, massage it in Audacity, and then re-introduce it.
I’ve found it easier to cut the video prior to fixing the audio. This usually means finding the start and the end of the talk, but in some cases it is more complex, e.g. removing parts of the Q&A due to reasons, or cutting out a demo that makes no sense when watching the video.
Once in Audacity, I generally pick out a “silent” part of the recording to learn a noise profile. I then apply a noise reduction effect to the entire recording. This commonly produces a somewhat distorted sound (as if spoken into a can), but the voice of the speaker comes across nicely. After that, I usually apply a compressor effect to balance the loud and quiet parts better. I’ve noticed that speakers often start out with a loud voice and then soften it during the talk. For such cases, the compressor helps. It also helps balance the sound level during Q&A, where the audience might be quiet or loud compared to the speaker depending on the layout of the venue.
Once the video and audio are cut and filtered, we need some intro and exit screens. I create these using LibreOffice Impress. I have created a template for the title page with the title of the talk and the name of the speaker, followed by a slide with room for the sponsor logo. This has a white background as logos mix badly with the crazy yellow colour of foss-gbg. Finally there is an exit slide which just says foss-gbg.se. I then export the slides to pdf and use ImageMagick to create pngs from them. Since I’m lazy, I just produce huge pngs that I mogrify to the right size. The entire flow looks like this:
libreoffice --headless --convert-to pdf slides.odp
convert -density 300 -depth 8 -quality 85 slides.pdf slides.png
mogrify -resize 1920x1080 slides*.png
The very last step of the process is to overlap the intro and exit screens with the start and end of the video in a good way. I mix this with fading the audio in and out. The trickiest part is fading in, as it is nice to hear the first words of the speaker but you don’t want the noise from the audience. I’ve found that no matter what, you need to fade in the sound, even if the fade only lasts for a fraction of a second. Fading out is easy, as things usually end in applause.
Then it is all about clicking render, remembering to change the name of the output file and uploading to the foss-gbg YouTube channel.
Even though Plasma 5 is well stocked with application launchers, it never hurts to have more and more alternatives to customize our work environment. Some time ago I introduced you to Tiled Menu, a Windows-style application launcher; Simple Menu, a launcher which, as its name suggests, stands out for its simplicity; and UMenu, another simple launcher for those who like minimalism. Today it is time to introduce Minimal Menu, a minimalist launcher for KDE’s Plasma desktop.
We continue with Plasma 5’s customization options for application launchers. The traditional launcher, its reduced version, the full-screen application launcher, Tiled Menu (the Windows menu clone) and UMenu are now joined by Minimal Menu.
Its creator warns us that this is its first version and that it still lacks a few things, such as integration of the system icons, so he recommends using the User Switcher plasmoid to complement it.
And as I always say, if you like the plasmoid you can “pay” for it in many ways on the new KDE Store page, which I am sure the developer will appreciate: rate it positively, leave a comment on its page or make a donation. Helping the development of Free Software can also be done simply by saying thank you; it helps much more than you might imagine. Remember the Free Software Foundation’s I love Free Software Day 2017 campaign, which reminded us of this very simple way of collaborating with the great Free Software project and to which we dedicated an article on this blog.
More information: KDE Store
For those new to the blog, the word plasmoid may sound a bit strange, but it is simply the name given to widgets for KDE’s Plasma desktop.
In other words, plasmoids are just small applications which, placed on the desktop or on one of its panels, extend its functionality or simply decorate it.
Let’s have a bit more Usability & Productivity, shall we? The KDE Applications 18.12 release is right around the corner, and we got a lot of great improvements to some core KDE apps–some for that upcoming release, and some for the next one. And lots of other things too, of course!
Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!
If my efforts to perform, guide, and document this work seem useful and you’d like to see more of them, then consider becoming a patron on Patreon, LiberaPay, or PayPal. Also consider making a donation to the KDE e.V. foundation.
A common problem with bug reports received for KBibTeX is that the issue may already be fixed in the latest master in Git, or that I can provide a fix which gets committed to Git but then needs to be tested by the original bug reporter to verify that the issue has indeed been fixed for good.
For many distributions, no ‘Git builds’ are available (or the bug reporter does not know if they exist or how to get them installed), or the bug reporter does not know how to fetch the source code, compile it, and run KBibTeX, despite the (somewhat too technical) documentation.
Therefore, I wrote a Bash script called run-kbibtex.sh which performs all the necessary (well, most) steps to get from zero to a running KBibTeX. The nicest thing is that all files (the cloned Git repo, the compiled and installed KBibTeX) are placed inside /tmp, which means no root or sudo is required, nor are any permanent modifications made to the user’s system.
There is a README.txt file explaining the script in greater detail.
The only requirement is that the user has the usual KDE-related development tools and libraries installed. If a tool or library is missing, the script will abort, but the error message (most likely some output from CMake) can be searched for to learn which package to install. Once this is done, simply restart run-kbibtex.sh until all steps succeed.
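In practice that loop looks roughly like this (plain shell; nothing here beyond the script itself):

chmod +x run-kbibtex.sh
./run-kbibtex.sh    # aborts if a tool or library is missing; check the CMake output
# install the missing package, then run it again until all steps succeed
./run-kbibtex.sh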
I have tested the script with several Linux distributions and gave earlier versions to bug reporters for testing, so I am almost sure that it will work as promised. Please send suggestions or bug reports via email to me.
Today I want to introduce you to the new Raspberry Pi 3 Model A+, a model that joins the family of Raspberry Pi Foundation products and optimizes the possibilities of these small computers, which little by little have made a strong entry into GNU/Linux environments.
At home I have a couple of Raspberry Pis, a Raspberry Pi 2 Model B and a Raspberry Pi 3 Model B+; one I use basically as a media player, and the other also as a retro video game emulator. It is hardware that works wonderfully with GNU/Linux software and is doing a lot for its spread.
I could not be happier with their performance, stability and power consumption, so in the future I do not rule out continuing to automate my home with more devices from the raspberry brand.
That is why I am pleased to share with you the launch of a new model, specifically the Raspberry Pi 3 Model A+, which was announced on November 15th at a price of $25.
For those who know the previous versions, this new model is smaller in terms of connections and price than its big sister, the Raspberry Pi 3 B+, but optimized in terms of USB and temperature management.
It is therefore a more economical option, suited to environments where we do not need all the options of the Model B+. We can see the difference in the image below:
The specific features of the new Raspberry Pi 3 Model A+ are (source: Linux Adictos):
One of the applications most oriented towards KDE developers is KDevelop. Not for nothing is it an application that lets them create applications, so improvements made to it help improve the rest of the applications. For this reason I am very pleased to announce that KDevelop 5.3 has been released, loaded with new features.
For those who do not know it, KDevelop is an integrated development environment for GNU/Linux and other Unix systems, published under the GPL license, aimed at use under the KDE graphical environment, although it also works with other environments, such as GNOME. (Via: Wikipedia)
It had been a long time since I last talked about this application (about four years), even though I usually keep it in mind because it is one of the darlings of one of the KDE developers I am most in contact with.
Since that distant 2014, KDevelop has made the jump from now obsolete technologies (software technology advances very quickly) such as Qt 4 to KDE Frameworks 5 and Qt 5, and has released two versions in the 5.x branch, offering notable improvements version after version.
That is why, this November 2018, I am pleased to share with you that KDevelop 5.3 has been released, an update that offers us new features such as:
As we can see, KDevelop continues along the path of optimization with the firm goal of becoming indispensable for developers. Something that ultimately benefits all applications and, in this way, all users of free systems.
More information: KDevelop
I have been invited by Kisio Digital to present the work we have been doing around KDE Itinerary at the Paris Open Transport Meetup next week. The meetup is near Gare de Lyon and starts on Thursday at 19:00. Feel free to come by, I’m looking forward to discussing ideas on how to move KDE Itinerary forward.
While in Paris I’m also going to meet the team behind Navitia, an open source platform for processing and querying static and real-time ground transport data, in particular for local and nation-wide railway networks. That’s obviously very interesting for KDE Itinerary; as mentioned earlier, access to dynamic information such as real-time traffic data is probably the biggest challenge for us.
And yes, I of course made sure to pick travel options that don’t have enough test coverage in KDE Itinerary yet ;-)
Linux packaging was a nightmare for years. But recently serious contenders came up claiming to solve the challenge: first containers changed how code is deployed on servers for good. And now a solution for the desktop is within reach. Meet Flatpak!
To begin with, I should probably admit that over the years I identified packaging as a fundamental problem within the Linux ecosystem. It prevented wider adoption of Linux in general, but especially on the desktop. I was kind of obsessed with the topic.
The general arguments were/are:
In hindsight I must say the situation was not as bad as I thought on the server level: Linux in the data center grew and grew. Packaging simply did not matter that much because admins were used to problems deploying applications on servers anyway and they had the proper knowledge (and time) to tackle challenges.
Additionally, the recent rise of container technologies like Docker had a massive impact: it made deploying apps much easier and added other benefits like sandboxing, detailed access permissions, clearer responsibilities especially with dev and ops teams involved, and fewer dependency hell problems. Together with Kubernetes it seems as if an actual standard is evolving for how software is deployed on Linux servers.
To summarize, in the server ecosystem things never were as bad, and are quite good these days. Given that Azure serves more Linux servers than Windows servers there are reasons to believe that Linux is these days the dominant server platform and that Windows is more and more becoming a niche platform.
On the desktop side things were bad right from the start: distribution-specific packaging made compatibility a serious problem, and incompatible packaging formats (RPM and DEB) made it worse. One reason why no package format ever won was probably that neither offered real benefits over the other. Compared with today’s solutions for packaging software, RPM and DEB are missing major advantages like sandboxing and permission systems. They are hopelessly outdated, and I question whether they are suited for software packaging at all today.
There were attempts to solve the problem. There were attempts at standardization, for example via the LSB, but they never gained enough traction. There were also platform-agnostic packaging solutions, most notably Klik, which started 15 years ago and was later renamed to AppImage. But despite the good intentions and the ease of use, it never gained serious attention over the years.
But with the arrival of Docker things changed: people saw the benefits of container formats, and the technology for such approaches was widely available. So people gave the idea another try: Flatpak.
Flatpak is a “technology for building and distributing desktop applications on Linux”. It is an attempt to establish an application container format for Linux-based desktops and make applications easily consumable.
According to the history of Flatpak the initial idea goes way back. Real work started in 2014, and the first release was in 2015. It was developed initially in the ecosystem of Fedora and Red Hat, but soon got attention from other distributions as well.
Many features look somewhat similar to the typical features associated with container tools like Docker:
Additionally it features a sandboxing environment and a permissions system.
The most appealing feature for end users is that it makes packages simple to install and that many packages are available, because developers only have to build them once to support a huge range of distributions.
By using Flatpak the software version is also not tied to the distribution’s update cycle. Flatpak can also update all installed packages centrally.
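As a rough sketch of what this looks like from the user side (standard Flatpak commands; the application ID is just an example):

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.kde.okular    # install an application from Flathub
flatpak update                            # update all installed Flatpaks centrally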
One thing I like about Flatpak is that it was built with repositories (“shops”) baked right in. There is a large repository called flathub.org where developers can submit their applications to be found and consumed by users:
The interface is simple but reasonably well designed. Each application features screenshots and a summary. The apps themselves are grouped by categories. The ever-changing list of new & updated apps shows that the selection keeps growing. A list of the two dozen most popular apps is available as well.
I am a total fan of Open Source, but I do like the fact that there are multiple closed source apps listed in the store: it shows that the format can be used for such use cases, which is a sign of a healthy ecosystem. Also, there are quite a few games, which is always good.
Of course there is lots of room for improvement: at the time of writing there is no way to change or filter the sorting order of the lists. There is no popularity rating visible and no way to rate applications or leave comments.
Last but not least, there is currently little support from external vendors. While you find many closed source applications in Flathub, hardly any of them were provided by the software vendor. They were created by the community but are not affiliated with the vendors. To have a broader acceptance of Flatpak the support of software vendors is crucial, and this needs to be highlighted in the web page as well (“verified vendor” or similar).
As mentioned, Flatpak has repositories baked in, and it is well documented. It is easy to generate your own repository for your own flatpaks. This is especially appealing to projects or vendors who do not want to rely on a central market to host their applications.
While it is more or less common today to use a central market (Android, iOS, etc.), some still prefer to keep their code in their own hands. It sometimes makes it easier to provide testing and development versions. Other use cases are software that is developed and used only in-house, or mirroring existing repositories for security or offline reasons: such use cases require local hubs, and it is no problem at all to bring them up with Flatpak.
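A minimal sketch of such a self-hosted setup, assuming a Flatpak has already been built into build-dir (the repository name and URL are placeholders):

flatpak build-export my-repo build-dir                        # export the build into a local repository
# serve my-repo via any web server, then on the client side:
flatpak remote-add --no-gpg-verify my-repo https://example.com/my-repo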
Flatpak is currently supported on most distributions. Many of them have support built in right from the start; others, most notably Ubuntu, need to install some software first. But in general it is quite easy to get started – and once you do, there are hundreds of applications you can use.
Of course Flatpak is not the only solution out there. After all, this is the open source world we are talking about, so there must be other solutions.
Snapcraft is a way to “deliver and update your app on any Linux distribution – for desktop, cloud, and Internet of Things.” The concept and idea behind it is somewhat similar to Flatpak, with a few notable differences:
Some more technical differences lie in the way packages are built, how the sandbox works and so on, but we will not focus on those in this post.
The Snapcraft marketplace, snapcraft.io, provides lists of applications but is much more mature than Flathub: it has vendor testimonials, features verified accounts, multiple versions like beta or development can be picked from within the market, there are case stories, additional blog posts are listed for each app, there is integration with social accounts, and you can even see the distribution of users by country and Linux flavor.
And as you can see, Snapcraft is endorsed and supported by multiple companies today, which are listed on the web page and which maintain their applications in the market.
Flathub has a lot to learn until it reaches the same level of maturity. However, while I’d say that snapcraft.io is much more mature than Flathub, it also misses the possibility to rate packages, or to just list them by popularity. Am I the only one who wants that?
The main disadvantage I see is the monopoly. snapcraft.io is tightly controlled by a single company (not a foundation or similar). It is of course Canonical’s full right to do so, and the company and many others argue that this is not different from what Apple does with iOS. However, the Linux ecosystem is not the Apple ecosystem, and in the Linux ecosystem there are often strong opinions about monopolies, closed source solutions and related topics which might lead to acceptance problems in the long term.
Also, it is technically not possible to launch your own central server, for example for in-house development, for hosting a local mirror, to support offline environments, or for other reasons. To me this is particularly surprising given that Snapcraft specifically targets IoT devices, and I would run IoT devices in a closed network wherever I can – thus being unable to connect to snapcraft.io. The only solution I was able to identify was running an HTTP proxy, which is far from optimal.
Another slightly unusual feature of Snapcraft is that updates are installed automatically. Thanks to theo.9dor for the hint:
“The good news is that snaps are updated automatically in the background every day!” (https://tutorials.ubuntu.com/tutorial/basic-snap-usage#2)
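For reference, pending updates can also be inspected and triggered manually with the standard snap command line:

snap refresh --list     # show which installed snaps have updates pending
snap refresh            # apply them right away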
While in the end a development model with auto deployments, even dozens per day, is a worthwhile goal, I am not sure if everyone is there yet.
So while Snapcraft has a more mature marketplace, targets many more use cases and provides more packages to date, I do wonder how it will turn out in the long run, given that we are talking about the Linux ecosystem here. And while Canonical has quite some experience developing its own solutions outside the “rest” of the community, those attempts have seldom worked out.
I’ve already mentioned AppImage above, and I’ve written about it in the past when it was still called Klik. AppImage is a “way for upstream developers to provide native binaries for Linux”. The result is basically a single file that contains your entire application and which you can copy everywhere. It has existed for more than a dozen years now.
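Using one is as straightforward as it sounds (the file name is hypothetical):

chmod +x SomeApp.AppImage    # make the downloaded file executable
./SomeApp.AppImage           # run it directly, no installation required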
The thing that is probably most worth mentioning about it is that it never caught on. After all, it already provided many impressive features a long time ago and made it possible to install software across distributions. Many applications were also available as AppImages – and yet I never saw wider adoption. It seems to me that it only got traction recently because Snapcraft and Flatpak entered the market and kind of dragged it along with them.
I’d love to understand why that is the case, or have an answer to the “why”. I only have a few ideas, and those are just ideas, not explanations of why AppImage, in all those years, never managed to become the Docker of the Linux desktop.
Maybe one problem was that it never featured a proper store: today we know from multiple examples on multiple platforms that a store can mean the difference. A central place for the users to browse, get a first idea of the app, leave comments and rate the application. Docker has a central “store”, Android and iOS have one, Flatpak and Snapcraft have one. However, AppImage never put a focus on that, and I do wonder if this was a missed opportunity. And no, appimage.github.io/apps is not a store.
Another difference to the other tools is that AppImage always focused on open source tools. Don’t get me wrong, I appreciate that – but open source tools like digiKam were available on every distribution anyway. If AppImage had also focused on reaching out to closed source software vendors, together with marketing this aggressively, maybe things would have turned out differently. You do not only need to make software easily available to users, you also need to make available the software people want.
Last but not least, AppImage always tried to provide as many features as possible, while it might have benefited from focusing on a few and marketing them more strongly. As an example, AppImage advertises that it can run with and without sandboxing. However, sandboxing is a large part of the benefit of using such a solution to begin with. Another thing is integrated updates: there is a way to automatically update all AppImages on a system, but it is not built in. If both had been default rather than optional, things might have been different.
But again, these are just ideas, attempts to find explanations. I’d be happy if someone has better ideas.
There are some disadvantages with the Flatpak approach – or the Snapcraft one, or in general with any container approach. Most notably: libraries and dependencies.
The basic argument here is: all dependencies are kept in each package. This means:
Especially the last part is crucial: in case of a serious library security problem, the user has to rely on each and every package vendor to update the library in their package and release an updated version. With a dependency-based system this is usually not the case.
People often compare this problem to the Windows or Java world, where a similar situation exists. However, while the underlying problem is real and serious, with Flatpak there is at least a sandbox and a permission system, something which was not the case in former Windows versions.
A trade-off needs to be made between the added security of permissions and sandboxing on the one hand, and the risk of having outdated libraries in those packages on the other. That trade-off is not easily made.
This question might seem strange, given the needs I identified in the past and my obvious enthusiasm for the topic. However, these days more and more apps are created as web applications, and the importance of the desktop is shrinking. The dominant platforms for users these days are mobile phones and tablets anyway. I would even go so far as to say that in the future desktops will still be there, but mainly to launch a web browser.
But we are not there yet, and today there is still a need for easy consumption of software on Linux desktops. I would have hoped, though, to see this technology, and this much traction, distribution support and vendor support, 10 years ago.
Well – as I mentioned early on, I can get somewhat obsessed with this topic. And this much-too-long blog post shows it for sure.
But as a conclusion I say that the days of difficult-to-install software on Linux desktops are gone. I am not sure whether Snapcraft or Flatpak will “win” the race; we will have to see.
At the same time, we have to face that desktops in general are just not that important anymore. But until then, I am very happy that it has become so much easier for me to install certain pieces of software in up-to-date versions on my machine.
We are happy to announce the release of Qt Creator 4.8.0 Beta2!
This release includes the many fixes we have made since our first Beta release.
Additionally we upgraded the LLVM for the Clang code model to version 7.0, and our binary packages to the Qt 5.12 prerelease.
The open source version is available on the Qt download page, and you can find commercially licensed packages on the Qt Account Portal. Qt Creator 4.8 Beta2 is also available under Preview > Qt Creator 4.8.0-beta2 in the online installer. Please post issues in our bug tracker. You can also find us on IRC on #qt-creator on chat.freenode.net, and on the Qt Creator mailing list.
Hi! I had the opportunity to participate in QtCon Brasil 2018 as a speaker last weekend. It took place in São Paulo, a city I hadn’t visited for a long time. My talk was about the integration of Qt applications and computer vision, especially focused on the mobile environment with QtQuick.
These past couple of weekends were a blast! On the weekend of November 3 and 4, the first Maker Faire of Latin America took place in Rio de Janeiro, and I was able to give a talk about Atelier and the current status of our project. The event hosted more than 1,500 people on the first day.
With Qt for Python released, it’s time to look at the powerful capabilities of these two technologies. This article details one solopreneur’s experiences.
Back in 2016, I started developing a cross-platform file manager called fman. Its goal: To let you work with files more efficiently than Explorer on Windows or Finder on Mac. If you know Total Commander and Sublime Text, a combination of the two is what I am going for. Here’s what it looks like:
The biggest question, in the beginning, was which GUI framework to use. fman’s choice needs to tick the following boxes:
At the time, Electron was the most-hyped project. It has a lot of advantages: You can use proven web technologies. It runs on all platforms. Customizing the UI of your application is trivial by means of CSS. Overall, it’s a very powerful technology.
The big problem with Electron is performance. In particular, the startup time was too high for a file manager: On an admittedly old machine from 2010, simply launching Electron took five seconds.
After considering a few technologies, I settled on Qt. It’s cross-platform, has great performance and supports custom styles. What’s more, you can use it from Python, which makes me (at least) orders of magnitude more productive than the default C++.
Controlling Qt from Python is very easy. You simply install Python and use its package manager to fetch the Qt bindings. Gone are the days where you had to set up gigabytes of software, or even Qt. It literally takes two minutes. I wrote a Python Qt tutorial that shows the exact steps and sample code if you’re interested.
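A rough sketch of that setup, using the two sets of bindings discussed below (package names as published on PyPI; pick one):

pip install PyQt5       # the PyQt bindings
pip install PySide2     # Qt for Python, the officially supported bindings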
You can choose between two libraries for using Qt from Python:
From a code perspective, it makes no difference because the APIs are virtually identical. fman uses PyQt because it is more mature. But given that Qt for Python is now officially supported, I hope to switch to it eventually.
Once you’ve written an application with Python and Qt, you want to bring it into the hands of your users. This turns out to be surprisingly hard.
It starts with packaging. That is, turning your source code into a standalone executable. Special Python libraries exist for solving this task. But they never “just work”. You may have to manually add DLLs that are only required on some Windows versions. Or avoid shipping shared libraries that are incompatible with those already present on the users’ systems. Or add special handling for one of your dependencies. Etc.
Then come installers. On Windows, most users expect a file such as AppSetup.exe. On Mac, it’s usually App.dmg. Whereas on Linux, the most common formats are distribution-specific packages such as .pkg.tar.xz. Each of these technologies takes days to learn and set up. In total, I have spent weeks just creating fman installers for the various platforms.
By default, operating systems show a warning when a user downloads and runs your app:
To prevent this, you need to code sign your application. This creates an electronic signature that lets the OS verify that the app really comes from you and has not been tampered with. Code signing requires a certificate, which acts like a private encryption key.
Obtaining a certificate differs across platforms. It’s easiest on Linux, where everyone can generate their own (GPG) key. On Mac, you purchase a Developer Certificate from Apple. Windows is the most bureaucratic: Certificate vendors need to verify your identity, your address etc. The cheapest and easiest way I found was to obtain a Comodo certificate through a reseller, then have them certify my business – not me as an individual.
Once you have a certificate, code signing involves a few more subtleties: On Windows and Ubuntu, you want to ensure that SHA256 is used for maximum compatibility. On Linux, unattended builds can be tricky because of the need to inject the passphrase for the GPG key. Overall, however, it’s doable.
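As an illustration of the SHA256 point on Windows, the signing call boils down to something like this (flags from Microsoft’s signtool; the timestamp server and file name are placeholders to adapt):

signtool sign /fd SHA256 /td SHA256 /tr http://timestamp.example.com /a fman-setup.exe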
Like many modern applications, fman updates itself automatically. This enables a very rapid development cycle: I can release multiple new versions per day and get immediate feedback. Bugs are fixable in less than an hour. All users receive the fix immediately, so I don’t get support requests for outdated problems.
While automatic updates are a blessing, they have also been very expensive to set up. The tasks we have encountered so far – packaging, installers and code signing – took a month or two. Automatic updates alone required the same investment again.
The most difficult platform for automatic updates was Windows. fman uses a Google technology called Omaha, which is for instance used to update Chrome. I wrote a tutorial that explains how it works.
Automatic updates on Mac were much easier thanks to the Sparkle update framework. The main challenge was to interface with it via Objective-C from Python. Here too I wrote an article that shows you how to do it.
Linux was “cleanest” for automatic updates. This is because modern distributions all have a built-in package manager such as pacman. The steps for enabling automatic updates typically came down to hosting the .deb installers (or their counterparts for other package managers) and the metafiles that describe them in a repository the package manager can pull from.
Because fman supports four Linux distributions, this still required learning quite a few tools. But once you’ve done it for two package managers, the others are pretty similar.
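From the user’s point of view, updates then arrive through the regular package manager. A sketch for a Debian-based system (the repository URL is a placeholder, not fman’s actual one):

echo "deb https://example.com/apt stable main" | sudo tee /etc/apt/sources.list.d/fman.list
sudo apt-get update && sudo apt-get install fman   # later versions come in with normal system updates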
There aren’t just different operating systems, there are also different versions of each OS. To support them, you must package your application on the earliest version. For instance, if you want your app to run on macOS 10.10+, then you need to build it on 10.10, not 10.14. Virtual machines are very useful for this. I show fman’s setup in a video.
So how does fman cope with all this complexity? The answer is simple: a Python-based build script. It does everything from creating standalone executables to uploading to an update server. I feel strongly that releasing a desktop app should not take months, so I am making it available as open source. Check it out if you are interested!
Python and Qt are great for writing desktop apps: Qt gives you speed, cross-platform support and stylability while Python makes you more productive. On the other hand, deployment is hard: Packaging, code signing, installers, and automatic updates require months of work. A new open source library called fbs aims to bring this down to minutes.
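A rough sketch of the workflow fbs aims for, based on its public tutorial (exact command names may differ between versions):

pip install fbs PyQt5
fbs startproject     # scaffold a minimal application
fbs run              # run it from source
fbs freeze           # turn it into a standalone executable
fbs installer        # create a platform-specific installer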
Thank you to everyone who participated last Bug Day! We had a turnout of about six people, who worked through about half of the existing REPORTED (unconfirmed) Konsole bugs. Lots of good discussion occurred on #kde-bugs as well, thank you for joining the channel and being part of the team!
We will be holding a Bug Day on November 17th, 2018, focusing on Okular. Join at any time, the event will be occurring all day long!
This is a great opportunity for anyone, especially non-developers, to get involved!
If you need any help, contact me!
We promised Digital Atelier would be available in the Krita Shop after the successful finish of the fundraiser. A bit later than expected, we’ve updated the shop with the Digital Atelier brush preset bundle and tutorial download:
There are fifty great brush presets, more than thirty brush tips, twenty paper textures and almost two hours of in-depth video tutorial, walking you through the process of creating new brush presets.
Digital Atelier sells for 39,95 euros, ex VAT.
And we’ve also created a new USB card with the newest stable version of Krita for all OSes. It includes the Comics with Krita, Muses, Secrets of Krita and Animate with Krita tutorial packs.
KDevelop 5.3 released
A little less than a year after the release of KDevelop 5.2 and a little more than 20 years after KDevelop's first official release, we are happy to announce the availability of KDevelop 5.3 today. Below is a summary of the significant changes.
We plan to do a 5.3.1 stabilization release soon, should any major issues show up.
With 5.1, KDevelop got a new menu entry Analyzer, which features a set of actions to work with analyzer-like plugins. For 5.2 the runtime analyzer Heaptrack and the static analyzer Cppcheck were added. During the 5.3 development phase, we added another analyzer plugin which is now shipped to you out of the box:
Clazy is a clang analyzer plugin specialized in Qt-using code, and can now also be run from within KDevelop by default, showing issues inline.
The KDevelop plugin for Clang-Tidy support had been developed and released independently until after the feature freeze of KDevelop 5.3. It will be released as part of KDevelop starting with version 5.4.
With all the analyzer integration available, KDevelop’s own codebase has been subjected to it as well. Lots of code has been optimized and, where the analyzers indicated, stabilized. At the same time, modernization to the newer standards of C++ and Qt 5 has continued with the analyzers’ aid, so that by now it can only be seen in the copyright headers that KDevelop was founded in 1998.
A lot of work was done on stabilizing and improving our clang-based C++ language support. Notable fixes include:
Thanks to Heinz Wiesinger we've got many improvements for the PHP language support.
The developers have been concentrating on fixing bugs, and those fixes have already been incorporated into the 5.2 series.
There are a couple of improved features in 5.3:
KDevelop is written with portability in mind and relies on Qt to solve a big part of that, so next to the original "unixoid" platforms like Linux distributions and the BSD derivatives, other platforms with Qt coverage are within good reach as well, if people do the final pushing. So far Microsoft Windows has been another supported platform, and there is some experimental, but maintainer-seeking, support for macOS. Some porters of Haiku, the BeOS-inspired open source operating system, have done porting work as well, building on the work done for other Qt-based software. For KDevelop 5.3 the small patch still needed has now been applied to KDevelop itself, so the Haiku ports recipe for KDevelop no longer needs it.
KDevelop is already in the HaikuDepot, currently still at version 5.2.2. It will be updated to 5.3.0 once the release has happened.
The Clazy support mentioned above has a recommended optional runtime dependency: clazy, more specifically the clazy-standalone binary. Currently clazy is only packaged and made available to users by a few distributions, e.g. Arch Linux, openSUSE Tumbleweed or OpenMandriva.
If your distribution has not yet looked into packaging clazy, please consider doing so. Next to enabling the Clazy support feature in KDevelop, it allows developers to easily fix and optimize their Qt-based software, resulting in less buggy and more performant software for you.
You can find more information in the release announcement of the currently latest clazy release, 1.4.
Together with the source code, we again provide a prebuilt one-file-executable for 64-bit Linux (AppImage), as well as binary installers for 32- and 64-bit Microsoft Windows. You can find them on our download page.
The 5.3.0 source code and signatures can be downloaded from here.
Should you find any issues in KDevelop 5.3, please let us know in the bug tracker.
Elisa (product page, release announcements blog) is a music player designed for excellent integration with the KDE Plasma desktop (but of course it runs everywhere, including some non-Free platforms). I had used it a few times, but had not gotten around to packaging it. So today I threw together a FreeBSD port of Elisa, and you’ll be able to install it from official packages whenever the package cluster gets around to it.
I say “threw together” because it took me only a half hour (most of that was just building it a half-dozen times in different ways). The Elisa code itself is very straightforward and nice. It doesn’t even spit out a single warning with Clang 6 on FreeBSD. That’s an indicator (for me, anyway) of good code. So if you want more music on FreeBSD, it’s there! (And the multimedia controls work from the SDDM lock screen, too)
Last month French publisher D-Booker released the 2nd edition of Timothée Giet’s book “Dessin et peinture numérique avec Krita”.
The first edition was written for Krita 2.9.11, almost three years ago. A lot of things have changed since then! So Timothée has completely updated this new edition for Krita version 4.1. There are also a number of notes about the new features in Krita 4.
Moreover, D-Booker worked again on updating and improving the French translation of Krita! Thanks again to D-Booker for their contribution.
You can order this book directly from the publisher’s website. There is both a digital edition (pdf or epub) as well as a paper edition.
I’m Brigette, but I mainly go by my online handle of HoldXtoRevive. I’m from the UK and mostly known as a fanartist.
I have had a few commissions but outside that I would call myself a hobbyist. I would love to work professionally at some point.
I do semi-realistic sci-fi art. Most recently I have been drawing character portraits inspired by the Art Nouveau style; the majority of it has been fanart of a few different sci-fi games.
It’s hard to list them all really. Top of the list would be my other half, RedSkittlez, who is an amazing concept and character artist, also my friends Blazbaros, SilverBones and many more that would cause this to go on for too long.
Outside of my friends I would say Charles Walton, Pete Mohrbacher and Valentina Remenar to name a few.
About 4 years ago I downloaded GIMP as I wanted to get back into art after not drawing for about 15 years. I got a simple drawing tablet soon after and things just progressed from there.
The flexibility and practicality of it. Whilst I would love to try traditional media, acquiring, maintaining and storing supplies is not easy for me.
My partner was looking at alternatives to Photoshop and came across it via a YouTube video. He recommended that I try it out.
How clean the UI is, how easy all of the tools were to find, and the fun I had messing with the brushes.
The fact it was really easy to get to grips with, yet I can tell there is more I can get from it. Also the autosave.
I would like a brightness/contrast slider alongside the curve for ease of use. It would also be nice if the adjustment windows would not close when the autosave kicks in.
I have not used many programs before I came across Krita. But the thing that jumped out at me was the ease of use, and it had everything I wanted in an art program. I know that if I want to try animation I do not need to go and find another program.
That is hard to say, but in a pinch I would say the one titled “Saladin’s White Wolf”. I was really happy with how the background came out; it was also the one picked out and promoted by Bungie on their Twitter.
For the most part I use a multiply layer over flats for shading. My main brushes are just the basic tip (gaussian), basic wet soft and the soft smudge brush.
I’m bad with words but I want to show appreciation to the Krita crew for making this wonderful program and to everyone who has supported and encouraged me.
Over the last few weeks I published some blogs on the Visual C++ blog about Clang AST Matchers. The series can be found here:
I am not aware of any similar series existing which covers creation of clang-tidy checks, and use of clang-query to inspect the Clang AST and assist in the construction of AST Matcher expressions. I hope the series is useful to anyone attempting to write clang-tidy checks. Several people have reported to me that they have previously tried and failed to create clang-tidy extensions, due to various issues, including lack of information tying it all together.
Other issues with clang-tidy include the fact that it relies on the “mental model” a compiler has of C++ source code, which might differ from the “mental model” of regular C++ developers. The compiler needs to have a very exact representation of the code, and needs to have a consistent design for the class hierarchy representing each standard-required feature. This leads to many classes and class hierarchies, and a difficulty in discovering what is relevant to a particular problem to be solved.
I noted several problems in those blog posts, namely:
Last week at code::dive in Wroclaw, I demonstrated tooling solutions to all of these problems. I look forward to video of that talk (and videos from the rest of the conference!) becoming available.
Meanwhile, I’ll publish some blog posts here showing the same new features in clang-query and clang-tidy.
Recent work by the Compiler Explorer maintainers adds the possibility to use source code tooling with the website. Compiler Explorer now contains new entries in a menu to enable a clang-tidy pane.
I demonstrated the use of Compiler Explorer with the clang-query tool at the code::dive conference, building upon the recent work by the Compiler Explorer developers. This feature will land upstream in time, but can be used with my own AWS instance for now. It is suitable for exploring the effect that changing source code has on match results and, orthogonally, the effect that changing the AST Matcher has on the match results. It is also accessible via cqe.steveire.com.
It is important to remember that Compiler Explorer is running clang-query in script mode, so it can process multiple let and match calls for example. The new command set print-matcher true helps distinguish the output from the matcher which causes the output. The help command is also available with listing of the new features.
The issue of clang-query not printing both diagnostic information and AST information at the same time means that users of the tool need to alternate between writing
set output diag
set output dump
to access the different content. Recently, I committed a change to make it possible to enable both dump and diag output from clang-query at the same time. New commands follow the same structure as the set output command:
enable output dump
disable output dump
The set output <feature> command remains as an “exclusive” setting to enable only one output feature and disable all others.
This command design also enables the possibility of extending the features which clang-query can output. Up to now, developers of clang-tidy extensions had to inspect the AST corresponding to their source code using clang-query and then use that understanding of the AST to create an AST Matcher expression.
That mapping to and from the AST “mental model” is not necessary. New features I am in the process of upstreaming to clang-query enable the output of AST Matchers which may be used with existing bound AST nodes. The command
enable output matcher
causes clang-query to print out all matcher expressions which can be combined with the bound node. This cuts out the requirement to dump the AST in such cases.
Inspecting the AST is still useful as a technique to discover possible AST Matchers and how they correspond to source code. For example if the functionDecl() matcher is already known and understood, it can be dumped to see that function calls are represented by the CallExpr in the Clang AST. Using the callExpr() AST Matcher and dumping possible matchers to use with it leads to the discovery that callee(functionDecl()) can be used to determine particulars of the function being called. Such discoveries are not possible by only reading AST output of clang-query.
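A sketch of what such a session looks like in clang-query (the bound-node name "fn" is arbitrary; output omitted):

set print-matcher true
enable output matcher
match callExpr(callee(functionDecl().bind("fn")))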
The other important discovery space in creation of clang-tidy extensions is that of Source Locations and Source Ranges. Developers creating extensions must currently rely on the documentation of the Clang AST to discover available source locations which might be relevant. Usually though, developers have the opposite problem. They have source code, and they want to know how to access a source location from the AST node which corresponds semantically to that line and column in the source.
It is important to make use of a semantically relevant source location in order to build reliable tools which refactor at scale and without human intervention. For example, a cursory inspection of the locations available from a FunctionDecl AST node might lead to the belief that the return type is available at the getBeginLoc() of the node.
However, this is immediately challenged by the C++11 trailing return type feature, where the actual return type is located at the end. For a semantically correct location, you must currently use
It should be possible to use getReturnTypeSourceRange(), but a bug in clang prevents that as it does not appreciate the trailing return types feature.
Once again, my new output feature of clang-query presents a solution to this discovery problem. The command
enable output srcloc
causes clang-query to output the source locations by accessor and caret corresponding to the source code for each of the bound nodes. By inspecting that output, developers of clang-tidy extensions can discover the correct expression (usually via the clang::TypeLoc hierarchy) corresponding to the source code location they are interested in refactoring.
I have made many more modifications to clang-query which I am in the process of upstreaming. My Compiler Explorer instance is listed as the ‘clang-query-future’ tool, while the clang-query-trunk tool runs the current trunk version of clang-query. Both can be enabled for side-by-side comparison of the future clang-query with the existing one.
You have probably read a lot about Akademy 2018 recently, and how great it was.
For me it was a great experience too and this year I met a lot of KDE people, both old and new. This is always nice.
I arrived on Thursday so I had one day to set everything up and had a little bit of time to get to know the city.
On Friday evening I enjoyed the "Welcoming evening", but I was very surprised when Volker told me that I would be on stage the next day, talking about privacy.
He told me that someone should have informed me several days before. The scheduled speaker, Sebastian, couldn't make it to Akademy.
That was really a pity. If I had known earlier, I would have prepared a proper presentation, as I think I have a lot to say on the topic. I care a lot about good encryption support in KDE PIM and I have also been busy with Tails, a distribution strongly focused on privacy and anonymity.
So the next day I rambled a bit about encrypting mail headers (also known as Memory Hole), but it wasn't a great talk. It was also very stressful, as we started with a lot of delay: setting up the audio took much longer than expected, so all talks started late. As I was in such a hurry, I forgot a lot of what I wanted to say and failed to draft a bigger picture for the privacy goal.
To compensate, let me say it here. Privacy starts with encrypting everything, beginning with the content of the connection. This is mostly done via TLS. Here KDE is lacking some bits, as we cannot display the encryption methods, so we are sometimes forced to downgrade to old and insecure encryption methods without any visual feedback for the user. But at least we support TLS everywhere! Unfortunately, metadata is often enough to track where people are. To hide this information, we need TOR/VPN support within the applications. For most applications you can use the TOR network by running them through torsocks, e.g.:
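torsocks <application>    # substitute the application you want to route through TOR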
but this is not really user-friendly.
It also has the disadvantage that you can't see whether DNS requests are transmitted via the TOR tunnel. And then there are daemons, which you don't normally run from the command line. Furthermore, GnuPG, for example, has its own option to enable TOR support and does not like being run through torsocks.
We need more control over the TLS certificates being used and a way to display the TLS parameters, like accepted ciphers, protocol versions, etc. At the moment KDE only asks you about unknown certificates, but you can't view or control the other important details of a TLS connection to harden your system. Sure, a regular user can't decide whether something is good or not, and a user may need to access some rotten company site anyway. But we can provide a rating, like SSL Labs does, to show that a TLS connection is not good.
Another topic is having good, user-friendly support for TOR/VPN in individual applications and in KDE as a whole, and making sure that everything within these applications really uses the tunnel. I am thinking of use cases like: I want TOR for everything except this one application. After we have good solutions for those, we can focus on all the different applications. I don't yet have a good overview of what each application should additionally do with regard to the privacy goal.
As I'm busy within KDE PIM I see some things there:
To move forward in reaching the privacy goal, I volunteered to organize a Sprint together with Louise Galathea. It will hopefully take place next Spring.
But back to Akademy: two days of many talks, which was pretty overwhelming, in a good way. I really enjoyed the mix of non-technical and technical talks. My highlight was a talk about Kontact by Markus Feilner. It is interesting to see how users master issues around Kontact and what solutions they find. The talk would have benefited from Markus talking to KDE PIM beforehand, as some of his suggestions weren't correct. But it was nice to see that there are happy users who love all the bells and whistles of Kontact. In the days after the talk I encountered another user who gave Kontact a try after years, because of that talk.
The BoF sessions started on Monday. I began the day by joining the Promo BoF to get some insight into how Promo works and how KDE PIM can get better promotion to attract new contributors. Together we created a rough plan to focus first on the new website and then create several blog posts to promote KDE PIM. Internally we started to create several junior jobs and are discussing who can mentor new people when they come along. This is still ongoing, and hopefully new people will join the new spirit of KDE PIM and won't be overwhelmed by the large number of repositories.
After the AGM I joined the Distros BoF and was impressed by how many distros have a KDE focus. We started to talk about issues regarding KDE and downstream. Especially when distros are time-based, they need a lot of manpower to check whether bugs have already been fixed upstream or whether it is a downstream bug. The fast release cycle of the different KDE bundles (Plasma, Frameworks & Applications) makes it even more difficult to follow. Maybe this BoF helps the distros talk more directly about their issues on the email@example.com mailing list, so solutions can be found for common issues.
The next day I attended the KConfig BoF and talked about issues regarding config migration and how to store config. Another very common issue is that you need to get updates of config values in your application. KDE PIM has some examples of bad decisions regarding config handling, as you can configure a lot of stuff and everything is separated into several files. But the separation was not well designed from the beginning - an issue that has grown over the years. It is also written down as a task to do cleanups and make the configs easier to consume, because a clearer separation also helps users who want to move their KDE PIM setup from one computer to another.
In the afternoon, the KDE PIM BoF took place, where we followed up on what has been going on since the last Akademy and what new things we want to do. We discussed a lot how we can improve the onboarding for KDE PIM.
As we want to make it easier for other applications to use PIM-related data, we have a long-standing goal to move individual parts of KDE PIM to Frameworks. For the next KDE Applications release, 18.12, we want to push syndication into Frameworks. If we had more manpower, we would move these things to Frameworks faster. If you are interested, or are currently using some parts of KDE PIM within your application, it is a good idea to help us get things cleaned up and moved to Frameworks. A bigger audience can then consume PIM data more easily.
In the evening I joined the LGBT & Queer dinner, and it was nice to cook together and chat.
The next days I mostly sat down and tried to get some smaller things fixed, like the issue with the suspend icon in the logout applet. It was very good to sit next to Kai-Uwe, so he could see the issue and propose workarounds directly. In the end he found the hard-coded color in the theme, and the issue will be solved in the next release.
Since there were many open discussions, I did not manage to write a line of code during these days - I was busy talking. Additionally I wanted to see the newest KDE PIM within Debian, so this also took some time.
On Friday I took a train back to Germany. As Volker is always interested in live delay data for improving KItinerary, he wished me some delays. And I had plenty - I arrived in Berlin 12 hours later than expected. I had to sleep over in Dresden, as the last train from Prague towards Berlin leaves at 18:07. Since I missed that one, I had to take a local train to Dresden, and from there no train to Berlin was left - and it wasn't even really late, just midnight. I never thought Dresden was so badly connected to the rest of Germany...
But it was nice, as I stayed at a friend's place, a friend I wouldn't have met otherwise. As for improving KItinerary, it was not a success: I only got one update from Deutsche Bahn, at 23:25, telling me that the train from Leipzig to Berlin would use another track. The more useful piece of information - that I would never reach that train - was missing.
Week 44 in Usability & Productivity is coming right up! This week was a bit lighter in terms of the number of bullet points, but we got some really great new features, and there’s a lot of cool stuff in progress that I hope to be able to blog about next week! In the meantime, take a look at this week’s progress:
Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!
If my efforts to perform, guide, and document this work seem useful and you’d like to see more of them, then consider becoming a patron on Patreon, LiberaPay, or PayPal. Also consider making a donation to the KDE e.V. foundation.
C++ comparison operators are usually fairly straightforward to implement. Writing them by hand can however be quite error prone if there are many member variables to consider. Missing a single one of them will still compile and mostly work fine, apart from some hard to debug corner cases, such as misbehaving or crashing algorithms and containers, or data loss. Can we do better?
In KItinerary we have a number of classes representing schema.org types. Those are fairly straightforward value types with a number of properties, to be consumed both statically (by C++ code) and dynamically (by QML, Grantlee and JSON-LD de/serialization).
As implementing getters, setters, Q_PROPERTY statements, member variables, etc. for a hundred or so properties by hand would result in an unreasonable amount of boilerplate code, this is all done by a macro.
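To give an idea, here is a minimal sketch of what such a property macro could look like (hypothetical names; the actual KItinerary macro does more than this):

// Hypothetical sketch: one macro expands to the Q_PROPERTY declaration,
// the getter, the setter and the member variable.
#include <QObject>
#include <QString>

#define SKETCH_PROPERTY(Type, name, SetterName) \
    Q_PROPERTY(Type name READ name WRITE set##SetterName STORED true) \
public: \
    Type name() const { return m_##name; } \
    void set##SetterName(const Type &value) { m_##name = value; } \
private: \
    Type m_##name;

class Place
{
    Q_GADGET
    SKETCH_PROPERTY(QString, name, Name)
    SKETCH_PROPERTY(QString, streetAddress, StreetAddress)
    // ... and so on for the remaining properties
};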
So far so good, but now we'd like to add comparison operators for those types. Specifically, we needed equality comparison for optimizing away some memory allocations in case of non-changing write operations (similar to the common pattern of not emitting change signals in setter methods when nothing changed). But the thoughts below are of course also applicable to any other comparison function, not just equality.
Ideally we'd find an alternative to writing the comparison functions by hand that would either be impossible to get wrong, or would at least fail loudly (e.g. with a compiler error) rather than silently.
One idea would be to implement the comparison functions entirely generically by leveraging the property iteration support of QMetaObject. That is, we iterate over all properties with the STORED flag and compare their values; a sketch of this follows below. This gets the job done, but has two drawbacks:
- The comparison happens on QVariant values, which means we have to register comparison operators for all custom types with the meta type system. That might actually be nice to have anyway, but it is limited to less-than and equal.
- Every property access goes through QVariant, which in some situations can have a relevant performance impact, in particular when using types that are too large for inline storage inside QVariant.
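A sketch of this generic approach could look like the following (assuming Q_GADGET-based value types; the function name is made up):

#include <QMetaObject>
#include <QMetaProperty>
#include <QVariant>

// Compare two gadgets by iterating over their STORED properties.
// Every value is read into a QVariant, with the drawbacks described above.
template <typename T>
bool metaObjectEquals(const T &lhs, const T &rhs)
{
    const QMetaObject *mo = &T::staticMetaObject;
    for (int i = 0; i < mo->propertyCount(); ++i) {
        const QMetaProperty prop = mo->property(i);
        if (!prop.isStored())
            continue;
        // Needs an equality comparator registered with the meta type system
        // for custom property types (Qt 5: QMetaType::registerEqualsComparator).
        if (prop.readOnGadget(&lhs) != prop.readOnGadget(&rhs))
            return false;
    }
    return true;
}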
Another idea could be to use more elaborate preprocessor constructs that allow for iteration over all properties. The Boost.Preprocessor library has the building blocks for this. From experience in an old pre-C++11 project that attempted to catch SQL errors at compile time, this works, but it doesn't lead to a nice syntax nor to easily maintainable code.
The solution I ended up using is inspired by Woboq's Verdigris. The basic idea is that each property macro generates only a part of the comparison function: the comparison for its own property, plus a call to the comparison function for the previous property. The chaining is done by overloading on a template type that essentially describes a numerical value by inheritance. A little constexpr helper function allows us to determine the index of a property at compile time. Olivier describes this in more detail in his blog post about Verdigris implementation details.
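For illustration, here is a stripped-down sketch of the chaining (hypothetical names, with the property indices written out by hand; in the real code the indices come from the constexpr helper and the overloads are generated by the property macros):

#include <QString>

// Tag<N> inherits Tag<N - 1>, so a Tag<N> argument can also bind to overloads
// taking a smaller index; overload resolution prefers the largest index that
// actually exists.
template <int N> struct Tag : Tag<N - 1> {};
template <> struct Tag<0> {};

class Place
{
public:
    bool operator==(const Place &other) const
    {
        // Start the chain from a fixed upper bound.
        return compareProperty(other, Tag<8>());
    }

private:
    // End of the chain: nothing left to compare.
    bool compareProperty(const Place &, Tag<0>) const { return true; }

    // One overload per property: compare "its" member, then delegate to the
    // overload for the previous property.
    bool compareProperty(const Place &other, Tag<1>) const
    {
        return m_name == other.m_name && compareProperty(other, Tag<0>());
    }
    bool compareProperty(const Place &other, Tag<2>) const
    {
        return m_streetAddress == other.m_streetAddress
            && compareProperty(other, Tag<1>());
    }

    QString m_name;
    QString m_streetAddress;
};

In the macro-generated version each overload lives right next to its property declaration, so forgetting a property in the comparison is no longer possible.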
The resulting code for KItinerary can be found here. It's worth noting that while this might look inefficient due to the many function calls, it is all inline code in a single translation unit, so in an optimized build it ends up essentially the same as the hand-written comparison function would.
Would it make sense to have C++ support something like bool operator==(const T&) const = default, that is, let the compiler generate the implementation for us, as it can for a number of other member functions? Proposals for such a language extension exist.
There's a bigger conceptual problem though, one you run into immediately once the comparison operator is implemented: the semantics of comparing things. Here are a few examples:
NAN does not compare equal to itself, which isn't what KItinerary needed either, as NAN is its indicator for "value not set".
QString: the default equality comparison doesn't distinguish between null and empty strings. That's probably very often what you'd expect, unless you put special semantics on that distinction.
QDateTime instances compare equal if they refer to the same point in time, not if they represent exactly the same information (e.g. a time specified with a UTC offset vs. a full IANA timezone). For KItinerary timezones are a crucial piece of information, so the default semantics don't cut it there (a sketch of a stricter comparison follows below).
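For the QDateTime case, a stricter comparison could look like this hypothetical helper (not KItinerary's actual code):

#include <QDateTime>

// Consider two date/times equal only if they are the same point in time and
// also carry the same timezone information.
bool equalIncludingTimeZone(const QDateTime &lhs, const QDateTime &rhs)
{
    return lhs == rhs
        && lhs.timeSpec() == rhs.timeSpec()
        && lhs.timeZone() == rhs.timeZone();
}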
An all-or-nothing approach for compiler-generated comparison operator implementations means it’s not a viable option in case one needs more control over the semantics. That of course does not mean it is useless, but it does mean the alternative implementation techniques remain valid either way.
The KDE Applications 18.12 release branches have been created; make sure you commit anything you want to end up in the 18.12 release to them.
We're already past the dependency freeze.
The Freeze and Beta is this Thursday, 15 November.
More interesting dates
November 29: KDE Applications 18.12 RC (18.11.90) Tagging and Release
December 6: KDE Applications 18.12 Tagging
December 13: KDE Applications 18.12 Release
The weekend of 3 and 4 November, Dave and I went to staff the KDE booth at Freenode#live in Bristol. I had never been in that corner of England before; it turns out to have hills, and a river, and tides. Often an event brings me to a city, and then out, without my seeing much of it. This time I traveled in early and left late the day after the event, so I had some time to wander around, and it was quite worthwhile.
Turns out there is quite a lot of cider available, and the barman gave me an extensive education on the history of cider and a bit on apple cultivation when I asked about it. Sitting down with a Slimbook and a pint can be quite productive; I got some Calamares fixes done before the conference.
We (as in the KDE community) have invested in getting booth materials ready for this kind of event, so we quickly had Konqui peeking at visitors and Plasma running on a range of devices (no Power9 this week, though). Dave had 3D-printed a KDE logo that we stuck to the top of the monitor, and we had a pile of leaflets and stickers (KDE, neon, Krita, GCompris, and some Wiki To Learn) to hand out.
Next to our stand was MineTest, with a demo, and they sat hacking on the code of the game for most of the weekend. On the other side, Freenode themselves were handing out IRC.horse stickers and other goodies.
A conference booth isn't just about giving away goodies, though, and we spent the weekend, 9-to-5, talking with people about the KDE community and its software products. Plasma could be seen - and played with - on the machines we had at hand, and we did some small application demos. As usual, people told us they used i3 - and our response, as usual, was that Free Software on the desktop is better than non-free software. Some people had last used KDE in 2010 or so, so we could show how the different parts now operate independently and more flexibly - and how Plasma is now pretty darn lightweight.
Here in the photo is Dave extolling the virtues of something. (I’ve edited the photo to make some people in the background un-recognizable by pasting Dave’s eyeball over their faces — I forgot to ask permission to post)
.. and a conference isn't just about the booths and vendor stands, either. It's about the talks. Here's one I sat in on, about business models and how licensing affects the available models (in particular with reference to recent license changes in some projects). The KDE community is interesting because it's not one-community-one-company like a lot of what I see. We have a collection of small and medium companies building on, and building with, the KDE community's software products. I think that's healthy and generally happy. No licensing gotchas for us when using Qt under the LGPL version 2.1, with KDE's Frameworks also under the LGPL.
Two talks I would like to single out are those by Leslie Hawthorn and VM Brasseur (those are YouTube links). Neither is a technical talk; they are social talks about what we (as Free Software contributors) do and why, the social issues we all face - and how to bring Software Freedom to more people.
A concept I didn’t know existed was platform shaming. See above: there’s lots of ways to use Free Software from the KDE community, in combination with i3, or on Windows, and on Free Software operating systems. Those are all OK.
I'd like to thank Dave for being a great booth-, room- and dinner-mate, and Christel and the Freenode#live crew for a wonderfully well-organized weekend. Not to mention the speakers and attendees who made it a happy and instructive time.
Kdenlive 18.08.3 is out with updated build scripts as well as some compilation fixes. All work is focused on the refactoring branch, so nothing major in this release. On the Windows front, however, some major breakthroughs were made, like fixing the play/pause lag and gaining the ability to build Kdenlive directly on Windows. The next milestone is to kill the running process on exit, making the Windows version almost as stable as the Linux one.
In other news, we are organizing a bug squash day in the first days of December. If you are interested in participating, this is a great opportunity, since we have prepared a list of low-hanging bugs to fix. See you!
Have you ever wanted to test whether your application works on ARM, but didn't want to make an image and launch a real device, or wanted to test an architecture you don't even have hardware for? I have been following a set of instructions gathered from various online sources and have been teaching others how to do the same, but I want to collect it all and publish it here for you to follow as well. While writing this I am following the instructions myself, for an architecture I haven't done this for before.
The secret is of course QEMU, but to make testing easier I will set it up so that QEMU doesn’t simulate an entire system, but runs foreign applications as if they were native inside my existing X11 desktop. This is possible through the qemu-user binaries, which can run statically linked executables or dynamically linked ones if you are in a properly set up chroot.
So how do you get this properly set up chroot? There are various ways depending on the Linux distro. I personally use Debian, and the instructions here will be almost identical to what is done on Ubuntu. If you use Suse or RedHat, I know you can find official guidelines on setting up a chroot for qemu.
First, get the dependencies (as root or sudo) :
apt-get install binfmt-support qemu qemu-user-static debootstrap
Most of these are just the tools needed; binfmt-support is the configuration glue that will make Linux launch ELF executables of non-native architectures using qemu, which is another piece of the magic puzzle.
For the example I am testing while writing this, I will be targeting the IBM mainframe architecture S/390x, so I will also need the cross-building tools for that. I can get all of those on Debian by just pulling in the right cross-building g++ (as root or sudo) :
apt-get install g++-s390x-linux-gnu
S/390x is interesting for me because Debian dropped official support for big endian PPC64, so I need a new way of testing that Qt works on 64bit big-endian platforms.
Now go to where you want your sysroots installed, and run debootstrap. Debootstrap is the Debian tool for downloading and installing a Debian image into a sub-directory
(as root or sudo) :
debootstrap --arch s390x stable debian-s390x http://deb.debian.org/debian/
Next prepare it for being a chroot system (as root or sudo) :
mount --rbind /dev debian-s390x/dev/
mount --rbind /sys debian-s390x/sys/
mount --rbind /proc debian-s390x/proc/
Enter the chroot (as root or sudo) :
chroot debian-s390x
If you run ‘uname -a’ here you will now get something like:
Linux hostname 4.x.x-x-amd64 #1 SMP Debian 4.x.x-x (2018-x-x) s390x GNU/Linux
In other words, a native AMD64 kernel with an s390x userland! Every command you execute here will use the s390x binaries in the chroot, but the native kernel for system calls. If you launch an X11 app, it will use the X11 server already running natively on your machine.
To build Qt, we also need all the Qt dependencies inside the chroot. In Debian we can do that using ‘apt-get build-dep’.
Edit debian-s390x/etc/apt/sources.list and add a deb-src line, such as:
deb-src http://deb.debian.org/debian stable main
Install the Qt dependencies (inside chroot as root):
apt-get build-dep qtbase-opensource-src
Finally, to complete the setup we need to make sure it works as a sysroot for cross-building, by replacing absolute links to system libraries with relative ones that work both inside and outside the chroot (inside chroot as root):
apt-get install symlinks
symlinks -rc .
We can now build Qt for this sysroot. If you want to test your own applications, you could also install an official prebuilt Qt version from Debian and build against that.
Configuring Qt for cross-building to a platform like that looks like this on Debian, and only needs a few changes to work on other platforms:
$QTSRC/configure -prefix /opt/qt-s390x -release -force-debug-info \
-system-freetype -fontconfig -no-opengl \
-device linux-generic-g++ -sysroot /sysroots/debian-s390x \
-device-option CROSS_COMPILE=s390x-linux-gnu-
You can remove the -no-opengl argument if you want to test OpenGL using the mesa software implementation, but otherwise I don’t recommend using OpenGL inside the chroot.
If everything is set up correctly you don't need to specify '-system-freetype -fontconfig'. It is just there to catch failures in pkg-config, so we don't end up building a Qt version that requires manually bundling fonts.
In general cross-compiling with Qt always looks similar to the above, but note that if you are cross-building to ARM32, it looks a little bit different:
$QTSRC/configure -prefix /opt/qt-armhf -release -force-debug-info \
-system-freetype -fontconfig -no-opengl \
-device linux-arm-generic-g++ -sysroot /sysroots/debian-armhf \
-device-option DISTRO_OPTS=hard-float \
-device-option COMPILER_FLAGS='-march=armv7-a -mfpu=neon' \
-device-option CROSS_COMPILE=arm-linux-gnueabihf-
Notice that ARM32 has a specific device name (linux-arm-generic-g++ instead of linux-generic-g++) and needs to have a floating-point mode set. Additionally, for older embedded architectures it is generally a good idea to specify COMPILER_FLAGS to select a specific architecture. You need to pass compiler flags as a device-option, because the normal way of specifying QMAKE_CFLAGS and QMAKE_CXXFLAGS after the configure options sets the host compiler flags, not the target compiler flags. If you need more complicated options, you should make your own qmake device target; you can always base it on linux-generic-g++ for good defaults.
With Qt configured like this, you can build it as usual and run make install to have the libraries installed into the /opt directory under the chroot.
One issue I encountered: as I built qtbase, I hit a linking error:
qt5/qtbase/src/corelib/global/qrandom.cpp:143: error: undefined reference to 'getentropy'
The problem turns out to be that my cross-building tools have libc6 2.27 while the sysroot has libc6 2.24. The best thing to do would be to use the same libc version on both sides. However, as they are fully compatible apart from newer features in one of them, I chose to just disable this feature in Qt. I accomplished this by calling the configure command again, this time with -no-feature-getentropy added.
With this change, everything I tried building built and installed without further issues.
To get some examples to test, go to the folder of the example in the build tree and run ‘make install’. For instance, my standard example for widgets:
Then (inside chroot as root, assuming you have created a testuser):
su - testuser
QT_XCB_NO_MITSHM=1 /opt/qt-s390x/examples/widgets/richtext/textedit/textedit -platform xcb
And it just works…
The -platform xcb argument is needed because the default for embedded builds is using the eglfs QPA. I am assuming you are developing on X11 and want the jaw-dropping experience of having a foreign architecture Qt application just launch like a normal Qt app.
In this setup, applications running in QEMU will be much faster than under full system emulation, because the kernel and the X11 server are still the native ones and thus run at normal speed. For X11 it does mean occasionally hitting a few oddities, as there can be a client-server mismatch in endianness. While that is supposed to be supported, it is such an edge case that it often has issues. This is the reason why QT_XCB_NO_MITSHM=1 is set above: otherwise we would hit an issue with passing file descriptors between client and server, which is likely a bug in libxcb. If we were testing a little-endian architecture like arm64 or riscv instead, the environment variable would not be needed.
So that is it - quite a few steps, but most of them only need to be done once. And as I wrote this, I followed the same instructions for a rather uncommon and not officially supported platform, and only hit two issues, both with easy workarounds.
Last month the second edition of my book “Dessin et peinture numérique avec Krita” was released. I just received a few copies, so now is a good time to write a little about it.
I wrote the first edition for Krita 2.9.11, almost three years ago. A lot of things have changed since then, so I updated this second edition for Krita version 4.1.1 and added a few notes about some new features.
Also, my publisher worked again on updating and improving the French translation of Krita. Thanks again to D-Booker for their contribution.
You can order this book directly from the publisher's website, in a printed or digital edition.
In the Configure Feed menu at the top you can select to read blogs in different languages. By default it shows only blogs in English, as well as Dot News, Project News and any user blogs whose authors have asked to be added (only two are listed in our config). You can also show blogs in Chinese (also only two listed), Italian (none listed), Polish (one), Portuguese (two), Spanish (four, but kdeblog by Jose is especially prolific) or French (none). Work to be done includes working out how to make this apply to the RSS feed.