May 25, 2017

If the people of the KDE Community never stop (their new version of Plasma was released yesterday), the people at Slimbook apparently don't either. A few days ago the Slimbook Pro was launched, a new marvel among Linux ultrabooks that offers a new alternative when it comes to acquiring a device that is as free as possible.

Slimbook Pro, a new marvel among Linux ultrabooks

Searching for a laptop compatible with GNU/Linux was one of my struggles in the past. They were hard to find, of dubious quality, and expensive.

Fortunately, the problem of finding one has changed: now it is which one to choose. Nowadays several brands have offered, and still offer, laptops with Linux: Dell, System76, Slimbook, etc.

And it is this last brand I want to talk about today, since a few days ago they added a new model to their growing collection of ultrabooks and devices 100% compatible with GNU/Linux.

It is the Slimbook Pro, a new marvel from the folks at the company of the same name, which makes our life even harder when it comes to deciding on one laptop or another.

Slimbook Pro

Besides a new chassis, the new Slimbook Pro offers:

  • 13″ screen (my ideal size) with two available resolutions: FullHD (1920 × 1080) or QHD+ (3200 × 1800, HiDPI).
  • 7th-generation Intel i3, i5 and i7 processors
  • DDR4 RAM
  • Room for a main drive (normally an SSD) plus an auxiliary one (mechanical or SSD), SATA 3 in the classic 2.5-inch format.
  • Intel 8265 AC Wi-Fi cards with new antennas
  • New Synaptics touchpad
  • Connectivity: RJ45 network, USB 3, HDMI and DisplayPort, audio jack input and independent jack output.
  • Weight: 1.3 kg

As you can see, it is as if they had put the classic Slimbook on vitamins and offered us a much improved version, in keeping with the times.

I will almost certainly get my hands on one next July during Akademy-es and Akademy in Almería, so I will report more details then.

By the way, and as a reminder: don't think I do all this publicity because I receive financial compensation… the advertising in the margin is voluntary and I get nothing for placing it. It is there because I firmly believe in this project.

More information: Slimbook

Qt for Device Creation provides ready-made disk images for a variety of devices. When you flash one to a device, start the enterprise Qt Creator and plug the device in via USB, it will be detected automatically. You are ready to run, debug and profile your applications right on the device. From the user's point of view, the green marker for a ready device simply appears.

ready-device

But how do we actually see the device? There have been changes here for 5.9 and in this post I’ll discuss what we ended up doing and why.

How things used to be

Previous versions of Qt for Device Creation used the Android Debug Bridge (ADB) for device discovery. As you can guess from the name, it is the same component that is used in the development of Android applications. It was a natural choice early in the development of Boot2Qt, when Android was a supported platform along with embedded Linux. But nowadays we focus on embedded Linux only. (In Device Creation with the device images, Qt can of course still be used to build applications on Android.)

Because it requires Google's USB drivers, ADB made installation more complicated than desired for our users on Windows. And even when they jumped through the hoops, they could end up with a different version than the one we tested against. There is also the risk of mix-ups with Android development environments, which may include their own versions of ADB. Some things were missing as well, which required workarounds inside our Qt Creator integration.

Recognizing USB devices

So to avoid those issues we decided to write our own debug bridge, which – without extraneous imagination – we called QDB. It looks for Boot2Qt devices in a similar way to how the purpose of other USB devices is discovered. When a device is enumerated on the Universal Serial Bus, it describes its class, subclass and protocol. For example, for my mouse the command lsusb -v reveals:

      bInterfaceClass         3 Human Interface Device
      bInterfaceSubClass      1 Boot Interface Subclass
      bInterfaceProtocol      2 Mouse

There is a vendor-defined class, 255. We have picked a subclass and protocol inside it for our devices to use, thus allowing QDB to find them. Finding them is of course not enough, since there needs to be a way to transfer data between the host computer and the device.
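The matching logic can be sketched roughly as follows. Note that the subclass and protocol values used here are invented for illustration; the post does not state the actual values Boot2Qt devices report.

```python
# Sketch of QDB-style device matching: scan interface descriptors and
# keep those with the vendor-defined class (255) plus an agreed-upon
# subclass/protocol pair. The values 66 and 1 are assumptions, not the
# real Boot2Qt values.
VENDOR_DEFINED_CLASS = 255
B2QT_SUBCLASS = 66   # hypothetical
B2QT_PROTOCOL = 1    # hypothetical

def is_b2qt_interface(b_class, b_subclass, b_protocol):
    return (b_class == VENDOR_DEFINED_CLASS
            and b_subclass == B2QT_SUBCLASS
            and b_protocol == B2QT_PROTOCOL)

# A mouse reports (3, 1, 2) and is ignored; only our device matches.
interfaces = [(3, 1, 2), (255, 66, 1), (255, 0, 0)]
matches = [i for i in interfaces if is_b2qt_interface(*i)]
```

In the real implementation these triples come from the interface descriptors read during USB enumeration, as in the lsusb output above.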

Network over USB

ADB implements file transfers and port forwards, transferring the data over the USB connection using its own protocol. One obvious option would have been to do the same thing. But that would have been reinventing the wheel, as many quickly pointed out. There was also a second place where effort was being duplicated to accomplish the same thing: the Boot2Qt plugin for Qt Creator implemented support for running, debugging and profiling applications with ADB, while Qt Creator already supports all of that for any Linux device over SSH through the RemoteLinux plugin. If we were able to use SSH, all of that duplication could be removed (once the support window for older Qt for Device Creation releases runs out).

Linux allows a device to present itself as a USB Ethernet adapter with the kernel module usb_f_rndis. The device then shows up as a network card on both Linux and Windows. This way we can have a network connection between the host computer and the device, which allows the use of SSH and thus the desired reuse. And apart from Qt Creator activity, the user can also use regular SSH to connect to the device. It has a properly resizing terminal, unlike adb shell! All the other things you might do over the network also become possible, even if the embedded device has no Ethernet socket.

But there is something we glossed over: networks don't configure themselves. If the user needed to set the right IP address and subnet mask on both the computer and the device, we certainly wouldn't meet the bar of just plugging in the device and being ready to go.

Configuring the network

Now, despite what I just said, there actually are efforts to make networks configure themselves. Under the umbrella term zeroconf, two things are of particular interest: link-local IPv4 addresses, as specified in RFC 3927, and mDNS/DNS-SD, which allows finding out the addresses of devices in the network. For a while we tried to use these to accomplish the configuration of the network. However, getting the host computer to actually use link-local addresses for our network adapter proved fiddly, and even when it worked the delay was a bit too long: the connection only works after both the host computer and the device have obtained their IPs, and when that happened wasn't predictable. I hope we will be able to revisit mDNS/DNS-SD at some point, because it might allow us to provide device discovery when devices are connected over Ethernet instead of USB, but for now zeroconf required too much configuration.

Another thing we looked at was IPv6 link-local addresses. Unlike their IPv4 cousins, they are part of the protocol and always available, which would eliminate the delays and the configuration burden. Here the downside is that they are a bit more local to the link. An IPv4 link-local address is from the block 169.254.0.0/16 and you can just connect to it regularly. The IPv6 versions use the prefix fe80::/10, but they also require a “scope ID” to describe which network adapter to use. I'd rather not write

ssh user@fe80::2864:3dff:fe98:9b3a%enp0s20f0u4u4u3

That is superficial, but there was also a more important issue: all the tools would need to support IPv6 addresses and accept these scope IDs. GDB – which we use for debugging – didn't.

Back to the drawing board. The simplest approach would be picking a fixed IP address for the devices. That has two issues: first, you can't connect more than one device; second, the fixed IP address might already be in use on the host computer. We ended up with the following approach to circumvent these problems: the same process that recognizes the USB devices knows a list of candidate network configurations in the private-use IPv4 ranges. When a new device is connected, it looks at the networks the host computer currently has and picks a candidate that doesn't conflict. The device is told the configuration, sets its own IP address accordingly, and then acts as a DHCP server that provides an IP for the host computer.
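The conflict-free candidate selection can be illustrated with a small sketch. The candidate subnets below are invented for illustration; the post does not list the actual ranges QDB uses.

```python
import ipaddress

# Hypothetical candidate subnets from the private-use IPv4 ranges.
CANDIDATES = ["172.31.100.0/30", "172.31.100.4/30", "192.168.219.0/30"]

def pick_configuration(host_networks):
    """Return the first candidate subnet that does not overlap any
    network the host computer is currently using, or None."""
    in_use = [ipaddress.ip_network(n) for n in host_networks]
    for cand in CANDIDATES:
        net = ipaddress.ip_network(cand)
        if not any(net.overlaps(existing) for existing in in_use):
            return net
    return None

# The first candidate conflicts with an existing host network,
# so the second one is picked for the device.
chosen = pick_configuration(["192.168.0.0/24", "172.31.100.0/30"])
```

A second device connected later would see the chosen subnet among the host's networks and get the next free candidate, matching the behaviour described below.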

After this process is done, the host computer and device have matching network configurations, Qt Creator knows the IP of the device, and everything is ready. If you connect a second device, a different candidate configuration is picked, since the first one is already in use. The DHCP server is disabled when the device is disconnected, because otherwise the host computer could get an IP from a previous configuration when the device is connected again.

The post Device detection in Qt for Device Creation 5.9 appeared first on Qt Blog.

May 24, 2017

LaKademy 2017 group photo

A few weeks ago we had the fifth edition of the KDE Latin America summit, LaKademy. Since the first edition, the KDE community in Latin America has grown, and we now have several developers, translators, artists, promoters, and more people from the region involved in KDE activities.

This time LaKademy was held in Belo Horizonte, a nice city known for its amazing cachaça, cheese, home-made beers, cheese, hills and, of course, cheese. The city is very cosmopolitan, with several options for activities and gastronomy, and the people are friendly. I would like to go back to Belo Horizonte, maybe on my next vacation.

The LaKademy activities were held at CEFET, a technological education institute. During the days of LaKademy there were political demonstrations and a general strike in the country, a consequence of the current political crisis here in Brazil. Although I support the demonstrations, I was in Belo Horizonte for the event, so I focused on my tasks while in my mind I was side by side with the workers on the streets.

As in past editions, I worked a lot on Cantor, the mathematical software I maintain. This time the main tasks performed were an extensive set of reviews: revisions of pending patches, a pass through the bug tracker to close very old (and invalid) reports, and a pass through the task management workboard, especially to ping developers whose tasks had had no comment in the last year.

There was some work to implement new features as well. I finished a backend refactoring to provide a recommended version of the programming language for each backend in Cantor. Since each programming language has its own planning and scheduling, it is common for some version of a language not to be correctly supported by a Cantor backend (Sage, I am thinking of you). This feature presents a “recommended” version of the programming language supported by the Cantor backend, meaning that this version was tested and will work correctly with Cantor. It is more of a workaround to maintain the sanity of the developer while trying to support 11 different programming languages.

Another feature I worked on, though it is not finished, is an option to select between different LaTeX processors in Cantor. Currently there are several LaTeX processors available (such as pdflatex, pdftex, luatex, xetex, …), some of them with several additional features. This option will increase the versatility of Cantor and allow the use of modern processors and their features in the software.

In addition to these tasks I fixed some bugs and helped Fernando Telles, my past SoK student, with some tasks in Cantor.

(Like in past editions)², at LaKademy 2017 I also worked on a set of tasks related to the management and promotion of KDE Brazil. I investigated how to bring back a unified feed of Brazilian blog posts, as in the old Planet KDE Português, used to send updates about KDE in Brazil to our social networks; Fred implemented the solution. I then updated this feed in our social networks, updated the contact e-mail used for these networks, and started a Bootstrap version of the LaKademy website (but the team is migrating to WordPress, so I think it will not be used). I also did a large review of the tasks on the KDE Brazil workboard, migrated last year from the TODO website. Besides all this, we had the promo meeting to discuss our actions in Latin America – all the tasks were documented on the workboard.

Of course, just as we worked intensely during those days, we also had a lot of fun between one push and another. LaKademy is also an opportunity to meet old friends and make new ones. It is amazing to see the KDE fellows again, and I invite the newcomers to stay with us and come to the next LaKademy editions!

This year we had a problem that we must address in the next edition – all the participants were Brazilian. We need to think about how to integrate people from other Latin American countries into LaKademy. It would be bad if the event became just an Akademy-BR.

Filipe and Chicão

So, I give my greetings to the community and commit myself to the mission of continuing to work to grow Latin America into an important player in the development and future of KDE.

It's been some time since I've posted any progress on AtCore. Some may wonder what we have been up to…

  • Implemented a Command Queue

Since we would like to get some information from the printer while printing, we need some method of flow control. AtCore now uses a command queue to send all commands. With the command queue in place we have to handle Stop and Emergency Stop differently. For example, Emergency Stop should skip the queue and be sent as soon as the button is pressed.
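The idea can be sketched with a toy model (the class and method names here are invented for illustration, not AtCore's actual API): normal commands wait in a FIFO until the printer acknowledges the previous one, while an emergency stop bypasses the queue entirely.

```python
from collections import deque

class CommandQueue:
    """Toy model of a printer command queue: normal commands are queued
    FIFO for flow control, while an emergency stop jumps ahead of
    everything. Names are hypothetical, not AtCore's real API."""

    def __init__(self):
        self.sent = []          # what actually went to the printer
        self.pending = deque()  # commands waiting for flow control

    def push(self, gcode):
        self.pending.append(gcode)

    def process_one(self):
        # Called when the printer acknowledges the previous command.
        if self.pending:
            self.sent.append(self.pending.popleft())

    def emergency_stop(self):
        # M112 must not wait behind queued commands.
        self.pending.clear()
        self.sent.append("M112")

q = CommandQueue()
q.push("G28")
q.push("G1 X10")
q.process_one()      # printer ready: G28 goes out
q.emergency_stop()   # M112 skips the queue; the pending G1 is dropped
```

Whether a real implementation drops or merely bypasses pending commands on emergency stop is a design choice; dropping them is assumed here for simplicity.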

  • Requesting Temperature from the printer

After you connect with a firmware plugin, an M105 is sent to the printer every 5 seconds and the temperature results are put onto a pretty graph that Patrick made. In order to make the graph work we need to read the M105 return and extract the data from it. While our current method of parsing this info works, it is very specific to each firmware's return. Since these strings can differ between firmware versions it can crash, so we are working on a better way to do this.
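A common Marlin-style M105 reply looks like `ok T:21.3 /0.0 B:22.1 /0.0` (hotend and bed, current/target). As noted above the exact format varies between firmwares, so the following regex sketch (not AtCore's actual parser) only covers that common shape:

```python
import re

# Parse a Marlin-style M105 reply such as "ok T:21.3 /0.0 B:22.1 /0.0".
# Other firmwares format this differently, so a robust parser needs
# per-firmware handling; this only handles the "T:cur /target" shape.
M105_RE = re.compile(
    r"(?P<sensor>[TB]):(?P<current>[\d.]+)\s*/\s*(?P<target>[\d.]+)")

def parse_m105(line):
    """Return {sensor: (current, target)} for every sensor found."""
    return {m.group("sensor"): (float(m.group("current")),
                                float(m.group("target")))
            for m in M105_RE.finditer(line)}

temps = parse_m105("ok T:21.3 /0.0 B:22.1 /0.0")
```

Returning an empty dict (rather than raising) when nothing matches is what keeps an unknown firmware reply from crashing the caller.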

  • Cleaned up the test client GUI

The test client's GUI needed some love. All the widgets now live within docks, and the docks can be arranged however the user likes. I've also added a status bar to show the status of AtCore, as well as a print job timer and a remaining-print-time estimate. A separate axis control for relative movement was requested by a user, and I've added one that Lays wrote some time ago. It works well and allows movements of 1, 10 or 25 units.

  • Fixed the Windows build so that it builds and deploys correctly via Craft

Lays has been working hard to make AtCore buildable via Craft. Now we can build and deploy AtCore (and the test GUI) from Craft. After building, we had a problem with the plugins not being found on Windows and not deploying to the correct path. I added some instructions for deployment and tweaked plugin detection on Windows. Everything is now working well.



The OpenStreetMap plugin in QtLocation 5.8 has received a new plugin parameter, osm.mapping.offline.directory, to specify an indexable location from which to source offline tiles. Since this new feature seems to have generated some confusion, here is an attempt to clarify how it works.

Until now, when a tile became visible on the map, the QtLocation engine would first attempt to source it from the tile cache. If it was not present, it would attempt to fetch it from the provider.

With QtLocation 5.8 it is possible to pass an additional offline directory to the OSM plugin. When this parameter is present, tiles will be sourced from the specified directory before being searched for in the tile cache and, after that, requested from the provider.

The content of such an offline directory is read-only for QtLocation, meaning that the engine will not add, delete or update tiles present therein. Tile filenames have to follow the osm plugin filename pattern, meaning osm_100-<l|h>-<map_id>-<z>-<x>-<y>.<extension>.

The field map_id goes from 1 to 7 (or 8, if a custom map type is specified), referring to the map types street, satellite, cycle, transit, night-transit, terrain and hiking, in that order.
The field before it has to contain an l or an h, which stand for low-DPI and high-DPI tiles, respectively.
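Putting the pattern together, a small sketch can build and validate such filenames (the helper names are mine, not part of QtLocation):

```python
import re

# Build and validate tile filenames following the pattern above:
# osm_100-<l|h>-<map_id>-<z>-<x>-<y>.<extension>
TILE_RE = re.compile(r"^osm_100-(l|h)-(\d+)-(\d+)-(\d+)-(\d+)\.(\w+)$")

def tile_filename(hidpi, map_id, z, x, y, ext="png"):
    """Compose a tile filename; map_id 1..7 maps to street, satellite,
    cycle, transit, night-transit, terrain and hiking, in that order."""
    dpi = "h" if hidpi else "l"
    return "osm_100-%s-%d-%d-%d-%d.%s" % (dpi, map_id, z, x, y, ext)

# Low-DPI street-map (map_id 1) tile at zoom 0, position (0, 0).
name = tile_filename(False, 1, 0, 0, 0)
valid = bool(TILE_RE.match(name))
```

This makes it easy to pre-generate the directory contents for the low-zoom-levels use case described below.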

The intended use cases that this feature aims to address are mainly:
– shipping custom maps for limited regions with the application;
– shipping tiles for low zoom levels with the application, so as to prevent blank maps when the application is first run without an internet connection.

To exemplify the usage of this feature, OsmOffline is an example application that addresses the second scenario using tiles from the Natural Earth project, which are licensed under very permissive terms.
This example embeds tiles for one map type at zoom levels 0, 1 and 2 using the Qt Resource system, and sources them from the qrc path :/offline_tiles/.

The post QtLocation: using offline map tiles with the OpenStreetMap plugin appeared first on Qt Blog.

It has been a while since I wrote a little tutorial. Social news items carry more and more weight, and I think they are very important in something like KDE, which – let's not forget – is a Community. Besides, the GNU/Linux world gets simpler every day and needs this kind of article less and less. So I am pleased to have found a moment to publish how to install Telegram on openSUSE in two ways, both of them very simple.

How to install Telegram on openSUSE

On its own merits, and even though it is not 100% free by the usual standards, Telegram has become a very important tool for Free Software communities. And little by little it…

To install Telegram on openSUSE we have several options.

  • The first relies on additional repositories, since this application is not found in the usual ones; this way the application will be updated automatically when the maintainer updates it and we update our system.
  • The last is to download the application directly from Telegram's official servers, in which case we must update whenever the application asks us to.

Desde repositorios

To follow this method we must:

  • Open the console
  • Add the following repository for openSUSE Leap 42.2:

$ sudo zypper addrepo http://download.opensuse.org/repositories/server:messaging/openSUSE_Leap_42.2/server:messaging.repo

  • Refresh:

$ sudo zypper refresh

  • Install Telegram:

$ sudo zypper install telegram-desktop

With One Click Install

Basically it is a matter of clicking this link for openSUSE Leap 42.2 and following the instructions.

For other openSUSE versions I recommend visiting the source page of this mini-tutorial.

From the web page

This method consists of downloading the package from Telegram's official website, extracting it, and running the binary it contains.

It is very simple, but I don't like having the application “loose” on my hard drive.

By the way, I'll take this opportunity to ask for help from any developer willing to do a favour for a teacher who longs for an application uniting communication, school and Telegram.

May 23, 2017

In my last blog post I discussed how assumptions, such as the platform we develop on, can affect our development. We need to minimize them by empowering developers with good tools so that they can develop properly. To that end, I introduced runtimes in our IDE to abstract platforms (much like GNOME's Builder or Qt Creator do).

There are different platforms we will be developing for, and they need to be easily reachable when coding and testing – both when switching between them and when interacting with them transparently.

To that end I implemented four approaches that integrate different runtimes:

  • Docker allows you to develop directly against virtually any system. This is especially interesting because it enables us to reproduce the environment our users are running: behavior on execution and project information (i.e. the imports are the ones from the target rather than the ones on our local system). Docker is a widespread technology in the cloud, and I hope many developers will see the value in integrating the deployed environment into the IDE while they are coding.
  • Flatpak is a solution that specifically targets desktop Linux applications. Since we are talking about distributing bundled applications to users, we have the opportunity to integrate the tooling specifically to that end: from fetching dependencies to testing on other devices (see the videos below).
  • Android, as you know, is something I have been pushing for years. Finally we are getting to a point where the IDE can help get some setup troubles out of the way.
  • The local host, i.e. what we have now.

And remember, KDevelop is extensible. Do you want snapcraft? vagrant? mock? Contributions are very welcome!

If there is something better than a list of technologies and buzzwords, it's videos. Let's see why this could change how you develop your software.

One development, any platform

We get to develop an application and switch the target platform we are developing for back and forth.

Here I put together a short video that tests Blinken on different platforms:

One development, any device

Using the right SDK is not proof enough that the application will work as expected on every device, especially those our users will be using. Being able to easily send our application to another device to test and play around with is something I had needed for a long time. It is especially important when we need to test different form factors or input devices.

In this video we can see how we can easily test an application locally and, once it works, just switch to Android and send it to the device for a proper test on the smaller touch screen.

Here we can see how we can just test an application by executing it remotely on another device. This is done by creating a bundle of the application, sending it to the device where we want to test it and executing it there.

Hassle-free contributions

You can't deny it. You've wanted to fix things in the past, but you couldn't be bothered with setting up the development environment. Both Flatpak and Docker offer maintainers the possibility to distribute recipes for setting up development platforms, and these can and should be integrated, so that we can dedicate that one free hour in the weekend to fixing the bug that has been annoying us, rather than reading a couple of wikis and – oh, well, never mind, gotta make dinner.

We can do this either by providing the flatpak-builder json manifest (disclaimer: the video is quite slow).

Or a Dockerfile.

You can try this today by building the kdevelop git master branch; feedback is welcome. Or wait for KDevelop 5.2 later this year.

Happy hacking!

Fifth month and fifth podcast. This year, at least, you cannot complain about the regularity of the members of KDE España, who gather around their screens, microphones and webcams to talk for a while about topics related to the KDE world. This is how the nineteenth podcast was recorded, dedicated to the usually hidden work of KDE's sysadmins. I hope you enjoy it.

KDE Sysadmins, the nineteenth KDE España podcast

The fifth video podcast of KDE España's third season, titled Sysadmins de KDE, was recorded on May 16 using Google's services, without any notable technical problems.

The participants in this video podcast were:

  • Ruben Gómez Antolí, member of KDE España, who once again acted as host.
  • Baltasar Ortega (@baltolkien), secretary of KDE España and creator and editor of this blog, who provided the user's point of view and helped with hosting duties.
  • And the special guest, Nicolás Álvarez, sysadmin of the KDE Community, who enlightened us with his knowledge, as he is one of the 3 people who keep KDE's whole virtual machinery running.

Over the almost hour and a quarter that the video podcast lasted, everything related to sysadmin work was discussed: servers, maintenance, security, software used, applications offered, workload, problems, solutions, the future, etc.


Also, thanks to the work of VictorHck (don't miss his blog), the podcast is now available on archive.org.

I hope you liked it; if so, you know the drill: “thumbs up”, share, and don't forget to visit and subscribe to the KDE España YouTube channel.

KDE Sysadmins, the nineteenth KDE España podcast

As always, we await your comments, which I assure you are very valuable to the developers, as long as criticism is constructive (the other kind is never good for anyone). We would also like to know which topics you would like us to discuss in upcoming podcasts.

I'll take the opportunity to invite you to subscribe to the Ivoox channel of the KDE España podcasts, which will soon be up to date.

Once again a lot has been going on behind the scenes since the last release. The HTML gallery tool is back, database shrinking (e.g. purging stale thumbnails) is now also supported on MySQL, grouping has been improved, and additional sidecars can now be specified. The release of 5.6.0 will therefore be (and already is) delayed, as we would like to invite you to test all these features. As usual they are available in the pre-release bundles, or of course directly from the git repository. Please report any dysfunctions, unexpected behaviour or suggestions for improvement to our bug tracker.

The HTML gallery is accessible through the Tools menu in the main bar of both digiKam and Showfoto. It allows you to export pictures to a gallery that you can then open in any browser. There are many themes to select from, and you can create your own as well.

Tests for database integrity and obsolete information were already introduced in 5.5.0. Besides obvious data-safety improvements, this can free up quite a lot of space in the digiKam databases. For technical reasons, only SQLite databases could be shrunk to this smaller size in 5.5.0. Now this is also possible for MySQL databases.

Earlier changes to the grouping behaviour showed that digiKam users have quite diverse workflows – so with the current change we try to accommodate that diversity.

Originally, grouped items were basically hidden away. Due to requests to include grouped items in certain operations, this was changed entirely to include them in (almost) all operations. Needless to say, this wasn't such a good idea either. So now you can choose which operations should be performed on all images in a group or just on the first one.
The corresponding settings live in the configuration wizard under Miscellaneous, in the Grouping tab. By default all operations are set to Ask, which will open a dialog whenever you perform the operation and grouped items are involved.

Another new capability is recognising additional sidecars. Under the new Sidecars tab in the Metadata section of the configuration wizard, you can specify any additional extension that you want digiKam to recognise as a sidecar. These files will neither be read from nor written to, but they will be moved/renamed/deleted/… together with the item they belong to.

Finally, the last important change for the next version is the restoration of the geolocation bookmarks feature, which did not work with the bundled versions of digiKam (AppImage, macOS, and Windows). The new bookmark manager has been fully rewritten and is still compatible with previous geolocation bookmark settings. It is now able to display the bookmark GPS information on a map for better usability while editing your collection.

Thanks in advance to everyone testing this new release and in general everyone using digiKam - we hope you keep enjoying this tool and spread the word!

May 22, 2017

After a month of dread and panicking about the fact that Google Summer of Code results are announced in the middle of exam season... I'm happy to say I'll be doing the Rust plugin for KDevelop!

Quick intro

My name is Emma. Just turned 21. I'm a second-year undergrad at Imperial College London. Been programming since I was 10. I've worked on a bunch of different projects over the years. Many of them are open source. I've contributed to the KDevelop Python plugin previously. I worked at Microsoft Research in summer 2016 on the AssessMS project. I'm interested in a couple of different areas of computer science: artificial intelligence, computer vision, and lately compilers, type systems and operating systems. Favourite languages: Haskell, C++ and as of recently...

Rust

Rust is a rather new programming language, but it has already gained a lot of traction. It was voted “most loved” language by developers for the second year in a row in the StackOverflow developer survey. Rust has been used in projects ranging from operating systems to game engines for Minecraft-like games. Despite this, IDE support is still lacking. This has to change...

KDevelop

KDevelop is a really great IDE, but it currently does not support Rust at all. However, it does have an easily extensible plugin architecture, so the logical conclusion is to write a Rust plugin! 

And there you have it. That was basically my reasoning when applying to KDE to do this project.

What now?

I had a bit of a snag with timing: my university exams were basically back to back for the past three weeks, and May is supposed to be used for community bonding, so I'm a bit behind on that. However, I have been playing around with Rust quite a bit (I started writing a small OS kernel because why not). Rust does interface quite nicely with C (aside from half of the code being littered with 'unsafe's). Still, this means my initial idea should work quite nicely. The plan is to get all necessary packages and a skeleton project set up by May 30 when coding begins.

The plan for the next month: parsing Rust code

Arguably the most difficult part of this whole project. Rust is, in my opinion, very similar to C++ when it comes to parsing (that is, a nightmare). So the plan is basically not to do any parsing at all. Bear with me for a moment.

The Rust compiler is nicely split up into different modules. One of those is the syntax parsing library, appropriately named libsyntax. Normally, it's private, except in the Rust nightly compiler (mainly for debugging purposes I suppose). However, a fork of it is available for the stable branch, named the syntex_syntax package. Several other Rust tools including rustfmt, Rust Racer and Rust Language Server use this package, so I'll assume it's stable.

It does the parsing and provides a nice visitor-pattern approach to traversing the AST. Hook this up to C++ with some foreign function calls and that's about it for parsing.
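As a rough sketch of what that hookup could look like (all names here are hypothetical, not the actual plugin code, and the "parser" is a trivial line scan standing in for syntex_syntax): a Rust entry point exposed over the C ABI takes the source text plus a callback, and invokes the callback once per discovered item, which is the shape a visitor-pattern traversal reported back to C++ would take.

```rust
use std::ffi::{CStr, CString};
use std::os::raw::c_char;

/// Callback the C++ side registers; invoked once per discovered item name.
pub type ItemCallback = extern "C" fn(name: *const c_char);

/// Hypothetical C-ABI entry point. The real plugin would hand `source` to
/// syntex_syntax and walk the resulting AST with a visitor; here a trivial
/// scan for top-level `fn` items stands in for the parser.
#[no_mangle]
pub extern "C" fn parse_source(source: *const c_char, callback: ItemCallback) {
    let text = unsafe { CStr::from_ptr(source) }.to_string_lossy();
    for line in text.lines() {
        if let Some(rest) = line.trim_start().strip_prefix("fn ") {
            if let Some(name) = rest.split('(').next() {
                // Hand each item name across the FFI boundary.
                if let Ok(cname) = CString::new(name.trim()) {
                    callback(cname.as_ptr());
                }
            }
        }
    }
}
```

On the C++ side this would just be an `extern "C"` declaration plus a function pointer, with KDevelop's DUChain structures built inside the callback.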

Semantic highlighting at this point becomes a matter of traversing the AST produced by the syntax parsing library (which includes the ranges of all elements), and constructing the appropriate structures on KDevelop's side.

And afterwards...

The final goal is to have full language support, which includes semantic highlighting, navigation, code completion, as many code refactoring/generation options as possible, and debugging. Some of these partially overlap: highlighting, navigation and completions all depend on building a knowledge base of the code. Debugging should be a matter of hooking up GDB/LLDB, which KDevelop already supports, to Rust-compiled objects. Finally, refactoring and code generation should be quite fun to do, and I think that would make KDevelop the Rust IDE.

Stay tuned for updates...

Installers for Kate 17.04.1 are now available for download!

This release includes, besides bug-fixing and features, an update to the search in files plugin. The search-while-you-type in the current file should not “destroy” your last search in files results as easily as previously. The search-combo-box-history handling is also improved.

Grab it now at download.kde.org:  Kate-setup-17.04.1-KF5.34-32bit or Kate-setup-17.04.1-KF5.34-64bit

With an apology to English-speaking audiences…

This year KDAB is once again taking part in QtDay, the Italian conference entirely dedicated to Qt. Now in its sixth edition, QtDay keeps growing. This year QtDay spans three days: the first dedicated to a QML training, followed by two days of the conference proper.

During the conference I will give two talks:

  • On Friday, 23 June, I will take part in a round table on how to contribute to Qt development;
  • On Saturday, 24 June

The post Ci vediamo a QtDay 2017? appeared first on KDAB.

The Akademy programme (Saturday, Sunday) is actually pretty long; the conference days stretch into what feels like evening to me. Of course, the Dutch are infamous for being “6pm at the dinner table, and eat potatoes”, so my notion of evening may not match what works on the Mediterranean coast. Actually, I know it doesn’t, since way back at an Ubuntu Developer Summit in Sevilla it took some internal-clock resetting to adjust to dinner closer to midnight than 18:00.

Foreseen clock-adjustment difficulties aside, I have a plan for Akademy.

  • Attend a bunch of talks. Telemetry / User Feedback sounds like a must-see for me, and lightning talks, and Input Methods is something I know nothing about and should (hey, my work-work application is Latin-1 only and therefore can’t even represent the names of all of its developers properly, and that in 2017), and the analysing code and fuzzing talk connects way back to the English Breakfast Network days of KDE Code Quality.
  • Hammer (and saw, and sand, and paint) on the KDE CI for FreeBSD; this will involve a fair amount of futzing with the base system, but also gently pushing changes to a whole bunch of repositories. KDE Frameworks 5 are mostly blue / yellow. It’s time to start adding higher layers of the software stack to the whole.
  • BoF it up around CMake, FreeBSD, CI, and LDAP.
  • Have fun at the day trip.

Hi, I'm Davide and I'm 22.
I was born on May 17th, so I consider being accepted by KDE a little gift.
The first month is usually dedicated to "Community Bonding". What does that mean?

First of all, I created this blog. Here I'll post updates about Chat Bridge (now renamed to Brooklyn) and myself.
Then, I retrieved my KDE Identity account. The main problem was that I had lost my username.
So I wrote to sysadmin@kde.org, and five minutes later the username was no longer a problem.
Shortly after that I did a lot of other stuff, but I don't want to bore my readers.

After this boring to-do list, I contacted my mentor to keep him updated.
We decided to start developing the application and defined what the app configuration file should look like.
It is obviously open source, and you can use it for your own projects! For now it works only with IRC/Telegram, but it will soon also support Rocket.Chat.
It can only handle plain text at the moment, but that's temporary, don't worry.

I'm planning (but I've not decided yet because of university exams) to go to Akademy 2017 with some guys at WikiToLearn.
I can't wait to start coding!

What do you think about this project?
Do you have plans to use it?
Don't be shy, write me everything you want!



May 21, 2017

The annual openSUSE Conference 2017 is coming up! Next weekend it will once again take place in the Z-Bau in Nuremberg, Germany.

The conference program is impressive and if you can make it, you should consider stopping by.

Stefan Schäfer from the Invis server project and I will organize a workshop about openSUSE for Small and Medium Business (SMB).

SMB has long been a concern close to both our hearts: Stefan, who even works in this area for a living, and I have both used openSUSE in the SMB area for a long time, and we know how well it serves there. Stefan even initiated the Invis Server project, which is completely free software and builds on top of the openSUSE distributions. The Invis Server adds a whole bunch of extra functionality to openSUSE that is extremely useful in the special SMB use case. It has come a long way, starting as Stefan's own project many years ago and evolving into a properly maintained openSUSE spin in OBS with a small but active community.

The interesting question is how openSUSE, Invis Server, and other smaller projects such as Kraft can unite to offer a reliably maintained and comprehensive solution for this huge group of potential users, who are currently mostly locked into proprietary technologies even though FOSS can really make a difference here.

In the workshop we will first introduce the existing projects briefly, maybe discuss some technical questions like the integration of new packages in the openSUSE distributions, and also touch on organizational questions like how we want to set up and market openSUSE SMB.

Participants in the workshop should not expect too much presentation. We rather hope for a lively discussion with many people bringing in their projects that might fit, their experiences, and their ideas. Don’t be shy!


In KDE we cover a mix of platforms and form factors that make our technology very powerful. But how to reach so many different systems while maintaining high quality on all of them?

What variables are we talking about?

Form factors

We use different form factors daily. When on the move, we need things to be straightforward; when focusing, we want all the functionality.

Together with QtQuick Controls, Kirigami offers ways for us to be flexible both in input types and screen sizes.

Platforms

We are not constantly on the same device; diversity is part of our lives. Recommending the tools we make to our peers should always be a possibility, without forcing them into major workflow changes (like changing OS, yes).

Qt has been our tool of choice for years and has proven to keep up with the latest industry changes, embracing mobile and adapting to massively different form factors and operating systems. This includes some integration with each platform’s look and feel, which is very important to many of us.

Devices & Quality Assurance

We are targeting different devices, so we need to let developers test, make issues easy to reproduce, get the most out of the testing we receive, and learn from our users.

For delivery, we should use whatever is native to the platform: APKs (and possibly even Google Play) on Android, installers on Windows, and distribution packages for GNU/Linux.
Furthermore, we’ve been embracing new technologies on GNU/Linux systems that can help a lot on this front, including Snap/Flatpak/AppImage, which could help streamline this process as well.

What needs to happen?

Some of these technologies are slowly blooming as they get widely adopted, and our community needs as well to lead in offering tooling and solutions to make all of this viable.

  • We need straightforward quality assurance. We should ensure the conditions under which we develop and test are our users’ platforms. When facing an error, being able to reproduce and test is fundamental.
  • We should allow for swift release cycles. Users should always be on fresh stable releases. When a patch release is submitted, we should test it and then make it available to users. Nowadays, some users are not benefiting from the most stable releases, and that makes much of our work in vain.
  • Feedback makes us grow. We need to understand how our applications are being used, if we want to solve the actual problems users are having.

All of this won’t happen automatically. We need people who want to get their hands dirty and help build the infrastructure to make it happen.

There are different skills you can put into practice here, ranging from DevOps (helping to offer fresh, quality recipes for your platform of choice), to improving the testing infrastructure, to actual system development on our development tools and, of course, any of the upstream projects we use.

Hop on! Help KDE put Free Software on every device!

Hi everyone, is everything ok? I hope so.

Today, I will talk about my week working on Krita during this community bonding period that ends next Sunday.

GSOC

You are probably asking why I put a Garfield comic here. First, I love cats and Garfield :). Second, I figured out that it represents what an open source community needs: more consistency and constant work. Boud has told me a few times that we need to commit and be in touch with the community every day. It's a problem to send one huge modification or to stay away from IRC for a long time. I'm trying to be more constant, because it's not all about code.

This week was pretty cool because I could get to know the community better, talking with users and devs to define the initial set for Krita's showcase.

  • Monday - I opened a discussion on Krita’s forum to obtain new suggestions for Krita’s showcase.

  • Tuesday - I was trying to understand some current features of Krita that users told me about, like Image Reference and Palette.

  • Wednesday - I organized and wrote down all the users' suggestions from the forum and from IRC on the Phabricator task.

  • Thursday - I asked more experienced devs for help with suggestions in the task thread, as you can see here.

  • Friday - A day to solve some personal problems.

  • Saturday - I wrote an answer with my guideline for the GSoC period.

That’s it for now. Thanks, Krita community. Until next week, See ya!!

Today I streamed the first half of the Plasma 5.11 wallpaper production, and it was an interesting experience. The video above is the abridged version sped up ~20x, heavily edited to the actual creation, and should be a fun watch for the interested.

It looks like there’s another full work-day that needs to go into the wallpaper still, and while I think I’ll also record the second half I don’t think I’ll livestream it; while I’m very appreciative of the viewers I had, it was quite a bit of extra work and quite difficult to carry on a one-man conversation for 8 hours, while working, for at most a few people. Like I said, I will still record the second half of the wallpaper for posterity, I simply don’t think I’ll be streaming it. I do think I’ll keep streaming the odd icon batch, as those are about as long as I want, so they can be kept to a digestible hour.

plasma-5-11-inprogress.png

The wallpaper as it stands is based on an image of a reef, along with a recent trip to the beach during the Blue Systems sprint. There’s still a long way to go, and I can easily see another 8 hours going into this before it’s completed; there are water effects, tides, doing the rocks, and taking a second pass at the foam – among other things – especially before I hit the level of KDE polish I’d like to meet.

Looking at it, I may also make a reversed image with only the shoreline components for dual-screen aficionados.

Within the next week or so I’ll post the next timelapse after I complete the wallpaper.


May 20, 2017

Two days after Global Accessibility Awareness Day, we go live with the registration for the Randa Meetings 2017. We would like to bring as many people as possible to Randa this September to make our software and other Free Software more accessible.

Another topic during this year’s Randa Meetings will be KDE PIM, but it’s certainly not forbidden to work on accessibility features of our PIM stack as well.

So please come and make KDE more accessible. CU there.

Flattr this!

KDE Project:

The second version (0.4.90) on the way to Simon 0.5.0 is out in the wild. Please download the source code, test it, and send us feedback.

What we changed since the alpha release:

  • Bugfix: The download of Simon Base Models works flawlessly again (bug: 377968)
  • Fix detection of utterid APIs in Pocketsphinx

You can get it here:
https://download.kde.org/unstable/simon/0.4.90/simon-0.4.90.tar.xz.mirrorlist

In the work is also an AppImage version of Simon for easy testing. We hope to deliver one for the Beta release coming soon.

Known issues with Simon 0.4.90 are:

  • Some Scenarios available for download don't work anymore (BUG: 375819)
  • Simon can't add Arabic or Hebrew words (BUG: 356452)

We hope to fix these bugs and look forward to your feedback and bug reports and maybe to see you at the next Simon IRC meeting: Tuesday, 23rd of May, at 10pm (UTC+2) in #kde-accessibility on freenode.net.

About Simon
Simon is an open source speech recognition program that can replace your mouse and keyboard. The system is designed to be as flexible as possible and will work with any language or dialect. For more information take a look at the Simon homepage.

Not exclusively, but to a large extent, I worked in the last few months on foundational improvements to KWin’s DRM backend, which is a central building block of KWin’s Wayland session. The idea in the beginning was to directly expand upon my Atomic Mode Setting (AMS) work from last year: direct scanout of graphics buffers for fullscreen applications and, later, layered compositing. Indeed this was my Season of KDE project with Martin Flöser as my mentor, but measured against that initial goal it was unsuccessful.

The reason for the missed goal wasn’t a lack of work or enthusiasm on my side, but the realization that I needed to go back and first rework the foundations, which were in some disarray: mostly because of mistakes I made when I first worked on AMS last year, partly because of changes Daniel Stone made to his work-in-progress patches for AMS support in Weston, which I used as an example throughout my work on AMS in KWin, and also because of some small flaws introduced to our DRM backend before I started working on it.

The result of this rework is three separate patches depending on each other, and all of them got merged last week. They will be part of the 5.10 release. The reason for doing three patches instead of one was to ease the review process.

The first patch dealt with the querying of important kernel display objects, which represent real hardware: the CRTCs and connectors. KWin didn’t remember these objects in the past, although they are static while the system is running. This meant, for example, that KWin requeried all of them on a hot plugging event and had no prolonged knowledge about their state after a display was disconnected again. The last point made it particularly difficult to do a proper cleanup of the associated memory after a disconnect. So changing this so that the kernel objects are only queried once in the beginning made sense. Also, from my past work I had already created a generic class for kernel objects with the necessary subclasses, which could be used in this context. Still, to me this patch was the most “controversial” one of the three, meaning it was the one I was most worried about being somehow “wrong”, not just in details but in general, especially since it didn’t solve any observable misbehaviour it could be benchmarked against. Of course I did my research, but there is always the anxiety of overlooking something crucial. Too bad the other patches depended on it. But the patch was accepted, and to my relief everything seems to work well on the current master and on the beta branch for the upcoming release as well.

The second patch restructured the DrmBuffer class. We support KWin builds with or without the Generic Buffer Manager (GBM). It therefore made sense to split off the GBM-dependent part of DrmBuffer into a separate file, which gets included only when GBM is available. Martin had this idea, and although the patch is still quite large because of all the moved-around code and renamed classes, the change was straightforward. I still managed to introduce a build-breaking regression, which was quickly discovered and easy to solve. This patch was also meant as preparation for the future direct scanout of buffers, which will then be done by a new subclass of DrmBuffer, also depending on GBM.

The last patch finally tackled all the issues I experienced when trying to use the previously rather underwhelming AMS code path directly. Yes, you saw the picture on the screen and the buffer flipping worked, but basic functionality like hot plugging or display suspending was not working at all or led to unpredictable behaviour. One complete rewrite later, with many, many manual pluggings and unpluggings of external monitors to test the behaviour, the problems have been solved to the point that I consider the AMS code path ready for daily use. For Plasma 5.11 I therefore plan to make it the new default. That means it will be available on Intel graphics automatically from Linux kernel 4.12 onwards, when on the kernel side the Intel driver also defaults to it. If you want to test my code on Plasma 5.10, you need to set the environment variable KWIN_DRM_AMS, and on kernels older than 4.12 you need to add the boot parameter i915.nuclear_pageflip. If you use Nvidia with the open source Nouveau driver, AMS should be available to you since kernel 4.10; in this case you should only need to set the environment variable above on 5.10 if you want to test it. Since I have only tested AMS with Intel graphics until now, some reports back on how it works with Nouveau would be great.

That’s it for now, but of course there is more to come. I haven’t given up on direct scanout, and at some point in the future I want to finish it. I already had a working prototype and mainly waited for my three patches to land. But for now I’ll postpone further work on direct scanout and layered compositing. Instead, over the last weeks I have worked on something special for our Wayland session in 5.11. I call it Night Color, and from that name you can probably guess what it will be. And did I mention that I was accepted as a Google Summer of Code student for the X.org Foundation, with a project to implement multi-buffered present support in XWayland? Nah, I didn’t. Sorry for asking rhetorically in this smug way, but I’m just very happy, and also a bit proud of having learned so much in basically only one year, to the point of now being able to start work on an X.org project directly. I’ll write about it in another blog post in the near future.

May 19, 2017

Hi there,

It's been almost a year since Filipe, Aracele, and I were having a beer at Alexanderplatz after the very last day of QtCon Berlin, when Aracele astutely came up with the very crazy idea of organizing a QtCon in Brazil. Since then we have been maturing that idea, and after a lot of work we are very glad to announce: QtCon Brasil 2017 happens from 18th to 20th August in São Paulo.

QtCon Brazil 2017 is the first Qt community conference in Brazil and Latin America. Its goals are twofold: i) to provide a common venue for existing Brazilian and Latin American Qt developers, engineers, and managers to share their experiences in creating Qt-powered technologies, and ii) to disseminate Qt adoption in Latin America, with the purpose of expanding its contributor base, encouraging business opportunities, and tightening relationships between sectors like industry, universities, and government.

In this first edition of QtCon Brazil, the conference will focus on cross-platform development enabled by Qt. With that, the meeting can benefit a wider range of stakeholders with interest in all sorts of platforms, including desktop (Windows, Linux, and OS X), mobile (Android and iOS), embedded, and IoT. We are bringing together experienced Qt specialists from Brazil and overseas to deliver a state-of-the-art program of talks and training sessions that illustrate how Qt has been used as an enabling technology in many industry sectors.

QtCon Brazil 2017 will take place in São Paulo – the most important technical, social, and cultural hub in Brazil and the world’s tenth largest GDP. São Paulo is easily reachable from most Brazilian airports, provides satisfactory infrastructure regarding venue and accommodation, and is a strategic place to augment the achievements of this first edition of QtCon Brazil. The venue where the meeting will happen is Espaço Fit, a multifunctional conference center with an auditorium for 220 people that provides all the required infrastructure regarding multimedia equipment, Wi-Fi, and catering services. Espaço Fit is located in São Paulo downtown – at Avenida Paulista – so it is easily reachable from airports, within walking distance of metro stations, and near a large array of hotels.

QtCon Brasil 2017 is kindly sponsored by The Qt Company, Toradex, openSUSE and KDE. It has also the valuable support for logistics and outreach of Carreira RH and Embarcados. Thank you all for making QtCon Brasil possible.

You can find more information (in Portuguese) on the QtCon Brasil webpage. Also, be sure to follow us on the QtCon Brasil Twitter, Facebook, and Google+ pages.

Since version 1.0, ICC Examin allows viewing ICC color profiles on the Android mobile platform. ICC Examin shows ICC color profile elements graphically, which makes the content much easier to understand. Color primaries, white point, curves, tables, and color lists are displayed both numerically and as graphics. Matrices, international texts, and metadata are much easier to read.

Features:
* most profile elements from ICC specification versions 2 and 4
* additionally, some widely used non-standard tags are understood

ICC color profiles are used in photography, print, and various operating systems for improving the visual appearance. An ICC profile describes the color response of a color device. Read more about the ISO 15076-1:2010 standard / specification ICC.1:2010-12 (profile version 4.3.0.0), color profiles, and ICC color management at www.color.org.

The ICC Examin app is completely rewritten in Qt/QML. QML is a declarative language, making it easy to define GUI elements and write layouts with less code. In recent years the Qt project has extended support from desktop platforms to mobiles like Nokia's MeeGo, Sailfish OS, iOS, Android, embedded devices, and more. ICC Examin is available as a paid app in the Google Play Store. Sources are currently closed in order to financially support further development. This ICC Examin version continues to use the Oyranos CMS. New is the dependency on RefIccMAX for parsing ICC profile binaries. In the process, both the RefIccMAX library and the Oyranos Color Management System obtained changes and fixes in git for cross-compilation with Android libraries. Those changes will be in the next respective releases.

The FLTK toolkit, as used in previous versions, was never ported to Android or other mobile platforms. Thus a complete rewrite was unavoidable. The old FLTK-based version is still maintained by the same author.

KDE Project:

This July KDE's user and developer community is once again going to come together at Akademy, our largest annual gathering.

I'm going there this year as well, and you'll even be able to catch me on stage giving a talk on Input Methods in Plasma 5. Here's the talk abstract to hopefully whet your appetite:


An overview over the How and Why of Input Methods support (including examples of international writing systems, emoji and word completion) in Plasma on both X11 and Wayland, its current status and challenges, and the work ahead of us.

Text input is the foundational means of human-computer interaction: We configure our systems, program them, and express ourselves through them by writing. Input Methods help us along by converting hardware events into text - complex conversion being a requirement for many international writing systems and new writing systems such as emoji, and lying at the heart of assistive text technologies such as word completion and spell-checking.

This talk will illustrate the application areas for Input Methods by example, presenting short introductions to several international writing systems as well as emoji input. It will explain why solid Input Methods support is vital to KDE's goal of inclusivity and how Input Methods can make the act of writing easier for all of us.

It will consolidate input from the Input Methods development and user community to provide a detailed overview over the current Input Methods technical architecture and user experience in Plasma, as well as free systems in general. It will dive into existing pain points and present both ongoing work and plans to address them.


This will actually be the first time I'm giving a presentation at Akademy! It's a topic close to my heart, and I hope I can do a decent job conveying a snapshot of all the great and important work people are doing in this area to your eyes and ears.

See you there!

May 18, 2017

5 days ago I did part one of this adventure, which you can check here. Now it's time for part two. =D Well, I was able to Craft AtCore and have it running on Windows. However, that raised a problem I had when I crafted AtCore at the beginning of the year. It [...]


The current world of high DPI works fine when dealing with a single monitor and only modern apps, but it breaks down with multiple monitors.

What we want to see:

What we need to render:

As well as windows being spread across outputs, we also want the following features to work:

  • Legacy applications to still be readable and usable
  • Mouse speed to be consistent
  • Screenshots to be consistent across screens
  • All toolkits behaving the same through a common protocol

Handling scaling is part of the core wayland protocol and, with some changes in kwin, solves all of these problems.

The system

The system is a bit counter-intuitive, yet at the same time very simple; instead of clients having bigger windows and adjusting all the
input and positioning, we pretend everything is normal DPI and kwin just renders the entire screen at twice the size.
Clients then provide textures (pictures of their window contents) that are twice the resolution of their window size.

This covers all possible cases:
- we render a 1x window on a 2x screen:
Because kwin scales up all the rendering, the window is shown twice the size, and therefore readable, albeit at standard resolution.

- we render a 2x window on a 1x screen:
The window texture will be downsampled to be the right size.

- we render a 2x window on a 2x screen:
Kwin scales up all the output, so we draw the window at twice the size. However, because the texture is twice as detailed this cancels out
and we end up showing it at the native drawn resolution giving us the high DPI detail.
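A toy model of the arithmetic behind these three cases (the names are illustrative, not taken from KWin): the compositor renders everything at the screen's scale, the client's buffer carries its own scale, and the ratio between the two decides whether a texture is upscaled, downsampled, or shown at native detail.

```rust
// Toy model of the compositing arithmetic described above; the names
// are illustrative and not taken from KWin's code.

/// A window `logical` units wide occupies `logical * screen_scale`
/// physical pixels, because the compositor scales all output.
fn on_screen_pixels(logical: u32, screen_scale: u32) -> u32 {
    logical * screen_scale
}

/// Buffer pixels per physical pixel: 0.5 means a 1x client upscaled on
/// a 2x screen (readable, but standard resolution), 2.0 means a 2x
/// client downsampled on a 1x screen, and 1.0 means the scales cancel
/// out and we get native high-DPI detail.
fn sampling_ratio(buffer_scale: u32, screen_scale: u32) -> f64 {
    buffer_scale as f64 / screen_scale as f64
}
```

For example, a 400-unit-wide window on a 2x screen covers 800 physical pixels regardless of the client's buffer scale; only the sampling ratio changes.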

The changes in KWin are not about adding explicit resizing or input redirection anywhere; but instead about decoupling the assumption between the size of a window or monitor, and its actual resolution.

What changes for app developers?

Nothing.

When?

All the kwin code changes landed in time for Plasma 5.10, but dynamically changing the screen scale exposed some problems elsewhere in the stack. Therefore the main UI has been disabled until, hopefully, Plasma 5.11. This also allows us to expand our testing with users who want to manually edit their kscreen config and opt in.

What about fractional scaling?

Despite Qt having quite good fractional scaling support, the wayland protocol limits itself to integers. This is in the very core protocol and somewhat hard to avoid. However, there's technically nothing stopping kwin from scaling to a different size than it tells the client to scale at... so it's something we can revisit later.


On June 23rd 2017, there’s a new one-day training in our Berlin facility, this time in German.

Training in FOSS Compliance will be available, in English, in Berlin and other locations later this year. More on that below, in English. Meanwhile, our first open-enrolment training to help you with the complex issues around compliance is in German…

The term Corporate Compliance has moved into the public spotlight in recent years, but so far few companies are aware of the

The post Training in Foss Compliance appeared first on KDAB.

May 17, 2017

I am pleased to confirm that Qt 5.9 LTS officially supports the INTEGRITY Real-Time Operating System (RTOS) from Green Hills Software. INTEGRITY was initially supported with Qt 4.8 and will again be supported from Qt 5.9 onwards. The interest in running Qt on INTEGRITY has been customer driven, primarily from Automotive for use in safety certified Instrument Clusters, but also from other industry sectors.

One might ask why it is important to support INTEGRITY in Qt. Why is the leading cross-platform application and UI framework needed in RTOS applications? Aren't these the ones so deeply buried in our infrastructure that you almost never notice when you come across one? Well, yes and no. It is true that most RTOS applications are still done without any UI, let alone a fancy and interactive graphical user interface. But the number of RTOS applications that require an advanced GUI framework to meet user expectations is growing fast, along with the demand to run Qt on an RTOS that provides a high degree of safety and security. Other embedded operating systems, such as embedded Linux, are not sufficient when it comes to the real-time capability, reliability, security and certified operation essential for certain industries such as automotive, medical, aerospace and industrial automation.

Qt 5.9 LTS for INTEGRITY is covered by our excellent technical support for all existing Qt for INTEGRITY license holders. All the most important modules are available, for example: Qt Core, Qt Network, Qt GUI, Qt Quick, Qt Qml, Qt Quick Controls 2 and Qt 3D. Initially we are supporting NXP i.MX6 and NVIDIA Tegra X1 hardware. Other hardware, such as Intel Apollo Lake, can also be used, and we intend to continue adding support for new hardware with subsequent Qt releases. The following video shows a Qt 5.9 LTS based digital instrument cluster running on top of the INTEGRITY RTOS with NXP i.MX6 and NVIDIA Tegra X1.

Leveraging the Qt framework with the INTEGRITY RTOS offers an easy way of adding state-of-the-art graphical user interfaces to security- and safety-critical systems. It is easier to certify a product to comply with the requirements of, for example, the automotive or medical industries when the solution is built on top of an RTOS that is already certified for that industry domain. We see many industries, for example medical, automotive, industrial automation, aerospace and defense, directly benefiting from now being able to leverage Qt for INTEGRITY. The built-in capabilities of the INTEGRITY RTOS platform allow the Qt framework to run in such a way that it does not interfere with the real-time operation of the system. This simplifies the creation of safety-critical systems and enables certifying a Qt-based system to standards for functional safety such as IEC 61508, ISO 26262 and IEC 62304.

Green Hills INTEGRITY is not only a certified RTOS, but also provides a real-time hypervisor, INTEGRITY Multivisor, allowing guest OSes such as Linux to run on the same SoC as the safety critical RTOS. This simplifies building complex systems such as digital cockpits. The following video shows Qt Automotive Suite running on top of Linux as well as a Qt-based instrument cluster running on the INTEGRITY RTOS – both on the same Intel Apollo Lake SoC leveraging INTEGRITY secure virtualisation.

If you have not yet considered leveraging Qt for your next safety-critical project, I recommend taking a deeper look. Our sales, consultants and support are available to guide you through the evaluation process. If you have any questions, we are very happy to discuss more on how Qt meets the needs of your next product – please contact us.

The post INTEGRITY RTOS Support in Qt 5.9 LTS appeared first on Qt Blog.

The program has been finalized, and registration is now open, for Akademy 2017, in Almería, Spain. I had a tiny amount of input for the programme and schedule, and I’d really like to thank the Real Programme Committee for putting together a neat set of talks covering KDE technology and community for us all, and the local team in advance for all their great work.

I don’t see an official I’m-going banner yet on the under-construction wiki page with tips and tricks for this year’s conference, and I’m not going to subject anyone to my art skills either (I can’t seem to find my older Kolourpaint-based efforts for earlier Akademies, either).

[[ And since there’s always bugs: my IRC nick is [ade], which on import into the conference system got turned into the string Array. PHP never ceases to amaze. ]]


Older blog entries


Planet KDE is made from the blogs of KDE's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet KDE with RSS, FOAF or OPML.