... And version 1.0 is here!

GCompris is a popular collection of educational and fun activities for children from 2 to 10 years old. It has become popular with teachers, parents, and, most importantly, kids from around the world, and offers an ever-growing list of activities -- more than 150 at the last count. These activities have been translated into over 20 languages and cover a wide range of topics, from basic numeracy and literacy to history, art, geography and technology.





GCompris offers children between the ages of 2 and 10 more than 150 fun educational activities.

The newest version of GCompris also incorporates a feature that teachers and parents alike will find useful: GCompris 1.0 lets educators select the level of the activities according to the proficiency of each child. For example, in an activity that lets children practice numbers, you can select what numbers they can learn, leaving higher and more difficult numbers for a later stage. An activity for practicing the time lets you choose whether the child will practice full hours, half hours, quarters of an hour, minutes, and so on. And in an activity where the aim is to figure out the change when buying things for Tux, the penguin, you can choose the maximum amount of money the child will play with.

We have built the activities around the principle that "nothing succeeds like success" and the belief that children should be challenged when learning, but never made to feel threatened. Thus, GCompris congratulates but does not reprimand; all the characters the child interacts with are friendly and supportive; and the activities are brightly colored, contain encouraging voices, and play upbeat but soothing music.





GCompris now lets you select the level of some activities according to the child's proficiency.

The hardware requirements for running GCompris are extremely low and it will run fine on older computers or low-powered machines, like the Raspberry Pi. This saves you and your school from having to invest in new and expensive equipment and it is also eco-friendly, as it reduces the amount of technological waste that is produced when you have to renew computers to adapt to more and more power-hungry software. GCompris works on Windows, Android and GNU/Linux computers, and on desktop machines, laptops, tablets and phones.

GCompris is built, maintained and regularly updated by the KDE Community and is Free and Open Source Software. It is distributed free of charge, requires no subscription, and asks for no personal details. GCompris displays no advertising and its creators have no commercial interest whatsoever. Any donations are pooled back into the development of the software.

Seeking to engage more professional educators and parents, we are working on several projects alongside the software itself. We have recently opened a forum for teachers and parents, as well as a chat room where users and creators can talk live to each other, suggest changes, share tips on how to use GCompris in the classroom or at home, and find out about upcoming features and activities being added to GCompris.

Apart from increasing the number and variety of activities, we have more in the works: an upcoming feature is a complete dashboard that will give teachers better control over how pupils interact with GCompris. We are also working with teachers and contributors from different countries to compile a "Cookbook" of GCompris recipes that will help you use GCompris in different contexts. Another area where we are working with contributors is translations: if you can help us translate GCompris into your language (with your voice!), we want to hear from you! Your help and ideas are all welcome.

Visit our forum and chat and tell us how you use GCompris and we will share it with the world.


KDE is a community of volunteers that creates a wide range of software products, like the Plasma desktop, the Krita painting program, the Kdenlive video editor, the GCompris educational suite of activities and games, as well as dozens of other high-quality applications and utilities. Among them, KDE develops and maintains several educational programs for children and young adults.

All KDE's products are Free and Open Source Software and can be downloaded, used and shared without charge or limitations.

Thursday

19 November, 2020

Some of you may be wondering what I have been up to lately since I took a break from my work in the KDE community. Well, it was time for a change, a change towards family, friends and a more local life. The result is a more balanced, more grown-up me. These changes in my life led to me having a small family and a group of new friends, both of which I spend a lot of time with. They brought more light into my life, one could say.

That is not all I want to talk about, however. In the past 1.5 years I have worked on a new project of mine that combines my love for software with the physical world. I created a product and brought it to market last month. Now, we’re ready for the international launch of Organic Lighting. The product is a designer smart lamp for the living room. It combines unique and dynamic visual effects with natural, sustainable materials.
Meet our Lavalamp:

It’s a connected device that can be either controlled using the physical knob on its front, or via its web UI (or REST interface). Effects can be changed and tweaked, and its firmware can be updated (nobody should want an IoT device that can’t get security or feature updates). The concept here, technically, is to do “light in software”. The lamp is run by a microcontroller embedded in its foot. Its roughly 600 LEDs produce about 4000 lumens and render effects at more than 200 frames per second.
The lamp is built with easy repairs in mind and designed for a long-lasting experience; it respects your privacy and creates a unique atmosphere in your living space.

With our products, we’re offering an alternative to planned obsolescence, throw-away materials and the hidden costs of cheap electronics that make you the product by consuming your data for breakfast.

In the future, we will build on these concepts and this technology and offer more devices and components that match our principles and enhance one another. Stay tuned!

This is the second of a series of two articles describing the idea of correlating software product delivery process performance metrics with community health and collaboration metrics as a way to engage execution managers so their teams participate in Open Source projects and Inner Source programs. The first article is called If you want to go far, together is faster (I). Please read it before this post if you haven’t already. You can also watch the talk I gave at InnerSource Commons Fall 2020 that summarizes these series.

In the previous post I provided some background and described my perception of what causes resistance from managers to involving their development teams in Open Source projects and Inner Source programs. I enumerated five not-so-simple steps to reduce such resistance. This article explains those steps in some detail.

Let me start by enumerating the proposed steps again:

1.- In addition to collaboration and community health metrics, I recommend tracking product delivery process performance metrics in Open Source projects and Inner Source programs.
2.- Correlate both groups of metrics.
3.- Focus on decisions and actions that create a positive correlation between those two groups of metrics.
4.- Create a reporting strategy for developers and managers based on such positive correlation.
5.- Use such a strategy to turn your story around: it is about creating positive business impact at scale through open collaboration.

The solution explained

1.- Collaboration and community health metrics as well as product delivery process performance metrics.

Most Open Source projects and Inner Source programs focus their initial metrics efforts on measuring collaboration and community health. There is an Open Source project hosted by the Linux Foundation, called CHAOSS, focused on defining many types of metrics; collaboration and community health metrics are among its more mature ones. You can also find plenty of examples of these metrics applied in a variety of Open Source projects there.

Inner Source programs are taking the experience developed by Open Source projects in this field and applying it internally, so many of them use such metrics (collaboration and community health) as the basis for evaluating how successful they are. To expand our study of these collaboration environments into areas directly related to productivity, efficiency, etc., additional metrics should be considered.

Before getting into the core ones, I have to say that many projects pay attention to code-review-related metrics as well as defect management to evaluate productivity or performance. These metrics go in the right direction, but they are only partial and, when it comes to demonstrating a clear relation between collaboration and productivity or performance, they do not work very well in many cases. Here are a few examples of why.

Code review is a standard practice among Open Source projects, but at scale it is perceived by many as an inefficient activity compared to others when knowledge transfer and mentorship are not a core goal. Pair or mob programming, as well as code review restricted to team scale, are practices perceived by many execution managers as more efficient in corporate environments.

When it comes to defect management, companies have been tracking these variables for a long time, and it will be very hard for Open Source and Inner Source evangelists to convince execution managers that what is being done in the open or in the Inner Source program is so much better, and especially cheaper, that it is worth participating in. For many of these managers, cost control comes first and code sustainability comes later, not the other way around.

Unsurprisingly, I recommend focusing on the software product delivery process as a first step towards reducing the resistance from execution managers to embracing collaboration at scale. I pick the delivery process because it is deterministic, so it is simpler to apply process engineering (and therefore metrics) to it than to any other stage of the product life cycle that involves development. Of all the potential metrics, throughput and stability are the essential ones.

Throughput and Stability

It is not the point of this article to go deep into these metrics. I suggest you refer to delivery or product managers at your organization who embrace Continuous Delivery principles and practices to get information about these core metrics. You can also read Steve Smith’s book Measuring Continuous Delivery, which defines the metrics in detail, characterizes them, and provides guidance on how to implement and use them. You can find more details about this and other interesting books in the Reads section of this site, by the way.

There are several reasons for me to recommend these two metrics. Some of them are:

  • Both metrics characterize the performance of a system that processes a flow of elements. The software product delivery process can be conceived of as such a system, where the information flows in the form of code commits, packages, images… .
  • Both metrics (sometimes in different forms/expressions) are widely used in other knowledge areas, in some cases for quite some time now, like networking, lean manufacturing, fluid dynamics… There is little magic behind them.
  • To me the most relevant characteristic is that, once your delivery system is modeled, both metrics can be applied at system level (globally) and at a specific stage (locally). This is extremely powerful when trying to improve the overall performance of the delivery process through local actions at specific points. You can track the effect of local improvements on the entire process.
  • Both metrics have simple units and are simple to measure. The complexity is operational when different tools are used across the delivery process. The usage of these metrics reduces the complexity to a technical problem.
  • Throughput and Stability are positively correlated when applying Continuous Delivery principles and practices. In addition, they can be used to track how well you are doing when moving from a discontinuous to a continuous delivery system. Several of the practices promoted by Continuous Delivery are already very popular among Open Source projects. In some cases, some would claim that they were invented there, way before Continuous Delivery was a thing in corporate environments. I love the chicken-and-egg debates… but not now.
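
To make these two metrics a bit more tangible, here is a deliberately simplified C++ sketch. The PipelineRun structure and the exact definitions of throughput and stability are illustrative assumptions for this post, not the definitions from Steve Smith’s book:

#include <algorithm>
#include <cstdio>
#include <vector>

// Illustrative only: a pipeline run either produces a releasable artifact or not.
struct PipelineRun {
    double leadTimeHours;   // commit-to-releasable time (kept for context, unused here)
    bool succeeded;         // did the run produce a releasable artifact?
};

// Throughput: releasable artifacts produced per day over the observed period.
double throughputPerDay(const std::vector<PipelineRun> &runs, double periodDays)
{
    const auto good = std::count_if(runs.begin(), runs.end(),
                                    [](const PipelineRun &r) { return r.succeeded; });
    return periodDays > 0 ? static_cast<double>(good) / periodDays : 0.0;
}

// Stability: the fraction of runs that succeed (1.0 means every run is releasable).
double stability(const std::vector<PipelineRun> &runs)
{
    if (runs.empty())
        return 0.0;
    const auto good = std::count_if(runs.begin(), runs.end(),
                                    [](const PipelineRun &r) { return r.succeeded; });
    return static_cast<double>(good) / static_cast<double>(runs.size());
}

int main()
{
    const std::vector<PipelineRun> week = {
        {4.0, true}, {6.5, true}, {30.0, false}, {3.0, true}, {5.5, true},
    };
    std::printf("throughput: %.2f releases/day, stability: %.0f%%\n",
                throughputPerDay(week, 7.0), stability(week) * 100.0);
}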

Let’s assume from now on that I have convinced you that Throughput and Stability are the two metrics to focus on, in addition to the collaboration and community health metrics your Open Source or Inner Source project is already using.

If you are not convinced, by the way, even after reading S. Smith’s book, you might want to check the most common references on Continuous Delivery. Dave Farley, one of the fathers of the Continuous Delivery movement, has a new series of videos you should watch. One of them deals with these two metrics.

2.- Correlate both groups of metrics

Let’s assume for a moment that you have implemented such delivery process metrics in several of the projects in your Inner Source initiative, or across the delivery pipelines in your Open Source project. The following step is to introduce an Improvement Kata process to define and evaluate the outcome of specific actions against pre-established high-level SMART goals. Such goals should aim for a correlation between both types of metrics (community health / collaboration and delivery process ones).

Let me give one example. It is widely understood in Open Source projects that being welcoming is a sign of good health. It is common to measure how many newcomers the project attracts over time and their initial journey within the community, looking for their consolidation as contributors. Similar thinking is followed in Inner Source projects.

The truth is that more capacity does not always translate into higher throughput or increased process stability; on the contrary, it is widely accepted among execution managers that the opposite is more likely in some cases. Unless the work structure, and therefore the teams and the tooling, are oriented towards embracing flexible capacity, high rates of capacity variability lead to inefficiencies. This is an example of an expected negative correlation.

In this particular case, then, the goal is to extend the actions related to increasing our number of new contributors to our delivery process, ensuring that our system can absorb an increase of capacity at the expected rate and that we can track it accordingly.

What do we have to do to mitigate the risks of increasing the Integration failure rate due to having an increase of throughput at commit stage? Can we increase our build capacity accordingly? Can our testing infrastructure digest the increase of builds derived from increasing our development capacity, assuming we keep the number of commits per triggered build?

In summary, work on the correlation of both groups of metrics; that is, link actions so they can be evaluated against community health and collaboration metrics together with delivery metrics.
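
As a minimal, illustrative sketch (nothing more than textbook statistics), this is the kind of calculation I mean by “correlate”: feed it, say, monthly new-contributor counts and monthly throughput, and a value close to +1 supports the story we want to tell:

#include <algorithm>
#include <cmath>
#include <vector>

// Pearson correlation coefficient between two metric series of equal length.
double pearson(const std::vector<double> &x, const std::vector<double> &y)
{
    const std::size_t n = std::min(x.size(), y.size());
    if (n < 2)
        return 0.0;
    double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
    for (std::size_t i = 0; i < n; ++i) {
        sx += x[i]; sy += y[i];
        sxx += x[i] * x[i];
        syy += y[i] * y[i];
        sxy += x[i] * y[i];
    }
    const double cov   = sxy - sx * sy / n;
    const double varX  = sxx - sx * sx / n;
    const double varY  = syy - sy * sy / n;
    const double denom = std::sqrt(varX * varY);
    return denom > 0 ? cov / denom : 0.0;
}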

3.- Focus on decisions and actions that create a positive correlation between both groups of metrics.

Some of the actions designed to increase our number of contributors might lead to a reduction of throughput or stability, others might have a positive effect on one of them but not the other (spoiler alert: at some point both will decrease), and some others will increase both of them (positive correlation).

If you work in an environment where Continuous Delivery is the norm, those behind the execution will understand which actions have a positive correlation between throughput and stability. Your job will simply be to link those actions with the ones you are familiar with in the community health and collaboration space. If not, your work will be harder, but still worth it.

For our particular case, you might find, for instance, that a simple measure to absorb the increasing number of commits (bug fixes) is to scale up the build capacity, if you have budget remaining. You might find, though, that you have problems doing so when reviewing acceptance criteria because you lack automation, or that your current testing-on-hardware capacity is almost fixed due to limitations in the system that manages your test benches, and additional effort is required to improve the situation.

Establish experiments that consider not just the collaboration side but also the software delivery one, and translate into production those experiments that demonstrate a positive correlation between the target metrics, increasing all of them. This might bring you surprising results, sometimes far from the common knowledge of those focused on collaboration aspects only, but closer to that of those focused on execution.

4.- Create a reporting strategy to developers and managers based on such positive correlation.

A board member of an organization I was managing once told me something I have followed ever since. It was something like…

Managers talk to managers through reports. Speak up clearly through them.

As a manager, I used to put a lot of thought into the reporting strategy. I have written some blog posts about this point. Besides things like the language used or the OKRs and KPIs you base your reporting on, understanding the motivations and background of the target audience of those reports is just as important.

I suggest you pay attention to how those you want to convince about participating in Open Source or Inner Source projects report to their managers, as well as how others report to them. Are those reports time-based or KPI-based? Are they presented and discussed in 1:1s or in a team meeting? Usually every senior manager dealing with execution has a consolidated way of reporting and being reported to. Adapt to it instead of keeping the formats we are more used to in open environments. I love reporting through a team or department blog, but it might not be the best format for this case.

After creating and evaluating many reports about community health and collaboration activities, I suggest changing how they are conceived. Instead of focusing on collaboration growth and community health first and then on the consequences of such improvements for the organization (benefits), focus first on how product or project performance has improved while collaboration and community health have improved. In other words, change how cause and effect are presented.

The idea is to convince execution managers that, by participating in Open Source projects or Inner Source programs, their teams can learn how to be more efficient and productive in short cycles while achieving long-term goals they can present to executives. Help those managers also to present both types of achievements to their executives using your own reports.

For engineers, move the spotlight away from the growth of interactions among developers and put it on the increase in stability derived from making those interactions meaningful, for instance. Or try to correlate diversity metrics with defect management results, or with reductions in change failure rates or detected security vulnerabilities, etc. Partially move your reporting focus away from team satisfaction (a common strategy within Open Source projects) and put it on team performance and productivity. They are obviously intimately related, but tech leads and other key roles within your company might be more sensitive to the latter.

In summary, you achieve the proposed goal if execution managers can take the reports you present to them and insert them into theirs without re-interpreting the language, the figures, the datasets, the conclusions…

5.- Turn your story around.

If you manage to find positive correlations between the proposed metrics and report on those correlations in a way that resonates with execution managers, you will have established a very powerful platform for creating an unbeatable story around your Inner Source program or your participation in Open Source projects. Investment growth will receive less resistance and it will be easier to infect execution units with the practices and tools promoted through the collaboration program.

Advocates and evangelists will feel more supported in their viral infection, and those responsible for these programs will gain an invaluable ally in their battle against legal, procurement, IP or risk departments, among others. Collaboration will not just be good for the developers or the company but also, clearly, for the product portfolio or the services. And not just in the long run but also in the shorter term. That is a significant difference.

Your story will be about increasing business impact through collaboration instead of about collaborating to achieve bigger business impact. Open collaboration environments increase productivity and have a tangible positive impact on the organization’s products and services, so they have a clear positive business impact.

Conclusion

In order to attract execution managers and get them to promote the participation of their departments and teams in Open Source projects and Inner Source programs, I recommend defining a different communication strategy, one that relies on reports based on the results of actions that show a positive correlation between community health and collaboration metrics and delivery process performance metrics, especially throughput and stability. This idea can be summarized in the following steps, explained in these two articles:

  • Collaboration within a commercial organization matters more to management if it has a measurable positive business impact.
  • To make decisions and evaluate their impact within your Inner Source program or the FLOSS community, combine collaboration and community health metrics with delivery metrics, fundamentally throughput and stability.
  • Prioritize those decisions/actions that produce a tangible positive correlation between these two groups of metrics.
  • Report, especially to managers, based on such positive correlation.
  • Adapt your Inner Source or Open Source story: increase business impact through collaboration.

In a nutshell, it all comes down to proving that, at scale…

if you want to go far, together is faster.

Check out the first article of this series if you haven’t already. You can also watch the recording of the talk I gave at ISC Fall 2020, where I summarized what is explained in these two articles.

I would like to thank the ISC Fall 2020 content committee and organizers for giving me the opportunity to participate in such an interesting and well-organized event.

Overview

Qt 6 is nearly upon us. Although it has received less attention than other parts of Qt in recent publications, Qt 3D is also introducing a number of changes with this major release. These include changes in the public API that bring a number of new features, as well as many internal changes to improve performance and leverage the new, low-level graphics features introduced in QtBase. I will focus on the API changes now, while my colleague, Paul Lemire, will cover the other changes in a follow-up post.

Distribution of Qt 3D for Qt 6

Before looking at what has changed in the API, the first big change concerns how Qt 3D is distributed. Qt 3D has been one of the core modules that ship with every Qt release. Starting with Qt 6, however, Qt 3D will be distributed as a separate, source-only package. Qt 6 will ship with a number of such modules and will use Conan to make it easy for them to be built. This means that users interested in Qt 3D will need to compile it once for every relevant platform.

Since it ships independently, Qt 3D will also most likely be on a different release cycle than the main Qt releases. We will be able to make more frequent minor releases with new features and bug fixes.

Another consequence of this is that Qt 3D will not be bound by the same binary compatibility constraints as the rest of Qt. We do, however, aim to preserve source compatibility for the foreseeable future.

Basic Geometry Types

The first API change is minimal, but, unfortunately, it is source-incompatible. You will need to change your code in order to compile against these changes.

In order to make developing new aspects that access geometry data more straight forward, we have moved a number of classes that relate to that from the Qt3DRender aspect to the Qt3DCore aspect. These include QBuffer, QAttribute and QGeometry.

When using the QML API, the impact should be minimal: the Buffer element still exists, and importing the Render module implicitly imports the Core module anyway. You may have to change your code if you’ve been using module aliases, though.

In C++, this affects which namespace these classes live in, which is potentially more disruptive. So if you were using Qt3DRender::QBuffer (the fully qualified name is often required to avoid a clash with the QBuffer class in QtCore), you would now need to use Qt3DCore::QBuffer, and so on…

If you need to write code that targets both Qt5 and Qt6, one trick you can use to ease the porting is to use namespace aliases, like this:

#if QT_VERSION >= QT_VERSION_CHECK(6, 0, 0)
#include <Qt3DCore/QBuffer>
namespace Qt3DGeometry = Qt3DCore;
#else
#include <Qt3DRender/QBuffer>
namespace Qt3DGeometry = Qt3DRender;
#endif

void prepareBuffer(Qt3DGeometry::QBuffer *buffer) {
    ...
}

The main reason this was done is so that all aspects could have access to the complete description of a mesh. Potential collision detection or physics simulation aspects don’t need to have their own representation of a mesh, separate from the one used for rendering.

So QBuffer, QAttribute and QGeometry are now in Qt3DCore. But this is not enough to completely describe a mesh.

Changes in Creating Geometry

A mesh is typically made of a collection of vertices. Each vertex has several properties (position, normal, texture coordinates, etc.) associated with it. The data for those properties is stored somewhere in memory. So in order to register a mesh with Qt 3D, you need:

  • A QGeometry instance that is simply a collection of QAttribute instances
  • Each QAttribute instance to define the details of a vertex attribute. For example, for the position, it would include the number of components (usually 3), the type of each component (usually float), the name of the attribute as it will be exposed to the shaders (usually “position”, or QAttribute::defaultPositionAttributeName() if you are using Qt 3D’s built-in materials), etc.
  • Each QAttribute to also point to a QBuffer instance. This may be the same buffer for all attributes, or it may be different, especially for attribute data that needs to be updated often. A sketch of this setup in code follows this list.
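
As a rough Qt 6 sketch of the pieces listed above (using the new Qt3DCore home of these classes), this builds a geometry holding a single triangle; createTriangleGeometry is just an illustrative helper, not part of the Qt 3D API:

#include <Qt3DCore/QAttribute>
#include <Qt3DCore/QBuffer>
#include <Qt3DCore/QGeometry>
#include <Qt3DCore/QNode>
#include <cstring>

Qt3DCore::QGeometry *createTriangleGeometry(Qt3DCore::QNode *parent)
{
    auto *geometry = new Qt3DCore::QGeometry(parent);

    // Raw vertex data: three vertices with three float components each.
    const float vertices[] = { -1.f, 0.f, 0.f,   1.f, 0.f, 0.f,   0.f, 1.f, 0.f };
    QByteArray data;
    data.resize(sizeof(vertices));
    std::memcpy(data.data(), vertices, sizeof(vertices));

    auto *buffer = new Qt3DCore::QBuffer(geometry);
    buffer->setData(data);

    // One attribute describing how the positions are laid out in the buffer.
    auto *position = new Qt3DCore::QAttribute(geometry);
    position->setName(Qt3DCore::QAttribute::defaultPositionAttributeName());
    position->setVertexBaseType(Qt3DCore::QAttribute::Float);
    position->setVertexSize(3);                       // 3 components per vertex
    position->setAttributeType(Qt3DCore::QAttribute::VertexAttribute);
    position->setBuffer(buffer);
    position->setByteStride(3 * sizeof(float));
    position->setCount(3);                            // 3 vertices

    geometry->addAttribute(position);
    return geometry;
}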

But this is still incomplete. We are missing details such as how many points make up the mesh and what type of primitives these points make up (triangles, strips, lines, etc) and more.

Prior to Qt 6, these details were stored on a Qt3DRender::QGeometryRenderer class. The name is obviously very rendering-related (understatement), so we couldn’t just move that class.

For these reasons, Qt 6 introduces a new class, Qt3DCore::QGeometryView. It includes a pointer to a QGeometry and completely defines a mesh. It just doesn’t render it. This is useful as the core representation of a mesh that can then be used for rendering, bounding volume specifications, picking, and much more.
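
Continuing the sketch above, and assuming QGeometryView exposes the same mesh-description properties that QGeometryRenderer used to carry, pairing it with the triangle geometry could look like this:

#include <Qt3DCore/QGeometryView>

// Minimal sketch: a view that completely describes the triangle mesh without
// saying anything about how (or whether) it gets rendered.
Qt3DCore::QGeometryView *createTriangleView(Qt3DCore::QNode *parent)
{
    auto *view = new Qt3DCore::QGeometryView(parent);
    view->setGeometry(createTriangleGeometry(view));   // helper from the earlier sketch
    view->setVertexCount(3);
    view->setPrimitiveType(Qt3DCore::QGeometryView::Triangles);
    return view;
}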

Bounding Volume Handling

One of the very first things Qt 3D needs to do before rendering is compute the bounding volume of the mesh. This is needed for view frustum culling and picking. Internally, the render aspect builds a bounding volume hierarchy to quickly find objects in space. To compute the bounding volume, it needs to go through all the active vertices of a mesh. Although this is cached, it can take time the first time the object is rendered or any of its details change.

Furthermore, up to now, this was completely internal to Qt 3D’s rendering backend and the results were not available for use in the rest of the application.

So Qt 6 introduces a QBoundingVolume component which serves two purposes:

  • it has implicit minimum point and maximum point properties that contain the result of the bounding volume computations done by the backend. This can be used by the application.
  • it has explicit minimum point and maximum point properties which the user can set. This will prevent the backend from having to calculate the bounds in order to build the bounding volume hierarchy.

The minimum and maximum extent points are the corners of the axis-aligned box that fits around the geometry.

But how does QBoundingVolume know which mesh to work on? Easy — it has a view property which points to a QGeometryView instance!

Reading bounding volume extents

So if you need to query the extents of a mesh, you can use the implicit values:

Entity {
    components: [
        Mesh {
            source: "..."
            onImplicitMinPointChanged: console.log(implicitMinPoint)
        },
        PhongMaterial { diffuse: "green" },
        Transform { ... }
    ]
}

Note that if the backend needs to compute the bounding volume, this is done at the next frame using the thread pool. So the implicit properties might not be immediately available when updating the mesh.

If you need the extents immediately after setting or modifying the mesh, you can call the QBoundingVolume::updateImplicitBounds() method, which will do the computations and update the implicit properties.
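
In C++, a hedged sketch of the same idea (using the property names shown in the QML example above) would be:

#include <Qt3DCore/QBoundingVolume>
#include <QDebug>

// Compute the extents synchronously instead of waiting for the next frame,
// then read the implicit properties.
void logExtents(Qt3DCore::QBoundingVolume *bounds)
{
    bounds->updateImplicitBounds();
    qDebug() << "min:" << bounds->implicitMinPoint()
             << "max:" << bounds->implicitMaxPoint();
}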

Setting bounding volume extents

But maybe you already know the extents. You can set them explicitly to stop Qt 3D from computing them:

Entity {
    components: [
        Mesh {
            source: "..."
            minPoint: Qt.vector3d(-.5, -.5, -.5)
            maxPoint: Qt.vector3d(.5, .5, .5)
        },
        PhongMaterial { diffuse: "green" },
        Transform { ... }
    ]
}

Note that, since setting the explicit bounds disables the computation of the bounding volume in the backend, the implicit properties will NOT be updated in this case.

Mesh Rendering

Now, before everyone goes and adds QBoundingVolume components to all their entities, one other thing: QGeometryRenderer, in the Qt3DRender module, now derives from QBoundingVolume. So it also has all the extent properties.

It also means you can provide it with a geometry view to tell it what to draw, rather than providing a QGeometry and all the other details.
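
A minimal C++ sketch of that, assuming the renderer simply inherits the view property from QBoundingVolume:

#include <Qt3DCore/QEntity>
#include <Qt3DCore/QGeometryView>
#include <Qt3DRender/QGeometryRenderer>

// The renderer no longer needs its own geometry, vertex count or primitive type;
// the geometry view carries all of that.
void addRenderable(Qt3DCore::QEntity *entity, Qt3DCore::QGeometryView *view)
{
    auto *renderer = new Qt3DRender::QGeometryRenderer(entity);
    renderer->setView(view);
    entity->addComponent(renderer);
}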

It still, however, has all the old properties that are now taken care of by the QGeometryView instance. If that is defined, all the legacy properties will be ignored. We will deprecate them soon and remove them in Qt 7.

So what happens if you provide both a QBoundingVolume and a QGeometryRenderer component to an entity? In this case, the actual bounding volume component takes precedence over the geometry renderer. If it specifies explicit bounds, those will be used for the entity.

The main use case for that is to specify a simpler geometry for the purpose of bounding volume computation. If you don’t know the extents of a mesh but you know that a simpler mesh (with much fewer vertices) completely wraps the object you want to render, using that simpler mesh can be a good way of speeding up the computations that Qt 3D needs to do.

New Core Aspect

Most of these computations previously took place in the Render aspect. Since this functionality is now in core, and in order to fit in with Qt 3D’s general architecture, we have introduced a new Core aspect. This aspect will be started automatically if you are using Scene3D or Qt3DWindow. In cases where you are creating your own aspect engine, it should also be started automatically, via the new aspect dependency API (see below), as long as the render aspect is in use.

The Core aspect takes care of all the bounding volume updates for entities that use the new geometry-view-based API (legacy scenes using QGeometryRenderer instances without views will continue to be updated by the rendering aspect).

The Core aspect also introduces a new QCoreSettings component. Like the QRenderSettings component, a single instance can be created. It is, by convention, attached to the root entity.

Currently, its only purpose is to allow completely disabling bounding volume updates. If you are not using picking and have disabled view frustum culling, bounding volumes are actually of no use to Qt 3D. You can disable all the jobs related to bounding volume updates by setting QCoreSettings::boundingVolumesEnabled to false. Note that the implicit extent properties on QBoundingVolume components will then not be updated.

New Aspect and AspectJob API


The base class for aspects, QAbstractAspect, has gained a few useful virtual methods:

  • QAbstractAspect::dependencies() should return the list of aspect names that should be started automatically if an instance of this aspect is registered.
  • QAbstractAspect::jobsDone() is called on the main thread when all the jobs that the aspect has scheduled for a given frame have completed. Each aspect has the opportunity to take the results of the jobs and act upon them. It is called every frame.
  • QAbstractAspect::frameDone() is called when all the aspects have completed the jobs AND the job post processing. In the case of the render aspect, this is when rendering actually starts for that frame.


Similarly, jobs have gained a number of virtual methods on QAspectJob:

  • QAspectJob::isRequired() is called before a job is submitted. When building jobs, aspects build graphs of jobs with various dependencies. It’s often easier to build the same graph every frame, but not all jobs will have something to do on a given frame. For example, the picking job has nothing to do if there are no object pickers or the mouse has not moved. The run method can test for this and return early, but this still causes the job to be scheduled onto a thread in the pool, with all the associated, sometimes expensive, locking. If QAspectJob::isRequired() returns false, the job will not be submitted to the thread pool and processing will continue with its dependent jobs.
  • QAspectJob::postFrame() is called on the main thread once all the jobs are completed. This is the place where most jobs can safely update the frontend classes with the results of the backend computations (such as bounding volume sizes, picking hits, etc). A sketch of a job using these hooks follows this list.
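
To illustrate how these hooks fit together, here is a hedged sketch of a backend job; the override signatures are assumptions based on the descriptions above rather than copies from the headers:

#include <Qt3DCore/QAspectEngine>
#include <Qt3DCore/QAspectJob>
#include <QVector3D>

// Hypothetical job that recomputes some bounds only when its data changed.
class BoundsJob : public Qt3DCore::QAspectJob
{
public:
    // Returning false keeps the job out of the thread pool entirely this frame.
    bool isRequired() override { return m_dirty; }

    // Expensive work runs on a thread-pool thread.
    void run() override
    {
        m_min = QVector3D(-1, -1, -1);   // placeholder computation
        m_max = QVector3D( 1,  1,  1);
        m_dirty = false;
    }

    // Called on the main thread after all jobs completed: safe to push results
    // back to frontend nodes here.
    void postFrame(Qt3DCore::QAspectEngine *) override { /* update frontend */ }

    void markDirty() { m_dirty = true; }

private:
    bool m_dirty = true;
    QVector3D m_min, m_max;
};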

Picking Optimization

We have introduced an optimization for picking. A new QPickingProxy component, deriving from QBoundingVolume, has been introduced. If it has an associated geometry view, that mesh will be used instead of the rendered mesh for picking tests. This applies to QObjectPicker and to the QRayCaster and QScreenRayCaster classes. Since precise picking (when QPickingSettings is not set to use bounding volume picking) needs to look at every primitive (triangle, line, vertex), it can be very slow. Using QPickingProxy makes it possible to provide a much simpler mesh for the purpose of picking.

Entity {
    components: [
        GeometryRenderer { view: actualView },
        PickingProxy { view: simpleView },
        ...
    ]
    ...
}

So, for example, you can provide a downsampled mesh, such as the bunny on the right (which only includes 5% of the primitives of the original mesh), to get fast picking results.

Of course, the picking results (the local coordinate, the index of the picked primitive, etc) will all be defined relative to the picking proxy mesh (not the rendered mesh).


Finally, QRayCaster and QScreenRayCaster now have pick() methods, which do a ray casting test synchronously, whereas the pre-existing trigger() methods would schedule a test for the next frame. This will block the caller until completed and return the list of hits. Thus, it’s possible for an application to implement event delegation. For example, if the user right clicks, the application can decide to do something different depending on the type of the closest object, or display a context menu if nothing was hit.
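
A small sketch of that event-delegation idea, assuming QScreenRayCaster::pick() takes the screen position and returns the hits directly:

#include <Qt3DRender/QScreenRayCaster>
#include <QPoint>

// Synchronous picking on a right-click; the caller decides what to do with the hit.
void onRightClick(Qt3DRender::QScreenRayCaster *caster, const QPoint &screenPos)
{
    const auto hits = caster->pick(screenPos);   // blocks until the ray cast is done
    if (hits.isEmpty()) {
        // e.g. show a generic context menu (application-specific, not shown here)
    } else {
        // e.g. delegate to the closest object: hits.first().entity()
    }
}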

Conclusion

As you can see, Qt 3D for Qt 6 has quite a few changes. My colleague, Paul Lemire, will go through many more internal changes in a later post. We hope this ensures an on-going successful future for Qt 3D in the Qt 6 series.

KDAB provides a number of services around Qt 3D, including mentoring your team and embedding Qt 3D code into your application, among others. You can find out more about these services here.

About KDAB

If you like this blog and want to read similar articles, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.


Tuesday

17 November, 2020

This is the first of a series of two articles describing the reasoning and the steps behind the idea of correlating software product delivery process performance metrics with community health and collaboration metrics as a way to engage execution managers so their teams participate in Open Source projects and Inner Source programs. What you will read in these articles is an extension of a talk I gave at the event Inner Source Commons Fall 2020.

Background

There is a very popular African proverb within the Free Software movement that says…

If you want to go fast, go alone. If you want to go far, go together.

Many of us have used it for years to promote collaboration among commercial organizations over doing things internally in a fast way, at the risk of reinventing the wheel, not following standards, reducing quality, etc.

The proverb describes an implicit OR relation between the traditional Open Source mindset, focused on longer term results obtained through extensive collaboration, and the traditional corporate mindset, where time-to-market is almost an obsession.

Early in my career I was exposed, as a manager (I do not code), to agile and, a little later, to Continuous Delivery. This second set of principles and practices had a big impact on me because of the tight and positive correlation it proposes between speed and quality. Until then, I had assumed that such a correlation was negative: when one increases, the other decreases, or vice versa.

For a long time, I also unconsciously assumed the negative correlation between collaboration and speed to be true. It was not until I started working on projects at scale that I became aware of this unconscious assumption and started questioning it first and challenging it later.

In my early years in Open Source I found myself many times discussing with executives and managers the benefits of this “collaboration framework” and why they should adopt it. Like probably many of you, I found myself being more successful among executives and politicians than among middle-layer managers.

– “No wonder they are executives” I thought more than once back then.

But time proved me wrong once again.

Problem statement: can we go faster by going together?

It was not until later in my career that I could relate to good Open Source evangelists, but especially to good sales professionals. I learned a bit about how different groups within the same organization are incentivized differently, and how you need to understand those incentives to tune your message in a way they can relate to.

Most of my arguments, and those of my colleagues back then, focused on cost reductions and collaboration, on preventing silos, on shortening innovation cycles, on sustainability, on the prevention of vendor lock-in, etc. Those arguments resonated very well with those responsible for strategic decisions and with managers directly related to innovation. But they did not work well with execution managers, especially senior ones.

When I have been a manager myself in the software industry, my incentives frequently had little to do with those arguments. In some cases, even my manager’s incentives had little to do with such arguments, despite us being an Open Organization. Open Source was part of the company culture, but management objectives had little to do with collaboration. Variables like productivity, efficiency, time-to-market, customer satisfaction, defect management, release cycles, yearly costs, etc., were the core incentives that drove my actions and those of the people around me.

If that was the case for the organizations I was involved in back then, imagine traditional corporations. Later on I got engaged with such companies, which confirmed this intuition.

I found myself more than once arguing with my managers about priorities and incentives, goals and KPIs because, as an Open Source guy, I was for some time unable to clearly articulate the positive correlation between collaboration and efficiency, productivity, cost reduction, etc. In some cases, this inability was a factor in generating a collision that ended up with my bones out of the organization.

That positive correlation between collaboration and productivity was counter-intuitive for many middle managers I knew ten years ago. It still is for some, even in the Open Source space. Haven’t you heard from managers that if you want to meet deadlines you should not work upstream because you will move more slowly? I have heard that so many times that, as mentioned before, for years I believed it was true. It might be at small scale, but at big scale it is not necessarily true.

It was not until two or three years ago that I started paying attention to Inner Source. I realized that many there have inherited this belief. And since they live in corporate environments, the challenge of convincing execution-related managers is even bigger than in Open Source.

Inner Source programs are usually supported by executives and R&D departments but receive resistance from middle management, especially those closer to execution units. Collaborating with other departments might be good in the long term but it is perceived as less productive than developing in isolation. Somehow, in order to participate in Inner Source programs, they see themselves choosing between shorter-term and longer-term goals, between their incentives and those of the executives. It has little to do with their ability to “get it“.

So either their incentives are changed and they demonstrate that the organization can still be profitable, or you need to adapt to those incentives. What I believe is that adapting to those incentives means, in a nutshell, providing a solid answer to the question: can we go faster by going together?

The proposed solution: if you want to go far, together is faster.

If we could find a positive correlation between efficiency/productivity and collaboration, we could replace the proverb above with something like “if you want to go far, together is faster”.

And hey, technically speaking, it would still be an African proverb, since I am from the Canary Islands, right?

The idea behind the above sentence is to establish an AND relation between speed and collaboration, meeting both traditional corporate goals and Open Source (and Inner Source) goals.

Proving such a positive correlation could help reduce the resistance offered by middle management to practicing collaboration at scale, either within Inner Source programs or Open Source projects. They would perceive such participation as a path to meeting those longer-term goals without contradicting many of the incentives they work with and promote among the people they manage.

So the next question is: how can we do that? How can we provide evidence of such a positive correlation in a language that is familiar to those managers?

The solution summarized: ISC Fall 2020

At ISC Fall 2020, I tried to briefly explain to people running Inner Source programs a potential path to establishing such a relation in five not-so-simple steps. The core slide of my presentation enumerated them as:

1.- In addition to collaboration and community health metrics, I recommend tracking product delivery process performance metrics.
2.- Correlate both groups of metrics.
3.- Focus on decisions and actions that create a positive correlation between them.
4.- Create a reporting strategy for developers and managers based on such positive correlation.
5.- Use such a strategy to turn your Inner Source/Open Source story around: it is about creating positive business impact at scale through open collaboration.

A detailed explanation of these five points can be found in the second article of this series:

If you want to go far, together is faster (II).

You can also watch the recording of my talk at ISC Fall 2020.

Monday

16 November, 2020

Calamares is a Linux installer. Bluestar Linux is a Linux distribution. KDE Plasma Desktop is KDE’s flagship desktop environment. Together, these three bits of software got into a spot of trouble, but what’s more important, got out of trouble again with good communications, good bug reports and a “we can fix it” attitude.

When Calamares is run in a KDE Plasma Desktop environment, for a distro that uses KDE Plasma Desktop – and bear in mind, Calamares is a distro- and desktop-independent project, so it will just as gladly install a variant of Debian with i3 as a variant of openSUSE with GNOME as a variant of Fedora with KDE Plasma – one of the modules that the distro can use is the plasmalnf module. This configures the look-and-feel of KDE Plasma Desktop in the target system, so that after the installation is done you don’t have to set a theme again. You might think of this as one tiny part of a first-run “here’s some cool options for your desktop” tool.

Plasma Look-and-Feel Module

The distro is expected to provide a suitable screenshot on the live ISO for use with the Look-and-Feel module; since I don’t have one, the images used here are not representative for what Breeze actually looks like.

A minor feature of the module is that it also will update the live environment – if that is KDE Plasma Desktop – to follow the selection, so on an openSUSE variant you can try out Breeze, Breeze Dark and the openSUSE theme before installation.

Anyway, Bluestar Linux uses this module, and reported that the Look-and-Feel module was not correctly writing all of the keys needed to switch themes. And, here’s the thing, they reported it. In the right place (for me), which is the issue tracker for Calamares. And they described the problem, and how to reproduce the problem, and what they expected.

Give yourself a pat on the back for writing good bug reports: it’s amazing what a difference there is between “it doesn’t work” and something that I can work with.

I experimented a bit – most of Calamares works on FreeBSD as well, so I can check in my daily live environment, as well as in various Linux VMs – and it turned out there is a difference between what the lookandfeeltool writes as configuration and what the Plasma Theme KDE Control Module (KCM) writes. It’s not the kind of thing I would spot, so I’m doubly glad for downstream distros that see things differently.

Having confirmed that there’s a difference, I took the problem to the KDE Plasma developers – this is where it is really useful to live in multiple communities at once.

The folks at Bluestar Linux had written a workaround already, and with the description of what was going on the KDE Plasma folks spent maybe an hour from start to finish (including whatever else goes on on a Wednesday morning, so coffee, sending memes and debugging other stuff in the meantime) and we now have two results:

  • Look-and-Feel tool has a bugfix that will land in the next release (Plasma releases are monthly, if I recall)
  • Calamares has a workaround that landed in the latest release (Calamares releases are every-two-weeks-if-I-can-swing-it)

So, as the person in the middle, I’d like to say “thanks” to downstream for reporting well and upstream for acting quickly. And then I can put on my sidestream hat and port the fix to FreeBSD’s packaging of KDE Plasma Desktop, too.

If you expect the PinePhone to match up to your current pocket supercomputer/surveillance device, you don't get it.

Pattern generated by KSeExpr

Today, we’re happy to announce the release of KSeExpr 4.0.0!

KSeExpr is the fork of Disney Animation’s SeExpr expression language library that we ship with Krita. It powers the SeExpr Fill Layer that was developed during Amyspark’s Google Summer of Code 2020 project.

The main changes

This is a ginormous release, but these are the most important bits:

  • We’ve rebranded the fork. This allows us to ship the library without conflicting with upstream.
    • The library as a whole is now namespaced (both in CMake and in C++) as KSeExpr.
    • The base library is now KSeExpr, whereas the UI is now KSeExprUI. The include folders have been flattened accordingly, to e.g. <KSeExpr/Expression.h> and <KSeExprUI/ExprControlCollection.h>.
  • We’ve changed the license to GPL v3. The original code was (and is still released) under a tainted form of Apache 2.0, which has brought us many headaches. We’ve followed LibreOffice’s lead and our changes are now released under this license.
  • All code has been reformatted and upgraded with C++14 features.
  • We’ve dropped the Python bindings, as well as pthread. If you just need the library (like us), all you need now is Qt and a C++14 compiler.
  • The existing optional LLVM evaluator has reached feature parity with the interpreter. We’ve patched missing functionality, such as automatic casting from 1D vectors to 3D and string operators.
  • Our fork fully supports static LLVM and Android. No more linking or API level issues.
  • Arc trigonometric functions and rand(), previously documented but missing from the runtime, have been added.

Download

Source code: kseexpr-4.0.1.0.tar.gz

Release hashes:

  • md5sum: f3242a4969cda9833c2f685786310b76 kseexpr-4.0.1.0.tar.gz
  • sha256: 13b8455883001668f5d79c5734821c1ad2a0fbc91d019af085bb7e31cf6ce926 kseexpr-4.0.1.0.tar.gz

GPG signature: kseexpr-4.0.1.0.tar.gz.asc.

The tarball is now signed with Amyspark’s GitHub GPG key (FC00108CFD9DBF1E). You can get the key from their GitHub profile.

The full changelog for v4.0.0.0 (November 12, 2020)

Added

  • Add implementation of rand() function (a84fe56)
  • Enable ECM’s automatic ASAN support (16f58e9)
  • Enable and fix skipped imaging and string tests (e8b8072)
  • Standardize all comment parsing (c12bdb4)
  • Add README for the fork (abc4f35)
  • Rebrand our fork into KSeExpr (97694c4)
  • Automagically deploy pregenerated parser files (0ae6a43)
  • Use SPDX license statements (83614e6)
  • Enable version detection (e79c35b)
  • Use STL-provided mutex and drop pthread dependency (1782a65)
  • Reimplement Timer (20a25bd)
  • Complete the relicensing process (b19fd13)
  • Enable arc functions (08af2ef)
  • Add abandoned test for nested expressions (2af1db3)
  • Add abandoned type check tests (65064ad)
  • Implement equality between ColorSwatchEditables (8d864ce)
  • Add the abandoned typePrinter executable (2171588)
  • Add BSD-3 release scripts (fe11265)
  • Automatically deploy version changes (1ebb54b)

Fixed

  • Fix printf format validation (a77cbfd)
  • Fix LLVM’s support for string variables (13c1dcd)
  • Detect and link against shared libLLVM (b57c323)
  • Fix compilation on Android (3969081)
  • Only build KSeExprUI if Qt5 is enabled (63a0e3f)
  • Sort out pregenerated parser files (ee47a75)
  • Fix translation lookup (e37d5f0)
  • Fix path substitution with pregenerated files (46acc2e)
  • Restore compatibility with MSVC on Windows (9a8fa7c)
  • Properly trim range comments (6320439)
  • Fix Vec1d promotion with LLVM evaluator (cd9651d)
  • Fix interpreter state dump in MinGW (ee2ca3e)
  • Fix pointless negative check on demos (7328466)
  • Fix SpecExaminer and add abandoned pattern matcher tool (366e733)

Removed

  • Clean up various strings (8218ab3)
  • Remove Disney internal widgets (part 1) (a30cfe5)
  • Remove Disney internal widgets (part 2) (14b2610)
  • Remove Disney internal widgets (part 3) (d3b9d34)
  • Remove Disney internal widgets (part 4) (bc65b77)
  • Remove Disney-internal libraries (da04f96)
  • Remove Qt 4 compatibility (bdef3e2)
  • Drop unused demos (884a977)
  • Assorted cleanup (6c5134f)
  • Assorted linkage cleanup (18af7e6)
  • Clean up KSeExpr header install logic (98b4c50)
  • Assorted cleanup in KSeExpr (735958f)
  • Remove more unused documentation (8a2ac53)
  • Remove KSeExprUIPy (68baed1)
  • Remove Platform header (6d6db30)
  • Cleanup and remove the plugin system (b3c4d48)
  • Remove unused files in KSeExprUI (6229b88)
  • Remove last remnants of sscanf/old number parse logic (5717cd6)
  • Remove leftovers of Disney’s Python tests (df24cc4)
  • General cleanup of the package lookup system (d332d35)
  • Clean up last remaining warnings (36ea2d5)
  • Remove unused variable in the parser (813d1a0)
  • Remove redundant inclusion (fb55833)

Changed

  • Set Krita build (library and UI only) as default (2deb17a)
  • Update pregenerated files (2c8481c)
  • Update and clean Doxygen docs (7df9011)
  • Make performance monitoring an option (6253bcd)
  • clang-tidy: Curve (5584b30)
  • clang-tidy: Vec (b02a8b0)
  • clang-tidy: Utils (f9b89ae)
  • Update README (05212cb)
  • clang-tidy: ExprType (e07d9d1)
  • clang-tidy: ExprPatterns (03010ff)
  • clang-tidy: ExprEnv (a22d3a3)
  • Modernize imageSynth demo to C++14 (474e268)
  • Modernize imageEditor demo to C++14 (a9c7538)
  • Modernize asciiGraph demo to C++14 (ec103be)
  • Modernize asciiCalculator demo to C++14 (8939da6)
  • Modernize imageSynthForPaint3d demo to C++14 (7658d75)
  • clang-tidy in KSeExprUI (85860c0)
  • clang-tidy: Context (574b711)
  • clang-tidy: ErrorCode (74860fb)
  • constexpr-ize noise tables (7335fc7)
  • clang-tidy: VarBlock (935da03)
  • clang-tidy: Interpreter (83ed077)
  • Split tests by category and use GTest main() (933f0cc)
  • clang-tidy: ExprColorCurve (675f160)
  • clang-tidy: ExprBrowser (84e2782)
  • clang-tidy: ExprWalker (5d24b2b)
  • clang-tidy: ExprColorSwatch (c667d97)
  • clang-tidy: ExprControl (9313acf)
  • clang-tidy: ExprControlCollection (fd0693d)
  • clang-tidy: ExprCurve (efeff98)
  • clang-tidy: ExprEditor (338dc3c)
  • clang-tidy: Evaluator (LLVM disabled) (3927858)
  • clang-tidy: ExprBuiltins (part 1) (8e8fe4f)
  • clang-tidy: ExprBuiltins (part 2) (05c7e70)
  • clang-tidy unused variables (58aef1d)
  • Make Examiner::post pure virtual at last (e5cc038)
  • clang-tidy: ExprNode (7da56ba)
  • clang-tidy: ExprLLVM (LLVM disabled) (aa34f51)
  • clang-tidy: ExprFuncX (6715180)
  • Modernize tests to C++14 (455c3b6)
  • clang-tidy Utils (ec8c1f0)
  • clang-tidy: Evaluator (LLVM enabled) (9e82340)
  • clang-tidy: ExprLLVMCodeGeneration (f23aca9)
  • :gem: v4.0.0.0 (5f02791)


The change to move Dolphin’s URL Navigator/breadcrumbs bar into the toolbar hasn’t been received as well as we were hoping, and I wanted to let people know that we’re aware and will find a way to address the concerns people brought up. Hang tight!

2020 has been a fascinating year, and an exciting one for Kubuntu. There seems to be a change in the market, driven by the growth in momentum of cloud native computing.

As markets shift towards creative intelligence, more users are finding themselves hampered by the daily Windows or MacOS desktop experience. Cloud native means Linux, and to interoperate seamlessly in the cloud space you need Linux.

Kubuntu Focus Linux Laptop

Here at Kubuntu, we were approached in late 2019 by Mindshare Management Ltd (MSM), who wanted to work with us to bring a cloud native Kubuntu Linux laptop to the market, directly aimed at competing with the MacBook Pro. As 2020 has progressed, the company has continued to grow and develop the market, releasing their second model, the Kubuntu Focus M2, in October. Their machines are not just being bought by hobbyists and tech enthusiasts: the Kubuntu Focus team have sold several high-spec machines to NASA via their Jet Propulsion Laboratory.

Lenovo launches Linux range

Lenovo also has a vision for Linux on the desktop, and as an enterprise-class vendor they know where the market is heading. The Lenovo press release of 20th September announced a range of machines with Ubuntu Linux installed by default.

These include 13 ThinkStation™ and ThinkPad™ P Series Workstations and an additional 14 ThinkPad T, X, X1 and L series laptops, all with the 20.04 LTS version of Ubuntu, with the exception of the L series which will have version 18.04.

When it comes to desktops, at Kubuntu we believe the KDE desktop experience is unbeatable. In October, KDE announced the release of Plasma Desktop 5.20 as “new and improved inside and out”. Shortly after the release, the Kubuntu team set to work building out Kubuntu with this new version of the KDE Plasma desktop.

KDE Plasma Desktop on Linux

Our open build process means that you can easily get your hands on the current developer build of Kubuntu Linux ‘Hirsute Hippo’ from our Nightly Builds Repo.

It’s been an exciting year, and 2021 looks even more promising, as we fully anticipate more vendors to bring machines to the market with Linux on the Desktop.

Even more inspiring is the fact that Kubuntu Linux is built by enthusiastic volunteers who devote their time, energy and effort. Those volunteers are just like you, they contribute what they can, when they can, and the results are awesome!

About the Author:

Rick Timmis is a Kubuntu Councillor and advocate. Rick has been a user of and open contributor to Kubuntu for over 10 years, and a KDE user and contributor for 20.