This was the third week since the official coding period commenced on 1st June. In the last three weeks a lot of work has been done, and a lot still remains. The phase 1 evaluation submissions begin on Monday next week, so just 7 days remain.

This week was mostly spent on accommodating review changes by my mentors and writing some unit tests.

Work Done:


I started my week writing some unit tests for the draw_dab and get_color methods that I had implemented two weeks ago.


Boud suggested that I do some refactoring, as the design decisions in my previous approach were not that good. I had linked libmypaint to another component responsible for preset management in the code, which was not the right thing to do: a good design keeps all the dependencies that belong to a plugin contained in the plugin itself. Ideally, a plugin should be removable from the entire system by deleting or commenting out a single line. So, I had to implement a separate paintop factory class and hook the plugin in within this class. Boud gave me an archive with the code of the previous MyPaint plugin that Krita had years ago. So, this was nothing more than understanding its approach and implementing the same thing.


In my previous post I had mentioned that the brush engine was lagging. My mentors suggested I profile the brush strokes, as that would reveal the bottlenecks. I did so, and it turned out that three specific methods (KoColor::toQColor, KoColor::fromQColor and the KoColor(colorspace) constructor) were responsible for the drag. These methods are designed to do colorspace conversions for us, so that the user can stay away from the dirty work. The problem was that they were doing complex processing for something as simple as a divide by 255.0f, causing the strokes to lag a lot. To mitigate this, I simply removed those calls and did the operation manually, which solved the problem. The lag is much less severe now.
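The manual operation boils down to normalizing the 8-bit channel values by hand instead of round-tripping through the conversion machinery. A minimal sketch of the idea, with function names of my own invention rather than Krita's actual code:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch: convert an 8-bit channel to the normalized float
// form a brush engine expects, skipping any colorspace conversion layer.
inline float channelToFloat(std::uint8_t c) {
    return c / 255.0f;
}

inline std::uint8_t floatToChannel(float f) {
    // Clamp to [0, 1] and round back to the 8-bit range.
    if (f < 0.0f) f = 0.0f;
    if (f > 1.0f) f = 1.0f;
    return static_cast<std::uint8_t>(f * 255.0f + 0.5f);
}
```

Doing this inline per channel avoids the allocation and conversion overhead of a general-purpose colorspace round trip in the hot path of a stroke.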

In short, the week was mostly good, and it went as expected :)

I’ve already announced this on Krita Artists, but I haven’t had time to write more fully about it, so…

I’m glad to announce the second alpha of my GSoC 2020 project. For anyone not in the loop, I’m working on integrating Disney’s SeExpr expression language as a new type of Fill Layer.

Releases are available here:

Integrity hashes:

114ae155fb682645682dda8d3d19a0b82a4646b7732b42ccf805178a38a61cd0  krita-4.3.1-alpha-cff8314-x86_64.appimage

In this release, I fixed the following bugs:

  • Keypad presses ignored when coming from non-QWERTY keyboards. They were previously considered by the SeExpr editor widget as Ctrl-keys for the autocompletion 🤦‍♀ (thanks David Revoy)
  • Pasting rich text on the editor breaks formatting
  • No autoupdate on editing the script (thanks Emmet O’Neill)

Thanks to David Revoy’s advice, I’ve changed the widgets to use their native (original) sizes, and the Fill Layer dialog’s size to match others in Krita.

Please see the changes for yourself:

Before and after.

Additionally, I cleaned and sorted a lot of the SeExpr code, hiding unnecessary and legacy features behind feature flags. I also reworked the CMake scripts to make use of imported targets. These two changes mean that SeExpr is now fully compatible with the Krita build process; it no longer needs target_include_directories, target_link_directories, or the legacy OpenGL component from Qt4.

As of the writing of this post, the following issues are still outstanding:

  • Configuration save/restore when changing between Fill layer types (it’s more of a Krita architectural problem, bug 422885)
  • Completion help tooltip constantly loses focus on macOS
    • I have not released alphas for this platform because I cannot sign them
  • Translate and test error messages
  • Make SeExpr parsing locale-proof
  • Fully clean up header installation (currently it’s a massive glob, ignoring selected components)

Thanks Wolthera van Hövell for reporting these:

  • UI widget component labels need extra spacing
  • Error reporting seems to be either not bundled or broken

And, of course, I would like to bundle SeExpr scripts like other Krita resources. I have the basic infrastructure working locally, but there’s going to be a lot of work adding the necessary widgets. Remember, I’m still looking for example scripts we can ship later!

I apologise in advance for this post coming out so late; I've been (and still am) dealing with a lot of homework, since my last term finishes on June 3.

Looking forward to your comments!



Part 3

If you are here from Part 1, you missed Part 2, somehow.

Black Screen

It’s a night scene

You have tried everything from the first two parts that seemed applicable, and your screen is still a window to the void? No problem: we’ve gathered another five reasons this could be happening and how to go about fixing them.

Issue 11: Is it a bird, is it a plane?

Your 3D environment (aka: scene) normally has several elements and those elements each have their own properties. One element of particular importance is ‘you’, the viewer of the scene. If you aren’t in a room, you can’t be expected to see what is in that room. With 3D scenes, the viewer is usually referred to as the camera. (Unlike in 2D where it’s often called a window or view)

Perspective View Frustum

Wikibooks: Vertex Transformations

Part of the camera’s properties are the near and far clipping planes, which specify the closest point to the camera which is visible, and the furthest away point. Anything closer than the near plane, or further away than the far plane, will be clipped and hence invisible.

Of course, you can get something in between. If your cube is 200 units across, sitting at 900 units from the camera, and the far plane is at 1000 units … you will see half of it.

The solution here is to set the near and far plane distances appropriately for the scene you’re working in: sometimes this is easy, when everything is a similar scale and stays a consistent distance from the camera. Other times, it’s a huge topic which requires redesigning your renderer to avoid artefacts: especially when you have large distances or tiny objects. For more on this, and why selecting good near/far values is hard, read up on ‘depth buffer precision’.
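The visibility condition itself is simple: a point at camera-space depth d is kept only if near ≤ d ≤ far. A tiny helper (my own illustration, not from any real renderer) makes the half-visible cube example concrete:

```cpp
#include <cassert>

// Sketch: is a camera-space depth within the [near, far] clip range?
// Depth here is distance along the view direction, not a z-buffer value.
inline bool depthVisible(float depth, float nearPlane, float farPlane) {
    return depth >= nearPlane && depth <= farPlane;
}
```

With a far plane at 1000 and the 200-unit cube centered at 900, the front face at depth 800 passes this test while geometry beyond 1000 is clipped away, which is exactly the "you will see half of it" effect described above.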

Issue 12: I just want to be normal.

When transforming surface normals, it’s important to use a correctly derived normal matrix (from your model and view transformations). If this normal matrix is incorrect, your normals will be incorrectly scaled or rotated, and this can break all the corresponding lighting calculations.

(There’s many ways incorrectly loaded, transformed or generated normals can break lighting, more to come in a future part)

Technically you need to compute the transpose of the inverse of the upper 3×3 of your combined model-view matrix, which is some slightly nasty mathematical juggling. Fortunately, QMatrix4x4 has a helper to compute the correct matrix for you. Just make sure to compute the normal matrix for each of your transformation matrices, and pass it into your shaders as an additional uniform value.

Issue 13: All the world’s a stage…

Ready, steady,… and? You have a beautifully crafted startup animation. There are fades, there’s camera movement, there’s a reflection map swooshing over the shiny surface of your model. Just remember to actually start the animation: in Qt, animations are not played on load (unlike some authoring tools), so maybe you just need to press ‘play’.

Issue 14: Triangels. Vertexes. Phong shading.

If you’re writing shaders by hand, and you have a misnaming of the attribute in your shader code, compared to the C++ or QML code which binds uniforms or attributes to those names, then most rendering languages will treat the unbound data as 0,0,0,0 in the shader. If it’s a colour, you’ll get black (if it’s a normal or vertex position, it’s likely also not what you want). The good news is the shading language doesn’t care about your spelling, it just cares that the names you use match. So you can happily use Triangels, so long as you call them that everywhere. (But it will break if someone helpfully fixes your code in one place and not the other…)

If you’re lucky, your graphics driver has a debug mode, or some developer tooling, to warn you when you set a name which is not used in the shader. However, there are various techniques which rely on unbound uniforms or attributes efficiently returning zeroes, so the default production driver is unlikely to warn you about this.

Issue 15: Primitive thoughts.

GPUs draw triangles. Lots of triangles, lovely triangles everywhere. But occasionally some old-timer with an SGI Indigo under their desk will mention some other stuff – fans and strips? Or quads? Or maybe you’re using tessellation shaders (they are great). All of these things are different primitive types, which tell the GPU what kind of thing we’re drawing. Even if you’re not using tessellation shaders (where you draw patches), drawing lines can be very useful in industrial and scientific models, and drawing points can be one way to draw many lights (think flying over a city at night) or clouds of particles.

But if you’re sending drawing commands yourself, you need to specify the primitive type: even if you’ve carefully arranged your geometry into buffers and arrays and indexes of beautiful triangles. And if the type is incorrect, you won’t see triangles, but maybe just points, which by default are single-fragment dots. Those can be really hard to see, or even invisible (depending on your lighting model and fragment shader).

There are many ways to mess up 3D rendering; new technologies, new languages, and new engines are coming out every day. If we helped you with your issue, or even more than one, great! However, if you are still having issues, we have more help on the way. Why not comment your issue below to possibly have it featured in one of the following parts?

About KDAB

If you like this blog and want to read similar articles, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

The post Why is my screen dark appeared first on KDAB.

Hi! This is my report of the weeks 1, 2 and 3 of GSoC 2020.

First of all, sorry for taking a while to write the first post of the coding period. Before writing I was making sure that everything was working properly and that I hadn’t broken anything. Well, now let’s get to the actual report.

I have fixed the build errors of marK and merged the code from the SoK 2020 branch that had yet to reach master (!2). I also started the implementation of text annotation.

Unfortunately I have nothing visual to show, as the modifications do not change anything GUI-related, but there are things worth mentioning:

  • Use of opaque pointers, this is an important step for plugins support in the future.
  • Migrated the image annotation to its rightful place and separated it from the core of marK.
  • Improved the logic of the class that writes/reads annotation data to/from json and xml files.

That is it, see you in the next post ; )

GSoC Week 3 - Qt3D based backend for KStars

In the third week of GSoC, I worked on defining a coordinate system which works on right ascension and declination instead of x, y and z coordinates.

What’s done this week

  • Shaders for Lambert, Azimuthal, Orthographic, Equirectangular, Stereographic and Gnomic projections.

  • Shaders for instanced rendering with support for the above projection modes (will be used for stars).
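To illustrate what such a projection shader computes, here is the orthographic case on the CPU side (my own sketch, not the actual KStars or GLSL code): a point at right ascension α and declination δ, in radians, is projected onto a plane tangent at a chosen center (α₀, δ₀).

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { double x, y; };

// Orthographic projection of equatorial coordinates (radians) onto a plane
// tangent at (ra0, dec0); the same math would run per-vertex in a shader.
inline Vec2 orthographic(double ra, double dec, double ra0, double dec0) {
    const double dRa = ra - ra0;
    return {
        std::cos(dec) * std::sin(dRa),
        std::cos(dec0) * std::sin(dec)
            - std::sin(dec0) * std::cos(dec) * std::cos(dRa)
    };
}
```

The center of the projection maps to the origin, and nearby sky coordinates map to small plane offsets; the other listed projections differ only in how the radial distance from the center is scaled.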

The Challenges

  • Integration issues with the original SkyPainter API written to support multiple backends - Had to prototype outside of KStars.

  • Lack of C++ resources for Qt3D.

  • Switching SkyQPainter’s 2D projector class to GLSL.

What remains

My priorities for the next week include:

  • Integrating written shaders and KStars backend.

  • Display of grid lines and basic star catalog using SkymapComposite.


Spheres using the Lambert, Azimuthal, and Stereographic projections.

The Code

Hello everyone! In the last blog, I wrote about the wrap-up of the community bonding period. In this blog, I will write about what I have completed so far in the coding period.

As my project is to implement multiple datasets for several activities, I started my work with the Enumeration memory games activity. There are a total of 20 memory activities, and they all share common code between them, though a few of them don’t need multiple datasets at all. I modified the common memory code to load multiple datasets while still supporting the default dataset for the activities that don’t need them. After that, I successfully implemented multiple datasets for the Enumeration memory games activity.

I also modified the code a bit in memory.js to support the display of levels with two images. After every change to the code I tested the memory activities manually, and also got them tested by my younger sister, to make sure there were no regressions and nothing broke. I have also maintained a framacalc sheet to track all the code modifications, mark any regressions, note how I fixed them, and record the testing done after each modification.

I have also implemented multiple datasets for the subtraction memory activity, which is complete and working well. Apart from this, I have implemented multiple datasets for the addition memory games, multiplication memory games, and addition memory games with Tux activities, which are currently under review by the mentors.

I really love the way I am working under the guidance of all of my mentors. Hope to have more fun ahead as the coding period proceeds!!

Deepak Kumar


21 June, 2020

Kate and Okular 20.04 are now available in the store!

I hope this update solves some issues of the 19.12 versions available before.

Here are the acquisition numbers for the last 30 days (roughly equal to the number of installations, not mere downloads) for our applications:

A nice stream of new users for our software on the Windows platform.

If you want to help bring more of the stuff KDE develops to Windows, we have a meta Phabricator task where you can show up and tell us which parts you want to work on.

A guide on how to submit stuff later can be found on our blog.

Thanks to all the people that help out with submissions & updates & fixes!

If you encounter issues on Windows and are a developer that wants to help out, all KDE projects really appreciate patches for Windows related issues.

Just contact the developer team of the corresponding application and help us to make the experience better on any operating system.

For completeness, overall acquisitions since the stuff is in the store:


20 June, 2020

Hello everyone,

this is the second post about the progress in my GSoC project and I want to present the new zoom widget feature and some useful tooltip changes.

As the name says, the zoom widget feature brings a zoom widget to Cantor. This widget is more useful than the previous workflow with "increase/decrease zoom" actions, for example if you want to zoom directly to 200%:

It is also important to note that there are people who don't zoom often and dislike having one more wide widget in the main toolbar. As usual in KDE applications, they can just hide the widget via the toolbar settings dialog:

The second mentioned change is simple, but quite useful. As you can see in the following screenshot, the description text exposed to the user in Cantor's settings dialog is a balance between "short and useless text" and "detailed explanation which doesn't fit into the settings window". That was the situation before. Now, the detailed explanation pops up as a tooltip, and the main setting text is a short text which fits the dialog much better:

In the next post I plan to show another important and somewhat bigger feature, which is about the handling of external graphical packages inside of Cantor.

Last week I wrote about train station and airport maps for KDE Itinerary. One important challenge for deploying this is how to get the necessary OpenStreetMap data to our users; a prototype that requires a local OSM database doesn’t help with that. There are currently two likely contenders, explained below.

Determining Relevant Areas

Before we obtain the map data we have to solve another problem first: which area do we actually want to display? In typical map applications the area that is presented is usually not constrained; you can scroll in any direction as long as you want. That’s not what we need here though: we are only interested in a single (large) building.

Constraining the map display to that area has a number of advantages, such as having well-defined memory bounds. Even a big station mapped in great detail fits in a few hundred kB in OSM’s o5m binary format. That avoids the need for any kind of tile or level-of-detail management that you’d usually need at a global scope.

It also means working with “raw” OSM data is feasible (which we need to enable all the features we want), we don’t need to reduce the level of detail of the data when sufficiently constraining the area.

For now we have a reasonably well working heuristic that takes care of this.

Marble Vector Tiles

Since the data for the entire world is about 60GB, we obviously need something that breaks this down into much smaller chunks. Fortunately, one such system already exists, within KDE’s infrastructure even, Marble’s OSM vector tile server.

These tiles are provided in OSM’s o5m binary format, and don’t contain any application-specific pre-filtering, making them extremely versatile, and therefore perfect for our use-case. On the highest available zoom level (which contains 2¹⁷ subdivisions per dimension), we need typically 9-12 tiles for a large station, so this also provides a reasonable trade-off between overhead and download volume.
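If the tiles follow the standard slippy-map scheme (an assumption on my part about Marble's server, though the 2¹⁷ subdivisions per dimension suggest it), the tile indices covering a station come straight from the usual Web-Mercator formula:

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// Standard slippy-map tile indices for a given lon/lat (degrees) and zoom.
// Assumes the vector tiles use the same addressing as OSM raster tiles.
inline std::pair<int, int> tileForCoordinate(double lon, double lat, int zoom) {
    const double pi = std::acos(-1.0);
    const double n = std::pow(2.0, zoom);           // tiles per dimension
    const double latRad = lat * pi / 180.0;
    const int x = static_cast<int>((lon + 180.0) / 360.0 * n);
    const int y = static_cast<int>((1.0 - std::asinh(std::tan(latRad)) / pi) / 2.0 * n);
    return {x, y};
}
```

At zoom 17 each tile spans only a few hundred meters at mid latitudes, so a large station covering a small range of x/y indices translates directly into the 9-12 tile downloads mentioned above.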

There’s unfortunately two major challenges with this.

Automatic Updates

The currently available tiles are slightly outdated, as it seems we are lacking an automatic and continuous update process. I suspect that a full re-generation of the entire world at a high frequency is going to be too costly, so this will probably need some development work to consume OSM’s differential update files.

Doing this would however not only help us but also all other consumers of those files, such as Marble itself.

Geometry Reassembly

A side-effect of using tiled data is that the geometry in there is split along the tile boundaries. When used as-is, that leads to ugly and confusing visual effects as well as duplicated text labels. To some extent we are meanwhile able to re-assemble the split geometry, but it’s still far from perfect, and it needs more heuristics than one would want there.

Room geometry and label placement without (left) and with (right) tile geometry reassembly.
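Conceptually, the reassembly step stitches ways that share endpoints on a tile border back together. A heavily simplified sketch of that core operation (real OSM data needs node-id matching and the extra heuristics mentioned above):

```cpp
#include <cassert>
#include <vector>

struct Point {
    double x, y;
    bool operator==(const Point &o) const { return x == o.x && y == o.y; }
};

using Polyline = std::vector<Point>;

// Merge two polyline fragments if the first ends exactly where the second
// begins (the shared point on the tile boundary), dropping the duplicate.
inline bool mergeAtBoundary(Polyline &a, const Polyline &b) {
    if (a.empty() || b.empty() || !(a.back() == b.front()))
        return false;
    a.insert(a.end(), b.begin() + 1, b.end());
    return true;
}
```

Once the fragments are merged into a single way, there is one geometry and thus one label again, instead of a duplicate per tile.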

You can observe similar issues in Marble itself when using its vector OSM map. It might be possible to aid this by some changes in the tile generator, which would then also benefit all other consumers in re-assembling the geometry.

Dedicated O5M Files

Should the above approach turn out not to be feasible, or take too long to implement and deploy, what could we do instead? The o5m file format works great: it’s compact and nevertheless allows mmap’ed zero-copy use of string data, so that’s something to keep. But instead of generating hundreds of millions of tiles, we could just generate individual files per airport/train station. That’s in the ten thousands, several orders of magnitude less.

This would also need some development work, as we need a way to determine the bounding boxes for all relevant areas, and then an efficient way to cut out those areas from the full dataset. Doing this for a single area with the OSM command line tools takes about 20-30 minutes, doing this for multiple areas in one go would presumably scale significantly better.

The big downside of this is it’s limiting us to a fixed set of locations, and we end up with a special-purpose solution just for KDE Itinerary. So this is only the backup plan for now.


If you have ideas for features or use-cases for this, or want to help, check out the corresponding workboard on Gitlab. I’ll try to write up some details about the declarative styling system next.