I’ve already announced this on Krita Artists, but I haven’t had time to write more fully about it, so…
I’m glad to announce the second alpha of my GSoC 2020 project. For anyone not in the loop, I’m working on integrating Disney’s SeExpr expression language as a new type of Fill Layer.
Releases are available here:
114ae155fb682645682dda8d3d19a0b82a4646b7732b42ccf805178a38a61cd0  krita-4.3.1-alpha-cff8314-x86_64.appimage
20df504642d7d6bcc96867a95a0e3d418c640d87cf7b280034d64a1587df5e2c  krita-4.3.1-alpha-cff83142d4-x86_64.zip
In this release, I fixed the following bugs:
Thanks to David Revoy’s advice, I’ve changed the widgets to use their native (original) sizes, and the Fill Layer dialog’s size to match others in Krita.
Please see the changes for yourself:
Additionally, I cleaned and sorted a lot of the SeExpr code, hiding unnecessary
and legacy features behind feature flags. I also reworked the CMake scripts to
make use of imported targets.
These two changes mean that SeExpr is now fully compatible with the Krita build process; it no longer needs target_link_directories or the legacy OpenGL component from Qt 4.
As of the writing of this post, the following issues are still outstanding:
Thanks Wolthera van Hövell for reporting these:
And, of course, I would like to bundle SeExpr scripts like other Krita resources. I have the basic infrastructure working locally, but there’s going to be a lot of work adding the necessary widgets. Remember, I’m still looking for example scripts we can ship later!
I apologise in advance for this post coming out so late; I’ve been (and still am) dealing with a lot of homework, since my last term finishes on June 3.
Looking forward to your comments!
You have tried everything from the first two parts that seemed applicable, and your screen is still a window to the void? No problem: we’ve gathered another five reasons this could be happening and how to go about fixing them.
Issue 11: Is it a bird, is it a plane?
Your 3D environment (aka scene) normally has several elements, and each of those elements has its own properties. One element of particular importance is ‘you’, the viewer of the scene. If you aren’t in a room, you can’t be expected to see what is in that room. With 3D scenes, the viewer is usually referred to as the camera (unlike in 2D, where it’s often called a window or view).
Among the camera’s properties are the near and far clipping planes, which specify the closest and the furthest points from the camera that are visible. Anything closer than the near plane, or further away than the far plane, will be clipped and hence invisible.
Of course, you can get something in between. If your cube is 200 units deep, its front face sits 900 units from the camera, and the far plane is at 1000 units… you will see half of it.
The solution here is to set the near and far plane distances appropriately for the scene you’re working in. Sometimes this is easy: everything is at a similar scale and stays a consistent distance from the camera. Other times it’s a huge topic which requires redesigning your renderer to avoid artefacts, especially when you have large distances or tiny objects. For more on this, and why selecting good near/far values is hard, read up on ‘depth buffer precision’.
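The arithmetic behind the half-visible cube above can be sketched with a tiny helper (the function and its signature are my own illustration, not any real renderer API): it simply intersects an object’s depth extent with the [near, far] interval.

```cpp
#include <algorithm>

// Fraction of an object's depth extent that survives near/far clipping.
// frontDist: distance from the camera to the object's front face.
// depthExtent: how deep the object is along the view axis.
// This only illustrates the geometry; real clipping happens per fragment
// in clip space, not per object.
double visibleFraction(double frontDist, double depthExtent,
                       double nearPlane, double farPlane)
{
    const double lo = std::max(frontDist, nearPlane);
    const double hi = std::min(frontDist + depthExtent, farPlane);
    return std::max(0.0, hi - lo) / depthExtent;
}
```

With the numbers from the text (a 200-unit-deep cube whose front face is 900 units away, far plane at 1000), this yields 0.5: the far plane slices the cube in half.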
Issue 12: I just want to be normal.
When transforming surface normals, it’s important to use a correctly derived normal matrix (from your model and view transformations). If this normal matrix is incorrect, your normals will be incorrectly scaled or rotated, which can break all the corresponding lighting calculations.
(There’s many ways incorrectly loaded, transformed or generated normals can break lighting, more to come in a future part)
Technically, you need to compute the transpose of the inverse of the upper-left 3×3 of your combined model-view matrix, which is some slightly nasty mathematical juggling. Fortunately, QMatrix4x4 has a helper, normalMatrix(), to compute the correct matrix for you. Just make sure to recompute the normal matrix whenever your transformation matrices change, and pass it into your shaders as an additional uniform value.
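For the curious, the math behind QMatrix4x4::normalMatrix() can be written out by hand. This plain-C++ sketch (my own types, not Qt code) uses the fact that the inverse-transpose of a 3×3 matrix is its cofactor matrix divided by its determinant:

```cpp
#include <array>

using Mat3 = std::array<double, 9>; // row-major 3x3

// Inverse-transpose of a 3x3 matrix (the "normal matrix" of the text).
// Since inverse = adjugate^T / det and adjugate = cofactor^T, the
// inverse-transpose is simply the cofactor matrix divided by det.
Mat3 normalMatrix(const Mat3 &m)
{
    const double c00 = m[4]*m[8] - m[5]*m[7];
    const double c01 = m[5]*m[6] - m[3]*m[8];
    const double c02 = m[3]*m[7] - m[4]*m[6];
    const double c10 = m[2]*m[7] - m[1]*m[8];
    const double c11 = m[0]*m[8] - m[2]*m[6];
    const double c12 = m[1]*m[6] - m[0]*m[7];
    const double c20 = m[1]*m[5] - m[2]*m[4];
    const double c21 = m[2]*m[3] - m[0]*m[5];
    const double c22 = m[0]*m[4] - m[1]*m[3];
    // Determinant via expansion along the first row.
    const double det = m[0]*c00 + m[1]*c01 + m[2]*c02;
    return { c00/det, c01/det, c02/det,
             c10/det, c11/det, c12/det,
             c20/det, c21/det, c22/det };
}
```

For a non-uniform scale such as (2, 1, 1), this yields (0.5, 1, 1) on the diagonal, which is exactly the correction needed to keep normals perpendicular to the scaled surface.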
Issue 13: All the world’s a stage…
Ready, steady,… and? You have a beautifully crafted startup animation. There are fades, there’s camera movement, there’s a reflection map swooshing over the shiny surface of your model. Just remember to actually start the animation: in Qt, animations are not played on load (unlike some authoring tools), so maybe you just need to press ‘play’.
Issue 14: Triangels. Vertexes. Phong shading.
If you’re writing shaders by hand, and you have a misnaming of the attribute in your shader code, compared to the C++ or QML code which binds uniforms or attributes to those names, then most rendering languages will treat the unbound data as 0,0,0,0 in the shader. If it’s a colour, you’ll get black (if it’s a normal or vertex position, it’s likely also not what you want). The good news is the shading language doesn’t care about your spelling, it just cares that the names you use match. So you can happily use Triangels, so long as you call them that everywhere. (But it will break if someone helpfully fixes your code in one place and not the other…)
If you’re lucky, your graphics driver has a debug mode, or some developer tooling, to warn you when you set a name which is not used in the shader. However, there are various techniques which rely on unbound uniforms or attributes efficiently returning zeroes, so the default production driver is unlikely to warn you about this.
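As a toy illustration of this pitfall (this is not real shader reflection; with Qt you would instead query QOpenGLShaderProgram::attributeLocation(), which returns -1 for names the linked program doesn’t know), here is a naive check that every name bound from C++ at least appears somewhere in the shader source:

```cpp
#include <string>
#include <vector>

// Naive sanity check: which of the attribute names we bind from C++
// never appear in the shader source at all? A plain substring search,
// so it can produce false positives (e.g. a name inside a comment);
// it only demonstrates that spelling *consistency* is what matters.
std::vector<std::string> unboundNames(const std::string &shaderSource,
                                      const std::vector<std::string> &boundNames)
{
    std::vector<std::string> missing;
    for (const auto &name : boundNames)
        if (shaderSource.find(name) == std::string::npos)
            missing.push_back(name);
    return missing;
}
```

Against a shader that declares `triangels`, binding both spellings flags only the correctly spelled `triangles` as missing: the consistently misspelled name is perfectly fine.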
Issue 15: Primitive thoughts.
GPUs draw triangles. Lots of triangles, lovely triangles everywhere. But occasionally some old-timer with an SGI Indigo under their desk will mention some other stuff – fans and strips? Or quads? Or maybe you’re using tessellation shaders (they are great). All of these things are different primitive types, which tell the GPU what kind of thing we’re drawing. Even if you’re not using tessellation shaders (where you draw patches), drawing lines can be very useful in industrial and scientific models, and drawing points can be one way to draw many lights (think flying over a city at night) or clouds of particles.
But if you’re sending drawing commands yourself, you need to specify the primitive type, even if you’ve carefully arranged your geometry into buffers and arrays and indexes of beautiful triangles. And if the type is incorrect, you won’t see triangles, but maybe just points, which by default are single-fragment dots. These can be really hard to see, or even invisible (depending on your lighting model and fragment shader).
There are many ways to mess up 3D rendering; new technologies, new languages and new engines are coming out every day. If we helped you with your issue, or even more than one, great! However, if you are still having issues, we have more help on the way. Why not comment your issue below to possibly have it featured in one of the following parts?
Hi! This is my report of the weeks 1, 2 and 3 of GSoC 2020.
First of all, sorry for taking a while to write the first post of the coding period; before writing, I wanted to make sure that everything was working properly and that I hadn’t broken anything. Now, let’s get to the actual report.
I have fixed marK’s build errors and merged the SoK 2020 branch that had yet to reach master (!2). I have also started implementing text annotation.
Unfortunately, I have nothing visual to show, as the modifications don’t change anything GUI-related, but there are things worth mentioning:
That is it, see you in the next post ; )
In the third week of GSoC, I worked on defining a coordinate system which works on right ascension and declination instead of x, y and z coordinates.
Shaders for Lambert, Azimuthal, Orthographic, Equirectangular, Stereographic and Gnomonic projections.
Shaders for instanced rendering with support for the above projection modes (these will be used for stars).
Integration issues with the original SkyPainter API, written to support multiple backends; I had to prototype outside of KStars.
Lack of C++ resources for Qt3D.
Switching SkyQPainter’s 2D projector class to GLSL.
My priorities for the next week include:
Integrating the written shaders with the KStars backend.
Display of grid lines and basic star catalog using SkymapComposite.
Hello everyone! In the last blog, I wrote about the wrap-up of the community bonding period. In this blog, I will write about what I have completed so far in the coding period.
My project is to implement multiple datasets for several activities. I started my work with the Enumeration memory games activity. There are 20 memory activities in total, and they all share common code; in a few of them, no multiple datasets need to be implemented. I modified the common memory code to load multiple datasets, while keeping the default dataset for the activities that don’t need them. After that, I successfully implemented multiple datasets for the Enumeration memory games activity.
I also modified the code in memory.js a bit to support displaying levels with two images. After each change to the code, I tested the memory activities manually, and also had my younger sister test them, to make sure there were no regressions and nothing broke. I have also maintained a Framacalc sheet to track all the code modifications, note any regressions, how I fixed them, and the testing done after each modification.
I have also implemented multiple datasets for the subtraction memory activity, which is complete. Apart from this, I have implemented multiple datasets for the addition memory games, multiplication memory games, and addition memory games with Tux, which are currently under review by the mentors.
I really enjoy working under the guidance of all of my mentors. I hope to have more fun ahead as the coding period proceeds!
I hope this update solves some of the issues in the 19.12 versions that were available before.
Here are the acquisition numbers for the last 30 days (roughly equal to the number of installations, not mere downloads) for our applications:
Kate - Advanced Text Editor - 4,465 acquisitions
Okular - More than a reader - 4,399 acquisitions
Filelight - Disk Usage Visualizer - 1,193 acquisitions
Kile - A user-friendly TeX/LaTeX editor - 617 acquisitions
KStars - Astronomy Software - 163 acquisitions
Elisa - Modern Music Player - 138 acquisitions
A nice stream of new users for our software on the Windows platform.
If you want to help bring more of the stuff KDE develops to Windows, we have a meta Phabricator task where you can show up and tell us which parts you want to work on.
A guide on how to submit stuff can be found on our blog.
Thanks to all the people who help out with submissions, updates and fixes!
If you encounter issues on Windows and are a developer that wants to help out, all KDE projects really appreciate patches for Windows related issues.
Just contact the developer team of the corresponding application and help us to make the experience better on any operating system.
For completeness, the overall acquisitions since the applications appeared in the store:
Kate - Advanced Text Editor - 46,824 acquisitions
Okular - More than a reader - 37,212 acquisitions
Filelight - Disk Usage Visualizer - 6,532 acquisitions
Kile - A user-friendly TeX/LaTeX editor - 4,408 acquisitions
KStars - Astronomy Software - 2,496 acquisitions
Elisa - Modern Music Player - 1,450 acquisitions
Last week I wrote about train station and airport maps for KDE Itinerary. One important challenge for deploying this is how to get the necessary OpenStreetMap data to our users; a prototype that requires a local OSM database doesn’t help with that. There are currently two likely contenders, explained below.
Before we obtain the map data, we have to solve another problem first: which area do we actually want to display? In typical map applications the presented area is usually not constrained; you can scroll in any direction for as long as you want. That’s not what we need here, though: we are only interested in a single (large) building.
Constraining the map display to that area has a number of advantages, such as well-defined memory bounds. Even a big station mapped in great detail fits in a few hundred kB in OSM’s o5m binary format. That avoids the need for any kind of tile or level-of-detail management you’d usually need at global scope.
It also means working with “raw” OSM data is feasible (which we need to enable all the features we want); with a sufficiently constrained area, we don’t need to reduce the level of detail of the data.
For now we have a reasonably well working heuristic that takes care of this.
Since the data for the entire world is about 60GB, we obviously need something that breaks this down into much smaller chunks. Fortunately, one such system already exists, within KDE’s infrastructure even, Marble’s OSM vector tile server.
These tiles are provided in OSM’s o5m binary format, and don’t contain any application-specific pre-filtering, making them extremely versatile, and therefore perfect for our use-case. On the highest available zoom level (which contains 2¹⁷ subdivisions per dimension), we need typically 9-12 tiles for a large station, so this also provides a reasonable trade-off between overhead and download volume.
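For context, these tiles follow the standard slippy-map numbering: at zoom z there are 2^z subdivisions per dimension, and the tile containing a given coordinate follows from the Web Mercator projection. A self-contained sketch (the function name is my own, not from the Marble or Itinerary code):

```cpp
#include <cmath>
#include <cstdint>
#include <utility>

// Slippy-map tile coordinates: which tile contains (lat, lon) at the
// given zoom level. Valid for longitudes in [-180, 180) and latitudes
// within the Web Mercator range (roughly +/-85.05 degrees).
std::pair<uint32_t, uint32_t> osmTile(double latDeg, double lonDeg, int zoom)
{
    const double pi = 3.14159265358979323846;
    const double n = std::pow(2.0, zoom); // 2^zoom tiles per dimension
    const double latRad = latDeg * pi / 180.0;
    const auto x = static_cast<uint32_t>((lonDeg + 180.0) / 360.0 * n);
    const auto y = static_cast<uint32_t>(
        (1.0 - std::asinh(std::tan(latRad)) / pi) / 2.0 * n);
    return {x, y};
}
```

At zoom 17, each tile covers only a few hundred metres near the equator, which matches the handful of tiles per station mentioned above.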
There are unfortunately two major challenges with this.
The currently available tiles are slightly outdated, as we seem to lack an automatic, continuous update process. I suspect that a full re-generation of the entire world at a high frequency would be too costly, so this will probably need some development work to consume OSM’s differential update files.
Doing this would however not only help us but also all other consumers of those files, such as Marble itself.
A side-effect of using tiled data is that the geometry in there is split along the tile boundaries. When used as-is, that leads to ugly and confusing visual effects as well as duplicated text labels. By now we are able, to some extent, to re-assemble the split geometry, but it’s still far from perfect, and it needs more heuristics than one would like.
You can observe similar issues in Marble itself when using its vector OSM map. It might be possible to aid this with some changes in the tile generator, which would then also benefit all other consumers in re-assembling the geometry.
Should the above approach turn out not to be feasible, or take too long to implement and deploy, what could we do instead? The o5m file format works great; it’s compact and nevertheless allows mmap’ed zero-copy use of string data, so that’s something to keep. But instead of generating hundreds of millions of tiles, we could just generate individual files per airport/train station. That’s in the tens of thousands, several orders of magnitude less.
This would also need some development work, as we need a way to determine the bounding boxes for all relevant areas, and then an efficient way to cut out those areas from the full dataset. Doing this for a single area with the OSM command line tools takes about 20-30 minutes, doing this for multiple areas in one go would presumably scale significantly better.
The big downside of this is that it limits us to a fixed set of locations, and we end up with a special-purpose solution just for KDE Itinerary. So this is only the backup plan for now.
If you have ideas for features or use-cases for this, or want to help, check out the corresponding workboard on Gitlab. I’ll try to write up some details about the declarative styling system next.