Could you tell us something about yourself?

I’m Anilia, a freelance illustrator. I work with many authors from all around the world. The key part of my job is to transform authors’ visions into shapes, forms, emotions and characters in drawings. Thanks to that global cooperation, many fantastic stories have been published.

Do you paint professionally, as a hobby artist, or both?

At first digital painting was my hobby, but it soon became my full-time job and one of the most important parts of my life.

What genre(s) do you work in?

Mainly I make illustrations, character designs and portraits.

Whose work inspires you most — who are your role models as an artist?

David Revoy. I’m constantly getting back to his works and trying to learn new stuff from him.

How and when did you get to try digital painting for the first time?

I bought my first tablet while I was still studying at Silesian University of Technology, so a few years ago…

What makes you choose digital over traditional painting?

Digital painting offers so much more to an artist! When you have basic knowledge of the technical side of digital painting, you can do almost anything with your piece. You can transform everything, and add or cut elements as you like. This technique gives you unlimited options for changing every element of your drawing. Digital painting allows you to control every part of your work, and that’s fantastic.

How did you find out about Krita?

From an internet search. I was searching for a better solution than Gimp and Photoshop.

What was your first impression?

Amazing! When I first opened Krita I thought – This is exactly what I need, that’s my perfect tool.

What do you love about Krita?

Everything, but the most important thing is that Krita gives me exactly what I need for digital painting. I have all the necessary tools in one place and those tools work perfectly with my tablet. I don’t need to spend hours customizing the program and searching for options.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Elements that are not related to painting – I still use a second program to add text, edit vector elements and export my works to PDF files.

What sets Krita apart from the other tools that you use?

Functionality. Krita works perfectly with my tablet. The flow of the brush is fantastic. I just need to pick a tool and it works exactly like it should. Additionally, I have many useful tools designed for drawing (reference images, painting assistants, blending options, etc.). I feel that someone really knew what is important when you draw and designed every tool perfectly.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

That is a very hard question. I usually like my latest painting best, but I always have some improvements in mind… I’m constantly learning, so with every piece I have a new idea of how to make it better.

What techniques and brushes did you use in it?

I usually use David Revoy’s brushes (the basic set in Krita), and lately I use the hard light blending mode to add shadows and then only correct the painting with a final layer.

Where can people see more of your work?

My portfolio: https://aniliaart.artstation.com

Commissions: http://artistsnclients.com/people/Anilia

Anything else you’d like to share?

Krita is a fantastic tool for digital drawing and painting. My work has developed with this program. If you are a beginner – try Krita, it will open up a fantastic world of digital art for you. If you paint professionally – try Krita, your work will be so much easier.

Tuesday

24 March, 2020

Qt 3D, being a retained-mode, high-level graphics API abstraction, tries to hide most of the details involved in rendering the data provided by applications. It makes a lot of decisions and performs many operations in the background in order to get pixels on the screen. But because Qt 3D also has a very rich API, developers can have a lot of control over the rendering by manipulating the scene graph and, more importantly, the frame graph. It is, however, sometimes difficult to understand how various operations affect performance.

In this article, we look at some of the tools, both old and new, that can be used to investigate what Qt 3D is doing in the back end and get some insight into what is going on during the frame.


Built in Profiling

The first step in handling performance issues is, of course, measuring where time is spent. This can be as simple as measuring how long it took to render a given frame. But to make sense of these numbers, it helps to have a notion of how complex the scene is.

In order to provide measurable information, Qt 3D introduces a visual overlay that will render details of the scene, constantly updated in real time.


The overlay shows some real time data:

  • Time to render the last frame and FPS (frames per second), averaged and plotted over the last few seconds. As Qt 3D locks to VSync by default, this should not exceed 60fps on most configurations.
  • Number of Jobs: these are the tasks that Qt 3D executes on every frame. The number of jobs may vary depending on changes in the scene graph, whether animations are active, etc.
  • Number of Render Views: this corresponds loosely to render passes; see the discussion of the frame graph below.
  • Number of Commands: this is the total number of draw calls (and compute calls) in the frame.
  • Number of Vertices and Primitives (triangles, lines and points combined).
  • Number of Entities, Geometries and Textures in the scene graph. For the last two, the overlay will also show the number of geometries and textures that are effectively in use in the frame.

As seen in the screenshots above, the scene graph contains two entities, each with one geometry. This produces two draw calls when both objects are in frame. But as the sphere rotates out of the screen, you can see the effect of the view frustum culling job, which makes sure the sphere doesn’t get rendered, leaving a single draw call for the torus.

This overlay can be enabled by setting the showDebugOverlay property of the QForwardRenderer to true.
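For example, a minimal C++ sketch (my own, not from the article; it assumes the default Qt3DExtras::QForwardRenderer provided by Qt3DWindow in Qt 5.15):

#include <QGuiApplication>
#include <Qt3DExtras/Qt3DWindow>
#include <Qt3DExtras/QForwardRenderer>

int main(int argc, char **argv)
{
    QGuiApplication app(argc, argv);

    Qt3DExtras::Qt3DWindow view;
    // Turn on the built-in profiling overlay on the default forward renderer.
    view.defaultFrameGraph()->setShowDebugOverlay(true);

    // ... build the scene graph and set the root entity as usual ...

    view.show();
    return app.exec();
}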


Understanding Rendering Steps

To make sense of the numbers above, it helps to understand the details of the scene graph and frame graph.

In the simple case, as in the screenshots, an entity will have a geometry (and a material, and maybe a transform). But many entities may share the same geometry (a good thing, if appropriate!). Also, entities may not have any geometry at all and just be used for grouping and positioning purposes.

So keeping an eye on the number of entities and geometries, and seeing how that affects the number of commands (or draw calls), is valuable. If you find one geometry drawn one thousand times in a thousand separate entities, it may be a good indication that you should refactor your scene to use instanced rendering.

In order to provide more details, the overlay has a number of buttons that can be used to dump the current state of the rendering data.

For a deeper understanding of this, you might consider our full Qt 3D Training course.

Scene Graph

Dumping the scene graph will print data to the console, like this:

Qt3DCore::Quick::Quick3DEntity{1} [ Qt3DRender::QRenderSettings{2}, Qt3DInput::QInputSettings{12} ]
  Qt3DRender::QCamera{13} [ Qt3DRender::QCameraLens{14}, Qt3DCore::QTransform{15} ]
  Qt3DExtras::QOrbitCameraController{16} [ Qt3DLogic::QFrameAction{47}, Qt3DInput::QLogicalDevice{46} ]
  Qt3DCore::Quick::Quick3DEntity{75} [ Qt3DExtras::QTorusMesh{65}, Qt3DExtras::QPhongMaterial{48},
                                       Qt3DCore::QTransform{74} ]
  Qt3DCore::Quick::Quick3DEntity{86} [ Qt3DExtras::QSphereMesh{76}, Qt3DExtras::QPhongMaterial{48}, 
                                       Qt3DCore::QTransform_QML_0{85} ]

This prints the hierarchy of entities and for each of them lists all the components. The id (in curly brackets) can be used to identify shared components.

Frame Graph

Similar data can be dumped to the console to show the active frame graph:

Qt3DExtras::QForwardRenderer
  Qt3DRender::QRenderSurfaceSelector
    Qt3DRender::QViewport
      Qt3DRender::QCameraSelector
        Qt3DRender::QClearBuffers
          Qt3DRender::QFrustumCulling
            Qt3DRender::QDebugOverlay

This is the default forward renderer frame graph that comes with Qt 3D Extras.

As you can see, one of the nodes in that graph is of type QDebugOverlay. If you build your own frame graph, you can use an instance of that node to control which surface the overlay will be rendered onto. Only one branch of the frame graph may contain a debug node. If the node is enabled, then the overlay will be rendered for that branch.
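As a rough sketch (my own, not from the article), a hand-built frame graph could place the QDebugOverlay as the leaf of the branch that should show the overlay:

#include <Qt3DRender/QCamera>
#include <Qt3DRender/QCameraSelector>
#include <Qt3DRender/QClearBuffers>
#include <Qt3DRender/QDebugOverlay>
#include <Qt3DRender/QFrameGraphNode>
#include <Qt3DRender/QRenderSurfaceSelector>
#include <Qt3DRender/QViewport>

// Builds a minimal single-branch frame graph with the debug overlay
// rendered at the end of that branch.
Qt3DRender::QFrameGraphNode *buildFrameGraph(Qt3DRender::QCamera *camera)
{
    auto *surfaceSelector = new Qt3DRender::QRenderSurfaceSelector;
    auto *viewport = new Qt3DRender::QViewport(surfaceSelector);
    auto *cameraSelector = new Qt3DRender::QCameraSelector(viewport);
    cameraSelector->setCamera(camera);
    auto *clearBuffers = new Qt3DRender::QClearBuffers(cameraSelector);
    clearBuffers->setBuffers(Qt3DRender::QClearBuffers::ColorDepthBuffer);

    // Leaf node: the overlay is rendered for this branch only.
    auto *overlay = new Qt3DRender::QDebugOverlay(clearBuffers);
    overlay->setEnabled(true);

    return surfaceSelector;
}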

The frame graph above is one of the simplest you can build. They may get more complicated as you build effects into your rendering. Here’s an example of a Kuesa frame graph:

Kuesa::PostFXListExtension
  Qt3DRender::QViewport
    Qt3DRender::QClearBuffers
      Qt3DRender::QNoDraw
    Qt3DRender::QFrameGraphNode (KuesaMainScene)
      Qt3DRender::QLayerFilter
        Qt3DRender::QRenderTargetSelector
          Qt3DRender::QClearBuffers
            Qt3DRender::QNoDraw
          Qt3DRender::QCameraSelector
            Qt3DRender::QFrustumCulling
              Qt3DRender::QTechniqueFilter
                Kuesa::OpaqueRenderStage (KuesaOpaqueRenderStage)
                  Qt3DRender::QRenderStateSet
                    Qt3DRender::QSortPolicy
            Qt3DRender::QTechniqueFilter
              Kuesa::OpaqueRenderStage (KuesaOpaqueRenderStage)
                Qt3DRender::QRenderStateSet
                  Qt3DRender::QSortPolicy
            Qt3DRender::QFrustumCulling
              Qt3DRender::QTechniqueFilter
                Kuesa::TransparentRenderStage (KuesaTransparentRenderStage)
                  Qt3DRender::QRenderStateSet
                    Qt3DRender::QSortPolicy
            Qt3DRender::QTechniqueFilter
              Kuesa::TransparentRenderStage (KuesaTransparentRenderStage)
                Qt3DRender::QRenderStateSet
                  Qt3DRender::QSortPolicy
          Qt3DRender::QBlitFramebuffer
            Qt3DRender::QNoDraw
    Qt3DRender::QFrameGraphNode (KuesaPostProcessingEffects)
      Qt3DRender::QDebugOverlay
        Qt3DRender::QRenderStateSet (ToneMappingAndGammaCorrectionEffect)
          Qt3DRender::QLayerFilter
            Qt3DRender::QRenderPassFilter

If you are not familiar with the frame graph, it is important to understand that each path (from root to leaf) represents a render pass. So the simple forward renderer amounts to a single render pass, but the Kuesa frame graph above contains eight passes!

It is therefore often easier to look at the frame graph in terms of those paths. These can also be dumped to the console:

[ Kuesa::PostFXListExtension, Qt3DRender::QViewport, Qt3DRender::QClearBuffers, Qt3DRender::QNoDraw ]
[ Kuesa::PostFXListExtension, Qt3DRender::QViewport, Qt3DRender::QFrameGraphNode (KuesaMainScene),
  Qt3DRender::QLayerFilter, Qt3DRender::QRenderTargetSelector, Qt3DRender::QClearBuffers, Qt3DRender::QNoDraw ]
[ Kuesa::PostFXListExtension, Qt3DRender::QViewport, Qt3DRender::QFrameGraphNode (KuesaMainScene), 
  Qt3DRender::QLayerFilter, Qt3DRender::QRenderTargetSelector, Qt3DRender::QCameraSelector, Qt3DRender::QFrustumCulling, 
  Qt3DRender::QTechniqueFilter, Kuesa::OpaqueRenderStage (KuesaOpaqueRenderStage), Qt3DRender::QRenderStateSet, 
  Qt3DRender::QSortPolicy ]
[ Kuesa::PostFXListExtension, Qt3DRender::QViewport, Qt3DRender::QFrameGraphNode (KuesaMainScene), 
  Qt3DRender::QLayerFilter, Qt3DRender::QRenderTargetSelector, Qt3DRender::QCameraSelector, Qt3DRender::QTechniqueFilter, 
  Kuesa::OpaqueRenderStage (KuesaOpaqueRenderStage), Qt3DRender::QRenderStateSet, Qt3DRender::QSortPolicy ]
[ Kuesa::PostFXListExtension, Qt3DRender::QViewport, Qt3DRender::QFrameGraphNode (KuesaMainScene),
  Qt3DRender::QLayerFilter, Qt3DRender::QRenderTargetSelector, Qt3DRender::QCameraSelector, Qt3DRender::QFrustumCulling,
  Qt3DRender::QTechniqueFilter, Kuesa::TransparentRenderStage (KuesaTransparentRenderStage), Qt3DRender::QRenderStateSet,
  Qt3DRender::QSortPolicy ]
[ Kuesa::PostFXListExtension, Qt3DRender::QViewport, Qt3DRender::QFrameGraphNode (KuesaMainScene),
  Qt3DRender::QLayerFilter, Qt3DRender::QRenderTargetSelector, Qt3DRender::QCameraSelector, Qt3DRender::QTechniqueFilter,
  Kuesa::TransparentRenderStage (KuesaTransparentRenderStage), Qt3DRender::QRenderStateSet, Qt3DRender::QSortPolicy ]
[ Kuesa::PostFXListExtension, Qt3DRender::QViewport, Qt3DRender::QFrameGraphNode (KuesaMainScene),
  Qt3DRender::QLayerFilter, Qt3DRender::QRenderTargetSelector, Qt3DRender::QBlitFramebuffer, Qt3DRender::QNoDraw ]

Hopefully this is a good way of spotting issues when building your custom frame graph.

Draw Commands

On every pass of the frame graph, Qt 3D will traverse the scene graph, find entities that need to be rendered, and for each of them, issue a draw call. The number of objects drawn in each pass may vary, depending on whether the entities and all of their components are enabled or not, or whether entities get filtered out by using QLayers (different passes may draw different portions of the scene graph).

The new profiling overlay also gives you access to the actual draw calls.

So in this simple example, you can see that two draw calls are made, both for indexed triangles. You can also see some details about the render target, such as the viewport, the surface size, etc.

That information can also be dumped to the console which makes it easier to search in a text editor.


Built in Job Tracing

The data above provides a useful real-time view of what is actually being processed to render a particular frame. However, it doesn’t provide much feedback as to how long certain operations take and how that changes during the runtime of the application.

In order to track such information, you need to enable tracing.

Tracing tracks, for each frame, what jobs are executed by Qt 3D’s backend. Jobs involve updating global transformations and the bounding volume hierarchy, finding objects in the view frustum, layer filtering, picking, input handling, animating, etc. Some jobs run every frame, some only run when internal state needs updating.

If your application is slow, it may be because jobs are taking a lot of time to complete. But how do you find out which jobs take up all the time?

Qt 3D has had tracing built in for a few years already, but it was hard to get at. You needed to do your own build of Qt 3D and enable tracing when running qmake. From then on, every single run of an application linked against that build of Qt 3D would generate a trace file.

In 5.15, tracing is always available. It can be enabled in two ways:

  • By setting the QT3D_TRACE_ENABLED environment variable before the application starts (or at least before the aspect engine is created). This means tracing will happen for the entire run of the application (a minimal sketch follows this list).
  • If you’re interested in tracing only a specific part of your application’s lifetime, you can enable the overlay and toggle tracing on and off using the Jobs checkbox. In this case, a new trace file is generated every time tracing is enabled.
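A minimal C++ sketch of the environment-variable route (my own; the variable is set to "1" here, check the Qt 3D documentation for the exact value it expects):

#include <QGuiApplication>
#include <QtGlobal>
#include <Qt3DExtras/Qt3DWindow>

int main(int argc, char **argv)
{
    // Must happen before the aspect engine exists, i.e. before the
    // Qt3DWindow (or Scene3D / Qt3DQuickWindow) is created.
    qputenv("QT3D_TRACE_ENABLED", "1");

    QGuiApplication app(argc, argv);

    Qt3DExtras::Qt3DWindow view;
    // ... scene setup ...
    view.show();

    return app.exec(); // trace files end up in the current working directory
}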

For every tracing session, Qt 3D will generate one file in the current working directory. So how do you inspect the content of that file?

KDAB provides a visualisation tool, but it is not currently shipped with Qt 3D. You can get the source and build it from GitHub here. Because jobs change from one version of Qt 3D to the next, you need to take care to configure which version was used to generate the trace files. Using that tool, you can open the trace files, and it will render a timeline of all the jobs that were executed for every frame.

In the example above, you can see roughly two frames worth of data, with jobs executed on a thread pool. You can see the longer running jobs, in this case:

  • RenderViewBuilder jobs, which create all the render views, one for each branch in the frame graph. You can see that some of them take much longer than others.
  • FrameSubmissionPart1 and FrameSubmissionPart2, which contain the actual draw calls.

Of course, you need to spend some time understanding what Qt 3D is doing internally to make sense of that data. As with most performance monitoring tools, it’s worth spending the time experimenting with this and seeing what gets affected by changes you make to your scene graph or frame graph.

Job Dependencies

Another important source of information when analysing performance of jobs is looking at the dependencies. This is mostly useful for developers of Qt 3D aspects.

Using the profiling overlay, you can now dump the dependency graph in GraphViz dot format.

Other Tools

Static capabilities

Qt 3D 5.15 introduces QRenderCapabilities, which can be used to make runtime decisions based on the actual capabilities of the hardware the application is running on. The class supports a number of properties which report information such as the graphics API in use, the card vendor, and the supported versions of OpenGL and GLSL. It also has information related to the maximum number of samples for MSAA, the maximum texture size, whether UBOs and SSBOs are supported and what their maximum size is, etc.
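A hedged C++ sketch of how this might be queried (my own; the renderCapabilities() accessor and the property getters below are assumptions based on the 5.15 API and should be checked against the QRenderCapabilities documentation):

#include <QDebug>
#include <Qt3DRender/QRenderCapabilities>
#include <Qt3DRender/QRenderSettings>

// 'settings' is the QRenderSettings component attached to the root entity
// (Qt3DWindow creates one for you).
void logCapabilities(const Qt3DRender::QRenderSettings *settings)
{
    const Qt3DRender::QRenderCapabilities *caps = settings->renderCapabilities();

    qDebug() << "OpenGL" << caps->majorVersion() << "." << caps->minorVersion()
             << "from" << caps->vendor();
    qDebug() << "Max MSAA samples:" << caps->maxSamples()
             << "max texture size:" << caps->maxTextureSize();

    // Example runtime decision: only enable an SSBO-based technique
    // when the hardware supports it.
    if (caps->supportsSSBO())
        qDebug() << "SSBOs supported, max size:" << caps->maxSSBOSize();
}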

Third Party Tools

Of course, using more generic performance tools is also a good idea.

perf can be used for general tracing, giving you insight into where time is spent, both for Qt 3D and for the rest of your application. Use it in combination with KDAB’s very own hotspot to get a powerful visualisation of the critical paths in the code.

Using the flame graph, as shown above (captured on an embedded board), you can usually spot the two main sections of Qt 3D work: the job processing and the actual rendering.

Other useful tools are the OpenGL trace capture applications, either generic ones such as apitrace and renderdoc, or the ones provided by your hardware manufacturer, such as NVIDIA or AMD.


Conclusion

We hope this article will help you get more performance out of your Qt 3D applications. The tools, old and new, should be very valuable to help find bottlenecks and see the impact of changes you make to your scene graph or frame graph. Furthermore, improvements regarding performance are in the works for Qt 6, so watch this space!

About KDAB

If you like this blog and want to read similar articles, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

The post Debugging and Profiling Qt 3D applications appeared first on KDAB.

Monday

23 March, 2020

This blog post was not easy to write as it started as a very simple thing intended for developers, but later, when I was digging around, it turned out that there is no good single resource online on copyright statements. So I decided to take a stab at writing one.

I tried to strike a good balance between 1) keeping it short and to the point for developers who just want to know what to do, and 2) giving FOSS compliance officers and legal geeks enough to understand not just the best practices, but also the reasons behind them.

If you are extremely short on time, the TL;DR should give you the bare minimum instructions, but if you have just 2 minutes I would advise you to read the actual HowTo a bit further down.

Of course, if you have about 18 minutes of time, the best way is always to start reading at the beginning and finish at the end.

Where else to find this article

A copy of this blog is available also on Liferay Blog.
Haksung Jang (장학성) was awesome enough to publish a Korean translation.

TL;DR

Use the following format:

SPDX-FileCopyrightText: © {$year_of_file_creation} {$name_of_copyright_holder} <{$contact}>

SPDX-License-Identifier: {$SPDX_license_name}

… put that in every source code file and go check out (and follow) REUSE.software best practices.

E.g. for a file that I created today and released under the BSD-3-Clause license, I would put the following as a comment at the top of the source code file:

SPDX-FileCopyrightText: © 2020 Matija Šuklje <matija@suklje.name>

SPDX-License-Identifier: BSD-3-Clause

Introduction and copyright basics

Copyright is automatic (since the Berne convention) and any work of authorship is automatically protected by it – essentially giving the copyright holder1 exclusive power over their work. In order for your downstream to have the rights to use any of your work – be that code, text, images or other media – you need to give them a license to it.

So in order for you to copy, implement, modify etc. the code of others, you need to be given the needed rights – i.e. a license2 – or make use of a statutory limitation or exception3. And if that license has some obligations attached, you need to meet them as well.

In any case, you have to meet the basic requirements of copyright law as well. At the very least you need to have the following two in place:

  • attribution – list the copyright holders and/or authors (especially in jurisdictions which recognise moral rights);
  • license(s) – since a license is the only thing that gives anybody other than the copyright holder themself the right to use the code, you are very well advised to have a notice of the license and its full text present – this goes both for your outbound licenses and for the inbound licenses you received from others by using 3rd-party works, such as copied code or libraries.

Inbound vs. outbound licenses

The license you give to your downstream is called an outbound license, because it handles the rights in the code that flow out of you. In turn that same license in the same work would then be perceived by your downstream as their inbound license, as it handles the rights in the code that flows into them.

In short, licenses describing rights flowing in are called inbound licenses, and the licenses describing rights flowing out are called outbound licenses.

The good news is that attribution is the author’s right, not obligation. And you are obliged to keep the attribution notices only insofar as the author(s) made use of that right. Which means that if the author has not listed themselves, you do not have to hunt them down yourself.

Why have the copyright statement?

Which brings us to the question of whether you need to write your own copyright statement4.

First, some very brief history …

The urge to absolutely have to write copyright statements stems from the inertia in the USA, as it only joined the Berne convention in 1989, well after computer programs were a thing. Which means that until then the US copyright law still required an explicit copyright statement in order for a work to be protected.

Copyright statements are useful

The copyright statement is not required by law, but in practice it is very useful – as proof, at best, or an indicator, more likely – of the copyright situation of that work. This can be very useful for compliance reasons, traceability of the code, etc.

Attribution is practically unavoidable, because a) most licenses explicitly call for it, and if that fails b) copyright laws of most jurisdictions require it anyway.

And if that is not enough, then there is also c) sometimes you will want to reach the original author(s) of the code for legal or technical reasons.

So storing both the name and contact information makes sense for when things go wrong. Finding the original upstream of a runaway file you found in your codebase – if there are no names or links in it – is a huge pain and often involves (currently still) expensive specialised software. I would suspect the onus on a FOSS project to be much lower than on a corporation in this case, but it is still better to put in a little effort upfront than to have to do some serious archæology later.

How to write a good copyright statement and license notice

Finally we come to the main part of this article!

A good copyright statement should consist of the following information:

  • start with the © sign;
  • the year of the first publication – a good choice is the year in which you created the file; then do not touch it any more;
  • the name of the copyright holder – typically the author, but it can also be your employer or, if there is a CLA in place, another legal entity or person;
  • a valid contact for the copyright holder.

As an example, this is what I would put on something I wrote today:

© 2020 Matija Šuklje <matija@suklje.name>

While you are at it, it would make a lot of sense to also notify everyone which license you are releasing your code under. Using an SPDX ID is a great way to unambiguously state the license of your code. (See the practical example below for how things can go wrong otherwise.)

And if you have already come so far, it is just a small step towards following the best practices as described by REUSE.software by using SPDX tags to make your copyright statement (marked with SPDX-FileCopyrightText) and license notice (marked with SPDX-License-Identifier and followed by an SPDX ID).

Here is now an example of a copyright statement and license notice that ticks all the above boxes and complies with both the SPDX and the REUSE.software specifications:

SPDX-FileCopyrightText: © 2020 Matija Šuklje <matija@suklje.name>

SPDX-License-Identifier: BSD-3-Clause

Now make sure you have these in the comments of all your source code files.
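For example, in a C++ source file the REUSE-compliant header from above would simply sit in a comment at the top (names reused from the example above; the rest of the file is just filler):

// SPDX-FileCopyrightText: © 2020 Matija Šuklje <matija@suklje.name>
//
// SPDX-License-Identifier: BSD-3-Clause

#include <iostream>

int main()
{
    std::cout << "Hello, REUSE-compliant world!\n";
    return 0;
}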

Q&A

Over the years, I have heard many questions on this topic – both from developers and lawyers.

I will try to address them below in no particular order.

If you have a question that is not addressed here, do let me know and I will try to include it in an update.

Why keep the year?

Some might argue that for the sake of simplicity it would be much easier to maintain copyright statements if we just skip the years. In fact, that is a policy at Microsoft/GitHub at the time of this writing.

While I agree that not updating the year simplifies things enormously, I do think that keeping a date helps preserve at least a vague timeline in the codebase. As the question is when the work was first expressed in a medium, the earliest provable date is the time when that file was first created.

In addition, having an easy way to find the earliest date of a piece of code might also prove useful in figuring out when an invention was first expressed to the general public – something that might become useful for patent defense.

This is also why e.g. in Liferay our new policy is to write the year of the file creation, and then not change the year any more.

Innocent infringement excursion for legal geeks

17 U.S. Code § 401.(d) states that if a work carries a copyright notice in the form that the law prescribes, then in a copyright infringement case the defendant cannot rely on the innocent infringement defense, except if they had reason to believe their use was covered by fair use. And even then, the innocent infringer would have to be e.g. a non-profit broadcaster or archive to still be eligible for such a defence.

So, if you are concerned with copyright violations (at least in USA), you may actually want to make sure your copyright statements include both the copyright sign and year of publication.

See also the note in Why the © sign for what a copyright notice following the US copyright act looks like.

Why not bump the year on change?

I am sure you have seen something like this before:
Copyright (C) 1992, 1995, 2000, 2001, 2003 CompanyX Inc.

The presumption behind this is that whenever you add a new year in the copyright statement, the copyright term would start anew, and therefore prolong the time that file would be protected by copyright.

Adding a new year on every change – or, even worse, simply every 1st January – is a practice still too widespread even today. Unfortunately, doing this is useless at best, and misleading at worst. For the origin of this myth see the short history above.

A big problem with this approach is that not every contribution is original or substantial enough to be copyrightable – even the popular 5 (or 10, or X) SLOC rule of thumb5 is, legally speaking, very debatable.

So, in order to keep your copyright statement true, you would need to make a judgement call every time whether the change was substantial and original enough to be granted copyright protection by the law and therefore if the year should be bumped. And that is a substantial test for every time you change a file.

On the other hand, copyright lasts at least 50 (and usually 70) years6 after the death of the author – or, if the copyright holder is a legal entity (e.g. CompanyX Inc.), after publication. So the risk of your own copyright expiring under your feet is very, very low.

Worst case thought experiment

Let us imagine the worst possible scenario now:

1) you never bump the year in a copyright statement in a file, and 2) 50+ years after its initial release, someone copies your code as if it were in the public domain. Now suppose you take issue with that and go to court, and 3) the court (very unlikely) takes only the copyright statements in that file into account as the only proof, and based on that 4) rules that the code in that file has fallen into the public domain and that the FOSS license therefore no longer applies to it.

The end result would simply be that (in one jurisdiction) that file would fall into public domain and be up for grabs by anyone for anything, no copyright, no copyleft, 50+ years from the file’s creation (instead of e.g. 5, maybe 20 years later).

But, honestly, how likely is it that 50 years from now the same (unaltered) code would still be (commercially) interesting?

… and if it turns out you do need to bump the year eventually, you still have, at worst, 50 years to sort it out – so, ample opportunity to mitigate the risk.

In addition to that, as typically a single source code file is just one of the many cogs in a bigger piece of software, what you are more concerned with is the software product/project as a whole. As the software grows, you will keep adding new files, and those will obviously have newer years in them. So the codebase as a whole will already include copyright statements with newer years in it anyway.

Keep the Git/VCS history clean

Also, bumping the year in all the files every year messes with the usefulness of the Git/VCS history, makes the log unnecessarily long(er) and makes the repository consume more space.

It makes all the files seem equally old (in years), which makes it hard to identify stale code if you are looking for it.

Another issue might be that your year-bumping script is too trigger-happy and bumps the years also in files that do not even belong to you, furthering misinformation both in your VCS and in the files’ copyright notices.

Why not use a year range?

Similar to the previous question, the year span (e.g. 1990-2013) is basically just a lazy version of bumping the year. So all of the above-mentioned applies.

A special case is when people use a range like {$year}-present. This has almost all of the above-mentioned issues7, plus it adds another dimension of confusion, because what constitutes the “present” is an open – and potentially philosophical – question. Does it mean:

  • the time when the file was last modified?
  • the time it was released as a package?
  • the time you downloaded it (maybe for the first time)?
  • the time you ran it the last time?
  • or perhaps even the ever-elusive “right now”?

As you can see, this does not help much at all. Quite the opposite!

But doesn’t Git/Mercurial keep a better track?

Not reliably.

Git (and other VCS) are good at storing metadata, but you should be careful about it.

Git does have an Author field, which is separate from the Committer field. But even if we were to assume – and that is a big assumption8 – that Git’s Author is the actual author of the committed code, they may not be the copyright holder.

Furthermore, the way git blame and git diff currently work is line by line, using the last change as the final author, making Git suboptimal for finding out who actually wrote what.

Token-based blame information

For a more fine-grained tool to see who to blame for which piece of code, check out cregit.

And ultimately – and most importantly – as soon as the file(s) leave the repository, the metadata is lost. Whether it is released as a tarball, the repository is forked and/or rebased, or a single file is simply copied into a new codebase, the trace is lost.

All of these issues are addressed by simply including the copyright statement and license information in every file. REUSE.software best practices handle this very well.

Why the © sign?

Some might argue that the English word “Copyright” is so common nowadays that everyone understands it, but if you actually read the copyright laws out there, you will find that the © sign (i.e. the copyright sign) is the only way of writing a copyright statement that is common to copyright laws around the world9.

Using the © sign makes sense, as it is the common global denominator.

Comparison between US and Slovenian copyright statements

As an EU example, the Slovenian ZASP §175.(1) simply states that holders of exclusive author’s rights may mark their works with a (c)/© sign in front of their name or firm and year of first publication, which can be simply put as:

© {$year_of_first_publication} {$name_of_author_or_other_copyright_holder}

On the other side of the pond, in the USA, 17 U.S. Code § 401.(b) uses more words to give a more varied approach, and – relevant for this question – in §401(b)(1) prescribes the use of

the symbol © (the letter C in a circle), or the word “Copyright”, or the abbreviation “Copr.”;

The rest you can go read yourself, but can be summarised as:

(©|Copyright|Copr.) {$year_of_first_publication} {$name_or_abbreviation_of_copyright_holder}

See also the note in Why keep the year for why this can matter in front of USA courts.

While the © sign is a pet peeve of mine, from the practical point of view, this is the least important point here. As we established in the introduction, copyright is automatic, so the actual risk of not following the law by its letter is pretty low if you write e.g. “Copyright” instead.

Why leave a contact? Even when there is more than one author?

A contact is in no way required by copyright law, but for practical reasons it can be extremely useful.

It can happen that you need to reach the author and/or copyright holder of the code with a legal or technical question. Perhaps you need to ask how the code works, or have a fix you want to send their way. Perhaps you found a licensing issue and want to help them fix it (or ask for a separate license). In all of these cases, having a contact helps a lot.

As pretty much all of the internet still hinges on e-mail10, the copyright holder’s e-mail address should be the first option. But anything goes, really, as long as that contact is easily accessible and actually in use long-term.

Avoiding orphan works

For the legal geeks out there, a contact to the copyright holder mitigates the issue of orphan works.

There will be cases where the authorship is very dispersed or lies with a legal entity instead. In those cases, it might make more sense to provide a URL to either the project’s or the legal entity’s homepage and provide useful information there. If a project lists copyright holders in a file such as AUTHORS or CONTRIBUTORS.markdown, a permalink to that file (in the master branch) of the publicly available repository could also be a good URL option.

How to handle multitudes of authors?

Here are two examples of what you can write in case the project (e.g. Project X) has many authors and does not have a CAA or exclusive CLA in place to aggregate the copyright in a single entity:

© 2010 The Project X Authors <{$url}>

© 1998 Contributors to the Project X <{$url}>

What about public domain?

Public domain is tricky.

In general, the public domain consists of works on which the copyright term has expired11.

While in some jurisdictions (e.g. USA, UK) you can actually waive your copyright and dedicate your work to the public domain, in most jurisdictions (e.g. most EU member countries) that is not possible.

Which means that, depending on the applicable jurisdiction, it may be that although an author wrote that they dedicate their work to the public domain, this does not meet the legal standard for it to actually happen – they retain the copyright in their own work.

Unsurprisingly, FOSS compliance officers and other people/projects who take copyright and licensing seriously are typically very wary of statements like “this is public domain”.

This can be mitigated in two ways:

  • instead of some generic wording, when you want to dedicate something to the public domain, use a tried and tested public copyright waiver / public domain dedication with a very permissive license, such as CC0-1.0; and
  • include your name and contact, if you are the author, in the SPDX-FileCopyrightText: field – 1) because when in doubt, that will associate you with your dedication to the public domain, and 2) in case anything is unclear, people have a way to contact you.

This makes sense to do even for files that you deem are not copyrightable, such as config files – if you mark them as above, everyone will know that you will not exercise your author’s rights (if they existed) in those files.

It may seem a bit of a hassle for something you just released to the public to use however they see fit, without people having to ask you for permission. I get that, I truly do! But do consider that if you have already put so much effort into making this wonderful stuff and donating it to humanity, it would be a huge pity if, over (silly) legal details, in the end people would not (be able to) use it at all.

What about minified JS?

Modern code minifiers/uglifiers tend to have an optional flag to preserve copyright and licensing info, even when they rip out all the other comments.

The copyright does not simply go away if you minify/uglify the code, so do make sure that you use a minifier that preserves both the copyright statement as well as the license (at least its SPDX Identifier) – or better yet, the whole REUSE-compliant header.

Transformations of code

Translations between different languages, compilations and other transformations are all exclusive rights of the copyright owner. So you need a valid license even for compiling and minifying.

What is wrong with “All rights reserved”?

Often you will see “all rights reserved” in copyright statements even in a FOSS project.

The cause of this, I suspect, lies again in copycat behaviour, where people tend to simply copy what they have so often found on a (music) CD or in a book. Again, copyright law does not ask for this, even if you want to follow the fullest formal copyright statement rules.

But what it does bring is confusion.

The statement “all rights reserved” obviously contradicts the FOSS license the same file is released under. The latter gives everyone the rights to use, study, share and improve the code, while the former states that the author reserves all of these rights to themself.

So, as those three words cause a contradiction, and do not bring anything useful to the table in the first place, you should not write them in vain.

Practical example

Imagine12 a FOSS project that has a copy of the MIT license stored in its LICENSE file and (only) the following comment at the top of all its source code files:

# This file is Copyright (C) 1997 Master Hacker, all rights reserved.

Now imagine that someone simply copies one file from that repository/archive into their own work, which is under the AGPL-3.0-only license, and this is also what it says in the LICENSE file in the root of their repository. And you, in turn, are using this second person’s codebase.

According to the information you have at hand:

  • the copyright in the copied file is held by Master Hacker;
  • apparently, Mr Hacker reserves all the rights they have under copyright law;
  • if you felt like taking a risk, you could assume that the copied file is under the AGPL-3.0-or-later license – which is false, and could lead to copyright violation13;
  • if you wanted to play it safe, you could assume that you have no valid license to this file, so you decide to remove it and work around it – again false and much more work, but safe;
  • you could wait until 2067 and hope this actually falls under public domain by then – but who has time for that.

This example highlights how problematic the wording of “all rights reserved” can be, even when there is a license text somewhere in the codebase.

This can be avoided by using a sane copyright statement (as described in this blog post) and including an unambiguous license ID. REUSE.software ties both of these together in an easy to follow specification.

hook out → hat tip to the TODO Group for giving me the push to finally finish this article and Carmen Bianca Bakker for some very welcome feedback


  1. This is presumed to be the author at least initially. But depending on circumstances can be also some other person, a legal entity, a group of people etc. 

  2. A license is by definition “[t]he permission granted by competent authority to exercise a certain privilege that, without such authorization, would constitute an illegal act, a trespass or a tort.” 

  3. Limitations and exceptions (or fair use/dealings) in copyright are extremely limited when it comes to software compared to more traditional media. Do not rely on them. 

  4. In the USA, the copyright statement is often called a copyright notice. The two terms are used interchangeably. 

  5. E.g. the 5 SLOC rule of thumb means that any contribution that is 5 lines or shorter is (likely) too short to be deemed copyrightable, and therefore can be treated as un-copyrightable or as being in the public domain; on the flip side, anything longer than 5 lines of code needs to be treated as copyrightable. This rule can pop up when a project has a relatively strict contribution agreement (a CLA or even CAA), but wants some leeway to accept short fix patches from drive-by contributors. The obvious problem with this is that on one hand someone can be very original even in 5 lines (think haiku), while on the other one can have pages and pages of absolute fluff or just plain raw factual numbers. 

  6. This varies from jurisdiction to jurisdiction. The Berne convention stipulates at least 50 years after the death of the author as the baseline. There are very few non-signatory states that have shorter terms, but the majority of countries have life + 70 years. The current longest copyright term is life + 100 years, in Mexico. 

  7. The only improvement is that it avoids messing up the Git/VCS history. 

  8. In practice what the Author field in a Git repository actually includes varies quite a bit and depends on how the committer set up and used Git. 

  9. Of course, I did not go through all of the copyright laws out there, but I checked a handful of them in different languages I understand, and this is the pattern I identified. If anyone has a more thorough analysis at hand, please reach out and I will happily include it. 

  10. Just think about it, pretty much every time you create a new account somewhere online, you are asked for your e-mail address, and in general people rarely change their e-mail address. 

  11. As stated before, in most jurisdictions that is 70 years after the death of the author. 

  12. I suspect many of the readers not only can imagine one, but have seen many such projects before ;)

  13. Granted, MIT code embedded into AGPL-3.0-or-later code is less risky than vice versa. But simply imagine what it would be like the other way around … or with an even odder combination of licenses. 

Sunday

22 March, 2020

As promised on our mailing list back in April 2019, we are doing semi-regular updates on the state of KPhotoAlbum. To increase the visibility, from now on we also publish these reports on our project website.

So, what happened since the last update and how did we hold up?

Integrating KPhotoAlbum closer with the greater KDE community

So far, this goal is doing quite well. A visible indicator of this is the new website, which is not just good-looking, but visually in line with other KDE project websites.

On a personal note, I went to FOSDEM this year. Unfortunately, my time with other KDE people was very limited (to put it mildly), as I was occupied with FSFE topics. I did, however, say hello at the KDE booth, and was very touched by the warm welcome there. Bhushan immediately recognised me and handed me a KDE nametag, and I had a nice chat with Nicolas about some Purpose issue I was having.

KIPI Deprecation and Purpose Integration

KPhotoAlbum 5.6 was the first major version with support for the Purpose plugin framework, and the last major version to support the now unmaintained KIPI plugin interface.

The Purpose plugin framework is already supported by a number of applications, such as Spectacle and Gwenview. At this time, Purpose plugins are focused around sharing images with devices or web services.

At its prime, KIPI had quite an assortment of useful plugins. If there is a specific feature that you would like to implement as a Purpose plugin, contact us so that we can make it happen.

The Great Refactoring

As promised, this goal will stay with us for the years to come. Still, we would have liked to spend more time on refactoring KPhotoAlbum to reduce the technical debt of the codebase.

What’s New?

Geolocation Mapping

KPhotoAlbum 5.7 will see a complete rewrite of our map feature.

We have put a good amount of work into replacing libkgeomap with Marble. By using Marble directly instead of depending on libkgeomap as a wrapper, we could both simplify the code for our map widget (one less layer of abstraction) and add new features to the map.

On the downside, as with every rewrite, there may be some new bugs that we haven’t found yet.

Tooling

Sometimes, a graphical user interface won’t cut it. For that reason, Robert has contributed the kpa-filter and kpa-merge scripts for working with database files on the commandline.

Robert is working on a replacement for kpa-merge, called kpa-util. The new script will allow users to add tags to existing categories, and to tag images without the need for a graphical user interface.

KDE Community Goals

The KDE community has decided on three goals to focus on for the next couple of years.

We already have some ideas on how to improve KPhotoAlbum regarding the Consistency goal. If you have further suggestions and ideas we would love to hear them!

— Johannes

Here are the numbers of acquisitions for the last 30 days (roughly equal to the number of installations, not mere downloads) for our applications:

A nice stream of new users for our software on the Windows platform.

If you want to help bring more of the software KDE develops to Windows, we have a meta Phabricator task where you can show up and tell us which parts you want to work on.

A guide on how to submit things can be found on our blog.

Thanks to all the people that help out with submissions & updates & fixes!

If you encounter issues on Windows and are a developer who wants to help out, all KDE projects really appreciate patches for Windows-related issues.

Just contact the developer team of the corresponding application and help us to make the experience better on any operating system.

KDE developers have started pumping out some seriously excellent new features for Plasma and apps releases this week, with more stuff on the way soon! In addition, many bugs were fixed, and the UI polish continued apace. Take a look!

New Features

Bugfixes & Performance Improvements

User Interface Improvements

How You Can Help

In Plasma 5.19, we are making a push on our Breeze Theme Evolution work. It’s proceeding, but would go faster with your help! There are tons and tons of mockups in the linked task and its child tasks, and what we really need at this point is people willing to help implement them. QML skills are helpful, and C++ is also useful for the needed work on the Breeze theme itself. If this sounds interesting to you, don’t be shy, step right up! Head over to the VDG channel to find out how you can get involved and coordinate work.

More generally, have a look at https://community.kde.org/Get_Involved and find out more ways to help be part of a project that really matters. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don’t have to already be a programmer, either. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

Finally, consider making a tax-deductible donation to the KDE e.V. foundation.

Saturday

21 March, 2020

  • Upgraded the flatpak to Qt 5.14 (hoping to get my hands on the new markdown support), which resulted in discovering a regression for QSet properties.
  • The flatpak now employs a patch so the pinentry tool just uses libsecret as a cache, which means that if you run gnome-keyring you get password-less logins (and if somebody finishes ksecretservice, that would of course work too). I have also looked into getting access to the host gpg-agent (which seems like the better solution), but that effort is currently stuck due to a missing feature in bubblewrap and because it’s not entirely clear if this will really be the way to go forward. Feel free to weigh in.
  • Fixed a bunch of rendering issues in invitations and calendar.

Kube Commits, Sink Commits

Previous updates

More information on the Kolab Now blog!

“Kube is a modern communication and collaboration client built with QtQuick on top of a high performance, low resource usage core. It provides online and offline access to all your mail, contacts, calendars, notes, todo’s and more. With a strong focus on usability, the team works with designers and UX experts from the ground up, to build a product that is not only visually appealing but also a joy to use.” For more info, head over to: kube-project.com

Introduction

I started my professional career in an archipelago and I have been involved in Open Source for years, so managing remote software-related teams, departments and even organizations has been the default for me. I have also worked as a consultant in a remote-friendly environment and now I am working at MBition remotely. I believe I am familiar with many aspects of The Remote Journey, which is a topic I am interested in beyond my work, since it is tightly related to the way of life I want to live.

Remote work is a fairly mature topic at the individual (software development), team and department level. It is maturing at the company level too, which means that there are already resources on the internet that cover most of the basic questions and topics that companies struggling today with moving from co-located directly to remote-only environments might have.

This forced shift will have a major impact on every aspect of the company, on you as a professional and on the way you understand your work, which is why I strongly recommend that you make yourself and your colleagues aware of the challenge and embrace it. To me that means at least two things:

  • Be ready to challenge most of what you currently know about how you and your colleagues work.
  • Be open to learning. Read and talk about what you and your colleagues are going through, assuming that no, you do not have the situation under control. You will need to learn again what “under control” means.

The good news is that this worldwide crisis will change the way we all see remote work, hopefully for the better.

The Remote Journey

The journey from being co-located to a remote-only environment has different stages. There is no agreement on how to name them but in general, I will define them in the following way:

  • Co-located: teams/departments are located in the same physical location (office).
  • Distributed: teams/departments are located in different physical locations (office).
  • Remote-friendly: the workforce of the organization can work at a different location than the office part of the time. A mature remote-friendly environment has a minority but significant part of the workforce working remotely and coming to the office frequently. Usually these workers are related to sales, support, business development… When it comes to product development or services, those remoters are senior professionals with wide experience in remote working, so they can overcome the technical, process and cultural gaps they face on a daily basis due to living in a co-located culture.
  • Remote-first: most of the workforce works remotely. The office is usually reduced to specific areas of the company like labs, administration, HR, junior developers… Mature remote-first environments usually have their workforce distributed across different time-zones/countries.
  • Remote-only: there is no office or when there is, working from it is voluntary. Employees are supposed to work from home/coworkings, including supporting services and departments like admin, HR, etc.

You can read a little about these definitions here:

There is one undeniable fact though: remote-only organizations not only exist, they are successful. In my opinion, it is up to each organization how they want to move through this journey and which stage is their target. Obviously the current crisis has left many companies with no choice but to skip most stages and go directly to being a remote-only organization for a while, but they can still learn from other people’s journeys. There are plenty of additional articles about the different stages and how routines, processes, methodologies, performance, evaluations, etc. are affected at every level (individual, team, department and organization). They should be easier to find now that you know some of the nomenclature.

Team ceremonies need to adapt

It is my belief that, in general, habits change mindsets rather than the other way around. When walking through The Remote Journey together with teams and organizations, I put emphasis on the ceremonies as a way to drive the needed change at every level: personal, team, department and organization. If you successfully adapt the ceremonies, you are in a great position to modify people’s habits.

Personal ceremonies are just that, personal. I will not get into them. There is plenty of literature on the internet about how to approach remote work, its advantages and challenges, and how to deal with them. I have my own routines. They are not static, although some of them have been with me for some time now. Some have been affected by the confinement we are under right now in Spain, so I am adapting them and evaluating how they work. My advice in this regard is to read about other people’s routines, identify yours, track them and experiment to find the right combination. Again, assume they will evolve over time.

I have written several articles about team ceremonies in the past. These articles have helped me to explain certain basic topics. You will need to find the routines that work for you and your colleagues though, in the same way as at the individual level. The articles were written mostly with team leads at any level in mind, but hopefully there is plenty of useful stuff for team members too:

  • Working in distributed / remote environments 0: presentation.  Motivations, introductions and some definitions. The article includes the link to the rest of the series.
  • Working in distributed / remote environments 1: daily short meetings. Tips and recommendations about adapting the team daily meeting that is so popular on colocated environments.
  • Working in distributed / remote environments 2: the calendar tool. Comments about the increase of relevance for any team that the calendar has in remote environments together with some tips on how to use it.
  • Working in distributed / remote environments 3: weekly meetings (I) and (II). Team meetings are an essential ceremony to drive change, detect issues early and solve conflicts. They are essential in remote environments. These two articles provide an overview of how relevant they are, why, and how to adapt them to the remote nature of the team.
  • Working in distributed / remote environments 4: one on ones (1:1s). In co-located environments, 1:1s are not perceived as a priority by many. Only when the organization grows beyond a certain point does this ceremony get the attention that, in my opinion, it deserves. In remote environments you cannot wait to get “big enough”. The nature of the work environment forces you to establish proactive measures to align, define expectations based on company, department, team and individual goals and evaluate progress together with the workforce. The articles provide tips to adapt 1:1s to a remote environment.

You will see that in the articles I use the term distributed and remote environments (DRE) to avoid referring to the different stages of the journey. This is for simplicity. Ceremonies might slightly change depending on the stage the organization is at.

Companies

It is always good to have references, right?

Remote-first and especially remote-only companies need to pay an extraordinary amount of attention to company culture. They usually provide plenty of resources about this topic to their employees. These organizations start by hiring experienced remoters, but as they grow, they realize they need to educate their workforce in remote working, which requires the development of content. These two links might be a good starting point to find the right companies and see what they are doing and why:

  • FlexJobs is an online job board specialised in remote work. They publish the main list of companies walking through their Remote Journey regularly.
  • If you are focusing on tech company culture, you will probably prefer this article.

Reports about remote work

There are three interesting reports I recommend reading if you are interested in the remote work topic:

I have used them in the past to open conversations about this topic with managers and HR departments, for instance.

Books

I have to admit that I haven’t found yet THE book about the topic, and I have been searching for years. This is the one I recommend:

  • Remote: Office Not Required. It is from 2013 but still (sadly) the best. It is based on the experience of 37signals (now Basecamp). Its authors have accumulated extensive experience with remote organizations since then.

If you are not into buying books (what?), this is an online free book written by Zapier, a popular company in this field.

Social media

  • Twitter: follow the hashtags #remotework #workfromhome . Besides plenty of advertisements from coworkings, you will find useful resources once in a while as well as blog posts.
  • Other social media references: those of you interested in digital nomads or digital travelers, can follow #digitalnomad hashtag on Twitter. I joined some time ago the Digital Nomads Telegram channel.

Events

There are really good ones out there to learn about this topic:

  • The Remote Work Summit: a remote event with many interesting talks and material. You can get free passes to some content and online talks.
  • Nomad City: this event takes place in Gran Canaria, Spain (in English). It is a great one to meet digital nomads, digital travelers and remote workers as well as remote organizations leaders.
  • CoworkingEurope: it is not directly related to remote work but to flexible working spaces, but you can find useful references to companies and processes to follow from there. I have worked in coworking spaces at different locations around Europe. They are a great source of remote work knowledge.
  • I have spent the last couple of years trying to join the Nomad Cruise. Let’s see if this year…

Summary

In my experience, going from co-located to remote-only environments changes way more things than you expect at first. Keeping high levels of efficiency, alignment and workforce satisfaction requires time and effort. Do not underestimate that. The good news is that, although not at the speed this crisis is forcing on many, plenty of people and tech organizations have experienced such a transition. Some have published lots of content about their experiences moving through The Remote Journey or living and growing at a specific stage. Look for that content on the internet. Hopefully it will be easier to find after reading this article.

A single man’s experience is very limited though. You probably have experience too. Please share it, as well as links to further content.

Thursday

19 March, 2020

We are happy to announce the release of Qt Creator 4.12 Beta2!

FOSS-North has been canceled this year, like so many events, due to COVID-19. There will be a virtual conference instead, live-streamed on YouTube.

The KDE Community Day, which was going to happen before / in parallel to the conference, is also canceled. It is not responsible to get together right now. We will be looking at other ways to build the community and maintain the ties we have during the lockdown period.