January 02, 2019

I delivered a talk about writing a refactoring tool with Clang Tooling at code::dive in November. It was uploaded to YouTube today.

The slides are available here and the code samples are here.

This was a fun talk to deliver as I got to demo some features which had never been seen by anyone before. For people who are already familiar with clang-tidy and clang-query, the interesting content starts about 15 minutes in. There I start to show new features in the clang-query interpreter command line.

The existing clang-query interpreter lacks many features which the replxx library provides, such as syntax highlighting and portable code completion:

It also allows scrolling through results to make a selection:

A really nice feature is value-based code completion instead of type-based code completion. Existing code completion only completes candidates based on type compatibility: it recognizes that a parameterCountIs() matcher can be used with a functionDecl() matcher, for example. If the matcher already on the command line is sufficiently constrained that there is only one matching result, code completion can offer candidates based on that one result node:
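As a rough illustration (prompt and matcher chosen for this example, based on the matchers named above), both styles of completion come into play while typing a query such as:

    clang-query> match functionDecl(parameterCountIs(2))

Type-based completion can offer parameterCountIs() as soon as the cursor sits inside functionDecl(); value-based completion goes further and suggests candidates drawn from the single node the partially typed matcher already matches.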

Another major problem with clang-query currently is that it is hard to know which parenthesis represents the closing of which matcher. The syntax highlighting of replxx helps with this, along with a brace matching feature I wrote for it:

I’m working on upstreaming those features to replxx and Clang to make them available for everyone, but for now it is possible to experiment with some of the features on my Compiler Explorer instance on ce.steveire.com.

I wrote about the AST-Matcher and Source Location/Source Range discovery features on my blog here since delivering the talk. I also wrote about Composing AST Matchers, which was part of the tips and tricks section of the talk. Over on the Visual C++ blog, I wrote about distributing the refactoring task among computers on the network using Icecream. My blogs on that platform can be seen in the Clang category.

All of that blog content is repeated in the code::dive presentation, but some people prefer to learn from conference videos instead of blogs, so this might help the content reach a larger audience. Let me know if there is more you would like to see about clang-query!

During the last months I had the opportunity to participate in Google Code-in 2018 as a mentor for the KDE Community. I've created tasks and assisted pre-university students aged from 13 to 17 in their first contributions to free software projects. My focus was on KDE Partition Manager, because I'm contributing to it as a … Continue reading Google Code-in 2018 – My First Experience as a Mentor in KDE

2018 is over and 2019 starts. This is a great opportunity to look back, reflect and to try to look into the future. I predict that 2019 will be a very good year for privacy, open source and decentralized cloud software. Maybe even the mainstream breakthrough of federated and decentralized internet services!

Let me explain why:

The mainstream opinion about centralized services started to change in 2018 and I think this trend will continue in 2019. More and more people see the issue with large, centralized data silos that control more and more of our private lives, democratic processes and society as a whole. Some examples from 2018 where bad news hit the press include:

  • The never ending list of Facebook scandals: Wired
  • Twitter election meddling: BostonGlobe
  • Amazon Alexa is listening to private conversations and is leaking the data: Heise and BusinessInsider
  • Dropbox is leaking private data: TechTarget
  • Google Plus is insecure and will shut down: CNBC

This year, Europe introduced the GDPR to regulate the collection of private data. I believe it is a good start, and I think we ultimately need rules as described in the User Data Manifesto.
I expected that people in the US and Asia wouldn't take the GDPR seriously and would make fun of Europeans' tendency to 'over-regulate'. So I was surprised to see that the GDPR was widely praised as a step in the right direction. People in Asia and the US are already asking for similar regulations in their markets, and California has already announced its own variant of the GDPR with the California Consumer Privacy Act.

This clearly shows that the world is changing. People realize more and more that extensive centralized data collection is a problem. This is an opportunity for open source and decentralized and federated alternatives to enter the mainstream.

At Nextcloud we have become widely recognized as one of the major alternatives. And this year was big for us, with three big releases introducing new technologies the world needs going forward. Let me name just a few:

  • End-to-end Encryption. In 2018 Nextcloud launched support for fully end-to-end encrypted file sync and share.
  • Nextcloud Talk. At the beginning of 2018 we launched Nextcloud Talk as a fully integrated, self-hosted, open source and decentralized chat and audio/video call solution.
  • Just a few weeks ago we launched Social with ActivityPub support to integrate with Mastodon and other projects of the Fediverse.
  • Simple Signup. In summer we launched the Simple Signup feature to make it possible for new users to sign up at one of the Nextcloud providers directly from the mobile and desktop apps.
  • We launched our unique Video Verification feature to become the most secure file-sharing platform.
  • In summer we announced the initiative to ship Nextcloud preinstalled on millions of NEC routers, something that will take off in 2019; you might have seen the prototype devices on social media.
  • This fall we launched the Nextcloud Include program with funding from the Reinhard von König Preis for innovation. I'm happy we run this project together with my old friends from KDE.

In 2018 I traveled to more events and countries than ever before. It’s great to see how the Nextcloud community is growing all over the globe. On the company and business side we also have good news. The Nextcloud company is growing nicely in all areas. There will be separate news about this soon.

Of course it's the mission of Nextcloud not to do everything alone. This is why we launched a lot of integration projects in 2018, for example with Rocket.Chat, Moodle, StorJ, Mastodon and others. I'm really happy to see other open source and decentralization projects doing as well as Nextcloud.

I think 2019 could be the year where open source, federated and self-hosted technology hits mainstream, taking on the proprietary, centralized data silos keeping people’s personal information hostage. Society becoming more critical about data collection will fuel this development.

If you want to make a difference then join Nextcloud or one of the other projects that develop open source, decentralized and federated solutions. I think 2019 is the year where we can win the internet back!

We need to talk about calories! Not the calories from your Christmas cookies — those don’t count. But, calories in your Qt application. We’re going to take a look at a technique that is easy to enable and helps you save precious bytes around your application’s waistline.

The Old vs The New

Traditionally, you would build your application by letting the compiler translate your .cpp source files to machine code. The result is stored in .o object files, which we then pass over to the linker, to resolve references between the files. At this point, the linker does not change the machine code that was generated. This division of work between the compiler and the linker allows for quick development cycles. If you modify one source file, only that file gets recompiled and then the linker quickly re-assembles the application’s binary. Unfortunately, this also means that we are missing out on an opportunity to optimize.

Imagine that your application has two functions: main() in main.cpp and render() in graphics.cpp. As an experienced developer, you keep all your graphics code encapsulated in the render() function — anyone can call it from anywhere! In reality, it is only the application’s main() that calls render(). Theoretically, we could just copy and paste the code in render() into main() — inlining it. This would save the machine code instructions in main() to call render(). Once that’s done, we may even see opportunities to reuse some variables and save even more space and code. Now, if we tried to do this by hand, it would quickly escalate into Spaghetti code with lots of sauce.
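As a minimal sketch of that example (file contents invented for illustration):

    // graphics.cpp
    void render()
    {
        // ...all the encapsulated drawing code...
    }

    // main.cpp
    void render(); // in practice declared in a shared header

    int main()
    {
        render(); // a candidate for inlining at link time
        return 0;
    }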

Luckily, most compilers these days offer a technique that allows you to apply such optimizations (and deal with the spaghetti mess) while retaining the modularity and cleanliness of your code. This is commonly called “Link Time Optimization” or “Link Time Code Generation”. The latter best describes what really happens: instead of compiling each source file to machine code one by one, we delay the code generation step until the very end — linking time. Code generation at linking time not only enables smart inlining of code, but it also allows for optimizations such as de-virtualizing methods and improved elimination of unused code.

Link Time Optimization in Qt

To enable this technique in Qt, you have to build from source. At the configure step, add -ltcg to the command line options. We thought hard, and this is the most cryptic and vowel-free name we could come up with :)
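For example (the remaining configure options are elided here; they depend on your platform and licensing choices):

    ./configure -ltcg [your usual options]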

To demonstrate the effectiveness of Link Time Code Generation, let’s look at a fresh build of the Qt 5.12 branch, compiled with GCC 7.3.0 for ARMv7 against an imx6 Boot2Qt sysroot. For analysis, we’re going to use Bloaty McBloatface (https://github.com/google/bloaty), which is a lovely size profiler for binaries. The Qt Quick Controls 2 Gallery, statically linked, serves as a sample executable. When running bloaty on it, with a regular Qt build, you’ll see output like this:

    VM SIZE                      FILE SIZE
 --------------                --------------
   0.0%       0 .debug_info      529Mi  83.2%
   0.0%       0 .debug_loc      30.4Mi   4.8%
   0.0%       0 .debug_str      18.6Mi   2.9%
   0.0%       0 .debug_line     14.2Mi   2.2%
  68.1%  13.9Mi .text           13.9Mi   2.2%
   0.0%       0 .debug_ranges   9.60Mi   1.5%
   0.0%       0 .debug_abbrev   6.29Mi   1.0%
  29.5%  6.01Mi .rodata         6.01Mi   0.9%
   0.0%       0 .strtab         3.17Mi   0.5%
   0.0%       0 .symtab         2.35Mi   0.4%
   0.0%       0 .debug_frame    1.80Mi   0.3%
   0.0%       0 .debug_aranges   485Ki   0.1%
   1.2%   249Ki .data.rel.ro     249Ki   0.0%
   0.3%  68.2Ki .ARM.extab      68.2Ki   0.0%
   0.2%  38.2Ki .bss                 0   0.0%
   0.1%  30.3Ki [25 Others]     35.4Ki   0.0%
   0.1%  30.3Ki .got            30.3Ki   0.0%
   0.1%  24.1Ki .ARM.exidx      24.1Ki   0.0%
   0.1%  15.1Ki .dynstr         15.1Ki   0.0%
   0.1%  13.6Ki .data           13.6Ki   0.0%
   0.1%  13.2Ki .dynsym         13.2Ki   0.0%
 100.0%  20.4Mi TOTAL            637Mi 100.0%


The “VM SIZE” column is what’s particularly interesting to us — it tells us how much space the different sections of the program consume when loaded into memory. Here, we see that the total cost is ~20.4 MiB.

Now, let’s compare that to a build with -ltcg enabled.

Comparison between regular and LTCG build

The new VM size is 17.3 MiB — that’s nearly a 15% reduction in cost, just by passing a parameter to configure.

The gain is this drastic because we chose a static build. However, even when you use a dynamic build, this optimization is worth it. In that case, LTCG is applied at the boundary of each shared library.

Bloaty can show this by comparing a regular build against an LTCG-enabled build of libQt5Core.so.5.12.0:
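The invocation is roughly as follows (file names are illustrative); bloaty's diff mode takes the new binary, then --, then the baseline to compare against:

    bloaty libQt5Core.so.5.12.0.ltcg -- libQt5Core.so.5.12.0.regular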

    VM SIZE                      FILE SIZE
 --------------                --------------
...
 -53.8%     -28 [LOAD [RW]]          0  [ = ]
...
 -11.9% -1.78Ki .got           -1.78Ki -11.9%
  -0.2% -3.05Ki .rodata        -3.05Ki  -0.2%
 -10.0% -3.54Ki .rel.dyn       -3.54Ki -10.0%
 -17.2% -7.52Ki .ARM.exidx     -7.52Ki -17.2%
 -16.9% -18.4Ki .ARM.extab     -18.4Ki -16.9%
...
 -21.2%  -691Ki .text           -691Ki -21.2%
 -13.9%  -727Ki TOTAL           -838Ki -13.8%


The linker produced a smaller library with less code, fewer relocations, and a smaller read/write data section.

Conclusion

At this point, this seems like a win-win situation, and you may wonder: why isn’t this enabled by default? No, it’s not because we’re stingy :)

One issue is that in the Qt build system, this is currently a global option. So if we were to enable it for the Qt binaries, everyone using them would be slowed down and would have to opt out explicitly in the build system. We’re working on fixing that, so that eventually we can ship Qt with LTCG enabled, and you can then enable it at the application level.

Another issue is that by delaying the code generation to link time, we are increasing the time it takes to go from modifying a single source file to a new program or library. It’s almost as if you touched every single source file every time, making it less practical for day-to-day use. But this optimization definitely fits well into the release process, when creating your final build. So, your Release Manager can use it.



Twice a year (on that note, happy new one!), the KDE e.V. board of directors comes together for an in-person meeting, taking care of business. It's become a tradition that on one of the two meeting days, the board hosts a dinner event open to KDE users, contributors and other interested parties.

The board's first meeting of 2019 will be in Seoul, South Korea, and the dinner will be held on Saturday, January 19th in central Seoul.

If you're interested in attending (us five board members aside, we've already had the pleasure of confirming the attendance of many notable local community members - it won't be boring!) and talking all things KDE and FOSS over good food with us, please drop me a mail. Modulo available space, I'll get back to you with details on the time and location as soon as we've finalized both.

January 01, 2019

So, two years ago I thought porting Krita to iOS or Android might make a dandy research project. A bit of context: if I spend 500 hours a year on an approved R&D project, I get a tax break. Plus, I like doing new stuff now and then. My 2018/2019 R&D project is Resource Management for the 21st Century; a previous one was Python Scripting.

In 2016, there wasn’t a decent Android tablet with a pen available anymore. The Wacom Cintiq Hybrid Companion is stuck on an ancient version of Android and wasn’t being made anymore, and Samsung’s Note tablet was an older model. The iPad Pro was new, so I decided to experiment with that. I got myself an iPad Pro, a Pencil and…

I tried to put a simple little example application on the iPad. I found something that demonstrated using the Pencil, and then discovered that Apple wouldn’t allow me to put code I had built myself on my iPad. I needed a developer account and keys and everything.

I told myself I would investigate that, but never had time to dig in.

Then in 2017, I gave the Cupertino Shylock the 99 ducats it wanted, and got the account. Mainly so we could sign our macOS builds and disk images — Apple making it deuced hard for people to run unsigned software. Now they’re going to make it even harder — they want applications outside the macOS App Store to be notarized. But I digress…

So, now, at the end of 2018, in the week off I usually allow myself between Christmas and New Year’s Eve, I finally sat down to experiment a bit.

First, I loaded the test application I had selected in XCode. I plugged my iPad into my MacBook Pro — for the first time since I had bought the hardware! Stuff happened, I had to click through various dialogs, and then the device popped up in XCode.

It was quite difficult to actually find where to put my Apple ID as the “Team” — it didn’t work to simply tell XCode what to sign the application with; it needed something it chose to call a “Team”.

But then everything worked! Yay!

Okay, next step. Get a Qt application running on the iPad. I downloaded Qt again — usually I build it myself with a bunch of patches, but I didn’t want to try to build Qt for iOS myself, nor mess with the development tree I use for Krita.

Qt’s documentation was excellent, and pretty soon I had the Tablet example application running on the iPad. It looks a bit weird, because that’s a QWidget-based application, but that’s fine. ClipStudio Pro on iOS also is a compleat Desktop Application, with popup dialogs and menus and everything, so I am sure Apple wouldn’t mind… And the Pencil was supported really well, so that’s very hopeful.

Now I only had to make one more experiment before starting to tackle maybe porting Krita: port the Tablet example to CMake, load that in Qt Creator and use Qt Creator to build it and deploy it to my iPad.

Well, that was my Waterloo. CMake doesn’t officially support iOS yet. GCompris, which does support iOS, does so by providing a qmake file and some manual instructions. Google turns up a ton of conflicting advice, some old and outdated, some newer and more hopeful. I have made a start on it, but no dice yet. If you know how to make CMake build and deploy to an iPad, answers on a postcard, please!

Would you like to speak at foss-north 2019? This is an excellent opportunity to come join a great conference on the Swedish west coast and meet the free and open source community!

The call for papers is open until late February – but it always helps if you submit your talks early. Submissions are made here. We gather a mix of different speakers – experienced speakers and first-timers, technically detailed and process-oriented, new contents and really old stuff. As long as it is interesting and fun, you are more than welcome. You can check out videos and slides from past events for inspiration.

There is also an opportunity for projects that want to do something fun to join in. Be it a development sprint, an install fest, a workshop or just a general meetup – join in and be a part of the foss-north community day. Reach out to me (e8johan, gmail) and I’ll tell you more.

One of the reasons I joined Codethink was that I could work from my home island, La Palma, Canary Islands, Spain. Sadly, a few months after joining, I came to the conclusion that my travel agenda was not compatible with living on one of the smallest islands. There was no good enough internet there back then, either. I had to choose between moving to one of the most populated islands (Tenerife or Gran Canaria) or moving somewhere else. I ended up choosing Málaga, on the Spanish mainland, where I had already lived some years back.

I still visit the island in summer and at Christmas though, when my work activity is lower, and work remotely from there. Last year optical fiber finally arrived at my place, since coverage is slowly increasing. There are no coworking spaces yet, but the digital nomad movement is so hot on other islands that it is just a matter of time before it develops here too. This island has so much to offer…

But there is some homework to be done. The first task is to create a community of people passionate about technology, formed by those who live here and those who visit the island regularly or occasionally. From there, it might be possible to attract new visitors.

So after some thought and conversations with friends, I decided to try out a Meetup group in La Palma. There are no active ones. This new group is called:

San Miguel de La Palma tech lovers

La Palma is a popular name and many confuse it with Las Palmas, the biggest city and capital of Gran Canaria, so I used the full official name of my home island, San Miguel de La Palma, to (hopefully) avoid misunderstandings.

I do not have previous experience organising Meetup groups, but I am not new to organising tech events, so I hope I can do well enough to kick off a group where others can feel welcome and take the lead in organising activities and even managing the group.

This island has a sizeable German and northern European community that lives here during part of the year, so it is my intention to organise the group as a multi-language one, which makes things harder at the beginning. When it comes to topics, the scope of the group needs to be wide at first, since the potential audience is small in number and dispersed in interests. Time will tell if we can focus on two or three topics like remote work, open source, programming, etc.

I would like to complement talks and workshops with other activities like hiking or mountain biking, which are extremely popular on the island.

So this is just the initial step in a journey I will enjoy. Feel free to join the meetup group. Hopefully we will announce the first activity in the coming weeks.

December 31, 2018

Kasa - The personal finance tracker for the hashtag era. Randomized testing data displayed.

It's that time of the year again when daylight is only seen from office windows and the darkness of evenings comes way too early. Getting much more home time, I wanted to finally get a better overview of my spending. Fortunately we live in the age of apps - you need something? There's an app for it. I searched for some of those do-it-all-for-you apps, but privacy was my primary concern. Pretty much all the popular personal finance apps take your financial data and upload it to some server to crunch it. I find that completely unnecessary. It's just a bunch of numbers; why does it need to live on a remote machine where I have zero control over it and zero control over who has access to what I spend my money on? All I want is to get the transactions file from my bank every week, throw it at some app, put some tags on things and see where my money goes. Simple enough to not require any fancy servers, right?

So I turned my search to the open source ranks, as I know there are some popular apps like KMyMoney or GnuCash. I tried all of them and I wasn't happy with any of them. KMyMoney and Skrooge, the duo from the KDE Applications suite, are swiss-army knives that I personally find hard to navigate for my simple needs. Skrooge especially - after the import it generates a dashboard with 22 randomly sized widgets in no particular order. It felt so overwhelming and intimidating that I knew it wasn't for me and I had to close it immediately (sorry Skrooge). GnuCash takes 7 seconds to load for whatever reason and I couldn't figure out how to categorize transactions, which is the most important feature to me. Then there's HomeBank. This one came closest to what I was looking for, but it was missing some features that I wanted, one of which is showing the actual amounts for tags. Also the categorization feels needlessly complicated. I feel like all the pieces are there, they're just not put together the way I wanted.

At the same time I was really missing coding in Qt. My last two years have been Android Java only, and if you've ever worked with that, you know how frustrating it can get at times. (Very, in case you've never worked with it.) Although it's helped a lot by Android Studio, which is just the best IDE I've ever used. Anywho, so there I was, wishing to do some C++/Qt again on long winter evenings, without an app I wanted. So I just started coding - cause how hard can it be?™

Enter Kasa

It's a very simple app - you throw an OFX file at it, it stores all the transactions from it in a local database, you put tags on them and then you profit. Nothing more, nothing less. For now. The functionality at this point is extremely basic, it's pretty much just a glorified spreadsheet, but it already covers lots of my needs.

So what can you actually do? You can (multi-)tag and edit transactions. The tags are then put into a pie chart which shows you exactly where your money goes, tag by tag. You can explore your transactions by tags. You can also set a custom date range for the data being shown; this way you can immediately get a report for the last 7 days, the last 17 days or the last 7 months, fully up to you.

Here be dragons

Kasa right now is lightyears away from something I'd consider releasing into production. It's pre-pre-pre-alpha, but it's good enough to put it out there and get some extra eyes and hands on it. At this point it needs both code and design. Mainly design. Please get in touch if this sounds interesting to you and you'd like to contribute!

What's next

The main thing I'm working on right now is to provide a much better tagging experience, including bulk tagging, tag auto-completion and easy tag removal. After that I'm going to add a setting where you can define your statement period; the date range will then default to that on start (and it will auto-advance each month too). Then it's in serious need of error handling. Right now it's virtually non-existent; it always assumes a valid OFX file and so on. This will be crucial. Another thing I want to work on is multi-account support. While the database is already set up for that, the rest of the app isn't really; right now I'm mostly focusing on my single credit-card account use case. Next, I want to have some budgeting options per tag and then track how the budget is doing. E.g. set 100/month for eating out, then see how much was already spent and how much is left. Etc.

Get it (the code)

The app is all open source and you can get it and build it from here. All you need is Qt 5 (including QtCharts), CMake and libOfx.
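Assuming those dependencies are installed, a standard CMake build along these lines should work (a sketch only; the repository URL sits behind the link above):

    git clone <repository-url> kasa
    cd kasa
    mkdir build && cd build
    cmake .. && make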

Potential FAQ:

What does "Kasa" mean?

The word means "cash register" in Czech. I started with it only as a codeword for the project but then it kinda stuck. I may still rename it somewhere down the line.

Is/Will Kasa be for me?

My ultimate end goal for this project is to be a good personal finance tracker - not manager. I simply want this app to show where the money goes each week/month/quarter based on bank provided files (with manual input later as well). If you want/need anything beyond that, stick with the existing apps like KMyMoney, Skrooge, GnuCash or HomeBank, they're much better suited for that.

Why not help improve existing apps?

I did consider it of course, but there are a couple of reasons - my time for this is extremely limited. All the existing apps are pretty big, and I wanted to spend my limited time actually writing Qt code rather than trying to understand the existing codebases and then trying to change those into what I wanted to have. I just wanted to code something and I wanted to scratch my itch. So I put those two together.

QML/Kirigami?

Yes please! At this point I'm focusing on the data layer and business logic and I wanted the UI to remain as simple as possible, for which QWidgets are good enough. But at some point I definitely want to write a QML UI for this. QItemDelegates are... not so great :) Patches welcome!

When will this be ready for regular use?

I have no dates set. It's just a hobby project to refresh my C++/Qt knowledge and I work on it every now and then. Who knows, maybe it will never see an actual release. But if you wanna help, code or design, be sure to let me know!

That said, happy 2019!

As 2018 ends, I've been working on a new landing page to show my work. I've been playing with Vue.js for a while, and it is quite an amazing framework =D Using the Vuetify framework, which is built on top of Vue.js, I was able to build a new landing page with information about me and the stuff... Continue Reading →

It’s the last day of the year, and statistics are fun! So, let’s see what we did this year, in fairly meaningless numbers!

Code

We made ten releases, as mentioned before. Over the past twelve months, 47 people committed a total of 2,842 commits to the master branch of Krita. That’s up from 2,260 commits in 2017. The top-ten list looks like this:

  1. 1,026 Boudewijn Rempt
  2. 591 Dmitry Kazakov
  3. 260 Wolthera van Hövell tot Westerflier
  4. 186 Scott Petrovic
  5. 172 Michael Zhou
  6. 121 Jouni Pentikäinen
  7. 107 Alvin Wong
  8. 104 Ivan Yossi
  9. 56 Andrey Kamakin
  10. 46 David Revoy

Collectively we removed 648,887 lines of code and added 996,142 lines of code. Of course… lines of code and numbers of commits don’t say a whole lot. But we’ve currently got 580,268 lines of C++ and 12,054 lines of Python out of a total of 607,193 lines of code. There are 30 libraries, 151 plugins, and 243 automated tests (of which 5 are failing).

Bugs

We ended the year 2018 with 395 open bugs. That’s 228 fewer bugs than we ended 2017 with! We received 1312 new bug reports and closed 1540 bug reports. Krita is the second busiest project in the KDE bugzilla database, directly after the Plasma desktop shell. Boudewijn closed 885 bug reports, Dmitry 220, Scott 73, Raghukamath 73, Wolthera 57, Jouni 35, Alvin 34, Emmett 25. The quickest we fixed a bug was within 2 minutes.

Wishes

There are now 374 wishes in bugzilla, 40 more than we ended 2017 with. People made 148 new wishes, and we closed 107.

Money

Through Paypal (both direct and via Mollie) we received 40,171 euros, down 2% compared to 2017. We started working with Mollie in August; that added 15,589.98 euros in revenue through bank transfers, bitcoin and credit cards. Our total income was 55,760.97 euros, down from 70,480.70 in 2017. (That was an exceptional year, though, because we got so many donations to help out with the tax problems.) We don’t have an overview of what we spent on development and sprints yet.

New stuff in the official FreeBSD repositories! The X11 team has landed a newer version of libinput, opening up the way for KDE Plasma 5.14 in ports. That’s a pretty big update and it may frighten people with a new wallpaper.

What this means is that the graphical stack is once again on-par with what Plasma upstream expects, and we can get back to chasing releases as soon as they happen, rather than gnashing our teeth at missing dependencies. The KDE-FreeBSD CI servers are in the process of being upgraded to 12-STABLE, and we’re integrating with the new experimental CI systems as well. This means we are chasing sensibly-modern systems (13-CURRENT is out of scope).

Looking ahead for the first quarter of 2019:

  • Qt4 is scheduled for removal mid-March. That affects a lot more ports than KDE4 does. One example is leechcraft, which looks like a desktop environment to me and which I’d never heard of before. There doesn’t seem to be a release of it based on Qt5 yet, though.
  • QtWebEngine update from 5.9.5 to 5.12. WebEngine is terrible for distro packagers, especially outside of Google’s target audience. This has been lagging, but we’re now in a position to work on an update — and we’ve welcomed a new contributor who wants to make that happen.
  • Wayland! Really, it’s time to also have a KDE Plasma Wayland session on FreeBSD. I had some stuff working experimentally a year ago, but nothing that could work in the official ports tree. That’s now feasible (and then I can sit down to debug KMail Wayland issues as well).
  • Kookbook and Kolorfill. Aww, so cute.

The KDE4 ports in the official FreeBSD ports tree have been removed. I was there at the release party, in Mountain View, in 2008. And I’m here at the end of 2018 to cast some earth upon it.

The KDE-FreeBSD team has spent the past month or more, along with FreeBSD ports committers and maintainers who have other KDE4-related ports, bringing things up to date with recent KDE-Frameworks-based releases, hunting down alternatives, and making the tough call that some things are just going away. Thanks to Rene for doing the portmgr commits to clean it up (r488762, r488763, r488764 and follow-ups to remove KDE4 options from other ports).

The modern KDE Plasma desktop, KDE Applications, and the rest of the stack continue to be actively supported. As of this writing, there’s 20 ports bugs open for kde@, so I think we’re doing OK.

Photo by Ingride Costa under CC-BY 4.0

Akademy is an annual conference organized by the KDE Community. It’s the place where contributors of all kinds from past and present meet, showcase their work and discuss things that shape the future of the KDE Software. This year's Akademy was held in the TU Wien, in the beautiful and historic city of Vienna, Austria.

First of all, I'd like to apologize for being late with this post: just after reaching home, I had a minor motorcycle accident, which was followed shortly by a prolonged illness.

I've been a KDE guy since the beginning of my technology career as an open source evangelist, entrepreneur, and developer. This year, I got the opportunity to showcase my work in front of the great people I've always admired.

My talk went well, apart from the one problem that my time ran out faster than I thought and I had to stop the presentation in the middle. So I thought of writing down a summary of the talk here. Please note that since this is an article and not a presentation, the flow has changed while the context remains the same.

The Initiation

The initial part of my talk revolved around the motivation for why we do free and open source software, and philosophical aspects such as the social impact of FOSS, its impact on education, the understanding of internals made possible by its open nature, and the development of better software through community engagement and collaboration. I also covered the fact that open source furnishes competition, which in turn creates an even playing field, and finally that open source brings freedom from vendor lock-in, which is one of the major reasons we do open source in India.

History of ICT in India

One cannot learn about any geography in its present until one learns about its history. India as a country has always been at the forefront of software technology, even in the days when its economy was small. From HEC2M, India's first ever computer, in 1956, to PARAM 8000, India's first supercomputer, India and Indians have always shown a keen interest in Information and Communications Technology. As of 2017, IT and ICT-related activities contribute 7.7% to India's GDP, and the Indian IT industry is worth roughly 200 billion USD.

Not to forget that India's Pratyush Supercomputer ranks 39th in the Top 500.

Current State of FOSS in India

The current state of India in regards to Free and Open Source Software is somewhat optimistic, with more and more states of India bringing in IT policies which give priority to free and open source solutions. However, things are happening slowly, mainly due to the fact that India is a continent in the disguise of a country. With its federal system of governance and many self-governing states, the Indian Union has many IT policies, almost one for each state; the policies of the Union (a.k.a. Federal or Central) Government on these subjects, loosely related to trade and commerce, are more or less advisories to the states, and may not be mandatory for the states to follow. There are areas where it is mandatory to follow Union law or policy (e.g. national security); however, in the case of open source and open standards adoption by a state, that is not the case as of today, and we must go much deeper into the domain of law to understand the who's who, which is a subject for another day. India is a vast and dynamic country, which brings the need for local governments but also brings loopholes, as with any form of governance.

National Policy

The Government of India (Union Government) has brought forward the "Policy on adoption of open source software in Government of India" and also launched a comprehensive framework on open source software in e-Governance systems. Named "Framework on Open Source Software Adoption in e-Governance Systems", the document details all aspects of free and open source software, its various licenses, the pros and cons of each license, and the business models practiced by various open source vendors. The policy is mandatory for all Union Government agencies and offices unless there is no open source alternative to the non-free software needed for a particular purpose. This is a very positive step by the Government of India; however, the policy is limited to Government of India organizations and agencies only, and states may choose to opt in if they so desire.

State Policies

Many states in India showed a keen interest in open source software adoption even before the above-mentioned national-level policy; states like Assam, Kerala and Tamil Nadu emerged as leaders in OSS implementation in the country much earlier.

As I come from Assam, I can tell that if Assam had not had a positive attitude towards open source from a very early time in my life, I would never have been where I am today. Similar effects can be seen in the lives of people from the other states mentioned; people, at least the ones in charge of administration, business and academia, are not only aware of open source but consider it a valuable asset for the development of their society.

Below is the map of India, where you can see open source adoption patterns in different states.

Note: After coming back to India, one of my friends from West Bengal pointed out that things are improving there also: the West Bengal Government has incorporated open source in its education curriculum, which is a positive step, so the above map can be considered a little bit outdated, in a positive sense. I have kept the original map here as shown in the presentation, as a reference for people who have seen it before. The new map can be seen below (please forgive my GIMP skills).
While the state of West Bengal has given FOSS a place in its education curriculum, implementation in administration is yet to be seen; this qualifies the state to be marked green on this map, but much more effort is needed. It's a positive step by the Government of West Bengal, and we hope that they will continue to push FOSS into more areas of their state.

In the states colored yellow in the above map, a positive trend can be seen, with organic growth in the adoption and awareness of open source software. However, there is no policy by the government to promote or use open source, and people use it mostly as part of education or individually. The states colored green are the ones with a pro-FOSS IT policy, published and enforced. The small blue dots here and there are the Union Territories (federal administrations), which are also pro-FOSS due to the national policy. You might think that the green areas are very small, but remember that each of these states is a big landmass with a large population; even covering one of these states is an extremely tough job.


Areas with success - Web and Databases

Free and Open Source Software is the basis of India's web services, with all newer services being developed using FOSS technologies. I should clarify that services which predate the open source adoption policies are still running on non-free technologies, some of them legacy and unmaintained by their parent companies. The popularity of open source technologies such as PHP, Python, Node.js and OpenJDK has surged in the past couple of years. Frameworks like Django are becoming more and more common, with more and more developers understanding and accepting these technologies as their daily driver for work and play.

India's databases are open, with MariaDB and the like taking the majority share, and this can be considered a great contributor to India's efficiency in government-to-citizen service delivery.

Areas of concern - desktop computing and cross-platform application development

This is still an area of concern in India, as, despite all the push from various institutions, the desktop scene in India remains dominated by non-free operating systems and applications. Adoption of OSS by individuals is almost nil, except for a few smart folks, as seen elsewhere. Application development on the desktop is also largely limited to technologies such as .NET and Adobe Air (which is not supported on Linux anymore). Java is the only exception in some cases, and that too is limited to enterprise software.

We at Libresoft, as part of the SuperX project, have pushed open source on the desktop, and have made a few agreements ensuring at least the delivery of high-quality open source desktop solutions to the people.

SuperX 2.1 "Ada" with KDE Plasma running in the Computer Application lab, K.C Das Commerce College, Guwahati, Assam

We have been to the length and breadth of India, promoting open source with our Linux distro - SuperX - and KDE software. SuperX can be seen in use everywhere from universities to small mud houses in rural India.

SuperX 3.0 "Grace" with KDE Plasma running in a mud-house at a remote village of Madhya Pradesh. The first computer the village has ever seen, powered by solar energy and open source.

We have made some meaningful partnerships as well, such as with AMTRON, the nodal agency for IT in Assam. We shipped approximately 30,000 laptops in 2013, and together we are working on the next slot of 20,000 laptops. All these laptops will go to students aged 16 to 17 years, as a present from the Government for excellence in the school-leaving examination and as motivation towards higher education, especially in Computer Science, as the laptops run SuperX GNU/Linux with KDE Plasma and tons of programming tools and educational materials. These kinds of initiatives by the Government give children early and easy access to free and open source software and make them more motivated.

Memorandum of Understanding signing ceremony between Libresoft and AMTRON, in the presence of Minister, Information Technology, Government of Assam. Under the memorandum, AMTRON as the nodal agency for IT in the state of Assam shall consider SuperX GNU/Linux as its primary OS for Assam's computing needs.

We must always take these numbers with a pinch of salt, because many may format these machines with pirated copies of Windows. However, as seen in the past, many students retained their laptops with the original OS, and that is a positive development for the open source software community. We hope that India will give the world more and more open source contributors through these initiatives.


Time up...

Well, I wanted to talk a little bit more, and maybe about the most important piece: how KDE software can be mainstreamed in India. Some KDE applications are, with some changes, capable enough to replace dominant non-free software in the Indian market. However, as mentioned above, there was no time left to continue the talk, so I had to stop in the middle. I will write another blog post covering the missing part, as this post has grown long enough already.


Conclusion

The experience I had there was awesome, be it from a personal or a professional perspective. I learned a lot of things from the great minds gathered there and exchanged a lot of ideas. I also had the opportunity to talk to some of my heroes in person, which is kind of a dream come true for a person like me who has admired their work all his life. I'd like to thank the KDE Community for selecting my talk for its grandest stage, and also the KDE e.V. for sponsoring my travel and stay in Vienna. Thank you, KDE :)

Could you tell us something about yourself?

My name is Phoenix and I’m a traditional and digital artist.

Do you paint professionally, as a hobby artist, or both?

I am working my way to becoming a professional concept artist, but for now, it is a hobby.

What genre(s) do you work in?

I tend to work in multiple genres mixed together. Sometimes it’s botanical and sci-fi, and other times it’s all my ideas thrown onto paper then into the computer.

Whose work inspires you most — who are your role models as an artist?

An artist that really inspires me is Alphonse Mucha. I really love his take on women and plants. He was great at keeping a constant warm colour scheme in his works.

How and when did you get to try digital painting for the first time?

About three years ago, I met someone who was also into art. Their art was beautiful and they pushed me to do better. I slowly got better over time with them as an inspiration. Early 2018, I traded some cheap alcohol based markers for their old Wacom tablet. I have been doing digital and traditional ever since.

What makes you choose digital over traditional painting?

I don’t choose between digital and traditional because I use traditional sketches to make my digital pieces. Digital is cleaner than traditional, which is more appealing to the human eye.

How did you find out about Krita?

I found out about Krita because I wanted a free software that would do what I wanted. Also, there is a dark theme, so it is easier on my eyes at night.

What was your first impression?

My first impression of Krita was excitement because I just kept messing with all the cool brushes and tools.

What do you love about Krita?

What I love about Krita is that it doesn’t take up that much RAM compared to other softwares I have used. It makes it really easy to record speedpaints for YouTube.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Brush processing is a pain at times, but that can be blamed on my lower spec PC. There’s a few bugs, but that happens with every software out there and no one can avoid it.

What sets Krita apart from the other tools that you use?

What sets Krita apart is that its brushes and other tools are a lot cleaner to use compared to other software I have used.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

My most recent work that I made. I titled it Watch and Burn. The reason I love it so much is because I finally was able to keep a very consistent colour scheme.

What techniques and brushes did you use in it?

I used a traditional sketch that was in blue pencil, then edited it in a photo software to make it crisp looking. I put it in Krita and used the multiply blending mode, so you could see the sketch. The last thing I did was colour with a basic ink brush and blended with the blender blur tool.

Where can people see more of your work?

People can see more of my work on these platforms:
Instagram
Twitter
YouTube

Anything else you’d like to share?

Please continue developing Krita because this is the most friendly art software I have ever used in a long time. Also, art can be hard, but people should continue even if they are in a rut.

December 30, 2018

Konsole has this neat feature where you can automatically title each tab in the terminal-emulator window with information from the foreground process running in that tab. Useful if you have lots of shells opened to different directories in the system.
Screenshot of konsole tab bar
It’s configurable, of course, by editing the tab’s profile (Settings -> Edit Current Profile -> Tabs). There’s a printf-ish configuration widget. I usually cut it down to just %d, the working directory.
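For example, a format along these lines shows the directory followed by the program name (%n is the program-name token as I recall it from the dialog; double-check the token list in your Konsole version):

    %d : %n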

When I switched over to FreeBSD 12.0, I enabled some extra security features from the installer, and noticed that konsole was being less informative than usual:
Screenshot of konsole tab bar without directories

The reason is simple: konsole uses the debugging interface (e.g. ptrace) to query its child processes, and I have turned off debugging support for unprivileged processes. A quick sysctl security.bsd.unprivileged_proc_debug=1 restores the show-current-directory functionality in konsole, although that does, of course, switch off that security feature. Set it in /etc/sysctl.conf to make it permanent.
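In practice, that means one line:

    # /etc/sysctl.conf -- re-enable debugging of unprivileged processes at boot
    security.bsd.unprivileged_proc_debug=1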

A few days ago I released PhotoTeleport 0.9, whose main improvements over the previous version are the added support for Imgur and SmugMug.

PhotoTeleport is an open source, multi-platform photo uploader, written in Qt + QtQuick.Controls 1, which supports uploading your photos, along with title, description and tags, to many different sites at the same time. Those interested in the detailed list of changes can read the full announcement here.

Here is the last issue of release month! Today is the first release of Nanonote, a minimalist note-taking application.

Screenshot

Quoting the README:

It automatically saves anything you type in the screen on your disk. Being minimalist means it has no synchronisation, does not support multiple documents, images or any advanced formatting (the only formatting is highlighting urls). If you have long-lived notes you should store them in a more permanent place.

I built a very early version of Nanonote a little more than a year ago and since then I have been using it at work almost every day. I use it to take notes when I am investigating bugs or diving into a new code base, collecting links to pull requests and CI builds, whatever I need. It is basically an eco-friendly blank sheet.

The way I use it shaped some of the features like "Always on Top", auto bullet-lists or the ability to indent and unindent text blocks with Ctrl+I and Ctrl+U (handy with nested lists).

Nanonote is built with Qt 5 and uses QWidgets. This might sound old-school, but it's lightweight, it works, and the QWidget-based QPlainTextEdit is superior to the QtQuick Controls 2 TextArea at the moment, for example by providing a decent context menu.

There aren't any pre-compiled binaries for the moment, so you need to build it yourself. See the README for details.

That's the end of release month, I hope you enjoyed these releases!

Over the Christmas season I rebuilt my workstation — the one I use in my home office, all day every day, writing Calamares or FreeBSD ports or other stuff — to be almost-all-flash (and 3TB of spinning rust for backups). Since the machine was open and on its side on my desk anyway, I decided to try out the available graphics options.

As occurs so often: I’m not writing about something I did. It’s nearly all someone else’s work, and the FreeBSD 12.0 release notes understate it a great deal.

Various improvements to graphics support for current generation hardware.

  • Intel iGPU My machine has an Intel Skylake i7 in it, so it’s got an iGPU. The graphics/drm-kmod port supports this, and so I installed the i915kms kernel module, followed the instructions (see the sketch after this list), and fiddled around with some settings. Results: Video works, with 2 monitors. Sound over HDMI works, although I needed to configure the Plasma mixer to use pcm instead of vol as the OSS volume knob to turn.
  • nVidia GTX730 The cheapest and most fanless card I could get two years ago. This needs the (proprietary) x11/nvidia-driver as well as an Xorg configuration file, and some other settings for sound. Results: Video works, with 2 monitors. Sound over HDMI works, with the default mixer setup.
  • Radeon R7 360 This is a friend’s old gaming card. While it’s supposed to be supported by graphics/drm-kmod, all I got out of it was screen buffer corruption on loading the driver and no X after that. I didn’t bother debugging this because the choice for a zero-extra-power part (i.e. the iGPU) was easily made.
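For the iGPU case, the setup mentioned above boils down to loading the kernel module at boot (a minimal sketch; check the module path that the graphics/drm-kmod install message prints for your system):

    # /etc/rc.conf
    kld_list="/boot/modules/i915kms.ko"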

In any case the new release and graphics improvements make it much easier to run FreeBSD on laptops and systems without a discrete GPU (I should get a Ryzen 2200G or something like that to see how other Radeon graphics do .. or debug the one I’ve got).

Week 51 of KDE’s Usability & Productivity initiative is here! Most people are still out on vacation these days, so like last week it’s a bit lighter than usual. That didn’t stop the rest of us though! Check out what we did:

Bugfixes & Performance Improvements

User Interface Improvements

Next week, your name could be in this list! Not sure how? Just ask! I’ve helped mentor a number of new contributors recently and I’d love to help you, too! You can also check out https://community.kde.org/Get_Involved, and find out how you can help be a part of something that really matters. You don’t have to already be a programmer. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

If my efforts to perform, guide, and document this work seem useful and you’d like to see more of them, then consider becoming a patron on Patreon, LiberaPay, or PayPal. Also consider making a donation to the KDE e.V. foundation.

Dear digiKam fans and users, following the first beta release published in October, we are proud to announce the third and last beta of digiKam 6.0.0, before the final release planned for February 2019.

Last main components updated

With the new upcoming 6.0.0 final release, we have updated the code and all bundles with the latest stable versions of the Exiv2 0.27 and Libraw 0.19 shared libraries. The first one is a main component of digiKam used to interact with file metadata, like populating database contents, updating item textual information, or handling XMP side-cars for read-only files.

December 29, 2018


Latte Dock v0.8.4 has been released, containing important fixes and improvements!


Go get v0.8.4 from download.kde.org or store.kde.org*

-----
* archive has been signed with gpg key: 325E 97C3 2E60 1F5D 4EAD CF3A 5599 9050 A2D9 110E
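To check the signature, the usual dance applies (file names are illustrative and depend on which archive you downloaded):

    gpg --verify latte-dock-v0.8.4.tar.xz.sig latte-dock-v0.8.4.tar.xz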

Fixes:
  • restore the mouse wheel action to activate your tasks, which broke with version 0.8.3
  • support fillWidth(s)/Height(s) applets in Left/Center/Right alignments, add a plasma taskmanager to see what happens
  • do not break applets order in Justify alignment when some of the applets in the layout are not found in the system
  • fix a crash that was related to grouped tasks
  • improve launchers synchronization between different docks/panels

Donations:

There are users asking how they can donate to Latte. The best way to do so, and the one that I promote, is donating through plings in the KDE store. Split your donation between my active projects in the KDE store; that would be more than enough for me. My active projects in the KDE store are:

December 28, 2018

Here’s a nice obscure bug: mouse wheel events on FreeBSD on Power9 are not recognized by Qt5. I don’t have anything PPC64 .. there were some Mac G4s at a second-hand place in town, but that is not a road to happiness. So obviously I need a Power9-based system to test this, and by coincidence Raptor Computing is now accepting pre-orders.

Yeah, not for $2000 to debug a mouse problem, but if anyone has one I can borrow, hit me up :)

(PS. I mentioned back in October that KDE Plasma 5 on Debian on Power9 Just Works, so it is something in the FreeBSD side of things.)

I finally got around to doing the final merge for QmlBook this year.

I just merged the chapter on the brand new TableView. This lets you show 2D data tables in an efficient way.
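A minimal sketch of what using it looks like (the model name is invented here and stands in for a QAbstractTableModel exposed from C++):

    import QtQuick 2.12

    TableView {
        anchors.fill: parent
        columnSpacing: 1
        rowSpacing: 1
        model: myTableModel // hypothetical QAbstractTableModel instance

        delegate: Rectangle {
            implicitWidth: 100
            implicitHeight: 40
            Text {
                anchors.centerIn: parent
                text: display // the cell's Qt::DisplayRole value
            }
        }
    }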

I also merged the version upgrade, so the text should now reflect what is available from Qt 5.12 and be based on menus and screens from Qt Creator 4.8.

During this update of the text, we took the decision not to upgrade all import statements to QtQuick 2.12. Instead, we use different versions in different places. However, the contents of Qt 5.12 are covered.

There are still two things left for the 5.12 branch. The first is to use the new input handlers, e.g. TapHandler instead of MouseArea. The second is to setup a release branch in the CI system so that there is a 5.12 version of the book built separately. I suspect that this will force me to learn more Travis tricks :-)

December 27, 2018

Qt5 and KDE Plasma 5 have been running smoothly on my workstation desktop for a year or more. I have a kind of boring desktop: there is one CPU, one graphics card, two network interfaces, and I use the default settings for just about everything. .. and everything (that I need) just works.

But it didn’t work for everyone: there was this one weird bug report that when the system had VLANs defined, most Qt5-based applications would crash or refuse to start up. That first manifested itself as a build failure of kf5-syntaxhighlighting. After some discussion with Volker, I ended up with a workaround: don’t validate the schemas during the build. That takes away the networking dependency, and things were OK again.

Other similar bug reports trickled in. They’re now all closed as duplicates of this original. Some patches trickled in, which I didn’t particularly like because they were of the “comment this bit out and things work” variety. Thankfully the original reporter of the kf5-syntaxhighlighting build failure, Ting-Wei Lan, did a great deal of debugging work. Enough to give me a handle on where to continue looking. I hemmed and hawed, tried blaming the run-time loader, but really all the evidence pointed at memory corruption from inside Qt5Network.

Fortunately the problem was totally reproducible and consistent in the way it crashed: create a VLAN, and one by one all Qt-based applications that touch the network would crash with an unresolved symbol. Rebuilding with debug symbols and everything turned on .. just got me a core dump somewhere else. After much futzing about, I found one location where adding a qWarning() << "foo"; made the problem go away. That's just as unsatisfying as commenting-out bits until it works.

Valgrind to the rescue. It told me about uninitialized memory being used in ioctl() calls, in an area of Qt5Network that I already thought was a bit flaky (wrt. FreeBSD support, anyway).

En passant I learned more about gdb, valgrind, ioctl() internals, and network status querying. And on Christmas Eve (or afternoon) I finally landed a bunch of patches:

  • Network bearer detection fixed; Ethernet is now recognized as such and doesn't hit the generic bearer.
  • Network media detection fixed; don't re-use ioctl buffers for a different ioctl; use the right ioctl number (apparently NetBSD and FreeBSD differ there).
  • Be slightly smarter about closing sockets; I took this opportunity to introduce a SockPuppet class, for silliness.
  • Support LibreSSL, OpenSSL 1.0, and OpenSSL 1.1 all at the same time.

The last item, supporting different SSL implementations, is all other people's work. I just built, tested and landed their efforts. Credits are in the corresponding ports commits.

As a consequence of all this, along with the release of FreeBSD 12.0, I now have an i3-based FreeBSD 12 machine with up-to-date Intel graphics and a KDE Plasma 5 desktop that uses LibreSSL across the entire stack, and can survive having VLANs modified, as well. That's a good present for the end of the year. (For the new year, I resolve to try to upstream some of these fixes, minus any silliness.)

2018 comes to an end, and so does an exciting year for all KDE bug triagers. Let's recap what we achieved and have a look at what 2019 might have in store.

What is bug triaging, anyway?

triage, noun
The process of determining the most important people or things from amongst a large number that require attention.

This is the official definition for the word "triage" from the Oxford Dictionary, and it actually describes our work very nicely. The idea behind triaging bugs is that developers want to know which problems need the most attention right now.  

They also rely on bug reports being complete and up-to-date, which is another important part of bug triaging.

The KDE Bugsquad

Every day, KDE users from around the world report over 35 bugs against one of our programs. That is a large number of bugs, and we need ways to deal with as many of them as possible. This is the point where the Bugsquad comes into play.

The KDE Bugsquad keeps track of incoming bugs in KDE software, and goes through old bugs. We verify that a bug exists, whether it is reproducible, and that the reporter has given enough information. Our goal is to save developers from doing this, which helps them fix bugs more quickly and do more work on KDE software.
(https://community.kde.org/Guidelines_and_HOWTOs/Bug_triaging)

Now that we have a basic understanding of the Bugsquad, let's take a look at what happened in 2018.

Numbers, please!

Short disclaimer: All these figures are based on the data available up to the 22nd of December.

Line chart of all RESOLVED bugs in bugs.kde.org

In 2018, we managed to resolve 28060 bugs and also crossed the milestone of 300000 resolved bugs on the 16th of July! That means that we theoretically closed more than 75 bugs every day. We can also clearly see a few spikes towards the end of the year. The main reason for these spikes is our new Bug Janitor, which automatically closes stale bugs after a fixed period of time.  

Another cause for the jump at the beginning of June was the mass closure of all KDE Plasma 4 bugs, because we do not maintain KDE Plasma 4 any more.

Line chart of all open bugs in bugs.kde.org

In contrast to the ever-changing number of resolved bugs, our number of open bugs stays surprisingly constant. The impact of the Bug Janitor is again clearly visible.

Line chart of all unconfirmed bugs in bugs.kde.org

An important way of checking the health of a bug reporting platform is the number of unconfirmed bugs. In an ideal world, this number would always be very close to zero, as new bugs would immediately get picked up by bug triagers and eventually be confirmed or closed.

In KDE, we sadly have not managed to achieve this goal just yet. On a brighter note, we were able to keep the number of unconfirmed bugs roughly constant, despite the fact that 12839 new bugs were filed in 2018.

KFactoids

  • Only 8 out of the 50 products that have the most open bugs actually had more bugs at the end of the year than at the beginning  
  • 30 individual contributors closed more than 100 bugs this year (the Bug Janitor excluded)  
  • Konqueror is the software with the most open bugs  
  • Plasmashell had the most new bugs reported in 2018, with a close second place being Krita  
  • The quickest fix for a bug was from Thomas Baumgart, with 47 seconds between opening the bug and committing the fix (but he reported the bug himself)
  • The quickest fix for a bug that was not reported by the person who fixed it was from Boudewijn Rempt with a time of 2 minutes and 30 seconds  

Bug triaging days

In 2018, we also resurrected the concept of "bug triaging days". The idea is simple: if 10 people look at 10 bugs each, we can triage 100 bugs quickly within a single day. Such events are also a good way to get new people involved with KDE!  

What's next?

2018 was an impressive year for the Bugsquad, but we won't stop here. Here is my personal wishlist/list of goals for 2019:

  • Get the number of unconfirmed bugs down to 5000
  • Hold more bug triaging events, and always triage at least 30 bugs overall per session  
  • Ensure that all bug triaging material on the wiki is updated and easy to understand
  • Try to set up a system such that all ~35 new bugs reported every day get looked at by at least one triager within three days
  • Encourage people who had a bad experience with our bug reporting system to report bugs again  

This year was a very interesting year in the development of KWin. After having been the maintainer for several years, I knew that I would not be able to continue as maintainer for personal reasons. I had already been trying to reduce my contributions for quite some time, encouraging others to do reviews and staying out of some reviews completely. In 2018 several code contributions landed which I had not looked at at all and which worked out greatly: an example is the excellent new blur effect, whose code I never reviewed.

When I stepped down as maintainer I had to read many negative and fearful comments doubting the future of KWin. Personally I was very confident that my stepping down would not have a negative impact; in fact I even hoped for a positive one, as it gives new blood the chance to fill the gap and bring in new ideas. I had become very conservative over time.

So I ran some git stats [1] over the KWin repository to try to verify the assumption that my stepping down had no negative impact: in 2017 there were 614 non-scripty commits. The author with the most commits was me, with 387. Overall, 37 developers contributed to KWin.

In 2018 (01.01.2018-26.12.2018) there were 644 non-scripty commits authored by 48 developers. The developer with the most contributions is Vlad (241). I am no longer the top contributor in KWin (“only” 115 commits); the last time that happened was 2008, the year I joined KWin development.

I am very happy about this development. We have new developers like Vlad and Alex working on areas which had been neglected for quite some time. The way the effect system improved thanks to them is really great. We have developers like David and Roman improving the Wayland support and overall KWin core. The knowledge about KWin gets spread and the development work is spread over more shoulders. Having a single developer doing the majority of commits is not always healthy. We can see the positive effects this year: we have more contributors overall and more contributors contributing multiple patches.

[1] git shortlog -sne --since="01 Jan 2018" --before="01 Jan 2019" and git shortlog -sne --since="01 Jan 2017" --before="01 Jan 2018"

Soon I'll have spent 2 years as an intern at the biggest TV network in Latin America. And with that, my contract will be over. What happens next is still to be decided. This post is meant to be a record of part of my history, a history that started on January 16th of... Continue Reading →

December 26, 2018

One of the main reasons I joined Codethink was the possibility of working remotely from my home in La Palma, Canary Islands, Spain. Unfortunately, a few months after starting at the company I realized that my travel schedule was incompatible with living there. The small number of international connections and the lack of a direct link to the Tenerife South airport made my trips complicated. Good enough internet was not available back then either. So I had to choose between moving to one of the bigger islands or finding another location. I ended up choosing Málaga, where I had lived before.

I still visit La Palma frequently. When company activity slows down, I work remotely from here. Last year, fibre optic finally reached my house. Coverage of this technology is slowly increasing across the island. There are no coworking spaces yet, but the digital nomad movement on other islands is growing so much that I expect it to reach La Palma soon. This island has so much to offer…

But to get there, some homework has to be done first. The first task is to create a community of professionals and technology enthusiasts who live on the island or visit it frequently. Only then can we take further steps to attract visitors who want to stay here for a few weeks while they work.

So after mulling it over for months, and after several conversations with friends, I decided to launch a Meetup group on La Palma, since no working precedent exists.

The new group is called San Miguel de La Palma tech lovers

La Palma is often confused with Las Palmas, the capital of the island of Gran Canaria and the largest city in the Canary archipelago, so I have used the island's full official name for the group. I hope this avoids misunderstandings among those who do not know our geography.

I have no previous experience running Meetup groups, but I do have experience organizing technology events of various scales, so I hope to carry the initiative forward by making those interested feel welcome, promoting the organization of events, and, in the near future, delegating all responsibilities to the most enthusiastic and effective members. This has to be a collective effort in order to succeed.

La Palma has a sizeable German community, as well as people from countries such as the Netherlands and Austria, so the group will be multilingual, which will make it harder to manage. Moreover, since our target audience is small in number and scattered in its interests, we will initially have to cover a range of topics, which further increases the difficulty of succeeding.

In any case, I hope we can build an active community of tech lovers around topics such as remote work, free software, web tools and digital marketing, modern work processes, and so on. It will be quite a challenge. The initial plan is to complement talks and workshops with outdoor activities such as hiking, mountain biking and astrotourism, which are so popular on this island.

Join the group.

Forming the group is only the first step of an exciting journey that we are going to enjoy. We will soon start organizing our first activity. Stay tuned.

December 25, 2018

What better way to spend a lazy Christmas day than to merge some of the new chapters into the QmlBook?

Since updating the CI system, there is a bit of manual merging back and forth to get each new piece of content online. Specifically: rebase my fork’s master onto upstream/master, then rebase my working branch onto my master, and then push.

One of the newly added chapters covers Qt Quick Controls 2, which is the recommended way to create controls such as buttons, sliders, checkboxes, menus, and so on. Here we look at how you target various form factors such as desktop and mobile, but also how to maintain a common code base between the two.
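
To hint at the style (my own sketch, not taken from the chapter), a small Controls 2 UI looks like this:

    import QtQuick 2.12
    import QtQuick.Controls 2.5

    ApplicationWindow {
        visible: true
        width: 320; height: 240
        title: "Controls sketch"

        Column {
            anchors.centerIn: parent
            spacing: 8
            Button { text: "Click me"; onClicked: console.log("clicked") }
            Slider { from: 0; to: 100; value: 50 }
            CheckBox { text: "Enable things" }
        }
    }

The same declarative code runs on both desktop and mobile; the interesting part is what you still vary per form factor while keeping the code base common.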

The other new chapter takes a look at Qt for Python from a QML perspective. This chapter goes from the very basics all the way to exposing a Qt model representing the data of an existing Python module to QML.
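
On the QML side, consuming a model provided by Python looks just like the C++ case. A minimal sketch (mine, not the chapter's; `pythonModel` is an assumed name for a model handed to QML from Python, e.g. via QQmlContext.setContextProperty in PySide2):

    import QtQuick 2.12

    ListView {
        width: 200; height: 300
        model: pythonModel                // assumption: set from Python via setContextProperty()
        delegate: Text { text: display }  // 'display' maps to Qt::DisplayRole
    }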

Planet KDE is made from the blogs of KDE's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet KDE with RSS, FOAF or OPML.