Version 3.0 contained a critical bug in the new "Comparator" activity, so we decided to quickly ship this 3.1 maintenance release to fix the issue.
It also contains a few small translation updates.
You can find packages of this new version for GNU/Linux, Windows, Raspberry Pi and macOS on the download page. This update will be available soon in the Android Play store, the F-Droid repository and the Windows store.
Despite an open and free protocol, and a public, federated network, email has become an oligopoly. Will Mastodon and the rest of the Fediverse suffer the same fate?
Very good piece about that dangerous moment in the creation of the latest large language models. We’re about to drown in misinformation; can we get out of it?
OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | Time
Tags: tech, ethics, ai, machine-learning, gpt
The human labor behind AI training is still ongoing. It is clearly gruesome work, outsourced to other countries… price aside, this is also a good way to hide its consequences, I guess.
The Elusive Frame Timing | by Alen Ladavac | Medium
Tags: tech, 3d, performance
Excellent analysis and explanation of the stutter problem people experience with game engines. It’s an artifact of the graphics pipeline becoming more asynchronous with no way to know when something is really displayed. Extra graphics APIs will be needed to solve this for real.
This WebGPU framework is getting interesting. Definitely something to keep an eye on and evaluate for productive uses. Obviously requires WebGPU to be widely available before banking on it.
We invested 10% to pay back tech debt; Here’s what happened
Tags: tech, programming, technical-debt
Excellent piece about technical debt. The proposed approach is definitely the right one; it’s the only thing I know of that works to keep technical debt at bay.
It comes from the job interview domain… but I wonder if it could be more broadly useful given how simple it is (though not easy, mind you). I guess I’ll experiment with it for my next project postmortem.
We are pleased to announce the release of GCompris version 3.0.
It contains 182 activities, including 8 new ones:
"Mouse click training" is an exercise to practice using a mouse with left and right clicks.
In "Create the fractions", represent decimal quantities with some pie or rectangle charts.
In "Find the fractions", it's the other way: write the fraction represented by the pie or rectangle chart.
With "Discover the International Morse code", learn how to communicate with the International Morse code.
In "Compare numbers", learn how to compare number values using comparison symbols.
"Find ten's complement" is a simple exercise to learn the concept of ten's complement.
In "Swap ten's complement", swap numbers of an addition to optimize it using ten's complement.
In "Use ten's complement", decompose an addition to optimize it using ten's complement.
We've added 2 new command line options:
List all the available activities (-l or --list-activities)
Directly start a specific activity (--launch activityName)
This version also contains several improvements and bug fixes.
On the translation side, GCompris 3.0 contains 36 languages. 25 are fully translated: (Azerbaijani, Basque, Breton, British English, Catalan, Catalan (Valencian), Chinese Traditional, Croatian, Dutch, Estonian, French, Greek, Hebrew, Hungarian, Italian, Lithuanian, Malayalam, Norwegian Nynorsk, Polish, Portuguese, Romanian, Russian, Slovenian, Spanish, Ukrainian). 11 are partially translated: (Albanian (99%), Belarusian (83%), Brazilian Portuguese (94%), Czech (82%), Finnish (94%), German (91%), Indonesian (99%), Macedonian (94%), Slovak (77%), Swedish (94%) and Turkish (71%)).
A special note about the Ukrainian voices, which have been added thanks to the organization "Save the Children", which funded the recording. They installed GCompris on 8000 tablets and 1000 laptops, and sent them to Digital Learning Centers and other safe spaces for children in Ukraine.
Croatian voices have also been recorded by a contributor.
As usual you can find packages of this new version for GNU/Linux, Windows, Android, Raspberry Pi and macOS on the download page. This update will also be available soon in the Android Play store, the F-Droid repository and the Windows store.
For packagers of GNU/Linux distributions, note that we have a new dependency on QtCharts QML plugin, and the minimum required version of Qt5 is now 5.12. We also moved from using QtQuick.Controls 1 to QtQuick.Controls 2.
In this blog post we’ll see how it was done and how you can publish your KDE app in the Microsoft Store.
Reserving a Name and Age Rating Your App
The first step requires some manual work. In Microsoft Partner Center you need to create a new app by reserving a name and completing a first submission. How to do this has been described by Christoph Cullmann in the Windows Store Submission Guide. Don’t hesitate to reserve the name of your app even if you are not yet ready for the first submission to the Microsoft Store. Once a name is reserved, nobody else can publish an app with this name.
The first submission needs to be done manually because you will have to answer the age ratings questionnaire. NeoChat was rated 18+ because it allows you to publish all kinds of offensive content on public Matrix rooms. Filling out the questionnaire was quite amusing because I did it together with the NeoChat crowd at #neochat:kde.org.
On the first submission of NeoChat I chose to restrict the visibility to Private audience until it was ready for public consumption. I created a new customer group NeoChat Beta Testers with the email address of my regular Microsoft Store account in Microsoft Partner Center and then selected this group under Private audience. This way I could test installing NeoChat with the Microsoft Store app before anybody else could see it.
Don’t spend too much time filling out things like Description, Screenshots, etc. under Store Listings because some of this information will be added automatically from the AppStream data of your app for all available translations.
Semi-automatic App Submissions
The next submissions of NeoChat were done semi-automatically via the Microsoft Submission API with the submit-to-microsoft-store.py script, which I wrote along the way together with the underlying general Microsoft Store API Python module, microstore. The script is based on a Ruby prototype (windows.rb) written by Harald Sitter.
The idea is that the script is run by a (manual) CI job that the app’s release manager can trigger if they want to publish a new version on the Microsoft Store.
To run the script locally you need the credentials for an Azure AD application associated with KDE’s Partner Center account. Anything else you need to know is documented in the script’s README.md.
Making NeoChat Publicly Available
The last step of the process to get NeoChat published in the Microsoft Store was another manual submission which just changed the visibility from Private audience to Public audience. This could also have been done via the Microsoft Submission API (but not with the current version of the script), but I think it’s good to have a last look at the information about the app before it is published. In particular, you may have to fill out the Notes for certification, e.g. if your app cannot be tested without a service or social media account. For NeoChat we had to provide a test account for Matrix.
Moreover, you may want to fill out some details that are currently not available in the AppStream data, e.g. a list of Product features, the Copyright and trademark info, or special screenshots of the Windows version of your app.
What’s Next
On our GitLab instance, we want to provide a full CI/CD pipeline for building and publishing our KDE apps on the Microsoft Store (and many other app stores). A few important things that require special credentials or signing certificates are still missing to complete this pipeline.
And we want to get more KDE apps into the Microsoft Store.
If you need help with getting your KDE app into the Microsoft Store, then come find me in the #kde-windows room.
Updates after publication:
2023-01-31: Updated link to script after merge of the MR
New year, new RISC-V Yocto blog post \o/ When I wrote my last post, I really did not expect my brand new VisionFive-2 board to find its way to me so soon… But well, a week ago it was suddenly there. While unpacking it, I briefly pondered my plans to prepare a Plasma Bigscreen Raspberry Pi 4 demo board for this year’s FOSDEM.
Obvious conclusion: “Screw it! Let’s do the demo on the VisionFive-2!” — And there we are:
After some initial bumpy steps to boot a first self-compiled U-Boot and kernel (if you unbox a new board, you need to do a bootloader and firmware update first! Otherwise it will not boot the latest VisionFive kernel), it was surprisingly easy to prepare Yocto to build a core-image-minimal that really boots all the way up.
Unfortunately, after these first happy hours, the last week was full of handling the horrors of closed-source binary drivers for the GPU. Even though Imagination has promised to provide an open source driver at some point, right now the only solution is to use the closed source PVR driver. After quite a lot of trying, guessing and comparing the boot and init sequences of the reference image to the dark screen in front of me, I came up with:
a new visionfive2-graphics Yocto package for the closed source driver blobs
a fork of Mesa that uses a very heavy patch set for the PVR driver adaptations; all patches are taken from the VisionFive 2 buildroot configurations
and a couple of configs for making the system start with doing an initial modeset
The result right now:
VisionFive-2 device with Plasma-Bigscreen (KWin running via Wayland), SD card image built via Yocto, KDE software via KDE’s Yocto layers, Kernel and U-Boot being the latest fork versions from StarFive
Actually, the full UI even feels much smoother than on my RPi4, which is quite cool. I am not sure where I will end up in about 3 weeks with some more debugging and patching. But I am very confident that you will see a working RISC-V board with an onboard GPU running Plasma Shell when you visit the KDE stall at FOSDEM in February.
I’m more and more tempted by this kind of approach. Managing architecture models using code seems fairly neat. That said, I wish we had better free software tooling for that; I find it still fairly limited. Maybe I should check out the Haskell library that is mentioned.
I am pleased to announce Linux-Stopmotion release 0.8.6! The last release was three years ago and this is the first release since Stopmotion became a KDE incubator project.
About Stopmotion
Stopmotion is a Free Open Source application to create stop-motion animations. It helps you capture and edit the frames of your animation and export them as a single file.
It supports direct capture from webcams, MiniDV cameras, and DSLR cameras. It offers onion-skinning, importing images from disk, and time-lapse photography. Stopmotion supports multiple scenes, frame editing, a basic sound track, animation playback at different frame rates, and GIMP integration for image editing. Movies can be exported to a file and to Cinelerra frame lists.
Technically, it is a C++ / Qt application with optional dependencies to camera capture libraries.
Changes in release 0.8.6
This release does not contain new features but provides changes under the hood.
New build system using CMake. The qmake one is deprecated and will be removed.
The test executable can be executed as a CMake test target (make test-stopmotion && make test).
Fixed various warnings from Clang, GCC, and Qt 5.15.
We have a build pipeline executing automated builds and tests.
Future plans
We decided to rename the application to KStopmotion, as Linux is trademarked.
Transition from Qt 5 to version 6.
We should integrate better with KDE's tech stack: internationalization, using KDE libraries, and updating and reformatting the documentation.
Get involved!
If you are interested, give Stopmotion a try. Reach out to our mailing list kstopmotion@kde.org to share ideas or get involved.
You can also help to improve Stopmotion. For example, we started the transition to Qt 6 and we welcome any helping hand.
Journalists (And Others) Should Leave Twitter. Here’s How They Can Get Started | Techdirt
Tags: tech, attention-economy, twitter, fediverse
Let’s hope journalists hear that call. It’s indeed sad that so far it’s mostly words and not much action to move away from Twitter in that profession.
The internet wants to be fragmented - by Noah Smith
Tags: tech, social-media, internet
Interesting take; let’s see if it’s true and things will decentralize (or at least audiences will fragment; the author seems to conflate the two) more in the future.
Interesting piece. It shows quite well what users have lost with the over-reliance on HTTP for everything. Moving more and more things into the browser indeed fosters walled gardens. Compound this with the branding obsession of most companies and you end up with an absurd situation.
Excellent piece, looking back at history to justify why microservices are mostly a fad. Check what your needs really are and, depending on them, pick the right way to decompose the problem or organize your teams.
Microfeatures I’d like to see in more languages • Buttondown
Tags: tech, programming
Since I’m also a bit of a nerd about nice programming language features, this is an interesting list, (mostly) coming from lesser-known languages. Some of that syntactic sugar would be welcome in more mainstream languages, I think.
Test Desiderata. Go placidly amid the noise and haste… | by Kent Beck | Medium
Tags: tech, tests, tdd
This is what we should strive for with our tests. I like how he keeps it flexible, though; again, it’s likely a trade-off, so you can’t have all the properties fully all the time. Still, you need to know what you give up, how much of it, and why.
Fast Path to a Great UX – Increased Exposure Hours — UX Articles by UIE
Tags: tech, ux
A bit old, but an interesting finding. It kind of confirms my own view: it’s best when everyone (not just designers) can interact with the users of the system you’re building.
Sign-up Versus Assignment - by Kent Beck - Geek Incentives
Tags: tech, project-management, management
It’s clearly a choice in management style. For such choices, always keep in mind the trade-offs this creates; maybe it’ll push you to revise your choice.
Like it or not (I’m one of those who don’t like it), the role of manager will necessarily create power imbalances. This article is thus a must-read for managers at any level to learn how to deal with it properly.
Very interesting model, I didn’t know about this one. As pointed out, you can’t really base policy decisions on it, but it’s still powerful since it explains some of the phenomena at play in the real world. In this way, it is enough to debunk some assumptions that are taken a bit too much for granted.
As a Linux application developer, one might not be aware that supporting Input Methods (or Input Method Editors, usually referred to as IMEs) under Linux can require some extra effort.
What is an input method and why should I care about it?
Even if you are not aware of it, you are probably already using one in daily life. For example, the virtual keyboard on your smartphone is a form of input method. You may have noticed that the virtual keyboard lets you type something and gives you a list of words based on what you have already partially typed. That is a very simple use case of an input method. But for CJKV (Chinese, Japanese, Korean, Vietnamese) users, an input method is necessary to type their own language properly. Basically, imagine this: you only have 26 English keys on the keyboard, so how could you type thousands of different Chinese characters on a physical keyboard with such a limited number of keys? The answer: use a mapping that maps a sequence of keys to certain characters. To make it easy to memorize, such a mapping is usually similar to what is called transliteration, or directly uses an existing romanization system.
For example, the most popular way of typing Chinese is Hanyu Pinyin.
In the screenshot above, the user just typed “d e s h i j i e”, and the input method gives a list of candidates. A modern input method always tries to be smarter and predict the most likely word the user wants to type. The user may then use a digit key to select a candidate, either fully or partially.
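As a toy sketch of the mapping idea (purely illustrative; a real input method uses large dictionaries, language models and user history, not a hard-coded table, and the names below are made up):

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // Hypothetical, tiny "dictionary": Pinyin key sequence -> candidate words.
    std::map<std::string, std::vector<std::string>> candidates = {
        {"shijie", {"世界", "视界"}},   // "world", "field of view"
        {"de",     {"的", "得", "地"}},
    };

    // The user typed "shijie"; show the candidate list to pick from.
    for (const auto &word : candidates["shijie"])
        std::cout << word << "\n";
}
```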
What do I need to do to support Input method?
The state of art of input method on Linux are all server-client based frameworks. The client is your application, and the server is the input method server. Usually, there is also a third daemon process that works as a broker to transfer the message between the application and the input method server.
1. Which GUI toolkit to use?
Gtk & Qt
If you are using Gtk or Qt, there is good news for you: there is usually nothing you need to do to support input methods. These toolkits provide a generic abstraction, and sometimes even an extensible plugin system, behind which all the complexity of the communication between the input method server and the application is hidden.
The built-in widgets provided by Gtk or Qt already handle everything needed for input methods. Unless you are implementing your own fully custom widget, you do not need to use any input method API. If you do need a custom widget, which sometimes happens, you can use the API provided by the toolkit to implement it.
The best documentation on how to use those APIs is the implementation of the built-in widgets.
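To make the Qt case concrete, here is a minimal sketch of a custom widget; the class and member names are made up for illustration, while the attribute and the two virtual functions are the relevant Qt API:

```cpp
#include <QInputMethodEvent>
#include <QString>
#include <QVariant>
#include <QWidget>

// Hypothetical minimal custom text widget with input method support.
class TinyTextWidget : public QWidget
{
public:
    explicit TinyTextWidget(QWidget *parent = nullptr) : QWidget(parent)
    {
        // Without this attribute the widget never receives input method events.
        setAttribute(Qt::WA_InputMethodEnabled);
        setFocusPolicy(Qt::StrongFocus);
    }

protected:
    // Called when the input method commits text or updates the composing text.
    void inputMethodEvent(QInputMethodEvent *event) override
    {
        m_text += event->commitString();    // finished text
        m_preedit = event->preeditString(); // text still being composed
        update();
    }

    // Called when the input method queries the widget's state.
    QVariant inputMethodQuery(Qt::InputMethodQuery query) const override
    {
        if (query == Qt::ImEnabled)
            return true;
        return QWidget::inputMethodQuery(query);
    }

private:
    QString m_text;
    QString m_preedit;
};
```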
SDL & winit
If you are using SDL or Rust’s winit, which have some input method support but lack built-in widgets (there might be third-party libraries based on them that I have no knowledge of), you will need to refer to their IME APIs, or their demos, and do some manual work.
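To give a rough idea of that manual work, here is a hedged sketch assuming SDL2 (the window setup is illustrative; the text input calls and events are standard SDL2):

```cpp
#include <SDL.h>
#include <string>

int main()
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *window = SDL_CreateWindow("ime-demo", SDL_WINDOWPOS_UNDEFINED,
                                          SDL_WINDOWPOS_UNDEFINED, 640, 480, 0);
    std::string text;

    SDL_StartTextInput();                 // tell SDL (and the IME) we want text input
    SDL_Rect caret = {10, 10, 2, 20};
    SDL_SetTextInputRect(&caret);         // where the IME may place its candidate window

    bool running = true;
    while (running) {
        SDL_Event event;
        while (SDL_PollEvent(&event)) {
            if (event.type == SDL_QUIT) {
                running = false;
            } else if (event.type == SDL_TEXTINPUT) {
                text += event.text.text;  // committed UTF-8 text
            } else if (event.type == SDL_TEXTEDITING) {
                // event.edit.text is the preedit (composing) text;
                // event.edit.start / event.edit.length describe the cursor inside it.
            }
        }
    }

    SDL_StopTextInput();
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}
```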
Refer to their official documentation and examples for reference:
As for XCB, you will need to use a third-party library. I wrote one for XCB that handles both the server and client side of XIM. If you need a demo of it, you can find one at:
As for writing a native Wayland application from scratch with wayland-client, you will want to pick a client-side input method protocol first. The only one that is commonly well supported (GNOME, KWin, wlroots, etc., but not Weston, just FYI) is:
If you use a toolkit whose widgets already support input methods well, you can skip this section and call it a day. But if you need low-level interaction with the input method, or are just interested in how it works, you may continue reading. Usually it involves the following steps:
Create a connection to the input method service.
Tell the input method that you want to communicate with it.
Keyboard events are forwarded to the input method.
The input method decides how the key event is handled.
Receive input method events that carry text that you need to show, or to commit to the application.
Tell the input method that you are done with text input.
Close the connection when your application ends, or when the relevant widget is destroyed.
The first step sometimes consists of two sub-steps: a. create the connection; b. create a server-side object that represents a micro focus of your application. Usually this is referred to as an “Input Context”. The toolkit may hide this complexity behind its own API.
Take the Xlib case as an example:
Create the connection: XOpenIM
Create the input context: XCreateIC
Tell the input method that your application wants to use text input: XSetICFocus
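A rough sketch of those Xlib/XIM steps (error handling omitted; the event-loop part goes beyond the three steps above but uses standard Xlib calls, and the window setup is only illustrative):

```cpp
#include <X11/Xlib.h>
#include <clocale>
#include <cstdio>

int main()
{
    setlocale(LC_ALL, "");
    XSetLocaleModifiers("");

    Display *dpy = XOpenDisplay(nullptr);
    Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy), 0, 0, 400, 300,
                                     0, 0, WhitePixel(dpy, DefaultScreen(dpy)));
    XSelectInput(dpy, win, KeyPressMask | FocusChangeMask);
    XMapWindow(dpy, win);

    XIM im = XOpenIM(dpy, nullptr, nullptr, nullptr);               // 1. connect
    XIC ic = XCreateIC(im, XNInputStyle, XIMPreeditNothing | XIMStatusNothing,
                       XNClientWindow, win, NULL);                  // 2. input context
    XSetICFocus(ic);                                                // 3. focus

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (XFilterEvent(&ev, None))      // let the input method consume the event
            continue;
        if (ev.type == KeyPress) {
            char buf[64];
            KeySym sym;
            Status status;
            int len = Xutf8LookupString(ic, &ev.xkey, buf, sizeof(buf), &sym, &status);
            if (len > 0)
                printf("committed: %.*s\n", len, buf);              // text from the IM
        }
    }
}
```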
And for the Wayland case with zwp_text_input_v3:
Key events are forwarded to the input method by the compositor; nothing related to keyboard events needs to be done on the client side.
Get the committed text: zwp_text_input_v3.commit_string
Call zwp_text_input_v3.disable
Destroy the relevant Wayland proxy object.
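A rough sketch of that Wayland flow, assuming the zwp_text_input_v3 header was generated by wayland-scanner from text-input-unstable-v3.xml and that the manager and seat were already bound from the registry (only the enable/commit/disable part is shown; function and file names follow the usual generated conventions):

```cpp
#include <cstdint>
#include <cstdio>
#include <string>
#include <wayland-client.h>
#include "text-input-unstable-v3-client-protocol.h"  // generated by wayland-scanner

static std::string committed;

static void handle_enter(void *, zwp_text_input_v3 *, wl_surface *) {}
static void handle_leave(void *, zwp_text_input_v3 *, wl_surface *) {}
static void handle_preedit(void *, zwp_text_input_v3 *, const char *text,
                           int32_t, int32_t)
{
    // Composing text to draw at the caret (may be null when cleared).
    printf("preedit: %s\n", text ? text : "");
}
static void handle_commit(void *, zwp_text_input_v3 *, const char *text)
{
    if (text)
        committed += text;                 // text to insert into the buffer
}
static void handle_delete(void *, zwp_text_input_v3 *, uint32_t, uint32_t) {}
static void handle_done(void *, zwp_text_input_v3 *, uint32_t)
{
    // Apply the pending preedit/commit/delete state atomically here.
}

static const zwp_text_input_v3_listener listener = {
    handle_enter, handle_leave, handle_preedit, handle_commit,
    handle_delete, handle_done,
};

zwp_text_input_v3 *start_text_input(zwp_text_input_manager_v3 *manager, wl_seat *seat)
{
    zwp_text_input_v3 *ti = zwp_text_input_manager_v3_get_text_input(manager, seat);
    zwp_text_input_v3_add_listener(ti, &listener, nullptr);
    zwp_text_input_v3_enable(ti);          // "I want text input now"
    zwp_text_input_v3_commit(ti);          // every state change ends with a commit
    return ti;
}

void stop_text_input(zwp_text_input_v3 *ti)
{
    zwp_text_input_v3_disable(ti);
    zwp_text_input_v3_commit(ti);
    zwp_text_input_v3_destroy(ti);         // destroy the proxy when the widget goes away
}
```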
As always, read the examples provided by the toolkit to get a better idea.
3. Some other concepts besides committing the text
Supporting an input method is not only about forwarding key events and getting text back from the input method. There are more interactions between the application and the input method that are important for a better user experience.
Preedit
Preedit is a piece of text displayed by the application that represents the composing state. See the screenshot at the beginning of this article: the underlined text is the preedit. The preedit contains the text and, optionally, some formatting information to convey richer information.
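In Qt terms, the preedit arrives in the same QInputMethodEvent as committed text; a small illustrative fragment (the helper and parameter names are made up, the event accessors are standard Qt API):

```cpp
#include <QInputMethodEvent>
#include <QString>

// Hypothetical fragment of a custom widget's inputMethodEvent(); text and
// preedit stand in for the widget's own buffers.
void handlePreedit(QInputMethodEvent *event, QString &text, QString &preedit)
{
    if (!event->commitString().isEmpty())
        text += event->commitString();  // composing finished, insert for real
    preedit = event->preeditString();   // draw this at the caret, typically underlined
    // event->attributes() additionally carries QInputMethodEvent::Attribute entries
    // (underline formats, cursor position inside the preedit, ...).
}
```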
Surrounding Text
Surrounding text is optional information that the application can provide to the input method. It contains the text around the cursor, together with where the cursor and the user selection are. The input method may use this information to provide better predictions. For example, if your text box contains “I love |” (| is the cursor), then with surrounding text the input method knows that there is already “I love ” in the box and may predict your next word as “you”, so you don’t need to type “y-o-u” but can just select it from the prediction.
Surrounding text is not supported by XIM. Also, not every application can provide valid surrounding text information, for example a terminal app.
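In Qt, a custom widget reports surrounding text through inputMethodQuery(); a small illustrative fragment (the function and parameter names are made up, the query values are standard Qt API):

```cpp
#include <QString>
#include <QVariant>

// Hypothetical fragment of a custom widget's inputMethodQuery() override;
// m_text, m_cursor and m_anchor stand in for the widget's own state.
QVariant querySurroundingText(Qt::InputMethodQuery query,
                              const QString &m_text, int m_cursor, int m_anchor)
{
    switch (query) {
    case Qt::ImSurroundingText:
        return m_text;       // the text around the caret
    case Qt::ImCursorPosition:
        return m_cursor;     // caret index within that text
    case Qt::ImAnchorPosition:
        return m_anchor;     // the other end of the selection, if any
    default:
        return QVariant();
    }
}
```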
Reporting the cursor position in the window
Many input method engines need to show a popup window to display some information. In order to allow the input method to place that window right at the position of the (blinking) text cursor, the application needs to let the input method know where the cursor is.
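In Qt, the same query mechanism is used for the caret rectangle, and the widget should ask Qt to re-query whenever the caret moves; a small illustrative fragment (the helper names are made up, the calls are standard Qt API):

```cpp
#include <QGuiApplication>
#include <QInputMethod>
#include <QRect>
#include <QVariant>

// Returned from inputMethodQuery() for Qt::ImCursorRectangle, in widget
// coordinates; Qt maps it so the input method can position its popup.
QVariant queryCursorRectangle(const QRect &caretRect)
{
    return caretRect;
}

// Call whenever the caret moves, so the input method re-queries the rectangle.
void notifyCaretMoved()
{
    QGuiApplication::inputMethod()->update(Qt::ImCursorRectangle);
}
```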
Notifying the input method of state changes on the application side
For example, even if the user is in the middle of composing something, they may still click somewhere else in the text box with the mouse, or the text content may be changed programmatically by the app’s own logic. When such things happen, the application may need to notify the input method that the state needs a “reset”. Usually this is also called “reset” in the relevant API.
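In Qt, this maps to QInputMethod::reset(); a small illustrative helper (the function name is made up, the call is standard Qt API):

```cpp
#include <QGuiApplication>
#include <QInputMethod>

// Hypothetical helper: call when the caret jumps (e.g. a mouse click) or the
// text is changed programmatically, so the input method discards its composing state.
void resetComposition()
{
    QGuiApplication::inputMethod()->reset();
}
```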