In recent weeks we have been working on transferring LabPlot’s documentation to a new format.
We decided to move the documentation from the DocBook and MediaWiki formats to the Sphinx/reStructuredText framework. In our view, Sphinx offers a user-friendly and flexible way to create and manage documentation, with easy math typesetting and code formatting built in. Additionally, Sphinx supports basic syntax checks and modern documentation practices, such as versioning and output to various formats like HTML, PDF and ePub.
The new user’s manual is available on a dedicated page: https://docs.labplot.org. Please check it out and let us know what you think.
The manual still needs to be supplemented with new content, so we encourage you to contribute to the documentation, e.g. by fixing or adding sections and updating images; collaborative effort can turn it into a more comprehensive resource for everyone. Please check the Git repository dedicated to the documentation for more details on how to help make it better.
We started this project with the intent of providing users with a tool that helps with inking sketches. It is based on a research article by Simo-Serra et al. published in 2016, and it uses neural networks (now commonly just called AI). The tool has been developed in partnership with Intel and is still considered experimental, but you can already use it and see the results.
In the section below there are some real life examples of use cases and the results from the plugin. The results vary, but it can be used for extracting faint pencil sketches from photos, cleaning up lines, and comic book inking.
Regarding the model used in the tool, we trained it ourselves. All the data in the dataset was donated by people who sent us their pictures themselves and agreed to this specific use; we haven't used any other data. Moreover, when you use the plugin, everything is processed locally on your machine: it doesn't require an internet connection, doesn't connect to any server, and no account is needed either. Currently it works only on Windows and Linux, but we're working on making it available on macOS as well.
Use cases
The plugin averages multiple strokes into one line and creates strong black lines, but the end result can be blurry or uneven. In many cases, however, it still works better than just using a Levels filter (for example when extracting a pencil sketch). It might be a good idea to use the Levels filter after running the plugin to reduce the blurriness. Since the plugin works best with a white canvas and grey-to-black lines, in the case of photographed pencil sketches or very light sketch lines it might also be a good idea to use Levels before running the plugin.
Extracting photographed pencil sketch
This is the result of the standard procedure of using the Levels filter on a sketch to extract the lines (which leaves part of the image with a shadow):
Another option is to stop after the plugin without forcing black lines with Levels, which gives a nicer, more pencil-like look while keeping the lower part of the page blank:
In the pictures above you can see comic book style inking. The result, which is a bit blurry compared to the original, can be further enhanced by using a Sharpen filter. The dragon was sketched by David Revoy (CC-BY 4.0).
Cleaning up lines
Below are examples of sketches I made and the plugin's results, showing its strong and weak points. All of the pictures below were made using the SketchyModel.
In the pictures below, on the scales of the fish, you can see how the model suppresses lighter lines and enhances the stronger ones, making the scales more pronounced. In theory you could do that with the Levels filter, but in practice the results would be worse, because the model takes the local strength of the line into account.
(Optional) Install NPU drivers if your device has an NPU (practically only necessary on Linux, if you have a very new Intel CPU): see "Configurations for Intel® NPU with OpenVINO™" in the OpenVINO™ documentation. (Note: you can still run the plugin on CPU or GPU; it doesn't require an NPU.)
Run the plugin:
Open or create a white canvas with grey or black strokes (note that the plugin will take the current projection of the canvas, not the current layer).
Go to Tools → Fast Sketch Cleanup
Select the model. Advanced Options will be automatically selected for you.
Wait until it finishes processing (the dialog will close automatically then).
See that it created a new layer with the result.
Advice for processing
Currently it's better to just use SketchyModel.xml; in most cases it works significantly better than SmoothModel.xml.
You need to make sure the background is pretty bright, and the lines you want to keep in the result are relatively dark (either somewhat dark grey or black; light grey might result in many missed lines). It might be a good idea to use a filter like Levels beforehand.
After processing, you might want to enhance the results with either Levels filter or Sharpen filter, depending on your results.
Technology & Science behind it
Unique requirements
The first unique requirement was that it had to work on canvases of all sizes. That meant the network couldn't have any dense (fully connected) linear layers, which are very common in image processing neural networks but require an input of a specific size and produce different results for the same pixel depending on its location. It could only use convolutions, pooling, and similar layers that produce the same result for every pixel of the canvas, no matter the location. Fortunately, the Simo-Serra paper published in 2016 described a network just like that.
Another challenge was that we couldn't really use the model they created, since it wasn't compatible with Krita's license, and we couldn't even use the exact model architecture they described, because one of those model files would be nearly as big as Krita itself and the training would take a really long time. We needed something that would work just as well, if not better, but small enough to be added to Krita without doubling its size. (In theory, we could do as some other companies do and run the processing on a server, but that wasn't what we wanted. Even if it resolved some of our issues, it would bring plenty of major challenges of its own, and we wanted our users to be able to work locally without relying on our servers and the internet.) Moreover, the model had to be reasonably fast and modest in its RAM/VRAM consumption.
Moreover, we didn't have any dataset we could use. Simo-Serra et al. used a dataset where the expected images were all drawn with a constant line width and transparency, which meant the results of the training had those qualities too. We wanted something that looked a bit more hand-drawn, with varying line width and semi-transparent line ends, so our dataset had to contain those kinds of images. Since we weren't aware of any dataset that would match our requirements regarding the license and the data gathering process, we asked our own community for help; you can read the Krita Artists thread about it here: https://krita-artists.org/t/call-for-donation-of-artworks-for-the-fast-line-art-project/96401 .
The link to our full dataset can be found below in the Dataset section.
Model architecture
All main layers are either convolutional or deconvolutional (the latter at the end of the model). After every (de)convolutional layer except the last one there is a ReLU activation layer, and after the last one there is a sigmoid activation layer.
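As a hedged illustration of that layout (not the actual model definitions shipped with the plugin, whose layer counts and channel widths differ), a minimal fully convolutional network of this kind could look like this in PyTorch:

import torch
import torch.nn as nn

class SketchCleanupNet(nn.Module):
    """Illustrative fully convolutional network in the spirit described above:
    convolutions with ReLU activations, a deconvolution at the end, and a
    sigmoid on the final output. Layer counts and channel widths are made up
    for this sketch and do not match the shipped models."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            # The deconvolution brings the result back to the input resolution.
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: greyscale canvas tile, shape (batch, 1, height, width)
        return self.features(x)

Because there are no fully connected layers, the same network can be applied to tiles of any (even) size.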
Python packages used: Pillow, NumPy, PyTorch and OpenVINO
NumPy is the standard library for all kinds of arrays and advanced array operations, and we used Pillow for reading images and converting them into NumPy arrays and back. For training we used PyTorch, while in the Krita plugin we use OpenVINO for inference (processing through the network).
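For example, the Pillow/NumPy round trip described above can be sketched like this (the file names are placeholders):

import numpy as np
from PIL import Image

# Read a greyscale sketch and normalize it to the 0..1 range the network expects.
sketch = Image.open("sketch.png").convert("L")
array = np.asarray(sketch, dtype=np.float32) / 255.0

# ... run the network on `array` here ...

# Convert the result back to an 8-bit greyscale image and save it.
result = Image.fromarray((array * 255.0).astype(np.uint8))
result.save("result.png")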
Using NPU for inference
This table shows the result of benchmark_app, a tool provided with Intel's Python package openvino. It tests the model in isolation on random data. As you can see, the NPU was several times faster than the CPU on the same machine.
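For reference, a minimal OpenVINO inference setup that prefers the NPU and falls back to the CPU might look like the sketch below; the model file name and input shape are placeholders, and this is not the plugin's actual code:

import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("fast_sketch_cleanup.xml")  # placeholder file name

# Pick the NPU if the driver exposes it, otherwise fall back to the CPU.
device = "NPU" if "NPU" in core.available_devices else "CPU"
compiled = core.compile_model(model, device)

# Run inference on one tile (dummy data here, the same idea as benchmark_app).
tile = np.random.rand(1, 1, 256, 256).astype(np.float32)
result = compiled(tile)[compiled.output(0)]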
On the other hand, introducing the NPU added a challenge: the only models that can run on an NPU are static models, meaning the input size must be known at the time of saving the model to file. To solve this, the plugin first cuts the canvas into smaller parts of a specified size (which depends on the model file), then processes all of them, and finally stitches the results together. To avoid artifacts in the areas next to the seams, all of the parts are cut with a little bit of margin, which is later trimmed off.
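A rough sketch of that tiling idea (tile and margin sizes are illustrative, and this is not the plugin's actual implementation):

import numpy as np

def process_tiled(canvas, process_tile, tile=256, margin=16):
    # Cut `canvas` (a 2-D float array) into fixed-size tiles with a margin,
    # run `process_tile` on each one, and stitch the results back together,
    # trimming the margins so the seams don't show.
    # (A static NPU model would additionally need every chunk padded to the
    # exact fixed input size; that detail is omitted here.)
    height, width = canvas.shape
    padded = np.pad(canvas, margin, mode="edge")
    out = np.zeros_like(canvas)
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            chunk = padded[y:y + tile + 2 * margin, x:x + tile + 2 * margin]
            processed = process_tile(chunk)
            inner = processed[margin:-margin, margin:-margin]
            out[y:y + tile, x:x + tile] = inner[:min(tile, height - y), :min(tile, width - x)]
    return out

Calling process_tiled(image, lambda t: t) is then a no-op round trip, which makes the stitching easy to verify.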
How to train your own model
To train your own model, you’ll need some technical skills, pairs of pictures (input and the expected output) and a powerful computer. You might also need quite a lot of space on your hard drive, though you can just remove unnecessary older models if you start having issues with lack of space.
Drivers & preparation
You'll need to install Python 3 and the following packages: Pillow, openvino, numpy, torch. For quantization of the model you will also need nncf and sklearn. If I missed anything, the scripts will complain, so just install the packages they mention too.
Moreover, if you want to use an iGPU for training (which might still be significantly faster than the CPU), you'll probably need to use something like IPEX, which allows PyTorch to use an "XPU" device, which is just your iGPU. It's not tested or recommended, since I personally haven't been able to use it because my Python version was higher than the instructions expect, but the instructions are here: https://pytorch-extension.intel.com/installation?platform=gpu&version=v2.5.10%2Bxpu . The sanity check for the installation is as follows:
python3 -c "import torch; import intel_extension_for_pytorch as ipex; print(f'Packages versions:'); print(f'Torch version: {torch.__version__}'); print(f'IPEX version: {ipex.__version__}'); print(f'Devices:'); print(f'Torch XPU device count: {torch.xpu.device_count()}'); [print(f'[Device {i}]: {torch.xpu.get_device_properties(i)}') for i in range(torch.xpu.device_count())];"
It should show more than 0 devices with some basic properties.
If you manage to get the XPU device working on your machine, you'll still need to edit the training scripts so they'll be able to use it: https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/getting_started.html (most probably you'll just need to add the line "import intel_extension_for_pytorch as ipex" at the very top of the script, just underneath "import torch", and use "xpu" as the device name when invoking the script, and it should work). But as I said, the scripts haven't been tested for that.
Dataset
You'll need some pictures to be able to train your model. The pictures must come in pairs; every pair must contain a sketch (input) and a lineart picture (expected output). The better the quality of the dataset, the better the results.
Before training, it's best if you augment the data: that means the pictures are rotated, scaled up or down, and mirrored. Currently the data augmentation script also inverts the pictures, on the assumption that training on inverted pictures converges faster (since black means zero means no signal, and we'd like that to be the background, so the models learn the lines rather than the background around them).
How to use the data augmentation script is explained below in the detailed instruction for the training part.
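As a minimal illustration of those augmentation steps (this is not the repository's dataPreparer.py; it just sketches rotation, mirroring, and inversion with Pillow, assuming greyscale input images):

from PIL import Image, ImageOps

def augment(path):
    """Yield simple variants of one greyscale image: rotations, a mirror,
    and the inverted copies used so the lines (not the background) carry the signal."""
    image = Image.open(path).convert("L")
    variants = [
        image,
        image.rotate(90, expand=True),
        image.rotate(180),
        ImageOps.mirror(image),
    ]
    for variant in variants:
        yield variant                   # original polarity: white background
        yield ImageOps.invert(variant)  # inverted: black background, white lines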
For quick results, use tooSmallConv; if you have more time and resources, typicalDeep might be a better idea. If you have access to a powerful GPU machine, you might try original or originalSmaller, which represent the original description of the model from the SIGGRAPH article by Simo-Serra et al. (2016), and a smaller version of it.
Use adadelta as the optimizer.
You can use either blackWhite or mse as the loss function; mse is classic, but blackWhite might lead to faster results since it lowers the relative error on the fully white or fully black areas (based on the expected output picture).
In the folder, run: python3 [repository folder]/spawnExperiment.py --path [path to new folder, either relative or absolute] --note "[your personal note about the experiment]"
Prepare data:
If you have an existing augmented dataset, put it all in data/training/ and data/verify/, keeping in mind that paired pictures in the ink/ and sketch/ subfolders must have exactly the same names (for example, if you have sketch.png and ink.png as a pair, you need to put one in sketch/ as picture.png and the other in ink/ as picture.png for them to be paired).
If you don't have an existing augmented dataset:
Put all your raw data in data/raw/, keeping in mind that paired pictures should have exactly the same names with an added prefix of either ink_ or sketch_ (for example, if picture_1.png is the sketch and picture_2.png is the ink picture, you need to rename them to sketch_picture.png and ink_picture.png respectively).
Run the data preparer script: python3 [repository folder]/dataPreparer.py -t taskfile.yml
That will augment the data in the raw directory so that the training is more successful.
Edit the taskfile.yml file to your liking. The most important parts you want to change are:
model type - code name for the model type, use tinyTinier, tooSmallConv, typicalDeep or tinyNarrowerShallow
optimizer - type of optimizer, use adadelta or sgd
learning rate - learning rate for sgd if in use
loss function - code name for the loss function; use mse for mean squared error, or blackWhite for a custom loss function based on mse that weights pixels less where the target pixel value is close to 0.5 (a sketch of this idea is shown below)
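As a hedged sketch of that blackWhite idea (this is not the project's actual implementation; the weighting curve is made up for illustration):

import torch

def black_white_loss(prediction, target):
    """MSE weighted so that pixels whose target value is near 0.5 contribute less,
    while fully black or fully white target pixels contribute fully.
    The exact weighting the project uses may differ; this is illustrative."""
    weight = 2.0 * torch.abs(target - 0.5)  # 0 at mid grey, 1 at pure black/white
    return torch.mean(weight * (prediction - target) ** 2)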
Run the training code: python3 [repository folder]/train.py -t taskfile.yml -d "cpu"
On Linux, if you want it to run in the background, add "&" at the end. If it runs in the foreground, you can pause the training just by pressing Ctrl+C; if it runs in the background, find the process id (using either the "jobs -l" command or the "ps aux | grep train.py" command; the first number is the process id) and stop it using the "kill [process id]" command. Your results will still be in the folder, and you'll be able to resume the training using the same command.
Convert the model to an openvino model: python3 [repository folder]/modelConverter.py -s [size of the input, recommended 256] -t [input model name, from pytorch] -o [openvino model name, must end with .xml] (a rough sketch of what such a conversion involves is shown after these steps)
Place both the .xml and .bin model files in your Krita resource folder (inside pykrita/fast_sketch_cleanup subfolder) alongside other models to use them in the plugin.
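The repository's modelConverter.py script handles the conversion for you. As a rough sketch of what it involves when done by hand with a recent OpenVINO release (the placeholder network, the 256 input size and the file names below are illustrative, not the project's actual code):

import torch
import openvino as ov

# Placeholder for your trained PyTorch model; substitute the real network here.
net = torch.nn.Sequential(torch.nn.Conv2d(1, 1, 3, padding=1), torch.nn.Sigmoid())
net.eval()

# 256 is the fixed input size recommended above.
example = torch.randn(1, 1, 256, 256)
ov_model = ov.convert_model(net, example_input=example)
ov.save_model(ov_model, "my_model.xml")  # writes my_model.xml and my_model.bin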
We are happy to announce Kdenlive 24.12. This release focuses on bug fixes, improved stability, and usability enhancements across the board. Numerous crashes and glitches have been addressed, including issues with audio capture, effect zones, high DPI display rendering, and subtitle editing. Proxies, rotoscoping, and project management workflows have been significantly refined, resolving lags, incorrect EXIF orientation handling, and archiving problems. We've also managed to sneak in some nifty little features, like the ability to resize multiple timeline items, a Shift + Del shortcut to extract clips from the timeline, actions to quickly add Markers/Guides in a specific category, and mixes (same-track transitions) that can now be as short as one frame.
Under the hood, we've dropped support for Qt5 and now require Qt6, alongside updated dependencies (MLT 7.28 and KF 6.3). This release also comes with a lot of code cleanups, refactored Whisper settings, and optimized threading and memory management. Additionally, fail-safe measures have been added to prevent invalid project profiles and script names.
Subtitles
We’ve added support for Advanced SubStation Alpha (ASS) subtitles, a widely used text-based format renowned for its flexibility in creating highly styled and customizable subtitles. ASS subtitles support advanced features such as font family, size, and color; text outlines and shadows; alignment and positioning; scaling and rotation; margins and spacing; and effects, including masking and other enhancements. This feature was developed by Chengkun Chen as part of Google Summer of Code (GSOC).
Subtitle Manager
The new subtitle manager is now integrated with style management and has been divided into four sections: Files, Layers and Content, Style, and Info, which correspond to the four main components of ASS subtitles.
Files – create, import and export subtitles
Layers and Content – create/remove subtitle tracks and apply styling
Styles – create and manage styles
Info – displays information about subtitles
Subtitle Style Editor
The new and powerful Subtitle Style Editor allows you to control all the styling capabilities of the ASS format.
Animated Subtitles
The ASS format supports three types of effects: Banner, where the text scrolls sideways across the screen; Scroll, where the text moves vertically; and Karaoke, where each word is highlighted in sync with the audio.
Currently, only the Banner and Scroll effects are accessible through the user interface, but additional styling, including Karaoke effects, can be applied using ASS tags.
Speech-to-Text
We've polished the Speech-to-Text features, ensuring a smoother and more reliable experience. Seamless installation, GPU translation and threading issues have been fixed. We've also resolved issues with the display of Vosk, Whisper and Seamless model folder sizes on Windows, added the ability to update all virtual environment packages, and updated to the latest version of Whisper. Lastly, the Whisper settings interface has been refactored.
Effects
With this version, we complete the final task of our fundraiser: built-in effects and a redesigned effects interface. Rendering of keyframe types like Bounce, Circular, and Exponential has been improved, alongside fixes for zone-based effects, rotoscoping lag, shape filter rendering, time remapping precision, motion tracker models, and previous/next seeking in the monitor. It is also now possible to have single-frame mixes (same-track transitions).
Interface redesign
The Effect Stack redesign enhances usability with a clearer organization of keyframeable and non-keyframeable parameters and an improved, more compact and consistent layout. We've also added info buttons to effect headers for quick access to documentation.
Built-in Effects
To make your workflow much more fluid, the new effects panel gives direct access to effect parameters, allowing you to quickly and easily adjust them. The current built-in effects are Transform and Flip for video clips and Volume for audio clips. Built-in effects can be enabled or disabled in the settings.
New Effects
As usual, there is always room for some eye candy, so we've added two color correction effects, HSL Primaries and HSL Range, as well as GPS effects (the images below display Distance, Altitude and Speed, among many other values).
Other Highlights
Fix audio capture issues
Added Shift + Del shortcut to extract clip from timeline
Fix clip monitor history menu not showing up on audio clips
Fix spacer tool leaving a few frames after last clip
Implement resizing multiple timeline items
Fix Pexels Videos provider
Fix Alt+click to loop between clips using an effect in project monitor
Titler: ensure only plain text can be pasted
Titler: added support for tabulations
Add Actions to quickly add Marker/Guides in a specific category
Full changelog
Save extracted frames in project folder if project is supposed to save files in its parent folder. Commit. Fixes bug #496486.
Cleanup, fix incorrect invocation of setProducer. Commit.
Master effects: don’t try to refresh both monitors on each effect param change, simply mark the inactive monitor as needing a refresh on next focus action. Commit.
Together with Intel, we have been working on a new plugin for Krita: the fast sketch plugin, or perhaps better, a fast inking plugin. This is an experimental plugin that makes it (sometimes) possible to automatically ink a sketch using neural networks.
This plugin uses models to figure out how to ink a sketch. The included models were trained on openly available data; there was no scraping or stealing involved! The plugin comes with a manual that explains how to get the scripts you can use to create a model trained on your own data: what you need are before-and-after images, i.e. your sketch and your uncolored inked drawing, and the training software can run on your own hardware (it will take a lot of time, though).
Throughout the development process we've been discussing this plugin with artists on the Krita Artists forum.
The plugin can be downloaded and extracted into a Windows Krita 5.2.6 folder, and should then be enabled in the plugin manager in Krita's settings dialog.
There is also a download of Krita 5.3.0 pre-alpha available that includes the plugin for Windows and Linux. Currently, we don't have a working macOS version ready, and since the plugin is implemented in Python, there will be no Android packages.
All of the Maui repositories have the newly released branches and tags. You can get the sources right from the Maui group: https://invent.kde.org/maui
MauiKit 4 Frameworks & Apps
With the previous release, MauiKit Frameworks and the Maui Apps were ported over to Qt6; however, some regressions were introduced, and those bugs have now been fixed in this new revision.
Currently there are over 10 frameworks, with two new ones recently introduced. For the most part they have all been fully documented, and although the KDE doxygen agent has some minor issues when publishing some parts, you can find the documentation online at https://api.kde.org/mauikit/ (if you find missing parts, confusing bits, or sections to improve, you can open a ticket at any of the framework repos and it shall be fixed shortly after).
A brief list of the changes and fixes introduced to the frameworks follows:
For MauiKit Controls
MauiKit no longer depends on MauiKit-Style, so other QQC2 styles can be used with Maui Apps (although other styles are not officially supported).
MauiKit fixes the toast area notifications. The toast notifications can now take multiple contextual actions.
MauiKit Demo app has been updated to showcase all the new control properties
New controls: TextField, Popup, DropDownIndicator,
MauiKit fixes the template delegates and the IconItem control
MauiKit fixes to the Page autohide toolbars
Update style and custom controls to use MauiKit Controls’ attached properties for level, status, title, etc.
Display keyboard shortcut info in the MenuItems
Update MauiKit Handy properties for isMobile, isTouch, and hasTransientTouchInput and fixes to the lasso selection on touch displays
Added more resize areas to the BaseWindow type
Check for system color scheme style changes and update accordingly. This works on other systems besides Plasma or Maui, such as Gnome or Android
The type AppsView has been renamed to SwipeView, and AppViewLoader to SwipeViewLoader
Update MauiKit-Style to support MauiKit Controls attached properties and respect the flat properties in buttons
Fixes to the MauiKit bug in the GridBrowser scrollbars policy
Fixes to the action buttons layout in Dialog and PopupPage controls
Refresh the icon when a system icon-theme change is detected (a workaround is used for Plasma, and the default Qt API for other systems)
For the MauiKit Frameworks
FileBrowsing fixes bugs with the Tagging components
Fixes to the models using dates. Due to a bug in Qt, getting a file's date and time is too slow unless the UTC timezone is specified
Update FileBrowsing controls to use the latest Mauikit changes
Added a new control: FavButton, to mark files as favorites using the Tagging component quickly
Update and fixes to the regressions in the other frameworks
ImageTools fixes the OCR page
TextEditor fixes the line numbers implementation.
All of the frameworks are now at version 4.0.1
All of the apps have been reviewed for the regressions previously introduced in the porting to Qt6; those issues have been solved and a few new features have been added, such as:
Station now allows opening selected links externally
Index fixes to the file previewer and support for quickly tagging files from the previewer
Vvave fixes to the minimode window closing
Update the apps to remove usage of the Qt5Compat effects module
Fix issues in Fiery, Strike, and Agenda
Fix the issue where selecting multiple items in the apps did not work
Clip fixes to the video thumbnail previews and the opening file dialog
Implement the floating viewer for Pix, Vvave, Shelf, and Clip for consistency
Correctly open the Station terminal at the current working directory when invoked externally
Among many other details.
Index, Vvave, Pix, Nota, Buho, Station, Shelf, Clip, and Communicator versions have been bumped to 4.0.1
Strike and Fiery browser versions have been bumped to 2.0.1
Agenda and Arca versions have been bumped to 1.0.1
As for Bonsai, Era, and other applications still under development, there is no Qt6-ported version as of now.
Maui Shell
Although Maui Shell has been ported over to Qt6 and is working with the latest MauiKit4, a lot of pending issues are still present and being worked on. The next release will be dedicated fully to Maui Shell and all of its subprojects, such as Maui Settings, Maui Core, CaskServer, etc.
That's it for now. Until the next blog post, which will be a bit closer to the 4.0.1 stable release.
I recently saw one of my old branded “stripes”
wallpapers in a screenshot of FreeBSD by someone on X, and that
triggered me to make a new wallpaper in a similar style.
There was a call for artwork for the next Debian release – Trixie,
and I made a modified version of one of my old wallpapers for it. As it
was not chosen to be the default in Trixie, I decided to post it here
for people who might like it.
It is, like all my wallpapers, a calm, non-distracting one. (It is
much prettier at full 4K size than in the thumbnail below.)
Trixie Tracks
If you like it, you can download it from Debian’s
Wiki – in 1920x1080 and 4k versions. There is also a version with
the Debian logo there for inspiration if you want to create a custom
distribution-branded one.
Welcome to the @Krita-promo team's November 2024 development and community update.
Development Report
Community Bug Hunt Ended
The Community Bug Hunt has ended, with dozens of bugs fixed and over a hundred more bug reports closed. Huge thanks to everyone who participated, and if you missed it, the plan is to make this a regular occurrence.
Can't wait for the next bug hunt to be scheduled? Neither will the bug reports! Help in investigating them is appreciated anytime!
Community Report
November 2024 Monthly Art Challenge Results
For the "Fluffy" theme, 22 members submitted 26 original artworks.
And the winner is…
Most "Fluffy" by @steve.improvthis, featuring three different fluffy submissions. Be sure to check out the other two as well!
The December Art Challenge is Open Now
For the December Art Challenge, @steve.improvthis has chosen "Tropical" as the theme, with the optional challenge of using new or unfamiliar brushes. See the full brief for more details, and find yourself a place in the sun!
Featured Artwork
Best of Krita-Artists - October/November 2024
Seven images were submitted to the Best of Krita-Artists Nominations thread, which was open from October 15th to November 11th. When the poll closed on November 14th, these five wonderful works made their way onto the Krita-Artists featured artwork banner:
Krita is Free and Open Source Software developed by an international team of sponsored developers and volunteer contributors.
Visit Krita's funding page to see how user donations keep development going, and explore a one-time or monthly contribution. Or check out more ways to Get Involved, from testing, coding, translating, and documentation writing, to just sharing your artwork made with Krita.
The Krita-promo team has put out a call for volunteers, come join us and help keep these monthly updates going.
Notable Changes
Notable changes in Krita's development builds from Nov. 12 - Dec. 11, 2024.
Stable branch (5.2.9-prealpha):
General: Fix rounding errors in opacity conversion, which prevented layered 50% brushstrokes from adding up to 100%. (bug report) (Change, by Dmitry Kazakov)
General: Fix snapping to grid at the edge of the canvas. (bug report) (Change, by Dmitry Kazakov)
General: Disable snapping to image center by default, as it can cause confusion. (bug report) (Change, by Dmitry Kazakov)
Calligraphy Tool: Fix following existing shape in the Calligraphy Tool. (bug report) (Change, by Dmitry Kazakov)
Layers: Fix "Copy into new Layer" to copy vector data when a vector shape is active. (bug report) (Change, by Dmitry Kazakov)
Selections: Fix the vector selection mode to not create 0px selections, and to select the canvas before subtracting if there is no existing selection. (bug report, CC bug report) (Change, by Dmitry Kazakov)
General: Add Unify Layers Color Space action. (Change, by Dmitry Kazakov)
Layers: Don't allow moving a mask onto a locked layer. (Change, by Maciej Jesionowski)
Linux: Capitalize the .AppImage file extension to match the convention expected by launchers. (bug report) (Change, by Dmitry Kazakov)
Unstable branch (5.3.0-prealpha):
Bug fixes:
Color Management: Update display rendering when blackpoint compensation or LCMS optimizations are toggled, not just when the display color profile is changed. (bug report) (Change, by Dmitry Kazakov)
Features:
Text: Implement Convert to Shape for bitmap fonts. (Change, by Wolthera van Hövell)
Filters: Add Fast Color Overlay filter, which overlays a solid color using a configurable blending mode. (Change, by Maciej Jesionowski)
Brush Engines: Add Pattern option to "Auto Invert For Eraser" mode. (Change, by Dmitry Kazakov)
Wide Gamut Color Selector Docker: Add option to hide the Minimal Shade Selector rows. (Change, by Wolthera van Hövell)
Wide Gamut Color Selector Docker: Show the Gamut Mask toolbar when the selector layout supports it. (Change, by Wolthera van Hövell)
Layers: Add a warning icon for layers with a different color space than the image. (Change 1, by Dmitry Kazakov, and Change 2, by Timothée Giet)
Pop-Up Palette: Add an option to sort the color history ring by last-used instead of by color. (bug report) (Change, by Dmitry Kazakov)
Export Layers Plugin: Add option to use incrementing prefix on exported layers. (wish bug report) (Change, by Ross Rosales)
Nightly Builds
Pre-release versions of Krita are built every day for testing new changes.
The open source project I have worked on for the longest time is KDE, and there more specifically Kate.
This means I have been looking at user bug reports for over 20 years now.
The statistics tell me our team has received more than 9000 bug reports since around 2001 (just for Kate; this excludes the libraries like KTextEditor that we maintain, too).
Kate Bug Statistics
That is a bit more than one bug per day for over two decades.
And as the statistics show, especially in the last few years we were able to keep the open bug count down, which means we fixed a lot of them.
Given we are a small team, I think that is a nice achievement.
We have not just survived over 20 years; we are still alive and kicking, not just a still-compiling zombie project.
Thanks a lot to all the people who are contributing to this success!
Let’s keep this up in the next year and the ones following.
Welcome to a new issue of "This Week in KDE Apps"! Every week we cover as much as possible of what's happening in the world of KDE apps.
This week, aside from releasing KDE Gear 24.12.0 and Kaidan 0.10.0, we added an overview of all your data in Itinerary and polished many other apps. Some of us also met in Berlin and organized a small KDE sprint where, aside from eating some Crêpes Bretonnes, we had discussions around Itinerary, Kirigami, Powerplant and more.
Itinerary has a new "My Data" page containing your program memberships, health certificates, saved locations and travel statistics, and it lets you export and import all the data from Itinerary. (Carl Schwan, 25.04.0 — Link)
Mathis redesigned various parts of Powerplant and added a tasks view. (Mathis Brucher)
Other
More Kirigami applications now remember their size across restarts by using KConfig.WindowStateSaver. (Nate Graham, 25.04.0 — Skanpage and Elisa)
For a complete overview of what's going on, visit KDE's Planet, where you can find all KDE news unfiltered directly from our contributors.
Get Involved
The KDE organization has become important in the world, and your time and
contributions have helped us get there. As we grow, we're going to need
your support for KDE to become sustainable.
You can help KDE by becoming an active community member and getting involved.
Each contributor makes a huge difference in KDE — you are not a number or a cog
in a machine! You don’t have to be a programmer either. There are many things
you can do: you can help hunt and confirm bugs, even maybe solve them;
contribute designs for wallpapers, web pages, icons and app interfaces;
translate messages and menu items into your own language; promote KDE in your
local community; and a ton more things.
You can also help us by donating. Any monetary
contribution, however small, will help us cover operational costs, salaries,
travel expenses for contributors and in general just keep KDE bringing Free
Software to the world.
To get your application mentioned here, please ping us in invent or in Matrix.