
Sunday, 29 December 2024

The Amarok Development Squad is happy to announce the immediate availability of Amarok 3.2 "Punkadiddle"!

2024 was the year that finally introduced a Qt5/KF5 based Amarok 3 release in April, and it was followed by a number of 3.x bugfix and feature releases. Now, to conclude 2024, it is time for Amarok 3.2.0. The most interesting change is probably the ability to build the same codebase on both Qt5 and Qt6. Qt5/KF5 is still the recommended and tested configuration for now. The Qt6 version should be usable, but in addition to any unknown issues, there are a number of known issues documented in the README.

Additionally, 3.2.0 includes, among other things, collection filtering and Ampache-related features and fixes, with the oldest resolved feature request dating from 2013. Multiple long-standing crash bug reports have been closed lately, and probable fixes for various issues observed in crash report data have been introduced. Amarok 3.2.0 should thus feature slightly improved stability. Some 3.2.x bugfix releases are to be expected in 2025, before the focus shifts to preparations for an "Amarok 4".

Changes since 3.1.1

FEATURES:
  • Building an experimental Qt6/KF6 Amarok version is now possible
  • Allow filtering collection by lack of tag / empty tag (BR 325317)
CHANGES:
  • Amarok now depends on KDE Frameworks 5.108
  • Show current track context applet by default
BUGFIXES:
  • Probably fix occasional crashes when filtering collection (BR 492406)
  • Probably fix occasional crashes when clearing CompoundProgressBars
  • Fix context view applets on Qt6/KF6
  • Fix Ampache login on server version 5.0.0 and later (BR 496581)
  • Fix crash if Ampache login is redirected (BR 396590)

The git repository statistics between 3.1.0 and 3.2.0 are as follows:

l10n daemon script: 68 commits, +41987, -35275
Tuomas Nurmi: 46 commits, +723, -4854
Raresh Rus: 3 commits, +41, -53
Ian Abbott: 1 commit, +17, -3
Heiko Becker: 1 commit, +0, -3

Getting Amarok

In addition to source code, Amarok is available for installation from many distributions' package repositories, which are likely to be updated to 3.2.0 soon, as well as the Flatpak available on Flathub.

Packager section

You can find the tarball package on download.kde.org and it has been signed with Tuomas Nurmi's GPG key.

Unfortunately, there won't be any "This Week in KDE Apps" blog post this week, as I (Carl) and others are at 38C3 (the Chaos Communication Congress) in Hamburg. But if you are also there, don't hesitate to come by and say hi.

The KDE stand at 38c3

One of the most requested features for Kdenlive has been a modern background removal tool.

Among the many features and enhancements that will come in 2025, we are excited to announce a preview version with a background removal tool using object masks. The feature is based on SAM2's object segmentation. You can download the Kdenlive test alpha version from the links at the bottom of this page.

Since this is a testing preview version, the binaries are not signed and you might need to manually allow the install on Windows.

Here is a quick demo of how the feature works in screenshots:

  1. Add a clip to your project
  2. Select a zone to apply the background removal
  3. Click on the Mask button
  4. Select Configure to set up the tool (to be done only the first time)
  5. Click on Install in the Object detection setup

Go drink a coffee while the module is downloaded and installed (between 5 and 15 minutes depending on your internet connection). Currently there is not much feedback during the install; we will improve this, so just be patient.

Kdenlive downloads the smallest model by default. Once it shows up, you can close the dialog and start using the feature.

  1. Click on the Create new mask button
  2. Click on the object you want to keep (foreground)
  3. When the white mask appears, click on Generate Mask
  4. The video mask task starts, drink another coffee

When the video mask is created, it will appear in the mask manager dialog (2).

  1. Drag your clip zone from the clip monitor to timeline
  2. Select the newly created mask
  3. Click on Apply Mask

Success, you can now enjoy your video without the background.

Of course, this feature can also be used for other exciting things like applying an effect or a color correction to a specific object only.

Keep in mind that this is an alpha version; we will enhance and polish it for the upcoming 25.04 version. Happy editing!

Linux AppImage download:

https://files.kde.org/kdenlive/unstable/kdenlive-25.04.0-alpha-x86_64.AppImage.mirrorlist

Windows download:

https://files.kde.org/kdenlive/unstable/kdenlive-25.04-alpha-x86_64.exe.mirrorlist

Flatpak:

Check our experimental nightly version (see instructions at the bottom of our download page)

Screen magnification is an accessibility feature that enlarges the screen to make text, images, and other user interface components easier to see or read. It is not something that requires constant developer attention; however, in Plasma 6.3, the zoom plugin received some improvements that I'd like to go over quickly.

Pixel grid


Arguably, text will be too hard to read if the screen is zoomed in "too much". There are several ways this case can be handled. For example, the magnification factor can be capped (e.g. at 8x or 10x), or the effect can do nothing and just display blurry upscaled screen contents… or it can display something else.

With the old behavior, the zoom plugin did not do anything special when the magnification factor reached a high value; with the new behavior, it displays the individual pixels on the screen. This can be very useful to developers, designers, etc.

System settings

In addition to the new pixel grid mode, the system settings for the zoom plugin received minor polishing to look more consistent with other config modules.

Future improvements

Keyboard shortcuts are not the only way the zoom plugin can be triggered. For example, it can also be triggered by pressing the Meta and Control keys and scrolling the mouse wheel. However, this gesture is not exposed anywhere in the user interface, and some people may prefer zooming with just the Meta key pressed. In order to address the discoverability issue of the mouse wheel gesture and allow using a different combination of modifier keys, there is already a patch to add the corresponding system setting, but it's 6.4 material. It would also be nice to move the screen magnifier settings from the desktop effects config module to the accessibility config module.

Last but not least, the zoom effect currently uses a bilinear magnification filter, which produces okay-ish visual results, but it's worth looking into alternative upscaling algorithms that handle edges better, so that zoomed-in text looks less blurry.

Hello,

I need your help. I've created a first version of Skrooge that can be built on KF6 and Qt6 (its temporary version number is 2.33.8).

I use it daily for managing my own accounts. However, before releasing an official version, I’d like some of you to test it and provide feedback by reporting any issues you encounter.

I’m counting on you! To get started, check out the download section and the README.md.

Thanks in advance!

Friday, 27 December 2024

Bundle Creator

After almost a year, I finally found some time to dive back into Krita. I stumbled upon the Memileo Impasto Brushes bundle, which mimics the texture and thickness of real paint—perfect for adding depth and dimension. Inspired to try them out, I created this quick one-hour painting.

Friday, 20 December 2024

I started this blog back in 2010. Back then I used Wordpress and it worked reasonably well. In 2018 I decided to switch to a statically generated site, mostly because the Wordpress blog felt slow to load and was a hassle to maintain. Back then the go-to static site generator was Jekyll, so I went with that. Lately I've been struggling with it, though, because in order to keep all the plugins working I needed to use older versions of Ruby, which meant I had to use Docker to build the blog locally. Overall, it felt like too much work, and for the past few years I've been eyeing Hugo, more so since Carl and others migrated most of the KDE websites to it. I mean, if it's good enough for KDE, it's good enough for me, right?

So this year I finally got around to making the switch. I migrated all the content from Jekyll. This time I actually went through every single post, converted it to proper Markdown, fixed formatting, images, etc. It was a nice trip down memory lane, reading all the old posts, remembering all the sprints and Akademies… I also took the opportunity to clean up the tags and categories, so that they are more consistent and useful.

Finally, I have rewritten the theme. I originally ported the template from Wordpress to Jekyll, but it was a bit of a mess, and responsiveness was "hacked" in via JavaScript. Web development (and my skills) have come a long way since then, so I was able to leverage more modern CSS and HTML features to make the site look the same, but be more responsive and accessible.

Comments

When I switched from Wordpress to Jekyll, I was looking for a way to preserve comments. I found Isso, which is basically a small CGI server backed by SQLite that you can run on the server and embed into your static website through JavaScript. It could also natively import comments from Wordpress, so that's the main reason why I went with it, I think. Isso was not perfect (although development has picked up again in the past few years) and it kept breaking for me. I think it hasn't worked on my blog for the past few years and I just couldn't be bothered to fix it. So, I decided to ditch it in favor of another solution…

I wanted to keep the comments for old posts by generating them as static HTML from Isso's SQLite database, but alas, the database file was empty. It looks like I lost all the comments at some point in 2022. It sucks, but I guess it's not the end of the world. Due to the nature of how Isso worked, not even the Wayback Machine was able to archive the comments, so I guess they are lost forever…

For this new blog, I decided to use Carl's approach of embedding replies to a Mastodon post as comments. I think it's a neat idea and it's probably the most reliable solution for comments on a static blog (that I don't have to pay for, host myself, or deal with privacy concerns or advertising).

I have some more ideas regarding the comments system, but that’s for another post ;-) Hopefully I’ll get to blog more often now that I have a shiny new blog!

Happy Holidays 🎄

Enjoy the holidays and see you in 2025 🥳!

In recent weeks we have been working on transferring LabPlot’s documentation to a new format.

We decided to move the documentation from the DocBook and MediaWiki formats to the Sphinx/reStructuredText framework. In our view, Sphinx offers a user-friendly and flexible way to create and manage documentation; easy math typing and code formatting also come along. Additionally, Sphinx supports basic syntax checks and modern documentation practices, such as versioning and integration with various output formats like HTML, PDF and ePub.
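As a rough illustration of how such a setup looks (this is a generic, minimal Sphinx configuration sketch, not LabPlot's actual conf.py), a project enables math rendering and the various output builders roughly like this:

    # conf.py -- minimal, generic Sphinx configuration sketch (not LabPlot's actual file)
    project = "LabPlot"
    extensions = [
        "sphinx.ext.mathjax",   # easy math typing rendered in the HTML output
    ]
    html_theme = "alabaster"    # any installed theme can be used here

    # Typical builds, run from the documentation folder:
    #   sphinx-build -b html . _build/html
    #   sphinx-build -b epub . _build/epub
    #   make latexpdf           # PDF output via the generated LaTeX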

The new user’s manual is available on a dedicated page: https://docs.labplot.org. Please check it out and let us know what you think.

The manual still needs to be supplemented with new content, so we encourage you to contribute to the documentation, e.g. by fixing and adding sections or updating images, as collaborative efforts can lead to a more comprehensive resource for everyone. Please check the Git repository dedicated to the documentation for more details on how to help make it better.

Fast Sketch Cleanup plugin

Introduction

We started this project with the intent of providing users with a tool to help with inking sketches. It is based on a research article by Simo & Sierra published in 2016, and it uses neural networks (now commonly referred to simply as AI) to work. The tool has been developed in partnership with Intel and is still considered experimental, but you can already use it and see the results.

In the section below there are some real life examples of use cases and the results from the plugin. The results vary, but it can be used for extracting faint pencil sketches from photos, cleaning up lines, and comic book inking.

Regarding the model used in the tool, we trained it ourselves. All the data in the dataset was donated by people who sent their pictures to us themselves and agreed to this specific use case. We haven't used any other data. Moreover, when you use the plugin, it processes everything locally on your machine: it doesn't require any internet connection, doesn't connect to any server, and no account is required either. Currently it works only on Windows and Linux, but we'll work on making it available on macOS as well.

Use cases

It averages the lines into one line and creates strong black lines, but the end result can be blurry or uneven. In many cases, however, it still works better than just using a Levels filter (for example when extracting a pencil sketch). It might be a good idea to use the Levels filter after using the plugin to reduce the blurriness. Since the plugin works best with a white canvas and grey-black lines, in the case of photographed pencil sketches or very light sketch lines, it might be a good idea to use Levels before using the plugin as well.

Extracting photographed pencil sketch

This is the result of the standard procedure of using a Levels filter on a sketch to extract the lines (which results in part of the image retaining a shadow):

[Image: sketch_girl_original_procedure_comparison]

The sketch was drawn by Tiar (link to KA profile)

This is the procedure using the plugin with SketchyModel (Levels → plugin → Levels):

[Image: sketch_girl_new_procedure_comparison]

Comparison (for black lines):

[Image: sketch_girl_procedures_comparison]

Another possible result is to just stop at the plugin without forcing black lines using Levels, which results in a nicer, more pencil-y look while keeping the lower part of the page still blank:

[Image: sketch_girl_after_plugin]

Comic book-like inking


Picture of a man made by BeARToys

Here in the pictures above you can see the comic book style inking. The result, which is a bit blurry compared to the original, can be further enhanced by using a Sharpen filter. The dragon was sketched by David Revoy (CC-BY 4.0).

Cleaning up lines

Examples of sketches I made and the results from the plugin, showing its strong and weak points. All of the pictures below were made using the SketchyModel.

[Images: flower_001 and flower_001_detail]

[Images: portrait_man_portrait_2_comparison and portrait_man_portrait_2_detail]

All of the pictures above painted by Tiar (link to KA profile)

In the pictures below, on the scales of the fish, you can see how the model suppresses lighter lines and enhances the stronger ones, making the scales more pronounced. In theory you could do that using the Levels filter, but in practice the results would be worse, because the model takes into account the local strength of the line.


[Image: fish_square_sketchy_comparison]

Picture of the fish made by Christine Garner (link to portfolio)

How to use it in Krita

To use the Fast Sketch Cleanup plugin in Krita, do the following:

  1. Prepare Krita:

    1. On Windows:

      1. Either in one package: download Krita 5.3.0-prealpha with Fast Sketch Cleanup plugin already included: https://download.kde.org/unstable/krita/5.3.0-prealpha-fast-sketch/krita-x64-5.3.0-prealpha-cdac9c31.zip

      2. Or separately:

        1. Download the portable version of Krita 5.2.6 (or a similar version; it should still work)
        2. Download separately the Fast Sketch Cleanup plugin here: https://download.kde.org/stable/krita/FastSketchPlugin-1.0.2/FastSketchPlugin1.0.2.zip
        3. Unzip the file into krita-5.2.6/ folder (keeping the folder structure).
        4. Then go to Settings → Configure Krita → Python Plugin Manager, enable Fast Sketch Cleanup plugin, and restart Krita.
    2. On Linux:

      1. Download the appimage: https://download.kde.org/unstable/krita/5.3.0-prealpha-fast-sketch/krita-5.3.0-prealpha-cdac9c31c9-x86_64.AppImage
  2. (Optional) Install NPU drivers if you have an NPU on your device (practically only necessary on Linux, if you have a very new Intel CPU): Configurations for Intel® NPU with OpenVINO™ — OpenVINO™ documentation (note: you can still run the plugin on the CPU or GPU, it doesn't require an NPU)

  3. Run the plugin:

    1. Open or create a white canvas with grey-white strokes (note that the plugin will take the current projection of the canvas, not the current layer).
    2. Go to Tools → Fast Sketch Cleanup
    3. Select the model. Advanced Options will be automatically selected for you.
    4. Wait until it finishes processing (the dialog will close automatically then).
    5. See that it created a new layer with the result.

Advice for processing

Currently it's better to just use the SketchyModel.xml; in most cases it works significantly better than the SmoothModel.xml.

You need to make sure the background is pretty bright, and the lines you want to keep in the result are relatively dark (either somewhat dark grey or black; light grey might result in many missed lines). It might be a good idea to use a filter like Levels beforehand.

After processing, you might want to enhance the result with either the Levels filter or the Sharpen filter, depending on the outcome.
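For readers who prefer to see the idea in code, a rough equivalent of such a Levels-style black/white point adjustment (an illustrative NumPy/Pillow sketch, not Krita's filter) is:

    # levels_sketch.py -- illustrative black/white point remap, not Krita's Levels filter
    import numpy as np
    from PIL import Image

    def levels(img, black, white):
        # map values so that `black` becomes 0.0 and `white` becomes 1.0, clipping the rest
        a = np.asarray(img.convert("L"), dtype=np.float32) / 255.0
        a = np.clip((a - black) / (white - black), 0.0, 1.0)
        return Image.fromarray((a * 255).astype(np.uint8))

    # e.g. darken faint pencil lines before the plugin, or force black lines afterwards
    levels(Image.open("sketch.png"), black=0.3, white=0.8).save("sketch_leveled.png")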

Technology & Science behind it

Unique requirements

The first unique requirement was that it had to work on canvases of all sizes. That meant the network couldn't have any dense (fully connected) linear layers, which are very common in image-processing neural networks and which require an input of a specific size and produce different results for the same pixel depending on its location. It could only contain convolutions, pooling or similar layers that produce the same results for every pixel of the canvas, no matter the location. Fortunately, the Simo & Sierra paper published in 2016 described a network just like that.

Another challenge was that we couldn't really use the model they created, since it wasn't compatible with Krita's license, and we couldn't even really use the exact model type they described, because one of those model files would be nearly as big as Krita itself and the training would take a really long time. We needed something that would work just as well if not better, but small enough to be added to Krita without making it twice as big. (In theory, we could do like some other companies and make the processing happen on some kind of server, but that wasn't what we wanted. And even if it resolved some of our issues, it would bring plenty of its own major challenges. Also, we wanted our users to be able to use the tool locally, without relying on our servers and the internet.) Moreover, the model had to be reasonably fast and modest in regard to RAM/VRAM consumption.

Moreover, we didn't have any dataset we could use. Simo & Sierra used a dataset where the expected images were all drawn using a constant line width and transparency, which meant that the results of the training had those qualities too. We wanted something that looked a bit more hand-drawn, with varying line width or semi-transparent ends of the lines, so our dataset had to contain those kinds of images. Since we weren't aware of any datasets that would match our requirements regarding the license and the data gathering process, we asked our own community for help; here you can read the Krita Artists thread about it: https://krita-artists.org/t/call-for-donation-of-artworks-for-the-fast-line-art-project/96401 .

The link to our full dataset can be found below in the Dataset section.

Model architecture

All main layers are either convolutional or deconvolutional (at the end of the model). After every (de)convolutional layer except the last one there is a ReLU activation layer, and after the last one there is a sigmoid activation layer.
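In PyTorch terms, that layer pattern looks roughly like the sketch below (the layer counts and channel sizes are made up for illustration; the actual model definitions live in the training repository):

    # illustrative fully convolutional model: conv + ReLU blocks, deconvolution at the end, sigmoid output
    import torch
    import torch.nn as nn

    class LineArtSketchNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
                # deconvolution (transposed convolution) restores the original resolution
                nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
                nn.Sigmoid(),  # output in [0, 1]; works for any canvas size
            )

        def forward(self, x):
            return self.body(x)

    out = LineArtSketchNet()(torch.rand(1, 1, 256, 256))  # e.g. one 256x256 greyscale tile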

Python packages used: Pillow, NumPy, PyTorch and OpenVINO

NumPy is a standard library for all kinds of arrays and advanced array operations, and we used Pillow for reading images and converting them into NumPy arrays and back. For training we used PyTorch, while in the Krita plugin we used OpenVINO for inference (processing through the network).

Using NPU for inference


This table shows the result of benchmark_app, a tool that is provided with Intel's Python package openvino. It tests the model in isolation on random data. As you can see, the NPU was several times faster than the CPU on the same machine.

On the other hand, introducing the NPU added a challenge: the only models that can run on an NPU are static models, meaning the input size must be known at the time of saving the model to a file. To solve this, the plugin first cuts the canvas into smaller parts of a specified size (which depends on the model file), then processes all of them and finally stitches the results together. To avoid artifacts in the areas next to the stitching, all of the parts are cut with a little bit of margin, and the margin is later cut off.
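Conceptually, the tiling works like the sketch below (the tile size, the margin and the inference call are placeholders, not the plugin's actual code):

    # illustrative tile-with-margin processing for a static (fixed input size) model
    import numpy as np

    def process_tiled(canvas, infer, tile=256, margin=16):
        # canvas: 2D float array; infer: runs the model on a (tile + 2*margin)^2 input
        h, w = canvas.shape
        padded = np.pad(canvas, margin, mode="edge")
        out = np.zeros_like(canvas)
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                block = padded[y:y + tile + 2 * margin, x:x + tile + 2 * margin]
                # pad the last, smaller blocks up to the fixed size the model expects
                by, bx = block.shape
                block = np.pad(block, ((0, tile + 2 * margin - by), (0, tile + 2 * margin - bx)), mode="edge")
                result = infer(block)
                # keep only the inner part; the margin is cut off to hide stitching artifacts
                inner = result[margin:margin + tile, margin:margin + tile]
                out[y:y + tile, x:x + tile] = inner[:min(tile, h - y), :min(tile, w - x)]
        return out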

How to train your own model

To train your own model, you'll need some technical skills, pairs of pictures (input and expected output), and a powerful computer. You might also need quite a lot of space on your hard drive, though you can just remove unnecessary older models if you start running out of space.

Drivers & preparation

You'll need to install Python 3 and the following packages: Pillow, openvino, numpy, torch. For quantization of the model you will also need nncf and sklearn. If I missed anything, it will complain, so just install the packages it mentions too.

If you’re on Windows, you probably have drivers for NPU and dedicated GPU. On Linux, you might need to install NPU drivers before you’ll be able to use it: https://docs.openvino.ai/2024/get-started/configurations/configurations-intel-npu.html .

Moreover, if you want to use an iGPU for training (which might still be significantly faster than on a CPU), you'll probably need to use something like IPEX, which allows PyTorch to use an "XPU" device, which is just your iGPU. It's not tested or recommended, since I personally haven't been able to use it because my Python version was higher than the instructions expect, but the instructions are here: https://pytorch-extension.intel.com/installation?platform=gpu&version=v2.5.10%2Bxpu .
The sanity check for the installation is as follows:
python3 -c "import torch; import intel_extension_for_pytorch as ipex; print(f'Packages versions:'); print(f'Torch version: {torch.__version__}'); print(f'IPEX version: {ipex.__version__}'); print(f'Devices:'); print(f'Torch XPU device count: {torch.xpu.device_count()}'); [print(f'[Device {i}]: {torch.xpu.get_device_properties(i)}') for i in range(torch.xpu.device_count())];"
It should show more than 0 devices with some basic properties.

If you manage to get an XPU device working on your machine, you'll still need to edit the training scripts so they'll be able to use it: https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/getting_started.html (most probably you'll just need to add this line:
import intel_extension_for_pytorch as ipex
to the very top of the script, just underneath "import torch", and use "xpu" as the device name when invoking the script, and it should work). But as I said, the scripts haven't been tested for that.

Dataset

You'll need some pictures to be able to train your model. The pictures must come in pairs, and every pair must contain a sketch (input) and a lineart picture (expected output). The better the quality of the dataset, the better the results.

Before training, it's best to augment the data: that means the pictures are rotated, scaled up or down, and mirrored. Currently the data augmentation script also performs an inversion, with the assumption that training on inverted pictures brings results faster (considering that black means zero means no signal, and we'd like that to be the background, so the models learn the lines, not the background around the lines).
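As a minimal illustration of that kind of augmentation (this is not the project's dataPreparer.py, just a Pillow sketch; the same transforms would of course have to be applied to both the sketch and the ink picture of a pair):

    # minimal augmentation sketch with Pillow: rotations, mirroring, downscaling and inversion
    from PIL import Image, ImageOps

    def augment(path_in, out_prefix):
        img = Image.open(path_in).convert("L")
        variants = []
        for angle in (0, 90, 180, 270):
            rotated = img.rotate(angle, expand=True)
            variants += [rotated, ImageOps.mirror(rotated)]
        for i, v in enumerate(variants):
            scaled = v.resize((max(1, v.width // 2), max(1, v.height // 2)))
            ImageOps.invert(v).save(f"{out_prefix}_{i}.png")        # inverted: lines become the signal
            ImageOps.invert(scaled).save(f"{out_prefix}_{i}_small.png")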

How to use the data augmentation script is explained below in the detailed instruction for the training part.

Here’s the dataset that we used (please read the license carefully if you want to use it): https://files.kde.org/krita/extras/FastSketchCleanupPluginKritaDataset.zip

Choice of model and other parameters

For quick results, use tooSmallConv; if you have more time and resources, typicalDeep might be a better idea. If you have access to a powerful GPU machine, you might try original or originalSmaller, which represent the original description of the model from the SIGGRAPH article by Simo-Sierra 2016, and a smaller version of it.

Use adadelta as the optimizer.

You can use either blackWhite or mse as the loss function; mse is the classic choice, but blackWhite might lead to faster results since it lowers the relative error in the fully white or fully black areas (based on the expected output picture).
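The intuition behind blackWhite can be sketched as a weighted MSE (this is an illustrative weighting only; the exact loss is defined in the repository): pixels whose expected value sits near mid-grey contribute less, so fully black and fully white areas dominate the error.

    # illustrative weighted-MSE loss in PyTorch; the actual blackWhite loss may differ
    import torch

    def black_white_loss(pred, target):
        # weight is 1.0 where the target is fully black or white, 0.5 where it is mid-grey
        weight = 0.5 + (target - 0.5).abs()
        return (weight * (pred - target) ** 2).mean()

    loss = black_white_loss(torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64))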

Training

  1. Clone the repository at https://invent.kde.org/tymond/fast-line-art (at 33869b6)
    git clone https://invent.kde.org/tymond/fast-line-art.git

  2. Then, prepare the folder:

    • Create a new folder for the training.
    • In the folder, run:
      python3 [repository folder]/spawnExperiment.py --path [path to new folder, either relative or absolute] --note "[your personal note about the experiment]"
  3. Prepare data:

    • If you have an existing augmented dataset, put it all in data/training/ and data/verify/, keeping in mind that paired pictures in the ink/ and sketch/ subfolders must have exactly the same names (for example if you have sketch.png and ink.png as data, you need to put one in sketch/ as picture.png and the other in ink/ as picture.png for them to be paired).
    • If you don't have an existing augmented dataset:
      1. Put all your raw data in data/raw/, keeping in mind that paired pictures should have exactly the same names with an added prefix of either ink_ or sketch_ (for example if picture_1.png is the sketch picture and picture_2.png is the ink picture, you need to name them sketch_picture.png and ink_picture.png respectively).
      2. Run the data preparer script:
        python3 [repository folder]/dataPreparer.py -t taskfile.yml
        That will augment the data in the raw directory in order for the training to be more successful.
  4. Edit the taskfile.yml file to your liking. The most important parts you want to change are:

    • model type - code name for the model type, use tinyTinier, tooSmallConv, typicalDeep or tinyNarrowerShallow
    • optimizer - type of optimizer, use adadelta or sgd
    • learning rate - learning rate for sgd if in use
    • loss function - code name for loss function, use mse for mean squared error or blackWhite for a custom loss function based on mse, but a bit smaller for pixels where the target image pixel value is close to 0.5
  5. Run the training code:
    python3 [repository folder]/train.py -t taskfile.yml -d "cpu"

    On Linux, if you want it to run in the background, add "&" at the end. If it runs in the foreground, you can pause the training just by pressing Ctrl+C; if it runs in the background, find the process id (using either the "jobs -l" command or the "ps aux | grep train.py" command; the first number is the process id) and kill it using the "kill [process id]" command. Your results will still be in the folder, and you'll be able to resume the training using the same command.

  6. Convert the model to an openvino model:
    python3 [repository folder]/modelConverter.py -s [size of the input, recommended 256] -t [input model name, from pytorch] -o [openvino model name, must end with .xml]

  7. Place both the .xml and .bin model files in your Krita resource folder (inside pykrita/fast_sketch_cleanup subfolder) alongside other models to use them in the plugin.

Thursday, 19 December 2024

We are happy to announce Kdenlive 24.12. This release focuses on bug fixes, improved stability, and usability enhancements across the board. Numerous crashes and glitches have been addressed, including issues with audio capture, effect zones, high DPI display rendering, and subtitle editing. Proxies, rotoscoping, and project management workflows have been significantly refined, resolving lags, incorrect EXIF orientation handling, and archiving problems. We've also managed to sneak in some nifty little features, such as the ability to resize multiple timeline items, a Shift + Del shortcut to extract clips from the timeline, actions to quickly add Markers/Guides in a specific category, and mixes (same-track transitions) that can be one frame long.

Under the hood, we've dropped support for Qt5 and now require Qt6, alongside updated dependencies (MLT 7.28 and KF 6.3). This release comes with a lot of code cleanups, refactored Whisper settings, and optimized threading and memory management. Additionally, fail-safe measures have been taken to prevent invalid project profiles and script names.

Subtitles

We’ve added support for Advanced SubStation Alpha (ASS) subtitles, a widely used text-based format renowned for its flexibility in creating highly styled and customizable subtitles. ASS subtitles support advanced features such as font family, size, and color; text outlines and shadows; alignment and positioning; scaling and rotation; margins and spacing; and effects, including masking and other enhancements. This feature was developed by Chengkun Chen as part of Google Summer of Code (GSOC).


Subtitle Manager

The new subtitle manager is now integrated with style management and has been divided into four sections: Files, Layers and Content, Style, and Info, which correspond to the four main components of ASS subtitles.

Files – create, import and export subtitles

Layers and Content – create/remove subtitle tracks and apply styling

Styles – create and manage styles

Info – displays information about subtitles

Subtitle Style Editor

The new and powerful Subtitle Style Editor allows you to control all the styling capabilities of the ASS format.

Animated Subtitles

The ASS format supports three types of effects: Banner, where the text scrolls sideways across the screen; Scroll, where the text moves vertically; and Karaoke, where each word is highlighted in sync with the audio.

Currently, only the Banner and Scroll effects are accessible through the user interface, but additional styling, including Karaoke effects, can be applied using ASS tags.
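For example, a karaoke-style event can be written directly with ASS override tags; the tiny Python sketch below appends such a line to a subtitle file (the timings, style name and syllable splits are made up, and the Dialogue field layout follows the common ASS convention):

    # appending a karaoke-tagged ASS event; \k durations are in centiseconds (made-up example)
    line = ("Dialogue: 0,0:00:01.00,0:00:05.00,Default,,0,0,0,,"
            "{\\k50}Ka{\\k50}ra{\\k50}o{\\k50}ke")
    with open("subtitles.ass", "a", encoding="utf-8") as f:
        f.write(line + "\n")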

Speech-to-Text

We've polished the Speech-to-Text features, ensuring a smoother and more reliable experience. Issues with Seamless installation, GPU translation and threading have been fixed. We've also resolved issues with the display of the Vosk, Whisper and Seamless model folder sizes on Windows, added the ability to update all virtual environment packages, and updated to the latest version of Whisper. Lastly, the Whisper settings interface has been refactored.

Effects

With this version, we complete the final task of our fundraiser: built-in effects and a redesigned effects interface. Rendering of keyframe types like Bounce, Circular, and Exponential has been improved, alongside fixes for zone-based effects, rotoscoping lag, shape filter rendering, improved precision for time remapping, motion tracker models, and previous/next seeking in the monitor. It is also now possible to have single-frame mixes (same-track transitions).

Interface redesign

The new Effect Stack redesign enhances usability with a clearer organization of keyframeable and non-keyframeable parameters, improved layout consistency, and a more compact and cleaner look. We've also added info buttons in effect headers for quick access to documentation.

Built-in Effects

To make your workflow much more fluid, the new effects panel gives direct access to effect parameters, allowing you to quickly and easily adjust them. The current built-in effects are Transform and Flip for video clips, and Volume for audio clips. Built-in effects can be enabled/disabled in the settings.

New Effects

As usual there is always room for some eye candy, so we've added two color correction effects, HSL Primaries and HSL Range, as well as GPS effects (the images below display Distance, Altitude and Speed among many other values).

Other Highlights

  • Fix audio capture issues
  • Added Shift + Del shortcut to extract clip from timeline
  • Fix clip monitor history menu not showing up on audio clips
  • Fix spacer tool leaving a few frames after last clip
  • Implement resizing multiple timeline items
  • Fix Pexels Videos provider
  • Fix Alt+click to loop between clips using an effect in project monitor
  • Titler: ensure only plain text can be pasted
  • Titler: added support for tabulations
  • Add Actions to quickly add Marker/Guides in a specific category

For the full changelog continue reading on kdenlive.org.