Hello Planet KDE, I am Matthieu Gallien. I am a software developer mainly working with C++ and PHP. I am passionate about free software in general and KDE in particular.
I have been working on a music player for some time. I have decided to try to implement a design made by KDE VDG and especially Andrew Lake (Design for a Music Player). I had this idea after I saw this article from Thomas Pfeiffer (Phoronix offers some criticism of KDE software, and this is how KDE deals with it).
I would like to thank them for being a source of inspiration and the high quality of their work.
Elisa uses the Baloo indexer to pick up all music indexed by it, without requiring any configuration from the user. This also means that support for other indexers or platforms is not yet implemented.
In addition, KDE VDG have done a lot of design work that I am trying to follow (KDE Visual Design Group/Music Player).
You can browse music by albums or by artists. More navigation possibilities will be added later (Genre, All tracks, …).
The playlist is always shown, to provide feedback when tracks are inserted into it. You can also display it maximized; in that case, extra information related to the currently playing track is shown.
It is already usable, but I still need to do a first release. There is at least one problem that needs to be fixed before a release: if you move tracks within the directories indexed by Baloo, they will disappear from Elisa until you restart the application.
It is using some KDE Frameworks 5 libraries (Baloo, KFileMetaData, …), Qt Quick Controls 1 and QtMultimedia. Writing a music player on top of those foundations has been a real pleasure.
Some design points are still open:
I welcome all kinds of feedback on it; bug reports in particular would be much appreciated.
My available time being limited, any help would also be very much appreciated.
Development is happening in KDE infrastructure and can be followed from the workboard in KDE’s Phabricator (Elisa Workboard).
I would also like to thank the KDE community and especially all those that have helped me during the development of Elisa.
Since I was a kid I had dreamed about being a tech guy. I remember searching through my father's trash looking for broken circuit boards back in 1988. He had a notebook computer, a Toshiba T1000 with an amazing ~5 MHz CPU and 512 KB of RAM, one of the best machines of the time. He tried to teach me Pascal back then, but I had better things to do, like sticking my foot in my mouth, or looking for broken circuits, for I had learned on TV that I could just plug random electrical circuits and cables together and I would have a Super Sentai robot for myself, and I would fight crime dressed like Jaspion.
From there I forgot about computers for a while. I lived on a farm and had 18 dogs and 22 cats to play with all day long; sometimes even a duck or a goose would appear, to my panic, because my dogs liked to hunt things they thought were good for eating. My second computer was a ~proper one?~: it was basically a keyboard with a cassette tape jack that would load applications, and a copy of Windows 1.0. I have no pictures to prove this, nor do I remember the names of the technologies, so I cannot actually say much about it.
Around 1998 I started to use IRC, on the now defunct BrasIRC network, and I met many people who would help me in my desire to program; a few of them are actually on the same path as me nowadays. I discovered that the IRC clients of the time (for Windows, as my dad didn't allow me to put Linux on the computer since I had no experience with it and he feared I could break stuff) allowed scripting and programming from within; I got together with a few fellow IRC mates (like Arx Cruz, now a Red Hat hacker) and we started to create our own flavor of IRC.
More years passed and I discovered Sphere, an Ultima Online emulator (I have no idea how the project is doing today, or whether it still exists), for one of the best video games ever invented. I have no shame in saying that I preferred Ultima Online to girls back then; girls were nice, but they didn't have over 70 skills that you could master using macro applications that could also teach you how to program (and I was really awkward towards, you know, people).
From Ultima Online I managed to make a bit of money with the knowledge gained from the Sphere programming language, programming little Ultima Online servers for LAN houses and LAN parties back in Salvador, and I managed to buy my first computer, an AMD K6, on which I was finally able to install Linux. My first flavor was Conectiva 4, a Brazilian distro that later joined Mandrake and became Mandriva.
From there I really wanted to be a programmer, like, a real one, who could take ideas and transform them into code, not the ones shown in movies who just drag things around and need a Visual Basic GUI to find an IP. I did the UFMG C course, now completely outdated (the compilers it uses don't exist anymore, or are completely unusable, like Borland Turbo C).
From the UFMG C course to Qt C++ was just a step, and I started to do everything at university with it, even when that meant standing up to my teacher: "This is OO class, Tomaz, you need to use Java!", "But teacher, C++ is also OO", and things like that. I remember that I had a terrible time in Graph Theory class; too abstract. Nodes and edges are okay, but matrices representing them displayed in the monochrome letters of a terminal? I didn't have the internal RAM to parse those things in my head and transform them into the data I was trying to analyze; the computer should do that for me.
And I talked with Annma, explained to her what this application should do, and she agreed that it could be a good addition to KDE. And then I joined KDE Edu, my first ever contribution to a project. From that moment I had won at life, all my dreams were realized, and I could sleep in peace.
It’s time for a long-overdue blogpost about the status of Tanglu. Tanglu is a Debian derivative, started in early 2013 when the systemd debate at Debian was still hot. It was formed by a few people wanting to create a Debian derivative for workstations with a time-based release schedule, using and showcasing new technologies (which include systemd, but also bundling systems and other things), built in the open with a community, using infrastructure similar to Debian’s. Tanglu is designed explicitly to complement Debian, not to compete with it on all devices.
Tanglu has achieved a lot of great things. We were the first Debian derivative to adopt systemd, and with the help of our contributors we could kill a few nasty issues affecting it and Debian before it became the default in Debian Jessie. We also started to use the Calamares installer relatively early, bringing a modern installation experience in addition to the traditional debian-installer. We performed the usrmerge early, uncovering a few more issues which were fed back into Debian to be resolved (while workarounds were added to Tanglu). We also briefly explored switching from initramfs-tools to Dracut, but this release goal was dropped due to issues (though it might be revived later). A lot of other less-impactful changes happened as well, borrowing a lot of useful ideas and code from Ubuntu (kudos to them!).
On the infrastructure side, we set up the Debian Archive Kit (dak), managing to find a couple of issues (mostly hardcoded assumptions about Debian) and reporting them back to make using dak easier for distributions which aren’t Debian. We explored using fedmsg for our infrastructure, went through a long and painful iteration of build systems (buildbot -> Jenkins -> Debile) before finally settling on Debile, and added a set of our own custom tools to collect archive QA information and present it to our developers in an easy-to-digest way. Except for wanna-build, Tanglu is hosting an almost-complete clone of the basic Debian archive management tools.
During the past year, however, the project’s progress slowed down significantly. For this, I am mostly to blame. One of the biggest challenges for a young project is to attract new developers and members and keep them engaged. A lot of the people coming to Tanglu and interested in contributing were unfortunately not packagers, and sometimes not developers at all, and we didn’t have the manpower to individually mentor these people and teach them the necessary skills. People asking for tasks were usually asked where their interests were and what they would like to do, in order to give them a useful task. This sounds great in principle, but in practice it is actually not very helpful; a curated list of “junior jobs” is a much better starting point. We also invested almost zero time in making our project known and creating the necessary “buzz” and excitement that’s actually needed to sustain a project like this. Doing more in the advertisement domain and the “help newcomers” area is a high-priority issue in the Tanglu bugtracker, which to this day is still open. Doing good alone isn’t enough; talking about it is of crucial importance. That is something I knew about, but whose impact I didn’t realize for quite a while. As strange as it sounds, investing in the tech alone isn’t enough: community building is of equal importance.
Regardless of that, Tanglu has members working on the project, but way too few to manage a project of this magnitude (getting package transitions migrated alone is a large task requiring quite some time while at the same time being incredibly boring :P). A lot of our current developers can only invest small amounts of time into the project because they have a lot of other projects as well.
The other issue why Tanglu has problems is too much stuff being centralized on myself. That is a problem I wanted to rectify for a long time, but as soon as a task wasn’t done in Tanglu because no people were available to do it, I completed it. This essentially increased the project’s dependency on me as single person, giving it a really low bus factor. It not only centralizes power in one person (which actually isn’t a problem as long as that person is available enough to perform tasks if asked for), it also centralizes knowledge on how to run services and how to do things. And if you want to give up power, people will need the knowledge on how to perform the specific task first (which they will never gain if there’s always that one guy doing it). I still haven’t found a great way to solve this – it’s a problem that essentially kills itself as soon as the project is big enough, but until then the only way to counter it slightly is to write lots of documentation.
Last year I had way less time to work on Tanglu than the project deserves. I also started to work for Purism on their PureOS Debian derivative (which is heavily influenced by some of the choices we made for Tanglu, but with a different focus; that’s probably something for another blogpost). A lot of the stuff I do for Purism duplicates the work I do on Tanglu, and also takes away time I have for the project. Additionally, I need to invest a lot more time into other projects such as AppStream, and a lot of random other stuff that just needs continuous maintenance and discussion (AppStream especially eats up a lot of time since it became really popular in a lot of places). There is also my MSc thesis in neuroscience that requires attention (and is actually in focus most of the time). All in all, I can’t split myself, and KDE’s cloning machine remains broken, so I can’t even use that ;-). In terms of projects there is also a personal hard limit on how much stuff I can handle, and exceeding it long-term is not very healthy: in those cases I try to satisfy all projects and in the end do not focus enough on any of them, which leaves me with a lot of half-baked stuff (which helps nobody, and most importantly makes me lose the fun, energy and interest to work on it).
This sounded overly negative, so where does it leave Tanglu? Fact is, I cannot commit the crazy amounts of time to it that I did in 2013. But I love the project, and I actually do have some time I can put into it. My work at Purism overlaps with Tanglu, so Tanglu can actually benefit from the software I develop for them, maybe creating a synergy effect between PureOS and Tanglu. Tanglu is also important to me as a testing environment for future ideas (be it in infrastructure or in the “make bundling nice!” department).
So, what actually is the way forward? First, maybe I have the chance to find a few people willing to work on tasks in Tanglu. It’s a fun project, and I learned a lot while working on it. Tanglu also possesses some unique properties few other Debian derivatives have, like being completely built from source (allowing things like swapping core components or compiling with more hardening flags, switching to newer KDE Plasma and GNOME faster, etc.). Second, if we do not have enough manpower, I think converting Tanglu into a rolling-release distribution might be the only viable way to keep the project running. A rolling-release scheme creates much less effort for us than making releases (especially time-based ones!). That way, users will have a constantly updated and secure Tanglu system, with machines doing most of the background work.
If it turns out that absolutely nothing works and we can’t attract new people to help with Tanglu, it would mean that there generally isn’t much interest from the developer or user side in a project like this, so shutting it down or scaling it down dramatically would be the only option. But I do not think that this is the case, and I believe that having Tanglu around is important. I also have some interesting plans for it which will be fun to implement for testing.
The only thing that had to stop is leaving our users in the dark on what is happening.
Sorry for the long post, but there are some subjects which are worth writing more than 140 characters about.
It looks like I will be at Debconf this year as well, so you can also catch me there! I might even talk about PureOS/Tanglu infrastructure at the conference.
[Screenshots: @varlesh's work; Latte YouTube presentation; robust multi-screen support; Unity layout; Plasma layout; my favourite ;); Plasma look with an Always Visible Latte Dock instance; KWin blur effect; Latte Dock inside MATE]
(Read this listening to “Gostava Tanto de Voce”, from Tim Maia)
I moved from the southern Brazilian lands, where history was made and some amazing things were invented (like the airplane and the wristwatch) and stories about Ghosts, Ghouls, Sacis, Caiporas and Mba’e Tatas were told to children, trying to scare them to sleep with fear of the Black-Faced Bull (which will catch and eat kids that fear him), and moved to Europe, where history was made and some amazing things were invented (like Doctor Who and the English accent) and stories about Ghosts, Ghouls, Dragons, Hobgoblins and Werewolves were told to children, trying to scare them to sleep with fear of Baba Yaga (who will catch and eat kids that enter her house).
I’ve been living here for the past six months, and I wanted to write earlier, but you guys know how things are; things like that can simply sit in your head for quite a while before you actually do something. So, if anyone from the KDE community would like to get together for something in Munich, please get in touch. I live near the Wörthsee area, quite far from Munich (about 40 minutes by S-Bahn), but I work in the heart of the city.
Some new features are:
Also in the works is an AppImage version of Simon for easy testing. We hope to deliver one for the Beta release coming soon.
Known issues with Simon 0.4.80 are:
We hope to fix these bugs and look forward to your feedback and bug reports and maybe to see you at the next Simon IRC meeting: Tuesday, 4th of April, at 10pm (UTC+2) in #kde-accessibility on freenode.net.
Simon is an open source speech recognition program that can replace your mouse and keyboard. The system is designed to be as flexible as possible and will work with any language or dialect. For more information take a look at the Simon homepage.
Today I want to share my thoughts about the general hype around micro-services.
The first objection one can raise against this approach is that it does not really solve the problem of producing maintainable code, because the same principles can be found in a lot of other paradigms that did not prevent bad software from being produced.
I believe that the turning point of the micro-services movement is that it is compatible with the devops philosophy.
With the combination of micro-services and devops you get software that has reasonably well-defined limits and whose management is assigned to the people who developed it.
This combination avoids development shortcuts that make management more difficult (maintenance is a big deal).
It also addresses one of the great open problems in IT: documentation.
It is true that it cannot force us to produce documentation, but at least the people who run the code are exactly the people who wrote it, and I can assume that whoever writes the code knows how it is supposed to work.
It is now possible to build applications with performance and functionality unimaginable before, all thanks to the fact that each component can be implemented, evolved and deployed with the best life cycle we are able to achieve, without limiting the entire ecosystem.
Thanks for reading, see you next time.
I have just noticed that some old articles concerning Now Dock appeared on Planet KDE; so sorry for this!! I am preparing the announcement for the new Latte Dock and I touched these old Now Dock articles a bit in order to archive them in the future.
This was a mistake on my side; please don't give them any attention, as Now Dock is considered deprecated.
Thanks a lot.
[Screenshots: Now Dock Panel v0.5.0 and v0.4; window previews; task progress (copying a file); hovering widgets; opening the app launcher; panel configuration; hovering the Now Dock plasmoid and panel, including above a window]
Ever since the port to Qt5/KF5 in 2015, Kdenlive has seen increasing momentum in developing its full potential as a stable and reliable video editing tool which the FLOSS community can use to create content and democratize communication. In 2016 the project saw a redesign of its visual identity (logo, website), the reintroduction of some much-requested tools like rotoscoping, and a Windows port. During these couple of years we’ve seen a boom in the size of the community.
We’ve had some highs and lows during this process and are now ready to go a step further and bring professional-grade features to the FLOSS video editing world. To make this happen faster we would love to see new contributors jumping in. These are some parts that you can contribute to:
Since the beginning of the year, we have been working on a big refactoring/rewrite of some of the core parts of Kdenlive. Being more than 10 years old, some parts of our code had become messy and impossible to maintain. Not to mention the difficulty in adding new features.
Part of the process involves improving the architecture of the code, adding some tests, and switching the timeline code from QGraphicsView to the more recent QML framework. This should hopefully improve stability, allow further developments, and give more flexibility in the display and user interaction of the timeline.
You can see a preview of some of the new QML timeline features in the above video.
We plan to release the refactoring branch for the Kdenlive 17.08 release cycle.
Our initial Alpha release for Windows has been a success. Some people have switched to editing full-time in Kdenlive and reviewers have praised it. We need developers to help find and fix some Windows-specific bugs and bring Kdenlive on par with its GNU/Linux counterpart. One current example is a bug that prevents rendering JPEG images.
Due to the refactoring efforts, the 17.04 release cycle, which is right around the corner, will include code cleanup and some welcome bugfixes but no major changes. More details about this release will follow soon.
Our next monthly café will be held on Tuesday, the 11th of April 2017, at 21:00 (CET) on irc.freenode.net, channel #kdenlive. Everyone is welcome to join; this is a great opportunity to get in touch with the dev team, which can otherwise be contacted through a good old mailing list.
On a side note, the Frei0r project, which powers many Kdenlive effects, just released version 1.6.0, which brings some new filters as well as crash fixes, so all packagers are encouraged to upgrade.
All in all 2017 promises to be an exciting year for Kdenlive, join us!
In Qt 5.9 it is now possible to render Qt Quick applications with OpenVG when using hardware that supports it. This is made possible by a new scene graph adaptation that uses EGL and OpenVG to render Qt Quick scenes. When using Qt for Device Creation, it means that it is now possible to run with graphics hardware acceleration on some devices where today only software rendering is available.
OpenVG is an API for hardware-accelerated 2D vector graphics. The API exposes the ability to draw and shade paths and images in an accelerated manner. The OpenVG 1.1 standard was developed by the Khronos Group and is implemented by vector GPU vendors. The reason for the tone of sarcasm in my sub-heading, and why I am sure there will be more than a few readers eye-rolling, is that OpenVG has been around for quite some time. The latest update of the OpenVG 1.1 standard was released in 2008. In addition, the Khronos working group for OpenVG has since disbanded, likely meaning there will not be any further updates.
This is also not the first time that Qt has supported OpenVG in one way or another. In Qt 4 there was an OpenVG paint engine that enabled QPainter commands to be rendered using the OpenVG API. I did not wish to revive that code, but rather chose to limit usage of the OpenVG API to a smaller subset to accelerate the rendering of Qt Quick applications.
Qt runs on many embedded devices, but getting the most benefit out of Qt Quick has so far required at least OpenGL 2.0 support. At the same time, customers want to use Qt Quick on their low-end embedded devices lacking OpenGL-capable GPUs. So first we introduced the Software adaptation, previously known as the Qt Quick 2D Renderer. See our previous posts here and here. There is, however, an in-between where hardware has a GPU supporting OpenVG 1.1 but not OpenGL 2.0. OpenVG is a good match for accelerating the rendering of Qt Quick because most features can be enabled, leading to better performance on hardware that has an OpenVG-capable GPU.
A few examples of system-on-chips with this configuration are the NXP i.MX6 SoloLite and Vybrid VF5xxR chips, which both use the GC355 vector GPU, enabling OpenVG. The OpenVG working group may no longer be actively working on the standard itself, but SoC vendors are still releasing hardware that supports OpenVG.
The expected behavior for the OpenVG adaptation is that it fills the space between OpenGL and Software rendering. On hardware that supports both OpenGL and OpenVG, expect the OpenGL renderer to outperform OpenVG, as OpenGL gives more opportunities for optimisation. If you test the OpenVG adaptation on a Raspberry Pi you will see that the default OpenGL renderer can do significantly more before dropping below 60 FPS.
To use the OpenVG backend you will need to build Qt with support for it. In Qt 5.9 we have re-added a test for OpenVG support which will enable the feature in Qt Quick. Once you have a suitable build of Qt deployed to your target device you will need to run your application with a platform plugin that supports EGL (EGLFS or MinimalEGL). Then if you set the environment variable QT_QUICK_BACKEND=openvg your Qt Quick applications will create OpenVG capable EGL surfaces, and render using OpenVG commands. For more information, see the scenegraph adaptation section at the snapshot documentation site.
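As a minimal sketch of the steps above (the choice of EGLFS is an assumption about a typical device; adjust to your platform plugin):

```shell
# Select an EGL-capable platform plugin and the OpenVG scene graph backend
# before launching a Qt Quick application on the target device.
export QT_QPA_PLATFORM=eglfs       # or minimalegl, depending on the device
export QT_QUICK_BACKEND=openvg     # pick the OpenVG adaptation
export QSG_INFO=1                  # ask the scene graph to log backend info at startup
```

With QSG_INFO set, the scene graph logs information about the backend it picked when the application starts, which is a quick way to verify the switch actually took effect.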
Like the Software adaptation, the OpenVG adaptation comes with some limitations due to the lack of 3D and shader effects. It is not possible to use QML components that depend on OpenGL or shader effects directly. That means that Qt Quick modules like Particles and Graphical Effects are not available. If your application works with the Software adaptation, it will work even better with the OpenVG backend on hardware capable of using it.
The EGLFS platform plugin also introduces some limitations. When using the EGLFS platform with Qt for Device Creation you may have become accustomed to having a mouse cursor and support for multiple child windows. Despite the platform plugin’s name (EGL Fullscreen), it is a bit naughty and does use OpenGL for a few things, specifically composing multiple windows and the mouse cursor. If you use EGLFS on a platform without OpenGL, these features are not available. In a shipping device this usually isn’t an issue, but it can be annoying if you don’t expect it during the development phase. Funnily enough, we had a very similar limitation with the OpenVG paint engine in Qt 4 with QWS.
In the embedded space there is a need to make use of any available resources. This adaptation is just one more way that Qt is helping fill that need. I hope that this adaptation will make some of your device creation efforts easier so that you can spend more time making cool products with Qt. Thanks for reading and keep on hacking.
We’re working like crazy on the next versions of Krita — 3.1.3 and 4.0. Krita 3.1.3 will be a stable bugfix release, 4.0 will have the vector work and the python scripting. This week we’ve prepared the first 3.1.3 alpha builds for testing! The final release of 3.1.3 is planned for end of April.
We’re still working on fixing more bugs for the final 3.1.3 release, so please test these builds, and if you find an issue, check whether it’s already in the bug tracker, and if not, report it!
We are still struggling with Intel’s GPU drivers; recent Windows updates seem to have broken Krita’s OpenGL canvas on some systems, and since we don’t have access to a broken system, we cannot work around the issue.
Note for Windows users: if you encounter crashes, please follow these instructions to use the debug symbols so we can figure out where Krita crashes.
A snap image for the Ubuntu App Store will be available soon. You can also use the Krita Lime PPA to install Krita 3.1.3-alpha.2 on Ubuntu and derivatives.
For all downloads:
Here’s Nathan with a piece of good news:
After months of work, I’m glad to announce that Make Professional Painterly Game Art with Krita is out! It is the first Game Art training for your favourite digital painting program.
In this course, you’ll learn:
1. The techniques professionals use to make beautiful sprites
2. How to create characters, background and even simple UI
3. How to build smart, reusable assets
With the pro and premium versions, you’ll also get the opportunity to improve your art fundamentals, become more efficient with Krita, and build a detailed game mockup for your portfolio.
The course page has free sample tutorials and the answers to all of your questions.
We are happy to announce the release of Qt Creator 4.3 Beta!
Qt Quick Designer now integrates a QML code editor. This allows you to use views like the Properties editor and the Navigator for text-based editing as well. When you use the split view, you directly see the effects of what you are doing. The graphical editor got support for adding items and tab bars to stacked containers like StackedLayout and SwipeView, a tool bar with common actions, and support for HiDPI displays.
When you profile your Qt Quick application with the QML Profiler, you see performance information now also directly in the QML code editor. And the profiler itself received many performance improvements as well.
If you use Qt Creator with CMake 3.7 or later, we now use the server-mode that was added to CMake 3.7 for the benefit of IDEs. It provides much better information about the project structure, include paths, and more, than what we could parse from the generators and Makefile before. As a result you also see products and targets in the project tree and can build them individually.
Regardless of CMake version we added header files to the project tree, even if they are not listed explicitly in the project files. You now can also import existing builds of a CMake project, like we already provide for QMake based projects, which sets up a kit with the information found in the CMake cache from the build, and registers new toolchains and Qt versions as needed.
Sometimes code is interpreted differently in different contexts. A file can be used by different (sub-)projects with different defines, or be included in the context of C, C++, Objective-C, or Objective-C++. You already could choose a different project in the dialog behind the little # in the editor toolbar. We moved this to a separate dropdown menu in the editor toolbar, and added the choice of language as well.
If you are up for a bit of experimentation, enable the ClangRefactoring plugin. It adds preliminary support for clang-query to Advanced Find and uses Clang for the local renaming refactoring.
If you use Qt Creator for iOS development, you can now choose the developer team and provisioning profile used for signing. This overrides the default that QMake chooses and any settings you have in your project files.
Unfortunately the newest version 25.3.1 of the Android SDK does not work with current Qt and Qt Creator versions. Some essential tools that we relied on have changed. We are working on fixing the issue. You can track it through QTCREATORBUG-17814. For the time being please stay at Android SDK 25.2.5.
The CDB debugging support that we ship with our packages now uses a Python based pretty printing backend. That has multiple advantages. The debugger starts much faster, and the unification of pretty printing between GDB, LLDB and CDB brings more and better pretty printers to Qt Creator’s CDB support.
There have been many more improvements, which are described in more detail in our change log.
The opensource version is available on the Qt download page, and you find commercially licensed packages on the Qt Account Portal. Please post issues in our bug tracker. You can also find us on IRC on #qt-creator on chat.freenode.net, and on the Qt Creator mailing list.
Note: We now provide 64-bit offline installers for Windows.
This is a neat little trick that’s been making the rounds, and after seeing success with several people on Reddit I thought it was worth posting somewhere more visible. This will look at removing screen tearing (often entirely) when using Nvidia Proprietary graphics on the Plasma Desktop.
First, you should only do this if…
The trick is enabling a feature called “Force Composition Pipeline”, or “Force Full Composition Pipeline”. What this does is essentially a driver-level vsync, but I haven’t found particularly good documentation on the feature. Most instructions you can find online will instruct you how to do this manually via config files, but I’ll explain how to do it via GUI as less can go wrong, and the GUI is there to be used. You can do a search online and easily find several manual sets of instructions if the GUI isn’t your thing.
If you still experience tearing, you may need to go and “Force Full Composition Pipeline”, which is a more extreme version of the feature.
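For reference, the non-GUI route boils down to a single driver option. This is only a sketch of what those manual instructions typically look like: at runtime you can run `nvidia-settings --assign CurrentMetaMode="nvidia-auto-select +0+0 {ForceCompositionPipeline=On}"`, and the persistent xorg.conf fragment looks roughly like this (the Identifier and the mode string are placeholders for your own setup):

```
Section "Screen"
    Identifier "Screen0"
    # Swap in ForceFullCompositionPipeline=On for the more extreme variant
    Option "metamodes" "nvidia-auto-select +0+0 {ForceCompositionPipeline=On}"
EndSection
```

As with the GUI route, this may need re-applying after you change or rotate displays.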
As a follow-up: if the composition pipeline is on and working, the Nvidia driver is essentially providing its own flavour of vsync. You’ll likely want to turn off KWin’s vsync, otherwise you may experience stutter in several situations, where it essentially halves your potential frame rate. This mostly applies to games, and possibly video, and I only recommend this step if you see stutter too.
Of course, there are some “gotchas” to keep in mind!
First: if you change your displays, rotate them, or anything else, you may experience tearing again, especially if you disabled KWin’s Tearing Prevention. You’ll need to go back in and re-apply the settings.
Second: you may see a performance impact in games. I personally haven’t, but several articles on this subject mention it as a drawback of the feature. Additionally, it’s not something you can easily toggle on and off if your gaming is affected.
Third: on laptops with Nvidia PRIME, you may have difficulty enabling this feature. If so, you may want to leave KWin’s Tearing Prevention alone in case the machine switches to Intel graphics and begins tearing. I’m not an expert on this and cannot test with a PRIME-enabled machine, so your mileage may vary.
Lastly; these are instructions from someone who doesn’t know a huge amount about drivers and display servers. For all I know this heats up your GPU to 300 degrees and was meant to roast marshmallows. Proceed with caution.
Special thanks to “AbstractOperator” and “cristianadam” for letting me know this works on multiple monitors, and to cristianadam for pointing out this YouTube video; play it (ideally full-screen) to test for and spot tearing.
Quick update: Ishiiruka (fork of the Wii/GameCube emulator Dolphin for those who missed my last post) is now available on openSUSE Leap as well.
In addition to that, all openSUSE builds now build against shared wxWidgets 3.1 instead of statically linking the included one.
I’ve been trying macro photography and using the depth of field to make the subject of my photos stand out more from the background. This photo of a parrotfish shows promising results beyond “blurry fish butt” quality. I’ll definitely use this technique more often in the future, especially for colorful fish with colorful coral in the background.
My nickname is Dolly, I am 11 years old, I live in Cannock, Staffordshire, England. I am at Secondary school, and at the weekends I attend drama, dance and singing lessons, I like drawing and recently started using the Krita app.
My dad and my friend told me about it.
I draw on paper, and I like Krita more than paper art as there’s a lot more colours instantly available than when I do paper art.
I mostly draw my original character (called Phantom), I draw animals, trees and stars too.
I think choosing the colour is easy, it’s really good. I find getting the right brush size a little difficult due to the scrolling needed to select the brush size.
The thing most fun for me is colouring in my pictures as there is a great range of colour available, far more than in my pencil case.
I think Krita is almost perfect the way it is at the moment however if the brush selection expanded automatically instead of having to scroll through it would be better for me.
I can, I have attached some of my favourites that I have done for my friends.
I usually start with a standard baseline made up of a circle for the face and the ears, then I normally add the hair and the other features (eyes, nose and mouth), and finally colour and shade and include any accessories.
I really enjoy Krita, I think it’s one of the best drawing programs there is!
I am extremely pleased to have confirmed the entire speaker line-up for foss north 2017. This will be a really good year!
Trying to put together something like this is really hard – you want the best speakers, but you also want a mix of local and international, various technologies, various viewpoints and much, much more. For 2017 we will have open hardware and open software, KDE and Gnome, web and embedded, tech talks and processes, and so on.
You may have heard about Dolphin, not our file manager but the GameCube and Wii emulator of the same name. What you may not have heard of is Ishiiruka, a fork of Dolphin that prioritizes performance over emulation accuracy – and clean code if comments by an upstream Dolphin author on Reddit are to be believed.
Although Ishiiruka began as a reaction to the removal of the Direct3D 9 renderer in the Windows version of Dolphin (which is probably why the Linux community mostly ignored it), it also began to tackle other performance issues such as “micro stuttering”.
Recently the Git master branch of Ishiiruka shipped compilation fixes for Linux, so I decided to dust off my old dolphin-emu.spec file and give it a try (I’m hardly an expert packager). So after some dabbling I succeeded. For now only Fedora 24, Fedora 25, and openSUSE Tumbleweed are supported. The packages are available from https://software.opensuse.org/package/ishiiruka-dolphin-unstable.
openSUSE Leap requires some workarounds because it defaults to GCC 4; I plan to look into that later. Once Tino creates a new Stable branch that incorporates the Linux fixes, I’ll post it under https://software.opensuse.org/package/ishiiruka-dolphin.
If any of you are interested in Arch, Debian, Ubuntu, … packages (anything supported by OBS), I’ll gladly accept Submit Requests for PKGBUILD etc. files at https://build.opensuse.org/project/show/home:KAMiKAZOW:Emulators.
Hi all, I have an awesome laptop I bought from my son, a hardcore gamer. So used, but also very beefy and well-cared-for. Lately, however, it has begun to freeze, by which I mean: the screen is not updated, and no keyboard inputs are accepted. So I can't even REISUB; the only cure is the power button.
I like to leave my laptop running overnight for a few reasons -- to get IRC posts while I sleep, to serve *ubuntu ISO torrents, and to run Folding@Home.
Attempting to cure the freezing, I've updated my graphics driver, rolled back to an older kernel, removed my beloved Folding@Home application, turned on the fan overnight, all to no avail. After adding lm-sensors and such, it didn't seem likely to be overheating, but I'd like to be sure about that.
Lately I turned off screen dimming at night and left a konsole window on the desktop running `top`. This morning I found a freeze again, with nothing apparent in the top readout:
KDE.org is quite possibly one of the largest websites of any desktop-oriented open-source project, extending beyond the desktop into applications, wikis, guides, and much more. The amount of content is dizzying, and indeed a huge chunk of it is about as old as the mascot Kandalf – figuratively and literally.
The KDE.org user-facing design “Aether” is live and various kinks have been worked out, but one fact is glaringly obvious: we’ve made the layers of age look better by adding yet another layer. Ultimately the real fix is migrating the site to Drupal, so I figured this post would cover some of the thoughts and progress behind the ongoing work.
Right now work is underway on porting the Aether theme to Drupal 8; ideally it’ll be a “better than perfect” port, with Drupal optimizations, better use of Bootstrap 4, and general refinements. Additionally, I’m preparing a “Neverland-style” template for those planning to use Aether on their KDE-related project sites, but it’s more of a side project until the Drupal theme lands. Recently the theme was changed to use Bootstrap’s Barrio base theme, which has been a very pleasant decision, as we get much more “out of the box”. It does require a Bootstrap library module which allows local or CDN-based Bootstrap installations, and while at first I was asking “why can’t a theme just be self-contained?”, now I understand the logic: Bootstrap is popular, multiple themes use it, and this keeps it all up-to-date and can be updated itself. I do think Drupal should have some rudimentary package management that says “hey, we also need to download this”, but it’s easy enough to install separately.
If you have a project website looking to port to Aether, I would first advise simply waiting until you can consider moving your page to the main Drupal installation when it eventually goes live; in my perfect world I imagine Drupal unifying a great amount of disparate content, which then gets free updates. Additionally, consider hitting up the KDE-www mailing list and asking to help out on content, or placing feature requests for front-end UI elements. While I’m currently lurking on the mailing list, I’ll try to provide whatever info I can. As an aside, I had some Telegram confusion with some people looking to contribute and concerns from administrators, so please simply defer to the mailing list.
In terms of the Aether theme, I will be posting the basic theme on our Git repo; when it goes up, if you have Bootstrap and Twig experience (any at all is more than I had when I started), please consider contributing, especially if you maintain a page and would migrate to Drupal if it had the appropriate feature set. I will post a tiny follow-up when the repo is up.
I’m sorry that $feature behaves differently to how you expect it. But it’s the way it is and that’s by design. The feature works exactly as it’s supposed to. I’m sorry, this won’t be changed.
With decisions like that, no wonder KDE is still a broken mess.
I wonder why the hell I even bother reporting issues. Bugs are by design these days.
Have a nice life.
A week ago I received my Raspberry Pi Zero W to play a bit with an IoT device. The specs of this small computer are the following:
But the interesting part comes with the connectivity:
And especially from one of the hidden features, which allows one to use it as a headless device and connect via SSH over USB by adding the following line to config.txt:
And modifying the file cmdline.txt to add:
Remember to create a file called ssh to enable SSH access to your Raspberry Pi. There are plenty of tutorials on the Internet showing this!
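The exact lines aren’t reproduced above, so for completeness, here is what the commonly documented USB-gadget (Ethernet over USB) setup for Raspbian looks like; treat these as the standard documented values rather than a transcript of my own files:

```
# /boot/config.txt — append at the end:
dtoverlay=dwc2

# /boot/cmdline.txt — insert after "rootwait", on the single existing line:
modules-load=dwc2,g_ether
```

With this in place, the Pi typically shows up as a USB network device on the host, and can usually be reached as raspberrypi.local if the host resolves mDNS names.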
One of the use cases which comes to my mind using this device and this feature is being able to create portable presentations and show them on any computer without the need of installing new software.
For the presentation, I used the qml-presentation-system (link).
More use cases could be:
Please comment if you have other ideas or use cases.
Today the Kubuntu team is happy to announce that Kubuntu Zesty Zapus (17.04) Beta 2 is released. With this Beta 2 pre-release, you can see and test what we are preparing for 17.04, which we will release on April 13, 2017.
NOTE: This is Beta 2 Release. Kubuntu Beta Releases are NOT recommended for:
* Regular users who are not aware of pre-release issues
* Anyone who needs a stable system
* Anyone uncomfortable running a possibly frequently broken system
* Anyone in a production environment with data or work-flows that need to be reliable
Getting Kubuntu 17.04 Beta 2:
* Upgrade from 16.10: run `do-release-upgrade -d` from a command line.
* Download a bootable image (ISO) and put it onto a DVD or USB drive: http://cdimage.ubuntu.com/kubuntu/releases/zesty/beta-2/
Release notes: https://wiki.ubuntu.com/ZestyZapus/Beta2/Kubuntu