I am extremely pleased to have confirmed the entire speaker line-up for foss north 2017. This will be a really good year!
Trying to put together something like this is really hard – you want the best speakers, but you also want a mix of local and international, various technologies, various viewpoints and much, much more. For 2017 we will have open hardware and open software, KDE and Gnome, web and embedded, tech talks and processes, and so on.
You may have heard about Dolphin, not our file manager but the GameCube and Wii emulator of the same name. What you may not have heard of is Ishiiruka, a fork of Dolphin that prioritizes performance over emulation accuracy – and clean code if comments by an upstream Dolphin author on Reddit are to be believed.
Although Ishiiruka began as a reaction to the removal of the Direct3D 9 renderer from the Windows version of Dolphin (which is probably why the Linux community mostly ignored it), it went on to tackle other performance issues such as “micro stuttering”.
Recently the Git master branch of Ishiiruka received compilation fixes for Linux, so I decided to dust off my old dolphin-emu.spec file and give it a try (I’m hardly an expert packager). After some dabbling I succeeded. For now only Fedora 24, Fedora 25, and openSUSE Tumbleweed are supported. The packages are available from https://software.opensuse.org/package/ishiiruka-dolphin-unstable.
openSUSE Leap requires a workaround because it defaults to GCC 4; I plan to look into it later. Once Tino creates a new stable branch that incorporates the Linux fixes, I’ll publish it under https://software.opensuse.org/package/ishiiruka-dolphin.
If any of you are interested in Arch, Debian, Ubuntu,… packages (anything supported by OBS), I’ll gladly accept Submit Requests for PKGBUILDs and similar files at https://build.opensuse.org/project/show/home:KAMiKAZOW:Emulators.
April is approaching, the first month of the year chosen by the KDE development team to release the major update of its applications. But before this big update there is always testing to be done, so I am pleased to announce that the beta of KDE Applications 17.04 has been released. There’s no stopping this! KDE rocks!
On March 24th the KDE Community development team announced the beta of KDE Applications 17.04, another step in the evolution of its ecosystem of programs, which has two fundamental goals: to keep improving KDE applications and to continue migrating more applications to the Qt5/KF5 framework.
After work that began the very day KDE Applications 16.08 was released, the developers have been working quietly but in a coordinated and steady way, preparing the new features that await us in April.
Now is the time to freeze features and dependencies, so that the development team (and anyone else who wishes to) can focus on fixing bugs and polishing the applications.
More information: KDE.org
Every task in the Free Software world is important: developing, translating, packaging, designing, promoting, etc. But there is one that is often overlooked and that we only remember when things don’t work as they should: finding bugs.
From this blog I encourage you to be one of the people responsible for the success of the new KDE Applications release. To do so, take part in finding and reporting bugs, something essential for the developers to fix them so that the launch of KDE Applications 17.04 is well polished. Bear in mind that bugs often exist because the developers have never run into them: the circumstances for them to show up simply haven’t occurred.
Hi all, I have an awesome laptop I bought from my son, a hardcore gamer. It’s used, but also very beefy and well cared for. Lately, however, it has begun to freeze, by which I mean: the screen is not updated, and no keyboard input is accepted. So I can’t even REISUB; the only cure is the power button.
I like to leave my laptop running overnight for a few reasons -- to get IRC posts while I sleep, to serve *ubuntu ISO torrents, and to run Folding@Home.
Attempting to cure the freezing, I’ve updated my graphics driver, rolled back to an older kernel, removed my beloved Folding@Home application, and left the fan on overnight, all to no avail. After installing lm-sensors and such, overheating didn’t seem likely, but I’d like to be sure about that.
Recently I turned off screen dimming at night and left a Konsole window on the desktop running `top`. This morning I found it frozen again, with nothing apparent in the top readout:
KDE.org is quite possibly one of the largest websites of any desktop-oriented open-source project, extending into applications, wikis, guides, and much more. The amount of content is dizzying, and a huge chunk of it is about as old as the mascot Kandalf, figuratively and literally.
The KDE.org user-facing design “Aether” is live and various kinks have been worked out, but one fact is glaringly obvious: we’ve made the layers of age look better by adding yet another layer. Ultimately the real fix is migrating the site to Drupal, so I figured this post should cover some of the thought and progress behind the ongoing work.
Right now the work is on porting the Aether theme to Drupal 8; ideally it’ll be better than a perfect port, with Drupal optimizations, better use of Bootstrap 4, and other refinements. Additionally, I’m preparing a “Neverland-style” template for those planning to use Aether on their KDE-related project sites, but that’s more of a side project until the Drupal theme lands.

Recently the theme was changed to use Bootstrap’s Barrio base theme, which has been a very pleasant decision, as we get much more “out of the box”. It does require a Bootstrap library module which allows local or CDN-based Bootstrap installations, and while at first I asked “why can’t a theme just be self-contained?”, I now understand the logic: Bootstrap is popular, multiple themes use it, and the module keeps it all up to date and can itself be updated. I do think Drupal should perhaps have some rudimentary package management that says “hey, we also need to download this”, but it’s easy enough to install separately.
If you have a project website looking to port to Aether, I would first advise simply waiting until you can move your page to the main Drupal installation when it eventually goes live; in my perfect world, Drupal unifies a great amount of disparate content, which then gets updates for free. Additionally, consider hitting up the KDE-www mailing list to help out with content or to file feature requests for front-end UI elements. While I’m currently just lurking on the mailing list, I’ll try to provide whatever info I can. As an aside, there was some Telegram confusion between people looking to contribute and concerned administrators, so please simply defer to the mailing list.
As for the Aether theme, I will be posting the basic theme on our Git repo. When it goes up, if you have Bootstrap and Twig experience (any at all is more than I had when I started), please consider contributing, especially if you maintain a page and would migrate to Drupal if it had the appropriate feature set. I will post a short follow-up when the repo is up.
I’m sorry that $feature behaves differently to how you expect it. But it’s the way it is and that’s by design. The feature work exactly as it’s supposed to work. I’m sorry, this won’t be changed.
With decisions like that, no wonder KDE is still a broken mess.
I wonder why the hell I even bother reporting issues. Bugs are by design these days.
Have a nice life.
A week ago I received my Raspberry Pi Zero W to play a bit with an IoT device. The specs of this small computer are the following:
But the interesting part comes with the connectivity:
And especially one of its hidden features, which lets you run the device headless and connect over SSH via USB by adding the following line to config.txt:
And modifying the file cmdline.txt to add:
Remember to also create a file called ssh in the boot partition to enable SSH access to your Raspberry Pi. There are plenty of tutorials on the Internet showing this!
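The lines themselves do not appear above, so for reference, these are the commonly documented settings for SSH over USB (USB Ethernet gadget mode) on the Pi Zero; double-check them against the official Raspberry Pi documentation before relying on them:

```
# /boot/config.txt -- append at the end to enable the dwc2 USB controller:
dtoverlay=dwc2

# /boot/cmdline.txt -- insert after "rootwait", keeping everything on one line:
modules-load=dwc2,g_ether

# And create an empty file named "ssh" in the boot partition:
# touch /boot/ssh
```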
One use case that comes to mind for this device and this feature is creating portable presentations and showing them on any computer without having to install new software.
For the presentation, I used the qml-presentation-system (link).
More use cases could be:
Please comment if you have other ideas or use cases.
Today the Kubuntu team is happy to announce that Kubuntu Zesty Zapus (17.04) Beta 2 is released. With this Beta 2 pre-release, you can see and test what we are preparing for 17.04, which we will release on April 13, 2017.
NOTE: This is a Beta 2 release. Kubuntu beta releases are NOT recommended for:
* Regular users who are not aware of pre-release issues
* Anyone who needs a stable system
* Anyone uncomfortable running a possibly frequently broken system
* Anyone in a production environment with data or work-flows that need to be reliable
Getting Kubuntu 17.04 Beta 2:
* Upgrade from 16.10: run `do-release-upgrade -d` from a command line.
* Download a bootable image (ISO) and put it onto a DVD or USB drive: http://cdimage.ubuntu.com/kubuntu/releases/zesty/beta-2/
Release notes: https://wiki.ubuntu.com/ZestyZapus/Beta2/Kubuntu
The years don’t just pass, they fly by. KDE Blog reaches its ninth anniversary in 2017, which means I have shared my knowledge and experience of KDE and GNU/Linux in more than 4200 articles. It’s an anniversary I like to share with you, because this blog has become one of the pillars of my life and I consider my readers virtual friends.
In 2008, after a few years enjoying the Free Software world, I decided I should give back some of what it had given me. I concluded that the only way I could help this world of free knowledge was to create a blog that would help spread its virtues.
And that even though I didn’t consider myself an expert, far from it. I simply thought that being a non-expert who worked as a teacher made me the ideal person to explain to other beginners like me how to do things with Free Software, while publicly recording the solutions to my own problems.
So on March 24th, 2008, KDE Blog was born. It was an easy birth: all it took was mentioning it to my virtual colleague Daniel Moreno, who provided the hosting; putting together a handful of posts meant to introduce readers to the GNU/Linux world; and enough courage to step out of my comfort zone and start publishing daily news about KDE and its distributions. I want to stress my self-imposed obligation to publish one post every day, because that is what turned this work into a habit and kept the project from dying.
KDE Blog’s declaration of intent
[March 24th, 2008]
Hello everyone: Today KDE Blog is born, a new blog in the immense and infinite world of blogs. This blog has several goals: to help beginners in the Linux world, to report on the KDE world (the Linux desktop environment), and to promote the use of Free Software.
Over the years I have published commemorative posts, which you can browse under the corresponding tag, and this year is no exception. But I won’t repeat myself, so I refer you to last year’s post for a summary of the earlier ones; this year I will devote myself to reviewing my year as a humble prophet of Shared Knowledge.
For some time now I haven’t written my annual recap on December 31st; I’ve decided to do it in March instead, to celebrate the blog’s anniversary. This time I’ve split the recap into three sections: the blog, my surroundings, and KDE España.
This year hasn’t been among the worst in terms of available time. As I anticipated last year, I am gradually recovering the time in front of the PC needed to write articles of decent quality.
So for another year I have managed to keep up the pace of one post per day, even keeping some articles in reserve for leaner times. It helped a lot that this year was prolific in KDE news: KDE’s 20th anniversary, the many Free Software events, the appearance of the KDE Slimbook, my more active participation in KDE España, dozens of updates, and so on.
Ah, I almost forgot: the blog received recognition with the award for best media outlet at the Open Awards 2016 during OpenExpo 2016, which this year I was unable to attend for reasons beyond my control.
In this area I still have two fronts: my personal surroundings and my role as organizer of the Jornades Lliures at the UNED in Vila-real.
With every passing day I am a little more open in my promotion of Free Software. At work I advocate, every day and to every member of the educational community (students, teachers, families, institutions, etc.), for the GNU/Linux and KDE communities: their distributions, their applications, their free formats, and so on, championing freedom for users and benefits for society.
As a result, the number of people around me using Linux distributions keeps growing. Perhaps it is time to take another step and be more proactive with those who haven’t made up their minds yet.
However, the 5th Jornadas y Talleres Libres are not proving satisfactory. The reasons are low public attendance and several cancelled talks.
In the end, the lack of promotion and innovation is weighing more than the desire to continue, so I am seriously considering taking a sabbatical year from this initiative. I think I need a group of people to keep it going, since I am not seeing the results I hoped for.
This time I believe my activity in the association has been full of achievements, partly thanks to the great group of people who make it up and who are always willing to help improve it.
Within KDE España I now have two roles: besides coordinating the Communications Group, this year I was asked to serve as Secretary, which means handling certain organizational tasks such as managing members and writing the minutes of meetings.
My involvement in the KDE España podcasts is also very active, both as their organizer and as a participant in every episode. It is a very appealing medium, and quite addictive.
I also take the opportunity to invite you to watch or listen to the podcasts that some members of the KDE España community record monthly, in which we discuss various topics related to Free Software; the series has now reached its third season.
And following tradition, I want to make public my plans for the blog and my commitment to Free Software for the coming year, and to announce that:
Now that Qt 5.9 is getting closer, let’s take a look at a minor but immensely useful improvement to the basic OpenGL enablers that form the foundation of Qt Quick and the optional OpenGL-based rendering path of QPainter.
Those looking at the documentation snapshots for 5.9 may have already come across some new functions in the venerable QOpenGLShaderProgram. What is more, most internal usages in Qt have been switched over to the new API. What does this mean in practice?
As explained here, such shader programs will attempt to cache the program binaries on disk using GL_ARB_get_program_binary or the standard equivalents in OpenGL ES 3.0. When no support is provided by the driver, the behavior is equivalent to the non-cached case. The files are stored in the global or per-process cache location, whichever is writable. The result is a nice boost in performance when a program is created with the same shader sources next time.
How big is the improvement? It varies. Some drivers have already been doing some sort of caching for the past couple of years, while some others have similar features in the pipeline. However, the gains turn out to be quite significant in practice on devices that are out in the field right now:
Do not read too much into the actual numbers. What is important is the difference between Qt 5.8 and 5.9. Also, a simple Qt Quick or GL-backed QPainter scene will definitely not use 10 programs, but as complexity grows, with Qt Graphical Effects and custom ShaderEffect items entering the picture, getting similar improvements does not look far-fetched anymore.
In fact we gain something even on systems that employ shader caching already. Therefore every application’s startup and view switching times are expected to benefit with Qt 5.9 – without having to change anything.
Applications that use QOpenGLShaderProgram on their own can usually switch to the cacheable function variants by just changing the name in the function call. The change has to be a conscious decision, though, since some of the APIs change semantics when program binaries are used. Most notably, QOpenGLShader, addShader(), and removeShader() are incompatible with program-level caching since they rely on individual shader compilation.
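As a sketch of what the rename looks like in practice (not a drop-in recipe: a current OpenGL context must be bound, and the shader sources are placeholders):

```cpp
#include <QOpenGLShaderProgram>

// Builds a program whose binary can be cached on disk (Qt 5.9+).
// In Qt 5.8 the two add* calls below were addShaderFromSourceCode();
// switching to the cacheable variants is just a rename at the call site.
bool buildCachedProgram(QOpenGLShaderProgram &program,
                        const char *vertexSrc, const char *fragmentSrc)
{
    return program.addCacheableShaderFromSourceCode(QOpenGLShader::Vertex, vertexSrc)
        && program.addCacheableShaderFromSourceCode(QOpenGLShader::Fragment, fragmentSrc)
        && program.link();
}
```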
That’s it for now, stay tuned for more posts about exciting upcoming Qt 5.9 and 5.10 features.
The post Boosting performance with shader binary caching in Qt 5.9 appeared first on Qt Blog.
After presenting the Podcast Linux Audacity course a few days ago, I thought it wouldn’t hurt to give this project as much publicity as possible. So I have decided to write a blog post promoting each of the chapters created by the great Juan Febles. So welcome to the first episode of the Podcast Linux Audacity Course, #01 “Record and edit your voice”, which kicks off this most interesting series.
In the first chapter of the Audacity course, which Juan has titled “Record and edit your voice”, he lays out the foundations of the project, explaining how it came about as well as the sources it draws on, namely the basic Audacity course from 9 decibelios by David Arribas, and the video series “Manual Audacity” by the great Juan Luis Fernández Gallo, known as Jenofonte.
In this episode he also recommends the type of microphone best suited to podcasting (for him, dynamic microphones) and explains the connections of his current microphone. He likewise goes over the basic tips for a good voice recording: the recording environment, distance and position relative to the microphone, reducing echo with bookshelves full of books, foam to dampen plosive sounds, etc.
After those explanations, Juan shows us the basic parts of Audacity, such as the decibel meter and the recording of his own voice as a waveform. He then demonstrates some touch-ups he applies to that recording: removing background noise, boosting the quietest parts with Chris’s Dynamic Compressor, lightly equalizing his voice, and limiting the loudest sections.
Without further ado, I’ll leave you to enjoy the video:
One of my pet peeves with teaching FP in C++ is that if we want to have efficient code, we need to accept functions and other callable objects as template arguments.
Because of this, we do not have function signatures that are self-documenting. Consider a function that outputs items that satisfy a predicate to the standard output:
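A minimal sketch of such a function (the names write_if and Predicate are assumptions, since the original listing is not shown):

```cpp
#include <cassert>
#include <iostream>
#include <vector>

// Writes the items that satisfy the predicate to standard output.
// Predicate is a bare template parameter: nothing in the signature
// tells the caller what it must accept or return.
template <typename Predicate>
void write_if(const std::vector<int> &xs, Predicate p)
{
    for (int x : xs) {
        if (p(x)) {
            std::cout << x << '\n';
        }
    }
}
```

Calling `write_if({1, -2, 3}, [](int x) { return x > 0; })` prints the positive items, but the signature alone does not document that.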
We see that the template parameter is named Predicate, so we can infer that it needs to return a bool (or something convertible to bool), and we can deduce from the function name and the type of the first argument that it should be a function that takes an int. This is a lot of reasoning just to be able to tell what we can pass to the function.

For this reason, the use of std::function in APIs like this is sometimes advocated: it tells us exactly which functions we can pass in. The problem is that std::function is slow. So, we either need to have a bad API or a slow API.
With concepts, things will change. We will be able to define a really short (and a bit dirty) concept that checks whether the functions we get have the right signature:
Edit: Changed the concept name to Callable to fit the naming in the standard [func.def], since it supports any callable, not just function objects.
We will be able to call foo with any callable that looks like an int-to-int function, and we will get an error “constraint Callable<int(int)> is not satisfied” for those that do not have a matching signature.
An alternative approach is to use the std::is_invocable type trait (thanks to Agustín Bergé for writing the original proposal and pointing me to it). It will provide us with a cleaner definition for the concept, though the usage syntax will have to be a bit different if we want to keep the concept definition short and succinct.
When we get concepts (C++20, hopefully), we will have the best of both worlds: an optimal way to accept callable objects as function arguments, without sacrificing the API to do it.
Today, Functional Programming in C++ is again the Deal of the Day – you get half off if you use the code dotd032317au at cukic.co/to/manning-dotd
A few months ago, Helio blogged about building KDE 1 (again) on modern systems. So recently while cleaning up some boxes of old books, I found the corresponding books — which shows that there was a time that there was a market for writing books about the Linux desktop.
Particularly the top book, “Using KDE” by Nicholas Wells, is interesting. The first page I opened it up to was a pointer to the KDE Translation teams, and information on how to contribute, how to get in touch with the translation teams, etc. You can still find the translation info online, although the location has changed since 2000.
I did a day’s training at the FLOSS UK conference in Manchester on Chef. Anthony Hodson came from Chef (a company with over 200 employees) to provide this intermediate training, which covered writing recipes using test-driven development. Thanks to Chef, Anthony, and FLOSS UK for providing it cheaply. Here are some notes for my own interest and anyone else who cares.
Using chef generate we started a new cookbook called http.
This cookbook contains a .kitchen.yml file. Test Kitchen is a Chef tool to run tests on Chef recipes. ‘kitchen list’ shows the machines it’s configured to run; by default it uses VirtualBox and CentOS/Ubuntu, but this can be changed to Docker or whatever. ‘kitchen create’ makes them, ‘kitchen converge’ deploys, ‘kitchen login’ logs into a virtual machine, and ‘kitchen verify’ runs the tests. ‘kitchen test’ destroys, then sets up and verifies, which takes a bit longer.
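As an illustration, a minimal .kitchen.yml along the lines described above might look like this (the platform names and the recipe name are assumptions):

```yaml
driver:
  name: vagrant          # the VirtualBox-backed default; can be swapped for docker

provisioner:
  name: chef_zero

verifier:
  name: inspec

platforms:
  - name: ubuntu-16.04
  - name: centos-7

suites:
  - name: default
    run_list:
      - recipe[http::default]
```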
Write the test first. If you’re not sure what the test should be, write stub/placeholder statements for what you do know, then work out the code.
ChefSpec (RSpec-based) provides the in-memory unit tests for recipes; it’s quicker and does finer-grained tests than the Kitchen tests (which use InSpec and black-box test the final result). Run with ‘chef exec rspec ../default-spec.rb’; rspec shows a * for a stub.
Beware if a test passes first time, it might be a false positive.
ohai is a standalone or chef-client tool which detects node attributes and passes them to the Chef client. We didn’t get onto this, as it was for a follow-on day.
Pry is a Ruby debugger. It’s a gem and part of ChefDK.
To debug recipes, use pry in the recipe; it drops you into a debug prompt for checking that the values are what you think they are.
I still find deploying Chef a nightmare. It won’t install in the normal way on my preferred Scaleway server because it’s ARM; by default it needs a Chef server, but you can just use chef-client with --local-mode, and then there are chef-solo, chef-zero, and knife-solo, which all do things that I haven’t quite got my head around. All interesting to learn anyway.
Ryou is the amazing artist from Japan who made the Kiki plastic model. Thanks to Tyson Tan, we now have an interview with him!
I’m Ito Ryou-ichi (Ryou), a Japanese professional modeler and figure sculptor. I work for the model hobby magazine 月刊モデルグラフィックス (Model Graphics Monthly), writing columns, building guides as well as making model samples.
Building plastic models has been my hobby since I was a kid. Back then I liked building robot models from anime titles like the Gundam series. When I grew up, I once worked as a manga artist, but the job didn’t work out for me, so I became a modeler/sculptor around my 30s (in the 2000s). That said, I still love drawing pictures and manga!
Being a former manga artist, I like to articulate my figure design from a manga character design perspective. First I determine the character’s general impression, then collect information like clothing style and other stuff to match that impression. Using those references, I draw several iterations until I feel comfortable with the whole result.
Although I like human and robot characters in general, my favorite has to be kemono (Japanese style furry characters). A niche genre indeed, especially in the modeling scene — you don’t see many of those figures around. But to me, it feels like a challenge in which I can make the best use of my taste and skills.
There are many ways of prototyping a figure. I have been using epoxy putty sculpting most of the time. First I make the figure’s skeleton using metallic wires, then put epoxy putty around the skeleton to make a crude shape for the body. I then use art knives and other tools to do the sculpting work, slowly making all the details according to the design arts. A trusty old “analogue approach” if you will. In contrast, I have been trying the digital approach with ZBrushCore as well. Although I’m still learning, I can now make something like a head out of it.
In the case of Kiki’s figure (and most of my figures), the final product is known as a “garage kit”: a box of unassembled, unpainted resin parts that the buyer builds and paints themselves. To turn the prototype into a garage kit, the finished prototype must first be broken into individual parts, making sure each has a casting-friendly shape. Silicone rubber is then used to make molds of those parts. Finally, liquid synthetic resin is poured into the molds, and the parts are removed once the resin has set. This method is called “resin casting”. Although I can cast them at home myself, I often commission a professional workshop to do it for me. It costs more that way, but they can produce higher-quality parts in larger quantities.
Some time ago I came across Tyson Tan’s character designs on Pixiv.net and immediately became a big fan of his work. His Kiki pictures caught my attention and I did some research out of curiosity, leading me to Krita. I haven’t yet learned how to use Krita, but I’ll do that eventually.
Ryou: Before making Kiki, I had already collaborated with a few other artists, turning their characters into figures. Tyson has a unique way of mixing the beauty of living beings with futuristic robotic mechanisms that I really like, so I contacted him on Twitter. I picked a few characters from his creations as candidates, one of them being Kiki. Although something more “glamorous” would have been great too, after some discussion we finally decided to make Kiki.
Tyson: During the discussions, we looked into many of my original characters, some cute, some sexy. We did realize the market prefers figures with glamorous bodies, but we really wanted to make something special. Kiki, being Krita’s mascot, the mascot of a free and open source art application, carries one more layer of meaning than “just someone’s OC”. It was very courageous of Ryou to agree to such a plan, since producing a figure like this is very expensive and he would be the one bearing the monetary risk. I really admire his decision.
The Kiki figure kit can be ordered from my personal website. I send them worldwide: http://bmwweb3.nobody.jp/mail2.html
I plan to collaborate with other artists in the future to make more furry figures like Kiki. I will contact the artist if I like their work, but you may also commission me to make a figure for a specific character.
I hope through making this Kiki figure I can connect with more people!
Ryou’s Personal Website: http://bmwweb3.nobody.jp/
WireGuard is participating in Google Summer of Code 2017. If you're a student who would like to be funded this summer for writing interesting kernel code, studying cryptography, building networks, or working on a wide variety of interesting problems, then this might be appealing. The program opened to students on March 20th. If you're applying for WireGuard, choose "Linux Foundation" and state in your proposal that you'd like to work on WireGuard with "Jason Donenfeld" as your mentor.
Kdenlive development might look a bit slow these last months, but we are very busy behind the scenes. You can join us tonight at our monthly café to get an insight into the current developments, follow the discussions, or ask your questions.
The café will be at 21:00 European time, on irc.freenode.net, channel #kdenlive.
More news on the next releases will follow soon, so stay tuned.
As you may have read in my recent blog post, I defined a file type (*.webapp) that contains instructions to build an Electron web app, I wrote a script to install *.webapp files (nativefier-freedesktop) and a script/wizard to build *.webapp files starting from a URL, and I published a first web app in the KDE Store / OpenDesktop / Linux-Apps.
What inspired me was the AUR (Arch User Repository): since it’s not safe to install binaries distributed by users, the AUR instead hosts instructions to automatically download sources, build an Arch package, and install it. The principle of *.webapp is the same: instructions that let users build web apps locally, optionally with custom CSS/JS to get, for example, a dark version of a famous site like YouTube.
Also, when I use KDE Neon or other distros, I miss the AUR a lot: there you can find everything and install it quickly, including Git versions of apps whose stable releases are in the official repos. So I thought: now that there are distro-agnostic packages like Flatpak, Snap, and AppImage, why not create the “distro-agnostic AUR”? It would work exactly like the AUR, except that at the end of the installation process it would create not an Arch package but a Flatpak/Snap/AppImage one.
So a developer could distribute, say, the Flatpak of the stable 1.0 version of his app, and a user could write a DAUR (“Distro-Agnostic User Repository”) package with the recipe to build a Flatpak from the Git sources, so other users would be able to install both the official 1.0 version and the development version as Flatpaks. Or a user could write a recipe for Snap because he doesn’t like that the developer only distributes a Flatpak, and so on; the use cases are many.
If you like the idea, please share it to grow the interest.
KDevelop 5.1.0 released
We are happy to announce the release of KDevelop 5.1! Tons of new stuff entered KDevelop 5.1. Here's a summary of what's new in this version:
We had a great student for GSoC 2016 implementing LLDB support in KDevelop. The end result is that we now have a debugger framework which can be used for both GDB and LLDB MI communication. The LLDB plugin teaches KDevelop to talk to the standalone LLDB MI driver (lldb-mi), so it’s now possible to use LLDB as an alternative debugger backend for KDevelop. One interesting thing about LLDB is that it’s also potentially useful for us on OS X and Windows, especially as the Windows port of LLDB becomes more and more stable.
With 5.1, KDevelop got a new menu entry Analyzer which features a set of actions to work with analyzer-like plugins. During the last months, we merged analyzer plugins into kdevelop.git which are now shipped to you out of the box:
Cppcheck is a well-known static analysis tool for C/C++ code, useful for taking a closer look at your source code and checking for common programming faults such as out-of-bounds accesses, memory leaks, null pointer dereferences, uninitialized variables, etc. With the Cppcheck integration in KDevelop, running the cppcheck executable is just one click away; KDevelop passes the correct parameters to cppcheck, including any include paths and other options.
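As a small, hand-rolled illustration of the kind of fault Cppcheck flags (this example is not from the KDevelop documentation, and it only invokes cppcheck if the executable happens to be installed):

```shell
# Write a tiny C file containing an out-of-bounds write.
cat > /tmp/oob.c <<'EOF'
int main(void) {
    int a[3];
    a[3] = 1;   /* index 3 is one past the end of a[3] */
    return 0;
}
EOF
# Run cppcheck on it if available; KDevelop does the equivalent with the
# project's include paths passed automatically.
if command -v cppcheck >/dev/null 2>&1; then
    cppcheck --enable=all /tmp/oob.c
else
    echo "cppcheck not installed; skipping"
fi
```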
While the Cppcheck plugin is shipped out of the box, other analyzers are not yet considered 100% stable and still reside in their own repositories. The clang-tidy plugin (another static analysis & refactoring tool for C/C++) looks super promising, as it is really easy to use from the command line and thus easy to integrate into our IDE. We plan to import more of those analyzers into kdevelop.git so they'll be part of the kdevelop tarball and thus available to you without having to install yet another package.
Since 5.1, KDevelop is able to parse code written in the Open Computing Language (OpenCL). The OpenCL language support is backed by our Clang-based language support backend and thus required only minimal changes in KDevelop. Support for handling NVidia's CUDA files will be part of 5.2 instead. Stay tuned.
Note that for KDevelop to detect .cl files as OpenCL files, an up-to-date shared-mime-info package which contains this patch is required. Alternatively, you can add the MIME type yourself by creating a definition file with appropriate contents under /usr/share/mime/packages/ and re-running update-mime-database.
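A per-user variant of that workaround, sketched here under the assumption of standard shared-mime-info behavior (no root needed; the XML follows the freedesktop.org shared-mime-info format, and the comment/sub-class details are my own reasonable choices):

```shell
# Define the OpenCL MIME type in the per-user MIME database and rebuild it.
MIMEDIR="$HOME/.local/share/mime"
mkdir -p "$MIMEDIR/packages"
cat > "$MIMEDIR/packages/x-opencl-src.xml" <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<mime-info xmlns="http://www.freedesktop.org/standards/shared-mime-info">
  <mime-type type="text/x-opencl-src">
    <comment>OpenCL source code</comment>
    <sub-class-of type="text/x-csrc"/>
    <glob pattern="*.cl"/>
  </mime-type>
</mime-info>
EOF
# Rebuild the user database only if the tool is available.
if command -v update-mime-database >/dev/null 2>&1; then
    update-mime-database "$MIMEDIR"
fi
```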
The Python language support now understands Python 3.6 syntax and semantics. In addition, thanks to the work of Francis Herne, various long-standing issues in the semantic analysis engine have been fixed:
These improvements were accompanied by cleaning up dusty code, making future changes simpler as well. Furthermore, our style checker integration has been rewritten, making it much faster and easier to configure.
Thanks to Morten Danielsen Volden, we now have Perforce integration in kdevplatform.git, which can be used freely starting with KDevelop 5.1. Perforce is a commercial, proprietary revision control system. The Perforce integration works by running a local copy of the p4 executable (which needs to be installed independently of KDevelop) with the appropriate parameters. This is similar to how KDevelop integrates with other VCSes, such as Git and Bazaar.
It is now possible to select the current color scheme from within KDevelop, a feature which has been requested several times in the past. This is especially useful for when KDevelop is run under a different desktop environment than KDE Plasma, where the color scheme settings may not be easily accessible.
We're continuously improving the Windows version of KDevelop, and we're planning to release a first KDevelop version for OS X soon (yep, we're repeating ourselves here, please stay tuned!). For the Windows version, we upgraded the KF5 version to 5.32 and the LLVM/Clang version to 3.9.1.
Together with the source code, we again provide a prebuilt one-file-executable for 64-bit Linux, as well as binary installers for 32- and 64-bit Microsoft Windows. You can find them on our download page.
The 5.1.0 source code and signatures can be downloaded from here.
Please give this version a try and as always let us know about any issues you find via our bug tracker.
Timothée Giet has finished his latest training course for Krita. In three parts, Timothée introduces the all-new animation feature in Krita. Animation was introduced in Krita 3.0 last year and is already used by people all over the world, for fun and for real work.
Animation in Krita is meant to recreate the glory days of hand-drawn animation, with a modern twist. It’s not a Flash substitute, but it lets you pair Krita’s awesome drawing capabilities with a frame-based animation approach.
In this training course, Timothée first gives us a tour of the new animation features and panels in Krita. The second part introduces the foundation of traditional animation. The final part takes you through the production of an entire short clip, from sketching to exporting. All necessary production files are included, too!
Animate with Krita is available as a digital download and costs just €14.95 (excluding VAT in the European Union). English and French subtitles are included, as well as all project files.
As a professional mobile application developer you probably already know by now how awesome Qt makes most of your daily job.
Yet, things like finding that useful (but undocumented) qmake option, deciding upon the application architecture to use for your next project, implementing native extensions on Android and iOS, checking whether a suitable library for your component already exists, might prove quite difficult if you are on your own.
The forums, IRC and mailing lists are great places to start from for one-shot questions, but sometimes you need a dedicated environment with peers where you can have a more focused and ongoing discussion about your daily matters. Here is where QtMob comes in.
QtMob is a global community of professional Qt mobile application developers that are willing to share their pains, knowledge and resources.
QtMob is a Slack chat, so that you can leverage all the goodies that modern-day, computer-mediated conversation has to offer.
It is a tool that you can open in the morning while sipping your coffee/tea, to likely find a bunch of fresh new mobile-related resources that someone just posted.
It sports a good mix of professional Qt mobile app developers, relatively fresh Qt users who love being on the cutting edge and sharing what they just discovered, Qt veterans who are looking for a smooth transition to newer topics like mobile and Quick, members of The Qt Company who want a direct channel to a group of mobile devs for feedback, as well as professionals from companies which have invested in Qt on mobile to provide middleware and services to application developers.
Members come from industries as diverse as game development, medical software and geographic information systems, you name it. That’s just as diverse as the Qt user spectrum.
QtMob is also a place for collaboration, where members get feedback about new packages and tools they offer or plan to offer to the community (among others, Cutehacks’ qpm package manager, Qt Champion Benlau’s many utility libraries, Grecko’s SortFilterProxyModel QML wrapper, Esri’s ArcGIS SDK and VPlay’s mobile SDK), or just recruit consultants for their next project.
QtMob is the right place to start an in-depth discussion about strategies to prevent memory warnings on iOS, choosing the right push notifications component, talking about Qt and Redux, implementing CI, making .pro changes to support your build, or just shouting your frustration in the #bottomlesspit channel.
Here are a few first impressions from a community member, Sassan from Tehran:
“After joining this group I was able to ask questions, and there was always someone who responded quickly, either with the solution or the reason why it was not possible yet to achieve what I needed. A question may initiate discussions about the topic, and you’ll end up knowing much more than what you asked for, which is good because it will save you lots of time not doing the wrong things others have tried before. Your question may end up as a feature request on the Qt bugtracker.
After joining this group I was able to learn about the latest technologies, packages, related software, etc. around Qt. It’s really nice to know, for example, that the time you spend releasing your app can be reduced five-fold by using tools people discuss here… and that’s just one example.
After joining this group I had the chance to know about the latest up-to-date documents (pdfs, videos, tutorials, etc.) people create about coding in Qt.
After joining this group I had the chance to meet so many nice people, many of whom spend part of their time to contribute to this open source project.
After joining this group I learned about best practices for doing things I already knew how to do, but which I can now do better.”
We recently celebrated user #100 (as I write this, the count is 130), Jeff Galbraith from iMirror: a good example of using Qt for mobile in innovative settings like fitting rooms. But there are many more QtMob member use cases…
Come along and see, it’s free!
The only requirement: be willing to give at least as much as you get, in whatever form is best suited to your level of experience and job constraints.
The post QtMob: Qt mobile application development becomes easier appeared first on Qt Blog.
Twitter seems ever more dominant and important for communication. Years ago I added a microblogging feed to Planet KDE, but that still needed people to add themselves, and being all idealistic I added support for anything with an RSS feed, assuming people would use the more-free identi.ca. But identi.ca went away, while Twitter (which, I think, removed its RSS ability) got ever more important and powerful. For the relaunched theme a couple of years ago we added some Twitter feeds, but they were hidden away and little used.
So today I’ve made them shown by default and available down the side. There’s one for everything with a #kde tag and one for the @kdecommunity feed. You can hide them by clicking the Microblogging link at the top. Let me know what you think.
Update: my Bootstrap CSS failed and on medium sized monitors it moved all the real content down to below the Twitter feeds rather than floating to the side so I’ve moved them to the bottom instead of the side. Anyone who knows Bootstrap better than me able to help fix?
I’ve also done away with the planetoids: zh.planetkde.org, fr.planetkde.org, pim.planetkde.org and several others. These were little used, and when I asked representatives from the communities about them they didn’t even know they existed. Instead we have categories: use the Configure Feed menu at the top to select languages.
I’ve allowed the <embed> tag, which allows embedding YouTube videos and other bits. Don’t abuse it, folks!
Finally Planet KDE moved back to where it belongs: kde.org. Because KDE is a community, it should not be afraid of its community.
Let me know of any issues or improvements that could be made.
This announcement is also available in Spanish and Taiwanese Mandarin.
It's only been days since our latest release, Goedel 2017.03, was announced, but KDE software updates keep rolling into Chakra, and we could not delay this any further. As always, simply upgrading your system after installation will provide you with the latest available software in our repositories.
The most recent updates for KDE's Plasma, Applications and Frameworks series are now available to all Chakra users. All of these have been built against an important update of Qt to version 5.8.0.
The Calligra suite has also been updated to 184.108.40.206; it is now fully ported to Frameworks 5 and will be provided as a single package. However, this means that applications like author, braindump, flow and stage, which are no longer actively maintained, have been dropped from our repositories.
Plasma 5.9.3 provides another round of bugfixes and translation updates to the 5.9 release, with changes found mostly in kwin, plasma-desktop and plasma-workspace.
Applications 16.12.3 ship with more than 20 recorded bugfixes and include improvements to kdepim, ark, filelight, gwenview, kate, kdenlive and okular. kdelibs have also been updated to 4.14.30.
Frameworks 5.32.0 provide the usual bugfixes and improvements to breeze-icons, kio, ktexteditor, plasma-framework and syntax-highlighting, among other packages.
Other notable package upgrades and changes:
skypeforlinux 220.127.116.11: We now ship the bundled web application, which unfortunately misses some features and lacks system integration, but at least it works.
It should be safe to answer yes to any replacement question by Pacman. If in doubt or if you face another issue in relation to this update, please ask or report it on the related forum section.
Most of our mirrors take 12-24h to synchronize, after which it should be safe to upgrade. To be sure, please use the mirror status page to check that your mirror synchronized with our main server after this announcement.
In February, KDE's Plasma team came together for their yearly in-person meeting, kindly hosted by von Affenfels GmbH, a web design agency in Stuttgart, Germany. The team discussed a wide variety of topics: design, features new and old, bugs and sore points in the current implementation, app distribution, project management, internal and outward-facing communication, and Wayland.
KDE is experimenting with new ways to deploy applications. Under consideration are technologies such as Flatpak, Snap and AppImage, which all have their distinct advantages. Support for bundled applications is being built into Discover, Plasma's software management center, and the KDE Store. One idea is to give software developers more control over their applications' lifecycle and to get updates into the hands of users much more quickly, similar to the packages automatically created from our Git code repositories. This can dramatically cut down the complexity of the deployment chain.
Browser integration in Plasma will be improved by a browser extension that relays notifications, download progress and multimedia information to the Plasma shell, integrating them natively into Plasma.
The Plasma team also discussed using touchpad gestures to control the window manager, so users can use specific multitouch gestures to trigger effects like the "desktop grid", "present windows" or swiping between virtual desktops.
Plasma Mobile, KDE's ongoing project to provide a Plasma implementation suitable for mobile phones, was made to run on the Nexus 5X. The previous reference device, the Nexus 5 (sans "X"), was getting a bit dated, and since it is no longer easily available on the market, a new reference device that people can get their hands on was needed. Bhushan Shah solved the last problems keeping us from using this newer and faster device as a development platform. Images will be appearing shortly, and the team is looking forward to receiving (and addressing) feedback about Plasma on the 5X.
While not strictly Plasma work, the team made a final push to get KDE's websites at www.kde.org updated. A tireless effort by Ken Vermette, with the help of Harald Sitter and a few more helping hands, led to the shiny new design being revealed during the course of the sprint.
On the less technical side, a sprint such as this is always a good opportunity to talk about how we work together, and how we present ourselves to the outside world. While we have made great strides to improve our software by applying more thorough review processes, continuous testing and integration and paying more attention to the wishes and problems of our users, we want to put more focus on stability. One way to achieve this is to move bigger feature merges more towards the beginning of a development cycle, thereby increasing the amount of time we have for testing and ironing out problems.
Sprints like this are only possible with the support of our community. We would like to thank KDE e.V. for making this sprint (as many others before it) possible. A special note of appreciation goes out to all those who donated to KDE e.V.; without your support, we could not get together in person to discuss and work. Personal interaction, while not necessary on a daily basis, helps us improve our collaboration, communication, teamwork, and not least the software we create for our users.
The Linux Action Show did an interview with the team at the sprint, watch this episode from 5 minutes in to meet the crew.
Hi all, you may have read my post on Nativefier, a script to build Electron web apps from URLs. If not, check it for advantages of Electron web apps, like low RAM consumption.
A disadvantage of Nativefier, however, is that on Linux it doesn’t “install” the app but just creates a portable folder, so you have to create the launcher (name, icon, etc.) manually.
So I decided to “define” a file type with the metadata needed to install the app following the Freedesktop specifications: *.webapp files.
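As a purely hypothetical illustration of what such a file might contain (the key names below are my own assumptions, modeled on desktop entries, not the project's actual format — check the repo for the real definition):

```ini
; Hypothetical *.webapp sketch; all field names are illustrative only.
[WebApp]
Name=KDE Phabricator
URL=https://phabricator.kde.org
Icon=phabricator
CustomCSS=breeze-dark.css
```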
Here is an example of KDE Phabricator with the Breeze style (the *.webapp file is included in the repo):
It has native notifications:
It can be uninstalled with a right-click on its entry in the app menu:
And there is also a Service Menu for Dolphin to perform “right-click > Install” on *.webapp files:
For more information and instructions, check the repo.
I hope you like the idea. I will distribute more *.webapp files, maybe for sites with a Breeze style.
conf.kde.in 2017 was held on the big and beautiful campus of IIT Guwahati in Assam. From the 10th to the 12th of March, the conference offered two days of talks followed by a day of workshops. The talks were lined up in a single track, which helped attendees not miss any talk. Conference pictures from my camera are available on Flickr.
tar -xzf USNO-NOMAD-1e8-1.0.tar.gz              # extract the catalog data file
cp USNO-NOMAD-1e8.dat ~/.local/share/kstars     # put it where KStars looks for catalogs
Now restart KStars and go to Settings → Configure KStars. You'll see the Star Catalogs density slider; move it up and click Apply. This controls how many stars KStars draws on the screen: the more stars, the more resources it takes to render them, so adjust the slider carefully.