Yet another developer blog.
Back from another LinuxCon, this time in San Diego, CA, where we gave another talk about EFL, titled “Tizen’s Graphical Libraries: EFL”. The talk covered the current state of the libraries and the kind of development going on with them. Gustavo Barbieri presented the talk with me.
The presentation slides are available here.
Besides the keynotes and other nice talks I saw there, the ones that most grabbed my attention were one about the improvements to GDB for debugging multi-core code, showing some recently added features of the tool, and another called “Why Kernel Space Sucks”, featuring mistakes made during the development of some kernel APIs.
The opportunity to meet developers from other projects is always great, even though I couldn’t find other EFL developers there. The upcoming LinuxCon Europe, which is going to have an EFL DevDay, will probably be full of them. All in all, it was a really nice conference.
Evas, the canvas library behind EFL, is already fast and lightweight. But a new feature was recently added that makes it even faster: a cache server, able to cache images and font glyphs across different applications.
The idea is simple: whenever an application needs to show an image on the screen, a quite common use case for any graphical application, it must first load the image from the filesystem and decode it in memory; only then are the pixels available for any further transformation and for display. Now, instead of loading these pixels from the filesystem directly, applications can make a request to CServe2, which will load the image from disk and return the decoded pixels in shared memory.
One of the advantages of having such a server is obvious: if the same image is used by more than one application, it will be loaded only once and occupy memory only once. And loading the same image is a common case when several applications use the same widget toolkit, with the same decorations and so on.
The overall system memory footprint is therefore reduced (fewer copies of the image in memory). Additionally, if all of an application’s images are already cached before it starts, either because another application has already loaded them or because they are still cached from its last run, the application can start faster: no image loading time is necessary.
A similar idea is applied to fonts and glyphs. Glyphs are, roughly speaking, visual representations of text characters. In one of the last stages of rendering text on the screen, the glyphs, which until then usually have a vector representation, are rasterized and become images. At this stage we cache these images on the server too, keeping them ready for client applications when they need them.
An image cache server, CServe, existed before, which is why this one is called CServe2. Besides caching images across applications, it also caches font glyph bitmaps (only those, for now), and has some other nice features: it exports an asynchronous API, which clients can use to request the loading of images as soon as they know those images may be needed. The server will then start preloading them speculatively, and when the application actually needs to show them on the screen, they will already be loaded.
The asynchronous API is actually part of a bigger plan, in which the Evas rendering pipeline will also be converted to a more asynchronous one and then make better use of the cache server, but the current gains are already interesting. For applications that don’t render big fonts or images, it doesn’t make a difference, but an image viewer like Ephoto already sees some nice gains in startup time when loading a big image.
Here is a simple test done using Elementary’s photocam widget, which loads a picture and, once it is loaded, shows it on the screen. The time from the widget being created until the image being loaded for the first time was measured, and the results are shown below. The tests were done with no CServe2 in use, then with CServe2 running but with no previous cache of the image, and finally with the image already cached on the server:
The results speak for themselves. While the server adds a little extra overhead when loading the picture for the first time, due to several factors that can be minimized later, the resulting benefits far outweigh it.
All this is already available in E’s SVN and enabled by default, although the server needs to be run explicitly. If you want to do that, just start the installed binary (it should be under /usr/local/libexec/evas_cserve2 on a default build from sources) and export an environment variable:
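For example (EVAS_CSERVE2 is the variable Evas checks at the time of writing; adjust the path if you installed to a different prefix):

```shell
# Start the cache server (path from a default build from sources):
/usr/local/libexec/evas_cserve2 &

# Tell Evas to use the cache server, then launch applications
# from this same environment:
export EVAS_CSERVE2=1
```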
With that setup, just run any EFL application and it should use the cache server by default (if running the software X11 backend).
Try it out. Any comments are appreciated.
Recently I attended LinuxCon Brazil, gave a presentation and still had the chance to attend some interesting talks.
The presentation was given together with Bruno Dilly, about EFL focused on embedded devices. Compared to some previous similar talks about EFL, this one was more interesting and quite different, since we didn’t talk much about how to write a program using it, but instead focused on its advantages, presented some real use cases and tried to show where one can benefit most from these libraries.
Overall the event was very interesting, with some nice keynotes and presentations. Two that drew my attention were Eugeni Dodonov’s talk about the Linux graphics stack, and Daniel Frye’s keynote about the 10+ years of Linux at IBM, which described how IBM’s LTC (Linux Technology Center) started trying to contribute to Linux, what they did wrong and how they fixed it.
In addition to our talk, two more people from ProFUSION presented: Lucas DeMarchi gave a talk about how to become an open source developer, and Gustavo Barbieri presented two talks: “Tips and Tricks to Develop Software for CE product on Low-End Hardware” and “Demystifying HTML5” with Sulamita Garcia (Intel).
That’s all for now, and hopefully there will be more presentations at upcoming events.
Emotion is the EFL library that handles audio and video playback. It had only two backends: GStreamer and Xine. But recently, Zodiac Aerospace asked ProFUSION to integrate a VLC backend originally developed by Hugo Camboulive. After analyzing his work, we realized it could be used to integrate not only VLC but other players as well. The end result is a generic backend and a brand new VLC plugin for it.
This generic backend runs a separate player (its plugin) in another process. It receives the bytes to be drawn on the Emotion object through shared memory, and communicates with and controls the player through a pipe. The pipe file descriptors to be used are passed to the player as command-line arguments, leaving the standard input/output free to be used if necessary.
The player must receive and send commands defined in a common header, Emotion_Generic_Plugin.h, which can be included for easier implementation. However, there’s no need for the player to link against Emotion.
How does it work?
When the module is initialized for an Emotion object, it starts another process that runs the specified player. The player command line is specified using:
emotion_object_module_option_set(object, "player", <command_to_player>);
A player based on libvlc is now provided, and the generic module internally checks whether the command given was “vlc”, in which case it uses this bundled emotion-vlc player.
When a file is set on this object, Emotion sends the file name to the player and waits for an answer indicating that the player has decoded a bit of the file and that the video size has been set on the module, so it can allocate shared memory of the correct size.
The module then allocates the memory, sends a message to the player and waits for another answer. After this last answer, the “open_done” signal is emitted and the module knows it is ready for playing. Commands issued before the module became ready are applied at this point (and playback is resumed if necessary).
During this setup stage, information about the opened file is stored in the module, so queries such as metadata and length are available to synchronous calls like emotion_object_play_length_get().
During playback, VLC writes the decoded video data to a shared memory buffer, which Emotion uses to display the decoded frames. A triple-buffering mechanism is used to avoid tearing, and it also ensures that Emotion never blocks while the player is writing pixels to a buffer.
If the player dies for any reason, a “decode_stop” signal is emitted, allowing the program to call play again, in which case the player is restarted. Playback should resume from the point where the player crashed (if the player supports seeking on the current media format).
This last point is the main advantage of the generic backend: it allows the program to recover from a player crash. Similar plugins could be implemented using GStreamer and Xine, isolating decoding from the program’s UI in a separate process.
If you have any questions about this backend, please feel free to ask =).
Well, I’m finally putting my blog online, and I hope to have some interesting stuff to post here. I’ll probably be talking a lot about work I’m doing on EFL, and sometimes on WebKit too. And since I’ve lately been doing some work in game development as well, you can expect some posts on that topic.
This blog is also powered by nanoc, so I’ll probably share some thoughts and code for this awesome framework. Although it’s very simple now, I should improve it a little in the near future (but not too much). At the least, I plan to publish the template used for this blog, which could be a good starting point for anyone wanting to start a blog with nanoc with the setup already done. That way maybe I’ll also receive some criticism of the way I implemented things, and can change it to something better.