GSoC: Phonon QML Iteration 2 & MeeGo

In my quest to bring Phonon, the best multimedia abstraction library from KDE, to QML and Qt Quick, I have reached another big iteration.

Following iteration 1, there is now a new branch called ‘qml-i2’ for iteration 2.

While i1 used only existing experimental Phonon technology, namely the VideoDataOutput class, i2 has moved away from this and now features a closer relationship with the Phonon backend (currently only GStreamer).

I also spent half an hour creating an alternative appearance for the demo player. It uses the Nokia MeeGo 1.2 Harmattan Qt Quick Components, and I have already run it on a Nokia N950, the developer version of the Nokia N9 – it looks really slick.

Hot new stuff:

  • Closer to the backend
  • Redraws only the space occupied by the video frame
  • Pull instead of push frame access -> no memcpy
  • Faster due to the above
  • Still raster/QImage based frame drawing
  • Much improved demo player in demos/qml/videoplayer
  • Harmattan demo player

The overall architecture is really simple to explain:
There are three QML elements: Audio, Video and the Media control itself. The Video element uses a class called VideoGraphicsObject, which implements the drawing logic. The VideoGraphicsObject connects to a backend implementation via a well-defined interface. The backend implementation does its magic to obtain raw video data from the video pipeline and emits a signal when a new frame is ready; the VideoGraphicsObject then does the drawing.
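As a rough C++ sketch of that signal-driven flow – the class and member names here are my own illustration, not necessarily what the qml-i2 branch actually calls them:

```cpp
#include <QtCore/QObject>
#include <QtGui/QGraphicsObject>
#include <QtGui/QImage>
#include <QtGui/QPainter>

// Hypothetical backend-side interface; illustrative names only.
class AbstractVideoDataSource : public QObject
{
    Q_OBJECT
public:
    explicit AbstractVideoDataSource(QObject *parent = 0) : QObject(parent) {}

    // Pull access: the drawing side asks for the current frame when it is
    // about to paint, instead of getting frames pushed (and copied) to it.
    virtual QImage currentFrame() const = 0;

signals:
    // Emitted by the backend once the pipeline has produced a new frame.
    void frameReady();
};

// Drawing side: schedules a repaint of only the video area when a frame is
// ready, then pulls the frame while painting.
class VideoGraphicsObjectSketch : public QGraphicsObject
{
    Q_OBJECT
public:
    VideoGraphicsObjectSketch(AbstractVideoDataSource *source, QGraphicsItem *parent = 0)
        : QGraphicsObject(parent), m_source(source), m_rect(0, 0, 640, 360)
    {
        connect(source, SIGNAL(frameReady()), this, SLOT(onFrameReady()));
    }

    QRectF boundingRect() const { return m_rect; }

    void paint(QPainter *painter, const QStyleOptionGraphicsItem *, QWidget *)
    {
        painter->drawImage(m_rect, m_source->currentFrame()); // raster/QImage drawing
    }

private slots:
    void onFrameReady() { update(m_rect); } // redraw only the space the video occupies

private:
    AbstractVideoDataSource *m_source;
    QRectF m_rect;
};
```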

What is particularly interesting is the way we currently access the frame data. The backend implementation holds exactly one frame, so at any given time either the pipeline or the VideoGraphicsObject holds a lock on it. For the time being this has three particular advantages: a) it avoids any sort of object copy, b) it allows the pipeline to adapt to drawing speed and frequency, and c) the most current frame is always drawn, even if the drawing operation was delayed.
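A minimal sketch of that single-frame exchange, assuming a plain mutex around the frame (the actual backend implementation may differ in detail):

```cpp
#include <QtCore/QMutex>
#include <QtCore/QMutexLocker>
#include <QtGui/QImage>

// Illustrative single-frame slot shared between the pipeline and the
// drawing code; not the actual qml-i2 implementation.
class FrameSlot
{
public:
    // Pipeline side: replace the stored frame with the newest one. If the
    // drawing side currently holds the lock, the pipeline has to wait, so it
    // naturally adapts to drawing speed and frequency.
    void setFrame(const QImage &frame)
    {
        QMutexLocker locker(&m_mutex);
        m_frame = frame; // QImage is implicitly shared, so no pixel data is copied
    }

    // Drawing side: take the lock, paint, release. Whatever was delivered
    // last is what gets drawn, even if the paint event arrived late.
    const QImage &lockFrame()
    {
        m_mutex.lock();
        return m_frame;
    }

    void unlockFrame()
    {
        m_mutex.unlock();
    }

private:
    QMutex m_mutex;
    QImage m_frame; // exactly one frame lives here at any time
};
```

The trade-off is that a slow paint makes the pipeline wait for the lock, which is exactly the adaptation to drawing speed described above.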

To test qml-i2 you’ll need both the qml-i2 branch from the Phonon git repository and the qml-i2 branch from the Phonon GStreamer git repository. After installing both you should be able to run the demo player in Phonon’s demos/qml/videoplayer folder.

6 thoughts on “GSoC: Phonon QML Iteration 2 & MeeGo”

  1. What you really want to use is the OpenGL texture sink in Harmattan. That’s the most efficient way of rendering video, as it allows for a zero-copy pipeline that gives you the frames as textures.

    • Thanks for the tip 🙂
      I only used raster/QImage for starters, to get something usable as soon as possible. I landed ARB FP support this week and did GLSL today, so I am getting there 🙂

      The ultimate target is of course to get to zero-copy or at least as-little-copy-as-possible.

  2. The nice thing about the texture sink is that it allows doing the color space conversion using the SGX’s built-in conversion, without the need for a complex shader.

  3. This is really progressing nicely 🙂
    Though personally I’m kinda waiting for the ffmpeg backend to work. The reason is that I want hardware-accelerated video playback (with VDPAU, XvBA or VA-API), which will probably become possible as soon as the ffmpeg backend works.

    A Phonon question. I don’t know the details, but there are ffmpeg, GStreamer and Xine backends for it. But why do you need to make another backend with another interface? Can’t you just use the Phonon calls that in turn go to the selected backend? I could be completely wrong here or getting the wrong idea, but I’m just asking anyway.

    Also, could you post a little “how to” to get this running on a PC from Qt Creator?

    Keep up the awesome work!
    Mark.

  4. Well, you can QGraphicsProxyWidget the QtPhonon VideoWidget, but that will either not work or not give great performance. On that note, QtPhonon is insanely old though; that is Phonon 4.3, while we are currently at 4.5 😦
