Multithreaded fast-forward possible?

Peter Rennert p.rennert at cs.ucl.ac.uk
Thu Dec 13 05:08:09 PST 2012


Thanks Juraj, these were good tips.

I needed to add a queue before ffdec_h264, but there must not be one 
after ffdec_h264. Furthermore, I needed to set sync=false on the image 
sink. However, now it seems as if the filesrc (or qtdemux/h264parse?) is 
not feeding the data quickly enough. (I am only at 2x speed; my disk I/O 
limit should be far beyond that.) My ffdec_h264 now emits the following 
warning messages when I try to increase the speed:

0:00:06.193160955  3058      0x27f08f0 WARN videodecoder 
gstvideodecoder.c:2847:gst_video_decoder_alloc_output_frame:<dec> failed 
to get buffer wrong-state
0:00:06.193239481  3058      0x27f08f0 WARN ffmpeg 
gstffmpegviddec.c:1218:gst_ffmpegviddec_frame:<dec> ffdec_h264: decoding 
error (len: -1, have_data: 0)

I got rid of those warnings when I added a queue after the filesrc.
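For reference (in case someone finds this thread later), the queue placement that works for me corresponds roughly to this gst-launch line; the path and core count are just my setup:

```shell
gst-launch-0.10 filesrc location=/path/to/my.mp4 ! queue ! qtdemux ! \
    h264parse ! queue ! ffdec_h264 name=dec max-threads=8 ! \
    ffmpegcolorspace ! deinterlace ! xvimagesink sync=false
```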

Still there are problems with my pipeline:

First problem is that the standard playback speed seems to be twice the normal rate.

Second, although I can see that the computation is now spread over several 
cores and I no longer get warnings from ffdec_h264 after a key press 
(and the respective seek event), the playback speed does not change at 
all anymore.

I put my test code below. It's in Python (my first attempt at Python and 
GStreamer), but I think it should be readable for people who use C or 
C++, too. I suspect I might be using the wrong flags in gst.event_new_seek().

P.S.: Just to make this complete for documentation purposes: playing around 
with it a bit and using

self.pipeline.send_event(gst.event_new_seek(self.pbRate, gst.FORMAT_TIME,
             gst.SEEK_FLAG_FLUSH, gst.SEEK_TYPE_NONE, gst.CLOCK_TIME_NONE,
             gst.SEEK_TYPE_NONE, gst.CLOCK_TIME_NONE))

instead of

self.pipeline.send_event(gst.event_new_seek(self.pbRate, gst.FORMAT_TIME,
             gst.SEEK_FLAG_FLUSH, gst.SEEK_TYPE_NONE, 0,
             gst.SEEK_TYPE_NONE, 0))

gives me a Python error:

Traceback (most recent call last):
   File "pyGstViewer.py", line 61, in keyPress
     self.increasePlaybackSpeed()
   File "pyGstViewer.py", line 70, in increasePlaybackSpeed
     gst.SEEK_TYPE_NONE, gst.CLOCK_TIME_NONE))
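For what it's worth, the pattern I have seen in GStreamer 0.10 examples for changing only the rate is to seek to the *current* position with SEEK_TYPE_SET, rather than passing SEEK_TYPE_NONE for the start. An untested sketch (assuming the pygst 0.10 bindings; error handling omitted):

```python
import gst  # pygst 0.10 bindings assumed

def set_rate(pipeline, rate):
    """Change the playback rate via a flushing seek to the current position.

    Sketch only: assumes the pipeline is PLAYING and the position query
    succeeds; a robust version would check for failures.
    """
    pos, _fmt = pipeline.query_position(gst.FORMAT_TIME)
    pipeline.send_event(gst.event_new_seek(
        rate, gst.FORMAT_TIME,
        gst.SEEK_FLAG_FLUSH,
        gst.SEEK_TYPE_SET, pos,                     # restart from where we are
        gst.SEEK_TYPE_NONE, gst.CLOCK_TIME_NONE))   # keep the stop position
```

If decoding still cannot keep up at high rates, it might also be worth trying gst.SEEK_FLAG_FLUSH | gst.SEEK_FLAG_SKIP, which (since 0.10.22, if I read the docs correctly) asks elements to skip frames they cannot decode in time.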


===== CODE =====

import sys, os
import pygst
pygst.require("0.10")
import gst
import pygtk, gobject
import gtk

class GTK_Main:
     def __init__(self):
         self.window = gtk.Window(gtk.WINDOW_TOPLEVEL)
         self.window.connect("destroy", gtk.main_quit, "WM destroy")
         vbox = gtk.VBox()
         self.window.add(vbox)
         vbox.pack_start(gtk.Label("Please type text"))
         entry = gtk.Entry()
         vbox.pack_start(entry)
         self.window.connect("key-press-event", self.keyPress)

         self.window.show_all()

         self.pipeline = gst.parse_launch(
             "filesrc location=/home/peter/vid/20121207/00/2012-12-07.00-00-00.mp4 ! "
             "queue ! qtdemux ! h264parse ! queue ! ffdec_h264 name=dec ! "
             "ffmpegcolorspace ! deinterlace ! xvimagesink sync=false")

         self.dec = self.pipeline.get_by_name("dec")

         self.dec.set_property("max-threads", 8)

         self.pbRate = 1

         self.pipeline.set_state(gst.STATE_PLAYING)

         bus = self.pipeline.get_bus()
         bus.add_signal_watch()
         bus.connect("message", self.onMessage)


     def keyPress(self, widget, event):
         print "keypress event!!"
         key = gtk.gdk.keyval_name(event.keyval)
         if key == "d":
             self.increasePlaybackSpeed()
         else:
             self.decreasePlaybackSpeed()

     def increasePlaybackSpeed(self):
         print str(self.pbRate) + " --> " + str(self.pbRate + 4)
         self.pbRate += 4
         self.pipeline.send_event(gst.event_new_seek(
             self.pbRate, gst.FORMAT_TIME,
             gst.SEEK_FLAG_FLUSH, gst.SEEK_TYPE_NONE, 0,
             gst.SEEK_TYPE_NONE, 0))
         print "increased playback speed!!!"

     def decreasePlaybackSpeed(self):
         self.pbRate -= 2
         self.pipeline.send_event(gst.event_new_seek(
             self.pbRate, gst.FORMAT_TIME,
             gst.SEEK_FLAG_FLUSH, gst.SEEK_TYPE_NONE, 0,
             gst.SEEK_TYPE_NONE, 0))
         print "decreased playback speed!!!"

     def onMessage(self, bus, message):
         t = message.type
         if t == gst.MESSAGE_EOS:
             self.pipeline.set_state(gst.STATE_NULL)
         elif t == gst.MESSAGE_ERROR:
             self.pipeline.set_state(gst.STATE_NULL)
             err, debug = message.parse_error()
             print "Error: %s" % err, debug

gtk.gdk.threads_init()  # initialise GDK threading before building the GUI
GTK_Main()
gtk.main()
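One more thing I noticed while writing this up: pbRate starts at 1, a 'd' press adds 4 and any other key subtracts 2, so a few decreases drive the rate negative, i.e. a reverse-playback seek, which qtdemux/ffdec_h264 may well not handle. A tiny helper (hypothetical, not in the code above) that keeps the rate positive would avoid that:

```python
def step_rate(rate, delta, minimum=0.5):
    """Step the playback rate by delta, but never below `minimum`
    (a rate of 0 is invalid and negative rates mean reverse playback)."""
    return max(rate + delta, minimum)
```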



On 12/13/2012 11:18 AM, Juraj Holtak wrote:
>
> And maybe the xvimagesink should have sync=false set too... maybe...
>
> On Dec 13, 2012 12:15 PM, "Juraj Holtak" <juraj.holtak at gmail.com 
> <mailto:juraj.holtak at gmail.com>> wrote:
>
>     Hi,
>
>     Maybe worth a try:
>
>     Put a "queue" element before and after ffdec_h264 and use
>     max-threads=<your_cpu_count>. I imagine it worked like this for me,
>     but maybe I just had luck...
>
>     Juraj
>
>     On Dec 13, 2012 2:03 AM, "Peter Rennert" <p.rennert at cs.ucl.ac.uk
>     <mailto:p.rennert at cs.ucl.ac.uk>> wrote:
>
>         Dear all,
>
>         I want to increase the playback of a video quite drastically.
>         At the moment I am playing a test video sequence with
>
>         gst-launch-0.10 filesrc location=/path/to/my.mp4 ! qtdemux !
>         h264parse ! ffdec_h264 name=dec ! ffmpegcolorspace !
>         deinterlace ! xvimagesink
>
>         and at some point the video speed cannot be increased
>         further, because only one CPU core is used to decode the
>         video. GStreamer starts complaining about falling behind the
>         timestamps of the stream and crashes.
>
>         I could think of two solutions:
>             - Is there a "native" way of making use of more than just
>         a single core to decode the video frames? I tried to set
>         "max-threads" of ffdec_h264 to 4, but it still only uses a
>         single core.
>
>         or,
>             - Is there a way of skipping frames? As I only need an
>         "effective" framerate on the screen of about 25fps, I could
>         just decode the frames I need and skip the others. Then
>         I would not need more CPU power than for realtime playback.
>
>         or,
>             - Is there a way to split the stream after the qtdemux and
>         use several decoders in a kind of decoder pool (distributing the
>         frames between them) and unify the stream afterwards?
>
>         My videos are in H264 format. I am not sure if the
>         non-keyframes are encoded with respect to the previous frame
>         or with respect to the last keyframe. In the latter case I could
>         try to filter the keyframes, send them to every decoder in the
>         pool, and let each decoder decode the frames that use that
>         keyframe as reference until the next keyframe. With some
>         management I could then merge the output of the decoder pool
>         with an input-selector. But that sounds bulky :\ So if anyone
>         knows how to get to one of the first two solutions, I would be
>         very happy...
>
>
>         Cheers,
>
>         Peter
>
>         PS I am happy to provide an example video. Maybe something is
>         wrong with the encoding there that prevents ffdec_h264 from
>         using multiple cores.
>         _______________________________________________
>         gstreamer-devel mailing list
>         gstreamer-devel at lists.freedesktop.org
>         <mailto:gstreamer-devel at lists.freedesktop.org>
>         http://lists.freedesktop.org/mailman/listinfo/gstreamer-devel
>
>
>


