[gst-devel] totem and osssink? (long)

Mathrick mnews2 at wp.pl
Fri Mar 12 15:07:00 CET 2004


On Fri, 12-03-2004 at 19:31, Thomas Vander Stichele wrote:
> > I may be wrong, but it looks like that would break badly when using
> > v4lsrc with a TV tuner, for example, or more generally with any live
> > data feed.
> 
> Not necessarily; it all depends on how you declare it should work and
> how the concept of "ready buffer on the link" is used.  For example, in
> the v4lsrc case, it would be simple to just have the v4lsrc element
> continuously read and drop the buffer on the link in the PAUSED state.
> It wouldn't really push it out yet, it would just place it on the link.

I'm not sure I follow you correctly, but I thought you wanted the entire
pipeline to be ready to start playing immediately? That would require
the next elements in the pipeline to actually receive that buffer,
right? It wasn't v4lsrc itself I was concerned about; it was precisely
the rest of the pipeline (especially encoders, which are very sensitive
to being fed bogus data we're not really going to use) that won't be
able to keep up.
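
For concreteness, this is roughly what the application side of such a
pipeline looks like when driven through the generic GStreamer C API.
It's only a sketch of the idea: the element names (v4l2src,
videoconvert, autovideosink) and the pre-roll-on-PAUSED semantics here
come from later GStreamer releases, not the 0.8 behaviour we're
actually debating.

/* Sketch: an app toggling a capture pipeline between PAUSED and PLAYING.
 * Element names and the pre-roll-on-PAUSED behaviour follow later
 * GStreamer releases, not necessarily the 0.8 semantics in this thread. */
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GstElement *pipeline;
  GError *error = NULL;

  gst_init (&argc, &argv);

  pipeline = gst_parse_launch ("v4l2src ! videoconvert ! autovideosink",
                               &error);
  if (pipeline == NULL) {
    g_printerr ("Failed to build pipeline: %s\n", error->message);
    g_clear_error (&error);
    return 1;
  }

  /* Going to PAUSED first gives the source a chance to open the device
   * and get a buffer ready, so the PLAYING transition is immediate. */
  gst_element_set_state (pipeline, GST_STATE_PAUSED);
  gst_element_get_state (pipeline, NULL, NULL, GST_CLOCK_TIME_NONE);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_usleep (5 * G_USEC_PER_SEC);   /* capture and display for 5 seconds */

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}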

> Same for live streaming from the internet; what you want is for playback
> to start again immediately when going from PAUSED back to PLAYING.  This
> can be done if the streaming source buffers internally and then does
> the same thing as in the v4lsrc case.  Current apps doing stream
> playback probably have to rebuffer completely to recover from
> PAUSED->PLAYING; with this approach, since in PAUSED they are still
> processing incoming data, just not sending it on, they can keep the
> prebuffer filled and respond to PLAYING immediately.

Sure, it would be nice to have the prebuffer filled at all times. OTOH,
I don't think people would be very happy if Totem suddenly started using
70% CPU when paused, and in the general case we cannot guarantee that
keeping the data processing alive is cheap.
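
To make that concrete, here is a minimal sketch of the "keep reading in
PAUSED, only push in PLAYING" idea in plain C, with hypothetical
read_from_network()/push_downstream() stubs standing in for real element
code. The unconditional read in every iteration is exactly where the
extra CPU and bandwidth cost would come from while paused.

/* Sketch of "keep the prebuffer filled while paused", in plain C rather
 * than real GStreamer element code.  read_from_network() and
 * push_downstream() are hypothetical stubs, not part of any real API. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define PREBUF_SIZE 4096

static unsigned char prebuf[PREBUF_SIZE];
static size_t prebuf_fill = 0;

/* Hypothetical stand-in: pretend some bytes arrived from the network. */
static size_t
read_from_network (unsigned char *dst, size_t max)
{
  size_t n = max < 512 ? max : 512;
  memset (dst, 0xAB, n);
  return n;
}

/* Hypothetical stand-in: hand data to the rest of the pipeline. */
static void
push_downstream (const unsigned char *src, size_t len)
{
  (void) src;
  printf ("pushed %zu bytes downstream\n", len);
}

/* One iteration of the source's loop.  The point: reading never stops,
 * so the prebuffer stays full in PAUSED; pushing only happens in PLAYING. */
static void
stream_iteration (bool playing)
{
  if (prebuf_fill < PREBUF_SIZE)
    prebuf_fill += read_from_network (prebuf + prebuf_fill,
                                      PREBUF_SIZE - prebuf_fill);

  if (playing && prebuf_fill > 0) {
    push_downstream (prebuf, prebuf_fill);
    prebuf_fill = 0;
  }
}

int
main (void)
{
  stream_iteration (false);   /* paused: buffer fills, nothing pushed */
  stream_iteration (false);   /* paused: buffer stays topped up */
  stream_iteration (true);    /* playing: buffered data goes out at once */
  return 0;
}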

Maciej

-- 
"Tautologizm to coś tautologicznego"
   Maciej Katafiasz <mnews2 at wp.pl>
       http://mathrick.blog.pl




