[Gstreamer-openmax] Discussion on the hardware accelerator solution in GstOpenMAX project.

Felipe Contreras felipe.contreras at nokia.com
Tue Aug 5 05:01:47 PDT 2008


Hi Victor,

On Mon, 2008-08-04 at 13:35 +0200, ext Victor Manuel Jáquez Leal wrote:
> Hi all,
> 
> >> I haven't read the NXP tunneling implementation in gstopenmax, but we
> >> once implemented something related: when the omx gst element is linked
> >> with another omx gst element, a tunnel is set up. But since no buffers
> >> traverse the pipeline, because the buffer communication is done
> >> beneath the omx layer, we had to push ghost buffers (empty buffers with
> >> calculated metadata), and those ghost buffers simulated the gst A/V
> >> sync; nevertheless, the real A/V sync was done by omx.
> >
> > If the A/V sync was done in the omx layer then why are the ghost buffers needed?
> 
> Last Friday I glanced at the tunneling-v3 branch on GitHub, and I could
> not grasp some details which I'm worried about:
> 
> 1) AFAIK: in order for a pipeline to change from prerolling to
> playing, the sink must receive at least one buffer. If you have an OMX
> sink in a tunnel with a previous element, the pipeline will never leave
> the prerolling state.

Well, it _does_ leave the pre-rolling state, I don't know why. I'll have
to investigate.
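For reference, the ghost-buffer trick you describe above could look roughly
like this in 0.10 terms (just a sketch I'm writing here, not actual
gst-openmax code; the function name is made up, and in practice you'd also
set caps on the buffer):

  #include <gst/gst.h>

  /* Push a zero-sized "ghost" buffer downstream so the GStreamer sink can
   * preroll and run its sync machinery, while the real data travels
   * through the OMX tunnel underneath. */
  static GstFlowReturn
  push_ghost_buffer (GstPad *srcpad,
                     GstClockTime timestamp,
                     GstClockTime duration)
  {
    GstBuffer *buf;

    buf = gst_buffer_new ();              /* no data, metadata only */
    GST_BUFFER_TIMESTAMP (buf) = timestamp;
    GST_BUFFER_DURATION (buf) = duration;

    /* The sink waits on this buffer to complete preroll, and later uses
     * the timestamps for clock waiting (A/V sync). */
    return gst_pad_push (srcpad, buf);
  }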

> 2) When the first gst buffer traverses its part of the pipeline, the
> stream negotiation is done among the linked elements. So, if no
> buffers traverse some portion of the pipeline because they are
> tunneled, those elements in the tunnel will never report their
> real configured caps to the GStreamer client application.

When the omx component issues a settings-changed event, the caps are
properly updated.

If it doesn't, then yeah, that might be an issue, although I don't think
applications really make use of such data.
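For reference, the caps update I mean looks roughly like this in the
element (only a sketch; the helper name is made up and the caps fields are
simplified, a real element would also set format, framerate, etc.):

  #include <string.h>
  #include <gst/gst.h>
  #include <OMX_Core.h>
  #include <OMX_Component.h>

  /* Sketch: called when the component reports OMX_EventPortSettingsChanged
   * on its output port; re-read the port definition and refresh the src
   * pad caps so the application sees the real configuration. */
  static void
  update_src_caps (GstPad *srcpad, OMX_HANDLETYPE omx, OMX_U32 port_index)
  {
    OMX_PARAM_PORTDEFINITIONTYPE def;
    GstCaps *caps;

    memset (&def, 0, sizeof (def));
    def.nSize = sizeof (def);
    def.nVersion.s.nVersionMajor = 1;
    def.nVersion.s.nVersionMinor = 1;
    def.nPortIndex = port_index;

    if (OMX_GetParameter (omx, OMX_IndexParamPortDefinition, &def) != OMX_ErrorNone)
      return;

    /* 0.10-style raw video caps built from the component's own values. */
    caps = gst_caps_new_simple ("video/x-raw-yuv",
        "width",  G_TYPE_INT, (gint) def.format.video.nFrameWidth,
        "height", G_TYPE_INT, (gint) def.format.video.nFrameHeight,
        NULL);

    gst_pad_set_caps (srcpad, caps);
    gst_caps_unref (caps);
  }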

> > The gst base sink requires buffers in order to do A/V sync. If the
> > sink doesn't receive the buffers, then it doesn't do the sync, but it
> > still works.
> 
> But, as I said, the pipeline won't leave the prerolling state in the
> case of a tunneled omx sink.

But it does! (not sure why).

> >> 1. it's easy to build and set up the omx pipelines given the caps
> >
> > What if you want a post-processing element in the middle? Or you want
> > an encoder+decoder (transcoder)?
> 
> Yes, the supersink solutions are not flexible, but they might provide
> an effective solution to a common use case.
> 
> > I don't think all the omx pipelines can be built based on the caps.
> 
> Not all of them, but if you have fixed hardware (as happens in the
> embedded world) you'll only need the input stream caps to build up the
> OMX pipeline to render the stream.

It's not that fixed... new DSP tasks can be loaded on the TI DSP for
example.
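Just so we're comparing the same thing: I read the caps-driven setup you
describe as roughly the table below (a made-up sketch, not code from any
branch; the role strings are the standard IL 1.1 ones, and a real mapping
would also have to look at caps fields like mpegversion):

  #include <string.h>
  #include <gst/gst.h>

  /* Sketch: pick a standard OpenMAX IL component role from the input
   * stream caps alone, which is enough only if the hardware pipeline
   * behind it is fixed. */
  static const char *
  role_for_caps (const GstCaps *caps)
  {
    GstStructure *s = gst_caps_get_structure (caps, 0);
    const char *name = gst_structure_get_name (s);

    if (strcmp (name, "video/x-h264") == 0)
      return "video_decoder.avc";
    if (strcmp (name, "video/mpeg") == 0)
      return "video_decoder.mpeg4";     /* simplified: ignores mpegversion */
    if (strcmp (name, "audio/mpeg") == 0)
      return "audio_decoder.mp3";       /* simplified: ignores mpegversion */

    return NULL;   /* stream we can't map to a fixed OMX pipeline */
  }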

> Anyway, this is not a real argument; it's just a workaround for a
> problem in the supersink concept.
> 
> >> 4. it's easy to manage the state among the omx components
> >
> > I'm not so sure about that. If a gst element is mapped to a single omx
> > component it's easier to see what's happening.
> 
> That's another issue which I couldn't figure out in the tunneling-v3 branch.
> 
> According to the spec (Page 122, Figure 3-10. State Transition to Idle
> in the Case of Tunneled Components), if you request a change to the Idle
> state on a component which is not a "buffer supplier", the CommandStateSet
> callback won't be triggered until the other component in the tunnel
> has changed to Idle as well.

Unless the ports have been disabled.

> AFAIK, each state change in gstopenmax is done sequentially:
> change_state, wait_for_state. This could cause problems when the
> component which is the buffer supplier in the tunnel is the later one
> in the chain: you'll get a deadlock waiting for the first
> component to change its state. And that could be the case, for example,
> with a tunneled video sink.

Again, only when the ports are enabled.

In any case, after a discussion with Frederik from NXP I decided to try
something different: now, for the special case of the Idle state,
gst-openmax doesn't wait for the state change to complete until the next
state transition:

http://github.com/felipec/gst-openmax/commit/3e4fc57893a876206d381d444c97f193614e7d51
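
In other words, the idea is to send the Idle command to both ends of the
tunnel before waiting on either of them, instead of doing change_state +
wait_for_state per component. Roughly (a sketch only; wait_for_state here
is a hypothetical stand-in for however the element waits for
OMX_EventCmdComplete):

  #include <OMX_Core.h>

  /* Hypothetical helper: block until the component reports it reached
   * the given state. */
  static void wait_for_state (OMX_HANDLETYPE handle, OMX_STATETYPE state);

  /* Sketch of the ordering that avoids the deadlock: request Idle on both
   * tunneled components first, wait afterwards. */
  static void
  tunnel_to_idle (OMX_HANDLETYPE supplier, OMX_HANDLETYPE non_supplier)
  {
    /* Neither call blocks; they just queue the command. */
    OMX_SendCommand (non_supplier, OMX_CommandStateSet, OMX_StateIdle, NULL);
    OMX_SendCommand (supplier, OMX_CommandStateSet, OMX_StateIdle, NULL);

    /* The non-supplier only completes the transition once the supplier
     * has allocated and passed the tunnel buffers, so waiting right after
     * the first OMX_SendCommand would deadlock. */
    wait_for_state (non_supplier, OMX_StateIdle);
    wait_for_state (supplier, OMX_StateIdle);
  }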

> In the case of a supersink that situation is easy to overcome.
> 
> > Actually I think that was the difficult part in the tunneling branch:
> > aligning the gst and omx components.
> 
> Yes, it is.
> 
> >> Maybe one day the interop profile won't be necessary, when the pBuffer
> >> in the buffer header loses its read-only property...
> >
> > Even in that case tunneling might help; there would be fewer memory
> > allocations for gst buffers.
> 
> Yes, that might be true. And there could also be a situation where the
> tunneled components never use general-purpose memory ;)

True.

> > I think you are basing your ideas on the assumption that, in order to
> > support A/V sync properly, gst should do it by receiving real buffers
> > in the element. Even if you have a video decoder sink that receives real gst
> > buffers in order to do A/V sync, the sync will be done _before_ the
> > decoding, so by the time the buffers reach the renderer some time
> > would have been spent, and the sync would be lost.
> >
> > In discussions with different parties I believe the consensus has been
> > that mapping the omx clock to a gst clock is the right way to go. In
> > this way you have all the flexibility of gst pipelines, omx
> > efficiency, and you have proper A/V sync.
> 
> You're right. When I tried to find a way to map those clocks I found
> some creepy problems in the OMX clock implementation, so I dropped it,
> but yes, I remember it was possible... in theory at least.
> 
> Nevertheless, in the supersink solution, the mapping won't be
> necessary, because the supersink will only attend to the (not-exposed)
> OMX clock :)

But again, if you have a video decoder + video sink in omx, and audio
sink in GStreamer, then the A/V sync will be done by GStreamer at the
supersink level. That means if the video decoding takes 1 second you
would have 1 second of video delay.
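
And to show what I mean by the clock mapping: the elements can expose a
GstClock whose internal time is read from the OMX clock component, so
everything in the GStreamer pipeline syncs against the same time base as
the tunneled components. A rough sketch of just the time query that such a
clock's get_internal_time vfunc would call (only a sketch; the helper is
made up, and IL timestamps are taken to be microseconds as in the 1.1 spec):

  #include <string.h>
  #include <gst/gst.h>
  #include <OMX_Core.h>
  #include <OMX_Other.h>

  /* Sketch: read the current media time from the OMX clock component and
   * convert it to a GstClockTime. */
  static GstClockTime
  omx_clock_get_media_time (OMX_HANDLETYPE clock_component)
  {
    OMX_TIME_CONFIG_TIMESTAMPTYPE stamp;

    memset (&stamp, 0, sizeof (stamp));
    stamp.nSize = sizeof (stamp);
    stamp.nVersion.s.nVersionMajor = 1;
    stamp.nVersion.s.nVersionMinor = 1;
    stamp.nPortIndex = OMX_ALL;

    if (OMX_GetConfig (clock_component, OMX_IndexConfigTimeCurrentMediaTime,
            &stamp) != OMX_ErrorNone)
      return GST_CLOCK_TIME_NONE;

    /* IL timestamps are in microseconds; GstClockTime is in nanoseconds. */
    return (GstClockTime) stamp.nTimestamp * 1000;
  }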

> > I'm a pragmatist, so I'm not saying that approach would work, but I
> > don't see any reason why it shouldn't, so I think we had better try it
> > first.
> 
> I agree.

Best regards.

-- 
Felipe Contreras
