[Gstreamer-openmax] Discussion on the hardware accelerator solution in GstOpenMAX project.
felipe.contreras at gmail.com
Sun Aug 3 02:56:32 PDT 2008
On Fri, Aug 1, 2008 at 1:38 AM, Victor Manuel Jáquez Leal
<ceyusa at gmail.com> wrote:
> Even though I haven't worked on these things for a while, I used to,
> and I have some thoughts about them.
> I haven't read the NXP tunneling implementation in gstopenmax, but we
> once implemented something related: when an omx gst element is linked
> with another omx gst element, a tunnel is set up. But since no buffers
> traverse the pipeline (the buffer communication happens beneath the
> omx layer), we had to push ghost buffers (empty buffers with
> calculated metadata). Those ghost buffers simulated the gst a/v
> sync; nevertheless, the real a/v sync was done by omx.
If the A/V sync was done in the omx layer then why are the ghost buffers needed?
The gst base sink requires buffers in order to do A/V sync; if the
sink doesn't receive any buffers, it simply can't do the sync.
> That solution is not sound: it might work in some cases, but we
> couldn't assure it for every case.
> We dismissed the super sinks from the beginning of the development
> because, as you mentioned, it is not a flexible solution.
> But we have a trade-off: the first solution is not concordant with
> the gstreamer philosophy, and the second is not concordant with the
> omx philosophy, because of semantic overlapping, as in the buffer
> communication assumptions and the state management among the components.
> Nowadays I'm more convinced that a supersink could be the best
> solution to integrate gst and omx in the A/V playback use case:
> 1. it's easy to build and setup the omx pipelines given the caps
What if you want a post-processing element in the middle? Or you want
an encoder+decoder (transcoder)?
I don't think all the omx pipelines can be built based on the caps.
> 2. it's easy to control the sync
Not quite; I'll explain at the end.
> 3. it's easy to add gst interfaces such as volume, contrast, etc.
> 4. it's easy to manage the state among the omx components
I'm not so sure about that. If a gst element is mapped to a single omx
component it's easier to see what's happening.
Actually, I think that was the difficult part in the tunneling branch:
aligning the gst and omx components.
> 5. no dirty hacks as ghostbuffers
> 6. afaik the supersink elements can be autoplugged by playbin2
> I admit the supersinks break the flexibility offered by gst, but as
> far as I can foresee, it is the straightforward strategy to obtain
> the performance promised by omx in its interop profile.
Or as we are doing in the tunneling branch.
> Maybe one day the interop profile won't be necessary, when the pBuffer
> in the bufferheader loses its read-only property...
Even in that case tunneling might help; there would be fewer memory
allocations for gst buffers.
I think you are basing your ideas on the assumption that, to support
A/V sync properly, gst should do it by receiving real buffers in the
element. Even if you have a video decoder sink that receives real gst
buffers in order to do A/V sync, the sync will be done _before_ the
decoding, so by the time the buffers reach the renderer some time
would have been spent, and the sync would be lost.
In discussions with different parties I believe the consensus has been
that mapping the omx clock to a gst clock is the right way to go. In
this way you have all the flexibility of gst pipelines, omx
efficiency, and you have proper A/V sync.
I'm a pragmatist, so I'm not saying that approach is guaranteed to
work, but I don't see any reason why it shouldn't, so I think we'd
better try it.