Gstreamer support for GPU codecs
Nicolas Dufresne
nicolas.dufresne at gmail.com
Tue May 17 18:13:48 UTC 2016
On Saturday, May 14, 2016 at 10:41 +0300, Sebastian Dröge wrote:
> On Fri, 2016-05-13 at 08:34 -0400, Aaron Boxer wrote:
> >
> > I have a question about how the streaming architecture works with
> > GPU acceleration.
> >
> > Since discrete cards sit on the PCI bus, the best performance is
> > achieved when data is moved to the card in a pipelined fashion and
> > the host is notified once the data has been processed and copied
> > back to the host.
> >
> > Also, it is sometimes more efficient to process N frames at a time.
> >
> > So, for best perf, the flow would be:
> >
> > A) the host keeps a list of N host-side memory buffers
> > B) the host waits for a host buffer to become available
> > C) when a buffer is available, the host copies the frame into it and
> >    queues the buffer to be copied over to the card
> > D) when N buffers have been processed and copied back to the host,
> >    the host receives an event
> > E) the host can use the processed buffers; when it is finished with a
> >    buffer, that buffer becomes available for another frame
> >
> > Would this workflow work with GStreamer?
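For what it is worth, steps A) to E) map quite naturally onto a
GstBufferPool: the pool owns the fixed set of N host buffers, acquiring
a buffer blocks until one is free again, and a buffer returns to the
pool automatically when the last downstream reference is dropped. A
minimal sketch, where the buffer count, frame size and caps are only
placeholder assumptions:

#include <gst/gst.h>
#include <string.h>

#define N_BUFFERS 4                        /* assumption: N host-side buffers */
#define FRAME_SIZE (1920 * 1080 * 3 / 2)   /* assumption: one NV12 1080p frame */

static GstBufferPool *
create_host_pool (GstCaps *caps)
{
  GstBufferPool *pool = gst_buffer_pool_new ();
  GstStructure *config = gst_buffer_pool_get_config (pool);

  /* A) keep a fixed list of N buffers: min == max == N_BUFFERS */
  gst_buffer_pool_config_set_params (config, caps, FRAME_SIZE,
      N_BUFFERS, N_BUFFERS);
  gst_buffer_pool_set_config (pool, config);
  gst_buffer_pool_set_active (pool, TRUE);

  return pool;
}

static GstFlowReturn
push_frame (GstBufferPool *pool, GstPad *srcpad, const guint8 *frame)
{
  GstBuffer *buf;
  GstMapInfo map;
  GstFlowReturn ret;

  /* B) blocks until one of the N buffers is free again */
  ret = gst_buffer_pool_acquire_buffer (pool, &buf, NULL);
  if (ret != GST_FLOW_OK)
    return ret;

  /* C) copy the frame into the pooled buffer */
  gst_buffer_map (buf, &map, GST_MAP_WRITE);
  memcpy (map.data, frame, FRAME_SIZE);
  gst_buffer_unmap (buf, &map);

  /* D/E) push downstream; once the GPU element is done and drops its
   * last reference, the buffer goes back to the pool by itself */
  return gst_pad_push (srcpad, buf);
}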
> Yes, you just need to ensure that latency is reported accordingly by
> your element.
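To make that concrete: an element that batches N frames on the GPU
would typically answer the LATENCY query on its source pad along these
lines. The batch size and frame rate below are assumptions for the
sketch, and the function would be installed with
gst_pad_set_query_function():

#include <gst/gst.h>

#define N_FRAMES 4                        /* assumption: frames batched on the GPU */
#define FRAME_DURATION (GST_SECOND / 30)  /* assumption: 30 fps input */

static gboolean
gpu_codec_src_query (GstPad *pad, GstObject *parent, GstQuery *query)
{
  switch (GST_QUERY_TYPE (query)) {
    case GST_QUERY_LATENCY: {
      gboolean live;
      GstClockTime min, max;

      /* First let the default handler collect the upstream latency */
      if (!gst_pad_query_default (pad, parent, query))
        return FALSE;

      gst_query_parse_latency (query, &live, &min, &max);

      /* Then add the delay introduced by queueing N frames on the card */
      min += N_FRAMES * FRAME_DURATION;
      if (max != GST_CLOCK_TIME_NONE)
        max += N_FRAMES * FRAME_DURATION;

      gst_query_set_latency (query, live, min, max);
      return TRUE;
    }
    default:
      return gst_pad_query_default (pad, parent, query);
  }
}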
Note that this is arguably not the most efficient way. Ideally, you
would implement a V4L2 mem-to-mem driver for your card. The videobuf2
and v4l2_mem2mem frameworks already provide an appropriate queueing
mechanism and an efficient memory allocation model, and such drivers
are already supported by GStreamer.
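With such a driver in place there is no card-specific GStreamer code to
write: the generic video4linux2 plugin probes the m2m device and
registers an element for it. A rough sketch of driving it from an
application follows; the element name v4l2h264dec and the input file
are assumptions here, since the actual name depends on the codecs your
driver advertises (see gst-inspect-1.0 video4linux2):

#include <gst/gst.h>

int
main (int argc, char **argv)
{
  GstElement *pipeline;
  GstMessage *msg;
  GError *error = NULL;

  gst_init (&argc, &argv);

  /* Decode an H.264 stream through the V4L2 mem-to-mem decoder element
   * that the video4linux2 plugin registered for the driver */
  pipeline = gst_parse_launch (
      "filesrc location=sample.h264 ! h264parse ! v4l2h264dec ! "
      "videoconvert ! autovideosink", &error);
  if (pipeline == NULL) {
    g_printerr ("Failed to build pipeline: %s\n", error->message);
    g_clear_error (&error);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Block until the stream ends or an error is posted on the bus */
  msg = gst_bus_timed_pop_filtered (GST_ELEMENT_BUS (pipeline),
      GST_CLOCK_TIME_NONE, GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
  if (msg != NULL)
    gst_message_unref (msg);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);

  return 0;
}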
Nicolas