[Liboil] GPU-assisted Dirac (de)compression
lists at whitehouse.org.nz
Sat Jun 7 00:03:48 PDT 2008
My key question is this:
What is the optimal way to utilise the GPU to improve
compression/decompression of Dirac files?
I understand that Schrodinger can use CUDA to utilise the GPU on
NVidia 8800s. What I am trying to determine is the best overall
approach to take in order to generally improve Dirac performance on
all graphics cards (or at least all graphics cards with a Free driver).
As far as I can see, the best approach in a perfect world would be:
1) Add Dirac support to VAAPI;
2) Have all media applications (e.g. GStreamer) support VAAPI;
3) Add Dirac VAAPI support to Gallium3D softpipe; and
4) Add support for the Dirac VAAPI functions to the various Gallium3D
hardware drivers (presumably written in shader language, in order to
support as much hardware as possible).
It seems that this GSoC project would go a long way towards sorting
this out, but nothing anywhere mentions Dirac.
I'm also interested in whether any of this will help encoding (it does
not sound like VAAPI currently intends to support encoding).
Can anybody shed some light on this issue?
Some related website references:
Gallium3D frontend for video decoding
GSOC '08 hardware accelerated video decoding
Add a Nouveau gallium backend
GPU-Accelerated Dirac Video Codec (CUDA)
Enable PureVideo under Linux (MPEG-4 / H.264 XvMC)
Generic GPU-Accelerated Video Decoding
FSF Associate Member: 5632
More information about the Liboil mailing list