[Liboil] [Schrodinger-devel] GPU-assisted Dirac (de)compression

Stefan de Konink skinkie at xs4all.nl
Mon Jun 9 09:21:31 PDT 2008


Tim Borer wrote:
> At 16:24 07/06/2008, Younes Manton wrote:
>> Encoding doesn't make sense on a GPU to be honest with you. The
>> optimal end-point for GPU processing is the screen, so decoding fits
>> perfectly, but for encoding we have to make a round trip from CPU to
>> GPU and back to CPU. I think that would offset most/all speed gains
>> you might get from having the GPU do the encoding. No clue about CUDA,
>> but I'm sure it doesn't get around the fundamental problem that the
>> round trip is sub-optimal. Dedicated encoding hardware is a different
>> story, but I get the impression that VAAPI is only intended for
>> decoding.
> 
> Actually I suspect that encoding on a GPU does make some sense. True, 
> you have to get the data on and off, which is an overhead. However, 
> encoding is much more compute-intensive than decoding, so the overhead 
> of moving data on and off is proportionately smaller. The GPU is also 
> well suited to some motion estimation schemes, because it was designed 
> with motion compensation in mind.
> 
> So don't abandon the idea of encoding on the GPU. On the other hand, 
> decoding is probably where the action is at the moment.
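
To make Tim's motion estimation point concrete: block matching boils down 
to computing a sum of absolute differences (SAD) for many candidate 
vectors in parallel, which is exactly the shape of workload a GPU handles 
well. A rough, untested CUDA sketch of the idea; every name in it is 
hypothetical, and cudaMallocManaged is only there to keep it short 
(explicit cudaMalloc plus cudaMemcpy works the same way):

/* Illustrative SAD kernel: one thread block evaluates one candidate
 * motion vector for a single 16x16 macroblock. */
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

#define BLK 16          /* macroblock size */
#define W   640
#define H   480

__global__ void sad_16x16(const unsigned char *cur, const unsigned char *ref,
                          const int2 *cand, unsigned int *sad_out)
{
    __shared__ unsigned int partial[BLK * BLK];

    /* Fixed macroblock at (64,64) to keep the sketch small; a real
     * encoder would index macroblocks via blockIdx.y. Candidates are
     * assumed pre-clamped so every read stays inside the frame. */
    int2 mv = cand[blockIdx.x];
    int x = threadIdx.x, y = threadIdx.y;
    int c = cur[(64 + y) * W + (64 + x)];
    int r = ref[(64 + mv.y + y) * W + (64 + mv.x + x)];
    partial[y * BLK + x] = abs(c - r);
    __syncthreads();

    /* Tree reduction over the 256 per-pixel differences. */
    for (int s = (BLK * BLK) / 2; s > 0; s >>= 1) {
        int i = y * BLK + x;
        if (i < s)
            partial[i] += partial[i + s];
        __syncthreads();
    }
    if (x == 0 && y == 0)
        sad_out[blockIdx.x] = partial[0];
}

int main(void)
{
    const int ncand = 9;      /* tiny +-1 pixel search window */
    unsigned char *cur, *ref;
    int2 *cand;
    unsigned int *sad;

    cudaMallocManaged(&cur, W * H);
    cudaMallocManaged(&ref, W * H);
    cudaMallocManaged(&cand, ncand * sizeof(int2));
    cudaMallocManaged(&sad, ncand * sizeof(unsigned int));

    for (int i = 0; i < W * H; i++) {
        cur[i] = rand() & 0xff;
        ref[i] = rand() & 0xff;
    }
    for (int i = 0; i < ncand; i++)
        cand[i] = make_int2(i % 3 - 1, i / 3 - 1);

    /* One block per candidate vector, one thread per pixel. */
    sad_16x16<<<ncand, dim3(BLK, BLK)>>>(cur, ref, cand, sad);
    cudaDeviceSynchronize();

    int best = 0;
    for (int i = 1; i < ncand; i++)
        if (sad[i] < sad[best])
            best = i;
    printf("best vector (%d,%d), SAD %u\n",
           cand[best].x, cand[best].y, sad[best]);
    return 0;
}

Each thread block scores one candidate and the host just picks the 
minimum; a real search would evaluate thousands of candidates across all 
macroblocks at once, which is where the parallelism pays off.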

The ideal situation would of course be to have the last step done on the 
GPU. That way, less data would need to be sent to the GPU in the first 
place.

But I wonder whether something like CUDA can display its output directly, 
or whether the data needs to go back to the processor first.
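
From what I can tell, NVIDIA's CUDA/OpenGL interop header 
(cuda_gl_interop.h) does let a kernel write into an OpenGL buffer object 
that is then drawn without any copy back to the host. A completely 
untested sketch of that path; the last_decode_step kernel is a made-up 
stand-in for a real final decoding stage:

/* Sketch: run the final decoding stage into an OpenGL pixel buffer
 * object and draw it, so the frame never returns to the CPU. The
 * kernel just paints a gradient in place of real work (e.g. Dirac's
 * motion compensation). */
#include <GL/glew.h>
#include <GL/glut.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

#define W 640
#define H 480

static GLuint pbo;  /* pixel buffer object shared between GL and CUDA */

__global__ void last_decode_step(uchar4 *frame)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < W && y < H)
        frame[y * W + x] = make_uchar4(x & 0xff, y & 0xff, 128, 255);
}

static void display(void)
{
    uchar4 *frame;

    /* Map the GL buffer into CUDA's address space, write the frame
     * there, unmap. No cudaMemcpy back to the host anywhere. */
    cudaGLMapBufferObject((void **)&frame, pbo);
    dim3 block(16, 16), grid((W + 15) / 16, (H + 15) / 16);
    last_decode_step<<<grid, block>>>(frame);
    cudaGLUnmapBufferObject(pbo);

    /* Draw straight from GPU memory. */
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glDrawPixels(W, H, GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
    glutInitWindowSize(W, H);
    glutCreateWindow("CUDA -> OpenGL, no CPU round trip");
    glewInit();

    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, W * H * 4, 0, GL_DYNAMIC_DRAW);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    cudaGLRegisterBufferObject(pbo);

    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}

If that works, the compressed bitstream would be the only thing crossing 
the bus, and the decoded frame would live and die in video memory.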


Stefan

