[Mesa-dev] reworking pipe_video_decoder / pipe_video_buffer

Younes Manton younes.m at gmail.com
Tue Nov 22 13:00:38 PST 2011


2011/11/21 Christian König <deathsimple at vodafone.de>:
> On 16.11.2011 15:38, Maarten Lankhorst wrote:
>> If the decode_bitstream interface is changed to get all bitstream buffers
>> at the same time, there wouldn't be any overhead to doing it like this.
>> The picture parameters are supposed to stay constant for a single picture,
>> so for vdpau the sane way would be: set the picture parameters for the
>> hardware (which includes EVERYTHING), write all the bitstream buffers to a
>> hardware bo, and wait until the magic is done. Afaict there isn't even a
>> sane way to submit only partial buffers, so it's just a bunch of overhead
>> for me.
>>
>> nvidia doesn't support va-api; it handles the entire process from picture
>> parameters to a decoded buffer internally, so it always converts the
>> picture parameters into something the hardware can understand, every
>> frame.
>
> I'm not arguing against removing the scattered calls to decode_bitstream. I
> just don't want to lose information while passing the parameters from the
> state tracker down to the driver. But we can also add this information as a
> flag to the function later, so on second thought that seems to be ok.
>
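For concreteness, the rework under discussion would roughly mean a hook
along these lines (just a sketch; the names and exact signature are
illustrative, not a settled interface):

/* Sketch only: a batched decode_bitstream hook that receives every
 * bitstream buffer making up one picture in a single call, instead
 * of one call per buffer. Not the final Gallium interface. */
struct pipe_video_decoder {
   /* ... */
   void (*decode_bitstream)(struct pipe_video_decoder *decoder,
                            struct pipe_video_buffer *target,
                            struct pipe_picture_desc *picture,
                            unsigned num_buffers,
                            const void * const *buffers,
                            const unsigned *sizes);
   /* ... */
};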

I don't have a comment on the rest, but on the question of how the
bitstream buffers arrive, let me point out that it's valid for a VDPAU
client to pass you, for a bitstream of N bytes:

* 1 buffer of size N
* N buffers of size 1
* any combination in between

The only thing you're assured of, as far as I can tell, is that you
have a complete picture across all the buffers. So, having the state
tracker pass one buffer at a time to the driver can get ugly for
everyone if a client decides to chop up the picture bitstream in an
arbitrary manner.
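
To cope with that, the state tracker (or the driver) pretty much has to
coalesce whatever the client hands it before touching the hardware. A
minimal sketch, assuming the real VdpBitstreamBuffer layout from vdpau.h
(the helper itself is hypothetical):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <vdpau/vdpau.h>

/* Flatten however many bitstream buffers the client passed into one
 * contiguous allocation; returns NULL on allocation failure. */
static void *
coalesce_bitstream(uint32_t count, const VdpBitstreamBuffer *bufs,
                   uint32_t *total_bytes)
{
   uint32_t i, size = 0, offset = 0;
   uint8_t *data;

   for (i = 0; i < count; ++i)
      size += bufs[i].bitstream_bytes;

   data = malloc(size);
   if (!data)
      return NULL;

   /* 1 buffer of size N, N buffers of size 1, or anything in
    * between -- just copy them back to back. */
   for (i = 0; i < count; ++i) {
      memcpy(data + offset, bufs[i].bitstream, bufs[i].bitstream_bytes);
      offset += bufs[i].bitstream_bytes;
   }

   *total_bytes = size;
   return data;
}

That way the driver sees one contiguous picture bitstream no matter how
the client chopped it up.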

