[Mesa-dev] About merging pipe-video to master

Jose Fonseca jfonseca at vmware.com
Tue Jul 12 08:27:06 PDT 2011



----- Original Message -----
> Hi guys,
> 
> as the subject already indicates: I'm about to merge pipe-video to
> master and just wanted to ask if anybody still has any objections?
> 
> After following Jose's and Younes' discussion on mesa-dev about how
> to design such an abstraction layer, I took another round of cleaning
> up the interface and moved some more parts into the state tracker.
> 
> So the interface between the state tracker and drivers now consists
> of only the following:
> 
> 1. Two additional functions on the screen object: get_video_param and
> is_video_format_supported. get_video_param queries a parameter for a
> specified codec (like the maximum width/height of a decoding target,
> which can be smaller than the maximum texture width/height), and
> is_video_format_supported checks whether a texture format is supported
> as a decoding target for a codec. (A rough sketch of all four hooks
> follows after point 3.)
> 
> 2. A create_video_decoder function in the pipe_context object, which
> creates a decoder object for a given codec. The decoder object in turn
> includes everything needed to decode a video stream of that codec and
> uses pipe_video_decode_buffer objects to hold the input data of a
> single frame of that codec.
> 
> 3. A create_video_buffer function in the pipe_context object, which
> creates a video_buffer object to store a decoded video frame. This
> video_buffer object is then used both for rendering to the screen with
> the normal pipe_context functionality and for feeding reference frames
> back to the decoder.
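> 
> To make this a bit more concrete, here is a rough sketch of the four
> hooks as I picture them; the exact signatures and enum names are
> simplified here, so take this as an illustration rather than the
> final header:
> 
>     /* illustrative sketch, not the literal p_screen.h/p_context.h */
> 
>     /* on pipe_screen */
>     int (*get_video_param)(struct pipe_screen *screen,
>                            enum pipe_video_profile profile,
>                            enum pipe_video_cap param);
> 
>     boolean (*is_video_format_supported)(struct pipe_screen *screen,
>                                          enum pipe_format format,
>                                          enum pipe_video_profile profile);
> 
>     /* on pipe_context */
>     struct pipe_video_decoder *
>     (*create_video_decoder)(struct pipe_context *pipe,
>                             enum pipe_video_profile profile,
>                             enum pipe_video_entrypoint entrypoint,
>                             enum pipe_video_chroma_format chroma_format,
>                             unsigned width, unsigned height);
> 
>     struct pipe_video_buffer *
>     (*create_video_buffer)(struct pipe_context *pipe,
>                            enum pipe_format buffer_format,
>                            enum pipe_video_chroma_format chroma_format,
>                            unsigned width, unsigned height);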
> 
> The pipe_video_buffer object is there because I think hardware
> decoders need a special memory layout for the different planes of a
> YUV buffer. There is a standard implementation that just uses normal
> textures as the different planes of the YUV buffer, which a driver can
> use when there is no need for a special memory layout or when the
> driver just uses shader-based decoding.
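> 
> As an illustration, the buffer object basically just hands out
> per-plane views of whatever layout the driver picked, roughly along
> these lines (member names are a sketch, not the final header):
> 
>     struct pipe_video_buffer {
>        struct pipe_context *context;
>        enum pipe_format buffer_format;
>        unsigned width, height;
> 
>        void (*destroy)(struct pipe_video_buffer *buffer);
> 
>        /* per-plane sampler views (e.g. Y, U, V) for rendering */
>        struct pipe_sampler_view **
>        (*get_sampler_view_planes)(struct pipe_video_buffer *buffer);
> 
>        /* per-plane surfaces for the decoder to write its output into */
>        struct pipe_surface **
>        (*get_surfaces)(struct pipe_video_buffer *buffer);
>     };
> 
> The standard implementation can back each plane with an ordinary
> texture, while a hardware decoder can keep its own layout hidden
> behind the same interface.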
> 
> The other option would be adding a PIPE_BIND_VIDEO_BUFFER flag to the
> resource creation functions, but I don't want to repeat functionality
> in the different drivers, and as far as I can see the current resource
> functions (samplers/surfaces) can't be used to create a surface for
> just one plane/component of a YUV buffer. We could still clean this up
> to use the standard resource functions if the need arises.
> 
> Everything else, especially the vl_compositor functionality, is now
> part of the state tracker instead of the driver. The interface was
> designed with two things in mind: first, it should abstract the
> functionality of hardware video decoding from the state tracker, and
> second, it should make it possible to implement a wide variety of
> video decoding APIs on top of it. For the second part I checked that
> it's at least possible to do XvMC, VDPAU, VAAPI and DXVA with it.
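> 
> To give an idea of how a state tracker maps onto this, a VDPAU-style
> implementation could essentially just wrap the two objects (the
> wrapper names below are made up purely for illustration):
> 
>     /* hypothetical state tracker side wrappers */
>     struct vdp_decoder {
>        struct pipe_video_decoder *decoder;  /* from create_video_decoder */
>     };
> 
>     struct vdp_surface {
>        struct pipe_video_buffer *buffer;    /* from create_video_buffer */
>     };
> 
> Decoding then feeds the bitstream through pipe_video_decode_buffer
> objects into the decoder, with the surface's pipe_video_buffer acting
> as the decode target and later as the reference frame input.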
> 
> So what do you guys think about it?

I didn't have time to look at it in detail, and I'm not sure if I'll have time tomorrow either, but it sounds good from your description, and we can always clean up loose ends later, so I have no objection to proceeding.

One final request: please make sure that any new source files in src/gallium/drivers/ and src/gallium/auxiliary are listed in SConscript, to prevent breaking our automated builds.

Jose

