[Mesa-dev] About merging pipe-video to master
Keith Whitwell
keithw at vmware.com
Tue Jul 12 03:31:49 PDT 2011
On Mon, 2011-07-11 at 18:24 +0200, Christian König wrote:
> Hi guys,
>
> as the subject already indicates: I'm about to merge pipe-video to
> master and just wanted to ask if anybody has still any objections?
>
> After following Jose and Younes' discussion on mesa-dev about how to
> design such an abstraction layer, I took another round of cleaning up the
> interface and moved some more parts into the state tracker.
>
> So the interface between the state tracker and drivers now only consists
> of the following:
>
> 1. Two additional functions for the screen object: get_video_param and
> is_video_format_supported. get_video_param queries a parameter for a
> specified codec (like the max width/height of a decoding target, which
> could be smaller than the texture max width/height), and
> is_video_format_supported checks whether a texture format is supported
> as a decoding target for a codec.
>
> 2. A create_video_decoder function in the pipe_context object, which
> creates a decoder object for a given codec. The decoder object in turn
> contains everything needed to decode a video stream of that codec and
> uses pipe_video_decode_buffer objects to hold the input data of a single
> frame.
>
> 3. A create_video_buffer function in the pipe_context object, which
> creates a video_buffer object to store a decoded video frame. This
> video_buffer object is then used both for rendering to the screen with
> the normal pipe_context functionality and as the input for reference
> frames fed back to the decoder.
>
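For illustration, the three interface points above could be declared roughly
as the sketch below shows.  The enum and struct names are assumptions made
only for this sketch, not the actual pipe-video headers:

/*
 * Rough sketch of the described entry points as gallium-style function
 * pointers.  Only pipe_format is a real gallium type here; the profile
 * and parameter enums are placeholders for illustration.
 */
#include "pipe/p_format.h"

struct pipe_screen;
struct pipe_context;
struct pipe_video_decoder;
struct pipe_video_buffer;

enum example_video_profile {        /* hypothetical codec/profile enum */
   EXAMPLE_PROFILE_MPEG2,
   EXAMPLE_PROFILE_H264
};

enum example_video_param {          /* hypothetical parameter enum */
   EXAMPLE_VIDEO_MAX_WIDTH,         /* may be smaller than the texture max */
   EXAMPLE_VIDEO_MAX_HEIGHT
};

/* 1. screen-level queries */
struct example_video_screen_funcs {
   int (*get_video_param)(struct pipe_screen *screen,
                          enum example_video_profile profile,
                          enum example_video_param param);

   int (*is_video_format_supported)(struct pipe_screen *screen,
                                    enum pipe_format format,
                                    enum example_video_profile profile);
};

/* 2. + 3. context-level object creation */
struct example_video_context_funcs {
   struct pipe_video_decoder *
      (*create_video_decoder)(struct pipe_context *pipe,
                              enum example_video_profile profile);

   struct pipe_video_buffer *
      (*create_video_buffer)(struct pipe_context *pipe,
                             enum pipe_format format,
                             unsigned width, unsigned height);
};
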
> The pipe_video_buffer object is there because I think hardware decoders
> need a special memory layout for the different planes of a yuv buffer.
> There is a standard implementation that just uses normal textures as the
> different planes of a yuv buffer; a driver can use it when there is no
> need for a special memory layout or when the driver just uses
> shader-based decoding.
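As a rough sketch of what such a standard implementation could hold (all
names below are illustrative assumptions, not the actual code), one ordinary
single-component texture per plane is enough:

/* Hypothetical per-plane layout: each YUV plane is backed by a normal
 * texture, so shader-based decoding and ordinary sampling work without
 * any special memory layout. */
struct example_video_buffer {
   struct pipe_resource      *planes[3];         /* Y, U and V textures */
   struct pipe_sampler_view  *sampler_views[3];  /* per-plane views for shaders */
   unsigned                   width, height;     /* luma size; chroma planes are subsampled */
};
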
>
> The other option would be adding a PIPE_BIND_VIDEO_BUFFER flag to the
> resource creation functions, but I don't want to repeat functionality in
> the different drivers, and as far as I can see the current resource
> functions (samplers/surfaces) can't be used to create a surface for just
> one plane/component of a yuv buffer. We could still clean this up to use
> the standard resource functions if the need arises.
I'm a bit unsure what the best approach is here, though at this
stage I'm happy with your approach and don't think it needs to be
changed before any merge.
But speaking in general terms, individual planes map well onto 8-bit
single-component texture images (L8 or similar), and any hardware
requirements (pitch, memory pool, etc.) for an individual plane could be
specified with a PIPE_BIND_VIDEO_BUFFER flag.
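As a hedged sketch of that idea, assuming the proposed flag existed and
following the usual resource-template pattern, creating the luma plane of a
frame could look roughly like this:

/* PIPE_BIND_VIDEO_BUFFER is only the flag proposed in this thread; the
 * placeholder definition below exists just to keep the sketch
 * self-contained. */
#include <string.h>
#include "pipe/p_state.h"
#include "pipe/p_screen.h"

#ifndef PIPE_BIND_VIDEO_BUFFER
#define PIPE_BIND_VIDEO_BUFFER (1 << 20)   /* placeholder, not a real flag yet */
#endif

static struct pipe_resource *
example_create_luma_plane(struct pipe_screen *screen,
                          unsigned width, unsigned height)
{
   struct pipe_resource templ;

   memset(&templ, 0, sizeof(templ));
   templ.target = PIPE_TEXTURE_2D;
   templ.format = PIPE_FORMAT_L8_UNORM;   /* one 8-bit component per texel */
   templ.width0 = width;
   templ.height0 = height;
   templ.depth0 = 1;
   templ.bind   = PIPE_BIND_SAMPLER_VIEW | PIPE_BIND_VIDEO_BUFFER;

   return screen->resource_create(screen, &templ);
}
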
However, it's also easy to imagine hardware having special requirements
about the positioning of the planes relative to one another, similar to
how mipmaps must be laid out in hardware-specific ways.
If we did decide to get rid of video_buffers and integrate the concept
with pipe_resources, it seems like there would need to be a way to
specify this at resource creation - either a planar YUV format, or some
other marking on the resource.
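Purely as a speculative sketch of those two options, neither the format
value nor the flags bit below exists; both are made-up placeholders:

/* Two hypothetical ways the planar-video information could be attached to
 * a pipe_resource at creation time. */
#include "pipe/p_state.h"

#define EXAMPLE_FORMAT_YUV420_PLANAR  ((enum pipe_format) 0x1000)  /* made up */
#define EXAMPLE_RESOURCE_FLAG_VIDEO   (1u << 16)                   /* made up */

static void
example_mark_planar_video(struct pipe_resource *templ, int use_planar_format)
{
   if (use_planar_format) {
      /* option a: a planar YUV format, the driver decides the plane layout */
      templ->format = EXAMPLE_FORMAT_YUV420_PLANAR;
   } else {
      /* option b: some other marking on the resource, e.g. a flags bit */
      templ->flags |= EXAMPLE_RESOURCE_FLAG_VIDEO;
   }
}
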
I don't have easy answers for that, and in the meantime I don't think
it's important enough to hold up pipe-video, which now looks like a
good step forward.
Keith