[Mesa-dev] About merging pipe-video to master

Johannes Obermayr johannesobermayr at gmx.de
Mon Jul 11 15:10:47 PDT 2011

Christian König wrote:
>Hi guys,
>as the subject already indicates: I'm about to merge pipe-video to
>master and just wanted to ask if anybody still has any objections?

Yes [I keep watch over compile/build errors (I know I'm an old grouch, but code quality rules)]:


Please only merge when everything newly introduced (code and build switches) compiles and builds successfully.


>After following Jose's and Younes' discussion on mesa-dev about how to
>design such an abstraction layer, I took another round of cleaning up
>the interface and moved some more parts into the state tracker.
>So the interface between the state tracker and the drivers now
>consists of only the following:
>1. two additional functions for the screen object: get_video_param and
>is_video_format_supported. get_video_param queries a parameter for a
>specified codec (like the max width/height of a decoding target, which
>could be smaller than the texture max width/height), while
>is_video_format_supported checks whether a texture format is supported
>as a decoding target for a given codec (see the sketch after this list).
>2. a create_video_decoder function in the pipe_context object, which
>creates a decoder object for a given codec. The decoder object in turn
>includes everything needed to decode a video stream of that codec and
>uses pipe_video_decode_buffer objects to hold the input data of a
>single frame of that video codec.
>3. a create_video_buffer function in the pipe_context object, which
>creates a video_buffer object to store a decoded video frame. This
>video_buffer object is then used both for rendering to the screen with
>normal pipe_context functionality and as the input for reference
>frames fed back to the decoder.
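>
>For illustration, the new hooks look roughly like this (simplified; I
>may still adjust the exact signatures before the merge):
>
>   /* pipe_screen: query video capabilities per codec profile */
>   int (*get_video_param)(struct pipe_screen *screen,
>                          enum pipe_video_profile profile,
>                          enum pipe_video_cap param);
>
>   boolean (*is_video_format_supported)(struct pipe_screen *screen,
>                                        enum pipe_format format,
>                                        enum pipe_video_profile profile);
>
>   /* pipe_context: create the decoder and target buffer objects */
>   struct pipe_video_decoder *
>   (*create_video_decoder)(struct pipe_context *pipe,
>                           enum pipe_video_profile profile,
>                           enum pipe_video_entrypoint entrypoint,
>                           enum pipe_video_chroma_format chroma_format,
>                           unsigned width, unsigned height);
>
>   struct pipe_video_buffer *
>   (*create_video_buffer)(struct pipe_context *pipe,
>                          enum pipe_format buffer_format,
>                          enum pipe_video_chroma_format chroma_format,
>                          unsigned width, unsigned height);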
>The pipe_video_buffer object is there because I think hardware decoders
>need a special memory layout for the different planes of a YUV buffer.
>There is a standard implementation that just uses normal textures as
>the different planes of the YUV buffer, which a driver can use when
>there is no need for a special memory layout or when the driver just
>uses shader-based decoding.
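>
>Conceptually the standard implementation looks something like this
>(illustrative only; the real struct has a few more fields):
>
>   /* one resource per plane of the YUV buffer */
>   struct vl_video_buffer
>   {
>      struct pipe_video_buffer base;
>      struct pipe_context *pipe;
>      unsigned num_planes;                   /* 2 or 3, e.g. NV12 vs. YV12 */
>      struct pipe_resource *resources[3];    /* Y, U, V (or interleaved CbCr) */
>      struct pipe_sampler_view *sampler_views[3];
>   };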
>The other option would be adding a PIPE_BIND_VIDEO_BUFFER flag to the
>resource creation functions, but I don't want to duplicate
>functionality across the different drivers, and as far as I can see
>the current resource functions (samplers/surfaces) can't be used to
>create a surface for just one plane/component of a YUV buffer. We
>could still clean that up to use the standard resource functions if
>the need arises.
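>
>That rejected alternative would have looked something like this
>(hypothetical; the flag doesn't exist):
>
>   /* drivers would key a special memory layout off the bind flag */
>   templ.bind = PIPE_BIND_SAMPLER_VIEW | PIPE_BIND_VIDEO_BUFFER;
>   buf = screen->resource_create(screen, &templ);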
>Everything else, especially the vl_compositor functionality, is now
>part of the state tracker instead of the driver. The interface was
>designed mostly with two things in mind: first, it should abstract the
>functionality of hardware video decoding away from the state tracker,
>and second, it should be possible to implement a wide variety of
>different video decoding APIs with it. For the second part I checked
>that it's at least possible to do XvMC, VDPAU, VAAPI and DXVA with it.
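>
>For example, a state tracker would drive the interface roughly like
>this (pseudo-code; decode_frame stands in for the decoder's actual
>entry points):
>
>   /* check that the screen can decode this profile/format at all */
>   if (!screen->get_video_param(screen, profile, PIPE_VIDEO_CAP_SUPPORTED) ||
>       !screen->is_video_format_supported(screen, format, profile))
>      return ERR_UNSUPPORTED;
>
>   decoder = pipe->create_video_decoder(pipe, profile, entrypoint,
>                                        chroma_format, width, height);
>   target = pipe->create_video_buffer(pipe, format, chroma_format,
>                                      width, height);
>
>   /* fill a pipe_video_decode_buffer with one frame's input data,
>    * then decode into the target; the same video_buffer later serves
>    * as a reference frame and is sampled for presentation */
>   decoder->decode_frame(decoder, target, &picture_desc, ref_frames);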
>So what do you guys think about it?