[Mesa-dev] reworking pipe_video_decoder / pipe_video_buffer

Maarten Lankhorst m.b.lankhorst at gmail.com
Tue Nov 15 08:52:31 PST 2011


Hey all,

I'm convinced that right now the pipe_video_decoder and pipe_video_buffer are unnecessarily complicated, and require simplification.

struct pipe_video_decoder
{
   struct pipe_context *context;
   enum pipe_video_profile profile;
   enum pipe_video_entrypoint entrypoint;
   enum pipe_video_chroma_format chroma_format;
   unsigned width;
   unsigned height;

   /**
    * destroy this video decoder
    */
   void (*destroy)(struct pipe_video_decoder *decoder);

   /**
    * decode a single picture from raw bitstream
    */
   void (*decode_bitstream)(struct pipe_video_decoder *decoder,
                            struct pipe_video_buffer *target,
                            struct pipe_picture_desc *desc,
                            unsigned num_data_buffers,
                            unsigned *num_bytes,
                            const void *const *data);

   /**
    * decode a macroblock array, may just queue data without actual decoding until
    * flush is called, because of broken applications
    */
   void (*decode_macroblock)(struct pipe_video_decoder *decoder,
                             struct pipe_picture_desc *desc,
                             struct pipe_video_buffer *target,
                             const struct pipe_macroblock *macroblocks,
                             unsigned num_macroblocks);

   /**
    * flush any outstanding command buffers to the hardware for this video buffer;
    * should be called before the target video buffer is used again, might block
    */
   void (*flush)(struct pipe_video_decoder *decoder, struct pipe_video_buffer *target);
};
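
To illustrate how a state tracker would drive this reduced interface, here is a
rough sketch (only the two calls on the decoder come from the interface above;
the wrapper function itself is made up):

/* sketch only: decode one picture and kick it off to the hardware */
static void
example_decode_picture(struct pipe_video_decoder *decoder,
                       struct pipe_video_buffer *target,
                       struct pipe_picture_desc *desc,
                       unsigned num_buffers,
                       unsigned *sizes,
                       const void *const *buffers)
{
   /* everything the old set_* calls provided now travels in *desc */
   decoder->decode_bitstream(decoder, target, desc,
                             num_buffers, sizes, buffers);

   /* only affects this target; other video buffers are left alone */
   decoder->flush(decoder, target);
}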

Deleted:
- begin_frame/end_frame: were only useful for XvMC; they should be folded into flush.
- set_quant_matrix/set_reference_frames:
    these should become part of picture_desc;
    not all codecs handle them in the same way,
    and some may not need all of these calls.
- set_picture_parameters: Can be passed to decode_bitstream/macroblock
- set_decode_target: Idem
- create_decode_buffer/set_decode_buffer/destroy_decode_buffer:
    Even if a decoder wants it, the state tracker has no business knowing
    about it.

flush is changed to only flush a single pipe_video_buffer; this should reduce
the amount of state that would otherwise need to be maintained for XvMC.

Note: internally you can still use those calls, as long as the *public* API
is reduced to this. pipe_video_buffer is specific to each driver, so you can
put a lot of state there if you choose to do so. For example, the quant matrix
is not used in vc-1, and h264 will need a different way to specify reference
frames, see struct VdpReferenceFrameH264. This is why I want to remove those
calls and fold the information into picture_desc; a rough sketch follows below.
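
All field names below are just illustrative (modelled loosely after
VdpPictureInfoMPEG1Or2 and VdpReferenceFrameH264), not a final proposal; the
codec specific picture descs could look something like:

struct pipe_picture_desc
{
   enum pipe_video_profile profile;
};

struct pipe_mpeg12_picture_desc
{
   struct pipe_picture_desc base;

   /* replaces set_quant_matrix */
   uint8_t intra_matrix[64];
   uint8_t non_intra_matrix[64];

   /* replaces set_reference_frames */
   struct pipe_video_buffer *forward_ref;
   struct pipe_video_buffer *backward_ref;
};

struct pipe_h264_picture_desc
{
   struct pipe_picture_desc base;

   /* no quant matrix call needed; h264 scaling lists would go here */

   /* replaces set_reference_frames, one entry per DPB slot */
   struct {
      struct pipe_video_buffer *buffer;
      bool is_long_term;
      bool top_is_reference;
      bool bottom_is_reference;
      int field_order_cnt[2];
      unsigned frame_idx;
   } refs[16];
};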

I'm less sure about struct pipe_video_buffer; especially with respect to
interlacing and alignment it needs more thought. Width and height are aligned
to 64 on nvidia, and require a special tiling flag. The compositor will need
to become aware of interlacing to be able to play back interlaced videos.

I honestly haven't read up on interlacing yet, and haven't found a video
to test it with, so that part is just speculative crap, and might change
when I find out more about it.

Testing interlaced videos that decode correctly with nvidia vdpau would help
a lot in figuring out the proper way to handle interlacing, so if someone has
a bunch that play correctly with nvidia accelerated vdpau & mplayer, could you
please link them? ;)

/**
 * output for decoding / input for displaying
 */
struct pipe_video_buffer
{
   struct pipe_context *context;

   enum pipe_format buffer_format;
   // Note: buffer_format may change as a result of put_bits or a call to decode_bitstream.
   // AFAICT there is no guarantee that a buffer filled with put_bits can be used as a
   // reference frame for decode_bitstream.

   enum pipe_video_chroma_format chroma_format;
   unsigned width;
   unsigned height;

   enum pipe_video_interlace_type layout;
   // progressive
   // even and odd lines are split
   // interlaced, top field valid only (half height)
   // interlaced, bottom field valid only
   // I'm really drawing a blank on what would be sane here, since interlacing has a ton of
   // codec-specific information, and the current design doesn't handle it at all..

   /**
    * destroy this video buffer
    */
   void (*destroy)(struct pipe_video_buffer *buffer);

   /**
    * get an individual sampler view for each component
    */
   struct pipe_sampler_view **(*get_sampler_view_components)(struct pipe_video_buffer *buffer);
   // Note: for a buffer split in 2 halves (even and odd lines), would probably need up to 6:
   // top Y, bottom Y, top CbCr, bottom CbCr (or six views with separate Cb and Cr planes)

   /**
    * get an individual surface for each plane
    */
   struct pipe_surface **(*get_surfaces)(struct pipe_video_buffer *buffer);

   /**
    * write bits to a video buffer, possibly altering the format of this video buffer
    */
   void (*put_bits)(struct pipe_video_buffer *buffer, enum pipe_format format,
                    void const *const *source_data, uint32_t const *source_pitches);

   /**
    * read bits from a video buffer
    */
   void (*get_bits)(struct pipe_video_buffer *buffer, enum pipe_format format,
                    void *const *destination_data, uint32_t const *destination_pitches);
};
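
To make the intended use a bit more concrete, here is a rough sketch of how the
compositor and a put_bits upload path could use this; everything except the
calls on the video buffer itself is made up for illustration:

/* presentation/compositing: only ever sees sampler views */
static void
example_present(struct pipe_context *pipe, struct pipe_video_buffer *buf)
{
   struct pipe_sampler_view **views = buf->get_sampler_view_components(buf);

   /* bind the per-component views and do CSC (and eventually deinterlacing)
    * in a fragment shader; details omitted */
   (void)pipe;
   (void)views;
}

/* software upload, e.g. VdpVideoSurfacePutBitsYCbCr or XvMC without a
 * hardware path; may change buf->buffer_format as noted above */
static void
example_upload(struct pipe_video_buffer *buf, enum pipe_format format,
               const void *const *data, const uint32_t *pitches)
{
   buf->put_bits(buf, format, data, pitches);
}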




