    <div class="moz-cite-prefix">Am 20.07.2016 um 06:21 schrieb Zhang,
      Boyuan:<br>
    </div>
          <p><span style="color:rgb(33,33,33); font-size:13.3333px">>>
              -  
              context->decoder->begin_frame(context->decoder,
              context->target, &context->desc.base);</span><br
              style="color:rgb(33,33,33); font-size:13.3333px">
            <span style="color:rgb(33,33,33); font-size:13.3333px">>>
              +   if (context->decoder->entrypoint !=
              PIPE_VIDEO_ENTRYPOINT_ENCODE)</span><br
              style="color:rgb(33,33,33); font-size:13.3333px">
            <span style="color:rgb(33,33,33); font-size:13.3333px">>>
              +     
              context->decoder->begin_frame(context->decoder,
              context->target, &context->desc.base);</span></p>
>
> > Why do we do so here? Could we avoid that?
>
> > I would rather like to keep the begin_frame()/end_frame() handling as it is.
>
> > Christian.
>
> This is on purpose. Based on my testing, the application will call begin_frame first and then call PictureParameter/SequenceParameter/... to pass us all the picture-related parameters. However, some of those values are actually required by the begin_picture call in radeon_vce, so we have to delay the call until we have received all the parameters that are needed. The same applies to the encode_bitstream call. That's why I delay both calls to end_frame, where we have all the necessary values.
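
To restate the ordering described above, here is a minimal sketch of the client-side libva calls and of where the pipe calls end up with this patch. The helper name and the buffer handles are placeholders (assumed to have been created with vaCreateBuffer() beforehand); this is an illustration, not code from the patch.

#include <va/va.h>

/* Hypothetical helper, only to illustrate the call ordering described above. */
static VAStatus encode_one_frame(VADisplay dpy, VAContextID va_ctx,
                                 VASurfaceID src_surface,
                                 VABufferID seq_buf, VABufferID pic_buf,
                                 VABufferID slice_buf)
{
   /* vlVaBeginPicture(): with this patch, begin_frame() is skipped here for
    * the encode entrypoint because the needed parameters are not known yet. */
   VAStatus status = vaBeginPicture(dpy, va_ctx, src_surface);
   if (status != VA_STATUS_SUCCESS)
      return status;

   /* vlVaRenderPicture(): the sequence/picture/slice parameters are only
    * collected into context->desc.h264enc at this point. */
   VABufferID bufs[3] = { seq_buf, pic_buf, slice_buf };
   status = vaRenderPicture(dpy, va_ctx, bufs, 3);
   if (status != VA_STATUS_SUCCESS)
      return status;

   /* vlVaEndPicture(): begin_frame(), encode_bitstream() and end_frame() are
    * all issued here, once every parameter has arrived. */
   return vaEndPicture(dpy, va_ctx);
}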

We can keep it like this for now, but I would prefer that we clean this up and change radeon_vce so that it matches the begin/encode/end calls from VA-API.
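
Roughly the shape I have in mind for vlVaEndPicture() after such a cleanup (only a sketch; it assumes radeon_vce is reworked so that begin_frame() no longer depends on parameters that arrive later, and it reuses the feedback/coded_size locals declared in the patch):

   /* Sketch: begin_frame() moves back to vlVaBeginPicture() for both decode
    * and encode, and only the encode-specific steps remain here. */
   if (context->decoder->entrypoint == PIPE_VIDEO_ENTRYPOINT_ENCODE)
      context->decoder->encode_bitstream(context->decoder, context->target,
                                         context->coded_buf->derived_surface.resource,
                                         &feedback);

   context->decoder->end_frame(context->decoder, context->target, &context->desc.base);

   if (context->decoder->entrypoint == PIPE_VIDEO_ENTRYPOINT_ENCODE) {
      context->decoder->flush(context->decoder);
      context->decoder->get_feedback(context->decoder, feedback, &coded_size);
      context->coded_buf->coded_size = coded_size;
   }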

We should probably work on this together with the performance improvements.

Regards,
Christian.

>
> Regards,
> Boyuan
>
> ________________________________
> From: Christian König <deathsimple@vodafone.de>
> Sent: July 19, 2016 4:55:43 AM
> To: Zhang, Boyuan; mesa-dev@lists.freedesktop.org
> Cc: adf.lists@gmail.com
> Subject: Re: [PATCH 09/12] st/va: add functions for VAAPI encode
>
      <font size="2"><span style="font-size:10pt;">
          <div class="PlainText">Am 19.07.2016 um 00:43 schrieb Boyuan
            Zhang:<br>
> > Add necessary functions/changes for VAAPI encoding to buffer and picture. These changes allow the driver to handle all VA-API encode related operations. This patch doesn't change the VA-API decode behaviour.
> >
> > Signed-off-by: Boyuan Zhang <boyuan.zhang@amd.com>
> > ---
> >  src/gallium/state_trackers/va/buffer.c     |   6 +
> >  src/gallium/state_trackers/va/picture.c    | 169 ++++++++++++++++++++++++++++-
> >  src/gallium/state_trackers/va/va_private.h |   3 +
> >  3 files changed, 176 insertions(+), 2 deletions(-)
> >
> > diff --git a/src/gallium/state_trackers/va/buffer.c b/src/gallium/state_trackers/va/buffer.c
> > index 7d3167b..dfcebbe 100644
> > --- a/src/gallium/state_trackers/va/buffer.c
> > +++ b/src/gallium/state_trackers/va/buffer.c
> > @@ -133,6 +133,12 @@ vlVaMapBuffer(VADriverContextP ctx, VABufferID buf_id, void **pbuff)
> >        if (!buf->derived_surface.transfer || !*pbuff)
> >           return VA_STATUS_ERROR_INVALID_BUFFER;
> >
> > +      if (buf->type == VAEncCodedBufferType) {
> > +         ((VACodedBufferSegment*)buf->data)->buf = *pbuff;
> > +         ((VACodedBufferSegment*)buf->data)->size = buf->coded_size;
> > +         ((VACodedBufferSegment*)buf->data)->next = NULL;
> > +         *pbuff = buf->data;
> > +      }
> >     } else {
> >        pipe_mutex_unlock(drv->mutex);
> >        *pbuff = buf->data;
> > diff --git a/src/gallium/state_trackers/va/picture.c b/src/gallium/state_trackers/va/picture.c
> > index 89ac024..4793194 100644
> > --- a/src/gallium/state_trackers/va/picture.c
> > +++ b/src/gallium/state_trackers/va/picture.c
> > @@ -78,7 +78,8 @@ vlVaBeginPicture(VADriverContextP ctx, VAContextID context_id, VASurfaceID rende
> >        return VA_STATUS_SUCCESS;
> >     }
> >
> > -   context->decoder->begin_frame(context->decoder, context->target, &context->desc.base);
> > +   if (context->decoder->entrypoint != PIPE_VIDEO_ENTRYPOINT_ENCODE)
> > +      context->decoder->begin_frame(context->decoder, context->target, &context->desc.base);
>
> Why do we do so here? Could we avoid that?
>
> I would rather like to keep the begin_frame()/end_frame() handling as it is.
>
> Christian.
>
> >
> >    return VA_STATUS_SUCCESS;
> > }
> > @@ -278,6 +279,139 @@ handleVASliceDataBufferType(vlVaContext *context, vlVaBuffer *buf)
> >        num_buffers, (const void * const*)buffers, sizes);
> > }
> >
> > +static VAStatus
> > +handleVAEncMiscParameterTypeRateControl(vlVaContext *context, VAEncMiscParameterBuffer *misc)
> > +{
> > +   VAEncMiscParameterRateControl *rc = (VAEncMiscParameterRateControl *)misc->data;
> > +   if (context->desc.h264enc.rate_ctrl.rate_ctrl_method ==
> > +       PIPE_H264_ENC_RATE_CONTROL_METHOD_CONSTANT)
> > +      context->desc.h264enc.rate_ctrl.target_bitrate = rc->bits_per_second;
> > +   else
> > +      context->desc.h264enc.rate_ctrl.target_bitrate = rc->bits_per_second * rc->target_percentage;
> > +   context->desc.h264enc.rate_ctrl.peak_bitrate = rc->bits_per_second;
> > +   if (context->desc.h264enc.rate_ctrl.target_bitrate < 2000000)
> > +      context->desc.h264enc.rate_ctrl.vbv_buffer_size = MIN2((context->desc.h264enc.rate_ctrl.target_bitrate * 2.75), 2000000);
> > +   else
> > +      context->desc.h264enc.rate_ctrl.vbv_buffer_size = context->desc.h264enc.rate_ctrl.target_bitrate;
> > +   context->desc.h264enc.rate_ctrl.target_bits_picture =
> > +      context->desc.h264enc.rate_ctrl.target_bitrate / context->desc.h264enc.rate_ctrl.frame_rate_num;
> > +   context->desc.h264enc.rate_ctrl.peak_bits_picture_integer =
> > +      context->desc.h264enc.rate_ctrl.peak_bitrate / context->desc.h264enc.rate_ctrl.frame_rate_num;
> > +   context->desc.h264enc.rate_ctrl.peak_bits_picture_fraction = 0;
> > +
> > +   return VA_STATUS_SUCCESS;
> > +}
> > +
> > +static VAStatus
> > +handleVAEncSequenceParameterBufferType(vlVaDriver *drv, vlVaContext *context, vlVaBuffer *buf)
> > +{
> > +   VAEncSequenceParameterBufferH264 *h264 = (VAEncSequenceParameterBufferH264 *)buf->data;
> > +   if (!context->decoder) {
> > +      context->templat.max_references = h264->max_num_ref_frames;
> > +      context->templat.level = h264->level_idc;
> > +      context->decoder = drv->pipe->create_video_codec(drv->pipe, &context->templat);
> > +      if (!context->decoder)
> > +         return VA_STATUS_ERROR_ALLOCATION_FAILED;
> > +   }
> > +   context->desc.h264enc.gop_size = h264->intra_idr_period;
> > +   context->desc.h264enc.rate_ctrl.frame_rate_num = h264->time_scale / 2;
> > +   context->desc.h264enc.rate_ctrl.frame_rate_den = 1;
> > +   return VA_STATUS_SUCCESS;
> > +}
> > +
> > +static VAStatus
> > +handleVAEncMiscParameterBufferType(vlVaContext *context, vlVaBuffer *buf)
> > +{
> > +   VAStatus vaStatus = VA_STATUS_SUCCESS;
> > +   VAEncMiscParameterBuffer *misc;
> > +   misc = buf->data;
> > +
> > +   switch (misc->type) {
> > +   case VAEncMiscParameterTypeRateControl:
> > +      vaStatus = handleVAEncMiscParameterTypeRateControl(context, misc);
> > +      break;
> > +
> > +   default:
> > +      break;
> > +   }
> > +
> > +   return vaStatus;
> > +}
> > +
> > +static VAStatus
> > +handleVAEncPictureParameterBufferType(vlVaDriver *drv, vlVaContext *context, vlVaBuffer *buf)
> > +{
> > +   VAEncPictureParameterBufferH264 *h264;
> > +   vlVaBuffer *coded_buf;
> > +
> > +   h264 = buf->data;
> > +   context->desc.h264enc.frame_num = h264->frame_num;
> > +   context->desc.h264enc.not_referenced = false;
> > +   context->desc.h264enc.is_idr = (h264->pic_fields.bits.idr_pic_flag == 1);
> > +   context->desc.h264enc.pic_order_cnt = h264->CurrPic.TopFieldOrderCnt / 2;
> > +   if (context->desc.h264enc.is_idr)
> > +      context->desc.h264enc.i_remain = 1;
> > +   else
> > +      context->desc.h264enc.i_remain = 0;
> > +
> > +   context->desc.h264enc.p_remain = context->desc.h264enc.gop_size - context->desc.h264enc.gop_cnt - context->desc.h264enc.i_remain;
> > +
> > +   coded_buf = handle_table_get(drv->htab, h264->coded_buf);
> > +   if (!coded_buf->derived_surface.resource)
> > +      coded_buf->derived_surface.resource = pipe_buffer_create(drv->pipe->screen, PIPE_BIND_VERTEX_BUFFER,
> > +                                            PIPE_USAGE_STREAM, coded_buf->size);
> > +   context->coded_buf = coded_buf;
> > +
> > +   context->desc.h264enc.frame_idx[h264->CurrPic.picture_id] = h264->frame_num;
> > +   if (context->desc.h264enc.is_idr)
> > +      context->desc.h264enc.picture_type = PIPE_H264_ENC_PICTURE_TYPE_IDR;
> > +   else
> > +      context->desc.h264enc.picture_type = PIPE_H264_ENC_PICTURE_TYPE_P;
> > +
> > +   context->desc.h264enc.frame_num_cnt++;
> > +   context->desc.h264enc.gop_cnt++;
> > +   if (context->desc.h264enc.gop_cnt == context->desc.h264enc.gop_size)
> > +      context->desc.h264enc.gop_cnt = 0;
> > +
> > +   return VA_STATUS_SUCCESS;
> > +}
> > +
> > +static VAStatus
> > +handleVAEncSliceParameterBufferType(vlVaDriver *drv, vlVaContext *context, vlVaBuffer *buf)
> > +{
> > +   VAEncSliceParameterBufferH264 *h264;
> > +
> > +   h264 = buf->data;
> > +   context->desc.h264enc.ref_idx_l0 = VA_INVALID_ID;
> > +   context->desc.h264enc.ref_idx_l1 = VA_INVALID_ID;
> > +
> > +   for (int i = 0; i < 32; i++) {
> > +      if (h264->RefPicList0[i].picture_id != VA_INVALID_ID) {
> > +         if (context->desc.h264enc.ref_idx_l0 == VA_INVALID_ID)
> > +            context->desc.h264enc.ref_idx_l0 = context->desc.h264enc.frame_idx[h264->RefPicList0[i].picture_id];
> > +      }
> > +      if (h264->RefPicList1[i].picture_id != VA_INVALID_ID && h264->slice_type == 1) {
> > +         if (context->desc.h264enc.ref_idx_l1 == VA_INVALID_ID)
> > +            context->desc.h264enc.ref_idx_l1 = context->desc.h264enc.frame_idx[h264->RefPicList1[i].picture_id];
> > +      }
> > +   }
> > +
> > +   if (h264->slice_type == 1)
> > +      context->desc.h264enc.picture_type = PIPE_H264_ENC_PICTURE_TYPE_B;
> > +   else if (h264->slice_type == 0)
> > +      context->desc.h264enc.picture_type = PIPE_H264_ENC_PICTURE_TYPE_P;
> > +   else if (h264->slice_type == 2) {
> > +      if (context->desc.h264enc.is_idr){
> > +         context->desc.h264enc.picture_type = PIPE_H264_ENC_PICTURE_TYPE_IDR;
> > +         context->desc.h264enc.idr_pic_id++;
> > +      } else
> > +         context->desc.h264enc.picture_type = PIPE_H264_ENC_PICTURE_TYPE_I;
> > +   } else
> > +      context->desc.h264enc.picture_type = PIPE_H264_ENC_PICTURE_TYPE_SKIP;
> > +
> > +   return VA_STATUS_SUCCESS;
> > +}
> > +
> >  VAStatus
> >  vlVaRenderPicture(VADriverContextP ctx, VAContextID context_id, VABufferID *buffers, int num_buffers)
> >  {
> > @@ -328,6 +462,22 @@ vlVaRenderPicture(VADriverContextP ctx, VAContextID context_id, VABufferID *buff
> >          vaStatus = vlVaHandleVAProcPipelineParameterBufferType(drv, context, buf);
> >          break;
> >
> > +      case VAEncSequenceParameterBufferType:
> > +         vaStatus = handleVAEncSequenceParameterBufferType(drv, context, buf);
> > +         break;
> > +
> > +      case VAEncMiscParameterBufferType:
> > +         vaStatus = handleVAEncMiscParameterBufferType(context, buf);
> > +         break;
> > +
> > +      case VAEncPictureParameterBufferType:
> > +         vaStatus = handleVAEncPictureParameterBufferType(drv, context, buf);
> > +         break;
> > +
> > +      case VAEncSliceParameterBufferType:
> > +         vaStatus = handleVAEncSliceParameterBufferType(drv, context, buf);
> > +         break;
> > +
> >       default:
> >          break;
> >       }
> > @@ -342,6 +492,9 @@ vlVaEndPicture(VADriverContextP ctx, VAContextID context_id)
> >  {
> >     vlVaDriver *drv;
> >     vlVaContext *context;
> > +   vlVaBuffer *coded_buf;
> > +   unsigned int coded_size;
> > +   void *feedback;
> >
> >     if (!ctx)
> >        return VA_STATUS_ERROR_INVALID_CONTEXT;
> > @@ -365,7 +518,19 @@ vlVaEndPicture(VADriverContextP ctx, VAContextID context_id)
> >     }
> >
> >     context->mpeg4.frame_num++;
> > -   context->decoder->end_frame(context->decoder, context->target, &context->desc.base);
> > +
> > +   if (context->decoder->entrypoint == PIPE_VIDEO_ENTRYPOINT_ENCODE) {
> > +      coded_buf = context->coded_buf;
> > +      context->decoder->begin_frame(context->decoder, context->target, &context->desc.base);
> > +      context->decoder->encode_bitstream(context->decoder, context->target,
> > +                                         coded_buf->derived_surface.resource, &feedback);
> > +      context->decoder->end_frame(context->decoder, context->target, &context->desc.base);
> > +      context->decoder->flush(context->decoder);
> > +      context->decoder->get_feedback(context->decoder, feedback, &coded_size);
> > +      coded_buf->coded_size = coded_size;
> > +   }
> > +   else
> > +      context->decoder->end_frame(context->decoder, context->target, &context->desc.base);
> >
> >     return VA_STATUS_SUCCESS;
> >  }
> > diff --git a/src/gallium/state_trackers/va/va_private.h b/src/gallium/state_trackers/va/va_private.h
> > index ad9010a..6d3ac38 100644
> > --- a/src/gallium/state_trackers/va/va_private.h
> > +++ b/src/gallium/state_trackers/va/va_private.h
> > @@ -229,6 +229,7 @@ typedef struct {
> >        struct pipe_vc1_picture_desc vc1;
> >        struct pipe_h264_picture_desc h264;
> >        struct pipe_h265_picture_desc h265;
> > +      struct pipe_h264_enc_picture_desc h264enc;
> >     } desc;
> >
> >     struct {
> > @@ -241,6 +242,7 @@ typedef struct {
> >     } mpeg4;
> >
> >     struct vl_deint_filter *deint;
> > +   struct vlVaBuffer *coded_buf;
> >  } vlVaContext;
> >
> >  typedef struct {
> > @@ -260,6 +262,7 @@ typedef struct {
> >     } derived_surface;
> >     unsigned int export_refcount;
> >     VABufferInfo export_state;
> > +   unsigned int coded_size;
> >  } vlVaBuffer;
> >
> >  typedef struct {