[Mesa-dev] [PATCH 3/3] radeon/vce: use vce structures for vce_52 firmware

Zhang, Boyuan Boyuan.Zhang at amd.com
Wed Jun 22 21:52:10 UTC 2016


OK, so I added a get-parameters call for each firmware version and moved all the value assignments into the firmware-specific file. As a result, changes made for a specific version won't affect the other versions. For future firmware versions, we can still use the same structure but assign different values in the version-specific calls. Please see the new patch set I just sent.
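
Roughly, the split looks like this (sketch only, using the getters from the patch quoted below; the new set has the exact code):

static void get_task_info_param(struct rvce_encoder *enc)
{
	enc->enc_pic.ti.offset_of_next_task_info = 0xffffffff; // offsetOfNextTaskInfo
}

static void get_feedback_buffer_param(struct rvce_encoder *enc)
{
	enc->enc_pic.fb.feedback_ring_size = 0x00000001; // feedbackRingSize
}

/* Called from the vce_52 code only, so the older firmware files stay untouched. */
static void get_pic_param(struct rvce_encoder *enc,
                          struct pipe_h264_enc_picture_desc *pic)
{
	get_task_info_param(enc);
	get_feedback_buffer_param(enc);
	/* ... the remaining getters and the pic-> field copies go here,
	 * as in the patch quoted below, just in the version-specific file ... */
}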

-----Original Message-----
From: Christian König [mailto:deathsimple at vodafone.de] 
Sent: June-22-16 11:55 AM
To: Zhang, Boyuan; mesa-dev at lists.freedesktop.org
Subject: Re: [PATCH 3/3] radeon/vce: use vce structures for vce_52 firmware

On 22.06.2016 at 17:43, Zhang, Boyuan wrote:
>> We should write the encode structure directly without the use of the
>> RVCE_CS() macros.
>>
>> Otherwise all of that doesn't make much sense and is just another layer of abstraction.
> Unlike UVD, where the firmware takes the address of the IB structure, the VCE firmware takes the IB values directly, not an address. The encode structure here is only used to store those values. We need this layer because we want to assign different values to some of the IB fields for VA-API, which previously had values hardcoded for OMX. Therefore, we still want to keep the RVCE_CS() macros. With them, every firmware version keeps working: even if the structure changes between firmware versions, only the IB values are written, not the structure itself.

And exactly that's what we don't want.

Each firmware version should have a completely separate implementation of mapping the values from the pipe description into the binary representation of the IB.

Otherwise we would need to test with all the older firmware versions as well when we make a change.

Adding different values to the IB is also possible completely without the structure by just using the values from the picture descriptor directly.
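
For example, roughly (a sketch only; the command layout and the pipe descriptor fields are taken from the patch quoted below, with the remaining dwords elided):

static void rate_control(struct rvce_encoder *enc)
{
	RVCE_BEGIN(0x04000005); // rate control
	RVCE_CS(enc->pic.rate_ctrl.rate_ctrl_method); // encRateControlMethod
	RVCE_CS(enc->pic.rate_ctrl.target_bitrate); // encRateControlTargetBitRate
	RVCE_CS(enc->pic.rate_ctrl.peak_bitrate); // encRateControlPeakBitRate
	RVCE_CS(enc->pic.rate_ctrl.frame_rate_num); // encRateControlFrameRateNum
	/* ... the remaining rate control dwords are written the same way,
	 * straight from enc->pic, with per-firmware defaults where needed ... */
	RVCE_END();
}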

Regards,
Christian.

>
> Regards,
> Boyuan
>
> -----Original Message-----
> From: Christian König [mailto:deathsimple at vodafone.de]
> Sent: June-22-16 3:34 AM
> To: Zhang, Boyuan; mesa-dev at lists.freedesktop.org
> Subject: Re: [PATCH 3/3] radeon/vce: use vce structures for vce_52 
> firmware
>
> On 21.06.2016 at 16:50, Boyuan Zhang wrote:
>> Signed-off-by: Boyuan Zhang <boyuan.zhang at amd.com>
>> ---
>>    src/gallium/drivers/radeon/radeon_vce.c    | 171 +++++++++++
>>    src/gallium/drivers/radeon/radeon_vce.h    |   1 +
>>    src/gallium/drivers/radeon/radeon_vce_52.c | 447 +++++++++++++++++++++++------
>>    3 files changed, 533 insertions(+), 86 deletions(-)
>>
>> diff --git a/src/gallium/drivers/radeon/radeon_vce.c b/src/gallium/drivers/radeon/radeon_vce.c
>> index e16e0cf..0d96085 100644
>> --- a/src/gallium/drivers/radeon/radeon_vce.c
>> +++ b/src/gallium/drivers/radeon/radeon_vce.c
>> @@ -139,6 +139,176 @@ static void sort_cpb(struct rvce_encoder *enc)
>>    	}
>>    }
>>    
>> +static void get_rate_control_param(struct rvce_encoder *enc, struct pipe_h264_enc_picture_desc *pic) {
> Move all of this into the firmware specific file. Don't add anything to the common file since we don't want to implement this for the older firmware versions.
>
>> +	enc->enc_pic.rc.rc_method = pic->rate_ctrl.rate_ctrl_method;
>> +	enc->enc_pic.rc.target_bitrate = pic->rate_ctrl.target_bitrate;
>> +	enc->enc_pic.rc.peak_bitrate = pic->rate_ctrl.peak_bitrate;
>> +	enc->enc_pic.rc.quant_i_frames = pic->quant_i_frames;
>> +	enc->enc_pic.rc.quant_p_frames = pic->quant_p_frames;
>> +	enc->enc_pic.rc.quant_b_frames = pic->quant_b_frames;
>> +	enc->enc_pic.rc.gop_size = pic->gop_size;
>> +	enc->enc_pic.rc.frame_rate_num = pic->rate_ctrl.frame_rate_num;
>> +	enc->enc_pic.rc.frame_rate_den = pic->rate_ctrl.frame_rate_den;
>> +	enc->enc_pic.rc.max_qp = 51;
>> +
>> +	if (pic->enable_low_level_control == true) {
>> +		enc->enc_pic.rc.vbv_buffer_size = 20000000;
>> +		if (pic->rate_ctrl.frame_rate_num == 0)
>> +			enc->enc_pic.rc.frame_rate_num = 30;
>> +		if (pic->rate_ctrl.frame_rate_den == 0)
>> +			enc->enc_pic.rc.frame_rate_den = 1;
>> +		enc->enc_pic.rc.vbv_buf_lv = 48;
>> +		enc->enc_pic.rc.fill_data_enable = 1;
>> +		enc->enc_pic.rc.enforce_hrd = 1;
>> +		enc->enc_pic.rc.target_bits_picture = enc->enc_pic.rc.target_bitrate / enc->enc_pic.rc.frame_rate_num;
>> +		enc->enc_pic.rc.peak_bits_picture_integer = enc->enc_pic.rc.peak_bitrate / enc->enc_pic.rc.frame_rate_num;
>> +		enc->enc_pic.rc.peak_bits_picture_fraction = 0;
>> +	} else {
>> +		enc->enc_pic.rc.vbv_buffer_size = pic->rate_ctrl.vbv_buffer_size;
>> +		enc->enc_pic.rc.vbv_buf_lv = 0;
>> +		enc->enc_pic.rc.fill_data_enable = 0;
>> +		enc->enc_pic.rc.enforce_hrd = 0;
>> +		enc->enc_pic.rc.target_bits_picture = pic->rate_ctrl.target_bits_picture;
>> +		enc->enc_pic.rc.peak_bits_picture_integer = pic->rate_ctrl.peak_bits_picture_integer;
>> +		enc->enc_pic.rc.peak_bits_picture_fraction = pic->rate_ctrl.peak_bits_picture_fraction;
>> +	}
>> +}
>> +
>> +static void get_motion_estimation_param(struct rvce_encoder *enc, struct pipe_h264_enc_picture_desc *pic) {
>> +	if (pic->enable_low_level_control == true) {
>> +		enc->enc_pic.me.motion_est_quarter_pixel = 0x00000001;
>> +		enc->enc_pic.me.enc_disable_sub_mode = 0x00000078;
>> +		enc->enc_pic.me.lsmvert = 0x00000002;
>> +		enc->enc_pic.me.enc_en_ime_overw_dis_subm = 0x00000001;
>> +		enc->enc_pic.me.enc_ime_overw_dis_subm_no = 0x00000001;
>> +		enc->enc_pic.me.enc_ime2_search_range_x = 0x00000004;
>> +		enc->enc_pic.me.enc_ime2_search_range_y = 0x00000004;
>> +		enc->enc_pic.me.enc_ime_decimation_search = 0x00000001;
>> +		enc->enc_pic.me.motion_est_half_pixel = 0x00000001;
>> +		enc->enc_pic.me.enc_search_range_x = 0x00000010;
>> +		enc->enc_pic.me.enc_search_range_y = 0x00000010;
>> +		enc->enc_pic.me.enc_search1_range_x = 0x00000010;
>> +		enc->enc_pic.me.enc_search1_range_y = 0x00000010;
>> +	} else {
>> +		enc->enc_pic.me.motion_est_quarter_pixel = 0x00000000;
>> +		enc->enc_pic.me.enc_disable_sub_mode = 0x000000fe;
>> +		enc->enc_pic.me.lsmvert = 0x00000000;
>> +		enc->enc_pic.me.enc_en_ime_overw_dis_subm = 0x00000000;
>> +		enc->enc_pic.me.enc_ime_overw_dis_subm_no = 0x00000000;
>> +		enc->enc_pic.me.enc_ime2_search_range_x = 0x00000001;
>> +		enc->enc_pic.me.enc_ime2_search_range_y = 0x00000001;
>> +		enc->enc_pic.me.enc_ime_decimation_search = 0x00000001;
>> +		enc->enc_pic.me.motion_est_half_pixel = 0x00000001;
>> +		enc->enc_pic.me.enc_search_range_x = 0x00000010;
>> +		enc->enc_pic.me.enc_search_range_y = 0x00000010;
>> +		enc->enc_pic.me.enc_search1_range_x = 0x00000010;
>> +		enc->enc_pic.me.enc_search1_range_y = 0x00000010;
>> +	}
>> +}
>> +
>> +static void get_pic_control_param(struct rvce_encoder *enc, struct pipe_h264_enc_picture_desc *pic) {
>> +	unsigned encNumMBsPerSlice;
>> +	encNumMBsPerSlice = align(enc->base.width, 16) / 16;
>> +	encNumMBsPerSlice *= align(enc->base.height, 16) / 16;
>> +	enc->enc_pic.pc.enc_crop_right_offset = (align(enc->base.width, 16) - enc->base.width) >> 1;
>> +	enc->enc_pic.pc.enc_crop_bottom_offset = (align(enc->base.height, 16) - enc->base.height) >> 1;
>> +	enc->enc_pic.pc.enc_num_mbs_per_slice = encNumMBsPerSlice;
>> +	enc->enc_pic.pc.enc_b_pic_pattern = MAX2(enc->base.max_references, 1) - 1;
>> +	enc->enc_pic.pc.enc_number_of_reference_frames = MIN2(enc->base.max_references, 2);
>> +	enc->enc_pic.pc.enc_max_num_ref_frames = enc->base.max_references + 1;
>> +	enc->enc_pic.pc.enc_num_default_active_ref_l0 = 0x00000001;
>> +	enc->enc_pic.pc.enc_num_default_active_ref_l1 = 0x00000001;
>> +	if (pic->enable_low_level_control == true) {
>> +		enc->enc_pic.pc.enc_cabac_enable = 0x00000001;
>> +		enc->enc_pic.pc.enc_constraint_set_flags = 0x00000040;
>> +		enc->enc_pic.pc.enc_num_default_active_ref_l0 = 0x00000001;
>> +		enc->enc_pic.pc.enc_num_default_active_ref_l1 = 0x00000001;
>> +	} else {
>> +		enc->enc_pic.pc.enc_cabac_enable = 0x00000000;
>> +		enc->enc_pic.pc.enc_constraint_set_flags = 0x00000040;
>> +		enc->enc_pic.pc.enc_num_default_active_ref_l0 = 0x00000001;
>> +		enc->enc_pic.pc.enc_num_default_active_ref_l1 = 0x00000001;
>> +	}
>> +}
>> +
>> +static void get_task_info_param(struct rvce_encoder *enc) {
>> +	enc->enc_pic.ti.offset_of_next_task_info = 0xffffffff; }
>> +
>> +static void get_feedback_buffer_param(struct rvce_encoder *enc) {
>> +	enc->enc_pic.fb.feedback_ring_size = 0x00000001; }
>> +
>> +static void get_config_ext_param(struct rvce_encoder *enc) {
>> +	enc->enc_pic.ce.enc_enable_perf_logging = 0x00000003; }
>> +
>> +static void get_vui_param(struct rvce_encoder *enc, struct pipe_h264_enc_picture_desc *pic) {
>> +	if (pic->enable_low_level_control == true)
>> +		enc->enc_pic.enable_vui = 0;
>> +	else
>> +		enc->enc_pic.enable_vui = (pic->rate_ctrl.frame_rate_num != 0);
>> +	enc->enc_pic.vui.video_format = 0x00000005;
>> +	enc->enc_pic.vui.color_prim = 0x00000002;
>> +	enc->enc_pic.vui.transfer_char = 0x00000002;
>> +	enc->enc_pic.vui.matrix_coef = 0x00000002;
>> +	enc->enc_pic.vui.timing_info_present_flag = 0x00000001;
>> +	enc->enc_pic.vui.num_units_in_tick = pic->rate_ctrl.frame_rate_den;
>> +	enc->enc_pic.vui.time_scale = pic->rate_ctrl.frame_rate_num * 2;
>> +	enc->enc_pic.vui.fixed_frame_rate_flag = 0x00000001;
>> +	enc->enc_pic.vui.bit_rate_scale = 0x00000004;
>> +	enc->enc_pic.vui.cpb_size_scale = 0x00000006;
>> +	enc->enc_pic.vui.initial_cpb_removal_delay_length_minus1 = 0x00000017;
>> +	enc->enc_pic.vui.cpb_removal_delay_length_minus1 = 0x00000017;
>> +	enc->enc_pic.vui.dpb_output_delay_length_minus1 = 0x00000017;
>> +	enc->enc_pic.vui.time_offset_length = 0x00000018;
>> +	enc->enc_pic.vui.motion_vectors_over_pic_boundaries_flag = 0x00000001;
>> +	enc->enc_pic.vui.max_bytes_per_pic_denom = 0x00000002;
>> +	enc->enc_pic.vui.max_bits_per_mb_denom = 0x00000001;
>> +	enc->enc_pic.vui.log2_max_mv_length_hori = 0x00000010;
>> +	enc->enc_pic.vui.log2_max_mv_length_vert = 0x00000010;
>> +	enc->enc_pic.vui.num_reorder_frames = 0x00000003;
>> +	enc->enc_pic.vui.max_dec_frame_buffering = 0x00000003; }
>> +
>> +static void get_pic_param(struct rvce_encoder *enc, struct pipe_h264_enc_picture_desc *pic) {
>> +	if (pic->enable_low_level_control == true)
>> +		enc->enc_pic.ref_pic_mode = 0x01000201;
>> +	get_rate_control_param(enc, pic);
>> +	get_motion_estimation_param(enc, pic);
>> +	get_pic_control_param(enc, pic);
>> +	get_task_info_param(enc);
>> +	get_feedback_buffer_param(enc);
>> +	get_vui_param(enc, pic);
>> +	get_config_ext_param(enc);
>> +
>> +	enc->enc_pic.picture_type = pic->picture_type;
>> +	enc->enc_pic.frame_num = pic->frame_num;
>> +	enc->enc_pic.frame_num_cnt = pic->frame_num_cnt;
>> +	enc->enc_pic.p_remain = pic->p_remain;
>> +	enc->enc_pic.i_remain = pic->i_remain;
>> +	enc->enc_pic.gop_cnt = pic->gop_cnt;
>> +	enc->enc_pic.pic_order_cnt = pic->pic_order_cnt;
>> +	enc->enc_pic.ref_idx_l0 = pic->ref_idx_l0;
>> +	enc->enc_pic.ref_idx_l1 = pic->ref_idx_l1;
>> +	enc->enc_pic.not_referenced = pic->not_referenced;
>> +	enc->enc_pic.is_idr = pic->is_idr;
>> +	enc->enc_pic.has_ref_pic_list = pic->has_ref_pic_list;
>> +	for (int i = 0; i < 32 ; i++) {
>> +		enc->enc_pic.ref_pic_list_0[i] = pic->ref_pic_list_0[i];
>> +		enc->enc_pic.ref_pic_list_1[i] = pic->ref_pic_list_1[i];
>> +		enc->enc_pic.frame_idx[i] = pic->frame_idx[i];
>> +	}
>> +}
>> +
>>    /**
>>     * get number of cpbs based on dpb
>>     */
>> @@ -267,6 +437,7 @@ static void rvce_begin_frame(struct pipe_video_codec *encoder,
>>    		enc->pic.quant_b_frames != pic->quant_b_frames;
>>    
>>    	enc->pic = *pic;
>> +	get_pic_param(enc, pic);
>>    
>>    	enc->get_buffer(vid_buf->resources[0], &enc->handle, &enc->luma);
>>    	enc->get_buffer(vid_buf->resources[1], NULL, &enc->chroma);
>> diff --git a/src/gallium/drivers/radeon/radeon_vce.h b/src/gallium/drivers/radeon/radeon_vce.h
>> index da61285..a24aac8 100644
>> --- a/src/gallium/drivers/radeon/radeon_vce.h
>> +++ b/src/gallium/drivers/radeon/radeon_vce.h
>> @@ -405,6 +405,7 @@ struct rvce_encoder {
>>    	struct rvid_buffer		*fb;
>>    	struct rvid_buffer		cpb;
>>    	struct pipe_h264_enc_picture_desc pic;
>> +	struct rvce_h264_enc_pic	enc_pic;
>>    
>>    	unsigned			task_info_idx;
>>    	unsigned			bs_idx;
>> diff --git a/src/gallium/drivers/radeon/radeon_vce_52.c b/src/gallium/drivers/radeon/radeon_vce_52.c
>> index 3894eea..e39b64b 100644
>> --- a/src/gallium/drivers/radeon/radeon_vce_52.c
>> +++ b/src/gallium/drivers/radeon/radeon_vce_52.c
>> @@ -45,22 +45,22 @@ static void create(struct rvce_encoder *enc)
>>    	enc->task_info(enc, 0x00000000, 0, 0, 0);
>>    
>>    	RVCE_BEGIN(0x01000001); // create cmd
>> -	RVCE_CS(0x00000000); // encUseCircularBuffer
>> +	RVCE_CS(enc->enc_pic.ec.enc_use_circular_buffer); // encUseCircularBuffer
> We should write the encode structure directly without the use of the
> RVCE_CS() macros.
>
> Otherwise all of that doesn't make much sense and is just another layer of abstraction.
>
>>    	RVCE_CS(profiles[enc->base.profile -
>>    		PIPE_VIDEO_PROFILE_MPEG4_AVC_BASELINE]); // encProfile
>>    	RVCE_CS(enc->base.level); // encLevel
>> -	RVCE_CS(0x00000000); // encPicStructRestriction
>> +	RVCE_CS(enc->enc_pic.ec.enc_pic_struct_restriction); // encPicStructRestriction
>>    	RVCE_CS(enc->base.width); // encImageWidth
>>    	RVCE_CS(enc->base.height); // encImageHeight
>>    	RVCE_CS(enc->luma->level[0].pitch_bytes); // encRefPicLumaPitch
>>    	RVCE_CS(enc->chroma->level[0].pitch_bytes); // encRefPicChromaPitch
>>    	RVCE_CS(align(enc->luma->npix_y, 16) / 8); // encRefYHeightInQw
>> -	RVCE_CS(0x00000000); // encRefPic(Addr|Array)Mode, encPicStructRestriction, disableRDO
>> +	RVCE_CS(enc->enc_pic.ref_pic_mode); // encRefPic(Addr|Array)Mode, encPicStructRestriction, disableRDO
>>    
>> -	RVCE_CS(0x00000000); // encPreEncodeContextBufferOffset
>> -	RVCE_CS(0x00000000); // encPreEncodeInputLumaBufferOffset
>> -	RVCE_CS(0x00000000); // encPreEncodeInputChromaBufferOffs
>> -	RVCE_CS(0x00000000); // encPreEncodeMode|ChromaFlag|VBAQMode|SceneChangeSensitivity
>> +	RVCE_CS(enc->enc_pic.ec.enc_pre_encode_context_buffer_offset); // encPreEncodeContextBufferOffset
>> +	RVCE_CS(enc->enc_pic.ec.enc_pre_encode_input_luma_buffer_offset); // encPreEncodeInputLumaBufferOffset
>> +	RVCE_CS(enc->enc_pic.ec.enc_pre_encode_input_chroma_buffer_offset); // encPreEncodeInputChromaBufferOffs
>> +	RVCE_CS(enc->enc_pic.ec.enc_pre_encode_mode_chromaflag_vbaqmode_scenechangesensitivity); // encPreEncodeMode|ChromaFlag|VBAQMode|SceneChangeSensitivity
>>    	RVCE_END();
>>    }
>>    
>> @@ -73,7 +73,7 @@ static void encode(struct rvce_encoder *enc)
>>    	if (enc->dual_inst) {
>>    		if (bs_idx == 0)
>>    			dep = 1;
>> -		else if (enc->pic.picture_type == PIPE_H264_ENC_PICTURE_TYPE_IDR)
>> +		else if (enc->enc_pic.picture_type == PIPE_H264_ENC_PICTURE_TYPE_IDR)
> Don't add an extra layer here. Use the pipe structure directly and remove the redundant information from the encode structure.
>
> Regards,
> Christian.
>
>>    			dep = 0;
>>    		else
>>    			dep = 2;
>> @@ -107,13 +107,13 @@ static void encode(struct rvce_encoder *enc)
>>    	}
>>    
>>    	RVCE_BEGIN(0x03000001); // encode
>> -	RVCE_CS(enc->pic.frame_num ? 0x0 : 0x11); // insertHeaders
>> -	RVCE_CS(0x00000000); // pictureStructure
>> +	RVCE_CS(enc->enc_pic.frame_num ? 0x0 : 0x11); // insertHeaders
>> +	RVCE_CS(enc->enc_pic.eo.picture_structure); // pictureStructure
>>    	RVCE_CS(enc->bs_size); // allowedMaxBitstreamSize
>> -	RVCE_CS(0x00000000); // forceRefreshMap
>> -	RVCE_CS(0x00000000); // insertAUD
>> -	RVCE_CS(0x00000000); // endOfSequence
>> -	RVCE_CS(0x00000000); // endOfStream
>> +	RVCE_CS(enc->enc_pic.eo.force_refresh_map); // forceRefreshMap
>> +	RVCE_CS(enc->enc_pic.eo.insert_aud); // insertAUD
>> +	RVCE_CS(enc->enc_pic.eo.end_of_sequence); // endOfSequence
>> +	RVCE_CS(enc->enc_pic.eo.end_of_stream); // endOfStream
>>    	RVCE_READ(enc->handle, RADEON_DOMAIN_VRAM,
>>    		enc->luma->level[0].offset); // inputPictureLumaAddressHi/Lo
>>    	RVCE_READ(enc->handle, RADEON_DOMAIN_VRAM,
>> @@ -122,45 +122,56 @@ static void encode(struct rvce_encoder *enc)
>>    	RVCE_CS(enc->luma->level[0].pitch_bytes); // encInputPicLumaPitch
>>    	RVCE_CS(enc->chroma->level[0].pitch_bytes); // encInputPicChromaPitch
>>    	if (enc->dual_pipe)
>> -		RVCE_CS(0x00000000); // encInputPic(Addr|Array)Mode,encDisable(TwoPipeMode|MBOffloading)
>> +		enc->enc_pic.eo.enc_input_pic_addr_array_disable2pipe_disablemboffload = 0x00000000;
>> +	else
>> +		enc->enc_pic.eo.enc_input_pic_addr_array_disable2pipe_disablemboffload = 0x00010000;
>> +	RVCE_CS(enc->enc_pic.eo.enc_input_pic_addr_array_disable2pipe_disablemboffload); // encInputPic(Addr|Array)Mode,encDisable(TwoPipeMode|MBOffloading)
>> +	RVCE_CS(enc->enc_pic.eo.enc_input_pic_tile_config); // encInputPicTileConfig
>> +	RVCE_CS(enc->enc_pic.picture_type); // encPicType
>> +	RVCE_CS(enc->enc_pic.picture_type == PIPE_H264_ENC_PICTURE_TYPE_IDR); // encIdrFlag
>> +	if ((enc->enc_pic.picture_type == PIPE_H264_ENC_PICTURE_TYPE_IDR) && (enc->enc_pic.eo.enc_idr_pic_id !=0))
>> +		enc->enc_pic.eo.enc_idr_pic_id = enc->enc_pic.idr_pic_id - 1;
>>    	else
>> -		RVCE_CS(0x00010000); // encInputPic(Addr|Array)Mode,encDisable(TwoPipeMode|MBOffloading)
>> -	RVCE_CS(0x00000000); // encInputPicTileConfig
>> -	RVCE_CS(enc->pic.picture_type); // encPicType
>> -	RVCE_CS(enc->pic.picture_type == PIPE_H264_ENC_PICTURE_TYPE_IDR); // encIdrFlag
>> -	RVCE_CS(0x00000000); // encIdrPicId
>> -	RVCE_CS(0x00000000); // encMGSKeyPic
>> -	RVCE_CS(!enc->pic.not_referenced); // encReferenceFlag
>> -	RVCE_CS(0x00000000); // encTemporalLayerIndex
>> -	RVCE_CS(0x00000000); // num_ref_idx_active_override_flag
>> -	RVCE_CS(0x00000000); // num_ref_idx_l0_active_minus1
>> -	RVCE_CS(0x00000000); // num_ref_idx_l1_active_minus1
>> -
>> -	i = enc->pic.frame_num - enc->pic.ref_idx_l0;
>> -	if (i > 1 && enc->pic.picture_type == PIPE_H264_ENC_PICTURE_TYPE_P) {
>> -		RVCE_CS(0x00000001); // encRefListModificationOp
>> -		RVCE_CS(i - 1);      // encRefListModificationNum
>> +		enc->enc_pic.eo.enc_idr_pic_id = 0x00000000;
>> +	RVCE_CS(enc->enc_pic.eo.enc_idr_pic_id); // encIdrPicId
>> +	RVCE_CS(enc->enc_pic.eo.enc_mgs_key_pic); // encMGSKeyPic
>> +	RVCE_CS(!enc->enc_pic.not_referenced); // encReferenceFlag
>> +	RVCE_CS(enc->enc_pic.eo.enc_temporal_layer_index); // encTemporalLayerIndex
>> +	RVCE_CS(enc->enc_pic.eo.num_ref_idx_active_override_flag); // num_ref_idx_active_override_flag
>> +	RVCE_CS(enc->enc_pic.eo.num_ref_idx_l0_active_minus1); // num_ref_idx_l0_active_minus1
>> +	RVCE_CS(enc->enc_pic.eo.num_ref_idx_l1_active_minus1); // num_ref_idx_l1_active_minus1
>> +
>> +	i = enc->enc_pic.frame_num - enc->enc_pic.ref_idx_l0;
>> +	if (i > 1 && enc->enc_pic.picture_type == PIPE_H264_ENC_PICTURE_TYPE_P) {
>> +		enc->enc_pic.eo.enc_ref_list_modification_op = 0x00000001;
>> +		enc->enc_pic.eo.enc_ref_list_modification_num = i - 1;
>> +		RVCE_CS(enc->enc_pic.eo.enc_ref_list_modification_op); // encRefListModificationOp
>> +		RVCE_CS(enc->enc_pic.eo.enc_ref_list_modification_num);      // encRefListModificationNum
>>    	} else {
>> -		RVCE_CS(0x00000000); // encRefListModificationOp
>> -		RVCE_CS(0x00000000); // encRefListModificationNum
>> +		enc->enc_pic.eo.enc_ref_list_modification_op = 0x00000000;
>> +		enc->enc_pic.eo.enc_ref_list_modification_num = 0x00000000;
>> +		RVCE_CS(enc->enc_pic.eo.enc_ref_list_modification_op); // encRefListModificationOp
>> +		RVCE_CS(enc->enc_pic.eo.enc_ref_list_modification_num); // encRefListModificationNum
>>    	}
>>    
>>    	for (i = 0; i < 3; ++i) {
>> -		RVCE_CS(0x00000000); // encRefListModificationOp
>> -		RVCE_CS(0x00000000); // encRefListModificationNum
>> +		enc->enc_pic.eo.enc_ref_list_modification_op = 0x00000000;
>> +		enc->enc_pic.eo.enc_ref_list_modification_num = 0x00000000;
>> +		RVCE_CS(enc->enc_pic.eo.enc_ref_list_modification_op); // encRefListModificationOp
>> +		RVCE_CS(enc->enc_pic.eo.enc_ref_list_modification_num); // encRefListModificationNum
>>    	}
>>    	for (i = 0; i < 4; ++i) {
>> -		RVCE_CS(0x00000000); // encDecodedPictureMarkingOp
>> -		RVCE_CS(0x00000000); // encDecodedPictureMarkingNum
>> -		RVCE_CS(0x00000000); // encDecodedPictureMarkingIdx
>> -		RVCE_CS(0x00000000); // encDecodedRefBasePictureMarkingOp
>> -		RVCE_CS(0x00000000); // encDecodedRefBasePictureMarkingNum
>> +		RVCE_CS(enc->enc_pic.eo.enc_decoded_picture_marking_op); // encDecodedPictureMarkingOp
>> +		RVCE_CS(enc->enc_pic.eo.enc_decoded_picture_marking_num); // encDecodedPictureMarkingNum
>> +		RVCE_CS(enc->enc_pic.eo.enc_decoded_picture_marking_idx); // encDecodedPictureMarkingIdx
>> +		RVCE_CS(enc->enc_pic.eo.enc_decoded_ref_base_picture_marking_op); // encDecodedRefBasePictureMarkingOp
>> +		RVCE_CS(enc->enc_pic.eo.enc_decoded_ref_base_picture_marking_num); // encDecodedRefBasePictureMarkingNum
>>    	}
>>    
>>    	// encReferencePictureL0[0]
>>    	RVCE_CS(0x00000000); // pictureStructure
>> -	if(enc->pic.picture_type == PIPE_H264_ENC_PICTURE_TYPE_P ||
>> -	   enc->pic.picture_type == PIPE_H264_ENC_PICTURE_TYPE_B) {
>> +	if(enc->enc_pic.picture_type == PIPE_H264_ENC_PICTURE_TYPE_P ||
>> +		enc->enc_pic.picture_type == PIPE_H264_ENC_PICTURE_TYPE_B) {
>>    		struct rvce_cpb_slot *l0 = l0_slot(enc);
>>    		rvce_frame_offset(enc, l0, &luma_offset, &chroma_offset);
>>    		RVCE_CS(l0->picture_type); // encPicType
>> @@ -169,24 +180,35 @@ static void encode(struct rvce_encoder *enc)
>>    		RVCE_CS(luma_offset); // lumaOffset
>>    		RVCE_CS(chroma_offset); // chromaOffset
>>    	} else {
>> -		RVCE_CS(0x00000000); // encPicType
>> -		RVCE_CS(0x00000000); // frameNumber
>> -		RVCE_CS(0x00000000); // pictureOrderCount
>> -		RVCE_CS(0xffffffff); // lumaOffset
>> -		RVCE_CS(0xffffffff); // chromaOffset
>> +		enc->enc_pic.eo.l0_enc_pic_type = 0x00000000;
>> +		enc->enc_pic.eo.l0_frame_number = 0x00000000;
>> +		enc->enc_pic.eo.l0_picture_order_count = 0x00000000;
>> +		enc->enc_pic.eo.l0_luma_offset = 0xffffffff;
>> +		enc->enc_pic.eo.l0_chroma_offset = 0xffffffff;
>> +		RVCE_CS(enc->enc_pic.eo.l0_enc_pic_type); // encPicType
>> +		RVCE_CS(enc->enc_pic.eo.l0_frame_number); // frameNumber
>> +		RVCE_CS(enc->enc_pic.eo.l0_picture_order_count); // pictureOrderCount
>> +		RVCE_CS(enc->enc_pic.eo.l0_luma_offset); // lumaOffset
>> +		RVCE_CS(enc->enc_pic.eo.l0_chroma_offset); // chromaOffset
>>    	}
>>    
>>    	// encReferencePictureL0[1]
>> -	RVCE_CS(0x00000000); // pictureStructure
>> -	RVCE_CS(0x00000000); // encPicType
>> -	RVCE_CS(0x00000000); // frameNumber
>> -	RVCE_CS(0x00000000); // pictureOrderCount
>> -	RVCE_CS(0xffffffff); // lumaOffset
>> -	RVCE_CS(0xffffffff); // chromaOffset
>> +	enc->enc_pic.eo.l0_picture_structure = 0x00000000;
>> +	enc->enc_pic.eo.l0_enc_pic_type = 0x00000000;
>> +	enc->enc_pic.eo.l0_frame_number = 0x00000000;
>> +	enc->enc_pic.eo.l0_picture_order_count = 0x00000000;
>> +	enc->enc_pic.eo.l0_luma_offset = 0xffffffff;
>> +	enc->enc_pic.eo.l0_chroma_offset = 0xffffffff;
>> +	RVCE_CS(enc->enc_pic.eo.l0_picture_structure); // pictureStructure
>> +	RVCE_CS(enc->enc_pic.eo.l0_enc_pic_type); // encPicType
>> +	RVCE_CS(enc->enc_pic.eo.l0_frame_number); // frameNumber
>> +	RVCE_CS(enc->enc_pic.eo.l0_picture_order_count); // pictureOrderCount
>> +	RVCE_CS(enc->enc_pic.eo.l0_luma_offset); // lumaOffset
>> +	RVCE_CS(enc->enc_pic.eo.l0_chroma_offset); // chromaOffset
>>    
>>    	// encReferencePictureL1[0]
>>    	RVCE_CS(0x00000000); // pictureStructure
>> -	if(enc->pic.picture_type == PIPE_H264_ENC_PICTURE_TYPE_B) {
>> +	if(enc->enc_pic.picture_type == PIPE_H264_ENC_PICTURE_TYPE_B) {
>>    		struct rvce_cpb_slot *l1 = l1_slot(enc);
>>    		rvce_frame_offset(enc, l1, &luma_offset, &chroma_offset);
>>    		RVCE_CS(l1->picture_type); // encPicType
>> @@ -195,48 +217,301 @@ static void encode(struct rvce_encoder *enc)
>>    		RVCE_CS(luma_offset); // lumaOffset
>>    		RVCE_CS(chroma_offset); // chromaOffset
>>    	} else {
>> -		RVCE_CS(0x00000000); // encPicType
>> -		RVCE_CS(0x00000000); // frameNumber
>> -		RVCE_CS(0x00000000); // pictureOrderCount
>> -		RVCE_CS(0xffffffff); // lumaOffset
>> -		RVCE_CS(0xffffffff); // chromaOffset
>> +		enc->enc_pic.eo.l1_enc_pic_type = 0x00000000;
>> +		enc->enc_pic.eo.l1_frame_number = 0x00000000;
>> +		enc->enc_pic.eo.l1_picture_order_count = 0x00000000;
>> +		enc->enc_pic.eo.l1_luma_offset = 0xffffffff;
>> +		enc->enc_pic.eo.l1_chroma_offset = 0xffffffff;
>> +		RVCE_CS(enc->enc_pic.eo.l1_enc_pic_type); // encPicType
>> +		RVCE_CS(enc->enc_pic.eo.l1_frame_number); // frameNumber
>> +		RVCE_CS(enc->enc_pic.eo.l1_picture_order_count); // pictureOrderCount
>> +		RVCE_CS(enc->enc_pic.eo.l1_luma_offset); // lumaOffset
>> +		RVCE_CS(enc->enc_pic.eo.l1_chroma_offset); // chromaOffset
>>    	}
>>    
>>    	rvce_frame_offset(enc, current_slot(enc), &luma_offset, &chroma_offset);
>>    	RVCE_CS(luma_offset); // encReconstructedLumaOffset
>>    	RVCE_CS(chroma_offset); // encReconstructedChromaOffset
>> -	RVCE_CS(0x00000000); // encColocBufferOffset
>> -	RVCE_CS(0x00000000); // encReconstructedRefBasePictureLumaOffset
>> -	RVCE_CS(0x00000000); // encReconstructedRefBasePictureChromaOffset
>> -	RVCE_CS(0x00000000); // encReferenceRefBasePictureLumaOffset
>> -	RVCE_CS(0x00000000); // encReferenceRefBasePictureChromaOffset
>> -	RVCE_CS(0x00000000); // pictureCount
>> -	RVCE_CS(enc->pic.frame_num); // frameNumber
>> -	RVCE_CS(enc->pic.pic_order_cnt); // pictureOrderCount
>> -	RVCE_CS(0x00000000); // numIPicRemainInRCGOP
>> -	RVCE_CS(0x00000000); // numPPicRemainInRCGOP
>> -	RVCE_CS(0x00000000); // numBPicRemainInRCGOP
>> -	RVCE_CS(0x00000000); // numIRPicRemainInRCGOP
>> -	RVCE_CS(0x00000000); // enableIntraRefresh
>> -
>> -	RVCE_CS(0x00000000); // aq_variance_en
>> -	RVCE_CS(0x00000000); // aq_block_size
>> -	RVCE_CS(0x00000000); // aq_mb_variance_sel
>> -	RVCE_CS(0x00000000); // aq_frame_variance_sel
>> -	RVCE_CS(0x00000000); // aq_param_a
>> -	RVCE_CS(0x00000000); // aq_param_b
>> -	RVCE_CS(0x00000000); // aq_param_c
>> -	RVCE_CS(0x00000000); // aq_param_d
>> -	RVCE_CS(0x00000000); // aq_param_e
>> -
>> -	RVCE_CS(0x00000000); // contextInSFB
>> +	RVCE_CS(enc->enc_pic.eo.enc_coloc_buffer_offset); // encColocBufferOffset
>> +	RVCE_CS(enc->enc_pic.eo.enc_reconstructed_ref_base_picture_luma_offset); // encReconstructedRefBasePictureLumaOffset
>> +	RVCE_CS(enc->enc_pic.eo.enc_reconstructed_ref_base_picture_chroma_offset); // encReconstructedRefBasePictureChromaOffset
>> +	RVCE_CS(enc->enc_pic.eo.enc_reference_ref_base_picture_luma_offset); // encReferenceRefBasePictureLumaOffset
>> +	RVCE_CS(enc->enc_pic.eo.enc_reference_ref_base_picture_chroma_offset); // encReferenceRefBasePictureChromaOffset
>> +	RVCE_CS(enc->enc_pic.frame_num_cnt-1); // pictureCount
>> +	RVCE_CS(enc->enc_pic.frame_num); // frameNumber
>> +	RVCE_CS(enc->enc_pic.pic_order_cnt); // pictureOrderCount
>> +	RVCE_CS(enc->enc_pic.i_remain); // numIPicRemainInRCGOP
>> +	RVCE_CS(enc->enc_pic.p_remain); // numPPicRemainInRCGOP
>> +	RVCE_CS(enc->enc_pic.eo.num_b_pic_remain_in_rcgop); // numBPicRemainInRCGOP
>> +	RVCE_CS(enc->enc_pic.eo.num_ir_pic_remain_in_rcgop); // numIRPicRemainInRCGOP
>> +	RVCE_CS(enc->enc_pic.eo.enable_intra_refresh); // enableIntraRefresh
>> +
>> +	RVCE_CS(enc->enc_pic.eo.aq_variance_en); // aq_variance_en
>> +	RVCE_CS(enc->enc_pic.eo.aq_block_size); // aq_block_size
>> +	RVCE_CS(enc->enc_pic.eo.aq_mb_variance_sel); // aq_mb_variance_sel
>> +	RVCE_CS(enc->enc_pic.eo.aq_frame_variance_sel); // aq_frame_variance_sel
>> +	RVCE_CS(enc->enc_pic.eo.aq_param_a); // aq_param_a
>> +	RVCE_CS(enc->enc_pic.eo.aq_param_b); // aq_param_b
>> +	RVCE_CS(enc->enc_pic.eo.aq_param_c); // aq_param_c
>> +	RVCE_CS(enc->enc_pic.eo.aq_param_d); // aq_param_d
>> +	RVCE_CS(enc->enc_pic.eo.aq_param_e); // aq_param_e
>> +
>> +	RVCE_CS(enc->enc_pic.eo.context_in_sfb); // contextInSFB
>>    	RVCE_END();
>>    }
>>    
>> -void radeon_vce_52_init(struct rvce_encoder *enc)
>> +static void rate_control(struct rvce_encoder *enc)
>>    {
>> -	radeon_vce_50_init(enc);
>> +	RVCE_BEGIN(0x04000005); // rate control
>> +	RVCE_CS(enc->enc_pic.rc.rc_method); // encRateControlMethod
>> +	RVCE_CS(enc->enc_pic.rc.target_bitrate); // encRateControlTargetBitRate
>> +	RVCE_CS(enc->enc_pic.rc.peak_bitrate); // encRateControlPeakBitRate
>> +	RVCE_CS(enc->enc_pic.rc.frame_rate_num); // encRateControlFrameRateNum
>> +	RVCE_CS(enc->enc_pic.rc.gop_size); // encGOPSize
>> +	RVCE_CS(enc->enc_pic.rc.quant_i_frames); // encQP_I
>> +	RVCE_CS(enc->enc_pic.rc.quant_p_frames); // encQP_P
>> +	RVCE_CS(enc->enc_pic.rc.quant_b_frames); // encQP_B
>> +	RVCE_CS(enc->enc_pic.rc.vbv_buffer_size); // encVBVBufferSize
>> +	RVCE_CS(enc->enc_pic.rc.frame_rate_den); // encRateControlFrameRateDen
>> +	RVCE_CS(enc->enc_pic.rc.vbv_buf_lv); // encVBVBufferLevel
>> +	RVCE_CS(enc->enc_pic.rc.max_au_size); // encMaxAUSize
>> +	RVCE_CS(enc->enc_pic.rc.qp_initial_mode); // encQPInitialMode
>> +	RVCE_CS(enc->enc_pic.rc.target_bits_picture); // encTargetBitsPerPicture
>> +	RVCE_CS(enc->enc_pic.rc.peak_bits_picture_integer); // encPeakBitsPerPictureInteger
>> +	RVCE_CS(enc->enc_pic.rc.peak_bits_picture_fraction); // encPeakBitsPerPictureFractional
>> +	RVCE_CS(enc->enc_pic.rc.min_qp); // encMinQP
>> +	RVCE_CS(enc->enc_pic.rc.max_qp); // encMaxQP
>> +	RVCE_CS(enc->enc_pic.rc.skip_frame_enable); // encSkipFrameEnable
>> +	RVCE_CS(enc->enc_pic.rc.fill_data_enable); // encFillerDataEnable
>> +	RVCE_CS(enc->enc_pic.rc.enforce_hrd); // encEnforceHRD
>> +	RVCE_CS(enc->enc_pic.rc.b_pics_delta_qp); // encBPicsDeltaQP
>> +	RVCE_CS(enc->enc_pic.rc.ref_b_pics_delta_qp); // encReferenceBPicsDeltaQP
>> +	RVCE_CS(enc->enc_pic.rc.rc_reinit_disable); // encRateControlReInitDisable
>> +	RVCE_CS(enc->enc_pic.rc.enc_lcvbr_init_qp_flag); // encLCVBRInitQPFlag
>> +	RVCE_CS(enc->enc_pic.rc.lcvbrsatd_based_nonlinear_bit_budget_flag); // encLCVBRSATDBasedNonlinearBitBudgetFlag
>> +	RVCE_END();
>> +}
>> +
>> +static void config(struct rvce_encoder *enc) {
>> +	enc->task_info(enc, 0x00000002, 0, 0xffffffff, 0);
>> +	enc->rate_control(enc);
>> +	enc->config_extension(enc);
>> +	enc->motion_estimation(enc);
>> +	enc->rdo(enc);
>> +	if (enc->use_vui)
>> +		enc->vui(enc);
>> +	enc->pic_control(enc);
>> +}
>> +
>> +static void config_extension(struct rvce_encoder *enc) {
>> +	RVCE_BEGIN(0x04000001); // config extension
>> +	RVCE_CS(enc->enc_pic.ce.enc_enable_perf_logging); // encEnablePerfLogging
>> +	RVCE_END();
>> +}
>> +
>> +static void destroy(struct rvce_encoder *enc) {
>> +	enc->task_info(enc, 0x00000001, 0, 0, 0);
>> +
>> +	RVCE_BEGIN(0x02000001); // destroy
>> +	RVCE_END();
>> +}
>> +
>> +static void feedback(struct rvce_encoder *enc) {
>> +	RVCE_BEGIN(0x05000005); // feedback buffer
>> +	RVCE_WRITE(enc->fb->res->buf, enc->fb->res->domains, 0x0); // feedbackRingAddressHi/Lo
>> +	RVCE_CS(enc->enc_pic.fb.feedback_ring_size); // feedbackRingSize
>> +	RVCE_END();
>> +}
>> +
>> +static void motion_estimation(struct rvce_encoder *enc) {
>> +	RVCE_BEGIN(0x04000007); // motion estimation
>> +	RVCE_CS(enc->enc_pic.me.enc_ime_decimation_search); // encIMEDecimationSearch
>> +	RVCE_CS(enc->enc_pic.me.motion_est_half_pixel); // motionEstHalfPixel
>> +	RVCE_CS(enc->enc_pic.me.motion_est_quarter_pixel); // motionEstQuarterPixel
>> +	RVCE_CS(enc->enc_pic.me.disable_favor_pmv_point); // disableFavorPMVPoint
>> +	RVCE_CS(enc->enc_pic.me.force_zero_point_center); // forceZeroPointCenter
>> +	RVCE_CS(enc->enc_pic.me.lsmvert); // LSMVert
>> +	RVCE_CS(enc->enc_pic.me.enc_search_range_x); // encSearchRangeX
>> +	RVCE_CS(enc->enc_pic.me.enc_search_range_y); // encSearchRangeY
>> +	RVCE_CS(enc->enc_pic.me.enc_search1_range_x); // encSearch1RangeX
>> +	RVCE_CS(enc->enc_pic.me.enc_search1_range_y); // encSearch1RangeY
>> +	RVCE_CS(enc->enc_pic.me.disable_16x16_frame1); // disable16x16Frame1
>> +	RVCE_CS(enc->enc_pic.me.disable_satd); // disableSATD
>> +	RVCE_CS(enc->enc_pic.me.enable_amd); // enableAMD
>> +	RVCE_CS(enc->enc_pic.me.enc_disable_sub_mode); // encDisableSubMode
>> +	RVCE_CS(enc->enc_pic.me.enc_ime_skip_x); // encIMESkipX
>> +	RVCE_CS(enc->enc_pic.me.enc_ime_skip_y); // encIMESkipY
>> +	RVCE_CS(enc->enc_pic.me.enc_en_ime_overw_dis_subm); // encEnImeOverwDisSubm
>> +	RVCE_CS(enc->enc_pic.me.enc_ime_overw_dis_subm_no); // encImeOverwDisSubmNo
>> +	RVCE_CS(enc->enc_pic.me.enc_ime2_search_range_x); // encIME2SearchRangeX
>> +	RVCE_CS(enc->enc_pic.me.enc_ime2_search_range_y); // encIME2SearchRangeY
>> +	RVCE_CS(enc->enc_pic.me.parallel_mode_speedup_enable); // parallelModeSpeedupEnable
>> +	RVCE_CS(enc->enc_pic.me.fme0_enc_disable_sub_mode); // fme0_encDisableSubMode
>> +	RVCE_CS(enc->enc_pic.me.fme1_enc_disable_sub_mode); // fme1_encDisableSubMode
>> +	RVCE_CS(enc->enc_pic.me.ime_sw_speedup_enable); // imeSWSpeedupEnable
>> +	RVCE_END();
>> +}
>>    
>> +static void pic_control(struct rvce_encoder *enc) {
>> +	RVCE_BEGIN(0x04000002); // pic control
>> +	RVCE_CS(enc->enc_pic.pc.enc_use_constrained_intra_pred); // encUseConstrainedIntraPred
>> +	RVCE_CS(enc->enc_pic.pc.enc_cabac_enable); // encCABACEnable
>> +	RVCE_CS(enc->enc_pic.pc.enc_cabac_idc); // encCABACIDC
>> +	RVCE_CS(enc->enc_pic.pc.enc_loop_filter_disable); // encLoopFilterDisable
>> +	RVCE_CS(enc->enc_pic.pc.enc_lf_beta_offset); // encLFBetaOffset
>> +	RVCE_CS(enc->enc_pic.pc.enc_lf_alpha_c0_offset); // encLFAlphaC0Offset
>> +	RVCE_CS(enc->enc_pic.pc.enc_crop_left_offset); // encCropLeftOffset
>> +	RVCE_CS(enc->enc_pic.pc.enc_crop_right_offset); // encCropRightOffset
>> +	RVCE_CS(enc->enc_pic.pc.enc_crop_top_offset); // encCropTopOffset
>> +	RVCE_CS(enc->enc_pic.pc.enc_crop_bottom_offset); // encCropBottomOffset
>> +	RVCE_CS(enc->enc_pic.pc.enc_num_mbs_per_slice); // encNumMBsPerSlice
>> +	RVCE_CS(enc->enc_pic.pc.enc_intra_refresh_num_mbs_per_slot); // encIntraRefreshNumMBsPerSlot
>> +	RVCE_CS(enc->enc_pic.pc.enc_force_intra_refresh); // encForceIntraRefresh
>> +	RVCE_CS(enc->enc_pic.pc.enc_force_imb_period); // encForceIMBPeriod
>> +	RVCE_CS(enc->enc_pic.pc.enc_pic_order_cnt_type); // encPicOrderCntType
>> +	RVCE_CS(enc->enc_pic.pc.log2_max_pic_order_cnt_lsb_minus4); // log2_max_pic_order_cnt_lsb_minus4
>> +	RVCE_CS(enc->enc_pic.pc.enc_sps_id); // encSPSID
>> +	RVCE_CS(enc->enc_pic.pc.enc_pps_id); // encPPSID
>> +	RVCE_CS(enc->enc_pic.pc.enc_constraint_set_flags); // encConstraintSetFlags
>> +	RVCE_CS(enc->enc_pic.pc.enc_b_pic_pattern); // encBPicPattern
>> +	RVCE_CS(enc->enc_pic.pc.weight_pred_mode_b_picture); // weightPredModeBPicture
>> +	RVCE_CS(enc->enc_pic.pc.enc_number_of_reference_frames); // encNumberOfReferenceFrames
>> +	RVCE_CS(enc->enc_pic.pc.enc_max_num_ref_frames); // encMaxNumRefFrames
>> +	RVCE_CS(enc->enc_pic.pc.enc_num_default_active_ref_l0); // encNumDefaultActiveRefL0
>> +	RVCE_CS(enc->enc_pic.pc.enc_num_default_active_ref_l1); // encNumDefaultActiveRefL1
>> +	RVCE_CS(enc->enc_pic.pc.enc_slice_mode); // encSliceMode
>> +	RVCE_CS(enc->enc_pic.pc.enc_max_slice_size); // encMaxSliceSize
>> +	RVCE_END();
>> +}
>> +
>> +static void rdo(struct rvce_encoder *enc) {
>> +	RVCE_BEGIN(0x04000008); // rdo
>> +	RVCE_CS(enc->enc_pic.rdo.enc_disable_tbe_pred_i_frame); // encDisableTbePredIFrame
>> +	RVCE_CS(enc->enc_pic.rdo.enc_disable_tbe_pred_p_frame); // encDisableTbePredPFrame
>> +	RVCE_CS(enc->enc_pic.rdo.use_fme_interpol_y); // useFmeInterpolY
>> +	RVCE_CS(enc->enc_pic.rdo.use_fme_interpol_uv); // useFmeInterpolUV
>> +	RVCE_CS(enc->enc_pic.rdo.use_fme_intrapol_y); // useFmeIntrapolY
>> +	RVCE_CS(enc->enc_pic.rdo.use_fme_intrapol_uv); // useFmeIntrapolUV
>> +	RVCE_CS(enc->enc_pic.rdo.use_fme_interpol_y_1); // useFmeInterpolY_1
>> +	RVCE_CS(enc->enc_pic.rdo.use_fme_interpol_uv_1); // useFmeInterpolUV_1
>> +	RVCE_CS(enc->enc_pic.rdo.use_fme_intrapol_y_1); // useFmeIntrapolY_1
>> +	RVCE_CS(enc->enc_pic.rdo.use_fme_intrapol_uv_1); // useFmeIntrapolUV_1
>> +	RVCE_CS(enc->enc_pic.rdo.enc_16x16_cost_adj); // enc16x16CostAdj
>> +	RVCE_CS(enc->enc_pic.rdo.enc_skip_cost_adj); // encSkipCostAdj
>> +	RVCE_CS(enc->enc_pic.rdo.enc_force_16x16_skip); // encForce16x16skip
>> +	RVCE_CS(enc->enc_pic.rdo.enc_disable_threshold_calc_a); // encDisableThresholdCalcA
>> +	RVCE_CS(enc->enc_pic.rdo.enc_luma_coeff_cost); // encLumaCoeffCost
>> +	RVCE_CS(enc->enc_pic.rdo.enc_luma_mb_coeff_cost); // encLumaMBCoeffCost
>> +	RVCE_CS(enc->enc_pic.rdo.enc_chroma_coeff_cost); // encChromaCoeffCost
>> +	RVCE_END();
>> +}
>> +
>> +static void session(struct rvce_encoder *enc) {
>> +	RVCE_BEGIN(0x00000001); // session cmd
>> +	RVCE_CS(enc->stream_handle);
>> +	RVCE_END();
>> +}
>> +
>> +static void task_info(struct rvce_encoder *enc, uint32_t op,
>> +					  uint32_t dep, uint32_t fb_idx, uint32_t ring_idx) {
>> +	RVCE_BEGIN(0x00000002); // task info
>> +	if (op == 0x3) {
>> +		if (enc->task_info_idx) {
>> +			uint32_t offs = enc->cs->current.cdw - enc->task_info_idx + 3;
>> +			// Update offsetOfNextTaskInfo
>> +			enc->cs->current.buf[enc->task_info_idx] = offs;
>> +		}
>> +		enc->task_info_idx = enc->cs->current.cdw;
>> +	}
>> +	enc->enc_pic.ti.task_operation = op;
>> +	enc->enc_pic.ti.reference_picture_dependency = dep;
>> +	enc->enc_pic.ti.feedback_index = fb_idx;
>> +	enc->enc_pic.ti.video_bitstream_ring_index = ring_idx;
>> +	RVCE_CS(enc->enc_pic.ti.offset_of_next_task_info); // offsetOfNextTaskInfo
>> +	RVCE_CS(enc->enc_pic.ti.task_operation); // taskOperation
>> +	RVCE_CS(enc->enc_pic.ti.reference_picture_dependency); // referencePictureDependency
>> +	RVCE_CS(enc->enc_pic.ti.collocate_flag_dependency); // collocateFlagDependency
>> +	RVCE_CS(enc->enc_pic.ti.feedback_index); // feedbackIndex
>> +	RVCE_CS(enc->enc_pic.ti.video_bitstream_ring_index); // videoBitstreamRingIndex
>> +	RVCE_END();
>> +}
>> +
>> +static void vui(struct rvce_encoder *enc) {
>> +	int i;
>> +
>> +	if (!enc->enc_pic.enable_vui)
>> +		return;
>> +
>> +	RVCE_BEGIN(0x04000009); // vui
>> +	RVCE_CS(enc->enc_pic.vui.aspect_ratio_info_present_flag); //aspectRatioInfoPresentFlag
>> +	RVCE_CS(enc->enc_pic.vui.aspect_ratio_idc); //aspectRatioInfo.aspectRatioIdc
>> +	RVCE_CS(enc->enc_pic.vui.sar_width); //aspectRatioInfo.sarWidth
>> +	RVCE_CS(enc->enc_pic.vui.sar_height); //aspectRatioInfo.sarHeight
>> +	RVCE_CS(enc->enc_pic.vui.overscan_info_present_flag); //overscanInfoPresentFlag
>> +	RVCE_CS(enc->enc_pic.vui.overscan_Approp_flag); //overScanInfo.overscanAppropFlag
>> +	RVCE_CS(enc->enc_pic.vui.video_signal_type_present_flag); //videoSignalTypePresentFlag
>> +	RVCE_CS(enc->enc_pic.vui.video_format); //videoSignalTypeInfo.videoFormat
>> +	RVCE_CS(enc->enc_pic.vui.video_full_range_flag); //videoSignalTypeInfo.videoFullRangeFlag
>> +	RVCE_CS(enc->enc_pic.vui.color_description_present_flag); //videoSignalTypeInfo.colorDescriptionPresentFlag
>> +	RVCE_CS(enc->enc_pic.vui.color_prim); //videoSignalTypeInfo.colorPrim
>> +	RVCE_CS(enc->enc_pic.vui.transfer_char); //videoSignalTypeInfo.transferChar
>> +	RVCE_CS(enc->enc_pic.vui.matrix_coef); //videoSignalTypeInfo.matrixCoef
>> +	RVCE_CS(enc->enc_pic.vui.chroma_loc_info_present_flag); //chromaLocInfoPresentFlag
>> +	RVCE_CS(enc->enc_pic.vui.chroma_loc_top); //chromaLocInfo.chromaLocTop
>> +	RVCE_CS(enc->enc_pic.vui.chroma_loc_bottom); //chromaLocInfo.chromaLocBottom
>> +	RVCE_CS(enc->enc_pic.vui.timing_info_present_flag); //timingInfoPresentFlag
>> +	RVCE_CS(enc->enc_pic.vui.num_units_in_tick); //timingInfo.numUnitsInTick
>> +	RVCE_CS(enc->enc_pic.vui.time_scale); //timingInfo.timeScale;
>> +	RVCE_CS(enc->enc_pic.vui.fixed_frame_rate_flag); //timingInfo.fixedFrameRateFlag
>> +	RVCE_CS(enc->enc_pic.vui.nal_hrd_parameters_present_flag); //nalHRDParametersPresentFlag
>> +	RVCE_CS(enc->enc_pic.vui.cpb_cnt_minus1); //hrdParam.cpbCntMinus1
>> +	RVCE_CS(enc->enc_pic.vui.bit_rate_scale); //hrdParam.bitRateScale
>> +	RVCE_CS(enc->enc_pic.vui.cpb_size_scale); //hrdParam.cpbSizeScale
>> +	for (i = 0; i < 32; i++) {
>> +		RVCE_CS(enc->enc_pic.vui.bit_rate_value_minus); //hrdParam.bitRateValueMinus
>> +		RVCE_CS(enc->enc_pic.vui.cpb_size_value_minus); //hrdParam.cpbSizeValueMinus
>> +		RVCE_CS(enc->enc_pic.vui.cbr_flag); //hrdParam.cbrFlag
>> +	}
>> +	RVCE_CS(enc->enc_pic.vui.initial_cpb_removal_delay_length_minus1); //hrdParam.initialCpbRemovalDelayLengthMinus1
>> +	RVCE_CS(enc->enc_pic.vui.cpb_removal_delay_length_minus1); //hrdParam.cpbRemovalDelayLengthMinus1
>> +	RVCE_CS(enc->enc_pic.vui.dpb_output_delay_length_minus1); //hrdParam.dpbOutputDelayLengthMinus1
>> +	RVCE_CS(enc->enc_pic.vui.time_offset_length); //hrdParam.timeOffsetLength
>> +	RVCE_CS(enc->enc_pic.vui.low_delay_hrd_flag); //lowDelayHRDFlag
>> +	RVCE_CS(enc->enc_pic.vui.pic_struct_present_flag); //picStructPresentFlag
>> +	RVCE_CS(enc->enc_pic.vui.bitstream_restriction_present_flag); //bitstreamRestrictionPresentFlag
>> +	RVCE_CS(enc->enc_pic.vui.motion_vectors_over_pic_boundaries_flag); //bitstreamRestrictions.motionVectorsOverPicBoundariesFlag
>> +	RVCE_CS(enc->enc_pic.vui.max_bytes_per_pic_denom); //bitstreamRestrictions.maxBytesPerPicDenom
>> +	RVCE_CS(enc->enc_pic.vui.max_bits_per_mb_denom); //bitstreamRestrictions.maxBitsPerMbDenom
>> +	RVCE_CS(enc->enc_pic.vui.log2_max_mv_length_hori); //bitstreamRestrictions.log2MaxMvLengthHori
>> +	RVCE_CS(enc->enc_pic.vui.log2_max_mv_length_vert); //bitstreamRestrictions.log2MaxMvLengthVert
>> +	RVCE_CS(enc->enc_pic.vui.num_reorder_frames); //bitstreamRestrictions.numReorderFrames
>> +	RVCE_CS(enc->enc_pic.vui.max_dec_frame_buffering); //bitstreamRestrictions.maxDecFrameBuffering
>> +	RVCE_END();
>> +}
>> +
>> +void radeon_vce_52_init(struct rvce_encoder *enc) {
>> +	enc->session = session;
>> +	enc->task_info = task_info;
>>    	enc->create = create;
>> +	enc->feedback = feedback;
>> +	enc->rate_control = rate_control;
>> +	enc->config_extension = config_extension;
>> +	enc->pic_control = pic_control;
>> +	enc->motion_estimation = motion_estimation;
>> +	enc->rdo = rdo;
>> +	enc->vui = vui;
>> +	enc->config = config;
>>    	enc->encode = encode;
>> +	enc->destroy = destroy;
>>    }


