[PATCH 3/3] drm/amdgpu: share scheduler score on VCN3 instances

Christian König ckoenig.leichtzumerken at gmail.com
Fri Feb 5 09:58:52 UTC 2021


Alex, how do we want to merge this?

I've just pushed the first patch to drm-misc-next, since it needed a
rebase because it touches other drivers as well.

But the rest is really AMD-specific, and I'm not sure whether the
changes it depends on are already in there as well.

So if I push it to drm-misc-next, you will probably need to handle a
merge; and if I push it to amd-staging-drm-next, somebody else might
need to handle the merge when drm-misc-next is merged.

Ideas?

Christian.

On 04.02.21 at 19:34, Leo Liu wrote:
> The series is:
>
> Reviewed-and-Tested-by: Leo Liu <leo.liu at amd.com>
>
>
> On 2021-02-04 9:44 a.m., Christian König wrote:
>> The VCN3 instances can do both decode and encode.
>>
>> Share the scheduler load balancing score and remove the code that
>> fixed encode to only the second instance.
>>
>> Signed-off-by: Christian König <christian.koenig at amd.com>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h |  1 +
>>   drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c   | 11 +++++++----
>>   2 files changed, 8 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
>> index 13aa417f6be7..d10bc4f0a05f 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
>> @@ -211,6 +211,7 @@ struct amdgpu_vcn_inst {
>>       void            *saved_bo;
>>       struct amdgpu_ring    ring_dec;
>>       struct amdgpu_ring    ring_enc[AMDGPU_VCN_MAX_ENC_RINGS];
>> +    atomic_t        sched_score;
>>       struct amdgpu_irq_src    irq;
>>       struct amdgpu_vcn_reg    external;
>>       struct amdgpu_bo    *dpg_sram_bo;
>> diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
>> index 239a4eb52c61..b33f513fd2ac 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
>> @@ -171,6 +171,7 @@ static int vcn_v3_0_sw_init(void *handle)
>>         for (i = 0; i < adev->vcn.num_vcn_inst; i++) {
>>           volatile struct amdgpu_fw_shared *fw_shared;
>> +
>>           if (adev->vcn.harvest_config & (1 << i))
>>               continue;
>>
>> @@ -198,6 +199,8 @@ static int vcn_v3_0_sw_init(void *handle)
>>           if (r)
>>               return r;
>>
>> +        atomic_set(&adev->vcn.inst[i].sched_score, 0);
>> +
>>           ring = &adev->vcn.inst[i].ring_dec;
>>           ring->use_doorbell = true;
>>           if (amdgpu_sriov_vf(adev)) {
>> @@ -209,7 +212,8 @@ static int vcn_v3_0_sw_init(void *handle)
>>               ring->no_scheduler = true;
>>           sprintf(ring->name, "vcn_dec_%d", i);
>>           r = amdgpu_ring_init(adev, ring, 512, &adev->vcn.inst[i].irq, 0,
>> -                     AMDGPU_RING_PRIO_DEFAULT, NULL);
>> +                     AMDGPU_RING_PRIO_DEFAULT,
>> +                     &adev->vcn.inst[i].sched_score);
>>           if (r)
>>               return r;
>>
>> @@ -227,11 +231,10 @@ static int vcn_v3_0_sw_init(void *handle)
>>               } else {
>>                   ring->doorbell_index = (adev->doorbell_index.vcn.vcn_ring0_1 << 1) + 2 + j + 8 * i;
>>               }
>> -            if (adev->asic_type == CHIP_SIENNA_CICHLID && i != 1)
>> -                ring->no_scheduler = true;
>>               sprintf(ring->name, "vcn_enc_%d.%d", i, j);
>>               r = amdgpu_ring_init(adev, ring, 512, &adev->vcn.inst[i].irq, 0,
>> -                         AMDGPU_RING_PRIO_DEFAULT, NULL);
>> +                         AMDGPU_RING_PRIO_DEFAULT,
>> +                         &adev->vcn.inst[i].sched_score);
>>               if (r)
>>                   return r;
>>           }
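
For readers not familiar with the scheduler-side mechanics: as I
understand the series, the first patch lets a driver hand an external
atomic score to the GPU scheduler, and entity placement then prefers
the scheduler instance with the lowest score. This patch makes the
decode ring and all encode rings of a VCN instance feed one shared
counter, so work spreads across instances instead of encode being
pinned to the second instance on Sienna Cichlid. Below is a minimal,
self-contained userspace sketch of that load-balancing idea; it is
not the drm/sched code, and all names in it (vcn_inst, pick_best,
submit_job, complete_job) are made up for illustration:

    /* Simplified model of score-based load balancing: each VCN
     * instance keeps one atomic score shared by all of its rings,
     * and submission picks the instance with the lowest score. */
    #include <stdatomic.h>
    #include <stdio.h>

    #define NUM_VCN_INST 2

    struct vcn_inst {
            /* shared by the decode ring and all encode rings */
            atomic_int sched_score;
    };

    static struct vcn_inst inst[NUM_VCN_INST];

    /* pick the instance with the least outstanding work */
    static struct vcn_inst *pick_best(void)
    {
            struct vcn_inst *best = &inst[0];

            for (int i = 1; i < NUM_VCN_INST; i++)
                    if (atomic_load(&inst[i].sched_score) <
                        atomic_load(&best->sched_score))
                            best = &inst[i];
            return best;
    }

    /* submission bumps the shared score ... */
    static struct vcn_inst *submit_job(void)
    {
            struct vcn_inst *v = pick_best();

            atomic_fetch_add(&v->sched_score, 1);
            return v;
    }

    /* ... and completion drops it again */
    static void complete_job(struct vcn_inst *v)
    {
            atomic_fetch_sub(&v->sched_score, 1);
    }

    int main(void)
    {
            struct vcn_inst *job[4];

            /* four overlapping jobs alternate between instances */
            for (int i = 0; i < 4; i++) {
                    job[i] = submit_job();
                    printf("job %d -> VCN instance %td\n",
                           i, job[i] - inst);
            }
            for (int i = 0; i < 4; i++)
                    complete_job(job[i]);
            return 0;
    }

With two instances, the four overlapping submissions alternate
0, 1, 0, 1, because each submission raises the score of the instance
it lands on until the corresponding job completes.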


