[PATCH 5/6] drm/amdgpu: implement grab dedicated vmid V2
Christian König
deathsimple at vodafone.de
Thu Apr 27 09:14:52 UTC 2017
On 27.04.2017 06:42, zhoucm1 wrote:
>
>
> On 2017-04-27 10:52, Zhang, Jerry (Junwei) wrote:
>> On 04/26/2017 07:10 PM, Chunming Zhou wrote:
>>> v2: wait on sync only when a flush is needed
>>>
>>> Change-Id: I64da2701c9fdcf986afb90ba1492a78d5bef1b6c
>>> Signed-off-by: Chunming Zhou <David1.Zhou at amd.com>
>>> ---
>>> drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 61 ++++++++++++++++++++++++++++++++++
>>> 1 file changed, 61 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> index 214ac50..bce7701 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> @@ -402,6 +402,63 @@ static bool amdgpu_vm_dedicated_vmid_ready(struct amdgpu_vm *vm, unsigned vmhub)
>>> return !!vm->dedicated_vmid[vmhub];
>>> }
>>>
>>> +static int amdgpu_vm_grab_dedicated_vmid(struct amdgpu_vm *vm,
>>> + struct amdgpu_ring *ring,
>>> + struct amdgpu_sync *sync,
>>> + struct fence *fence,
>>> + struct amdgpu_job *job)
>>> +{
>>> + struct amdgpu_device *adev = ring->adev;
>>> + unsigned vmhub = ring->funcs->vmhub;
>>> + struct amdgpu_vm_id *id = vm->dedicated_vmid[vmhub];
>>> + struct amdgpu_vm_id_manager *id_mgr = &adev->vm_manager.id_mgr[vmhub];
>>> + struct fence *updates = sync->last_vm_update;
>>> + int r = 0;
>>> + struct fence *flushed, *tmp;
>>> + bool needs_flush = false;
>>> +
>>> + mutex_lock(&id_mgr->lock);
>>> + if (amdgpu_vm_had_gpu_reset(adev, id))
>>> + needs_flush = true;
>>> +
>>> + flushed = id->flushed_updates;
>>> + if (updates && (!flushed || updates->context != flushed->context ||
>>> + fence_is_later(updates, flushed)))
>>> + needs_flush = true;
>>
>> Just a question:
>> Do we need to consider concurrent flush for Vega10, like the grab id
>> function does?
> Christian has pointed out that older ASICs have a hardware bug.
Which is fixed on Vega10; on that hardware concurrent flushing works fine.
It's just that Tonga/Fiji have a real problem with it, and for
CIK/Polaris it's a more subtle one which is hard to trigger.
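
For illustration, a rough sketch of how that wait could be gated on a
per-ASIC capability instead of being done unconditionally. The helper
name and the CHIP_VEGA10 cutoff are assumptions for the example, not
part of this patch:

    /* Hypothetical helper: true when the ASIC can flush a VMID while
     * other submissions are still using it (assumed here, per the
     * discussion above, to hold from Vega10 onward).
     */
    static bool amdgpu_vm_concurrent_flush_ok(struct amdgpu_device *adev)
    {
            return adev->asic_type >= CHIP_VEGA10;
    }

            /* In amdgpu_vm_grab_dedicated_vmid(): only wait for the
             * other users of the VMID when concurrent flushing is not
             * safe on this ASIC.
             */
            if (needs_flush && !amdgpu_vm_concurrent_flush_ok(adev)) {
                    tmp = amdgpu_sync_get_fence(&id->active);
                    if (tmp) {
                            r = amdgpu_sync_fence(adev, sync, tmp);
                            fence_put(tmp);
                            mutex_unlock(&id_mgr->lock);
                            return r;
                    }
            }

With something like that, Vega10 would not serialize on the VMID's
active fences just because the page table updates still need to be
flushed.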
Regards,
Christian.
>
> Regards,
> David Zhou
>>
>> Jerry
>>
>>> + if (needs_flush) {
>>> + tmp = amdgpu_sync_get_fence(&id->active);
>>> + if (tmp) {
>>> + r = amdgpu_sync_fence(adev, sync, tmp);
>>> + fence_put(tmp);
>>> + mutex_unlock(&id_mgr->lock);
>>> + return r;
>>> + }
>>> + }
>>> +
>>> + /* Good we can use this VMID. Remember this submission as
>>> + * user of the VMID.
>>> + */
>>> + r = amdgpu_sync_fence(ring->adev, &id->active, fence);
>>> + if (r)
>>> + goto out;
>>> +
>>> + if (updates && (!flushed || updates->context != flushed->context ||
>>> + fence_is_later(updates, flushed))) {
>>> + fence_put(id->flushed_updates);
>>> + id->flushed_updates = fence_get(updates);
>>> + }
>>> + id->pd_gpu_addr = job->vm_pd_addr;
>>> + id->current_gpu_reset_count = atomic_read(&adev->gpu_reset_counter);
>>> + atomic64_set(&id->owner, vm->client_id);
>>> + job->vm_needs_flush = needs_flush;
>>> +
>>> + job->vm_id = id - id_mgr->ids;
>>> + trace_amdgpu_vm_grab_id(vm, ring, job);
>>> +out:
>>> + mutex_unlock(&id_mgr->lock);
>>> + return r;
>>> +}
>>> +
>>> /**
>>> * amdgpu_vm_grab_id - allocate the next free VMID
>>> *
>>> @@ -426,6 +483,10 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
>>> unsigned i;
>>> int r = 0;
>>>
>>> + if (amdgpu_vm_dedicated_vmid_ready(vm, vmhub))
>>> + return amdgpu_vm_grab_dedicated_vmid(vm, ring, sync,
>>> + fence, job);
>>> +
>>> fences = kmalloc_array(sizeof(void *), id_mgr->num_ids, GFP_KERNEL);
>>> if (!fences)
>>> return -ENOMEM;
>>>