[PATCH libdrm 1/2] amdgpu: add the interface of waiting multiple fences

Nicolai Hähnle nhaehnle at gmail.com
Tue Apr 18 18:28:01 UTC 2017


On 18.04.2017 17:47, Edward O'Callaghan wrote:
> On 04/14/2017 12:47 AM, Nicolai Hähnle wrote:
>> From: Nicolai Hähnle <nicolai.haehnle at amd.com>
>>
>> Signed-off-by: Junwei Zhang <Jerry.Zhang at amd.com>
>> [v2: allow returning the first signaled fence index]
>> Signed-off-by: monk.liu <Monk.Liu at amd.com>
>> [v3:
>>  - cleanup *status setting
>>  - fix amdgpu symbols check]
>> Signed-off-by: Nicolai Hähnle <nicolai.haehnle at amd.com>
>> Reviewed-by: Christian König <christian.koenig at amd.com> (v1)
>> Reviewed-by: Jammy Zhou <Jammy.Zhou at amd.com> (v1)
>> ---
>>  amdgpu/amdgpu-symbol-check |  1 +
>>  amdgpu/amdgpu.h            | 23 ++++++++++++++
>>  amdgpu/amdgpu_cs.c         | 74 ++++++++++++++++++++++++++++++++++++++++++++++
>>  3 files changed, 98 insertions(+)
>>
[snip]
>> +static int amdgpu_ioctl_wait_fences(struct amdgpu_cs_fence *fences,
>> +				    uint32_t fence_count,
>> +				    bool wait_all,
>> +				    uint64_t timeout_ns,
>> +				    uint32_t *status,
>> +				    uint32_t *first)
>> +{
>> +	struct drm_amdgpu_fence *drm_fences;
>> +	amdgpu_device_handle dev = fences[0].context->dev;
>> +	union drm_amdgpu_wait_fences args;
>> +	int r;
>> +	uint32_t i;
>> +
>> +	drm_fences = alloca(sizeof(struct drm_amdgpu_fence) * fence_count);
>> +	for (i = 0; i < fence_count; i++) {
>> +		drm_fences[i].ctx_id = fences[i].context->id;
>> +		drm_fences[i].ip_type = fences[i].ip_type;
>> +		drm_fences[i].ip_instance = fences[i].ip_instance;
>> +		drm_fences[i].ring = fences[i].ring;
>> +		drm_fences[i].seq_no = fences[i].fence;
>> +	}
>> +
>> +	memset(&args, 0, sizeof(args));
>> +	args.in.fences = (uint64_t)(uintptr_t)drm_fences;
>> +	args.in.fence_count = fence_count;
>> +	args.in.wait_all = wait_all;
>> +	args.in.timeout_ns = amdgpu_cs_calculate_timeout(timeout_ns);
>> +
>> +	r = drmIoctl(dev->fd, DRM_IOCTL_AMDGPU_WAIT_FENCES, &args);
>> +	if (r)
>> +		return -errno;
>
> Hi Nicolai,
>
> you will leak drm_fences here on the error branch.

It's an alloca, so it's automatically freed when the function returns.


>> +
>> +	*status = args.out.status;
>> +
>> +	if (first)
>> +		*first = args.out.first_signaled;
>> +
>> +	return 0;
>> +}
>> +
>> +int amdgpu_cs_wait_fences(struct amdgpu_cs_fence *fences,
>> +			  uint32_t fence_count,
>> +			  bool wait_all,
>> +			  uint64_t timeout_ns,
>> +			  uint32_t *status,
>> +			  uint32_t *first)
>> +{
>> +	uint32_t i;
>> +	int r;
>
> no need for an intermediate ret, just return amdgpu_ioctl_wait_fences()
> directly?

Good point, I'll change that before I push.


>> +
>> +	/* Sanity check */
>> +	if (NULL == fences)
>> +		return -EINVAL;
>> +	if (NULL == status)
>> +		return -EINVAL;
>> +	if (fence_count <= 0)
>> +		return -EINVAL;
>
> may as well combine these branches?
>
> if (!fences || !status || !fence_count)
> 	return -EINVAL;
>
> as fence_count is unsigned.

Yeah, that makes some sense, but I decided to keep the separate 
if-statements because other functions are written like this as well.

Thanks,
Nicolai



>
> Kind Regards,
> Edward.
>
>> +	for (i = 0; i < fence_count; i++) {
>> +		if (NULL == fences[i].context)
>> +			return -EINVAL;
>> +		if (fences[i].ip_type >= AMDGPU_HW_IP_NUM)
>> +			return -EINVAL;
>> +		if (fences[i].ring >= AMDGPU_CS_MAX_RINGS)
>> +			return -EINVAL;
>> +	}
>> +
>> +	*status = 0;
>> +
>> +	r = amdgpu_ioctl_wait_fences(fences, fence_count, wait_all, timeout_ns,
>> +				     status, first);
>> +
>> +	return r;
>> +}
>> +
>>  int amdgpu_cs_create_semaphore(amdgpu_semaphore_handle *sem)
>>  {
>>  	struct amdgpu_semaphore *gpu_semaphore;
>>
>>  	if (NULL == sem)
>>  		return -EINVAL;
>>
>>  	gpu_semaphore = calloc(1, sizeof(struct amdgpu_semaphore));
>>  	if (NULL == gpu_semaphore)
>>  		return -ENOMEM;
>>
>


-- 
Learn what the world really is,
but never forget what it ought to be.


More information about the amd-gfx mailing list