[PATCH] drm/amdgpu: add kernel DOC for ioctls in amdgpu_cs.c file

Alex Deucher alexdeucher at gmail.com
Thu May 31 15:43:11 UTC 2018


On Wed, May 30, 2018 at 2:42 PM, Leo Liu <leo.liu at amd.com> wrote:
> There are four ioctls in this file, and the DOC comments give details of
> the data structures for each ioctl and their functionality.
>
> Signed-off-by: Leo Liu <leo.liu at amd.com>

A few comments below about readability.  With those fixed:
Reviewed-by: Alex Deucher <alexander.deucher at amd.com>

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 142 +++++++++++++++++++++++++
>  1 file changed, 142 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index 12f0d18c6ee8..343ff115cff1 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -1217,6 +1217,49 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
>         return 0;
>  }
>
> +/**
> + * DOC:  amdgpu_cs_ioctl
> + *
> + * This ioctl processes user space command submission chunks,

Maybe give an overview of why we use the ioctl.  Something like:
The CS (Command Submission) ioctl is used to submit command buffers to
the kernel for scheduling on hw queues.  The chunk interface gives us
the flexibility to add new features to the ioctl by simply adding a
new chunk type.
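
For readers who want the user space side, a minimal single-IB submission
would look roughly like this (an untested sketch against the amdgpu_drm.h
uapi; fd is an open device fd, ctx_id and bo_list_handle come from the CTX
and BO_LIST ioctls, and ib_info is a struct drm_amdgpu_cs_chunk_ib as
sketched further down):

    struct drm_amdgpu_cs_chunk chunk = {
            .chunk_id   = AMDGPU_CHUNK_ID_IB,
            .length_dw  = sizeof(struct drm_amdgpu_cs_chunk_ib) / 4,
            .chunk_data = (uintptr_t)&ib_info,
    };
    /* in.chunks points to an array of __u64 addresses, one per chunk */
    __u64 chunk_ptrs[] = { (uintptr_t)&chunk };

    union drm_amdgpu_cs cs = {0};
    cs.in.ctx_id         = ctx_id;
    cs.in.bo_list_handle = bo_list_handle;
    cs.in.num_chunks     = 1;
    cs.in.chunks         = (uintptr_t)chunk_ptrs;

    if (drmIoctl(fd, DRM_IOCTL_AMDGPU_CS, &cs) == 0)
            seq_no = cs.out.handle;  /* fence sequence number for this CS */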

> + * and returns a fence sequence number associated with an amdgpu fence.
> + *
> + * In data structure:
> + *
> + * __u32 ctx_id:
> + * Integer ID, created via the DRM_AMDGPU_CTX ioctl when the user space
> + * command submission context is created. It identifies that context in
> + * later command submissions.

Required for all command submissions.  The kernel uses this to track
dependencies and guilt for command submissions that lead to an engine
lock up.
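
For context, ctx_id typically comes from an alloc call along these lines
(rough, untested sketch; fd is an open device fd):

    union drm_amdgpu_ctx ctx = {0};
    ctx.in.op = AMDGPU_CTX_OP_ALLOC_CTX;

    if (drmIoctl(fd, DRM_IOCTL_AMDGPU_CTX, &ctx) == 0)
            ctx_id = ctx.out.alloc.ctx_id;  /* reused for every CS on this context */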

> + *
> + * __u32 bo_list_handle:
> + * Handle of the resource list associated with this CS, created via the
> + * DRM_AMDGPU_BO_LIST ioctl before command submission; the BOs in the
> + * list will be validated.

to make sure they are resident when the command buffer executes.
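
The bo_list_handle would typically be created along these lines (untested
sketch; ib_bo_handle and data_bo_handle are placeholder GEM handles):

    struct drm_amdgpu_bo_list_entry entries[] = {
            { .bo_handle = ib_bo_handle,   .bo_priority = 0 },
            { .bo_handle = data_bo_handle, .bo_priority = 0 },
    };

    union drm_amdgpu_bo_list args = {0};
    args.in.operation    = AMDGPU_BO_LIST_OP_CREATE;
    args.in.bo_number    = 2;
    args.in.bo_info_size = sizeof(struct drm_amdgpu_bo_list_entry);
    args.in.bo_info_ptr  = (uintptr_t)entries;

    if (drmIoctl(fd, DRM_IOCTL_AMDGPU_BO_LIST, &args) == 0)
            bo_list_handle = args.out.list_handle;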

> + *
> + * __u32 num_chunks:
> + * Number of chunks; their types include:
> + * AMDGPU_CHUNK_ID_IB
> + * The data will be filled into an IB buffer and mapped to a HW ring.
> + *

Expand this a bit.  E.g.,
An IB (Indirect Buffer) is a stand-alone command buffer that can be
scheduled for fetching and execution on a hw queue.
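
The IB chunk payload is a struct drm_amdgpu_cs_chunk_ib; filled in, it
looks roughly like this (untested sketch; ib_gpu_va and ib_size are
placeholders for the IB's GPU VA mapping and packet size):

    struct drm_amdgpu_cs_chunk_ib ib_info = {0};
    ib_info.va_start    = ib_gpu_va;          /* GPU VA where the IB BO is mapped */
    ib_info.ib_bytes    = ib_size;            /* size of the packets, in bytes */
    ib_info.ip_type     = AMDGPU_HW_IP_GFX;   /* which engine type to run on */
    ib_info.ip_instance = 0;
    ib_info.ring        = 0;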


> + * AMDGPU_CHUNK_ID_FENCE
> + * The data will be used to find the user fence BO and its offset.
> + *

Expand this a bit.  E.g., add something like:
Allows user space to specify their own fences within the command stream.
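
The fence chunk payload is just the BO handle plus an offset into it,
roughly (untested sketch; user_fence_bo_handle is a placeholder GEM handle):

    struct drm_amdgpu_cs_chunk_fence fence_info = {0};
    fence_info.handle = user_fence_bo_handle;  /* GEM handle of the user fence BO */
    fence_info.offset = 0;                     /* offset of the fence value in that BO */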

> + * AMDGPU_CHUNK_ID_DEPENDENCIES
> + * AMDGPU_CHUNK_ID_SYNCOBJ_IN
> + * AMDGPU_CHUNK_ID_SYNCOBJ_OUT
> + * These will be parsed as fence dependencies for the given requirement,
> + * and will be remembered and synced against later.
> + *
> + * __u32 _pad:
> + *
> + * __u64 chunks:
> + * Point to the CS chunks.

Pointer

> + *
> + * amdgpu_cs_submit() function will be called to initialize a scheduler
> + * job, and associate it to a HW ring, add a new fence to the context,

associate it with a

> + * and then push the job to the queue for the scheduler to process;
> + * the fence sequence number is returned to user space.
> + *
> + */
>  int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
>  {
>         struct amdgpu_device *adev = dev->dev_private;
> @@ -1272,6 +1315,38 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
>         return r;
>  }
>
> +/**
> + * DOC:  amdgpu_cs_wait_ioctl
> + *
> + * This ioctl checks a user space fence associated to a amdgpu fence whether
> + * it's signaled or waits to be signaled till timeout with kernel dma fence.

This reads sort of oddly.  How about:
The CS (Command Submission) wait ioctl waits for the specified fence
on the specified hw queue to signal or for the timeout, whichever
comes first.
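
From user space this would look roughly like the following (untested
sketch; seq_no is cs.out.handle from the CS ioctl, timeout_ns is a
placeholder for the absolute timeout):

    union drm_amdgpu_wait_cs wait = {0};
    wait.in.handle      = seq_no;
    wait.in.ctx_id      = ctx_id;
    wait.in.ip_type     = AMDGPU_HW_IP_GFX;
    wait.in.ip_instance = 0;
    wait.in.ring        = 0;
    wait.in.timeout     = timeout_ns;   /* absolute timeout, in ns */

    if (drmIoctl(fd, DRM_IOCTL_AMDGPU_WAIT_CS, &wait) == 0)
            busy = wait.out.status;     /* 0 = CS completed, non-zero = still busy */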

> + * Once signaled, the associated CS is completed.
> + *
> + * In data structure:
> + *
> + * __u64 handle:
> + * The fence sequence number returned by amdgpu_cs_ioctl.
> + *
> + * __u64 timeout:
> + * Absolute timeout to wait.
> + *
> + * __u32 ip_type:
> + * __u32 ip_instance:
> + * __u32 ring:
> + * These map the user space ring to a kernel HW ring; the seq (handle) is
> + * then used to find the amdgpu fence to be checked and waited on.
> + *
> + * __u32 ctx_id:
> + * ID for command submission context
> + *
> + * Out data:
> + *
> + * __u64 status:
> + * 0 CS completed
> + * 1 CS busy
> + *
> + */
> +
>  /**
>   * amdgpu_cs_wait_ioctl - wait for a command submission to finish
>   *
> @@ -1358,6 +1433,37 @@ static struct dma_fence *amdgpu_cs_get_fence(struct amdgpu_device *adev,
>         return fence;
>  }
>
> +/**
> + * DOC: amdgpu_cs_fence_to_handle_ioctl
> + *
> + * This ioctl converts a user space fence into a fence object handle or fd,
> + * or file fd based on the purpose in “what”, since using handles or fd will
> + * be more efficient than ioctl call from user space to check signaled.


This reads sort of oddly.  How about:
The CS (Command Submission) fence to handle ioctl converts a CS fence
into a drm sync object as specified in the what parameter.
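
Roughly, from user space (untested sketch; seq_no and ctx_id identify the
CS as above):

    union drm_amdgpu_fence_to_handle fth = {0};
    fth.in.fence.ctx_id      = ctx_id;
    fth.in.fence.ip_type     = AMDGPU_HW_IP_GFX;
    fth.in.fence.ip_instance = 0;
    fth.in.fence.ring        = 0;
    fth.in.fence.seq_no      = seq_no;
    fth.in.what              = AMDGPU_FENCE_TO_HANDLE_GET_SYNC_FILE_FD;

    if (drmIoctl(fd, DRM_IOCTL_AMDGPU_FENCE_TO_HANDLE, &fth) == 0)
            sync_file_fd = fth.out.handle;  /* poll()able, can be passed to other processes */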


> + *
> + * In data structure:
> + *
> + * struct drm_amdgpu_fence fence:
> + * User space fence structure containing ctx_id, ip_type, ip_instance, ring, seq_no
> + *
> + * __u32 what:
> + * Types of handle to convert include:
> + *
> + * AMDGPU_FENCE_TO_HANDLE_GET_SYNCOBJ
> + * Return a process-dependent handle for a sync object
> + *
> + * AMDGPU_FENCE_TO_HANDLE_GET_SYNCOBJ_FD
> + * Return a process-independent fd for a sync object
> + *
> + * AMDGPU_FENCE_TO_HANDLE_GET_SYNC_FILE_FD
> + * Return a process-independent fd for a sync file
> + *
> + * __u32 pad
> + *
> + * Out data:
> + * __u32 handle:
> + * Handle to be returned to user space
> + *
> + */
>  int amdgpu_cs_fence_to_handle_ioctl(struct drm_device *dev, void *data,
>                                     struct drm_file *filp)
>  {
> @@ -1524,6 +1630,42 @@ static int amdgpu_cs_wait_any_fence(struct amdgpu_device *adev,
>         return r;
>  }
>
> +/**
> + * DOC:  amdgpu_cs_wait_fences_ioctl
> + *
> + * This ioctl checks either all fences or any fence from multiple fences
> + * to be signaled or waits to be signaled till timeout. So it's used to
> + * check and wait multiple CS to be completed.

This reads sort of oddly.  How about:
The CS (Command Submission) wait fences ioctl is an extended version
of the CS wait ioctl that allows the user to specify multiple fences
and wait for all or any of them to signal or for the timeout,
whichever comes first.
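
User space side, roughly (untested sketch; in.fences points straight at an
array of struct drm_amdgpu_fence, seq_a/seq_b are placeholder sequence
numbers):

    struct drm_amdgpu_fence fences[2] = {
            { .ctx_id = ctx_id, .ip_type = AMDGPU_HW_IP_GFX, .seq_no = seq_a },
            { .ctx_id = ctx_id, .ip_type = AMDGPU_HW_IP_GFX, .seq_no = seq_b },
    };

    union drm_amdgpu_wait_fences args = {0};
    args.in.fences      = (uintptr_t)fences;
    args.in.fence_count = 2;
    args.in.wait_all    = 1;            /* 1 = wait for all, 0 = wait for any */
    args.in.timeout_ns  = timeout_ns;   /* absolute timeout, in ns */

    if (drmIoctl(fd, DRM_IOCTL_AMDGPU_WAIT_FENCES, &args) == 0) {
            done  = (args.out.status == 0);   /* 0 = completed, 1 = busy */
            first = args.out.first_signaled;  /* index, meaningful for wait-any */
    }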


> + *
> + * In data structure:
> + *
> + * __u64 fences
> + * Point to the multiple fences

Pointer

> + *
> + * __u32 fence_count
> + * Number of fences
> + *
> + * __u32 wait_all
> + * Way to wait: either wait_all or wait_any
> + *
> + * __u64 timeout_ns
> + * Absolute timeout to wait
> + *
> + * The function will extract user space fences based on pointer and counts,
> + * then mapping them amdgpu fences and check if they are signaled or wait
> + * to timeout.


This reads sort of oddly.  How about:
This function extracts fences based on pointer and count specified and
waits for them to signal or timeout.


> + *
> + * Out data:
> + *
> + * __u32 status
> + * 0 CS completed
> + * 1 CS busy
> + *
> + * __u32 first_signaled
> + * First signaled fence index
> + *
> + */
> +
>  /**
>   * amdgpu_cs_wait_fences_ioctl - wait for multiple command submissions to finish
>   *
> --
> 2.17.0
>
> _______________________________________________
> amd-gfx mailing list
> amd-gfx at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx

