[PATCH v3 02/19] drm-uapi/amdgpu: sync with drm-next
Vitaly Prosyak
vprosyak at amd.com
Tue Apr 1 04:50:26 UTC 2025
On 2025-04-01 00:39, Khatri, Sunil wrote:
>
> [AMD Official Use Only - AMD Internal Distribution Only]
>
> From: Prosyak, Vitaly <Vitaly.Prosyak at amd.com>
> Sent: Tuesday, April 1, 2025 12:42 AM
> To: Khatri, Sunil <Sunil.Khatri at amd.com>; igt-dev at lists.freedesktop.org
> Cc: Deucher, Alexander <Alexander.Deucher at amd.com>; Koenig, Christian <Christian.Koenig at amd.com>; Prosyak, Vitaly <Vitaly.Prosyak at amd.com>; Zhang, Jesse(Jie) <Jesse.Zhang at amd.com>
> Subject: Re: [PATCH v3 02/19] drm-uapi/amdgpu: sync with drm-next
>
> On 2025-03-28 04:23, Sunil Khatri wrote:
>
> Sync with drm-next commit ("866fc4f7e772c4a397f9459754ed1b1872b3a3c6")
>
> Added UAPI support for the user queue secure semaphore. The semaphore
> is used to synchronize between the caller and the GPU hardware, and the
> user waits on the semaphore.
>
> Signed-off-by: Sunil Khatri <sunil.khatri at amd.com>
> ---
> include/drm-uapi/amdgpu_drm.h | 117 ++++++++++++++++++++++++++++++++++
> 1 file changed, 117 insertions(+)
>
> diff --git a/include/drm-uapi/amdgpu_drm.h b/include/drm-uapi/amdgpu_drm.h
> index d780e1f2a..fed39c9b4 100644
> --- a/include/drm-uapi/amdgpu_drm.h
> +++ b/include/drm-uapi/amdgpu_drm.h
> @@ -55,6 +55,8 @@ extern "C" {
> #define DRM_AMDGPU_FENCE_TO_HANDLE 0x14
> #define DRM_AMDGPU_SCHED 0x15
> #define DRM_AMDGPU_USERQ 0x16
> +#define DRM_AMDGPU_USERQ_SIGNAL 0x17
> +#define DRM_AMDGPU_USERQ_WAIT 0x18
>
> #define DRM_IOCTL_AMDGPU_GEM_CREATE DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_GEM_CREATE, union drm_amdgpu_gem_create)
> #define DRM_IOCTL_AMDGPU_GEM_MMAP DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_GEM_MMAP, union drm_amdgpu_gem_mmap)
> @@ -73,6 +75,8 @@ extern "C" {
> #define DRM_IOCTL_AMDGPU_FENCE_TO_HANDLE DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_FENCE_TO_HANDLE, union drm_amdgpu_fence_to_handle)
> #define DRM_IOCTL_AMDGPU_SCHED DRM_IOW(DRM_COMMAND_BASE + DRM_AMDGPU_SCHED, union drm_amdgpu_sched)
> #define DRM_IOCTL_AMDGPU_USERQ DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_USERQ, union drm_amdgpu_userq)
> +#define DRM_IOCTL_AMDGPU_USERQ_SIGNAL DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_USERQ_SIGNAL, struct drm_amdgpu_userq_signal)
> +#define DRM_IOCTL_AMDGPU_USERQ_WAIT DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_USERQ_WAIT, struct drm_amdgpu_userq_wait)
>
> /**
> * DOC: memory domains
> @@ -442,6 +446,119 @@ struct drm_amdgpu_userq_mqd_compute_gfx11 {
> __u64 eop_va;
> };
>
> +/* userq signal/wait ioctl */
> +struct drm_amdgpu_userq_signal {
> + /**
> + * @queue_id: Queue handle used by the userq fence creation function
> + * to retrieve the WPTR.
> + */
> + __u32 queue_id;
> + __u32 pad;
> + /**
> + * @syncobj_handles: The list of syncobj handles submitted by the user queue
> + * job to be signaled.
> + */
>
> I am not sure about the correctness of 'the list of syncobj handles'. If it is a list, the field should be of type list_head; if it is an array, it should be __u64 *, since the next field declares num_syncobj_handles. Could you clarify this?
>
> There are several fields like this?
>
> Hello Vitaly,
>
> These are the headers defined in the kernel and ported directly to the IGT drm-uapi header, as various others have been in the past.
>
> These types and objects are discussed among the various stakeholders, such as Marel, Alex and Christian, and only then are they defined.
>
> Regards,
> Sunil Khatri
>
Hi Sunil, I got it. My question is about the comment 'The list of BO handles': does it refer to an array of handles whose address is passed in the __u64 field (e.g. __u64 bo_write_handles)? Maybe, for historical reasons, it ended up being called a 'list'. Since it is already ported, there is nothing to discuss or change :)
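
In case it helps, here is a minimal sketch of how I read the intended usage, assuming the __u64 "list" fields carry user-space addresses of handle arrays and the matching num_* fields carry their lengths. The helper name and include path are mine, purely illustrative, and the code is untested:

/* Hypothetical sketch, not tested: assumes the __u64 "list" fields are
 * user-space addresses of handle arrays, as discussed above. */
#include <stdint.h>
#include <string.h>
#include "amdgpu_drm.h" /* wherever the local build picks up the UAPI header */

static void fill_userq_signal(struct drm_amdgpu_userq_signal *args,
			      uint32_t queue_id,
			      const uint32_t *syncobjs, uint64_t n_syncobjs,
			      const uint32_t *bos_write, uint32_t n_bos_write)
{
	memset(args, 0, sizeof(*args));
	args->queue_id = queue_id;

	/* Arrays of handles are passed by address, cast to __u64. */
	args->syncobj_handles = (uintptr_t)syncobjs;
	args->num_syncobj_handles = n_syncobjs;

	args->bo_write_handles = (uintptr_t)bos_write;
	args->num_bo_write_handles = n_bos_write;

	/* No read-only BOs in this example. */
	args->bo_read_handles = 0;
	args->num_bo_read_handles = 0;
}

The filled structure would then go to DRM_IOCTL_AMDGPU_USERQ_SIGNAL via drmIoctl(), so 'list' in the kerneldoc would really mean 'user pointer to an array', which is all I wanted to confirm.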
>
> + __u64 syncobj_handles;
> + /**
> + * @num_syncobj_handles: A count that represents the number of syncobj handles in
> + * @syncobj_handles.
> + */
> + __u64 num_syncobj_handles;
> + /**
> + * @bo_read_handles: The list of BO handles that the submitted user queue job
> + * is using for read only. This will update BO fences in the kernel.
> + */
> + __u64 bo_read_handles;
> + /**
> + * @bo_write_handles: The list of BO handles that the submitted user queue job
> + * is using for write only. This will update BO fences in the kernel.
> + */
> + __u64 bo_write_handles;
> + /**
> + * @num_bo_read_handles: A count that represents the number of read BO handles in
> + * @bo_read_handles.
> + */
> + __u32 num_bo_read_handles;
> + /**
> + * @num_bo_write_handles: A count that represents the number of write BO handles in
> + * @bo_write_handles.
> + */
> + __u32 num_bo_write_handles;
> +};
> +
> +struct drm_amdgpu_userq_fence_info {
> + /**
> + * @va: A gpu address allocated for each queue which stores the
> + * read pointer (RPTR) value.
> + */
> + __u64 va;
> + /**
> + * @value: A 64 bit value represents the write pointer (WPTR) of the
> + * queue commands which compared with the RPTR value to signal the
> + * fences.
> + */
> + __u64 value;
> +};
> +
> +struct drm_amdgpu_userq_wait {
> + /**
> + * @syncobj_handles: The list of syncobj handles submitted by the user queue
> + * job to get the va/value pairs.
> + */
> + __u64 syncobj_handles;
> + /**
> + * @syncobj_timeline_handles: The list of timeline syncobj handles submitted by
> + * the user queue job to get the va/value pairs at given @syncobj_timeline_points.
> + */
> + __u64 syncobj_timeline_handles;
> + /**
> + * @syncobj_timeline_points: The list of timeline syncobj points submitted by the
> + * user queue job for the corresponding @syncobj_timeline_handles.
> + */
> + __u64 syncobj_timeline_points;
> + /**
> + * @bo_read_handles: The list of read BO handles submitted by the user queue
> + * job to get the va/value pairs.
> + */
> + __u64 bo_read_handles;
> + /**
> + * @bo_write_handles: The list of write BO handles submitted by the user queue
> + * job to get the va/value pairs.
> + */
> + __u64 bo_write_handles;
> + /**
> + * @num_syncobj_timeline_handles: A count that represents the number of timeline
> + * syncobj handles in @syncobj_timeline_handles.
> + */
> + __u16 num_syncobj_timeline_handles;
> + /**
> + * @num_fences: This field can be used both as input and output. As input it defines
> + * the maximum number of fences that can be returned and as output it will specify
> + * how many fences were actually returned from the ioctl.
> + */
> + __u16 num_fences;
> + /**
> + * @num_syncobj_handles: A count that represents the number of syncobj handles in
> + * @syncobj_handles.
> + */
> + __u32 num_syncobj_handles;
> + /**
> + * @num_bo_read_handles: A count that represents the number of read BO handles in
> + * @bo_read_handles.
> + */
> + __u32 num_bo_read_handles;
> + /**
> + * @num_bo_write_handles: A count that represents the number of write BO handles in
> + * @bo_write_handles.
> + */
> + __u32 num_bo_write_handles;
> + /**
> + * @out_fences: The field is a return value from the ioctl containing the list of
> + * address/value pairs to wait for.
> + */
> + __u64 out_fences;
> +};
> +
> /* vm ioctl */
> #define AMDGPU_VM_OP_RESERVE_VMID 1
> #define AMDGPU_VM_OP_UNRESERVE_VMID 2
>
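
On the wait side, my (possibly wrong) reading of drm_amdgpu_userq_wait is the same: the handle fields are user pointers, and num_fences is the capacity of out_fences on input and the number of returned va/value pairs on output. A rough sketch with the same caveats as above, where the helper name, include path, and call flow are my assumptions:

/* Hypothetical sketch, not tested. */
#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include "amdgpu_drm.h" /* wherever the local build picks up the UAPI header */

static int wait_userq_fences(int fd, const uint32_t *syncobjs,
			     uint32_t n_syncobjs,
			     struct drm_amdgpu_userq_fence_info *fences,
			     uint16_t max_fences)
{
	struct drm_amdgpu_userq_wait args;
	int ret;

	memset(&args, 0, sizeof(args));
	args.syncobj_handles = (uintptr_t)syncobjs;
	args.num_syncobj_handles = n_syncobjs;

	/* num_fences is in/out: capacity of @fences on input,
	 * number of returned va/value pairs on output. */
	args.out_fences = (uintptr_t)fences;
	args.num_fences = max_fences;

	ret = drmIoctl(fd, DRM_IOCTL_AMDGPU_USERQ_WAIT, &args);
	if (ret)
		return ret;

	/* Each returned pair is a GPU VA holding the RPTR plus the WPTR
	 * value to compare against; the caller waits until *va >= value. */
	return args.num_fences;
}

That would match the @va/@value kerneldoc above, where a fence signals once the queue's RPTR reaches the recorded WPTR.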