[RFC 1/7] drm/amdgpu: UAPI for user queue management
Christian König
ckoenig.leichtzumerken at gmail.com
Mon Jan 2 11:27:47 UTC 2023
On 27.12.22 at 17:58, Alex Deucher wrote:
> On Sat, Dec 24, 2022 at 3:21 PM Bas Nieuwenhuizen
> <bas at basnieuwenhuizen.nl> wrote:
>> On Fri, Dec 23, 2022 at 8:37 PM Shashank Sharma <shashank.sharma at amd.com> wrote:
>>> From: Alex Deucher <alexander.deucher at amd.com>
>>>
>>> This patch introduces a new UAPI/IOCTL for usermode graphics
>>> queues. The userspace app fills this structure and requests the
>>> graphics driver to add a graphics work queue for it. The output
>>> of this UAPI is a queue id.
>>>
>>> This UAPI maps the queue into the GPU, so the graphics app can start
>>> submitting work to the queue as soon as the call returns.
>>>
>>> Cc: Alex Deucher <alexander.deucher at amd.com>
>>> Cc: Christian Koenig <christian.koenig at amd.com>
>>> Signed-off-by: Alex Deucher <alexander.deucher at amd.com>
>>> Signed-off-by: Shashank Sharma <shashank.sharma at amd.com>
>>> ---
>>> include/uapi/drm/amdgpu_drm.h | 52 +++++++++++++++++++++++++++++++++++
>>> 1 file changed, 52 insertions(+)
>>>
>>> diff --git a/include/uapi/drm/amdgpu_drm.h b/include/uapi/drm/amdgpu_drm.h
>>> index 0d93ec132ebb..a3d0dd6f62c5 100644
>>> --- a/include/uapi/drm/amdgpu_drm.h
>>> +++ b/include/uapi/drm/amdgpu_drm.h
>>> @@ -54,6 +54,7 @@ extern "C" {
>>> #define DRM_AMDGPU_VM 0x13
>>> #define DRM_AMDGPU_FENCE_TO_HANDLE 0x14
>>> #define DRM_AMDGPU_SCHED 0x15
>>> +#define DRM_AMDGPU_USERQ 0x16
>>>
>>> #define DRM_IOCTL_AMDGPU_GEM_CREATE DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_GEM_CREATE, union drm_amdgpu_gem_create)
>>> #define DRM_IOCTL_AMDGPU_GEM_MMAP DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_GEM_MMAP, union drm_amdgpu_gem_mmap)
>>> @@ -71,6 +72,7 @@ extern "C" {
>>> #define DRM_IOCTL_AMDGPU_VM DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_VM, union drm_amdgpu_vm)
>>> #define DRM_IOCTL_AMDGPU_FENCE_TO_HANDLE DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_FENCE_TO_HANDLE, union drm_amdgpu_fence_to_handle)
>>> #define DRM_IOCTL_AMDGPU_SCHED DRM_IOW(DRM_COMMAND_BASE + DRM_AMDGPU_SCHED, union drm_amdgpu_sched)
>>> +#define DRM_IOCTL_AMDGPU_USERQ DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDGPU_USERQ, union drm_amdgpu_userq)
>>>
>>> /**
>>> * DOC: memory domains
>>> @@ -288,6 +290,56 @@ union drm_amdgpu_ctx {
>>> union drm_amdgpu_ctx_out out;
>>> };
>>>
>>> +/* user queue IOCTL */
>>> +#define AMDGPU_USERQ_OP_CREATE 1
>>> +#define AMDGPU_USERQ_OP_FREE 2
>>> +
>>> +#define AMDGPU_USERQ_MQD_FLAGS_SECURE (1 << 0)
>>> +#define AMDGPU_USERQ_MQD_FLAGS_AQL (1 << 1)
>> Can we document what AQL means here?
> AQL is the packet format used by KFD/ROCm. The idea is to be able to
> create queues that support either format (AQL or PM4).
Could we make that a separate queue type instead? E.g. like SDMA, GFX,
Compute? It's not really a flag which can be applied independently of
the queue type.
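Something like this, just as a sketch (the names are made up):

    #define AMDGPU_USERQ_TYPE_GFX_PM4      1
    #define AMDGPU_USERQ_TYPE_COMPUTE_PM4  2
    #define AMDGPU_USERQ_TYPE_COMPUTE_AQL  3
    #define AMDGPU_USERQ_TYPE_SDMA         4

That would make it explicit which packet format a queue uses, instead
of combining an IP type with a flag that only makes sense for some of
them.
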
Regards,
Christian.
>
>>
>>> +
>>> +struct drm_amdgpu_userq_mqd {
>>> + /** Flags: AMDGPU_USERQ_MQD_FLAGS_* */
>>> + __u32 flags;
>>> + /** IP type: AMDGPU_HW_IP_* */
>>> + __u32 ip_type;
>>> + /** GEM object handle */
>>> + __u32 doorbell_handle;
>>> + /** Doorbell offset in dwords */
>>> + __u32 doorbell_offset;
>> What are the doorbell handle/offset for? I don't see any of them used
>> in the rest of the series (we only check the handle isn't 0, which
>> isn't enough validation for a GEM handle to consider it valid), and
>> the kernel seems to allocate some kind of doorbell index in patch 4.
>> Does userspace need to know about that one? (similarly use_doorbell in
>> that patch seems like it is never explicitly written to)
> The doorbell is how you trigger the engine to start processing the
> user queue. The idea is that each user process allocates a page of
> doorbell space (one of the PCI BARs) and then each 64 bit segment in
> that page could be used for a user mode queue. So the UMD writes its
> data to the queue, updates the wptr, and then writes to the doorbell
> to tell the firmware to start processing the queue.
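Just to illustrate the flow you describe here, the UMD side would look
roughly like this (sketch only; QUEUE_NDW, the mappings and the fence
placement are my assumptions, not part of this UAPI):

    #include <stdatomic.h>
    #include <stdint.h>

    #define QUEUE_NDW 1024 /* example ring size in dwords */

    /* queue, wptr and doorbell are assumed to be mapped beforehand */
    static void submit(uint32_t *queue, volatile uint64_t *wptr,
                       volatile uint64_t *doorbell,
                       const uint32_t *pkt, uint32_t ndw)
    {
            uint64_t w = *wptr;
            uint32_t i;

            /* copy the PM4/AQL packet into the ring */
            for (i = 0; i < ndw; i++)
                    queue[(w + i) % QUEUE_NDW] = pkt[i];

            /* make the packet visible before publishing the new wptr */
            atomic_thread_fence(memory_order_release);
            *wptr = w + ndw;

            /* ring the doorbell so the firmware starts processing */
            atomic_thread_fence(memory_order_release);
            *doorbell = w + ndw;
    }
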
>
>> The other questions I have are about how this interacts with memory
>> management. Does this have access to all BOs allocated with
>> AMDGPU_GEM_CREATE_VM_ALWAYS_VALID? What about imported BOs? How does
>> this interact with VA unmap/map operations? (AFAICT we have no way to
>> tell if pagetable modifying operations are complete from userspace for
>> now). What happens if we need to spill BOs from VRAM due to
>> (cross-process) memory pressure?
> Effectively everything you map on the GPU would be valid. If there is
> memory pressure, the kernel driver will behave similarly to KFD. It
> will unmap the queues (which preempts all work on the engines), do any
> memory migrations, and then map the queues again.
>
> Alex
>
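For reference, that eviction flow would be roughly the following on the
kernel side (all helper names are hypothetical, just to illustrate):

    static int amdgpu_userq_evict(struct amdgpu_device *adev)
    {
            int r;

            /* unmap the queues, which preempts all work on the engines */
            r = amdgpu_userq_suspend_all(adev);
            if (r)
                    return r;

            /* with the queues idle the BOs can be migrated safely */
            r = amdgpu_do_migrations(adev);
            if (r)
                    return r;

            /* map the queues again so the UMDs can continue */
            return amdgpu_userq_resume_all(adev);
    }
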
>>> + /** GPU virtual address of the queue */
>>> + __u64 queue_va;
>>> + /** Size of the queue in bytes */
>>> + __u64 queue_size;
>>> + /** GPU virtual address of the rptr */
>>> + __u64 rptr_va;
>>> + /** GPU virtual address of the wptr */
>>> + __u64 wptr_va;
>>> +};
>>> +
>>> +struct drm_amdgpu_userq_in {
>>> + /** AMDGPU_USERQ_OP_* */
>>> + __u32 op;
>>> + /** Flags */
>>> + __u32 flags;
>>> + /** Context handle to associate the queue with */
>>> + __u32 ctx_id;
>>> + __u32 pad;
>>> + /** Queue descriptor */
>>> + struct drm_amdgpu_userq_mqd mqd;
>>> +};
>>> +
>>> +struct drm_amdgpu_userq_out {
>>> + /** Queue handle */
>>> + __u32 q_id;
>>> + /** Flags */
>>> + __u32 flags;
>>> +};
>>> +
>>> +union drm_amdgpu_userq {
>>> + struct drm_amdgpu_userq_in in;
>>> + struct drm_amdgpu_userq_out out;
>>> +};
>>> +
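For completeness, creating a queue with this interface would look
something like the following from user space (sketch, assuming libdrm's
drmCommandWriteRead(); the VAs and handles are set up beforehand, error
handling omitted):

    union drm_amdgpu_userq args;
    uint32_t q_id;
    int r;

    memset(&args, 0, sizeof(args));
    args.in.op = AMDGPU_USERQ_OP_CREATE;
    args.in.ctx_id = ctx_id;
    args.in.mqd.ip_type = AMDGPU_HW_IP_GFX;
    args.in.mqd.doorbell_handle = doorbell_handle;
    args.in.mqd.doorbell_offset = 0;
    args.in.mqd.queue_va = queue_va;
    args.in.mqd.queue_size = queue_size;
    args.in.mqd.rptr_va = rptr_va;
    args.in.mqd.wptr_va = wptr_va;

    r = drmCommandWriteRead(fd, DRM_AMDGPU_USERQ, &args, sizeof(args));
    if (!r)
            q_id = args.out.q_id;
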
>>> /* vm ioctl */
>>> #define AMDGPU_VM_OP_RESERVE_VMID 1
>>> #define AMDGPU_VM_OP_UNRESERVE_VMID 2
>>> --
>>> 2.34.1
>>>