[PATCH 14/26] drm/xe/eudebug: implement userptr_vma access

Christian König christian.koenig at amd.com
Tue Dec 10 10:00:48 UTC 2024


Am 10.12.24 um 10:33 schrieb Joonas Lahtinen:
> Quoting Christian König (2024-12-09 17:42:32)
>> Am 09.12.24 um 16:31 schrieb Simona Vetter:
>>> On Mon, Dec 09, 2024 at 03:03:04PM +0100, Christian König wrote:
>>>> Am 09.12.24 um 14:33 schrieb Mika Kuoppala:
>>>>> From: Andrzej Hajda<andrzej.hajda at intel.com>
>>>>>
>>>>> Debugger needs to read/write program's vmas including userptr_vma.
>>>>> Since hmm_range_fault is used to pin userptr vmas, it is possible
>>>>> to map those vmas from debugger context.
>>>> Oh, this implementation is extremely questionable as well. Adding the LKML
>>>> and the MM list as well.
>>>>
>>>> First of all hmm_range_fault() does *not* pin anything!
>>>>
>>>> In other words you don't have a page reference when the function returns,
>>>> but rather just a sequence number you can check for modifications.
>>> I think it's all there, holds the invalidation lock during the critical
>>> access/section, drops it when reacquiring pages, retries until it works.
>>>
>>> I think the issue is more that everyone hand-rolls userptr.
>> Well that is part of the issue.
>>
>> The general problem here is that the eudebug interface tries to simulate
>> the memory accesses as they would have happened by the hardware.
> Could you elaborate, what is that a problem in that, exactly?
>
> It's pretty much the equivalent of ptrace() poke/peek but for GPU memory.

Exactly that here. You try to debug the GPU without taking control of 
the CPU process.

This means that you have to re-implement, for the GPU, all the debug 
functionality which was previously invented for the CPU process.

And that in turn creates a massive attack surface for security-related 
problems, especially when you start messing with things like userptrs, 
which interact at a very low level with core memory management.

> And it is exactly the kind of interface that makes sense for debugger as
> GPU memory != CPU memory, and they don't need to align at all.

And that is where I strongly disagree. When you debug the GPU, it is 
mandatory to gain control of the CPU process as well.

The CPU process is basically the overseer of the GPU activity, so it 
should know everything about the GPU operation, for example what a 
mapping actually means.

The kernel driver and the hardware only have the information necessary 
to execute the work prepared by the CPU process. So the information 
available is limited to begin with.

>> What the debugger should probably do is to cleanly attach to the
>> application, get the information which CPU address is mapped to which
>> GPU address and then use the standard ptrace interfaces.
> I don't quite agree here -- at all. "Which CPU address is mapped to
> which GPU address" makes no sense when the GPU address space and CPU
> address space is completely controlled by the userspace driver/application.

Yeah, that's the reason why you should ask the userspace 
driver/application for the necessary information and not go through the 
kernel to debug things.

> Please try to consider things outside of the ROCm architecture.

Well I consider a good part of the ROCm architecture rather broken 
exactly because we haven't pushed back hard enough on bad ideas.

> Something like a register scratch region or EU instructions should not
> even be mapped to CPU address space as CPU has no business accessing it
> during normal operation. And backing of such region will vary per
> context/LRC on the same virtual address per EU thread.
>
> You seem to be suggesting to rewrite even our userspace driver to behave
> the same way as ROCm driver does just so that we could implement debug memory
> accesses via ptrace() to the CPU address space.

Oh, certainly not. ROCm's 1:1 mapping between CPU and GPU address space 
is one of the things I pushed back on massively, and it has now proven 
to be problematic.

> That seems bit of a radical suggestion, especially given the drawbacks
> pointed out in your suggested design.
>
>> The whole interface re-invents a lot of functionality which is already
>> there
> I'm not really sure I would call adding a single interface for memory
> reading and writing to be "re-inventing a lot of functionality".
>
> All the functionality behind this interface will be needed by GPU core
> dumping, anyway. Just like for the other patch series.

As far as I can see, exactly that is an absolute no-go. Device core 
dumping should *never ever* touch memory imported through userptrs.

That's what process core dumping is good for.

>> just because you don't like the idea to attach to the debugged
>> application in userspace.
> A few points that have been brought up as drawback to the
> GPU debug through ptrace(), but to recap a few relevant ones for this
> discussion:
>
> - You can only really support GDB stop-all mode or at least have to
>    stop all the CPU threads while you control the GPU threads to
>    avoid interference. Elaborated on this on the other threads more.
> - Controlling the GPU threads will always interfere with CPU threads.
>    Doesn't seem feasible to single-step an EU thread while CPU threads
>    continue to run freely?

I would say no.

> - You are very much restricted by the CPU VA ~ GPU VA alignment
>    requirement, which is not true for OpenGL or Vulkan etc. Seems
>    like one of the reasons why ROCm debugging is not easily extendable
>    outside compute?

Well, as long as you can't take the debugged threads off the hardware, 
you can pretty much forget any OpenGL or Vulkan debugging with this 
interface, since it violates the dma_fence restrictions in the kernel.

> - You have to expose extra memory to CPU process just for GPU
>    debugger access and keep track of GPU VA for each. Makes the GPU more
>    prone to OOB writes from CPU. Exactly what not mapping the memory
>    to CPU tried to protect the GPU from to begin with.
>
>> As far as I can see this whole idea is extremely questionable. This
>> looks like re-inventing the wheel in a different color.
> I see it like reinventing a round wheel compared to octagonal wheel.
>
> Could you elaborate with facts much more on your position why the ROCm
> debugger design is an absolute must for others to adopt?

Well I'm trying to prevent some of the mistakes we did with the ROCm design.

And trying to re-invent well proven kernel interfaces is one of the big 
mistakes made in the ROCm design.

If you really want to expose an interface to userspace which walks the 
process page table, installs an MMU notifier, kmaps the resulting page 
and then memcpys to/from it, then you absolutely *must* run that by guys 
like Christoph Hellwig, Andrew and even Linus.

I'm pretty sure that those guys will note that a device driver should 
absolutely not mess with such stuff.

Regards,
Christian.

> Otherwise it just looks like you are trying to prevent others from
> implementing a more flexible debugging interface through vague comments about
> "questionable design" without going into details. Not listing much concrete
> benefits nor addressing the very concretely expressed drawbacks of your
> suggested design, makes it seem like a very biased non-technical discussion.
>
> So while review interest and any comments are very much appreciated, please
> also work on providing bit more reasoning and facts instead of just claiming
> things. That'll help make the discussion much more fruitful.
>
> Regards, Joonas