[PATCH 1/1] drm/amdkfd: Add IPC API

Christian König ckoenig.leichtzumerken at gmail.com
Tue Jul 14 09:28:12 UTC 2020


On 14.07.20 at 10:58, Daniel Vetter wrote:
> On Tue, Jul 14, 2020 at 02:26:36PM +1000, Dave Airlie wrote:
>> On Tue, 14 Jul 2020 at 14:09, Felix Kuehling <felix.kuehling at amd.com> wrote:
>>> On 2020-07-13 at 11:28 p.m., Dave Airlie wrote:
>>>> On Tue, 14 Jul 2020 at 13:14, Felix Kuehling <Felix.Kuehling at amd.com> wrote:
>>>>> This allows exporting and importing buffers. The API generates handles
>>>>> that can be used with the HIP IPC API, i.e. big numbers rather than
>>>>> file descriptors.
>>>> First up, why? I get the how.
>>> The "why" is compatibility with HIP code ported from CUDA. The
>>> equivalent CUDA IPC API works with handles that can be communicated
>>> through e.g. a pipe or shared memory. You can't do that with file
>>> descriptors.
>> Okay, that sort of useful information should definitely be in the patch
>> description.
>>
>>> https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__DEVICE.html#group__CUDART__DEVICE_1g8a37f7dfafaca652391d0758b3667539
>>>
>>> https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__DEVICE.html#group__CUDART__DEVICE_1g01050a29fefde385b1042081ada4cde9
>>>
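For reference, the user-mode flow those docs describe looks roughly like
this in HIP (a sketch, not code from this patch; error handling is
omitted, and the function names and pipe_fd are made up for
illustration):

    #include <hip/hip_runtime.h>
    #include <unistd.h>

    /* exporter process */
    void export_buf(int pipe_fd) {
        void *buf;
        hipIpcMemHandle_t handle;
        hipMalloc(&buf, 4096);
        hipIpcGetMemHandle(&handle, buf);
        /* the handle is an opaque blob of bytes, so unlike a file
         * descriptor it can travel through a pipe or shared memory
         * as plain data */
        write(pipe_fd, &handle, sizeof(handle));
    }

    /* importer process */
    void import_buf(int pipe_fd) {
        void *buf;
        hipIpcMemHandle_t handle;
        read(pipe_fd, &handle, sizeof(handle));
        hipIpcOpenMemHandle(&buf, handle, hipIpcMemLazyEnablePeerAccess);
    }
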
>>>>> + * @share_handle is a 128 bit random number generated with
>>>>> + * @get_random_bytes. This number should be very hard to guess.
>>>>> + * Knowledge of the @share_handle implies authorization to access
>>>>> + * the shared memory. User mode should treat it like a secret key.
>>>>> + * It can be used to import this BO in a different process context
>>>>> + * for IPC buffer sharing. The handle will be valid as long as the
>>>>> + * underlying BO exists. If the same BO is exported multiple times,
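On the generation side, such a handle is trivial to produce in the
kernel (a minimal sketch assuming a 16-byte handle field, not the
patch's actual struct layout; get_random_bytes() is the API the quoted
comment refers to):

    #include <linux/random.h>

    u8 share_handle[16];    /* 128 bits of entropy */
    get_random_bytes(share_handle, sizeof(share_handle));
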
>>>> Do we have any examples of any APIs in the kernel that operate like
>>>> this, that don't at least layer some sort of file permissions and
>>>> access control on top?
>>> SystemV shared memory APIs (shmget, shmat) work similarly. There are
>>> some permissions that can be specified by the exporter in shmget.
>>> However, the handles are just numbers and much easier to guess (they are
>>> 32-bit integers). The most restrictive permissions would allow only the
>>> exporting UID to attach to the shared memory segment.
>>>
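For comparison, the SysV flow described above looks like this (a minimal
userspace sketch; 0600 is the most restrictive useful mode, and the
importer only needs to learn, or guess, the 32-bit id):

    #include <sys/ipc.h>
    #include <sys/shm.h>

    /* exporter: create a segment only the owning UID may attach to */
    int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    void *p = shmat(id, NULL, 0);

    /* importer: the 32-bit id is the whole secret, far easier to
     * guess than a 128-bit random handle */
    void *q = shmat(id, NULL, 0);
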
>>> I think DRM flink works similarly, with a global name IDR used
>>> for looking up GEM objects by their global object names.
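In ioctl terms the flink model looks like this (a sketch using the real
DRM uAPI structs; drm_fd and gem_handle are assumed to exist already):

    #include <drm/drm.h>    /* DRM uAPI definitions */

    /* exporter: publish a GEM handle under a global 32-bit name */
    struct drm_gem_flink flink = { .handle = gem_handle };
    ioctl(drm_fd, DRM_IOCTL_GEM_FLINK, &flink);

    /* importer: any process that can open the device node can walk
     * the small 32-bit name space and open other people's buffers */
    struct drm_gem_open open_args = { .name = flink.name };
    ioctl(drm_fd, DRM_IOCTL_GEM_OPEN, &open_args);
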
>> flink is why I asked, because flink was a mistake and not one I'd care
>> to make again.
>> shm is horrible also, but at least it has some permissions governing
>> which users can attach to it.
> Yeah, this smells way too much like flink. I had the same reaction, and
> I'm kinda sad that we have to do this because nvidia defines how this
> works with zero input from anyone else. Oh well, the world sucks.
>
>>>> The reason fds are good is that, combined with unix sockets, you
>>>> can't sniff them, you can't ptrace a process and find them, and you
>>>> can't write them out in a coredump and have someone access them later.
>>> Arguably ptrace and core dumps give you access to all the memory
>>> contents already. So you don't need the shared memory handle to access
>>> memory in that case.
>> core dumps might not dump this memory though, but yeah, ptrace would
>> likely already mean you have access.
>>
>>>> Maybe someone who knows security can ack merging this sort of uAPI
>>>> design, I'm not confident in what it's doing is in any ways a good
>>>> idea. I might have to ask some people to take a closer look.
>>> Please do. We have tried to make this API as secure as possible within
>>> the constraints of the user mode API we needed to implement.
>> I'll see if I hear back, and also whether danvet has any input. I
>> suppose it's UUID-based buffer access, so maybe 128 bits is enough
>> entropy not to create anything insanely predictable.
> So one idea that crossed my mind is whether we want to do this as a
> generic dma-buf handle converter instead.
>
> Something like /dev/dri/cuda_is_nasty (maybe with a slightly nicer
> name) which provides a generic dma-buf <-> cuda uuid converter, with
> separate access restrictions so admins can decide whether they want to
> allow this silliness or not. Anyone else who wants to reimplement cuda
> will need this too, so that's another reason for splitting this out.
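Purely to make that idea concrete, the converter node's uAPI could look
something like this (entirely hypothetical, nothing here exists; the
node name, struct names and ioctl numbers are all made up):

    #include <linux/ioctl.h>
    #include <linux/types.h>

    /* hypothetical /dev/dri/dmabuf_uuid ioctls */
    struct dmabuf_uuid_export {
            __s32 dmabuf_fd;   /* in: dma-buf to publish */
            __u8  uuid[16];    /* out: random 128-bit lookup key */
    };
    struct dmabuf_uuid_import {
            __u8  uuid[16];    /* in: key received via pipe, shm, ... */
            __s32 dmabuf_fd;   /* out: fresh dma-buf fd, gated by the
                                * node's own file permissions */
    };
    #define DMABUF_UUID_EXPORT _IOWR('D', 0x00, struct dmabuf_uuid_export)
    #define DMABUF_UUID_IMPORT _IOWR('D', 0x01, struct dmabuf_uuid_import)
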
>
> Wrt security: I think assuming that there's none and the lookup has a
> side-channel you can use to efficiently scan the entire range is probably
> the safe approach here. This is way out of my league, but I think people
> who know how to do this won't have a much harder time scanning this than
> the flink space.
>
> Also, if we have one common uuid->dma-buf converter, we might actually
> have a chance to prove the "it's not secure" assumption wrong. And we
> might be able to tie this into cgroups or namespaces or similar that way.
>
> Just some thoughts to give my initial "eek, why this" reaction a bit more
> substance :-) No idea whether this would work or make more sense.

Yeah, my initial reaction was the same. On the pro side, we use more
than the 32 bits flink did as an identifier.


What we could maybe do to improve this is link DMA-buf file
descriptors into the file system from userspace. Then we could just
do something like:

open("/tmp/dma-buf-0x0123-4567-89AB-CDEF-0123-4567-89AB-CDEF", "rw");

But to be honest, I don't know the fs code well enough to judge
whether this is possible or not.
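
The closest existing mechanism seems to be linkat() with AT_EMPTY_PATH
(a sketch; note it requires CAP_DAC_READ_SEARCH, and it fails with
EXDEV unless the new name is on the same filesystem as the inode;
dma-buf fds live on an internal kernel mount, which is probably exactly
the obstacle here):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>

    /* give an already-open dma-buf fd a name in the filesystem */
    int name_dmabuf(int dmabuf_fd, const char *path)
    {
            return linkat(dmabuf_fd, "", AT_FDCWD, path, AT_EMPTY_PATH);
    }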


Or we could let DMA-bufs appear by name under some directory in /sys
so that applications can open and use them.

Regards,
Christian.

>
> Cheers, Daniel


