[PATCH 1/6] drm/amdgpu: Generalize KFD dmabuf import
Christian König
ckoenig.leichtzumerken at gmail.com
Mon Jan 16 15:11:46 UTC 2023
On 16.01.23 at 15:52, Felix Kuehling wrote:
> On 2023-01-16 at 06:42, Christian König wrote:
>> [SNIP]
>>>>> When the BO is imported into the same GPU, you get a reference to
>>>>> the same BO, so the imported BO has the same mmap_offset as the
>>>>> original BO.
>>>>>
>>>>> When the BO is imported into a different GPU, it is a new BO with
>>>>> a new mmap_offset.
>>>>
>>>> That won't work.
>>>>
>>>>> I don't think this is incorrect.
>>>>
>>>> No, this is completely incorrect. It mixes up the reverse tracking
>>>> of mappings and might crash the system.
>>>
>>> I don't understand that. The imported BO is a different BO with a
>>> different mmap offset in a different device file. I don't see how
>>> that messes with the tracking of mappings.
>>
>> The tracking keeps note of which piece of information is accessible
>> through which address space object and offset. If you suddenly have
>> two address spaces and offsets pointing to the same piece of
>> information that won't work any more.
>
> How do you identify a "piece of information"? I don't think it's the
> physical page. VRAM doesn't even have struct pages. I think it's the
> BO that's being tracked.
Both struct page as well as pfn. Essentially everything you can give to
vm_insert_pfn() or vm_insert_page() as a parameter.
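This is also why the reverse tracking fundamentally assumes a single
address space per BO. Roughly sketched (simplified from
ttm_bo_unmap_virtual()/drm_vma_node_unmap(); details vary by kernel
version):

    /* Simplified sketch, not verbatim kernel code: zapping all CPU
     * mappings of a BO. TTM knows exactly ONE address_space
     * (bdev->dev_mapping) and ONE fake offset per BO, so
     * unmap_mapping_range() can only find PTEs created through that
     * pair. A second mmap() through a different file/offset would be
     * invisible here. */
    void ttm_bo_unmap_virtual(struct ttm_buffer_object *bo)
    {
            struct ttm_device *bdev = bo->bdev;

            unmap_mapping_range(bdev->dev_mapping,
                                drm_vma_node_offset_addr(&bo->base.vma_node),
                                drm_vma_node_size(&bo->base.vma_node) << PAGE_SHIFT,
                                1);
    }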
> With a dmabuf import you have a second BO aliasing the same physical
> pages as the original BO.
No, that's exactly what's wrong. You have a BO which contains a bunch
of DMA addresses. The pages, or whatever else lies behind those as
backing store, are intentionally hidden from the importer.
Daniel even started mangling the page pointers to prevent people from
using them:
https://elixir.bootlin.com/linux/latest/source/drivers/dma-buf/dma-buf.c#L769
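Paraphrased, the mangling at that link is essentially this (only active
with CONFIG_DMABUF_DEBUG):

    /* Paraphrase of mangle_sg_table() in drivers/dma-buf/dma-buf.c:
     * scramble the page pointer bits so an importer that illegally
     * calls sg_page() on the returned sg_table crashes early. The low
     * SG_* bits are preserved so the sgt itself stays walkable, and
     * the XOR is undone before the table goes back to the exporter. */
    static void mangle_sg_table(struct sg_table *sg_table)
    {
            struct scatterlist *sg;
            int i;

            for_each_sgtable_sg(sg_table, sg, i)
                    sg->page_link ^= ~0xffUL;
    }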
> Then those two BOs are seen as two distinct "pieces of information"
> that can each have their own mapping.
Well, I need to correct myself: creating a "fake" offset for the BO is
OK, but we should just never, ever mmap() through two different address
spaces.
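The fake offset itself is just the normal drm_vma_offset machinery; a
minimal sketch:

    /* Minimal sketch: allocating the "fake" mmap offset for a BO.
     * This just reserves a range in the device's vma_offset_manager;
     * harmless on its own, the problem only starts when two address
     * spaces actually map the same backing store. */
    int ret = drm_gem_create_mmap_offset(obj);
    if (ret)
            return ret;

    /* Offset userspace passes to mmap() on the DRM file */
    u64 mmap_offset = drm_vma_node_offset_addr(&obj->vma_node);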
>
>>>
>>>> This is the reason why we can't mmap() imported BOs.
>>>
>>> I don't see anything preventing that. For userptr BOs, there is this
>>> code in amdgpu_gem_object_mmap:
>>>
>>>     if (amdgpu_ttm_tt_get_usermm(bo->tbo.ttm))
>>>             return -EPERM;
>>>
>>> I don't see anything like this preventing mmapping of imported
>>> dmabuf BOs. What am I missing?
>>>
>>
>> At some point I really need to make a big presentation about all this
>> stuff; we've had the same discussion multiple times now :)
>>
>> It's the same reason why you can't mmap() VRAM through the kfd node:
>> Each file can have only one address space object associated with it.
>
> I remember that. We haven't used KFD to mmap BOs for a long time for
> that reason.
>
>
>>
>> See dma_buf_mmap() and vma_set_file() how this is worked around in
>> DMA-buf.
>
> These are for mmapping memory through the dmabuf fd. I'm not sure
> that's a good example. drm_gem_prime_mmap creates a temporary struct
> file and struct drm_file that are destroyed immediately after calling
> obj->dev->driver->fops->mmap. I think that would break any reverse
> mapping.
I was hoping we had removed that code by now. Today obj->funcs->mmap
should be used, not the hack of temporarily creating a file/fpriv.
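From memory, the modern path looks roughly like this (paraphrased, not
verbatim; the temporary file/fpriv dance only survives as a fallback
for drivers without obj->funcs->mmap):

    /* Paraphrased sketch of recent drm_gem_prime_mmap(). */
    int drm_gem_prime_mmap(struct drm_gem_object *obj,
                           struct vm_area_struct *vma)
    {
            int ret;

            /* Add the fake offset the driver's mmap code expects */
            vma->vm_pgoff += drm_vma_node_start(&obj->vma_node);

            if (obj->funcs && obj->funcs->mmap) {
                    ret = obj->funcs->mmap(obj, vma);
                    if (ret)
                            return ret;
                    vma->vm_private_data = obj;
                    drm_gem_object_get(obj);
                    return 0;
            }

            /* ... legacy fallback with a temporary file/fpriv ... */
    }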
>>>>
>>>>> mmapping the memory with that new offset should still work. The
>>>>> imported BO is created with ttm_bo_type_sg, and AFAICT ttm_bo_vm.c
>>>>> supports mapping of SG BOs.
>>>>
>>>> Actually it shouldn't. This can go boom really easily.
>>>
>>> OK. I don't think we're doing this, but after Xiaogang raised the
>>> question I went looking through the code to see whether it's
>>> theoretically possible. I didn't find anything in the code that says
>>> that mmapping imported dmabufs would be prohibited or even dangerous.
>>> On the contrary, I found that ttm_bo_vm explicitly supports mmapping
>>> SG BOs.
>>>
>>>
>>>>
>>>> When you have imported a BO, the only correct way to mmap() it is
>>>> to do so on the original exporter.
>>>
>>> That seems sensible, and this is what we do today. That said, if
>>> mmapping an imported BO is dangerous, I'm missing a mechanism to
>>> protect against this. It could be as simple as setting
>>> AMDGPU_GEM_CREATE_NO_CPU_ACCESS in amdgpu_dma_buf_create_obj.
>>
>> At least for the GEM mmap() handler this is double-checked very early
>> by looking at obj->import_attach and then either rejecting it or
>> redirecting the request to the DMA-buf file instead.
>
> Can you point me at where this check is? I see a check for
> obj->import_attach in drm_gem_dumb_map_offset. But I can't see how
> this function is called in amdgpu. I don't think it is used at all.
Uff, good question! @Thomas and @Dmitry: I clearly remember that one of
you guys was involved in the DRM/GEM mmap cleanup and the DMA-buf
workarounds for KFD.
What was the final solution to this? I can't find it offhand any more.
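For illustration of the pattern I mean, the shmem helper redirects
imported BOs to the DMA-buf file (paraphrased from
drm_gem_shmem_mmap(); amdgpu has its own mmap path and does not use
this helper):

    /* Paraphrased from drm_gem_shmem_mmap(): never map an imported
     * BO through the GEM file; hand the request to the DMA-buf file
     * instead so only one address space is ever involved. */
    if (obj->import_attach) {
            /* Drop the reference drm_gem_mmap_obj() acquired */
            drm_gem_object_put(obj);
            vma->vm_private_data = NULL;
            return dma_buf_mmap(obj->dma_buf, vma, 0);
    }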
>>
>> We probably need something similar when stuff is mapped through the
>> KFD node. But I think we don't do that any more for "normal" BOs
>> anyway, do we?
>
> Correct, we don't map BOs through the KFD device file. The only
> mappings we still use it for are:
>
> * Doorbells on APUs
> * Events page on APUs
> * MMIO page for HDP flushing
>
> The code for mmapping regular BOs through /dev/kfd was never upstream.
Good, that's probably much less problematic.
Regards,
Christian.
>
> Regards,
> Felix
>
>
>>
>> Regards,
>> Christian.
>>
>>>
>>> Regards,
>>> Felix
>>