[Linaro-mm-sig] Changing vma->vm_file in dma_buf_mmap()

Christian König christian.koenig at amd.com
Thu Sep 17 07:11:33 UTC 2020


On 17.09.20 at 08:23, Daniel Vetter wrote:
> On Wed, Sep 16, 2020 at 5:31 PM Christian König
> <ckoenig.leichtzumerken at gmail.com> wrote:
>> On 16.09.20 at 17:24, Daniel Vetter wrote:
>>> On Wed, Sep 16, 2020 at 4:14 PM Christian König
>>> <christian.koenig at amd.com> wrote:
>>>> On 16.09.20 at 16:07, Jason Gunthorpe wrote:
>>>>> On Wed, Sep 16, 2020 at 11:53:59AM +0200, Daniel Vetter wrote:
>>>>>
>>>>>> But within the driver, we generally need thousands of these, and that
>>>>>> tends to bring fd exhaustion problems with it. That's why all the private
>>>>>> buffer objects which aren't shared with other processes or other drivers
>>>>>> are handles only valid for a specific fd instance of the drm chardev (each
>>>>>> open gets its own namespace), and only for ioctls done on that chardev.
>>>>>> And for mmap we assign fake (but unique across all open fds on it) offsets
>>>>>> within the overall chardev. Hence all the pgoff mangling and re-mangling.
>>>>> Are they still unique struct files? Just without a fdno?
>>>> Yes, exactly.
>>> Not entirely: dma-buf happened after the drm chardev, so for that
>>> historical reason the underlying struct file is shared - it's the
>>> drm chardev. But since that's per-device we don't have a problem in
>>> practice with different vm_ops, since those are also per-device. But
>>> yeah, we could fish out some entirely hidden per-object struct file
>>> if that's required for some mm-internal reason.
>> Huh? Ok, that is just the handling in i915, isn't it?
>>
>> As far as I know we create a unique struct file for each DMA-buf.
> Yes for the dma-buf itself, but mmap on it gets forwarded to the drm
> chardev which originally exported the buffer; only there does the
> forwarding chain stop. The other thing is that iirc we have a singleton
> anon_inode behind all dma-bufs, so they'd all share the same
> address_space and would therefore all alias for unmap_mapping_range
> (I think at least).

Amdgpu instead works by installing the address_space of the drm chardev 
into the struct file of the DMA-buf.
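
For illustration, a rough sketch of that approach (simplified, not the 
exact amdgpu/drm code; the helper name is made up):

#include <linux/dma-buf.h>
#include <linux/err.h>
#include <drm/drm_device.h>
#include <drm/drm_gem.h>

/* Sketch only: at export time, point the DMA-buf's struct file at the
 * chardev's (anon inode's) address_space. That way an
 * unmap_mapping_range() on the chardev's mapping also shoots down PTEs
 * of mappings that were established through the DMA-buf file.
 */
static struct dma_buf *
export_with_chardev_mapping(struct drm_gem_object *obj,
			    struct dma_buf_export_info *exp_info)
{
	struct dma_buf *dmabuf = dma_buf_export(exp_info);

	if (IS_ERR(dmabuf))
		return dmabuf;

	dmabuf->file->f_mapping = obj->dev->anon_inode->i_mapping;
	return dmabuf;
}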

I think that this is cleaner, but only by a little bit :)

Anyway, I'm a bit concerned that we have so many different approaches to 
the same problem.
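
For reference, the vma->vm_file dance from the subject line looks 
roughly like this (a simplified sketch of what dma_buf_mmap() does 
today, sanity checks omitted, function name made up):

/* Sketch: take a reference on the DMA-buf's file, install it and the
 * new pgoff into the vma, and only drop the old file once the
 * exporter's mmap callback has succeeded.
 */
static int dma_buf_mmap_sketch(struct dma_buf *dmabuf,
			       struct vm_area_struct *vma,
			       unsigned long pgoff)
{
	struct file *oldfile = vma->vm_file;
	int ret;

	vma->vm_file = get_file(dmabuf->file);
	vma->vm_pgoff = pgoff;

	ret = dmabuf->ops->mmap(dmabuf, vma);
	if (ret) {
		/* restore the old state on failure */
		vma->vm_file = oldfile;
		fput(dmabuf->file);
	} else if (oldfile) {
		fput(oldfile);
	}

	return ret;
}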

Christian.

> -Daniel
>
>> Regards,
>> Christian.
>>
>>
>>> -Daniel
>>>
>>>>>> Hence we'd like to be able to forward aliasing mappings and adjust the
>>>>>> file and pgoff, while hopefully everything keeps working. I thought this
>>>>>> would work, but Christian noticed it doesn't really.
>>>>> It seems reasonable to me that the dma buf should be the owner of the
>>>>> VMA, otherwise like you say, there is a big mess attaching the custom
>>>>> vma ops and what not to the proper dma buf.
>>>>>
>>>>> I don't see anything obviously against this in mmap_region() - why did
>>>>> Christian notice it doesn't really work?
>>>> To clarify: I think this might work.
>>>>
>>>> I just had the same "Is that legal?", "What about security?", etc.
>>>> questions that you raised as well.
>>>>
>>>> It seemed like a source of trouble, so I thought I'd better ask somebody
>>>> more familiar with that.
>>>>
>>>> Christian.
>>>>
>>>>> Jason
>>>> _______________________________________________
>>>> dri-devel mailing list
>>>> dri-devel at lists.freedesktop.org
>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>>>
>


