[PATCH] drm/ttm: nuke VM_MIXEDMAP on BO mappings

Thomas Hellström (Intel) thomas_os at shipmail.org
Wed Jun 2 11:24:09 UTC 2021


Hi,

On 6/2/21 12:03 PM, Christian König wrote:
>
>
> Am 02.06.21 um 11:07 schrieb Thomas Hellström (Intel):
>>
>> On 6/2/21 10:30 AM, Christian König wrote:
>>> We discussed for quite a while whether that is really the right
>>> approach, but digging deeper into a bug report on arm showed that
>>> this is actually horribly broken right now.
>>>
>>> The reason for this is that vmf_insert_mixed_prot() always tries to
>>> grab a reference to the underlying page on architectures without
>>> ARCH_HAS_PTE_SPECIAL, and as far as I can see that also enables GUP.
>>>
>>> So nuke using VM_MIXEDMAP here and use VM_PFNMAP instead.
>>>
>>> Also set VM_SHARED, not 100% sure if that is needed with VM_PFNMAP,
>>> but better safe than sorry.
>>>
>>> Signed-off-by: Christian König <christian.koenig at amd.com>
>>> Bugs: https://gitlab.freedesktop.org/drm/amd/-/issues/1606#note_936174
>>> ---
>>>   drivers/gpu/drm/ttm/ttm_bo_vm.c | 29 +++++++----------------------
>>>   1 file changed, 7 insertions(+), 22 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c 
>>> b/drivers/gpu/drm/ttm/ttm_bo_vm.c
>>> index 9bd15cb39145..bf86ae849340 100644
>>> --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
>>> +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
>>> @@ -359,12 +359,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct 
>>> vm_fault *vmf,
>>>            * at arbitrary times while the data is mmap'ed.
>>>            * See vmf_insert_mixed_prot() for a discussion.
>>>            */
>>> -        if (vma->vm_flags & VM_MIXEDMAP)
>>> -            ret = vmf_insert_mixed_prot(vma, address,
>>> -                            __pfn_to_pfn_t(pfn, PFN_DEV),
>>> -                            prot);
>>> -        else
>>> -            ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
>>> +        ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
>>
>> I think vmwgfx still uses MIXEDMAP. (Which is of course the same bug
>> and should be changed.)
>
> Mhm, the only thing I could find is that it is clearing VM_MIXEDMAP 
> and adding VM_PFNMAP instead.
>
> But going to clean that up as well.
>
>>
>>>             /* Never error on prefaulted PTEs */
>>>           if (unlikely((ret & VM_FAULT_ERROR))) {
>>> @@ -411,15 +406,9 @@ vm_fault_t ttm_bo_vm_dummy_page(struct vm_fault 
>>> *vmf, pgprot_t prot)
>>>       pfn = page_to_pfn(page);
>>>         /* Prefault the entire VMA range right away to avoid further 
>>> faults */
>>> -    for (address = vma->vm_start; address < vma->vm_end; address += 
>>> PAGE_SIZE) {
>>> -
>>> -        if (vma->vm_flags & VM_MIXEDMAP)
>>> -            ret = vmf_insert_mixed_prot(vma, address,
>>> -                            __pfn_to_pfn_t(pfn, PFN_DEV),
>>> -                            prot);
>>> -        else
>>> -            ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
>>> -    }
>>> +    for (address = vma->vm_start; address < vma->vm_end;
>>> +         address += PAGE_SIZE)
>>> +        ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
>>>         return ret;
>>>   }
>>> @@ -576,14 +565,10 @@ static void ttm_bo_mmap_vma_setup(struct 
>>> ttm_buffer_object *bo, struct vm_area_s
>>>         vma->vm_private_data = bo;
>>>   -    /*
>>> -     * We'd like to use VM_PFNMAP on shared mappings, where
>>> -     * (vma->vm_flags & VM_SHARED) != 0, for performance reasons,
>>> -     * but for some reason VM_PFNMAP + x86 PAT + write-combine is very
>>> -     * bad for performance. Until that has been sorted out, use
>>> -     * VM_MIXEDMAP on all mappings. See freedesktop.org bug #75719
>>> +    /* Enforce VM_SHARED here since no driver backend actually 
>>> supports COW
>>> +     * on TTM buffer object mappings.
>>
>> I think by default all TTM drivers support COW mappings in the sense
>> that written data never makes it to the bo but stays in anonymous
>> pages, although I can't find a single use case. So the comment should
>> be changed to state that they are useless for us and that we can't
>> support COW mappings with VM_PFNMAP.
>
> Well the problem I see with that is that it only works as long as the 
> BO is in system memory. When it then suddenly migrates to VRAM 
> everybody sees the same content again and the COW pages are dropped. 
> That is really inconsistent and I can't see why we would want to do that.
Hmm, yes, that's actually a bug in drm_vma_manager().
>
> Additionally to that, when you allow COW mappings you need to make sure 
> your COWed pages have the right caching attribute and that the 
> reference count is initialized and taken into account properly. No 
> driver actually gets that right at the moment.

I was under the impression that COW'ed pages were handled transparently 
by the vm: you'd always get cached, properly refcounted COW'ed pages. 
But anyway, since we're going to ditch support for them, it doesn't 
really matter.

>
>>
>>>        */
>>> -    vma->vm_flags |= VM_MIXEDMAP;
>>> +    vma->vm_flags |= VM_PFNMAP | VM_SHARED;
>>
>> Hmm, shouldn't we refuse COW mappings instead, like my old patch on 
>> this subject did? In theory someone could be setting up what she 
>> thinks is a private mapping to a shared buffer object and write 
>> sensitive data to it, which will immediately leak. It's a simple 
>> check; we could open-code it if necessary.
>
> Yeah, thought about that as well. Rejecting things would mean we 
> potentially break userspace which just happened to work by coincidence 
> previously. Not totally evil, but not nice either.
>
> How about we do a WARN_ON_ONCE(!(vma->vm_flags & VM_SHARED)); instead?

Umm, yes, but that wouldn't notify the user, and it would be triggerable 
from user-space. But you can also set up legal non-COW mappings without 
the VM_SHARED flag, IIRC; see is_cow_mapping(). I think when this was up 
for discussion last time we arrived at a vma_is_cow_mapping() utility...

/Thomas