[PATCH v2 2/2] drm/ttm: Fix vm page protection handling

Thomas Hellström (VMware) thomas_os at shipmail.org
Wed Dec 4 14:16:09 UTC 2019


On 12/4/19 2:52 PM, Michal Hocko wrote:
> On Tue 03-12-19 11:48:53, Thomas Hellström (VMware) wrote:
>> From: Thomas Hellstrom <thellstrom at vmware.com>
>>
>> TTM graphics buffer objects may, transparently to user-space, move
>> between IO and system memory. When that happens, all PTEs pointing to
>> the old location are zapped before the move and then faulted in again
>> if needed. At that point, the caching-mode and encryption bits of the
>> page protection may change and differ from those of
>> struct vm_area_struct::vm_page_prot.
>>
>> We were using an ugly hack to set the page protection correctly.
>> Fix that and instead use vmf_insert_mixed_prot() and/or
>> vmf_insert_pfn_prot().
>> Also get the default page protection from
>> struct vm_area_struct::vm_page_prot rather than using vm_get_page_prot().
>> This way we catch modifications made by the vm system for drivers that
>> want write-notification.
> So essentially this shouldn't have any new side effect on functionality;
> it is just making hacky/ugly code less so?

Functionality is unchanged. The use of an on-stack vma copy was severely
frowned upon in an earlier thread, which also points to another, similar
example using vmf_insert_pfn_prot():

https://lore.kernel.org/lkml/20190905103541.4161-2-thomas_os@shipmail.org/
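
To make the new pattern concrete, here is a minimal, simplified sketch
(not the actual patch) of what the fault path ends up doing: start from
vma->vm_page_prot, adjust the caching bits for the buffer's current
placement, and hand the result to vmf_insert_pfn_prot() /
vmf_insert_mixed_prot() instead of fixing up an on-stack vma copy. The
helper name and its arguments below are made up for illustration:

#include <linux/mm.h>
#include <linux/pfn_t.h>
#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_bo_driver.h>

/* Illustrative only; the real code lives in ttm_bo_vm_fault_reserved(). */
static vm_fault_t example_ttm_fault_insert(struct vm_fault *vmf,
					   struct ttm_buffer_object *bo,
					   unsigned long pfn)
{
	struct vm_area_struct *vma = vmf->vma;
	/* Start from the vma's protection so write-notify etc. is kept. */
	pgprot_t prot = vma->vm_page_prot;

	/* Adjust the caching bits for the buffer's current placement. */
	prot = ttm_io_prot(bo->mem.placement, prot);

	if (bo->mem.bus.is_iomem)
		/* IO memory: insert the pfn with the explicit protection. */
		return vmf_insert_pfn_prot(vma, vmf->address, pfn, prot);

	/* System memory: mixed mapping, same explicit protection. */
	return vmf_insert_mixed_prot(vma, vmf->address,
				     __pfn_to_pfn_t(pfn, PFN_DEV), prot);
}

The point is that the fault handler never writes to vma->vm_page_prot
(or to a copy of the vma); the adjusted protection is only passed down
per insert.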

> In other words, what are the
> consequences of having page protection inconsistent with the vma's?

Over the years, the caching and encryption flags of vma::vm_page_prot
have largely fallen out of use. From what I can tell, there are no
places left that can affect TTM. We discussed __split_huge_pmd_locked()
towards the end of that thread, but that doesn't affect TTM even with
huge page-table entries.

/Thomas



