Fence wait in mmu_interval_notifier_ops::invalidate

Thomas Hellström (Intel) thomas_os at shipmail.org
Fri Dec 11 09:37:13 UTC 2020


On 12/11/20 9:57 AM, Christian König wrote:
> Am 11.12.20 um 08:50 schrieb Thomas Hellström (Intel):
>> Hi, Christian
>>
>> Thanks for the reply.
>>
>> On 12/10/20 11:53 AM, Christian König wrote:
>>> Am 09.12.20 um 17:46 schrieb Thomas Hellström (Intel):
>>>>
>>>> On 12/9/20 5:37 PM, Jason Gunthorpe wrote:
>>>>> On Wed, Dec 09, 2020 at 05:36:16PM +0100, Thomas Hellström (Intel) 
>>>>> wrote:
>>>>>> Jason, Christian
>>>>>>
>>>>>> In most implementations of the callback mentioned in the subject 
>>>>>> there's a
>>>>>> fence wait.
>>>>>> What exactly is it needed for?
>>>>> Invalidate must stop DMA before returning, so presumably drivers 
>>>>> using
>>>>> a dma fence are relying on a dma fence mechanism to stop DMA.
>>>>
>>>> Yes, so far I follow, but what's the reason drivers need to stop DMA?
>>>
>>> Well in general an invalidation means that the specified part of the 
>>> page tables are updated, either with new addresses or new access flags.
>>>
>>> In both cases you need to stop the DMA because you could otherwise 
>>> work with stale data, e.g. read/write with the wrong addresses or 
>>> write to a read only region etc...
>>
>> Yes. That's clear. I'm just trying to understand the complete 
>> implications of doing that.
>>
>>>
>>>> Is it for invalidation before breaking COW after fork or something 
>>>> related?
>>>
>>> This is just one of many use cases which could invalidate a range. 
>>> But there are many more, both from the kernel as well as userspace.
>>>
>>> Just imagine that userspace first mmaps() some anonymous memory r/w, 
>>> starts a DMA to it and while the DMA is ongoing does a readonly 
>>> mmap() of libc to the same location.
>>
>> My understanding of this particular case is that hardware would 
>> continue to DMA to orphaned pages that are pinned until the driver is 
>> done with DMA, unless hardware would somehow in-flight pick up the 
>> new PTE addresses pointing to libc but not the protection?
>
> Exactly that is not guaranteed under all circumstances. Especially 
> since HMM tries to avoid grabbing a reference to the underlying pages. 
> And it depends when the destination addresses of the DMA are read and 
> when the access flags are evaluated.
>
> But even when it causes no security problem, the requirement we have to 
> fulfill here is that the DMA is coherent. In other words, we either 
> have to delay updates to the page tables until the DMA operation is 
> completed, or apply both address and access flag changes in a way that 
> the DMA operation immediately sees them as well.
>
> Regards,
> Christian.
>
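For anyone following along, the pattern discussed above looks roughly like the 
sketch below, modeled loosely on what drivers such as amdgpu do (as of ~5.10 
kernels). The callback signature comes from include/linux/mmu_notifier.h; the 
"my_bo" structure, its fields, and the locking details are illustrative, not 
any driver's actual code. The fence wait is the part this thread is about: it 
stalls the core mm's invalidation until device DMA to the range has stopped, 
so the PTE update never races with in-flight device access.

	/*
	 * Sketch of a driver's mmu_interval_notifier invalidate callback.
	 * Not runnable as-is; driver-specific names are placeholders.
	 */
	static bool my_invalidate(struct mmu_interval_notifier *mni,
				  const struct mmu_notifier_range *range,
				  unsigned long cur_seq)
	{
		struct my_bo *bo = container_of(mni, struct my_bo, notifier);
		long r;

		/*
		 * In non-blockable (e.g. atomic) contexts we cannot wait
		 * for fences; ask the caller to retry in a blockable one.
		 */
		if (!mmu_notifier_range_blockable(range))
			return false;

		mutex_lock(&bo->notifier_lock);

		/* Mark the range invalid so concurrent faults/binds retry. */
		mmu_interval_set_seq(mni, cur_seq);

		/*
		 * Stop DMA: wait for all fences on the reservation object,
		 * i.e. for every device job that may still touch these pages.
		 */
		r = dma_resv_wait_timeout_rcu(bo->resv, true, false,
					      MAX_SCHEDULE_TIMEOUT);
		if (r <= 0)
			pr_err("my_invalidate: fence wait failed (%ld)\n", r);

		mutex_unlock(&bo->notifier_lock);
		return true;
	}

Only after this callback returns true may the caller proceed to update the 
page tables, which is exactly the "delay updates until the DMA operation is 
completed" option Christian describes.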
Got it.

Thanks!
Thomas




More information about the dri-devel mailing list