[Nouveau] [PATCH drm-next 05/14] drm/nouveau: new VM_BIND uapi interfaces

Christian König christian.koenig at amd.com
Tue Feb 7 09:35:10 UTC 2023


Am 06.02.23 um 19:20 schrieb Danilo Krummrich:
> On 2/6/23 17:14, Christian König wrote:
>> Concentrating this discussion on a very big misunderstanding first.
>>
>> Am 06.02.23 um 14:27 schrieb Danilo Krummrich:
>>> [SNIP]
>>> My understanding is that userspace is fully responsible for the parts 
>>> of the GPU VA space it owns. This means that userspace needs to take 
>>> care to *not* ask the kernel to modify mappings that are currently in 
>>> use.
>>
>> This is a completely wrong assumption! Take a look at what games like 
>> Forza Horizon are doing.
>>
>> Basically that game allocates a very big sparse area and fills it 
>> with pages from BOs while shaders are accessing it. And yes, as far 
>> as I know this is completely valid behavior.
>
> I also think this is valid behavior. That's not the problem I'm trying 
> to describe. In this case userspace modifies the VA space 
> *intentionally* while shaders are accessing it, because it knows that 
> the shaders can deal with reading 0s.

No, it's perfectly valid for userspace to modify the VA space even if 
shaders are not supposed to deal with reading 0s.
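
To make this concrete: through Vulkan's sparse binding interface an 
application can rebind a range of a buffer that in-flight shaders are 
reading at that very moment. A minimal sketch (device setup is omitted; 
the queue, buffer and memory objects as well as the offsets are 
placeholders, and the buffer is assumed to have been created with the 
sparse binding/residency create flags):

    /* Rebind a page-aligned range of an already-in-use sparse buffer
     * to different backing memory. Shaders may be accessing this
     * range while the bind operation executes on the queue. */
    VkSparseMemoryBind bind = {
        .resourceOffset = range_offset,        /* placeholder */
        .size           = range_size,          /* placeholder */
        .memory         = new_backing_memory,  /* placeholder */
        .memoryOffset   = 0,
    };
    VkSparseBufferMemoryBindInfo buffer_bind = {
        .buffer    = sparse_buffer,            /* placeholder */
        .bindCount = 1,
        .pBinds    = &bind,
    };
    VkBindSparseInfo bind_info = {
        .sType           = VK_STRUCTURE_TYPE_BIND_SPARSE_INFO,
        .bufferBindCount = 1,
        .pBufferBinds    = &buffer_bind,
    };
    vkQueueBindSparse(queue, 1, &bind_info, VK_NULL_HANDLE);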

>
>
> Just to have it all in place, the example I gave was:
>  - two virtually contiguous buffers A and B
>  - binding 1 mapped to A with BO offset 0
>  - binding 2 mapped to B with BO offset length(A)
>
> What I did not mention is that neither A nor B is a sparse buffer in 
> this example, although it probably doesn't matter too much.
>
> Since the conditions to do so are met, we merge binding 1 and 
> binding 2 right at the time binding 2 is requested. To do so, a 
> driver might unmap binding 1 for a very short period of time (e.g. to 
> (re-)map the freshly merged binding with a different page size if 
> possible).

Nope, that's not correct handling.
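
What correct handling looks like instead (a hedged sketch with entirely 
made-up helper names, not Nouveau or amdgpu code): the VA range must 
stay valid throughout the update, i.e. make-before-break rather than 
unmap-then-remap.

    /* 1. Build the new large-page entries for the merged mapping and
     *    atomically swap them into the page directory. A concurrent
     *    translation resolves through either the old small-page table
     *    or the new leaf entry, never through an unmapped hole.
     *    All helpers here are hypothetical. */
    atomic_swap_pde(vm, merged_va, merged_size, LARGE_PAGE);

    /* 2. Flush the TLBs so stale translations are dropped. */
    tlb_flush_range(vm, merged_va, merged_size);

    /* 3. Only after the flush reclaim the superseded small-page
     *    tables; until then in-flight walks may still use them. */
    free_old_pte_tables(vm, merged_va, merged_size);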

>
> From userspace's perspective buffer A is ready to use before binding 2 
> is applied to buffer B, hence it would be illegal to touch binding 1 
> again when userspace asks the kernel to map binding 2 to buffer B.
>
> Besides that, I think there is no point in merging between buffers, 
> because we'd end up splitting such a merged mapping later on anyway, 
> when one of the two buffers is destroyed.
>
> Also, I think the same applies to sparse buffers as well, a mapping 
> within A isn't expected to be re-mapped just because something is 
> mapped to B.
>
> However, this makes me start wondering whether re-mapping for the 
> purpose of merging and splitting is allowed at all, even within the 
> same sparse buffer (and even with a separate page table for sparse 
> mappings, as described in my last mail; shaders would never fault).

See, your assumption that userspace/applications don't intentionally 
modify the VA space while the GPU is accessing it is, bluntly speaking, 
incorrect.

When you have a VA address which is mapped to buffer A and accessed by 
some GPU shaders, it is perfectly valid for the application to say "map 
it again to the same buffer A".

It is also perfectly valid for an application to re-map this region to a 
different buffer B; it's just not defined when the accesses then 
transition from A to B. (AFAIK this is currently being worked on in a 
new specification.)
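
In Vulkan, for example, the transition point only becomes well-defined 
through explicit synchronization of the bind operation. A rough sketch, 
continuing the placeholder bind_info from above, with a hypothetical 
semaphore rebind_done:

    /* Work already submitted to the GPU may still read buffer A. */

    /* Queue the rebind to buffer B; the semaphore is signaled once
     * the new binding has taken effect on the queue timeline. */
    bind_info.signalSemaphoreCount = 1;
    bind_info.pSignalSemaphores    = &rebind_done;
    vkQueueBindSparse(queue, 1, &bind_info, VK_NULL_HANDLE);

    /* Only submissions that wait on rebind_done are guaranteed to
     * read buffer B; anything in flight before may see A or B. */
    VkSubmitInfo submit = {
        .sType              = VK_STRUCTURE_TYPE_SUBMIT_INFO,
        .waitSemaphoreCount = 1,
        .pWaitSemaphores    = &rebind_done,
        .pWaitDstStageMask  = &wait_stage,  /* placeholder */
        .commandBufferCount = 1,
        .pCommandBuffers    = &cmdbuf,      /* placeholder */
    };
    vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);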

So if your page table updates result in the shader temporarily reading 
0s while you change the underlying mapping, you simply have an 
implementation bug in Nouveau.

I don't know how Nvidia hw handles this, and yes, it's quite complicated 
on AMD hw as well because our TLBs are not really made for this use 
case. But I'm 100% sure that this is possible, since it is still part of 
some of the specifications (mostly Vulkan, I think).

To sum it up: as far as I can see, giving the regions to the kernel is 
not something you would want for Nouveau either.

Regards,
Christian.


>
>>
>> So you need to be able to handle this case anyway, and the approach 
>> with the regions won't help you at all in preventing that.
>>
>> Regards,
>> Christian.
>>
>


