[Intel-gfx] [PATCH v3 3/3] drm/doc/rfc: VM_BIND uapi definition

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Thu Jun 23 11:05:52 UTC 2022


On 23/06/2022 09:57, Lionel Landwerlin wrote:
> On 23/06/2022 11:27, Tvrtko Ursulin wrote:
>>>
>>> After a vm_unbind, UMD can re-bind to the same VA range against an 
>>> active VM.
>>> Though I am not sure, for the Mesa usecase, whether that new mapping 
>>> is required for the running GPU job or only for the next submission. 
>>> But by ensuring the TLB flush upon unbind, KMD can ensure correctness.
>>
>> Isn't that their problem? If they re-bind for submitting _new_ work 
>> then they get the flush as part of the batch buffer pre-amble. 
> 
> In the non-sparse case, if a VA range is unbound, it is invalid to use 
> that range for anything until it has been rebound by something else.
> 
> We'll take the fence provided by vm_bind and put it as a wait fence on 
> the next execbuffer.
> 
> It might be safer in case of memory over-fetching?
> 
> 
> A TLB flush will have to happen at some point, right?
> 
> What's the alternative to doing it in unbind?
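
As an aside, my understanding of the flow you describe is roughly the
sketch below. The ioctl, struct and fence-flag names are the ones
proposed in this RFC series, so treat them as placeholders rather than
final uapi:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <xf86drm.h>		/* drmSyncobjCreate() */
#include "i915_drm.h"		/* with the structs proposed in this series */

/* Bind a BO at 'va' and make the next execbuf3 wait for the bind to
 * complete, by using the bind's out-fence as the execbuf's in-fence.
 * drm_i915_gem_vm_bind / drm_i915_gem_execbuffer3 are the RFC's
 * proposed structs; field names may differ in the final uapi.
 */
static int bind_then_submit(int fd, uint32_t vm_id, uint32_t ctx_id,
			    uint32_t bo, uint64_t va, uint64_t size,
			    uint64_t batch_va)
{
	uint32_t syncobj;
	struct drm_i915_gem_vm_bind bind = { 0 };
	struct drm_i915_gem_execbuffer3 eb = { 0 };

	if (drmSyncobjCreate(fd, 0, &syncobj))
		return -1;

	bind.vm_id = vm_id;
	bind.handle = bo;
	bind.start = va;
	bind.length = size;
	bind.fence.handle = syncobj;		/* out-fence of the bind */
	bind.fence.flags = I915_TIMELINE_FENCE_SIGNAL;
	if (ioctl(fd, DRM_IOCTL_I915_GEM_VM_BIND, &bind))
		return -1;

	struct drm_i915_gem_timeline_fence wait = {
		.handle = syncobj,		/* in-fence of the execbuf */
		.flags = I915_TIMELINE_FENCE_WAIT,
	};

	eb.ctx_id = ctx_id;
	eb.batch_address = batch_va;
	eb.fence_count = 1;
	eb.timeline_fences = (uintptr_t)&wait;
	return ioctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER3, &eb);
}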

Currently a TLB flush happens from the ring before every BB_START, and 
also when i915 returns the backing store pages to the system.
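
As a simplified sketch, the pre-BB_START part is modelled on the gen8
PIPE_CONTROL emission below; the real code in the driver carries more
flags and workarounds, so this is illustrative only:

/* Simplified sketch of the TLB invalidation i915 emits on the ring
 * ahead of MI_BATCH_BUFFER_START; names from gt/intel_gpu_commands.h.
 */
static u32 *emit_preamble_tlb_flush(u32 *cs)
{
	*cs++ = GFX_OP_PIPE_CONTROL(6);
	*cs++ = PIPE_CONTROL_CS_STALL |		/* order against the batch */
		PIPE_CONTROL_TLB_INVALIDATE;	/* drop stale translations */
	*cs++ = 0;	/* post-sync address, unused here */
	*cs++ = 0;
	*cs++ = 0;	/* post-sync data, unused here */
	*cs++ = 0;
	return cs;	/* MI_BATCH_BUFFER_START is emitted after this */
}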

For the former, I haven't seen any mention that for execbuf3 there are 
plans to stop doing it? Anyway, as long as this is kept, the sequence of 
bind[1..N]+execbuf is safe and correctly sees all the preceding binds.
Hence, about the alternative to doing it in unbind - first I think let's 
state the problem it is trying to solve.

For instance, is it just for the compute "append work to the running 
batch" use case? I honestly don't remember how that was supposed to 
work, so maybe the TLB flush on bind was supposed to deal with that 
scenario?
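
If I had to guess, it was something along the lines of the sketch below
- a long-running batch polls a semaphore and then chains into freshly
bound work without another execbuf, so the usual ring pre-amble flush
never runs. The command names are from gt/intel_gpu_commands.h; the
flow itself is my guess at the userspace model:

/* Hypothetical "append work to a running batch": the batch spins on a
 * semaphore, then jumps into a newly vm_bound batch with no execbuf
 * (and hence no ring pre-amble TLB flush) in between.
 */
static u32 *emit_append_point(u32 *cs, u64 sema_va, u64 next_batch_va)
{
	*cs++ = MI_SEMAPHORE_WAIT |		/* poll until signalled */
		MI_SEMAPHORE_POLL |
		MI_SEMAPHORE_SAD_GTE_SDD;
	*cs++ = 1;				/* wait for *sema >= 1 */
	*cs++ = lower_32_bits(sema_va);
	*cs++ = upper_32_bits(sema_va);

	*cs++ = MI_BATCH_BUFFER_START_GEN8;	/* chain, no TLB flush */
	*cs++ = lower_32_bits(next_batch_va);
	*cs++ = upper_32_bits(next_batch_va);
	return cs;
}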

Or do you see a problem even for Mesa with the current model?

Regards,

Tvrtko

