[PATCH v3 8/8] drm/etnaviv: implement per-process address spaces on MMUv2

Lucas Stach l.stach at pengutronix.de
Fri Aug 9 13:45:08 UTC 2019


Am Freitag, den 09.08.2019, 14:04 +0200 schrieb Lucas Stach:
> This builds on top of the MMU contexts introduced earlier. Instead of having
> one context per GPU core, each GPU client receives its own context.
> 
> On MMUv1 this still means a single shared pagetable set is used by all
> clients, but on MMUv2 there is now a distinct set of pagetables for each
> client. As the command fetch is also translated via the MMU on MMUv2, the
> kernel command ringbuffer is mapped into each of the client pagetables.
> 
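To illustrate what that means on MMUv2, the per-client setup boils down
to roughly the following (a sketch with made-up helper names, not the
actual driver API):

/*
 * Sketch only: on MMUv1 every client shares the single global set of
 * pagetables, on MMUv2 each client gets a fresh set, into which the
 * kernel ring buffer has to be mapped as well, since command fetch is
 * translated through the MMU there.
 */
struct etnaviv_iommu_context *context_for_new_client(struct etnaviv_gpu *gpu)
{
	struct etnaviv_iommu_context *ctx;

	if (gpu->mmu_version == 1)
		return context_get(gpu->global_context); /* shared pagetables */

	ctx = context_create(gpu); /* distinct pagetable set */
	if (!ctx)
		return NULL;

	/* make the kernel command ringbuffer visible to this client */
	if (ringbuffer_map(gpu, ctx) < 0) {
		context_put(ctx);
		return NULL;
	}

	return ctx;
}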
> As the MMU context switch is a bit of a heavy operation, due to the needed
> cache and TLB flushing, this patch implements a lazy way of switching the
> MMU context. The kernel does not have its own MMU context, but reuses the
> last client context for all of its operations. This has some visible impact,
> as the GPU can now only be started once a client has submitted some work and
> a client MMU context has been assigned. Also, the MMU context has a different
> lifetime than the general client context, as the GPU might still execute the
> kernel command buffer in the context of a client even after the client has
> completed all GPU work and has been terminated. Only when the GPU is runtime
> suspended or switches to another client's MMU context is the old context
> freed up.
> 
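The lazy switching and the decoupled context lifetime described above
look roughly like this (again just a sketch, helper names made up):

/*
 * Sketch only: the GPU keeps a reference on whatever context it
 * executed last, so the pagetables stay alive even after the owning
 * client has been terminated.
 */
void gpu_queue_work(struct etnaviv_gpu *gpu, struct etnaviv_iommu_context *ctx)
{
	if (gpu->mmu_context != ctx) {
		/* heavy path: needs cache and TLB flushing */
		if (gpu->mmu_context)
			context_put(gpu->mmu_context); /* may free the old context */
		gpu->mmu_context = context_get(ctx);
		queue_mmu_context_switch(gpu);
	}
	queue_cmdbuf(gpu, ctx);
}

void gpu_runtime_suspend(struct etnaviv_gpu *gpu)
{
	/* the other point where the last client context gets released */
	if (gpu->mmu_context) {
		context_put(gpu->mmu_context);
		gpu->mmu_context = NULL;
	}
}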
> Signed-off-by: Lucas Stach <l.stach at pengutronix.de>
> ---
> v3: Don't call etnaviv_cmdbuf_suballoc_unmap when mapping failed.
> ---
[...]
>  	/*
> @@ -308,7 +312,8 @@ void etnaviv_sync_point_queue(struct etnaviv_gpu *gpu, unsigned int event)
>  
>  /* Append a command buffer to the ring buffer. */
>  void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state,
> -	unsigned int event, struct etnaviv_cmdbuf *cmdbuf)
> +	struct etnaviv_iommu_context *mmu_context, unsigned int event,
> +	struct etnaviv_cmdbuf *cmdbuf)
>  {
>  	struct etnaviv_cmdbuf *buffer = &gpu->buffer;
>  	unsigned int waitlink_offset = buffer->user_size - 16;
> @@ -317,17 +322,19 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state,
>  	bool switch_context = gpu->exec_state != exec_state;
>  	unsigned int new_flush_seq = READ_ONCE(gpu->mmu_context->flush_seq);
>  	bool need_flush = gpu->flush_seq != new_flush_seq;
> +	bool switch_mmu_context = gpu->mmu_context != mmu_context;


I screwed up this one during the rework to avoid the flush sequence
race. I'll squash the following into this commit:

--- a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c
@@ -320,9 +320,9 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state,
        u32 return_target, return_dwords;
        u32 link_target, link_dwords;
        bool switch_context = gpu->exec_state != exec_state;
-       unsigned int new_flush_seq = READ_ONCE(gpu->mmu_context->flush_seq);
-       bool need_flush = gpu->flush_seq != new_flush_seq;
        bool switch_mmu_context = gpu->mmu_context != mmu_context;
+       unsigned int new_flush_seq = READ_ONCE(gpu->mmu_context->flush_seq);
+       bool need_flush = switch_mmu_context || gpu->flush_seq != new_flush_seq;
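
With that hunk the flush decision works like this (sketch, made-up
names): whoever changes the pagetables bumps the per-context flush_seq
counter, and the ring buffer builder flushes the TLB when either the
observed counter changed or a different context is being switched in.
The sequence number sampled from the outgoing context says nothing
about the state of the incoming one, which is why need_flush has to be
forced on a context switch:

/* producer side, sketch: bump the counter on every pagetable change */
static void context_bump_flush_seq(struct etnaviv_iommu_context *ctx)
{
	WRITE_ONCE(ctx->flush_seq, ctx->flush_seq + 1);
}

/* consumer side, sketch of the fixed logic from the hunk above */
static bool gpu_needs_flush(struct etnaviv_gpu *gpu,
			    struct etnaviv_iommu_context *mmu_context)
{
	bool switch_mmu_context = gpu->mmu_context != mmu_context;
	unsigned int new_flush_seq = READ_ONCE(gpu->mmu_context->flush_seq);

	return switch_mmu_context || gpu->flush_seq != new_flush_seq;
}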

