[PATCH 1/2] accel/ivpu: Add dma fence to command buffers only

Stanislaw Gruszka stanislaw.gruszka at linux.intel.com
Wed Apr 12 08:50:07 UTC 2023


On Thu, Apr 06, 2023 at 06:57:34PM +0200, Daniel Vetter wrote:
> On Fri, Mar 31, 2023 at 01:36:02PM +0200, Stanislaw Gruszka wrote:
> > From: Karol Wachowski <karol.wachowski at linux.intel.com>
> > 
> > Currently job->done_fence is added to every BO handle within a job. If a
> > job handle (command buffer) is shared between multiple submits, the KMD
> > will add the fence to each of them. Then bo_wait_ioctl() executed on the
> > command buffer will exit only when all jobs containing that handle are
> > done.
> > 
> > This creates a deadlock scenario for the user mode driver when a job
> > handle is added as a dependency of another job: bo_wait_ioctl() on the
> > first job will wait until the second job finishes, and the second job
> > cannot finish before the first one.
> > 
> > Adding the fence only to the job's command buffer handle allows user
> > space to execute bo_wait_ioctl() on the job even if its handle is
> > submitted with another job.
> > 
> > Fixes: cd7272215c44 ("accel/ivpu: Add command buffer submission logic")
> > Signed-off-by: Karol Wachowski <karol.wachowski at linux.intel.com>
> > Signed-off-by: Stanislaw Gruszka <stanislaw.gruszka at linux.intel.com>
> 
> Uh this is confused on a lot of levels ...
> 
> First, having a new driver use the dma_resv/bo implicit fencing for
> umd synchronization is not great at all. The modern way of doing this is:
> - in/out dependencies are handling with drm_syncobj
> - userspace waits on the drm_syncobj, not with a driver-private wait ioctl
>   on a specific bo

We have syncobj on our TODO list and will work on it.
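
For reference, the userspace side of that model should look roughly like
the sketch below, using the generic libdrm syncobj helpers. The way the
syncobj handle comes out of the submit is made up here, since our submit
uapi does not carry syncobjs yet:

#include <stdint.h>
#include <time.h>
#include <xf86drm.h>

/* Wait for a job via its out drm_syncobj instead of a driver-private
 * bo_wait ioctl.  "syncobj_handle" is assumed to have been returned
 * as an out-fence by a (future) submit ioctl.
 */
static int wait_for_job(int drm_fd, uint32_t syncobj_handle)
{
	struct timespec ts;
	int64_t abs_timeout;
	uint32_t first_signaled;

	/* The syncobj wait ioctl takes an absolute CLOCK_MONOTONIC
	 * deadline, so compute "one second from now".
	 */
	clock_gettime(CLOCK_MONOTONIC, &ts);
	abs_timeout = (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec +
		      1000000000LL;

	/* WAIT_FOR_SUBMIT also waits for the fence to materialize,
	 * not only to signal.
	 */
	return drmSyncobjWait(drm_fd, &syncobj_handle, 1, abs_timeout,
			      DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT,
			      &first_signaled);
}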

> The other issue is that the approach below starts to fall over once you
> do dynamic memory management; for that case you _always_ have to install
> a fence.
> 
> Now fear not, there's a solution here:
> - you always install a fence (i.e. revert this patch), but by default it
>   is a DMA_RESV_USAGE_BOOKKEEP fence. See the kerneldoc for enum
>   dma_resv_usage for what that means.
> - only for the special job bo you set the DMA_RESV_USAGE_READ flag. You
>   want read because really that's what the gpu is doing for the job bo.
> - the bo_wait ioctl then waits for write access internally

In our case the VPU can write to the command buffer (there is a special
context save area), so I think keeping USAGE_WRITE is appropriate.
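
Something like the sketch below is what I have in mind for the fix in
-next. It is untested, and the ivpu_job/ivpu_bo fields and CMD_BUF_IDX
are illustrative; only the dma_resv_*() calls are the real API:

/* Install fences at submit time: every BO gets a BOOKKEEP fence so
 * dynamic memory management keeps working, and only the command
 * buffer additionally carries done_fence with WRITE usage (the VPU
 * writes its context save area there).
 *
 * Real code would lock all reservations together, e.g. with
 * drm_gem_lock_reservations(); per-BO locking here is only to keep
 * the sketch short.
 */
static int ivpu_job_add_fences(struct ivpu_job *job)
{
	u32 i;
	int ret;

	for (i = 0; i < job->bo_count; i++) {
		struct dma_resv *resv = job->bos[i]->base.resv;
		enum dma_resv_usage usage = (i == CMD_BUF_IDX) ?
			DMA_RESV_USAGE_WRITE : DMA_RESV_USAGE_BOOKKEEP;

		ret = dma_resv_lock(resv, NULL);
		if (ret)
			return ret;

		/* Make room for the fence before adding it. */
		ret = dma_resv_reserve_fences(resv, 1);
		if (!ret)
			dma_resv_add_fence(resv, job->done_fence, usage);
		dma_resv_unlock(resv);
		if (ret)
			return ret;
	}

	return 0;
}

bo_wait_ioctl() would then wait with DMA_RESV_USAGE_WRITE, which covers
the KERNEL and WRITE fences but skips the BOOKKEEP ones:

	timeout = dma_resv_wait_timeout(bo->base.resv,
					DMA_RESV_USAGE_WRITE,
					true, timeout);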

> Should result in the same uapi as your patch here, but without abusing
> dma_resv in a way that'll go boom.
> 
> Note that userspace can get at the dma_resv READ/WRITE fences through
> ioctls on a dma-buf, so which one you pick here is uabi relevant.
> bo-as-job-fence is USAGE_READ.
> 
> Please take care of this in -next.

Sure.
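
For the record, userspace can already get at those fences through a
dma-buf roughly as below (a sketch; it needs a kernel that has
DMA_BUF_IOCTL_EXPORT_SYNC_FILE):

#include <poll.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>

/* Export the current resv fences of a dma-buf as a sync_file and wait
 * on it.  DMA_BUF_SYNC_WRITE asks for everything a writer would have
 * to wait for, i.e. the READ and WRITE usage fences; BOOKKEEP fences
 * are never exported to userspace.
 */
static int wait_dmabuf_fences(int dmabuf_fd)
{
	struct dma_buf_export_sync_file exp = {
		.flags = DMA_BUF_SYNC_WRITE,
	};
	struct pollfd pfd;
	int ret;

	if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &exp))
		return -1;

	/* A sync_file fd reports POLLIN once all its fences signaled. */
	pfd.fd = exp.fd;
	pfd.events = POLLIN;
	ret = poll(&pfd, 1, -1);

	close(exp.fd);
	return ret == 1 ? 0 : -1;
}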

Regards
Stanislaw
