[PATCH 19/25] drm/amdgpu: s/GFP_KERNEL/GFP_ATOMIC in scheduler code
Daniel Vetter
daniel at ffwll.ch
Tue Jul 14 10:49:10 UTC 2020
On Tue, Jul 07, 2020 at 10:12:23PM +0200, Daniel Vetter wrote:
> My dma-fence lockdep annotations caught an inversion because we
> allocate memory where we really shouldn't:
>
> kmem_cache_alloc+0x2b/0x6d0
> amdgpu_fence_emit+0x30/0x330 [amdgpu]
> amdgpu_ib_schedule+0x306/0x550 [amdgpu]
> amdgpu_job_run+0x10f/0x260 [amdgpu]
> drm_sched_main+0x1b9/0x490 [gpu_sched]
> kthread+0x12e/0x150
>
> Trouble right now is that lockdep only validates against GFP_FS, which
> would be good enough for shrinkers. But for mmu_notifiers we actually
> need to catch anything that isn't GFP_ATOMIC, since they can be called
> from any page laundering, even if GFP_NOFS or GFP_NOIO are set.
>
> I guess we should improve the lockdep annotations for
> fs_reclaim_acquire/release.
>
> Ofc the real fix is to properly preallocate this fence and stuff it into
> the amdgpu job structure. But GFP_ATOMIC gets the lockdep splat out of
> the way.
>
> v2: Two more allocations in scheduler paths.
>
> First one:
>
> __kmalloc+0x58/0x720
> amdgpu_vmid_grab+0x100/0xca0 [amdgpu]
> amdgpu_job_dependency+0xf9/0x120 [amdgpu]
> drm_sched_entity_pop_job+0x3f/0x440 [gpu_sched]
> drm_sched_main+0xf9/0x490 [gpu_sched]
>
> Second one:
>
> kmem_cache_alloc+0x2b/0x6d0
> amdgpu_sync_fence+0x7e/0x110 [amdgpu]
> amdgpu_vmid_grab+0x86b/0xca0 [amdgpu]
> amdgpu_job_dependency+0xf9/0x120 [amdgpu]
> drm_sched_entity_pop_job+0x3f/0x440 [gpu_sched]
> drm_sched_main+0xf9/0x490 [gpu_sched]
>
> Cc: linux-media at vger.kernel.org
> Cc: linaro-mm-sig at lists.linaro.org
> Cc: linux-rdma at vger.kernel.org
> Cc: amd-gfx at lists.freedesktop.org
> Cc: intel-gfx at lists.freedesktop.org
> Cc: Chris Wilson <chris at chris-wilson.co.uk>
> Cc: Maarten Lankhorst <maarten.lankhorst at linux.intel.com>
> Cc: Christian König <christian.koenig at amd.com>
> Signed-off-by: Daniel Vetter <daniel.vetter at intel.com>
Has anyone from the AMD side started looking into how to fix this properly?
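(By "properly" I mean roughly the preallocation idea from the commit
message: allocate the fence at job creation time, where GFP_KERNEL is
still fine, and only consume it in amdgpu_fence_emit(). Completely
untested sketch; the job->fence field and the helper name below are made
up, not existing amdgpu code:)

/* illustrative only, not actual amdgpu code */
static int amdgpu_job_prealloc_fence(struct amdgpu_job *job)
{
	/* process context at job creation, blocking allocation is fine here */
	job->fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_KERNEL);
	return job->fence ? 0 : -ENOMEM;
}

/* amdgpu_fence_emit() would then consume the preallocated object instead
 * of calling kmem_cache_alloc() inside the dma_fence signalling path */
fence = job->fence;
job->fence = NULL;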
I looked a bit into fixing this with mempool (rough sketch after the two
points below), and the big guarantee we need is that
- there's a hard upper limit on how many allocations we minimally need
  to guarantee forward progress. And the entire vmid allocation and
  amdgpu_sync_fence stuff kinda makes me question whether that's a valid
  assumption.
- mempool_free must be callable without holding any of the locks that are
  held while we call mempool_alloc. Otherwise we again have a nice
  deadlock with no forward progress. I tried auditing that, but got lost
  in the amdgpu and scheduler code. Some lockdep annotations for
  mempool.c might help, but they're not going to catch everything. Plus
  they would again be manual annotations, because this is yet another
  cross-release issue. So not sure that helps at all.
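To make that concrete, the variant I was playing with looks roughly like
this (untested sketch; the pool itself, the min_nr of 16 and the exact
alloc/free call sites are all placeholders, not real amdgpu code):

/* needs <linux/mempool.h>; illustrative only */
static mempool_t *amdgpu_sync_mempool;

int amdgpu_sync_mempool_init(void)
{
	/* min_nr = 16 is a placeholder; the real question is whether a
	 * hard upper bound on in-flight entries exists at all */
	amdgpu_sync_mempool = mempool_create_slab_pool(16, amdgpu_sync_slab);
	return amdgpu_sync_mempool ? 0 : -ENOMEM;
}

/* in amdgpu_sync_fence(): guaranteed to make forward progress, but only
 * if previously allocated entries eventually get freed ... */
e = mempool_alloc(amdgpu_sync_mempool, GFP_KERNEL);

/* ... and that free must not be stuck behind any lock we hold around
 * mempool_alloc(), which is exactly the part I failed to audit */
mempool_free(e, amdgpu_sync_mempool);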
iow, not sure what to do here. Ideas?
Cheers, Daniel
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 2 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c | 2 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c | 2 +-
> 3 files changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> index 8d84975885cd..a089a827fdfe 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> @@ -143,7 +143,7 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
> uint32_t seq;
> int r;
>
> - fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_KERNEL);
> + fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_ATOMIC);
> if (fence == NULL)
> return -ENOMEM;
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> index 267fa45ddb66..a333ca2d4ddd 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
> @@ -208,7 +208,7 @@ static int amdgpu_vmid_grab_idle(struct amdgpu_vm *vm,
> if (ring->vmid_wait && !dma_fence_is_signaled(ring->vmid_wait))
> return amdgpu_sync_fence(sync, ring->vmid_wait);
>
> - fences = kmalloc_array(sizeof(void *), id_mgr->num_ids, GFP_KERNEL);
> + fences = kmalloc_array(sizeof(void *), id_mgr->num_ids, GFP_ATOMIC);
> if (!fences)
> return -ENOMEM;
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
> index 8ea6c49529e7..af22b526cec9 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
> @@ -160,7 +160,7 @@ int amdgpu_sync_fence(struct amdgpu_sync *sync, struct dma_fence *f)
> if (amdgpu_sync_add_later(sync, f))
> return 0;
>
> - e = kmem_cache_alloc(amdgpu_sync_slab, GFP_KERNEL);
> + e = kmem_cache_alloc(amdgpu_sync_slab, GFP_ATOMIC);
> if (!e)
> return -ENOMEM;
>
> --
> 2.27.0
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch