[Intel-gfx] [PATCH 3/8] drm/i915: evict VM instead of everything
Daniel Vetter
daniel at ffwll.ch
Thu Sep 12 00:45:06 CEST 2013
On Wed, Sep 11, 2013 at 02:57:50PM -0700, Ben Widawsky wrote:
> When reserving objects during execbuf, it is possible to come across an
> object which will not fit given the current fragmentation of the address
> space. We do not have any defragmentation support in drm_mm, so the
> strategy is instead to evict everything and reallocate objects.
>
> With the upcoming addition of multiple VMs, there is no point in evicting
> everything, since doing so is overkill for the specific case mentioned
> above.
>
> Recommended-by: Daniel Vetter <daniel.vetter at ffwll.ch>
> Signed-off-by: Ben Widawsky <ben at bwidawsk.net>
Merged the first three patches (with a tiny fixup for this one, since you
forgot to update one comment). That leaves us with a short-term lack of a
tracepoint for evict_vm, but Chris has a good point imo (a sketch of what
such a tracepoint might look like follows after the quoted patch).
-Daniel
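
For reference, the hunk below only shows the function's opening; in this era
of the driver the body reads roughly as follows. This is a sketch based on
the i915 code of the time, not necessarily the exact merged version:

int i915_gem_evict_vm(struct i915_address_space *vm, bool do_idle)
{
	struct i915_vma *vma, *next;
	int ret;

	if (do_idle) {
		/* Flush outstanding GPU work so the inactive list is
		 * up to date before we start unbinding. */
		ret = i915_gpu_idle(vm->dev);
		if (ret)
			return ret;

		i915_gem_retire_requests(vm->dev);
	}

	/* Unbind every unpinned VMA in this address space only,
	 * instead of nuking all address spaces. */
	list_for_each_entry_safe(vma, next, &vm->inactive_list, mm_list)
		if (vma->obj->pin_count == 0)
			WARN_ON(i915_vma_unbind(vma));

	return 0;
}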
> ---
> drivers/gpu/drm/i915/i915_drv.h | 1 +
> drivers/gpu/drm/i915/i915_gem_evict.c | 17 ++++++++++++++++-
> drivers/gpu/drm/i915/i915_gem_execbuffer.c | 8 +++++++-
> 3 files changed, 24 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 81ba5bb..7caf71d 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -2106,6 +2106,7 @@ int __must_check i915_gem_evict_something(struct drm_device *dev,
> unsigned cache_level,
> bool mappable,
> bool nonblock);
> +int i915_gem_evict_vm(struct i915_address_space *vm, bool do_idle);
> int i915_gem_evict_everything(struct drm_device *dev);
>
> /* i915_gem_stolen.c */
> diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
> index e9033f0..a3e279d 100644
> --- a/drivers/gpu/drm/i915/i915_gem_evict.c
> +++ b/drivers/gpu/drm/i915/i915_gem_evict.c
> @@ -155,7 +155,22 @@ found:
> return ret;
> }
>
> -static int i915_gem_evict_vm(struct i915_address_space *vm, bool do_idle)
> +/**
> + * i915_gem_evict_vm - Try to free up VM space
> + *
> + * @vm: Address space to evict from
> + * @do_idle: Boolean directing whether to idle first.
> + *
> + * VM eviction is about freeing up virtual address space. If one wants fine-
> + * grained eviction, see i915_gem_evict_something() for more details. In terms
> + * of freeing up actual system memory, this function may not accomplish the
> + * desired result. An object may be shared across multiple address spaces, and
> + * this function does not guarantee that those objects get freed.
> + *
> + * Using do_idle will result in a more complete eviction because it retires
> + * and inactivates the currently active BOs.
> + */
> +int i915_gem_evict_vm(struct i915_address_space *vm, bool do_idle)
> {
> struct i915_vma *vma, *next;
> int ret;
> diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> index c8a01c1..ee93357 100644
> --- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> +++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> @@ -549,10 +549,16 @@ i915_gem_execbuffer_reserve(struct intel_ring_buffer *ring,
> {
> struct drm_i915_gem_object *obj;
> struct i915_vma *vma;
> + struct i915_address_space *vm;
> struct list_head ordered_vmas;
> bool has_fenced_gpu_access = INTEL_INFO(ring->dev)->gen < 4;
> int retry;
>
> + if (list_empty(vmas))
> + return 0;
> +
> + vm = list_first_entry(vmas, struct i915_vma, exec_list)->vm;
> +
> INIT_LIST_HEAD(&ordered_vmas);
> while (!list_empty(vmas)) {
> struct drm_i915_gem_exec_object2 *entry;
> @@ -641,7 +647,7 @@ err: /* Decrement pin count for bound objects */
> if (ret != -ENOSPC || retry++)
> return ret;
>
> - ret = i915_gem_evict_everything(ring->dev);
> + ret = i915_gem_evict_vm(vm, true);
> if (ret)
> return ret;
> } while (1);
> --
> 1.8.4
>
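On the tracepoint mentioned above: a minimal evict_vm tracepoint built on the
kernel's standard TRACE_EVENT machinery could look like the sketch below,
mirroring the existing i915_gem_evict_everything event. The event name and
printed fields here are assumptions, not what eventually got merged:

TRACE_EVENT(i915_gem_evict_vm,
	    TP_PROTO(struct i915_address_space *vm),
	    TP_ARGS(vm),

	    TP_STRUCT__entry(
			     __field(struct i915_address_space *, vm)
			    ),

	    TP_fast_assign(
			   __entry->vm = vm;
			  ),

	    /* Print the drm minor index plus the vm pointer so traces can
	     * distinguish which address space got evicted. */
	    TP_printk("dev=%d, vm=%p",
		      __entry->vm->dev->primary->index, __entry->vm)
);

It would then be called at the top of i915_gem_evict_vm(), the same way
trace_i915_gem_evict_everything() hooks i915_gem_evict_everything().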
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch