[Intel-gfx] [PATCH v4 2/3] drm/i915: refactor duplicate object vmap functions (reworked)
Chris Wilson
chris at chris-wilson.co.uk
Tue Feb 23 12:56:25 UTC 2016
On Tue, Feb 23, 2016 at 12:25:29PM +0000, Tvrtko Ursulin wrote:
>
> On 23/02/16 11:52, Chris Wilson wrote:
> >On Tue, Feb 23, 2016 at 11:38:02AM +0000, Chris Wilson wrote:
> >>Indeed, we would need a new notifier, pretty much for the sole use of
> >>32b. Grr, that will be a nuisance.
> >
> >Core changes:
> >
> >diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> >index d1f1d338af20..542a91c2785f 100644
> >--- a/include/linux/vmalloc.h
> >+++ b/include/linux/vmalloc.h
> >@@ -187,4 +187,8 @@ pcpu_free_vm_areas(struct vm_struct **vms, int nr_vms)
> > #define VMALLOC_TOTAL 0UL
> > #endif
> >
> >+struct notifier_block;
> >+int register_vmap_notifier(struct notifier_block *nb);
> >+int unregister_vmap_notifier(struct notifier_block *nb);
> >+
> > #endif /* _LINUX_VMALLOC_H */
> >diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> >index fb42a5bffe47..0735d82444f7 100644
> >--- a/mm/vmalloc.c
> >+++ b/mm/vmalloc.c
> >@@ -21,6 +21,7 @@
> > #include <linux/debugobjects.h>
> > #include <linux/kallsyms.h>
> > #include <linux/list.h>
> >+#include <linux/notifier.h>
> > #include <linux/rbtree.h>
> > #include <linux/radix-tree.h>
> > #include <linux/rcupdate.h>
> >@@ -344,6 +345,8 @@ static void __insert_vmap_area(struct vmap_area *va)
> >
> > static void purge_vmap_area_lazy(void);
> >
> >+static BLOCKING_NOTIFIER_HEAD(vmap_notify_list);
> >+
> > /*
> > * Allocate a region of KVA of the specified size and alignment, within the
> > * vstart and vend.
> >@@ -356,6 +359,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
> > struct vmap_area *va;
> > struct rb_node *n;
> > unsigned long addr;
> >+ unsigned long freed;
> > int purged = 0;
> > struct vmap_area *first;
> >
> >@@ -468,6 +472,12 @@ overflow:
> > purged = 1;
> > goto retry;
> > }
> >+ freed = 0;
> >+ blocking_notifier_call_chain(&vmap_notify_list, 0, &freed);
> >+ if (freed > 0) {
> >+ purged = 0;
> >+ goto retry;
> >+ }
> > if (printk_ratelimit())
> > pr_warn("vmap allocation for size %lu failed: "
> > "use vmalloc=<size> to increase size.\n", size);
> >@@ -475,6 +485,18 @@ overflow:
> > return ERR_PTR(-EBUSY);
> > }
> >
> >+int register_vmap_notifier(struct notifier_block *nb)
> >+{
> >+ return blocking_notifier_chain_register(&vmap_notify_list, nb);
> >+}
> >+EXPORT_SYMBOL_GPL(register_vmap_notifier);
> >+
> >+int unregister_vmap_notifier(struct notifier_block *nb)
> >+{
> >+ return blocking_notifier_chain_unregister(&vmap_notify_list, nb);
> >+}
> >+EXPORT_SYMBOL_GPL(unregister_vmap_notifier);
> >+
> > static void __free_vmap_area(struct vmap_area *va)
> > {
> > BUG_ON(RB_EMPTY_NODE(&va->rb_node));
> >
> >
> >That doesn't look too horrendous. Names?
>
> Downside is new deadlock opportunity so GEM callers to vmap would
> have to release the struct mutex.
No, the shrinker can steal the mutex for itself (or give up). Callers
need to be sure that they pin their pages before doing any allocations
if they intend to use the pages afterwards. (i.e. we want to apply the
same rules for get_pages vs malloc to get_pages vs vmalloc).
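On the driver side I'm imagining a callback roughly along the lines of
the sketch below (names invented for illustration: i915_vmap_notifier,
the mm.vmap_notifier field and an I915_SHRINK_VMAPS flag don't exist
yet; the freed counter is just the pointer passed through
blocking_notifier_call_chain() above):

static int i915_vmap_notifier(struct notifier_block *nb,
			      unsigned long event, void *ptr)
{
	struct drm_i915_private *dev_priv =
		container_of(nb, struct drm_i915_private, mm.vmap_notifier);
	unsigned long *freed = ptr;

	/* Steal struct_mutex if we can, otherwise give up rather than
	 * deadlock against a vmap() caller that already holds it.
	 */
	if (!mutex_trylock(&dev_priv->dev->struct_mutex))
		return NOTIFY_DONE;

	/* Drop any cached kernel vmappings we hold and report how much
	 * we released, so alloc_vmap_area() knows whether to retry.
	 */
	*freed += i915_gem_shrink(dev_priv, -1UL,
				  I915_SHRINK_BOUND |
				  I915_SHRINK_UNBOUND |
				  I915_SHRINK_VMAPS);

	mutex_unlock(&dev_priv->dev->struct_mutex);
	return NOTIFY_DONE;
}

with register_vmap_notifier(&dev_priv->mm.vmap_notifier) from the
shrinker init and the matching unregister from fini. Just a sketch,
details to be worked out.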
> >register_oovmap_notifier
> >register_vmap_nospace_notifier?
> >register_vmap_purge_notifier?
>
> register_vmap_shrinker ?
>
> >And the 64k question, how to sell it?
>
> Not sure, maybe with numbers showing that caching the vmapping helps
> us significantly?
Something along the lines of how beneficial short-term caching can be,
together with the question that raises of how best to release the cache,
drawing parallels to existing caches.
Hmm, Documentation/vm/highmem.txt:
==========================
TEMPORARY VIRTUAL MAPPINGS
==========================
The kernel contains several ways of creating temporary mappings:
(*) vmap(). This can be used to make a long duration mapping of multiple
physical pages into a contiguous virtual space. It needs global
synchronization to unmap.
"long duration" but temporary, ok, maybe there's a little more wiggle
room already here for keeping vmappings around. I was under the
impression that vmaps were only ever meant to be short-lived (due to the
pressure on vmap space).
-Chris
--
Chris Wilson, Intel Open Source Technology Centre