[Intel-gfx] [PATCH v4 1/2] shmem: Support for registration of driver/file owner specific ops

Kirill A. Shutemov kirill at shutemov.name
Tue Apr 26 23:33:57 UTC 2016


On Tue, Apr 26, 2016 at 02:53:41PM +0200, Daniel Vetter wrote:
> On Mon, Apr 25, 2016 at 02:42:50AM +0300, Kirill A. Shutemov wrote:
> > On Mon, Apr 04, 2016 at 02:18:10PM +0100, Chris Wilson wrote:
> > > From: Akash Goel <akash.goel at intel.com>
> > > 
> > > This provides support for the drivers or shmem file owners to register
> > > a set of callbacks, which can be invoked from the address space
> > > operations methods implemented by shmem.  This allows the file owners to
> > > hook into the shmem address space operations to do some extra/custom
> > > operations in addition to the default ones.
> > > 
> > > The private_data field of address_space struct is used to store the
> > > pointer to driver specific ops.  Currently only one ops field is defined,
> > > which is migratepage, but can be extended on an as-needed basis.
> > > 
> > > The need for driver specific operations arises since some of the
> > > operations (like migratepage) may not be handled completely within shmem,
> > > so as to be effective, and would need some driver specific handling also.
> > > Specifically, i915.ko would like to participate in migratepage().
> > > i915.ko uses shmemfs to provide swappable backing storage for its user
> > > objects, but when those objects are in use by the GPU it must pin the
> > > entire object until the GPU is idle.  As a result, large chunks of memory
> > > can be arbitrarily withdrawn from page migration, resulting in premature
> > > out-of-memory due to fragmentation.  However, if i915.ko can receive the
> > > migratepage() request, it can then flush the object from the GPU, remove
> > > its pin and thus enable the migration.
> > > 
> > > Since gfx allocations are one of the major consumers of system memory,
> > > it's imperative to have such a mechanism to effectively deal with
> > > fragmentation.  Hence the need for such a provision for initiating
> > > driver-specific actions during address space operations.
> > 
> > Hm. Sorry, my ignorance, but shouldn't this kind of flushing be done in
> > response to mmu_notifier's ->invalidate_page?
> > 
> > I'm not aware of how i915 works and what its expectations are wrt shmem.
> > Do you have some userspace VMA which is mirrored on the GPU side?
> > If yes, migration would cause unmapping of these pages and trigger the
> > mmu_notifier's hook.
> 
> We do that for userptr pages (i.e. stuff we steal from userspace address
> spaces). But we also have native gfx buffer objects based on shmem files,
> and thus far we need to allocate them as !GFP_MOVABLE. And we allocate a
> _lot_ of those. And those files aren't mapped into any cpu address space
> (ofc they're mapped on the gpu side, but that's driver private), from the
> core mm they are pure pagecache. And afaiui for that we need to wire up
> the migratepage hooks through shmem to i915_gem.c.

I see.

I don't particularly like the way the patch hooks into migrate, but I don't
have a good idea how to implement this better.

This approach allows hooking into any shmem file, which could be abused by
drivers later.

I wonder if it would be better for i915 to have its own in-kernel mount
with a variant of tmpfs which provides a different mapping->a_ops? Or is
that overkill? I don't know.

Hugh?

-- 
 Kirill A. Shutemov
