[Intel-gfx] [PATCH 05/11] drm/i915: Support for creating Stolen memory backed objects

Chris Wilson chris at chris-wilson.co.uk
Thu Jan 14 03:57:35 PST 2016


On Thu, Jan 14, 2016 at 11:27:42AM +0000, Tvrtko Ursulin wrote:
> 
> On 14/01/16 11:14, Chris Wilson wrote:
> >On Thu, Jan 14, 2016 at 10:46:39AM +0000, Tvrtko Ursulin wrote:
> >>
> >>On 14/01/16 10:24, Chris Wilson wrote:
> >>>  * Stolen memory is a very limited resource and certain functions of the
> >>>  * hardware can only work from within stolen memory. Userspace's
> >>>  * allocations may be evicted from stolen and moved to normal memory as
> >>>  * required. If the allocation is marked as purgeable (using madvise),
> >>>  * the allocation will be dropped and further access to the object's
> >>>  * backing storage will result in -EFAULT. Stolen objects will also be
> >>>  * migrated to normal memory across suspend and resume, as the stolen
> >>>  * memory is not preserved.
> >>>  *
> >>>  * Stolen memory is regarded as a resource placement hint, most suitable
> >>>  * for medium-sized buffers that are only accessed by the GPU and can be
> >>>  * discarded.
> >>>  */
> >>
> >>Would it be better if the placement hint did not specifically
> >>mention stolen memory but specific limitations? Like,
> >
> >It is a nice idea, but it doesn't really fit with the other placement
> >domains I have sketched out (they are all fully featured, or nearly so,
> >but exist in different pools of memory with different characteristics
> >and reasons for choice).
> 
> Perhaps these pools have some characteristics which could be
> described abstractly? I am of course coming back to why would
> userspace have to know about stolen, why would we fail object
> creation if there is no space, and then move objects out of stolen
> on hibernate and never put them back. Sounds like a bad and
> inflexible interface, and a lying kernel to me.

I agree that migrating objects out, and failing creation when there is
not enough space, is nasty. I was tempted to suggest we allow stolen to
fall back to normal memory during creation. (But the ABI is tricky, and
I think we're well past the point where this series is ready; we're down
to stray lines and choosing a colour for the kettle.)
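
If we did go down the fallback route, the creation path would
conceptually be no more than the following. A rough sketch only; the
helper names and signatures are from memory, not taken from the patch,
and error handling is trimmed:

/*
 * Sketch: prefer stolen at creation time and quietly fall back to
 * ordinary shmemfs-backed pages when stolen is exhausted.
 */
static struct drm_i915_gem_object *
create_prefer_stolen(struct drm_device *dev, u32 size)
{
	struct drm_i915_gem_object *obj;

	obj = i915_gem_object_create_stolen(dev, size);
	if (obj)
		return obj;

	/* No room in stolen: fall back to normal memory */
	return i915_gem_alloc_object(dev, size);
}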

The choice is whether we disable powersaving because userspace allocated
a ton of vertex buffers in stolen, or whether we just tell them to get
lost and take back our memory. At the moment it's a two-level priority
eviction pass: kernel vs userspace. We could extend that to several
classes/priorities (anything that maps to an unsigned long), but then we
would need to define a policy for whether some levels are privileged,
etc. (Overengineering, until someone discovers that the kernel kicking
their objects out to enable FBC actually causes a regression.)
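
For illustration, the shape of that pass is roughly the following; the
enum and the eviction helper are invented names for this sketch, not
code from the series:

/*
 * Illustrative only: evict lower-priority classes of stolen objects
 * before failing the allocation, so kernel-internal users (FBC,
 * ring/context state) win over userspace placement hints.
 * evict_stolen_class() is a made-up helper that returns 0 once it has
 * freed at least 'size' bytes belonging to the given class.
 */
enum stolen_class {
	STOLEN_CLASS_USER,	/* userspace hints: evicted first */
	STOLEN_CLASS_KERNEL,	/* kernel-internal: evicted last */
};

static int make_room_in_stolen(struct drm_i915_private *i915,
			       u64 size, enum stolen_class requester)
{
	enum stolen_class victim;

	/* Only classes below the requester's own may be evicted */
	for (victim = STOLEN_CLASS_USER; victim < requester; victim++)
		if (evict_stolen_class(i915, victim, size) == 0)
			return 0; /* made enough room */

	return -ENOSPC;
}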

Some of the abstraction is nice, but I feel it makes for sloppy uAPI and
a hindrance to letting userspace decide what it wants. For complex
questions like "where should I allocate this buffer?", the answer has
always been to push the decision as close to the real decision maker as
possible. Making the choice in the kernel smacks of policy; the
mechanism would be to let userspace know the details and make the choice
itself. (All the kernel should have to do is make sure clients don't
stomp on each other.)

The question is whether the important API detail is the placement, or
the first-class/second-class nature of the object interface. I favour
selecting placement.
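
Concretely, the flow I have in mind from the userspace side is roughly
the below. The placement flag and the reuse of the pad field in
drm_i915_gem_create are illustrative stand-ins, not necessarily the
final names in this series:

#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <i915_drm.h>

/* Illustrative flag name only; the series defines the real one. */
#define LOCAL_CREATE_PLACEMENT_STOLEN (1 << 0)

/*
 * Sketch: ask for a stolen-backed object at creation and mark it
 * purgeable, so the kernel is free to reclaim the backing store
 * (further access then faults, as described in the kerneldoc above).
 */
static uint32_t create_stolen_scratch(int fd, uint64_t size)
{
	struct drm_i915_gem_create create;
	struct drm_i915_gem_madvise madv;

	memset(&create, 0, sizeof(create));
	create.size = size;
	create.pad = LOCAL_CREATE_PLACEMENT_STOLEN; /* proposed flags field */
	if (drmIoctl(fd, DRM_IOCTL_I915_GEM_CREATE, &create))
		return 0; /* e.g. stolen exhausted */

	memset(&madv, 0, sizeof(madv));
	madv.handle = create.handle;
	madv.madv = I915_MADV_DONTNEED;
	drmIoctl(fd, DRM_IOCTL_I915_GEM_MADVISE, &madv);

	return create.handle;
}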
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

