[Intel-gfx] [RFC] drm/i915: context support unit test

Ben Widawsky widawsky at gmail.com
Wed Dec 29 05:03:01 CET 2010


On Tue, Dec 28, 2010 at 11:36:17PM +0100, Daniel Vetter wrote:
> Hi Ben,
> 
> On Sat, Dec 25, 2010 at 02:53:04PM -0800, Ben Widawsky wrote:
> > I am requesting comments on the unit test for the context support I will be 
> > adding. Attached is the unit test. I intend to create wrappers for the create
> > and destroy Ioctls in libdrm, unless someone has a better solution to reuse the
> > existing API. For the time being, I plan to use the rsvd1 field in the exec2
> > structure to store the context.
> > 
> > In summary, you'll see two new Ioctls in this test, and one new DRM API, but
> > once it's cleaned up, it will probably be 3 new Ioctls, and 3 new DRM APIs.
> > Also I realize this test doesn't cover a lot of the bad cases, but that will
> > be included later.
> 
> Just a few questions on the api:
> - How does this tie in with the multiple ringbuffer support? Is the kernel
>   supposed to lazily allocate contexts for each ring as soon as userspace
>   uses it on a given ring for the first time? Imho that's simpler than
>   adding an explicit ring arg to the ctx_create ioctl.
> - (Assuming that the context stores pointers to the indirect state
>   objects - public docs are unclear in that matter) How do you plan to
>   handle bo eviction? The simplest thing is probably to bail on execbuf
>   in the kernel and ask userspace to reissue a complete context. Also: How
>   does the kernel know that evicting/moving a given bo invalidates a
>   certain context? Do you intend to create that connection implicitly with
>   i915_gem_domain_instruction or with some new reloc flag?
> 
> > Thanks.
> > Ben
> 
> Cheers, Daniel
> 
> -- 
> Daniel Vetter
> Mail: daniel at ffwll.ch
> Mobile: +41 (0)79 365 57 48

Daniel,

There is no tie-in to multiple ringbuffer support. A client may allocate
one context for all ringbuffers, or one per ringbuffer. I still need to
figure out whether this is relevant to anything but the graphics engine.

The immediate plan is to allocate space for the HW context at the time
the client gets (lazily or not) a new context. The memory will be pinned
at that point, because it's somewhat difficult to guarantee the memory
will still be resident on future context switches. This is a possible
area for improvement, but doing better would require keeping track of
the last context to run, and then possibly paging that context's memory
back in before the new context can run.
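To make the eager-pinning plan concrete, here's a minimal user-space
sketch of the intended lifetime: context memory is allocated and pinned
at create time and stays pinned until destroy. All names here
(hw_context, context_create, context_destroy) are hypothetical
illustrations, not the real i915 code, which would operate on GEM
objects inside the kernel.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical stand-in for the HW context state; the real thing would
 * wrap a pinned struct drm_i915_gem_object. */
struct hw_context {
    uint32_t id;
    int      pinned;  /* must stay set so HW context switches never fault */
};

static uint32_t next_ctx_id = 1;

/* Eager plan described above: allocate and pin at creation, so nothing
 * needs to be paged back in before a later context switch. */
struct hw_context *context_create(void)
{
    struct hw_context *ctx = calloc(1, sizeof(*ctx));
    if (!ctx)
        return NULL;
    ctx->id = next_ctx_id++;
    ctx->pinned = 1;  /* pinned for the context's whole lifetime */
    return ctx;
}

void context_destroy(struct hw_context *ctx)
{
    ctx->pinned = 0;  /* unpin only at teardown */
    free(ctx);
}
```

The improvement mentioned above would drop the permanent pin and instead
track the last-run context, repinning (and possibly paging in) its state
on demand.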

Unfortunately I'm new to this HW and SW design, so I may not have
totally followed your questions about eviction and invalidation. The
honest answer is, I don't yet know. If your assumption is right, and it's
possible to set up hardware state that references memory which is not
referenced by subsequent batch buffers, then there is some work to be
done that isn't part of the initial implementation. For now it will be
up to user space to pin any objects which may be referenced in the
future. After the initial implementation we can decide how to proceed;
your suggestion seems reasonable to me, but I'd have to do more
research.

Regarding lazy creation:
The current design allows a client to create multiple contexts, or even
possibly share contexts. I'm not sure of an easy way to meet those goals
with lazy allocation. My original plan was to modify the gem alloc API
so that each bo would be associated with a context, but either way that
meant adding a new API (since I couldn't break the existing alloc API),
and I figured there may be future uses for contexts which don't require
a bo at all.
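For reference, the interim interface from the original RFC (explicit
create/destroy ioctls, with the context id carried in the exec2
structure's rsvd1 field) could look roughly like this. The argument
structs and helper are hypothetical sketches; only the rsvd1 field name
matches the real drm_i915_gem_execbuffer2.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical ioctl payloads for the two new ioctls in the unit test;
 * the cleaned-up version would likely grow to three. */
struct ctx_create_arg {
    uint32_t ctx_id;   /* out: id of the newly created context */
};

struct ctx_destroy_arg {
    uint32_t ctx_id;   /* in: context to tear down */
};

/* Stub of the exec2 structure, showing only the reserved field used to
 * smuggle the context id through the existing execbuffer interface. */
struct exec2_stub {
    uint64_t rsvd1;    /* interim home for the context id */
};

static void exec2_set_context(struct exec2_stub *exec, uint32_t ctx_id)
{
    exec->rsvd1 = ctx_id;
}
```

Reusing rsvd1 avoids changing the execbuffer ABI while the context API
is still in flux; a dedicated field or flag could replace it later.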

Thanks. 
Ben


