[Intel-gfx] [PATCH 8/8] drm/i915: Extract the actual workload submission mechanism from execbuffer
Mateo Lozano, Oscar
oscar.mateo at intel.com
Thu Jul 3 11:07:41 CEST 2014
> -----Original Message-----
> From: Chris Wilson [mailto:chris at chris-wilson.co.uk]
> Sent: Thursday, July 03, 2014 8:32 AM
> To: Mateo Lozano, Oscar
> Cc: intel-gfx at lists.freedesktop.org
> Subject: Re: [Intel-gfx] [PATCH 8/8] drm/i915: Extract the actual workload
> submission mechanism from execbuffer
>
> On Thu, Jun 26, 2014 at 02:24:19PM +0100, oscar.mateo at intel.com wrote:
> > + i915_gem_execbuffer_move_to_active(vmas, ring);
> > + i915_gem_execbuffer_retire_commands(dev, file, ring, batch_obj);
>
> This is where I start freaking out over the mix of global vs logical state and
> the implications of reordering.
I hear you, Chris. This is scary stuff because it has a lot of implications, but it has been in review for a long time and we seem to be stalled.
> The key question for me is how clean busy-ioctl is when it has to pick up the
> pieces from a partial failure to submit.
AFAICT, __i915_add_request() needs some changes if we don't want to reuse the intel_ringbuffer.c functionality (Daniel was wondering why I needed to abstract __i915_add_request() away behind another vfunc: this is the reason), and i915_gem_retire_requests_ring() needs to update last_retired_head in the correct ringbuf (global or logical). Other than that, everything seems healthy: things like the list of active objects and the list of requests live in intel_engine_cs, so they are common to both paths.
How could I put your mind at ease? Maybe by creating a topic branch and letting it go through QA for a while? Or by merging it disabled by default? I don't feel that rewriting it forever is going to make it any better :(