[Intel-gfx] [PATCH] drm/i915: Reinstate order of operations in {intel, logical}_ring_begin()
Dave Gordon
david.s.gordon at intel.com
Mon Jun 15 11:11:37 PDT 2015
On 15/06/15 10:15, Chris Wilson wrote:
> On Mon, Jun 08, 2015 at 07:51:36PM +0100, Dave Gordon wrote:
>> The original idea of preallocating the OLR was implemented in
>>
>>> 9d773091 drm/i915: Preallocate next seqno before touching the ring
>>
>> and the sequence of operations was to allocate the OLR, then wrap past
>> the end of the ring if necessary, then wait for space if necessary.
>> But subsequently intel_ring_begin() was refactored, in
>>
>>> 304d695 drm/i915: Flush outstanding requests before allocating new seqno
>>
>> to ensure that pending work that might need to be flushed used the old
>> and not the newly-allocated request. This changed the sequence to wrap
>> and/or wait, then allocate, although the comment still said
>> /* Preallocate the olr before touching the ring */
>> which was no longer true as intel_wrap_ring_buffer() touches the ring.
>>
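To make the ordering concrete, the two sequences look roughly like this. This
is only a simplified sketch: alloc_request(), wrap_if_needed() and
wait_for_space() are illustrative stand-ins, not the real driver functions.

/* 9d773091 ordering: preallocate the request, then prepare the ring */
static int ring_begin_alloc_first(struct intel_engine_cs *ring, int dwords)
{
        int ret;

        ret = alloc_request(ring);              /* sets up the OLR/seqno      */
        if (ret == 0)
                ret = wrap_if_needed(ring);     /* may write NOPs to the ring */
        if (ret == 0)
                ret = wait_for_space(ring, dwords);
        return ret;
}

/* 304d695 ordering: prepare the ring first, then allocate the request */
static int ring_begin_prepare_first(struct intel_engine_cs *ring, int dwords)
{
        int ret;

        ret = wrap_if_needed(ring);             /* touches the ring...        */
        if (ret == 0)
                ret = wait_for_space(ring, dwords);
        if (ret == 0)
                ret = alloc_request(ring);      /* ...before the OLR exists   */
        return ret;
}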
>> The reversal didn't introduce any problems until the introduction of
>> dynamic pinning, in
>>
>>> 7ba717c drm/i915/bdw: Pin the ringbuffer backing object to GGTT on-demand
>>
>> With that came the possibility that the ringbuffer might not be pinned
>> to the GTT or mapped into CPU address space when intel_ring_begin()
>> is called. It gets pinned when the request is allocated, so it's now
>> important that this comes before *anything* that can write into the
>> ringbuffer, specifically intel_wrap_ring_buffer(), as this will fault if
>> (a) the ringbuffer happens not to be mapped, and (b) tail happens to be
>> sufficiently close to the end of the ring to trigger wrapping.
>>
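The failure mode is easiest to see in the wrap path. Again this is just a
sketch, assuming a ringbuffer struct with size/tail/space/virtual_start
fields; it is not the actual intel_wrap_ring_buffer(), and memset() stands in
for the real NOP-fill loop.

/*
 * Filling the tail of the ring with NOPs dereferences the CPU mapping of
 * the ringbuffer. With on-demand pinning that mapping only exists once
 * request allocation has pinned the ringbuffer, so calling this before
 * the request is allocated can fault.
 */
static int wrap_ring_sketch(struct intel_ringbuffer *ringbuf)
{
        int rem = ringbuf->size - ringbuf->tail;

        /* faults here if the ringbuffer is not currently mapped */
        memset(ringbuf->virtual_start + ringbuf->tail, 0, rem);

        ringbuf->space -= rem;
        ringbuf->tail = 0;
        return 0;
}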
>> The original rationale for this reversal seems to no longer apply,
>> as we shouldn't ever have anything in the ringbuffer which is not
>> associated with a specific request, and therefore shouldn't have anything
>> to flush. So it should now be safe to reinstate the original sequence
>> of allocate-wrap-wait :)
>
> It still applies. If you submit, say, 1024 interrupted execbuffers they
What is an interrupted execbuffer? AFAICT we hold the struct_mutex while
stuffing the ringbuffer, so we can only ever be in the process of adding
instructions to one ringbuffer at a time, and we don't (now) interleave
any flip commands (execlists mode requires mmio flip). Is there still
something that just adds random stuff to someone else's OLR?
> all share the same request. Then so does the 1025th. Except the 1025th
> (for the sake of argument) requires extra space on the ring. To make
> that space it finishes the only request (since all 1024 are one and the
> same), then continues blithely onwards, unaware it just lost the
> olr/seqno.
>
> To fix this requires request create/commit semantics, where the request
> create manages the pinning of the context for itself, and also imposes
> the limitation that a single request cannot occupy the full ringbuffer.
> -Chris
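As I read that scenario, the hazard is roughly the following. This is only a
sketch; ring_space(), oldest_request() and wait_and_retire() are illustrative
names rather than the real driver functions.

/*
 * With a shared outstanding lazy request, waiting for ring space can end
 * up retiring the very request the caller is still building on.
 */
static int wait_for_space_sketch(struct intel_engine_cs *ring, int bytes)
{
        while (ring_space(ring) < bytes) {
                struct drm_i915_gem_request *req = oldest_request(ring);

                /*
                 * If req is the shared OLR that the 1025th execbuffer is
                 * still adding commands to, waiting for and retiring it
                 * here completes that request, and the caller then carries
                 * on with a stale olr/seqno.
                 */
                wait_and_retire(ring, req);
        }
        return 0;
}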
Well, I'd very much like create/commit semantics, so that we can roll back
any sequence that breaks rather than leaving half-baked command streams.
I'm pretty sure that no single request can occupy the full ringbuffer
though. Last time I measured it, the maximal sequence for any single
request was ~190 dwords, considerably less than 1KB, and not enough to
fill even the small (4-page) rings used with GuC submission.
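For a create/commit scheme that would translate into reserving worst-case
space up front, something like the sketch below. MAX_REQUEST_DWORDS here is
just my measured upper bound, and reserve_ring_space()/alloc_request() are
illustrative names, not existing driver symbols.

#define MAX_REQUEST_DWORDS 190          /* measured worst case, < 1KB */

static int request_create_sketch(struct intel_engine_cs *ring,
                                 struct drm_i915_gem_request **out)
{
        int ret;

        /*
         * Pin the context/ringbuffer and reserve worst-case space up
         * front, so nothing added later to this request can run out of
         * room and trigger a wait that retires the request being built.
         */
        ret = reserve_ring_space(ring, MAX_REQUEST_DWORDS);
        if (ret)
                return ret;

        return alloc_request(ring, out);
}

On a 4-page (16KB) ring that reservation is under 5% of the ring, so the
"single request cannot occupy the full ringbuffer" constraint is easily met.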
.Dave.