[Intel-gfx] [PATCH 48/55] drm/i915: Add *_ring_begin() to request allocation

John Harrison John.C.Harrison at Intel.com
Fri Jun 19 09:34:49 PDT 2015


On 18/06/2015 14:29, Daniel Vetter wrote:
> On Thu, Jun 18, 2015 at 1:21 PM, John Harrison
> <John.C.Harrison at intel.com> wrote:
>> I'm still confused by what you are saying in the above referenced email.
>> Part of it is about the sanity checks failing to handle the wrapping case
>> correctly, which has been fixed in the base reserve space patch (patch 2 in
>> the series). The rest is either saying that you think we are potentially
>> wrapping too early and wasting a few bytes of the ring buffer, or that
>> something is actually broken?
> Yeah, I didn't realize that this change was meant to fix the
> ring->reserved_tail check since I didn't make that connection. It is
> correct with that change, but the problem I see is that the
> correctness of that debug aid isn't assured locally: we need both
> that check _and_ the correct handling of the reservation tracking at
> wrap-around. If the check itself handles wrapping, it'll robustly stay
> in working shape even when the wrapping behaviour changes.
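
(A minimal sketch of what a wrap-tolerant version of that check could look
like, assuming the reserved_tail/reserved_size fields added by this series
and a power-of-two ring size; the function and field names here are
illustrative, not the actual patch:)

	static void check_reserved_space(struct intel_ringbuffer *ringbuf)
	{
		/* Space consumed since the reservation was taken out,
		 * computed modulo the ring size so the result stays
		 * correct even if tail wrapped past reserved_tail in
		 * the meantime. */
		u32 used = (ringbuf->tail - ringbuf->reserved_tail) &
			   (ringbuf->size - 1);

		/* reserved_size must already include slack for the
		 * worst-case MI_NOOP padding emitted on a wrap. */
		WARN(used > ringbuf->reserved_size,
		     "request used more space than it reserved (%u > %u)\n",
		     used, ringbuf->reserved_size);
	}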
>
>> Point 2: 100 bytes of reserve, 160 bytes of execbuf and 200 bytes remaining.
>> You seem to think this will fail somehow? Why? The wait_for_space(160) in
>> the execbuf code will cause a wrap because the 100 bytes for the
>> add_request reservation is added on and the wait is actually being done for
>> 260 bytes. So yes, we wrap earlier than would otherwise have been necessary,
>> but that is the only way to absolutely guarantee that the add_request() call
>> cannot fail when trying to do the wrap itself.
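
(For reference, this is roughly the behaviour being described, sketched
with illustrative helper and field names rather than the exact code in
the series:)

	static int ring_prepare(struct intel_ringbuffer *ringbuf, int bytes)
	{
		/* Count the add_request reservation on top of the
		 * request's own commands, so both the wrap decision and
		 * the wait happen here in execbuf and add_request()
		 * never has to wait or wrap itself. */
		int total = bytes + ringbuf->reserved_size;
		int ret;

		if (unlikely(ringbuf->tail + total > ringbuf->effective_size)) {
			ret = wrap_ring_buffer(ringbuf); /* pads with MI_NOOP */
			if (ret)
				return ret;
		}

		if (unlikely(ring_space(ringbuf) < total)) {
			ret = wait_for_space(ringbuf, total);
			if (ret)
				return ret;
		}

		return 0;
	}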
> There's no problem except that it's wasteful. And I tried to explain
> that unconditionally force-wrapping for the entire reservation is
> actually not needed, since the additional space needed to account for
> the eventual wrapping is bounded by a factor of 2. It's much less in
> practice since we split up the final request bits into multiple
> smaller intel_ring_begin() calls. And it feels a bit wasteful to throw
> that space away (and make the GPU eat through MI_NOOP) just because it
> makes caring for the worst case harder. And with the GuC the 160 dwords
> is actually a fairly substantial part of the ring.
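
(Worked through with the numbers above, as I understand the argument: the
reserved space is consumed by several small intel_ring_begin() calls, the
largest being the 32-dword one in pc_render_add_request that the proposal
below refers to. A wrap inside the reserved section can therefore cost at
most 31 dwords of MI_NOOP padding, so growing the reservation by that
worst case - rather than force-wrapping for the whole reservation in
execbuf - is already enough to keep add_request() from running out of
space.)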
>
> Even more so when we completely switch to a transaction model for
> requests, where we only need to wrap for individual commands and hence
> could place intel_ring_begin() per command (which is mostly what we do
> already anyway).
>
>> As Chris says, if the driver is attempting to create a single request that
>> fills the entire ring buffer then that is a bug that should be caught as soon
>> as possible. Even with a GuC, the ring buffer is not small compared to the
>> size of requests the driver currently produces. Part of the scheduler work
>> is to limit the number of batch buffers that a given application/context can
>> have outstanding in the ring buffer at any given time, in order to prevent
>> starvation of the rest of the system by one badly behaved app. Thus
>> completely filling a large ring buffer becomes impossible anyway - the
>> application will be blocked before it gets that far.
> My proposal for this reservation wrapping business would have been:
> - Increase the reservation by 31 dwords (to account for the worst-case
> wrap in pc_render_add_request).
> - Rework the reservation overflow WARN_ON in reserve_space_end to work
> correctly even when the ring wraps while the reservation is in use.
> - Move the addition of reserved_space below the point where we wrap
> the ring, and only check it against total free space, neglecting wrapping.
> - Remove all other complications you've added.
>
> The result is no forced wrapping for the reservation, and a debug check
> which should survive even random changes by monkeys, since the logic for
> that check is fully contained within reserve_space_end. And for the
> check we should be able to reuse __intel_free_space.
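
(Again only a rough sketch of how I read that proposal, reusing the
illustrative ring_prepare from above; the reservation now only affects
how much free space we wait for, while the wrap decision ignores it:)

	static int ring_prepare(struct intel_ringbuffer *ringbuf, int bytes)
	{
		int ret;

		/* Wrap only if this request's own commands don't fit
		 * before the end of the ring. */
		if (unlikely(ringbuf->tail + bytes > ringbuf->effective_size)) {
			ret = wrap_ring_buffer(ringbuf);
			if (ret)
				return ret;
		}

		/* But always leave enough total free space for the
		 * (already slack-padded) add_request reservation. */
		if (unlikely(ring_space(ringbuf) <
			     bytes + ringbuf->reserved_size)) {
			ret = wait_for_space(ringbuf,
					     bytes + ringbuf->reserved_size);
			if (ret)
				return ret;
		}

		return 0;
	}

The overflow WARN then lives entirely in reserve_space_end, along the
lines of the wrap-tolerant check sketched further up, so nothing outside
that function needs to know how wrapping is handled.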
>
> If I'm reading things correctly this shouldn't have any effect outside
> of patch 2 and shouldn't cause any conflicts.
> -Daniel

See new patch #2...


