[Intel-gfx] [PATCH 51/59] drm/i915: Add *_ring_begin() to request allocation
John Harrison
John.C.Harrison at Intel.com
Fri Mar 20 09:09:55 PDT 2015
On 20/03/2015 15:30, Chris Wilson wrote:
> On Fri, Mar 20, 2015 at 04:23:45PM +0100, Daniel Vetter wrote:
>> On Thu, Mar 19, 2015 at 12:30:56PM +0000, John.C.Harrison at Intel.com wrote:
>>> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
>>> index 6f198df..c7dcabd 100644
>>> --- a/drivers/gpu/drm/i915/intel_ringbuffer.c
>>> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
>>> @@ -2205,21 +2205,27 @@ int intel_ring_alloc_request_extras(struct drm_i915_gem_request *request)
>>> return 0;
>>> }
>>>
>>> -int intel_ring_reserved_space_reserve(struct intel_ringbuffer *ringbuf, int size)
>>> +int legacy_ring_reserved_space_reserve(struct drm_i915_gem_request *request)
>>> {
>>> - /* NB: Until request management is fully tidied up and the OLR is
>>> - * removed, there are too many ways for get false hits on this
>>> - * anti-recursion check! */
>>> - /*WARN_ON(ringbuf->reserved_size);*/
>>> + /*
>>> + * The first call merely notes the reserve request and is common for
>>> + * all back ends. The subsequent localised _begin() call actually
>>> + * ensures that the reservation is available. Without the begin, if
>>> + * the request creator immediately submitted the request without
>>> + * adding any commands to it then there might not actually be
>>> + * sufficient room for the submission commands.
>>> + */
>>> + intel_ring_reserved_space_reserve(request->ringbuf, MIN_SPACE_FOR_ADD_REQUEST);
>>> +
>>> + return intel_ring_begin(request, 0);
>> This feels a bit convoluted tbh, and would fall apart if we start adding
>> sanity checks to the _begin/advance functions. Can't we instead directly
>> call ring_wait_for_space? This forgoes the intel_wrap_ring_buffer call,
>> but otoh we just need to factor that into our estimates. Wrapping the
>> ring for the entire reservation right away is
>> a) way too much - we only wrap individual ring_begin calls anyway
>> b) not doing any good, since any of the intermediate intel_ring_emit
>> calls might very well push us into a wrap anyway.
>>
>> In the end we just need to increase our reservation by the biggest
>> intel_ring_begin we have in the add_request code - that's the worst case
>> of ring space we might waste due to wrapping.
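In concrete terms, that sizing suggestion amounts to something like the
sketch below; the dword counts and the two cost macros are invented
purely for illustration, only MIN_SPACE_FOR_ADD_REQUEST comes from the
patch above:

    /*
     * The reservation must cover the add_request() emission itself plus
     * the worst case lost to a single wrap, which is bounded by the
     * largest individual intel_ring_begin() in the add_request path.
     * Both counts below are assumed values, not taken from the driver.
     */
    #define ADD_REQUEST_DWORDS      32  /* assumed add_request() cost */
    #define LARGEST_BEGIN_DWORDS    16  /* assumed biggest single _begin() */

    #define MIN_SPACE_FOR_ADD_REQUEST \
            ((ADD_REQUEST_DWORDS + LARGEST_BEGIN_DWORDS) * sizeof(u32))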
I don't see why sanity checks in begin/advance would be a problem. This
_begin() call is only required for the situation where no-one actually
adds any commands to the ring before calling i915_add_request(), and it
is only necessary because i915_add_request() is no longer allowed to
call _begin() itself, as it is not allowed to fail. There is no magic
going on behind the scenes; it is exactly the same as calling
'_begin(RESERVED_SIZE)'. All subsequent code must still do its own
_begin() call before adding commands to the ring (see the sketch
below). Any sanity checks would ignore the reserved size and complain
if a _begin() is too small for the commands actually written. It should
all just work :). Plus it is nice and simple and easy to maintain.
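To make that concrete, the pattern is roughly the sketch below. The
caller function is invented, and the emit helpers' exact signatures
vary between the legacy and execlist paths in this series, so treat
them as illustrative:

    int emit_two_commands(struct drm_i915_gem_request *req)
    {
            struct intel_ringbuffer *ringbuf = req->ringbuf;
            int ret;

            /*
             * Request allocation has already done the reserve plus the
             * equivalent of _begin(MIN_SPACE_FOR_ADD_REQUEST), so the
             * eventual i915_add_request() cannot fail for lack of space.
             */

            /* Each writer still sizes its own _begin() for what it emits. */
            ret = intel_ring_begin(req, 2);
            if (ret)
                    return ret;

            intel_ring_emit(ringbuf, MI_NOOP);
            intel_ring_emit(ringbuf, MI_NOOP);
            intel_ring_advance(ringbuf);

            return 0;
    }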
Also, I don't see why it is better to do the wrap as late as possible.
If a wrap is going to be required at some point then it doesn't really
matter when it is done - early or late, the (small) hit still has to be
taken. Given that the buffer size is orders of magnitude larger than the
size of an individual request, there is no possibility of ever having to
wrap twice. So I would say that keeping the code nice and simple
outweighs any advantage to wrapping a few calls later.
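For reference, the wrap itself is conceptually just NOP-padding to the
end of the buffer and restarting at zero, which is why the (small) cost
is the same whenever it is taken. A simplified sketch (the real
intel_wrap_ring_buffer() must also wait for the consumer to free up the
space being padded over):

    static void wrap_ring(struct intel_ringbuffer *ringbuf)
    {
            u32 __iomem *virt = ringbuf->virtual_start + ringbuf->tail;
            int rem = (ringbuf->size - ringbuf->tail) / sizeof(u32);

            /* Fill the remainder of the buffer with NOPs... */
            while (rem--)
                    iowrite32(MI_NOOP, virt++);

            /* ...and continue writing from the start. */
            ringbuf->tail = 0;
    }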
> Sorry to tune in here but what? What happened to the transactional
> request model?
> -Chris
Er, what transactional request model?
There were comments a while ago about a scheme of not updating the
hardware until the add_request() call, such that the entire request
could be discarded simply by resetting the ring pointers to how they
were at the point of the alloc_request() call (see the sketch below).
I don't know if anything ever came out of that discussion, though. I
certainly didn't see it as a 'we must implement this before we do
anything else' requirement. And I'm just trying to take the
simplest/quickest route to getting the GPU scheduler upstreamed!
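A hypothetical sketch of that rollback, where saved_tail is an invented
field recorded at alloc_request() time; nothing like this exists in the
series today:

    static void discard_request(struct drm_i915_gem_request *req)
    {
            /*
             * Nothing was handed to the hardware before add_request(),
             * so aborting the request is just rewinding the software
             * tail to where it was when the request was allocated.
             */
            req->ringbuf->tail = req->saved_tail;
    }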
John.