[Intel-gfx] [PATCH 08/10] drm/i915: Prelude to splitting i915_gem_do_execbuffer in two
Dave Gordon
david.s.gordon at intel.com
Thu Dec 18 06:06:22 PST 2014
On 17/12/14 20:09, Daniel Vetter wrote:
> On Tue, Dec 16, 2014 at 02:26:55PM +0000, Dave Gordon wrote:
>> On 10/12/14 17:15, Daniel Vetter wrote:
>>> On Wed, Dec 10, 2014 at 04:33:14PM +0000, Dave Gordon wrote:
>>>> On 10/12/14 10:58, Daniel Vetter wrote:
>>>>> On Tue, Dec 09, 2014 at 12:59:11PM +0000, John.C.Harrison at Intel.com wrote:
>>>>>> From: John Harrison <John.C.Harrison at Intel.com>
>>>>>>
>>>>>> The scheduler decouples the submission of batch buffers to the driver
>>>>>> from their submission to the hardware. This basically means splitting
>>>>>> the execbuffer() function in half. This change rearranges some code
>>>>>> ready for the split to occur.
>>>>>
>>>>> Now there's the curious question: where will the split into top/bottom
>>>>> halves be? Without that I have no idea whether it makes sense and so can't review
>>>>> this patch. At least if the goal is to really prep for the scheduler and
>>>>> not just move a few lines around ;-)
>>>>> -Daniel
>>>>>
>>>> [snip]
>>>>>>
>>>>>> + i915_gem_execbuffer_move_to_active(vmas, ring);
>>>>>> +
>>>>>> + /* To be split into two functions here... */
>>>>>> +
>>>>>> + /* Unconditionally invalidate gpu caches and ensure that we do flush
>>>>>> + * any residual writes from the previous batch.
>>>>>> + */
>>>>>> + ret = logical_ring_invalidate_all_caches(ringbuf);
>>>>
>>>> It'll be where the marker comment is above. Ahead of that point is stuff
>>>> to do with setting up software state; after that we're talking to h/w.
>>>> When the scheduler code goes in, it decouples the two by interposing at
>>>> this point. Then batches go into it with s/w state set up, but don't get
>>>> to talk to the h/w until they're selected for execution, possibly in a
>>>> different order.
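>>>>
>>>> As a rough sketch of the intended shape (the function and struct names
>>>> here are invented for illustration, not what the patch actually uses):
>>>>
>>>> struct execbuffer_params;  /* all validated s/w state for one batch */
>>>>
>>>> static int execbuffer_prepare(struct execbuffer_params *params);
>>>> static int execbuffer_submit(struct execbuffer_params *params);
>>>>
>>>> int i915_gem_do_execbuffer(struct execbuffer_params *params)
>>>> {
>>>>         int ret;
>>>>
>>>>         /* Top half: input validation and s/w state setup only;
>>>>          * all the -EINVAL cases happen here or earlier.
>>>>          */
>>>>         ret = execbuffer_prepare(params);
>>>>         if (ret)
>>>>                 return ret;
>>>>
>>>>         /* The scheduler will interpose here: queue the batch and
>>>>          * return, with execbuffer_submit() running later, when the
>>>>          * batch is selected for execution, possibly out of order.
>>>>          */
>>>>         return execbuffer_submit(params);  /* h/w access from here */
>>>> }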
>>>
>>> Oops, I guess I need to see a doctor to check my eyesight ;-)
>>>
>>> Two comments about code that's still in the bottom half:
>>> - There's lots of input validation code still below that cutoff point
>>> which needs to be moved above to still be able to return -EINVAL. Test
>>> coverage is the critical part here, but I think we've closed most of our
>>> gaps. I guess a future patch will address this?
>>
>> Yes, all validation will end up before the batch gets pushed into the
>> scheduler queue. It had already been shuffled there in an earlier
>> version, but some of it then got relocated into less convenient places
>> during the legacy-vs-execlists split. So for now, the execbuffer path
>> has some common code (including early validation), then a split by
>> submission mechanism (with additional validation that may someday get
>> reconverged into the common section), then common code again; only
>> /then/ do the batches get passed to the scheduler, after which there
>> won't be any more EINVAL cases (though there will be other error cases,
>> such as ENOMEM). Then, as and when batches are selected for execution, they're
>> passed to the specific backend submission mechanism code (which of
>> course won't contain any further validation code either).
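>>
>> In outline (function names invented here, just to show the ordering):
>>
>>         ret = execbuf_validate_common(args);   /* common: -EINVAL here */
>>         if (ret)
>>                 return ret;
>>
>>         /* split by submission mechanism, each with its own (for now)
>>          * additional validation
>>          */
>>         ret = using_execlists ? execbuf_check_lrc(args)
>>                               : execbuf_check_legacy(args);
>>         if (ret)
>>                 return ret;
>>
>>         /* common again; no more -EINVAL after this call, only
>>          * things like -ENOMEM
>>          */
>>         return scheduler_queue_batch(args);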
>
> Hm, what can still go ENOMEM in there? We should pin everything we need,
> the struct is preallocated and everything else should be in place too. Or
> at least that's how I think it should work out. If the scheduler needs any
> allocations for its internal tracking then we need to push them all into
> other places, most likely requests.
> -Daniel
It used to be the case that the scheduler had to allocate some tracking
data, but John may well have changed that with the advent of the
seqno->request transformation.
I'd like to eventually move all batch-related allocations into a single
chunk per request, done after input validation and before acquiring any
locks :)
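Something like this, say (the struct and function names are invented for
illustration only):

#include <linux/slab.h>

/*
 * Everything the scheduler and the backend will need in order to track
 * one batch, grabbed in a single allocation after input validation has
 * passed and before struct_mutex (or anything else) is taken, so the
 * queued/locked paths never have to cope with -ENOMEM.
 */
struct batch_track {
        struct list_head sched_link;   /* scheduler queue linkage */
        void *backend_state;           /* per-submission-path data */
};

static struct batch_track *batch_track_alloc(void)
{
        /* the one (and only) -ENOMEM point on the submission path */
        return kzalloc(sizeof(struct batch_track), GFP_KERNEL);
}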
.Dave.