Include request for reset-rework branch v4

Christian König deathsimple at vodafone.de
Thu May 3 10:49:00 PDT 2012


On 03.05.2012 19:34, Jerome Glisse wrote:
> On Thu, May 3, 2012 at 1:28 PM, Christian König <deathsimple at vodafone.de> wrote:
>> On 03.05.2012 19:20, Alex Deucher wrote:
>>> 2012/5/3 Jerome Glisse <j.glisse at gmail.com>:
>>>> On Thu, May 3, 2012 at 12:39 PM, Christian König
>>>> <deathsimple at vodafone.de> wrote:
>>>>> On 03.05.2012 18:32, Jerome Glisse wrote:
>>>>>> On Thu, May 3, 2012 at 4:19 AM, Christian König
>>>>>> <deathsimple at vodafone.de> wrote:
>>>>>>> On 02.05.2012 18:01, Jerome Glisse wrote:
>>>>>>>> On Wed, May 2, 2012 at 9:11 AM, Christian König
>>>>>>>> <deathsimple at vodafone.de> wrote:
>>>>>>>>> Hi Dave,
>>>>>>>>>
>>>>>>>>> there still seems to be the need for some further discussion
>>>>>>>>> about the SA code, so I again split that out of the patchset and
>>>>>>>>> tested the result a bit.
>>>>>>>>>
>>>>>>>>> Most of the stuff still works fine without those offending
>>>>>>>>> changes, so to avoid mailing around unrelated and already
>>>>>>>>> reviewed patches, I request the inclusion of the following 17
>>>>>>>>> patches into drm-next.
>>>>>>>>>
>>>>>>>>> If you prefer to merge, they are also available from
>>>>>>>>> git://people.freedesktop.org/~deathsimple/linux, branch reset-rework.
>>>>>>>>>
>>>>>>>>> Cheers,
>>>>>>>>> Christian.
>>>>>>>>>
>>>>>>>> I am OK with this set of 17 patches; I just sent 3 patches on top
>>>>>>>> of those 17 that bring back some of the previous cleanup.
>>>>>>> At least for now those three are a NAK, because I just realized we
>>>>>>> need to put them on top of a more sophisticated fence
>>>>>>> implementation.
>>>>>>>
>>>>>>> Your idea of not using a list, but 64-bit sequences instead,
>>>>>>> actually sounds quite nifty to me. I'm going to hack something
>>>>>>> together in the next couple of hours.
>>>>>>>
>>>>>>> Christian.
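
For reference, the idea is roughly this: each ring keeps a monotonically
increasing 64-bit sequence number, the hardware writes only the lower 32
bits back, and the driver extends that to 64 bits when it processes
fences, so checking whether a fence has signaled becomes a plain integer
comparison instead of a list walk. A minimal sketch under those
assumptions, with illustrative names rather than the actual patch:

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative sketch of the 64-bit fence sequence idea; the names
     * and layout here are not the actual radeon implementation. */
    struct ring_fence_state {
        uint64_t sync_seq; /* last sequence number emitted to the ring */
        uint64_t last_seq; /* last sequence number seen as signaled */
    };

    /* Extend the 32-bit value the hardware wrote back to 64 bits,
     * handling the 32-bit counter wrapping around. */
    static uint64_t fence_process(struct ring_fence_state *s, uint32_t hw_seq)
    {
        uint64_t seq = (s->last_seq & 0xffffffff00000000ULL) | hw_seq;

        if (seq < s->last_seq)
            seq += 0x100000000ULL;
        s->last_seq = seq;
        return seq;
    }

    /* A fence with sequence number 'seq' has signaled iff it is not
     * newer than the last sequence seen from the hardware. */
    static bool fence_signaled(struct ring_fence_state *s, uint64_t seq)
    {
        return seq <= s->last_seq;
    }

Emission would then just be sync_seq++ plus writing the lower 32 bits to
the ring, with no per-fence list entry needed.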
>>>>>> Btw, you said that you are having issues when using multiple rings.
>>>>>> It has come to my attention that you never sync with the GFX ring
>>>>>> (unless asked to by userspace); that's wrong. Before scheduling on a
>>>>>> ring other than the GFX one you need to emit a semaphore to make
>>>>>> that ring wait for the last emitted fence on the GFX ring, because
>>>>>> of TTM. What might happen is that TTM schedules a bo move on the GFX
>>>>>> ring, and the ring you work on starts using those bos at their
>>>>>> soon-to-be GPU offset while the bo data is not there yet.
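
For illustration, the synchronization described above would look roughly
like the sketch below; all helper and field names here are hypothetical,
not the actual radeon API:

    /* Hypothetical sketch: before running a job on a ring other than
     * GFX, emit a semaphore that the GFX ring signals and the target
     * ring waits on, so bo moves TTM scheduled on the GFX ring are
     * complete before the bos are used at their new GPU offsets. */
    if (ring_index != GFX_RING_INDEX) {
        struct semaphore_obj *sem = semaphore_create(rdev);

        semaphore_emit_signal(rdev, GFX_RING_INDEX, sem);
        semaphore_emit_wait(rdev, ring_index, sem);
    }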
>>>>> Yeah, I already stumbled over that, but IIRC Alex already solved
>>>>> that issue.
>>>>>
>>>>>
>>>>> We set the sync_obj to the fence of the move operation, so when
>>>>> there is a move operation in progress we sync to it correctly (at
>>>>> least I hope so).
>>>>>
>>>>> Christian.
>>>>>
>>>> Looking at the code, it doesn't seem OK: yes, you use the fence from
>>>> the move operation, but you only emit the wait if userspace asks for
>>>> it. That's wrong.
>>>>
>>>> if (!(p->relocs[i].flags & RADEON_RELOC_DONT_SYNC)) {
>>>>
>>> The default is to sync all the rings.  We only skip the sync if the
>>> _DONT_ sync flag is set.
>> Yeah, but the _DONT_ sync flag is just an optimization, and we only
>> want to skip syncing to things userspace has submitted.
>>
>> E.g. userspace tells us that two jobs can happily run on the same bo,
>> even if they are both writing to it.
>>
>> So Jerome is completely right: userspace doesn't expect that TTM jumps
>> in between and moves the bo around. That is indeed a bug, and another
>> thing for the todo list.
>>
>> Christian.
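
For illustration, the fix being discussed amounts to something like the
sketch below; only RADEON_RELOC_DONT_SYNC comes from the thread, while
cs_sync_to_fence and the field names are hypothetical:

    /* Hypothetical sketch: always sync to a pending TTM move; the
     * RADEON_RELOC_DONT_SYNC flag may only skip the sync against fences
     * from other userspace submissions. */
    struct fence *move_fence = bo->tbo.sync_obj;

    if (move_fence)
        cs_sync_to_fence(p, move_fence);        /* never optional */

    if (!(p->relocs[i].flags & RADEON_RELOC_DONT_SYNC))
        cs_sync_to_fence(p, bo->last_cs_fence); /* userspace-controlled */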
> Well, until userspace can tell the kernel which fence it wants to wait
> on, I believe this flag becomes obsolete and can be removed; I am pretty
> sure no released userspace code ever used that flag.

Agreed, AFAIK it currently isn't used anyway.

Christian.

