[PATCH 3/6] drm/ttm: call ttm_bo_cleanup_refs with reservation and lru lock held

Thomas Hellstrom thellstrom at vmware.com
Wed Nov 28 06:23:34 PST 2012


On 11/28/2012 02:55 PM, Maarten Lankhorst wrote:
> Op 28-11-12 14:21, Thomas Hellstrom schreef:
>> On 11/28/2012 01:15 PM, Maarten Lankhorst wrote:
>>> Op 28-11-12 12:54, Thomas Hellstrom schreef:
>>>> On 11/28/2012 12:25 PM, Maarten Lankhorst wrote:
>>>>> By removing the unlocking of lru and retaking it immediately, a race is
>>>>> removed where the bo is taken off the swap list or the lru list between
>>>>> the unlock and relock. As such the cleanup_refs code can be simplified:
>>>>> it first attempts a non-blocking ttm_bo_wait, and if that fails it
>>>>> either drops the locks and performs a blocking wait, or returns an
>>>>> error if no_wait_gpu was set.
>>>>>
>>>>> The need for looping is also eliminated, since swapout and evict_mem_first
>>>>> will always follow the destruction path, so no new fence is allowed
>>>>> to be attached. As far as I can see this may already have been the case,
>>>>> but the unlocking / relocking required a complicated loop to deal with
>>>>> re-reservation.
>>>>>
>>>>> The downside is that ttm_bo_cleanup_memtype_use is no longer called with
>>>>> reservation held, so drivers must be aware that move_notify with a null
>>>>> parameter doesn't require a reservation.
>>>> Why can't we unreserve *after* ttm_bo_cleanup_memtype_use? That's not
>>>> immediately clear from this patch.
>>> Because we would hold the reservation while waiting, with the object still
>>> on the swap and lru lists. That would defeat the whole purpose of keeping
>>> the object on multiple lists, plus break current code that assumes bo's on
>>> those lists can always be reserved.
>>>
>>> the if (ret && !no_wait_gpu) path has to drop the reservation and the lru
>>> lock, and isn't guaranteed to be able to retake them. Maybe that could be
>>> guaranteed now, but I'm sure it would not be the case if the reservations
>>> were shared across devices.
>> The evict path removes the BO from the LRU lists, drops the LRU lock but hangs on to the reservation,
>> and in case the wait goes wrong, re-adds the bo to the LRU lists and returns an error.
> If you really want to, we could hang on to the reservation in the !no_wait_gpu path; the wait shouldn't ever fail
> there, so I suppose leaving the bo off the lru lists, and not re-adding it on wait failure, is fine. It's still on the
> ddestroy list in that case, so not adding it back to the other lists is harmless.
>
Well, I'm a bit afraid that theoretically, other callers may have a bo 
reserved while cleanup_refs_and_unlock more or less runs the whole 
destroy path on that buffer. Sure, we have control over those other 
reservers, but it may come back and bite us.

Also the wait might fail if a signal is hit, so it's definitely 
possible, and even likely in the case of the X server process.

Anyway, I'd prefer if we could try to keep the reservation across the 
ttm_bo_cleanup_memtype_use function, and as far as I can tell, the only 
thing preventing that is the reservation release in the (!no_wait_gpu) 
path. So if we alter that to do the same as the evict path, then, 
without looking too deeply into the consequences, I think we should be 
safe.
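For what it's worth, the locking order being discussed could be sketched as a
toy model like the one below. This is *not* the actual TTM code: the pthread
mutexes merely stand in for the bo reservation and the glob->lru_lock,
mock_bo_wait stands in for ttm_bo_wait, and the function bodies are
illustrative only. The flow it models is the one proposed above: probe the
fence non-blockingly, and in the blocking-wait case drop only the lru lock
while keeping the reservation held across the memtype cleanup, as the evict
path does.

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>
#include <stdbool.h>

/* Toy model of the proposed cleanup_refs_and_unlock() flow.  The real
 * code operates on struct ttm_buffer_object; everything here is a
 * stand-in used only to illustrate the locking order under discussion. */

struct mock_bo {
	pthread_mutex_t reserve;	/* stands in for the bo reservation */
	bool fence_signaled;		/* stands in for the GPU fence state */
	bool on_lru;			/* stands in for the lru/swap lists */
	bool destroyed;
};

static pthread_mutex_t lru_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stands in for ttm_bo_wait(): 0 when idle, -EBUSY for the non-blocking
 * probe; the blocking wait always succeeds in this toy model. */
static int mock_bo_wait(struct mock_bo *bo, bool no_wait)
{
	if (bo->fence_signaled)
		return 0;
	if (no_wait)
		return -EBUSY;
	bo->fence_signaled = true;	/* pretend the blocking wait finished */
	return 0;
}

/* Called with bo->reserve and lru_lock held; unlocks both before
 * returning.  Mirrors the proposal: if a blocking wait is needed, keep
 * the reservation (as the evict path does) and drop only the lru lock. */
static int cleanup_refs_and_unlock(struct mock_bo *bo, bool no_wait_gpu)
{
	int ret = mock_bo_wait(bo, true);

	if (ret && no_wait_gpu) {
		pthread_mutex_unlock(&bo->reserve);
		pthread_mutex_unlock(&lru_lock);
		return ret;
	}

	if (ret) {
		/* Keep the reservation, drop only the lru lock. */
		pthread_mutex_unlock(&lru_lock);
		ret = mock_bo_wait(bo, false);
		if (ret) {
			pthread_mutex_unlock(&bo->reserve);
			return ret;
		}
		pthread_mutex_lock(&lru_lock);
	}

	bo->on_lru = false;		/* take the bo off the lists */
	pthread_mutex_unlock(&lru_lock);
	bo->destroyed = true;		/* ~ ttm_bo_cleanup_memtype_use */
	pthread_mutex_unlock(&bo->reserve);	/* reservation held across it */
	return 0;
}
```

Again, just a sketch of the ordering, not a claim about what the real
ttm_bo_cleanup_refs_and_unlock looks like.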

Then we should be able to skip patch 2 as well.

/Thomas




More information about the dri-devel mailing list