[PATCH] drm/ttm: remove fence_lock
Maarten Lankhorst
maarten.lankhorst at canonical.com
Thu Oct 18 07:45:45 PDT 2012
On 18-10-12 13:55, Thomas Hellstrom wrote:
> On 10/18/2012 01:38 PM, Maarten Lankhorst wrote:
>> On 18-10-12 13:02, Thomas Hellstrom wrote:
>>> On 10/18/2012 10:37 AM, Maarten Lankhorst wrote:
>>>> Hey,
>>>>
>>>> On 18-10-12 09:59, Thomas Hellstrom wrote:
>>>>>
>>>>> On 10/18/2012 09:28 AM, Thomas Hellstrom wrote:
>>>>>> Hi, Maarten,
>>>>>>
>>>>>> As you know I have been having my doubts about this change.
>>>>>> To me it seems insane to be forced to read the fence pointer under a
>>>>>> reserve lock, simply because when you try to take the reserve lock,
>>>>>> another process may already hold it, and there is a substantial chance
>>>>>> that that process will also be waiting for idle while holding the reserve lock.
>>>> I think it makes perfect sense; the only time you want to read the fence
>>>> is when you want to change the members protected by the reservation.
>>> No, that's not true. A typical case (or the only case)
>>> is where you want to do a map with no_wait semantics. You will want
>>> to be able to access a buffer's results even if the eviction code
>>> is about to schedule an unbind from the GPU and holds the buffer
>>> reserved.
>> Well, either block on reserve or return -EBUSY if reserved; presumably the
>> former...
>>
>> ttm_bo_vm_fault already does the latter, anyway.
>
> ttm_bo_vm_fault only trylocks to avoid a deadlock with mmap_sem; it's really
> a waiting reserve, but for different reasons. Typically a user-space app will keep
> asynchronous maps to TTM during a buffer lifetime, and the buffer lifetime may
> be long as user-space apps keep caches.
> That's not the same as accessing a buffer after the GPU is done with it.
Ah indeed.
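For reference, the trylock at the top of ttm_bo_vm_fault is roughly the
following (paraphrased from memory, so the exact arguments may be off):

	/* no_wait reserve: back off and retry the fault rather than
	 * sleeping on the reservation with mmap_sem held */
	ret = ttm_bo_reserve(bo, true, true, false, 0);
	if (unlikely(ret != 0))
		return VM_FAULT_NOPAGE;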
>>
>> You don't need to hold the reservation while performing the wait itself, though:
>> you could check whether ttm_bo_wait(no_wait_gpu = true) returns -EBUSY, and if so
>> take a ref to the sync_obj member and then wait after unreserving. You won't
>> reset the sync_obj member to NULL in that case, but that should be harmless.
>> This will allow you to keep the reservations fast and short. Maybe a helper
>> would be appropriate for this, since radeon and nouveau both seem to do this.
>>
>
> The problem is that as long as other users are waiting for idle with the reservation
> held, for example to switch GPU engines or to delete a GPU binding, waiting
> for reserve will in many cases mean waiting for the GPU.
This example sounds inefficient. I know nouveau can do this, but it essentially
just stalls the GPU entirely. I guess you mean functions like nouveau_gem_object_close?
It wouldn't surprise me if performance in nouveau could be improved simply by
fixing those cases up instead, since it also stalls the application completely for other uses.
If reservations are held, it also becomes very easy to find all outliers. I haven't
hooked this part of lockdep up yet, but I intend to; see Documentation/lockstat.txt.
Lockstat becomes slightly trickier since we do multi-object reservation, but we
could follow the same path as mutex_lock_interruptible in the interrupted case.
If the waiting on a reservation becomes a problem, I intend to make it a very
measurable problem. :-)
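The annotation could mirror what __mutex_lock_common does today. A rough
sketch, with a made-up dep_map member embedded in the bo (purely
hypothetical, not part of this series):

static int ttm_bo_reserve_lockstat(struct ttm_buffer_object *bo,
				   bool interruptible)
{
	int ret;

	/* uncontended fast path, annotated like a trylock */
	if (ttm_bo_reserve(bo, interruptible, true, false, 0) == 0) {
		mutex_acquire(&bo->dep_map, 0, 1, _RET_IP_);
		return 0;
	}

	mutex_acquire(&bo->dep_map, 0, 0, _RET_IP_);
	lock_contended(&bo->dep_map, _RET_IP_);

	ret = ttm_bo_reserve(bo, interruptible, false, false, 0);
	if (ret) {
		/* interrupted: drop the annotation, as
		 * mutex_lock_interruptible does */
		mutex_release(&bo->dep_map, 1, _RET_IP_);
		return ret;
	}

	lock_acquired(&bo->dep_map, _RET_IP_);
	return 0;
}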
And the only reason I haven't fixed the nouveau function I mentioned
is that I wanted to see if I could make this show up as an issue, and
check how much it affects normal workloads.
All other places already take the reservation before waiting, so it would be
really valuable to quantify how much of a problem this really is, and fix
up those pain points instead.
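To make the helper I suggested above concrete, it could look something like
this on top of this series, where the fence pointer is protected by the
reservation (the name is made up, and I'm writing the sync_obj_* hook
arguments from memory):

int ttm_bo_wait_unreserve(struct ttm_buffer_object *bo, bool interruptible)
{
	struct ttm_bo_driver *driver = bo->bdev->driver;
	void *sync_obj;
	int ret;

	/* cheap check under the reservation; only take a sync_obj
	 * reference if the bo is actually busy */
	ret = ttm_bo_wait(bo, false, interruptible, true /* no_wait_gpu */);
	if (ret != -EBUSY) {
		ttm_bo_unreserve(bo);
		return ret;
	}

	sync_obj = driver->sync_obj_ref(bo->sync_obj);
	ttm_bo_unreserve(bo);

	/* bo->sync_obj is deliberately not cleared; a stale but signaled
	 * sync_obj should be harmless, as noted above */
	ret = driver->sync_obj_wait(sync_obj, false, interruptible);
	driver->sync_obj_unref(&sync_obj);
	return ret;
}

The reservation is then only held for the -EBUSY check and the refcount
bump, never across the actual GPU wait.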
~Maarten