[PATCH] drm/ttm: remove fence_lock

Thomas Hellstrom thomas at shipmail.org
Thu Oct 18 04:02:19 PDT 2012


On 10/18/2012 10:37 AM, Maarten Lankhorst wrote:
> Hey,
>
> On 18-10-12 09:59, Thomas Hellstrom wrote:
>>
>>
>> On 10/18/2012 09:28 AM, Thomas Hellstrom wrote:
>>> Hi, Maarten,
>>>
>>> As you know I have been having my doubts about this change.
>>> To me it seems insane to be forced to read the fence pointer under
>>> the reservation lock, simply because when you take the reservation,
>>> another process may hold it, and there is a substantial chance that
>>> that process will also be waiting for idle while holding the
>>> reservation.
> I think it makes perfect sense: the only time you want to read the fence
> is when you want to change the members protected by the reservation.

No, that's not true. A typical case (perhaps the only case) is a map
with no_wait semantics: you want to be able to access a buffer's
results even while the eviction code, which may have the buffer
reserved, is about to schedule an unbind from the GPU.
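
For illustration, a minimal sketch of such an idle check as it works
today with the fence_lock; my_bo_map_no_wait is a made-up driver
helper, while ttm_bo_wait() and bdev->fence_lock are the existing
interfaces:

#include <linux/spinlock.h>
#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_bo_driver.h>

/* Check whether a buffer is idle before mapping it, without blocking
 * and without taking the reservation. ttm_bo_wait() expects the
 * fence_lock to be held on entry; with no_wait == true it returns
 * -EBUSY instead of sleeping when a fence is still pending. */
static int my_bo_map_no_wait(struct ttm_buffer_object *bo)
{
	struct ttm_bo_device *bdev = bo->bdev;
	int ret;

	spin_lock(&bdev->fence_lock);
	ret = ttm_bo_wait(bo, false, true, true);
	spin_unlock(&bdev->fence_lock);

	return ret;	/* 0: idle, safe to map; -EBUSY: still busy */
}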

>
>>> In essence, just to read the fence pointer there is a large chance
>>> you will end up waiting for idle before you can access it. That's why
>>> it's protected by a separate spinlock in the first place.
>>>
>>> So even if this change might not affect current drivers much, it's a
>>> change to the TTM API that leads to an IMHO very poor design.
> I would argue the opposite: no longer having a separate lock, with clear
> semantics for when fencing is allowed, gives you a way to clean up the
> core of TTM immensely.
>
> There were only two functions affected by the fence_lock removal, and
> both are on the buffer destruction path: ttm_bo_cleanup_refs (and
> *_or_queue).

The actual code and the number of users are irrelevant here, since
we're discussing the implications of changing the API.

>> One way we could perhaps improve on this, if you think it necessary, is
>> to build a bit on the initial RCU suggestion: still require reservation
>> when the fence pointer is modified, but also use rcu_assign_pointer()
>> for the fence pointer.
> This is massive overkill when the only time you read the fence pointer
> without reservation is during buffer destruction. RCU is only good if
> there are ~10x more reads than writes, and for us it's simply 50% reads,
> 50% writes.
>

I think you misunderstand me. I'm not suggesting going down the full
RCU path; I'm merely making sure that reads and writes of the bo's
fence pointer are atomic, using the RCU accessor functions for this.
I'm not suggesting any RCU syncs. This means your patch can be kept
largely as it is; just make sure the writes to the fence pointers are
atomic.
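
Roughly like this; a sketch only, where my_bo_publish_fence() and
my_bo_is_idle() are made-up helpers, bo->sync_obj stands in for the
fence pointer (it would need an __rcu annotation to keep sparse happy),
and nothing here deals with fence lifetime:

#include <linux/rcupdate.h>
#include <drm/ttm/ttm_bo_api.h>

/* Writer side: the bo must be reserved. rcu_assign_pointer() orders
 * the fence's initialization before the pointer becomes visible, so
 * an unreserved reader never sees a half-initialized fence. */
static void my_bo_publish_fence(struct ttm_buffer_object *bo, void *fence)
{
	rcu_assign_pointer(bo->sync_obj, fence);
}

/* Reader side: no reservation held. rcu_dereference() makes the read
 * atomic and properly ordered. No synchronize_rcu() anywhere; the
 * read-side section is only there to satisfy lockdep. */
static bool my_bo_is_idle(struct ttm_buffer_object *bo)
{
	bool idle;

	rcu_read_lock();
	idle = (rcu_dereference(bo->sync_obj) == NULL);
	rcu_read_unlock();

	return idle;
}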

/Thomas