[Mesa-dev] [PATCH] os: add spinlocks

Marek Olšák maraeo at gmail.com
Thu Dec 16 07:46:39 PST 2010


Thanks for all the info. The more I think about spinlocks, the more I see
they don't really give as much as I thought. I was just trying to optimize
some code in the wrong way, and probably in the wrong place too. It doesn't
even deserve further comment, as it was quite a banal thing.

FYI, I am not going to push the patch.

Marek

On Wed, Dec 15, 2010 at 4:10 PM, Thomas Hellstrom <thellstrom at vmware.com>wrote:

> OK, some info to back this up; please see inline.
>
>
> On 12/15/2010 01:20 PM, Thomas Hellstrom wrote:
>
> On 12/15/2010 09:23 AM, Marek Olšák wrote:
>
> On Tue, Dec 14, 2010 at 8:10 PM, Thomas Hellstrom <thellstrom at vmware.com>wrote:
>
>> Hmm,
>>
>> for the uninformed, where do we need to use spinlocks in gallium and how
>> do
>> we avoid using them on an UP system?
>>
>
> I plan to use spinlocks to guard very simple code like the macro
> remove_from_list, which might, under some circumstances, be called very
> often (roughly as in the sketch below). Entering and leaving a mutex is
> quite visible in callgrind.
>
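> Roughly what I have in mind (just a sketch with a made-up list node, not
> the actual u_simple_list.h code):
>
> #include <pthread.h>
>
> struct list_node {
>    struct list_node *prev, *next;
> };
>
> static pthread_spinlock_t list_lock;
>
> static void list_lock_init(void)
> {
>    /* the lock is private to this process, not shared across processes */
>    pthread_spin_init(&list_lock, PTHREAD_PROCESS_PRIVATE);
> }
>
> /* unlink an element under the spinlock; the critical section is just a
>    few pointer writes, which is why spinning looked attractive */
> static void remove_from_list_locked(struct list_node *item)
> {
>    pthread_spin_lock(&list_lock);
>    item->prev->next = item->next;
>    item->next->prev = item->prev;
>    item->next = item->prev = item;
>    pthread_spin_unlock(&list_lock);
> }
>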
> What does UP stand for?
>
> Marek
>
>
> I've also noticed that mutexes show up high in profiles on a number of
> occasions, but Linux mutexes should theoretically be roughly as fast as
> spinlocks unless there is high contention (they shouldn't call into the
> kernel unless there is contention). If this is not the case, it is worth
> figuring out why.
>
>
> At first glance, looking at this article,
>
> http://www.alexonlinux.com/pthread-mutex-vs-pthread-spinlock
>
> spinlocks should be superior for things like quick list manipulation. But
> if I try that code with the second thread disabled (thus avoiding lock
> contention), it turns out that in the uncontended case spinlocks are only
> some 13% faster; the loop I timed is sketched below.
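>
> For reference, the uncontended loop is essentially this (a from-memory
> sketch, not the article's exact code; numbers will vary with glibc,
> kernel and CPU):
>
> /* build with: gcc -O2 lockbench.c -lpthread -lrt */
> #include <pthread.h>
> #include <stdio.h>
> #include <time.h>
>
> #define ITERATIONS 10000000L
>
> static double elapsed(struct timespec a, struct timespec b)
> {
>    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) * 1e-9;
> }
>
> int main(void)
> {
>    pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
>    pthread_spinlock_t spin;
>    struct timespec t0, t1;
>    long i;
>
>    pthread_spin_init(&spin, PTHREAD_PROCESS_PRIVATE);
>
>    /* uncontended mutex lock/unlock */
>    clock_gettime(CLOCK_MONOTONIC, &t0);
>    for (i = 0; i < ITERATIONS; i++) {
>       pthread_mutex_lock(&mutex);
>       pthread_mutex_unlock(&mutex);
>    }
>    clock_gettime(CLOCK_MONOTONIC, &t1);
>    printf("mutex:    %.3f s\n", elapsed(t0, t1));
>
>    /* uncontended spinlock lock/unlock */
>    clock_gettime(CLOCK_MONOTONIC, &t0);
>    for (i = 0; i < ITERATIONS; i++) {
>       pthread_spin_lock(&spin);
>       pthread_spin_unlock(&spin);
>    }
>    clock_gettime(CLOCK_MONOTONIC, &t1);
>    printf("spinlock: %.3f s\n", elapsed(t0, t1));
>
>    return 0;
> }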
>
>
>
> The problem with spinlocks in user-space is that a context switch may
> occur inside a locked section, leaving every other process that tries to
> take the lock spinning until the holder is scheduled again.
>
> On uni-processor (UP) systems spinlocks are particularly bad: if there is
> contention on a lock, the spinning process will keep spinning until a
> timeout context switch, and thus behave much worse than a mutex, which
> will context switch immediately. (This is of course unless pthread
> spinlocks do some magic I'm not aware of.)
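>
> (For completeness: glibc does offer a middle ground here, the non-portable
> "adaptive" mutex type, which spins briefly before sleeping. A minimal
> sketch, in case anyone wants to experiment:)
>
> /* needs _GNU_SOURCE for the _NP (non-portable) glibc constant */
> #define _GNU_SOURCE
> #include <pthread.h>
>
> static pthread_mutex_t adaptive_lock;
>
> static void adaptive_lock_init(void)
> {
>    pthread_mutexattr_t attr;
>
>    pthread_mutexattr_init(&attr);
>    /* spin a short while on SMP, then sleep like a normal mutex */
>    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ADAPTIVE_NP);
>    pthread_mutex_init(&adaptive_lock, &attr);
>    pthread_mutexattr_destroy(&attr);
> }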
>
>
>
> Trying the code with two threads on a uni-processor desktop system,
> spinlocks are some 63% slower than mutexes. This is with a kernel tick of
> 1000 Hz; the spinlock numbers would probably be much worse with a slower
> tick.
>
>
>
> In kernel space the situation is different, since spinlocks block
> preemption and contention can thus never occur on uniprocessor systems.
>
> There are of course situations where user-space spinlocks are faster (high
> contention and a low chance of context switches), but that really means the
> user needs to know exactly what he's doing and understand the drawbacks of
> using spinlocks.
>
> I don't really think we should add spinlocks to gallium without
>
> a) Figuring out why, in your case, mutexes behave so much worse than
> spinlocks. Do you expect high lock contention?
> b) Adding some kind of warning about the drawbacks of using them.
>
>
>
> Finally, trying to repeat the author's findings on my dual-processor
> system, spinlocks are faster (though not nearly as much as the author
> states, probably because my CPUs are faster and hence lock contention is
> lower), but the CPU usage is roughly the same as in the mutex case.
>
> Given this, I would advise strongly against building spinlocks into any
> code that might be run on a uni-processor system, particularly gallium
> utility code. If we want to get rid of unnecessary locking overhead we
> should probably fix the code up to avoid taking the locks when not
> strictly needed, as in the sketch below.
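>
> For example, something along these lines (a sketch only; the "shared" flag
> is made up, and it would have to be set before a second thread can ever
> reach the list for the unlocked fast path to be safe):
>
> #include <pthread.h>
> #include <stdbool.h>
>
> struct list_node {
>    struct list_node *prev, *next;
> };
>
> struct context {
>    bool shared;              /* set once, before any cross-thread use */
>    pthread_mutex_t mutex;
>    struct list_node head;
> };
>
> /* skip locking entirely on the common single-threaded path */
> static void ctx_remove(struct context *ctx, struct list_node *item)
> {
>    if (ctx->shared)
>       pthread_mutex_lock(&ctx->mutex);
>    item->prev->next = item->next;
>    item->next->prev = item->prev;
>    if (ctx->shared)
>       pthread_mutex_unlock(&ctx->mutex);
> }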
>
> /Thomas
>
>