[PATCH v3 2/3] mutex: add support for wound/wait style locks, v3

Daniel Vetter daniel at ffwll.ch
Mon May 27 07:47:40 PDT 2013


On Mon, May 27, 2013 at 10:21 AM, Peter Zijlstra <peterz at infradead.org> wrote:
> On Wed, May 22, 2013 at 07:24:38PM +0200, Maarten Lankhorst wrote:
>> >> +static inline void ww_acquire_init(struct ww_acquire_ctx *ctx,
>> >> +                             struct ww_class *ww_class)
>> >> +{
>> >> +  ctx->task = current;
>> >> +  do {
>> >> +          ctx->stamp = atomic_long_inc_return(&ww_class->stamp);
>> >> +  } while (unlikely(!ctx->stamp));
>> > I suppose we'll figure something out when this becomes a bottleneck. Ideally
>> > we'd do something like:
>> >
>> >  ctx->stamp = local_clock();
>> >
>> > but for now we cannot guarantee that's not jiffies, and I suppose that's a tad
>> > too coarse to work for this.
>> This might mess up when 2 cores happen to return exactly the same time; how do you choose a winner in that case?
>> EDIT: Using the pointer address like you suggested below is fine with me; the ctx pointer would be static enough.
>
> Right, but for now I suppose the 'global' atomic is ok; if/when we find
> it hurts performance we can revisit. I was just spewing ideas :-)

We could do a simple

ctx->stamp = (local_clock() << nr_cpu_shift) | local_processor_id()

to work around any bad luck in grabbing the ticket. With a sufficiently
fine-grained clock the bias towards smaller cpu ids would be rather
irrelevant. Just wanted to drop this idea before I forget about it
again ;-)
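
Roughly something like the sketch below (untested, just to illustrate the
scheme; local_clock(), raw_smp_processor_id() and order_base_2() are
existing kernel helpers, but the ww_stamp_from_clock() name and the exact
bit packing are made up here and not part of the posted patch):

#include <linux/sched.h>	/* local_clock() */
#include <linux/smp.h>		/* raw_smp_processor_id() */
#include <linux/log2.h>		/* order_base_2() */

static inline unsigned long ww_stamp_from_clock(void)
{
	/* enough low bits to hold any possible cpu id */
	const unsigned int cpu_shift = order_base_2(NR_CPUS);

	/*
	 * Clock in the high bits, cpu id in the low bits: two cpus
	 * that happen to read the same clock value still end up with
	 * distinct stamps, at the cost of a small bias towards lower
	 * cpu ids on exact ties.
	 */
	return (local_clock() << cpu_shift) | raw_smp_processor_id();
}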
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

