[PATCH 03/17] drm: convert crtc and mode_config to ww_mutex
Daniel Vetter
daniel at ffwll.ch
Mon May 26 09:09:21 PDT 2014
On Mon, May 26, 2014 at 11:49:59AM -0400, Rob Clark wrote:
> On Mon, May 26, 2014 at 11:35 AM, Daniel Vetter <daniel at ffwll.ch> wrote:
> > On Mon, May 26, 2014 at 11:20:49AM -0400, Rob Clark wrote:
> >> On Mon, May 26, 2014 at 11:07 AM, Daniel Vetter <daniel at ffwll.ch> wrote:
> >> > On Mon, May 26, 2014 at 5:04 PM, Rob Clark <robdclark at gmail.com> wrote:
> >> >> hmm, ok, I had thought that in this case thread B would get -EDEADLK
> >> >> because the lock was held and not by his acquire ctx. If that is not
> >> >> the case, then I would propose this:
> >> >>
> >> >> All places doing things the old way currently have to grab
> >> >> mode_config.mutex first. And we use mode_config.mutex to protect
> >> >> mode_config.acquire_ctx. So all the lower spots grabbing individual
> >> >> crtc mutexes can safely use mode_config.acquire_ctx.
> >> >>
> >> >> Then the only headache is propagating -EDEADLK up the call stack. If
> >> >> we are lucky, they all already propagate -EINTR, etc.
> >> >
> >> > The output poll work most definitely doesn't propagate -EINTR. Like
> >> > I've said, this will be painful. And imo doing this also turns the kms
> >> > locking into quite a mess overall.
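
For illustration, a minimal sketch of the backoff/retry pattern the acquire
ctx is meant to enable, assuming the drm_modeset_lock()/drm_modeset_backoff()
helpers this series introduces and assuming crtc->mutex is the
drm_modeset_lock the patch converts it to; touch_crtc_state() and the way the
ctx is threaded through are invented here, not taken from the patches:

/*
 * Anything called below the retry loop must hand -EDEADLK back up
 * unmodified, which is exactly the propagation problem being discussed.
 */
static int touch_crtc_state(struct drm_crtc *crtc,
                            struct drm_modeset_acquire_ctx *ctx)
{
        int ret;

        ret = drm_modeset_lock(&crtc->mutex, ctx);
        if (ret)
                return ret;

        /* ... poke at crtc state; more locks may be taken with the same ctx ... */
        return 0;
}

static int update_crtc(struct drm_crtc *crtc)
{
        struct drm_modeset_acquire_ctx ctx;
        int ret;

        drm_modeset_acquire_init(&ctx, 0);
retry:
        ret = touch_crtc_state(crtc, &ctx);
        if (ret == -EDEADLK) {
                /* drop everything held in ctx, wait for the contended lock */
                drm_modeset_backoff(&ctx);
                goto retry;
        }

        drm_modeset_drop_locks(&ctx);
        drm_modeset_acquire_fini(&ctx);
        return ret;
}
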
> >>
> >> Well, we could hold mode_config.mutex as a traditional mutex around
> >> atomic operations. What you would lose out on is that _NONBLOCK
> >> operations could conceivably call into driver paths without
> >> mode_config.mutex held. That was the advantage of converting
> >> mode_config.mutex as well. Granted, it is slightly theoretical,
> >> because until we expose the atomic ioctl it would only apply to
> >> page_flip (which was not holding mode_config.mutex). And we also want
> >> to get rid of mode_config.mutex in these paths too.
> >>
> >> Otoh, if we want to make locking more fine-grained, more use of
> >> ww_mutex seems like the best way. And if that means adding a return
> >> value to a function here and there and propagating errors properly,
> >> maybe we should just go ahead and do that. It sounds like the right
> >> long-term solution anyway.
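
To make the hybrid concrete, a rough sketch of what "plain mode_config.mutex
on the outside, ww locks underneath" could look like, assuming
mode_config.mutex stays a regular struct mutex while the per-crtc locks use
the drm_modeset_lock API from this series; legacy_path_update() and the exact
handling of mode_config.acquire_ctx are assumptions for illustration:

static int legacy_path_update(struct drm_device *dev, struct drm_crtc *crtc)
{
        struct drm_modeset_acquire_ctx ctx;
        int ret;

        /* outer lock stays a plain mutex and guards mode_config.acquire_ctx */
        mutex_lock(&dev->mode_config.mutex);

        drm_modeset_acquire_init(&ctx, 0);
        dev->mode_config.acquire_ctx = &ctx;
retry:
        /* the per-object ww locks still give us deadlock backoff underneath */
        ret = drm_modeset_lock(&crtc->mutex, &ctx);
        if (ret == -EDEADLK) {
                drm_modeset_backoff(&ctx);
                goto retry;
        }

        /* ... the actual update ... */

        dev->mode_config.acquire_ctx = NULL;
        drm_modeset_drop_locks(&ctx);
        drm_modeset_acquire_fini(&ctx);
        mutex_unlock(&dev->mode_config.mutex);
        return ret;
}
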
> >
> > Yeah, I'm starting to lean towards trying to elide mode_config.mutex
> > completely from the atomic paths (and modesets in general). I think the
> > only bits we really need are ww mutexes on planes _and_ on connectors.
> > The atomic paths would _only_ ever acquire ww mutexes, and we would be
> > able to guarantee that most of them are only held for short times, so
> > that we don't need to bother with trylocking them for NONBLOCK. That
> > should simplify the atomic logic a bit, I hope.
> >
> > So that, and a full subsystem audit unfortunately :(
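
A hypothetical sketch of that end state, where every object the atomic path
can touch carries its own ww-class lock and mode_config.mutex never appears;
the plane lock member and atomic_update() shown here do not exist in the
series and are purely illustrative (connectors would grow the same member):

/* in drm_crtc.h (hypothetical): planes get the same drm_modeset_lock
 * member that crtcs get in this series */
struct drm_plane {
        /* ... */
        struct drm_modeset_lock mutex;
};

static int atomic_update(struct drm_crtc *crtc, struct drm_plane *plane,
                         struct drm_modeset_acquire_ctx *ctx)
{
        int ret;

        ret = drm_modeset_lock(&crtc->mutex, ctx);
        if (ret)
                return ret;     /* -EDEADLK: caller backs off and retries */

        ret = drm_modeset_lock(&plane->mutex, ctx);
        if (ret)
                return ret;

        /* ... check + commit; all locks are dropped via the caller's ctx,
         * and held briefly enough that NONBLOCK needs no trylocking ... */
        return 0;
}
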
>
> well, getting rid of any global lock in the atomic path is, I hope, the
> eventual goal. But we are *really* late with atomic already, and too
> many other things are backing up on top of this. I suppose I should
> have a closer look at the detect paths, just to see how hairy the
> alternative of adding error propagation would be. Although maybe not
> today (it is supposed to be a holiday here... and I've already spent
> enough of the weekend on atomic ;-))
>
> Just to get *something* merged to unblock driver work, and maybe to get
> people to start thinking about plumbing this through userspace (xrandr
> atomic modeset, for example), maybe the way to go is to just revert
> mode_config.mutex to a traditional mutex for now. The weird cases only
> start to come up with real atomic ioctls, and we could use a driver
> flag to opt in to that on a per-driver basis.
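
Such a per-driver opt-in could be as small as a capability bit checked at
the ioctl entry point; the flag name and value below are made up for
illustration, in the style of the existing DRIVER_* feature bits:

/* hypothetical capability bit, alongside the existing DRIVER_* flags */
#define DRIVER_ATOMIC_LOCKING   0x10000

        /* at the top of the (future) atomic ioctl */
        if (!drm_core_check_feature(dev, DRIVER_ATOMIC_LOCKING))
                return -EINVAL; /* driver still relies on the old locking rules */
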
This will result in people raving about regressions, which from a
maintainer pov I'll reject. If we want to chicken out of this question I'd
vote for a drm_modeset_lock_all around the atomic paths, which also isn't
really helpful. I don't really think there's an easy answer.
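
For reference, that "chicken out" variant is just the existing big hammer
wrapped around the whole path, roughly as below; drm_modeset_lock_all() and
drm_modeset_unlock_all() are the current helpers, while do_atomic_commit()
is a placeholder name:

        drm_modeset_lock_all(dev);
        ret = do_atomic_commit(dev, state);     /* placeholder for the atomic path */
        drm_modeset_unlock_all(dev);
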
Wrt unblocking driver work: in i915 we've kept plunging ahead without the
interface stuff so far. And wrt unblocking everything else: I hear a lot
of people screaming about the lack of atomic updates, but at least within
Intel not many people are throwing resources at it. So my impression is
that it can't be that bad, really.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch