[PATCH 13/13] RFC: drm: Atomic modeset ioctl
Daniel Vetter
daniel at ffwll.ch
Thu Dec 18 23:55:18 PST 2014
On Fri, Dec 19, 2014 at 12:29:22PM +0900, Michel Dänzer wrote:
> On 17.12.2014 20:18, Daniel Vetter wrote:
> > On Wed, Dec 17, 2014 at 06:31:13PM +0900, Michel Dänzer wrote:
> >> On 17.12.2014 16:20, Pekka Paalanen wrote:
> >>> On Wed, 17 Dec 2014 11:48:51 +0900
> >>> Michel Dänzer <michel at daenzer.net> wrote:
> >>>
> >>>> On 17.12.2014 08:05, Rob Clark wrote:
> >>>>> The atomic modeset ioctl can be used to push any number of new values
> >>>>> for object properties. The driver can then check the full device
> >>>>> configuration as single unit, and try to apply the changes atomically.
> >>>>>
> >>>>> The ioctl simply takes a list of object IDs and property IDs and their
> >>>>> values.
> >>>>
> >>>> [...]
> >>>>
> >>>>> diff --git a/include/uapi/drm/drm_mode.h b/include/uapi/drm/drm_mode.h
> >>>>> index 86574b0..3459778 100644
> >>>>> --- a/include/uapi/drm/drm_mode.h
> >>>>> +++ b/include/uapi/drm/drm_mode.h
> >>>>> @@ -519,4 +519,25 @@ struct drm_mode_destroy_dumb {
> >>>>> uint32_t handle;
> >>>>> };
> >>>>>
> >>>>> +/* page-flip flags are valid, plus: */
> >>>>> +#define DRM_MODE_ATOMIC_TEST_ONLY 0x0100
> >>>>> +#define DRM_MODE_ATOMIC_NONBLOCK 0x0200
> >>>>> +
> >>>>> +#define DRM_MODE_ATOMIC_FLAGS (\
> >>>>> +		DRM_MODE_PAGE_FLIP_EVENT |\
> >>>>> +		DRM_MODE_PAGE_FLIP_ASYNC |\
> >>>>> +		DRM_MODE_ATOMIC_TEST_ONLY |\
> >>>>> +		DRM_MODE_ATOMIC_NONBLOCK)
> >>>>> +
> >>>>> +struct drm_mode_atomic {
> >>>>> +	__u32 flags;
> >>>>> +	__u32 count_objs;
> >>>>> +	__u64 objs_ptr;
> >>>>> +	__u64 count_props_ptr;
> >>>>> +	__u64 props_ptr;
> >>>>> +	__u64 prop_values_ptr;
> >>>>> +	__u64 blob_values_ptr; /* remove? */
> >>>>> +	__u64 user_data;
> >>>>> +};
> >>>>> +
> >>>>> #endif
> >>>>>
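For reference, a rough sketch of how userspace might end up driving this,
assuming the ioctl gets wired up as DRM_IOCTL_MODE_ATOMIC and that objs_ptr,
count_props_ptr, props_ptr and prop_values_ptr are parallel arrays (object
IDs, per-object property counts, then the flattened property id/value
pairs). The *_prop_id parameters are placeholders that userspace would look
up via drmModeObjectGetProperties(); none of this is in the patch itself:

#include <stdint.h>
#include <xf86drm.h>	/* drmIoctl(); pulls in drm_mode.h via drm.h */

/* Check whether flipping one plane to a new framebuffer would be accepted,
 * validating the full configuration as a single unit. */
static int test_plane_flip(int fd, uint32_t plane_id,
			   uint32_t fb_prop_id, uint32_t crtc_id_prop_id,
			   uint32_t new_fb_id, uint32_t crtc_id)
{
	uint32_t objs[]        = { plane_id };	/* count_objs entries */
	uint32_t count_props[] = { 2 };		/* properties per object */
	uint32_t props[]       = { fb_prop_id, crtc_id_prop_id };
	uint64_t prop_values[] = { new_fb_id, crtc_id };

	struct drm_mode_atomic req = {
		.flags           = DRM_MODE_ATOMIC_TEST_ONLY,
		.count_objs      = 1,
		.objs_ptr        = (uint64_t)(uintptr_t)objs,
		.count_props_ptr = (uint64_t)(uintptr_t)count_props,
		.props_ptr       = (uint64_t)(uintptr_t)props,
		.prop_values_ptr = (uint64_t)(uintptr_t)prop_values,
		.user_data       = 0,	/* presumably a cookie echoed back
					 * with the completion event */
	};

	/* DRM_IOCTL_MODE_ATOMIC is assumed here; the ioctl number isn't
	 * part of the hunk quoted above. */
	return drmIoctl(fd, DRM_IOCTL_MODE_ATOMIC, &req);
}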
> >>>>
> >>>> The new ioctl(s) should take an explicit parameter specifying when the
> >>>> changes should take effect. And since variable refresh rate displays are
> >>>> becoming mainstream, that parameter should probably be a timestamp
> >>>> rather than a frame counter.
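Just to make that concrete, I read this as something along the following
lines; target_nsec and the CLOCK_MONOTONIC convention are purely made up
for illustration, nothing here has been agreed on:

/* Hypothetical extension of the proposed struct, only to illustrate the
 * idea of an explicit target time. */
struct drm_mode_atomic {
	__u32 flags;
	__u32 count_objs;
	__u64 objs_ptr;
	__u64 count_props_ptr;
	__u64 props_ptr;
	__u64 prop_values_ptr;
	__u64 blob_values_ptr;
	__u64 user_data;
	__u64 target_nsec;	/* e.g. CLOCK_MONOTONIC time the update must
				 * not be applied before; 0 = next vblank */
};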
> >>>
> >>> That sounds cool to me, but also like a rabbit hole. Having worked on
> >>> the Wayland Presentation queueing extension, I'd like to ask the
> >>> following questions:
> >>>
> >>> - If you set the atomic kick to happen in the future, is there any way
> >>> to cancel it? I'd be ok with not being able to cancel initially, but
> >>> if one wants to add that later, we should already know how to
> >>> reference this atomic submission in the cancel request. If user space
> >>> has a bug and schedules an update one hour or three days from now, how
> >>> would we abort that?
> >>>
> >>> - Can you VT-switch or drop DRM master if you have a pending atomic
> >>> update?
> >>>
> >>> - Should one be able to set multiple pending atomic updates?
> >>>
> >>> - If I schedule an atomic update for one CRTC, can I schedule another
> >>> update for another CRTC before the first one completes? Or am I
> >>> forced to gather all updates over all outputs in the same atomic
> >>> submission even if I don't care about or want inter-output sync and
> >>> the outputs might even be running at different refresh rates?
> >>> (Actually, this seems to be a valid question even without any target
> >>> time parameter.)
> >>>
> >>> - If there can be multiple pending atomic updates on the same DRM
> >>> device, is there any way to guarantee that the
> >>> DRM_MODE_ATOMIC_TEST_ONLY results will still be accurate when the
> >>> atomic update actually kicks in? Another update may have changed the
> >>> configuration before this update kicks in, which means the overall
> >>> state isn't the same that was tested.
> >>>
> >>> - Does a pending atomic update prevent immediate (old style) KMS
> >>> changes?
> >>>
> >>> - Assuming hardware cannot do arbitrary time updates, how do you round
> >>> the given timestamp? Strictly "not before" given time? Round to
> >>> nearest possible time? The effect of required vs. unwanted sync to
> >>> vblank?
> >>>
> >>> - How would user space match the page flip event to the atomic
> >>> submission it did?
> >>>
> >>> I wonder if there is a way to postpone these hard(?) questions, so that
> >>> we could have atomic sooner and add scheduling later? I would imagine
> >>> solving everything above is quite some work.
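On the last question (matching the flip event to the atomic submission my
working assumption is that the user_data cookie in the proposed struct gets
echoed back the same way the existing page flip event does it, i.e. through
drm_event_vblank and the normal libdrm event dispatch. A sketch, under that
assumption:

#include <xf86drm.h>

/* Called by drmHandleEvent() for DRM_EVENT_FLIP_COMPLETE; user_data is the
 * cookie passed in at submission time, so per-commit state can be looked
 * up here. */
static void flip_done(int fd, unsigned int sequence, unsigned int tv_sec,
		      unsigned int tv_usec, void *user_data)
{
	/* match user_data against the compositor's pending commits */
}

static void drain_drm_events(int fd)
{
	drmEventContext evctx = {
		.version = DRM_EVENT_CONTEXT_VERSION,
		.page_flip_handler = flip_done,
	};

	/* Reads pending events from the DRM fd and dispatches them. */
	drmHandleEvent(fd, &evctx);
}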
> >>
> >> I agree. The main reason I brought it up is because I'd like to avoid
> >> getting into the same situation as with the current
> >> DRM_IOCTL_MODE_PAGE_FLIP ioctl, which doesn't explicitly communicate
> >> between userspace and kernel when the flip is supposed/expected to
> >> occur. We recently had to jump through some hoops in the radeon kernel
> >> driver to prevent flips from occurring sooner than expected by userspace.
> >
> > The current approach is to ask for a vblank event 1 frame earlier in the
> > ddx and schedule the flip when you receive that. And hope for the best.
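For anyone following along, that dance is roughly the following (CRTC
selection flags and error handling omitted; pending_flip is just stand-in
ddx-side state):

#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

struct pending_flip {		/* per-CRTC state the ddx keeps around */
	uint32_t crtc_id;
	uint32_t fb_id;
};

/* Step 1: ask the kernel for a vblank event one frame from now, with our
 * state as the cookie. */
static int arm_flip(int fd, struct pending_flip *flip)
{
	drmVBlank vbl = {
		.request = {
			.type     = DRM_VBLANK_RELATIVE | DRM_VBLANK_EVENT,
			.sequence = 1,
			.signal   = (unsigned long)flip,
		},
	};

	return drmWaitVBlank(fd, &vbl);
}

/* Step 2: when that event comes back via drmHandleEvent(), queue the real
 * flip and hope it lands on the intended vblank. */
static void vblank_handler(int fd, unsigned int seq, unsigned int tv_sec,
			   unsigned int tv_usec, void *data)
{
	struct pending_flip *flip = data;

	drmModePageFlip(fd, flip->crtc_id, flip->fb_id,
			DRM_MODE_PAGE_FLIP_EVENT, flip);
}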
>
> I'm well aware of that; it's exactly what broke down with radeon.
>
> The fundamental problem is that if the timing semantics are not
> explicitly specified in the interface, different drivers may end up
> implementing slightly different semantics, and different userspace may
> end up expecting the semantics implemented by different drivers.
>
> If you guys are aware of this risk and confident it can be handled
> without explicitly specifying the timing semantics in the interface
> upfront, that's great. I wanted to make sure you're aware.
There are two answers to that:
- With my Intel hat on: we check all these things, plus piles of other
  races (flip-vs-anything tests, really), in igt. I think i915 is pretty
  much the standard here, so I'll be fine with that for now.
- With my community hat on, I'd really like someone (I can't really
  justify this to my boss) to port igt to libkms or something similar for
  the generic tests and make them run everywhere. It might need some
  detail work, but overall we have the infrastructure to easily skip
  specific tests as needed.
Until that happens, my approach is to push as much of the i915 semantics
as possible into core/shared helpers, like what I've proposed for the
atomic DPMS stuff.
Cheers, Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch