RandR 1.2 feedback

Alex Deucher alexdeucher at gmail.com
Thu Nov 30 06:19:04 PST 2006

On 11/29/06, Andy Ritger <aritger at nvidia.com> wrote:
> Thanks for the feedback, Keith.  Sorry for the slow response.  Comments below:
> On Fri, 24 Nov 2006, Keith Packard wrote:
> > On Wed, 2006-11-22 at 17:38 -0800, Andy Ritger wrote:
> >
> >> - It would be nice if the specification tracked the list of modes
> >>    per output.  Rather than have a single list of modes for the X screen,
> >>    and then have each output reference whichever modes are valid for that
> >>    output, it may make more sense to just store the modes per output.
> >
> > Right, the reason I didn't do it this way is to support 'clone' mode
> > where a single crtc can drive multiple outputs. Having per-output mode
> > lists would make this problematic.
> >
> >>    One advantage is that the user can request a mode named "auto-select",
> >>    and each output could have a different mode with that name.
> >
> > Is this not well supported with the existing 'preferred' mode stuff? If
> > not, could we support it with an output property which labeled the
> > 'auto-select' mode?
> >
> >>    I suppose that if you ever wanted to associate additional properties
> >>    with modes, but if those properties should be different per output,
> >>    then storing the modes per output would make this easier.
> >
> > No, I've actually pared the modes down to the basics so I could support
> > clone mode as described above.
> >
> >>    The downside of per-output modelists is that you end up with some
> >>    duplication of modes that are valid for each output.  I wouldn't
> >>    consider that a big deal, though.
> >
> > Except for clone mode, I would agree. The unfortunate thing is that I
> > have many older chipsets which have only a single CRTC and multiple
> > outputs; without clone mode, I can drive only one output at a time.
> That's a good point; clone mode does make per-output mode lists
> infeasible.  Perhaps the modes should be per-CRTC instead, but that's
> not a big deal.  The per-screen mode list is workable, so I'll leave
> that topic alone.
> >>    One major downside is that this doesn't give an implementation a
> >>    good opportunity to perform validation of the complete system.
> >>    The CRTC vs output distinction expose some specific hardware
> >>    capabilities/limitations to the client.  That's fine, except that
> >>    there may be other restrictions.  For example, things like video memory
> >>    bandwidth or TMDS Links may limit the combinations of modes that you
> >>    can use on different outputs at the same time.
> >
> > I'd like to get a better idea of actual limits here; looking over the
> > Intel docs, I was unable to discover any combination which could be
> > plugged together which wouldn't work. But, I only have two crtcs.
> I suppose the limits would be anything that is a shared resource between
> the CRTCs.  Memory bandwidth is the only good example that comes to mind:
> if the memory subsystem of the graphics hardware has constraints such
> that it cannot feed all CRTCs simultaneously when all CRTCs run at
> their maximum clks, then that's a limit that cannot be validated just
> by looking at the mode on one output by itself.  Granted, that's not a
> likely scenario.  I'm more concerned about the system constraints that
> we can't foresee today.
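The shared-resource concern above is easy to illustrate numerically. The sketch below uses made-up numbers (a 1000 MB/s scanout budget, ~154 MHz pixel clocks for 1920x1200@60) purely to show how two modes can each validate individually yet exceed a shared limit together:

```python
# Hypothetical sketch: per-mode validation can pass on each output
# individually while the combined configuration exceeds a shared
# resource such as memory bandwidth.  All numbers are illustrative.

def crtc_bandwidth(pixel_clock_khz, bytes_per_pixel):
    """Approximate memory bandwidth one CRTC's scanout consumes, in MB/s."""
    return pixel_clock_khz * bytes_per_pixel / 1000.0

def config_fits(crtcs, budget_mbs):
    """True if the summed scanout bandwidth of all CRTCs fits the budget."""
    return sum(crtc_bandwidth(clk, bpp) for clk, bpp in crtcs) <= budget_mbs

# Two 1920x1200@60 heads at 32bpp: ~154 MHz pixel clock, 4 bytes/pixel.
one_head = [(154_000, 4)]
two_heads = [(154_000, 4), (154_000, 4)]

print(config_fits(one_head, 1000.0))   # one head fits a 1000 MB/s budget
print(config_fits(two_heads, 1000.0))  # two heads together do not
```

This is exactly the kind of check that can only be performed once the whole configuration is known, not one CRTC at a time.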
> > And, encapsulating the configuration in a container data structure
> > doesn't really solve the problem; you still have no way of describing
> > "why" a configuration can't be used.
> Yes, the "why" feedback is missing, but providing the feedback seems
> like a separate issue from identifying that the currently requested
> configuration cannot be fulfilled.
> Is it better for the client to
>      - resize the root window
>      - set a mode on output A
>      - set a mode on output B
> but have the output B mode fail because there is some conflict with what
> is already setup on output A, or have the client say, "I'd like to
> register a configuration that has:
>      - a particular sized root window,
>      - a particular mode on output A, and
>      - a particular mode on output B"
> and have the registering of that configuration fail?
> > Anything you can set atomically can be set incrementally.
> Oh, sorry, I was suggesting to replace the incremental assignments by a
> single atomic assignment, giving the implementation a central place to
> perform its validation.  i.e., remove the individual requests:
>      RRSetScreenSize
>      RRSetCrtcConfig
> and replace them with new requests, roughly like this:
>      RRAddScreenConfiguration: contains everything from SetScreenSize
>          and SetCrtcConfig for each CRTC; implementation validates
>          everything, and if the configuration is valid, puts it in a list;
>          this would not actually make the requested configuration active
>      RRSwitchScreenConfiguration: switches to one of the valid
>          configurations, making it active
>      RRDeleteScreenConfiguration: deletes an existing configuration;
>          cannot be the one currently in use
> >>    Exposing CRTC vs output in the spec seems OK, but seems insufficient
> >>    to reflect all hardware restrictions; and I think hardware changes
> >>    too rapidly to realistically reflect all the various limitations that
> >>    hardware might have.
> >
> > Yup. I punted and let the driver just say "sorry, can't do that Dave".
> >
> >>    This is nice because adding a new MetaMode gives the implementation
> >>    a central place to perform any needed validation, and then you have
> >>    a higher likelihood of later being able to fulfill the request to
> >>    switch to that MetaMode.
> >
> > Sorry, I can't see how this makes it 'more likely'. The 'usual' way to
> > set a configuration is to turn everything off, set the screen size and
> > then add each crtc/output combination. As these appear incrementally,
> > each partial setup may require dramatic re-configuration of the
> > hardware, but nothing more complicated than your metamode notions.
> My concern is that each incremental step requires reconfiguration and
> revalidation, such that each incremental step is a potential point
> of failure.  Whereas, if the complete desired configuration had been
> specified earlier with an RRAddScreenConfiguration-like API, then
> (most? all?) of the validation would already have been done before we
> start applying any parts of the new configuration.
> If we knew the complete configuration before starting the adventure of
> applying the new configuration, there should be fewer potential points
> of failure once we start applying the new config.
> >>  A MetaMode is also a nice abstraction for
> >>    backwards compatibility with RandR 1.1 and XF86VidMode -- they just
> >>    see the MetaMode as a single mode.  Lastly, keeping multiple complete
> >>    screen configurations around makes it easy to return to the previous
> >>    config, if the user wants to revert his changes (or you present an
> >>    "are these new settings OK?" dialog with a 10 second timeout).
> >
> > What we could add is some 'save/commit/revert' mechanism so that partial
> > reconfigurations interrupted by client termination wouldn't break the
> > user environment. I'm not that concerned by this though; there are lots
> > of bits of our environment which depend on well-behaved applications.
> Yeah, a 'save/commit/revert' mechanism is probably a lot of work just
> to handle the abnormal client termination case.
> >> - Other minor stuff:
> >>
> >>      - Should DPI be queriable per-output; I know the core X protocol
> >>        provides a single WidthMM, HeightMM per X screen, but it might be
> >>        useful to allow aware applications to query the DPI with per-output
> >>        granularity.
> >
> > You'll note that outputs have a mm_width/mm_height value.
> That sounds perfect.  However, when I went through the spec, I didn't
> see mm_width/mm_height values in the outputs.  I see mm width/height in:
>      RRSetScreenSize
>      RRScreenChangeNotify
> am I missing where it is also specified for outputs?
> Also, does mm_width/mm_height make sense in the MODEINFO?  Isn't the
> physical size a function of mode+output?  The same mode could have
> drastically different physical size on different outputs.
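For reference, per-output DPI follows directly from the active mode's pixel dimensions and the output's physical size (25.4 mm per inch). The panel sizes below are hypothetical, chosen only to show why the physical size belongs on the output rather than on the mode:

```python
# Per-output DPI from the active mode's pixels and the output's reported
# physical size in millimetres.  Hypothetical helper, not protocol.
MM_PER_INCH = 25.4

def output_dpi(pixels_w, pixels_h, mm_width, mm_height):
    """Return (horizontal, vertical) DPI for a mode on a given output."""
    return (pixels_w * MM_PER_INCH / mm_width,
            pixels_h * MM_PER_INCH / mm_height)

# The same 1280x1024 mode on a large desktop panel vs a small laptop
# panel yields very different DPI -- physical size is a property of the
# output, not of the mode.
print(output_dpi(1280, 1024, 376, 301))  # large panel, roughly 86 DPI
print(output_dpi(1280, 1024, 245, 196))  # small panel, roughly 133 DPI
```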
> >>      - The spec should probably be clear that just because the width/height
> >>        in a RRSetScreenSize request is within the min/max range reported by
> >>        RRGetScreenSizeRange, we're not guaranteed to be able to fulfill
> >>        that.  Video memory constraints, or other hardware constraints may
> >>        come into play that cannot be reflected completely by the min/max
> >>        values reported by RRGetScreenSizeRange (I assume that range is
> >>        intended to report the maximum renderable sizes?)
> >
> > No, all widths/heights are always configurable -- it's just clipping
> > after all. In the XAA-implementation, we don't resize the frame buffer
> > at all, we just change the root window clip list (yes, this sucks).
> OK.
> >>      - I believe TwinView/MergedFB is different than Xinerama, and
> >>        that they solve slightly different problems.  RandR 1.2 solves
> >>        the problem of querying/configuring outputs on an X screen on a
> >>        single GPU.  But having a Xinerama X screen across multiple GPUs
> >>        is slightly different -- if each GPU stores only a portion of the
> >>        entire Xinerama X screen, the outputs connected to that GPU are
> >>        limited in which portions of the Xinerama X screen they can display.
> >
> > You're conflating the protocol extension Xinerama with the DIX-level
> > code which can be used to support that extension (and uses the same
> > name). Yes, we still need the DIX-level code to span GPUs, and, no, I
> > haven't bothered to make that all work together nicely yet.
> >
> > I don't like the DIX-level code as it is horribly inefficient, but
> > without major restructuring of the DIX/DDX interface, we can't do a lot
> > better at this point.
> >
> > I'm afraid it's a bit of 'someone else's problem' at this point as I
> > can't plug multiple Intel graphics cards into a machine. If you're
> > interested in working out how the DIX-level Xinerama code could be used
> > with RandR 1.2, I'd love to help out.
> OK.  I'll leave this topic alone for the short term, since it seems it
> will be a contentious issue.  Definitely not something for RandR 1.2;
> I was just trying to identify if the 1.2 spec would limit multi-GPU X
> screens in a future version of the RandR spec.
> X screens spanning GPUs is a hot topic for NVIDIA customers, so I'll
> try to pick this up.  Maybe I can organize an XDevConf '07 talk on this.

Does Nvidia have any relatively hw-independent code for handling this,
or even just a spec describing how you handle it, that you could
release to the community to get the ball rolling on this?  I'd like to
see progress on this front as well.


More information about the xorg mailing list