[PATCH] drm/amd/display: Allow faking displays as VRR capable.

Mario Kleiner mario.kleiner.de at gmail.com
Tue Apr 30 19:56:04 UTC 2019


On Tue, Apr 30, 2019 at 2:22 PM Kazlauskas, Nicholas
<Nicholas.Kazlauskas at amd.com> wrote:
>
> On 4/30/19 3:44 AM, Michel Dänzer wrote:
> >
> > On 2019-04-30 9:37 a.m., Mario Kleiner wrote:
> >> Allow any connected display to be marked as
> >> VRR capable. This is useful for testing the basics of
> >> VRR mode, e.g., scheduling and timestamping, BTR, and
> >> transition logic, on non-VRR capable displays, e.g.,
> >> to perform IGT test-suite kms_vrr test runs.
> >>
> >> This fake VRR display mode is enabled by setting the
> >> optional module parameter amdgpu.fakevrrdisplay=1.
> >>
> >> It will try to use VRR range info parsed from EDID on
> >> DisplayPort displays which have a compatible EDID,
> >> but not compatible DPCD caps for Adaptive Sync. E.g.,
> >> NVidia G-Sync compatible displays expose a proper EDID,
> >> but not proper DPCD caps.
> >>
> >> It will use a hard-coded VRR range of 30 Hz - 144 Hz on
> >> other displays without suitable EDID, e.g., standard
> >> DisplayPort, HDMI, DVI monitors.
> >>
> >> Signed-off-by: Mario Kleiner <mario.kleiner.de at gmail.com>
> >>
> >> [...]
> >>
> >>   struct amdgpu_mgpu_info mgpu_info = {
> >>        .mutex = __MUTEX_INITIALIZER(mgpu_info.mutex),
> >> @@ -665,6 +666,16 @@ MODULE_PARM_DESC(halt_if_hws_hang, "Halt if HWS hang is detected (0 = off (defau
> >>   MODULE_PARM_DESC(dcfeaturemask, "all stable DC features enabled (default))");
> >>   module_param_named(dcfeaturemask, amdgpu_dc_feature_mask, uint, 0444);
> >>
> >> +/**
> >> + * DOC: fakevrrdisplay (int)
> >> + * Override detection of VRR displays to mark any display as VRR capable, even
> >> + * if it is not. Useful for basic testing of VRR without the need to
> >> + * attach such a display, e.g., for igt tests.
> >> + * Setting 1 enables faking VRR. Default value, 0, does normal detection.
> >> + */
> >> +module_param_named(fakevrrdisplay, amdgpu_fake_vrr_display, int, 0644);
> >> +MODULE_PARM_DESC(fakevrrdisplay, "Detect any display as VRR capable (0 = off (default), 1 = on)");
> >
> > amdgpu has too many module parameters already; IMHO this kind of niche
> > use-case doesn't justify adding yet another one. For the vast majority
> > of users, this would just be another knob to break things, resulting in
> > support burden for us.
> >
> > How about e.g. making the vrr_capable property mutable, or adding
> > another property for this?
> >
> >
> > --
> > Earthling Michel Dänzer               |              https://www.amd.com
> > Libre software enthusiast             |             Mesa and X developer
> >
>
> Since vrr_capable is already an optional property I think making it
> mutable could potentially be an option. It would allow for userspace to
> be able to disable capability as well that way.

Yes, that would have been useful for at least my case. In my own
toolkit I will need to control VRR on/off on a run-by-run basis,
depending on what the users' experiment scripts want. So I'll add code
to directly manipulate the XAtom attached to my fullscreen windows and
override Mesa's choices. A bit of a hack, but it should hopefully work.
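
For reference, a rough sketch of that hack (assuming the per-window
"_VARIABLE_REFRESH" XAtom that current Mesa sets and the amdgpu DDX
reads; the window name here is just a placeholder):

```shell
# Force the VRR hint on a fullscreen window, overriding what the GL
# client requested. 32c = 32-bit cardinal; 1 = request VRR, 0 = off.
xprop -name "MyFullscreenWindow" -f _VARIABLE_REFRESH 32c \
      -set _VARIABLE_REFRESH 1
```

(Needs a running X session, of course, and breaks silently if the atom
name ever changes.)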

At least for my special niche, more easily accessible (== RandR output
props) info is always helpful. Another thing that would probably make
my case easier would be an optional property reporting the "vrr_active"
state back, so the toolkit can cheaply know, via a simple query and
without doubt, at any point in time whether VRR is active or not. I
need to use very different ways of scheduling swapbuffers and of
checking the results for correctness under VRR vs. non-VRR.

Or some vrr_range property, so the toolkit can know whether it is
operating the hw in the normal VRR range or in BTR mode, and adapt its
swapbuffers scheduling, e.g., to help the current VRR/BTR code make
the "right" decisions for stable timing.
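
As a toy illustration of why knowing the range would matter (purely my
own sketch, not what the kernel's BTR code actually does; the 30-144 Hz
defaults are hypothetical):

```python
# Toy sketch: pick a hardware refresh duration for a desired frame
# duration, repeating frames (BTR-style) when the content rate drops
# below the monitor's minimum refresh rate. Not the driver's algorithm.
def pick_refresh_interval(frame_ms, min_hz=30, max_hz=144):
    min_ms = 1000.0 / max_hz   # shortest refresh duration hw allows
    max_ms = 1000.0 / min_hz   # longest refresh duration hw allows
    if frame_ms < min_ms:
        # Content faster than max refresh: clamp to the max rate.
        return min_ms, 1
    if frame_ms <= max_ms:
        # Inside the VRR range: one refresh per frame, exact duration.
        return frame_ms, 1
    # Below min refresh: scan each frame out n times, so that the
    # per-refresh duration lands back inside the supported range.
    n = 2
    while frame_ms / n > max_ms:
        n += 1
    return frame_ms / n, n

# E.g., a 20 fps animation (50 ms frames) on a 30-144 Hz panel gets
# each frame scanned out twice, at 25 ms per refresh (40 Hz).
print(pick_refresh_interval(50.0))
```

A client that knows it is in the "repeat" regime schedules its
swapbuffers very differently from one inside the native range, which is
exactly why exposing the range (or at least BTR-active state) helps.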

Of course that makes userspace clients more dependent on current hw
implementation details, so I can see why it's probably not a popular
choice for a generic API. The best long-term solution is to have a
proper API for the client to just provide a target presentation
timestamp and leave the rest to the magic little elves inside the
machine.

>
> It's a pretty niche usecase though. However, as Michel said, it would
> probably just end up being another setting that allows users to break
> their own setup.
>
> Nicholas Kazlauskas

Ok, fair enough, thank you two for the feedback. I assumed users
wouldn't mess with this module parameter, given that it has zero
practical end-user value and can essentially only be used for cheap
IGT regression testing, but then one never knows if users think it is
some magic "get Freesync for free" switch and poke the button anyway.
I'll keep the patch locally for my own testing then.

I assume your team at AMD probably has enough FreeSync displays around
for some IGT test setup to do automated/regular regression testing?
There was quite a bit of talk about the regression testing topic at
the last XDC, but I haven't kept up to date on how this is progressing
outside of Intel.

I'll try to contribute some more tests to igt's kms_vrr test, maybe
porting some of the tests from my own toolkit, as far as they can be
sensibly ported without producing too many false positives.

-mario


More information about the dri-devel mailing list