[Intel-gfx] [RFC i-g-t v3 5/5] Add support for hotplug testing with the Chamelium
Tomeu Vizoso
tomeu at tomeuvizoso.net
Thu Dec 8 07:51:17 UTC 2016
On 7 December 2016 at 20:56, Lyude Paul <lyude at redhat.com> wrote:
> On Wed, 2016-12-07 at 12:27 +0100, Tomeu Vizoso wrote:
>> On 1 December 2016 at 02:24, Lyude <lyude at redhat.com> wrote:
(big snip)
>> > + * By default, this file is expected to exist in ~/.igt_chamelium_rc.
>> > + * The directory for this can be overridden by setting the environment
>> > + * variable %CHAMELIUM_CONFIG_PATH.
>>
>> It would be a pity if, in time, we need to add configuration
>> parameters for other aspects of IGT and have to choose between
>> renaming this file and keeping two configuration files for the same
>> project. Why not call it just .igt_rc?
> Nice catch. While I'm at it, I'll add a patch to expose a function or
> two for getting a global GKeyFile that's shared across tests needing
> configuration. I'll also make sure connector groups are prefixed with
> something such as "Chamelium:", since we'll be sharing things.
Sounds great.
>> > +/**
>> > + * chamelium_async_hpd_pulse_finish:
>> > + *
>> > + * Waits for any asynchronous RPC started by
>> > + * #chamelium_async_hpd_pulse_start to complete, and then cleans up any
>> > + * leftover responses from the chamelium. If all of the RPC calls have
>> > + * already completed, this function returns immediately.
>>
>> This makes me wonder if it wouldn't have been better to go with
>> libsoup, as it uses a real event loop that we may need to use in tests
>> for other reasons.
>
> Fwiw, I wouldn't worry about this part too much. The only reason we
> use async calls here is so that we can use the mixed pulses RPC to get
> the chamelium to send delayed hotplug events by using the pulse width
> as the delay. I wouldn't think there would be any situations where the
> chamelium would fail to send a hotplug, since all that part requires is
> just asserting the hpd line. It could fail if the IO board stopped
> responding, but in that case everything else is going to fail too and
> we won't get the hotplug event we're expecting anyway.
>
> This is just my opinion though, if you can think of some use cases that
> this could seriously cause some issues I'll be happy to change it.
Fair enough. When this issue first came up, I tried to think of any use
cases in which having a proper event loop would be important, but I
couldn't come up with any back then and still can't, so I'm fine with
things as they are. Also, with the way you have the code structured, it
would be quite easy to switch to libsoup if needed in the future.
>> > +		/* Sleep so we don't accidentally cause an hpd storm */
>> > +		sleep(1);
>>
>> Wouldn't such a long sleep unnecessarily slow test runs down?
> Yes, but I don't see much of a way around it right now. Eventually I'd
> like to add something to i915's debugfs to allow disabling HPD storm
> detection (we'll need this as well if we want to eventually add HPD
> storm detection tests).
That's cool, but I was thinking that the sleep duration could be shorter.
>> > +
>> > +static void
>> > +test_display(data_t *data, struct chamelium_port *port)
>> > +{
>> > + igt_display_t display;
>> > + igt_output_t *output;
>> > + igt_plane_t *primary;
>> > + struct igt_fb fb;
>> > + drmModeRes *res;
>> > + drmModeModeInfo *mode;
>> > + drmModeConnector *connector;
>> > + uint32_t crtc_id;
>> > + int fb_id;
>> > +
>> > + reset_state(data, port);
>> > +
>> > + chamelium_plug(port->id);
>> > + wait_for_connector(data, port, DRM_MODE_CONNECTED);
>> > + igt_assert(res = drmModeGetResources(data->drm_fd));
>> > + kmstest_unset_all_crtcs(data->drm_fd, res);
>> > +
>> > + igt_display_init(&display, data->drm_fd);
>> > +
>> > + /* Find the output struct for this connector */
>> > + for_each_connected_output(&display, output) {
>> > + if (output->config.connector->connector_id ==
>> > + port->connector_id)
>> > + break;
>>
>> Are we that sure that we'll always find such an output? Could be quite
>> disconcerting if that stopped being true, even if by a bug.
> Not sure I follow here: that's what the wait_for_connector() call is
> there for. I'd expect that we could make the assumption we'll find a
> connector if we've managed to make it that far without failing.
My concern was rather that, as the code evolves, that assumption could
become false and the bug would manifest in a disconcerting way. So I
would just assert that the connector we got after the loop is really
the one we expected.
Thanks,
Tomeu