[Intel-gfx] [PATCH v2 1/1] drm/i915: Fix VGA handling using stop_machine() or mmio

Dave Airlie airlied at gmail.com
Mon Oct 7 02:23:26 CEST 2013

On Tue, Oct 1, 2013 at 4:37 AM, Alex Williamson
<alex.williamson at redhat.com> wrote:
> On Mon, 2013-09-30 at 15:24 +0100, Chris Wilson wrote:
>> On Mon, Sep 30, 2013 at 05:08:31PM +0300, ville.syrjala at linux.intel.com wrote:
>> > From: Ville Syrjälä <ville.syrjala at linux.intel.com>
>> >
>> > We have several problems with our VGA handling:
>> > - We try to use the GMCH control VGA disable bit even though it may
>> >   be locked
>> > - If we manage to disable VGA through GMCH control, we're no longer
>> >   able to correctly disable the VGA plane
>> > - Taking part in the VGA arbitration is too expensive for X [1]
>> I'd like to emphasize that X disables DRI if it detects 2 VGA cards,
>> effectively breaking all machines with a discrete GPU, even if one of
>> them is not being used.
> Why does it do this?  It seems like DRI would make little or no use of
> VGA space.  Having more than one VGA card seems like a pretty common
> condition when integrated graphics are available.  We also seem to have
> quite some interest in assigning one or more of the cards to a virtual
> machine, so I worry we're headed the wrong way if X starts deciding not
> to release the VGA arbiter lock.  On a modern desktop what touches VGA
> space with a high enough frequency that we care about its performance?
> Thanks,

Because we don't know what DRI clients can do, and a lot of this was
constructed when DRI1 was still around. The idea was that sane GPUs
would remove themselves from arbitration by just switching off their
VGA spaces; who was to know the Intel hw guys would fuck this up,
though we probably should have guessed.

So the problem we have is that the VGA arbiter's current interface
causes interactions with the current X server that probably aren't
strictly required. By trying to do VGA arbitration properly we would
regress userspace, so we now need to design a solution that avoids the
regression but still lets us move forward.

I think I'm going to make an executive decision to merge Ville's
latest patch to avoid the regression. The other option is to revert
everything back to the status quo by reverting all the patches from
the past few months; if people are happier with that, then maybe we
should just do that for now, try to design our way out of it properly
by reengineering userspace, and plan to avoid the regression next
time.
