UDL device cannot get its own screen
Böszörményi Zoltán
zboszor at pr.hu
Wed Nov 13 18:08:41 UTC 2019
On 2019. 11. 13. 18:25, Ilia Mirkin wrote:
> On Wed, Nov 13, 2019 at 11:59 AM Böszörményi Zoltán <zboszor at pr.hu> wrote:
>>
>> On 2019. 11. 12. 17:41, Ilia Mirkin wrote:
>>> On Tue, Nov 12, 2019 at 9:23 AM Böszörményi Zoltán <zboszor at pr.hu> wrote:
>>>> But no, all GPU devices (now only one, the UDL device) have screen 0
>>>> (a.k.a. DISPLAY=:0.0) set when AutoBindGPU is true:
>>>>
>>>> [ 2444.576] xf86AutoConfigOutputDevices: xf86NumScreens 2 xf86NumGPUScreens 1
>>>> [ 2444.576] xf86AutoConfigOutputDevices: GPU #0 driver 'modesetting' 'modeset' scrnIndex
>>>> 256 origIndex 257 pScreen->myNum 256 confScreen->screennum 0
>>>> confScreen->device->identifier 'Intel0'
>>>> confScreen->device->screen 0 confScreen->device->myScreenSection->screennum 0
>>>> confScreen->device->myScreenSection->device->screen 0
>>>>
>>>> Somehow, Option "Device" should ensure that the UDL device is actually
>>>> treated as a framebuffer that can be rendered into (i.e. to be modeset(2)
>>>> instead of modeset(Gn)) and it should be woken up automatically.
>>>>
>>>> This is what AutoBindGPU is supposed to do, isn't it?
>>>>
>>>> But instead of assigning to screen 0, it should be assigned to whatever
>>>> screen number it is configured as.
>>>>
>>>> I know it's not a common use case nowadays, but I really want separate
>>>> fullscreen apps on their independent screens, including a standalone UDL
>>>> device, instead of having the latters as a Xinerama extension to some
>>>> other device.
>>>
>>> If you see a "G", that means it's being treated as a GPU device, which
>>> is *not* what you want if you want separate screens. You need to try
>>> to convince things to *not* set the devices up as GPU devices, but
>>> instead put each device (and each one of its heads, via ZaphodHeads)
>>> on a separate device, which in turn will have a separate screen.
>>
>> I created a merge request that finally made what I wanted possible.
>>
>> https://gitlab.freedesktop.org/xorg/xserver/merge_requests/334
>>
>> Now, regardless of whether I use the intel or the modesetting driver for
>> the Device sections driving the Intel heads, and regardless of whether
>> AutoBindGPU is set to true or false, the UDL device is correctly matched
>> via its Option "kmsdev" setting to the platform device's device path.
>>
>> This patch seems to be a slight layering violation, but since the
>> modesetting driver is built into the Xorg server sources, the patch
>> may get away with it.
>
> Have you looked at setting AutoAddGPU to false? AutoBindGPU is too
> late -- by that point you already have a GPU device; it only controls
> whether to bind it to the primary device (/screen/whatever). You need
> to not have a GPU in the first place.
Yes, I tried AutoAddGPU=false. Then the UDL device was not set up at all.
What I noticed in debugging Xorg via GDB is that the UDL device was
matched to the wrong platform device in xf86platformProbeDev.
xf86_platform_devices[0] == Intel, /dev/dri/card1, primary platform device
xf86_platform_devices[1] == UDL, /dev/dri/card0
devList[0] == "Intel0"
devList[1] == "Intel1"
devList[2] == "UDL"
devList[3] == "Intel2" (GPU device)
Since the device path was not matched and the PCI ID did not match
(after all, the UDL device is NOT PCI), this code was executed:
else {
    /* for non-seat0 servers assume first device is the master */
    if (ServerIsNotSeat0())
        break;

    if (xf86IsPrimaryPlatform(&xf86_platform_devices[j]))
        break;
}
So, probeSingleDevice() was called with xf86_platform_devices[0] and
devList[2], resulting in the UDL device being set up as a GPU device and
not as a framebuffer in its own right.
My MR modifies this so that an explicit Option "kmsdev" setting, if
present, is matched first. The final else branch is only executed in the
default case with no explicit configuration.
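To make the matching order concrete, here is a small self-contained C
sketch. It is not the xserver code and not the MR itself; the struct and
function names are invented for the illustration. It only mimics how an
explicit kmsdev path match would take precedence over the fall-back to the
primary platform device:

#include <stdio.h>
#include <string.h>

/* stand-in for xf86_platform_devices[] */
struct platform_dev {
    const char *name;
    const char *path;      /* /dev/dri/cardN */
    int primary;
};

/* stand-in for a devList[] entry (a Device section from xorg.conf) */
struct conf_dev {
    const char *identifier;
    const char *kmsdev;    /* Option "kmsdev", NULL if not set */
};

/* Return the index of the platform device matched to the config device. */
static int match(const struct conf_dev *conf,
                 const struct platform_dev *plat, int nplat,
                 int kmsdev_first)
{
    for (int j = 0; j < nplat; j++) {
        /* the MR's idea: honour an explicit Option "kmsdev" first */
        if (kmsdev_first && conf->kmsdev &&
            strcmp(conf->kmsdev, plat[j].path) == 0)
            return j;

        /* original behaviour: with nothing else matched, fall back
         * to the primary platform device */
        if (!kmsdev_first || !conf->kmsdev) {
            if (plat[j].primary)
                return j;
        }
    }
    return -1;
}

int main(void)
{
    const struct platform_dev plat[] = {
        { "Intel", "/dev/dri/card1", 1 },  /* primary platform device */
        { "UDL",   "/dev/dri/card0", 0 },
    };
    const struct conf_dev udl = { "UDL", "/dev/dri/card0" };

    printf("old matching: UDL section -> %s\n",
           plat[match(&udl, plat, 2, 0)].name);
    printf("new matching: UDL section -> %s\n",
           plat[match(&udl, plat, 2, 1)].name);
    return 0;
}

Compiled and run, the first line shows the UDL Device section landing on
the Intel device (the old fall-back), the second shows it matching
/dev/dri/card0.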
With this MR, the explicit configuration for the UDL device works,
regardless of the AutoBindGPU value.
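For reference, the kind of explicit configuration I mean looks roughly
like this. The Device identifiers match the devList entries from the log
above; the BusID, the ZaphodHeads output names and the screen layout are
placeholders, so treat it as a sketch rather than a verified xorg.conf:

Section "Device"
    Identifier "Intel0"
    Driver     "modesetting"          # or "intel"
    BusID      "PCI:0:2:0"            # placeholder, use your own
    Option     "ZaphodHeads" "HDMI-1" # placeholder output name
EndSection

Section "Device"
    Identifier "Intel1"
    Driver     "modesetting"
    BusID      "PCI:0:2:0"
    Option     "ZaphodHeads" "HDMI-2" # placeholder output name
EndSection

Section "Device"
    Identifier "UDL"
    Driver     "modesetting"
    Option     "kmsdev" "/dev/dri/card0"  # the UDL platform device
EndSection

Section "Screen"
    Identifier "Screen-Intel0"
    Device     "Intel0"
EndSection

Section "Screen"
    Identifier "Screen-Intel1"
    Device     "Intel1"
EndSection

Section "Screen"
    Identifier "Screen-UDL"
    Device     "UDL"
EndSection

Section "ServerLayout"
    Identifier "Layout0"
    Screen 0 "Screen-Intel0"
    Screen 1 "Screen-Intel1" RightOf "Screen-Intel0"
    Screen 2 "Screen-UDL"    RightOf "Screen-Intel1"
    Option "Xinerama" "off"
EndSection

With Xinerama off, each Screen stays an independent X screen (:0.0, :0.1,
:0.2), which is what I want for the separate fullscreen applications.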
Best regards,
Zoltán Böszörményi