dummy driver and maximum resolutions, config hacks via LD_PRELOAD, etc

Antoine Martin antoine at nagafix.co.uk
Wed Apr 6 03:51:26 PDT 2011


Hi,

As suggested on this list a while back, I am trying to replace Xvfb with
Xorg + the dummy driver.

1) I can't seem to make it use resolutions higher than 2048x2048, which
is a major showstopper for me:
Virtual height (2560) is too large for the hardware (max 2048)
Virtual width (3840) is too large for the hardware (max 2048)

This seems bogus to me. I've tried giving it more RAM, giving it a very
wide range of vsync and hsync values, adding modelines for these large
modes, etc. No go.

Wish-list: it would also be nice not to have to specify Modelines for
the "dummy monitor" since it should be able to handle things like
3840x2560 as long as enough RAM is allocated to it, right?
I had to add this one to get 2048x2048:
Modeline "2048x2048@10" 49.47 2048 2080 2264 2296 2048 2097 2101 2151
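
For reference, the relevant parts of the config I am testing with look
roughly like this (the VideoRam value and the sync ranges are just
values I tried, nothing scientific):

  Section "Device"
      Identifier "dummy_videocard"
      Driver     "dummy"
      VideoRam   192000          # in kB - plenty for 3840x2560 at 32bpp
  EndSection

  Section "Monitor"
      Identifier  "dummy_monitor"
      HorizSync   5.0 - 1000.0
      VertRefresh 5.0 - 200.0
      # the modeline I had to add by hand just to reach 2048x2048:
      Modeline "2048x2048@10" 49.47 2048 2080 2264 2296 2048 2097 2101 2151
  EndSection

  Section "Screen"
      Identifier   "dummy_screen"
      Device       "dummy_videocard"
      Monitor      "dummy_monitor"
      DefaultDepth 24
      SubSection "Display"
          Depth   24
          Modes   "2048x2048@10"
          Virtual 3840 2560      # this is what triggers the "too large" errors
      EndSubSection
  EndSection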

2) If I resize this dummy screen with randr, do I save any memory or CPU
usage during rendering? Are there any benefits at all?
It doesn't seem to have any effect on the process's memory usage. I
haven't measured CPU usage yet, but I assume there will be at least some
savings there. (The scenario that interests me is just one application
running in the top-left corner - does the unused space matter much?)
I may have dozens of dummy sessions, so savings would add up.
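
(For clarity, "resize with randr" here means something along these
lines, with :1 just standing in for one of the dummy displays:

  DISPLAY=:1 xrandr --fb 1280x1024

i.e. shrinking the framebuffer down to roughly the size of that one
application after startup.)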

3) Are there any ways of doing what the LD_PRELOAD hacks from Xdummy*
do (sketched below), but in a cleaner way? That is:
* avoid VT switching completely
* avoid input device probing (/dev/input/*)
* load config files from user-defined locations (not /etc/X11)
* write the log file to a user-defined location (not /var/log)
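
For context, as far as I can tell the Xdummy wrapper effectively ends up
running something along these lines (the paths and the display number
are just examples), with the preloaded shim stubbing out the VT and
/dev/input access so that no real devices get touched:

  LD_PRELOAD=./Xdummy.so Xorg :1 \
      -config /home/antoine/dummy/xorg.conf \
      -logfile /home/antoine/dummy/Xorg.1.log \
      -noreset -nolisten tcp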

I understand why some of those restrictions are in place, but maybe we
can have a /usr/bin/Xdummy which does this and *only* allows loading the
dummy module (to prevent abuse)? Xdummy does not need to be suid, since
it doesn't touch any real devices.
Or would that be too hard to implement? If not, can you give me any
pointers to get started?

4) Acceleration... Now this last bit really is a lot more far-fetched;
maybe I am just daydreaming.
Wouldn't it be possible to use a real graphics card for acceleration,
but without dedicating it to a single Xdummy/Xvfb instance?
What I am thinking is that I may have an under-used graphics card in a
system, or even a spare GPU (secondary card), and it would be nice to
somehow be able to use this processing power from the Xdummy instances. I
don't understand where the GEM/Gallium kernel vs. X server demarcation
line sits, so maybe the card is locked to a single X server and this is
never going to be possible.

Thanks
Antoine

*Xdummy:
http://www.karlrunge.com/x11vnc/Xdummy
