dummy driver and maximum resolutions, config hacks via LD_PRELOAD, etc

Aaron Plattner aplattner at nvidia.com
Wed Apr 6 15:38:27 PDT 2011


On Wed, Apr 06, 2011 at 02:37:52PM -0700, Antoine Martin wrote:
> On 04/06/2011 09:13 PM, Adam Jackson wrote:
> > On 4/6/11 6:51 AM, Antoine Martin wrote:
> > 
> >> 1) I can't seem to make it use resolutions higher than 2048x2048 which
> >> is a major showstopper for me:
> >> Virtual height (2560) is too large for the hardware (max 2048)
> >> Virtual width (3840) is too large for the hardware (max 2048)
> >>
> >> Seems bogus to me; I've tried giving it more RAM, a very wide range of
> >> vsync and hsync values, adding modelines for these large modes, etc.
> >> No go.
> > 
> > It is bogus; the driver has an arbitrary limit.  Look for the call to
> > xf86ValidateModes in the source, and compare that to (for example)
> > what the vesa driver does.
> Here's a patch which replaces the hard-coded limits with named constants
> and increases them to more usable values (4096x4096). I've tested it on
> Fedora 14, and it allows me to allocate much bigger virtual screens.
> 
> diff --git a/src/dummy_driver.c b/src/dummy_driver.c
> index 804e41e..05450d5 100644
> --- a/src/dummy_driver.c
> +++ b/src/dummy_driver.c
> @@ -85,6 +85,9 @@ static Bool   dummyDriverFunc(ScrnInfoPtr pScrn, xorgDriverFuncOp op,
>  #define DUMMY_MINOR_VERSION PACKAGE_VERSION_MINOR
>  #define DUMMY_PATCHLEVEL PACKAGE_VERSION_PATCHLEVEL
> 
> +#define DUMMY_MAX_WIDTH 4096
> +#define DUMMY_MAX_HEIGHT 4096

4096 is low.  Modern GPUs go up to at least 16kx16k, and I think you can
get away with X screens at the protocol level up to 32kx32k, though I
vaguely recall there being some restriction against that.
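
For context, here is roughly where that limit lives in the driver.  This is
a sketch rather than the verbatim source: the ceiling is the maxPitch /
maxHeight pair passed to xf86ValidateModes() in the driver's PreInit, and
with the patch above the hard-coded 2048s become the new macros, so raising
the two #defines is all it takes to lift the limit further:

    /* Sketch only; argument meanings follow the xf86ValidateModes()
     * prototype, and the exact call in dummy_driver.c may differ in
     * detail. */
    i = xf86ValidateModes(pScrn, pScrn->monitor->Modes,
                          pScrn->display->modes, clockRanges,
                          NULL,                     /* linePitches */
                          256,                      /* minPitch */
                          DUMMY_MAX_WIDTH,          /* maxPitch, was 2048 */
                          8 * pScrn->bitsPerPixel,  /* pitchInc */
                          128,                      /* minHeight */
                          DUMMY_MAX_HEIGHT,         /* maxHeight, was 2048 */
                          pScrn->display->virtualX,
                          pScrn->display->virtualY,
                          pScrn->videoRam * 1024,   /* apertureSize */
                          LOOKUP_BEST_REFRESH);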

> 4) Acceleration... Now this last bit really is a lot more far-fetched;
> maybe I am just daydreaming.  Wouldn't it be possible to use a real
> graphics card for acceleration without dedicating it to a single
> Xdummy/Xvfb instance?  What I am thinking is that I may have an
> under-used graphics card in a system, or even a spare GPU (secondary
> card), and it would be nice to be able to use that processing power
> from Xdummy instances. I don't understand the GEM/Gallium
> kernel-vs-X-server demarcation line, so maybe the card is locked to a
> single X server and this is never going to be possible.

Not with the dummy driver, but real drivers can do that if they have the
functionality.  For example [shameless plug], you can use the
"UseDisplayDevice" "none" option with the NVIDIA driver:

ftp://download.nvidia.com/XFree86/Linux-x86/270.30/README/xconfigoptions.html#UseDisplayDevice
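
As a rough illustration (the identifier and BusID below are made up for the
example; the README above is the authoritative reference), the option goes
in the Device section of xorg.conf for the GPU you want to drive headlessly:

    Section "Device"
        Identifier "HeadlessGPU"             # hypothetical name
        Driver     "nvidia"
        BusID      "PCI:1:0:0"               # hypothetical secondary card
        Option     "UseDisplayDevice" "none" # render with no display attached
    EndSection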


