How to disable/limit pixmap cache in X
glynn at gclements.plus.com
Thu Sep 20 01:26:08 PDT 2007
Jim Kronebusch wrote:
> > I had assumed that Jim was under the impression that the server was
> > merely "caching" this data as an optimisation, and that reducing the
> > amount of memory used would merely degrade performance. That isn't the
> > case.
> Yes, I was under the impression that this was for optimisation. I understand now that
> the application may be using it as an optimisation but for X this is not the case and
> refusal to accept pixmaps could cause the application to crash (I hope what I just
> stated is accurate).
> But the fact still remains that an application can abuse pixmap
> storage (such as firefox) and without the server having some sort of limits in place
> this is able to crash the client.
It might help to refer to it as the "thin client" or "terminal". From
the X perspective, the thin client runs the X server, while the X
client runs on the "terminal server".
> So to me there should be some mechanism in place that
> does not allow this to happen. I was hoping there was an easy or at least only
> moderately difficult way to make this happen....apparently not. I was hoping there was
> a way to put a size limit specifically on pixmap storage and refuse any requests from an
> application beyond that point.
At present, there isn't any kind of configuration option for this. You
can query the resource usage of a client (xrestop, libXRes), but
cannot (directly) limit it.
> I had hoped that this would cause the application to
> simply move on or not request pixmap storage and not cause the application to crash.
> But since that seems to only be controlled on the application side rejection will
> ultimately cause the application to crash.
It's theoretically possible for the application to handle this, but it
isn't straightforward, so most applications won't try.
The main problem is that error notification is asynchronous. The
client tells the X server to create a pixmap, store image data into
the pixmap, draw the pixmap in a window, and do some other stuff.
Later, it gets back an error saying that pixmap creation failed, and
so did everything involving that pixmap.
The application could call XSync() after creating a pixmap, which will
wait until all responses (including errors) have been reported, but
this can reduce performance, particularly when using X over a network
(if you wait for a response after each request, you end up with
throughput which is inversely proportional to latency).
> This of course isn't ideal, but crashing the
> offending application is still way better than crashing the server and freezing the
> client and allowing a user to still continue working in other applications.
> To me this seems to be the only sensible way for the server to react. I wouldn't think
> there would ever be any circumstance where you would want to allow the
> client/application to crash the server.
Having the server crash is less than ideal, although it may be
unavoidable if the OS overcommits memory (and you don't have any
control over which process is killed when memory actually runs out).
Even without the overcommit issue, you can get into a situation where
you have used up every last byte of memory without the server
crashing, and every subsequent allocation from any client will fail.
If the client which fails is something important (e.g. the window
manager), this may not be much of an improvement over the server
crashing.
> Am I making any sense here? I'll admit I don't really know much of anything about how X
> works, and as a result I have no real knowledge on how this should be fixed. All I do
> know is this is a huge problem that seems to be rapidly getting worse and is causing
> sever instability in remote X usage such as Linux thin clients or in lower memory machines.
The problem is quite clear. There are some potential workarounds which
are feasible, e.g. a more intelligent allocation strategy (the current
one is basically to try to honour every allocation). That would mean
that Firefox's gluttony would only kill Firefox.
The more desirable behaviour, namely for applications to simply
degrade in performance when resources are scarce, is something which
has to be implemented in the application (or its toolkit).
Glynn Clements <glynn at gclements.plus.com>