Question about the future of Xorg

Carsten Haitzler raster at rasterman.com
Sat Jun 14 10:01:39 UTC 2025


On Fri, 13 Jun 2025 20:34:59 -0400 (EDT) Vladimir Dergachev
<volodya at mindspring.com> said:

> 
> 
> On Sat, 14 Jun 2025, Carsten Haitzler wrote:
> 
> >>
> >> This is a Dell, it has an integrated Intel GPU and an NVidia one. I
> >> usually don't use NVidia GPU - too much heat and no visible benefit.
> >>
> >> The NVidia chip is Geforce MX 130, 2 GB RAM.
> >
> > what intel chip/gpu?
> 
> Intel Core i7-8550U

not that old really. my old laptop has that same cpu. no dedicated gpu though.

> >>
> >> I found that restarting kwin and restarting plasmashell helps, and also
> >> occasionally I kill firefox and restart it. The latter is a nuisance,
> >> because while it does try to restore windows and tabs it does not restore
> >> all of them.
> >
> > this smells of some kind of leak? where? ... dunno.
> 
> Could be a leak, could be entropy (like fragmentation in memory).

it could be... though i don't think it'd do what you describe.

> >> Btw, if you have the same problem there is a setting in about:config
> >> that lets you increase timeout for reaching website. When you restart
> >> firefox and it tries to open 300 windows with 10-15 tabs each, the urls
> >> will time out too early in the default configuration. Changing that setting
> >> fixes it (mostly).
> >
> > i never restore tabs... when i close my browser.. i'm done. :)
> 
> Except that nowadays you cannot easily run more than one browser instance.

i don't - i keep my browser to just 1 window - at the same position and size
(not even maximized horizontally - just vertically so it's like a paper page
not some extra-wide thing). i close tabs i don't need - i can always go there
again later if i ever need to.

> Somehow there is a drive to turn every app into an operating system. 
> Firefox has "about:processes" to let you find out what each window and tab 
> are doing because it all gets lumped irregularly by top.

well tbh that is pretty much a browser thing - but in that area you're right.

> >> What's your screen resolution? Mine is 3840x2160, so a single full screen
> >> buffer should be 33MB. A few hundred buffers and you would be out of RAM
> >> on discrete GPU (and way earlier on my NVidia chip).
> >
> > 2560x1440. i do have 16gb ram on the card - but ... how many windows do you
> > have normally around... and are they all "maximized" ? as it's a window that
> > consumes a buffer.
> 
> Firefox is usually maximized. I think right now I have around 120 Firefox 
> windows.

and that is where i go "wtf? why?". that's nuts. i would say - looking at most
people's screens and workflows - you have a totally out-there workflow there.
it's not common. not with that many windows.

> >>> what's your working style? put 50+ windows on a single desktop and
> >>> "alt+tab" between them?
> >>
> >> I got 8 virtual screens, and a bunch of windows on each of them. Most are
> >> firefox.
> >
> > and how many ffox windows?
> >
> >> What happens is that I work on something and it usually involves 10-20
> >> windows, but then I have to pause or wait for one reason or another and
> >> I switch to something else.
> >
> > 10-20 is not too much. and at your fullscreen 33m per window that's
> > ~300-700m - so not a problem for memory usage. even if you run out of vram
> > the gpu can migrate some buffers/textures to system ram and map them over
> > the pcie bus as render targets. its a bit slower. it might not migrate and
> > just alloc new buffers there never migrating lesser used ones off. that'd
> > be a "poor caching algorithm" :)
> 
> It's 10-20 times the number of different things I work on. So it adds up.

i mean i might have 1 virtual desktop with maybe 20 terminals - when coding and
what not. but that's it ... i also don't maximize them - i lay them out so they
are like tiled side by side and split vertically in a kind of step-like grid
with some taller, some shorter but all 80 col wide. no overlap so i can see a
lot of context.

but my point is - even where i have a lot of windows, they don't cover a lot of
SPACE. i've observed others and they mostly live life with maximized windows
and switch - but they often only have maybe 2-5 - maaaybe 10 at most. it's
rare to have more, and then only temporarily.

my point is - your usage pattern is "rare". it's also going to be the most
memory hungry one in a composited world - so i'd suggest upgrading to a card
with a lot more ram. :)

remember compositing is also pretty much a compute vs memory tradeoff. if you
have a buffer of a window and always have it you can avoid redraws every time
it's shown/exposed. you just use the data you already have and the cost of just
reading that data is almost always a lot less than the cost of re-computing it.
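
just to put numbers on that compute-vs-memory point - a quick python sketch of
the buffer arithmetic (assuming one uncompressed 4-bytes-per-pixel argb buffer
per window; purely illustrative, not how any particular compositor allocates):

```python
def buffer_bytes(width, height, bytes_per_pixel=4):
    """Memory for one uncompressed ARGB window buffer."""
    return width * height * bytes_per_pixel

MB = 1000 * 1000
# one 3840x2160 fullscreen buffer: ~33 MB, as quoted above.
one = buffer_bytes(3840, 2160)
print(one / MB)          # ~33.2

# ~120 maximized 4k windows: roughly 4 GB of window buffers alone.
print(120 * one / MB)
```

which is why hundreds of maximized 4k windows eat vram fast, while 10-20 of
them are a non-issue.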

> 18 terminals are too small a test. Does it work if you open 1000 ?

well i couldn't in my standard way - i'd run out of x client fd's - the xserver
limits x client count to 128... :) i'd have to enable single process mode in my
terminal to keep it a single client.

> It makes sense that if you expect users to have no problem handling 100 
> windows, you need to test with at least an order of magnitude more.

well as above... i know there is a limit to client count... and having seen
1000's of users' desktops and what not over the years - your 100's-of-windows
workflow is very rare.

> > ??? the default for consumer gpu's is 16g these days. 8g is a low end "cut
> > price" gpu. the latest gen of gpu's is now more pushing towards 24/32g.
> 
> They are all "cut price" right now - you cannot buy 24/32gb, at least in 
> stores near me. The companies do this on purpose for market segmentation.

the higher end - nv's rtx 5090 is at 32gb now. as i said - pushing there. the
mid to lower tiers like the 5070 are 12gb, or go back a gen to the 4070 - same
thing. the rx 9070 is mid-to-high with 16gb; the lower 9060 is also 16gb unless
it's the absolute lowest-end variant, which is 8gb. so 3/4 of the current gen
amd cards are 16gb.

16gb is kind of the default now, pretty much - until you get to intel's
offerings, and those skew to the low end: 12gb, 10gb and 8gb... but they seem
mostly focused on providing "value" solutions there.

> And on a notebook the RAM and bandwidth are even smaller.

but bandwidth here shouldn't be the issue - you won't be rendering 100's of
windows AT ONCE. you will only see a small selection of them on screen at any
time, so only a small subset needs to be read for a re-composite every frame.

this is why i am wondering if there's "something not right?". like is kde's
pager or something rendering every window on every desktop every time you
switch focus - with no elimination of obscured windows etc.? and the pager
might of course use the original full buffer and scale it down every time it
renders... ?

> Also, right now Microsoft is very busy alienating a lot of people with 
> computers without TPM that cannot upgrade to new Windows version.
> 
> Those people are happily installing Linux and we should not impose 
> requirements of more than 8GB video RAM just to open some webpages.

these people are not going to have 100's of windows :) your workflow is unusual
for sure.

> >>> if its a 1-off "screenshot then display a copy of it and just scale that
> >>> up" then there are wayland protocols for that - but the idea is that
> >>> screenshotting protocol access will be limited and a compositor may do a
> >>> very android/ios thing of ask you to grant permission first.
> >>
> >> I hate the permission stuff on android. The worst is that they've taken to
> >> removing permissions from apps when you don't use them. So you have some
> >
> > so you're happy with rogue games you run screenshotting your browser with
> > banking details and sending it back to home? :)
> 
> This problem only arises on Android and IOS because they are designed for 
> closed source apps and for controlling the user.
> 
> On Linux there is no such problem as long as you use software you can 
> examine.

the problem is 99.9999% of people don't have time to examine it and never will.
that includes geeks and developers. i certainly have no time to do that. i will
not be re-auditing the source for every app i use every update. i won't even do
it once. all the people windows is alienating, who then try linux, certainly
won't be doing it either.

and so i assume you never play games then - or well, not on your pc. you will
never get source for these, so sure - you can live your life without games, but
most people won't - they want some entertainment. they may use a dedicated
console. they may use a pc. but it'd be good to know that the thing i'm
entertaining myself with - which, once i've completed its quests and storyline,
is never played again - is not able to do bad things, because my display system
is designed to not allow it.

> On Android you could improve things immeasurably if open source apps were 
> installed with complete user access to app directory (to check which 
> binary actually shipped) and no permission restrictions.

i'm not going to go down this rabbit hole. :) wayland has a security model that
does not go trusting everything and everyone by design. in this way it is
right. spot on. a compositor could just not care and allow everything to
everyone but the model allows for these things to be optional features and they
may not work or may be denied by policy, and apps have to live in a world where
they can't DEPEND on these existing like they can in x.

> > that's the point of this. the point is that the display system should stop
> > being a leak of info/security. it cant force you to sandbox apps... but it
> > can STOP being the problem that makes sandboxing ineffective.
> 
> I would actually argue that X is very secure, and gotten more secure over 
> the years.

i absolutely would not. it has not improved at all and it's not secure. any x
client can listen in on your typing whenever it likes - listen to your
passwords being entered. clients can also send fake input to anything,
anywhere... i.e. just take over control of any app you have and maybe enter
keystrokes of "rm -rf ~/" and wipe all your stuff. it's a free-for-all in x,
assuming a model where there is no malicious actor at all.

> Why? Because before 2000 you often had multiple users on the same system. 
> And now I have several systems and I am the only user. They are on the 
> same network that I control, and there is no way to access those sessions.
> 
> The only potential problem comes from Firefox, and is really mostly due to 
> javascript. And, as I see, Firefox developers (and authors of uBlock 
> and noScript) are on top of it.

and that is your malicious actor vector - or one of them. other people use
things like vs code (binaries) or, as above, games etc. - but suffice to say
the population at large using a desktop uses a lot of apps they can't fully
trust. it doesn't matter what YOU do - a display system isn't designed for YOU.
it's designed for "most people".

> >> app that you use once a month and then you have to debug why it does not
> >> work. Especially sucks if you need to take a quick snapshot with a thermal
> >> camera or a similar tool.
> >
> > and this is the current problem area - how to grant permission AND keep it
> > granted persistently.
> 
> It is a very simple problem - you have an xmag/kmag like app. You examine 
> code. You see it does not send screenshots to some random IP or random 
> country. You install it and use with no restrictions.

that is not simple. 99.999% of people couldn't read the first line of code and
know what it does. you don't design a display system for the 0.001% who can. it
just so happens historically no one cared and there was little to no ability to
isolate processes. hell, ye olde windows and macos didn't even have memory
protection between processes - they could stomp all over each other at the
drop of a hat. things have evolved.

> > a screenshot is just 1 frame of a video... that is how zoom, teams and every
> > video conf app works now today. they keep taking screenshots repeatedly and
> > quickly. that's how they can "share my screen" over that video conference...
> > they grab these frames then encode them into a video stream - on the fly.
> > they do that in x11 today...
> 
> Yes, but ideally you could do it in such a way as to guarantee a frame 
> every 1/N seconds and also guarantee that a frame is fully rendered to 
> avoid tearing. This is something that I think X cannot do right now.

you don't get that guarantee in x either - indeed - yet you seem happy and it
works. in fact the wayland protocols for this are much better designed than
what is being done in x now "the slow hard way". pipewire support over dbus is
also able to do better in some ways (i just dislike the dbus vs wl protocol
choices for something that imho should be done at the wl protocol level).

> > much you want to scale THAT session may vary from target to target it is
> > connecting to... and it should remember such scale settings machine by
> > machine you register/connect to. the compositor has no clue what is inside
> > that app's window. in wayland or x11. it's the app's business.
> 
> Not really - I just run x11vnc on the remote and connect to it. I don't 
> start a new session, and I don't change fonts. Very handy, both to help 
> someone else and to use your desktop when you are away.

not the login session it's looking at but the "viewing session". the VIEWER
should just scale up pixels then when rendering rather than just dumbly drawing
them 1:1 and nothing else. like gimp or inkscape can zoom in and out of
content... your vnc viewer should offer it - specifically to address viewing
vastly different content from different systems and machines. you may vnc to
some machine with an 8k 15" screen .. and when trying to use that on your
1920x1080 desktop monitor it's just some horrible "scroll all day" fest with
fonts 8x bigger than you need. in this case it should be able to scale down to
fit nicely. it also should remember the scale factor you last used for that
viewing session to that vnc server... either way i see it as a hole in the vnc
viewer's feature list. it's not something to solve in a desktop or compositor.

> best
> 
> Vladimir Dergachev
> 


-- 
------------- Codito, ergo sum - "I code, therefore I am" --------------
Carsten Haitzler - raster at rasterman.com


