[Mesa-dev] [PATCH] gallium/util: don't let children of fork & exec inherit our thread affinity

Mathias Fröhlich Mathias.Froehlich at gmx.net
Thu Nov 1 11:07:39 UTC 2018


Hi,

On Thursday, 1 November 2018 11:04:27 CET Pekka Paalanen wrote:
> On Wed, 31 Oct 2018 16:41:47 -0400
> Marek Olšák <maraeo at gmail.com> wrote:
> 
> > On Wed, Oct 31, 2018 at 11:26 AM Michel Dänzer <michel at daenzer.net> wrote:
> > 
> > > On 2018-10-31 12:39 a.m., Gustaw Smolarczyk wrote:  
> > > > Wed, 31 Oct 2018 at 00:23 Marek Olšák <maraeo at gmail.com> wrote:
> 
> ...
> 
> > > >> As far as we know, it hurts *only* Blender.  
> > >
> > > Why are you still saying that in the light of
> > > https://bugs.freedesktop.org/108330 ?
> > >  
> > 
> > Another candidate for a drirc setting.
> 
> Isn't GTK 4 going to use the GPU (OpenGL or Vulkan) for the UI
> rendering?
> 
> If GTK 4 uses OpenGL, would you not need to blacklist all GTK 4 apps,
> because there would be a high chance that GTK initializes OpenGL before
> the app creates any worker threads? And these would be all kinds of GTK
> apps that divide work over multiple cores, not limited to what one
> would call GL apps or games.
> 
> I don't know GTK, so I hope someone corrects or verifies this.

You have to expect Qt to use OpenGL for GUI rendering as well. It is not
the only rendering backend, but it is the one Qt is heading towards in most cases.

IMO the side effects of binding the thread that makes the context
current to a CPU are too severe.
As an application developer I would not expect that to happen.

You can easily bind all *internal* threads that are never visible to any
calling code to whatever you want, but not a thread that is under the
control of the application.
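
To make the distinction concrete, here is a minimal sketch of what I would
consider acceptable, assuming Linux and the GNU pthread_setaffinity_np
extension (the function names are made up for illustration, this is not
Mesa code):

/* Pin a driver-internal worker thread to a CPU, but never touch the
 * thread that called MakeCurrent -- that one belongs to the application.
 * Sketch only; error handling omitted. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

static void *worker_main(void *arg)
{
   /* ... driver-internal work ... */
   return NULL;
}

static pthread_t spawn_pinned_worker(int cpu)
{
   pthread_t worker;
   cpu_set_t set;

   pthread_create(&worker, NULL, worker_main, NULL);

   /* This thread is ours alone, so restricting its affinity is fine. */
   CPU_ZERO(&set);
   CPU_SET(cpu, &set);
   pthread_setaffinity_np(worker, sizeof(set), &set);

   /* What I object to is the equivalent of
    * pthread_setaffinity_np(pthread_self(), ...) on the caller's thread. */
   return worker;
}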

Only slightly related, but related:
Linux does some kind of NUMA-aware scheduling, i.e. it tries to keep a
task on the NUMA node to which most of the physical memory referenced by
the task is directly attached, or at least something along those lines.
So, one question: does amdgpu try to allocate CPU memory on the node that
has the GPU's PCIe lanes directly attached? As a side effect of that, you
would probably observe NUMA scheduling pulling the task back to the node
where most of its memory resides, which is, by designed coincidence,
the same node.
Is something like that exploitable for the problem to be solved?
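
Purely as an illustration of the idea (I do not know whether amdgpu does
anything like this): the NUMA node a PCI device hangs off is exported in
sysfs, and libnuma can place CPU-side allocations on it. A rough sketch,
with a placeholder PCI address, linked against -lnuma:

/* Read the NUMA node the GPU's PCIe lanes are attached to and
 * allocate CPU-side memory on that node. Sketch only. */
#include <stdio.h>
#include <stdlib.h>
#include <numa.h>

static int gpu_numa_node(const char *pci_addr)
{
   char path[256];
   int node = -1;
   FILE *f;

   snprintf(path, sizeof(path),
            "/sys/bus/pci/devices/%s/numa_node", pci_addr);
   f = fopen(path, "r");
   if (f) {
      if (fscanf(f, "%d", &node) != 1)
         node = -1;
      fclose(f);
   }
   return node; /* -1 on non-NUMA systems or if unknown */
}

int main(void)
{
   size_t size = 1 << 20;                      /* 1 MiB of CPU-side memory */
   int node = gpu_numa_node("0000:01:00.0");   /* placeholder address */
   int use_numa = node >= 0 && numa_available() != -1;
   void *buf = use_numa ? numa_alloc_onnode(size, node) : malloc(size);

   /* ... fill the buffer, hand it to the driver, ... */

   if (use_numa)
      numa_free(buf, size);
   else
      free(buf);
   return 0;
}

If the driver allocated its CPU-side buffers like that, automatic NUMA
balancing should tend to keep the task on the GPU's node without anybody
touching the application thread's affinity mask.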

There are sched_domains in the kernel that can tell something about
the topology of the CPUs in the system. Maybe there is a way to assign
a 'preferred sched_domain' to a task internally? ... I have not looked into the
kernel-side APIs for some time. Maybe somebody from the scheduler folks
can easily provide something like that?
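
I am not aware of an existing interface for that. The raw information such
a hint would be based on is already exported to userspace, though; e.g.
which CPUs share a node with the device (placeholder PCI address again):

/* Print the CPUs that are local to a PCI device, i.e. roughly the set
 * a 'preferred domain' hint would contain. Sketch only. */
#include <stdio.h>

int main(void)
{
   char list[256];
   FILE *f = fopen("/sys/bus/pci/devices/0000:01:00.0/local_cpulist", "r");

   if (f && fgets(list, sizeof(list), f))
      printf("CPUs local to the GPU: %s", list);
   if (f)
      fclose(f);
   return 0;
}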

best

Mathias



