[Mesa-dev] Question: st/mesa and context-shareable shaders

Marek Olšák maraeo at gmail.com
Wed Sep 30 09:25:05 PDT 2015


On Wed, Sep 30, 2015 at 3:53 PM, Roland Scheidegger <sroland at vmware.com> wrote:
> Am 30.09.2015 um 11:41 schrieb Erik Faye-Lund:
>> On Mon, Sep 28, 2015 at 4:39 PM, Roland Scheidegger <sroland at vmware.com> wrote:
>>>
>>> In short, for simplicity, only things which really required sharing
>>> were sharable (pretty much just actual resources - and yes, that
>>> doesn't work too well for GL either, as you can't share sampler/rt
>>> views; let's face it, GL's resource model there, inherited from legacy
>>> GL, is a disaster, not that d3d9 was any better).
>>
>> OpenGL allows sharing the objects that potentially take up a lot of
>> sharable memory, like shaders, programs, texture images, renderbuffers
>> and vertex buffer objects. But not those that do not, like
>> framebuffer objects and sampler objects. This makes a ton of sense to
>> me, and calling this model "a disaster" seems quite unfair.
> The "disaster" was in reference to the separate textures/renderbuffers,
> which imho really is a disaster - of course this has its roots in actual
> implementations (which sometimes even had separate memory for those). It
> continues to cause problems everywhere; there are still some ugly hacks,
> for instance in the mesa state tracker, that we can't get rid of (because
> the optimal format for a texture might then not be renderable). So yes,
> "resources" need to be shareable (albeit GL calls them vbos, textures,
> renderbuffers), and yes, these can take up a lot of memory.
> Typically, shaders/programs were much smaller than these, albeit they've
> grown considerably - don't forget gallium is more than 7 years old too.
> So maybe making them sharable in GL was a reasonable, forward-looking
> choice, but gallium was meant to make things easier for drivers, and
> typically there was no way you could compile things for the hw without
> context information in any case.
>
>>
>>> And IIRC multithreaded GL in general never really used to work all that
>>> well, and no one was really using it much.
>>
>> That's not quite true. I've been writing multithreaded OpenGL
>> applications professionally for the last 7 years, and I've experienced
>> very few problems with the proprietary drivers in this respect. OpenGL
>> multi-threading works fine, and is in rather heavy use out there. You
>> might not see those applications, but they exist. And they have existed
>> for a long time, without notable problems.
>>
> There were some minimal demos in mesa for multithreaded GL, and IIRC
> they exposed problems not only in mesa itself. Though granted, maybe 7
> years isn't far enough back... I thought the usefulness was typically
> quite limited, but it wouldn't be surprising if that changed too (even
> if you actually compiled your shader in a different thread, I wouldn't
> have been surprised if most of the compile time was actually spent in
> the thread where you first used that shader anyway). Of course, 10
> years back most people didn't even have dual-core cpus...

Nowadays, a lot of applications, such as web browsers (Firefox, Chrome)
and game engines (UE4), use several contexts and threads.
Historically, web browsers suffered most from the sampler-view <->
context dependency, and we had some ugly bugs due to that (making
sampler views per-screen would simplify that code a lot). The number of
multithreaded GL applications is only going to grow, and drivers must
already be thread-safe to be considered usable. Also, piglit has a
multithreaded shader-compiling GLX test that has been very useful for
fixing race conditions in our driver. We definitely need more of
those.

Marek

