[Mesa-dev] GBM and the Device Memory Allocator Proposals
Rob Clark
robdclark at gmail.com
Thu Nov 30 18:48:45 UTC 2017
On Thu, Nov 30, 2017 at 1:28 AM, James Jones <jajones at nvidia.com> wrote:
> On 11/29/2017 01:10 PM, Rob Clark wrote:
>>
>> On Wed, Nov 29, 2017 at 12:33 PM, Jason Ekstrand <jason at jlekstrand.net>
>> wrote:
>>>
>>> On Sat, Nov 25, 2017 at 1:20 PM, Rob Clark <robdclark at gmail.com> wrote:
>>>>
>>>>
>>>> On Sat, Nov 25, 2017 at 12:46 PM, Jason Ekstrand <jason at jlekstrand.net>
>>>> wrote:
>>>>>
>>>>> I'm not quite sure what I think about this. I think I would like to
>>>>> see $new_thing at least replace the guts of GBM. Whether GBM becomes a
>>>>> wrapper around $new_thing or $new_thing implements the GBM API, I'm not
>>>>> sure. What I don't think I want is to see GBM development continuing
>>>>> on its own so we have two competing solutions.
>>>>
>>>>
>>>> I don't really view them as competing.. there is *some* overlap, i.e.
>>>> allocating a buffer.. but even if you are using GBM w/out $new_thing
>>>> you could allocate a buffer externally and import it. I don't see
>>>> $new_thing as that much different from the GBM PoV.
>>>>
>>>> But things like surfaces (aka swap chains) seem a bit out of place
>>>> when you are thinking about implementing $new_thing for non-GPU
>>>> devices. Plus there are EGL<->GBM tie-ins that seem out of place when
>>>> talking about, for example, a camera. I kinda don't want to throw out the baby
>>>> with the bathwater here.
>>>
>>>
>>>
>>> Agreed. GBM is very EGLish and we don't want the new allocator to be
>>> that.
>>>
>>>>
>>>> *maybe* GBM could be partially implemented on top of $new_thing. I
>>>> don't quite see how that would work. Possibly we could deprecate
>>>> parts of GBM that are no longer needed? idk.. Either way, I fully
>>>> expect that GBM and Mesa's implementation of $new_thing could perhaps
>>>> sit on top of some of the same set of internal APIs. The public
>>>> interface can be decoupled from the internal implementation.
>>>
>>>
>>>
>>> Maybe I should restate things a bit. My real point was that modifiers +
>>> $new_thing + Kernel blob should be a complete and more powerful
>>> replacement
>>> for GBM. I don't know that we really can implement GBM on top of it
>>> because
>>> GBM has lots of wishy-washy concepts such as "cursor plane" which may not
>>> map well, at least not without querying the kernel about specific display
>>> planes. In particular, I don't want someone to feel like they need to
>>> use
>>> $new_thing and GBM at the same time or together. Ideally, I'd like them
>>> to
>>> never do that unless we decide gbm_bo is a useful abstraction for
>>> $new_thing.
>>>
>>
>> (just to repeat what I mentioned on irc)
>>
>> I think the main thing is how you create a swapchain/surface and know
>> which buffer is the current front buffer after SwapBuffers().. those are
>> the only bits of GBM that still seem like they would be useful. idk,
>> maybe there is some other idea.
>
>
> I don't view this as terribly useful except for legacy apps that need an EGL
> window surface and can't be updated to use new methods. Wayland compositors
> certainly don't fall in that category. I don't know that any GBM apps do.
kmscube doesn't count? :-P
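(For reference, the gbm_surface / lock-front-buffer dance I was asking
about above, roughly as kmscube does it.. just a sketch, with error
handling and the KMS side omitted, and dpy/egl_surf/gbm_surf standing in
for objects created earlier:)

   /* egl_surf was created on top of gbm_surf, i.e. gbm_surface_create()
    * followed by eglCreateWindowSurface() */
   eglSwapBuffers(dpy, egl_surf);
   struct gbm_bo *bo = gbm_surface_lock_front_buffer(gbm_surf);
   /* ... scan out 'bo', e.g. drmModeAddFB2() + drmModePageFlip() ... */
   gbm_surface_release_buffer(gbm_surf, bo);  /* once display is done with it */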
Hmm, I assumed weston and the other wayland compositors were still
using gbm to create EGL surfaces, but I confess I haven't actually
looked at the weston source code for quite a few years now.
Anyways, I think it is perfectly fine for GBM to stay in its current
form. It can already import dma-buf fds, and those can certainly come
from $new_thing.
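(Something like this, assuming $new_thing hands back a dma-buf fd plus
the usual width/height/stride/format metadata.. the variable names here
are just placeholders:)

   struct gbm_import_fd_data data = {
       .fd     = dmabuf_fd,            /* fd allocated by $new_thing */
       .width  = width,
       .height = height,
       .stride = stride,
       .format = GBM_FORMAT_XRGB8888,
   };
   struct gbm_bo *bo = gbm_bo_import(gbm, GBM_BO_IMPORT_FD, &data,
                                     GBM_BO_USE_SCANOUT);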
So I guess we want an EGL extension to return the allocator device
instance for the GPU. That also takes care of the non-bare-metal
case.
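(No idea yet what that would look like, but presumably something in the
spirit of EGL_EXT_device_query.. the token and handle type below are
purely made up for illustration:)

   /* hypothetical -- no such extension or token exists today */
   EGLAttrib val = 0;
   eglQueryDisplayAttribEXT(dpy, EGL_ALLOCATOR_DEVICE_EXT, &val);
   struct allocator_device *alloc_dev = (struct allocator_device *)val;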
> Rather, I think the way forward for the classes of apps that need something
> like GBM or the generic allocator is more or less the path ChromeOS took
> with their graphics architecture: Render to individual buffers (using FBOs
> bound to imported buffers in GL) and manage buffer exchanges/blits manually.
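(fwiw the GL side of that recipe is basically: wrap the dma-buf in an
EGLImage and attach it to an FBO.. rough sketch, assuming
EGL_EXT_image_dma_buf_import and GL_OES_EGL_image are present; real code
would look up the extension entry points via eglGetProcAddress(), check
errors, and width/height/stride/dmabuf_fd are placeholders:)

   EGLint attrs[] = {
       EGL_WIDTH, width,
       EGL_HEIGHT, height,
       EGL_LINUX_DRM_FOURCC_EXT, DRM_FORMAT_XRGB8888,
       EGL_DMA_BUF_PLANE0_FD_EXT, dmabuf_fd,
       EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
       EGL_DMA_BUF_PLANE0_PITCH_EXT, stride,
       EGL_NONE,
   };
   EGLImageKHR img = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
                                       EGL_LINUX_DMA_BUF_EXT, NULL, attrs);

   GLuint tex, fbo;
   glGenTextures(1, &tex);
   glBindTexture(GL_TEXTURE_2D, tex);
   glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, img);

   glGenFramebuffers(1, &fbo);
   glBindFramebuffer(GL_FRAMEBUFFER, fbo);
   glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_TEXTURE_2D, tex, 0);
   /* render, then hand the dma-buf (plus a fence, see below) off to the
    * next consumer */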
>
> The useful abstraction surfaces provide isn't so much deciding which buffer
> is currently "front" and "back", but rather handling the transition/hand-off
> to the window system/display device/etc. in SwapBuffers(), and the whole
> idea of the allocator proposals is to make that something the application or
> at least some non-driver utility library handles explicitly based on where
> exactly the buffer is being handed off to.
Hmm, ok.. I guess the transition will need some hook into the driver.
For freedreno and vc4 (and I suspect this is not uncommon for tiler
GPUs), switching FBOs doesn't necessarily flush rendering to hw.
Maybe it would work out if you requested the sync fd from an EGL fence
before passing things to the next device, as that would flush rendering.
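Something along these lines, i.e. EGL_ANDROID_native_fence_sync (again
just a sketch; the entry points come from eglGetProcAddress() in real
code, and error handling is omitted):

   EGLSyncKHR sync = eglCreateSyncKHR(dpy, EGL_SYNC_NATIVE_FENCE_ANDROID, NULL);
   glFlush();   /* ensure the fence and the rendering before it are submitted */
   int fence_fd = eglDupNativeFenceFDANDROID(dpy, sync);
   eglDestroySyncKHR(dpy, sync);
   /* pass fence_fd along with the dma-buf to the next device, e.g. as a
    * KMS IN_FENCE_FD */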
I wonder a bit about perf tools and related things.. gallium HUD and
apitrace use SwapBuffers() as a frame marker..
> The one other piece of useful information provided by EGL surfaces that I suspect
> only our hardware cares about is whether the app is potentially going to
> bind a depth buffer along with the color buffers from the surface, and
> AFAICT, the GBM notion of surfaces doesn't provide enough information for
> our driver to determine that at surface creation time, so the GBM surface
> mechanism doesn't fit quite right with NVIDIA hardware anyway.
>
> That's all for the compositors, embedded apps, demos, and whatnot that are
> using GBM directly though. Every existing GL wayland client needs to be
> able to get an EGLSurface and call eglSwapBuffers() on it. As I mentioned
> in my XDC 2017 slides, I think that's best handled by a generic EGL window
> system implementation that all drivers could share, and which uses allocator
> mechanisms behind the scenes to build up an EGLSurface from individual
> buffers. It would all have to be transparent to apps, but we already had
> that working with our EGLStreams wayland implementation, and the Mesa
> Wayland EGL client does roughly the same thing with DRM or GBM buffers IIRC,
> but without a driver-external interface. It should be possible with generic
> allocator buffers too. Jason's Vulkan WSI improvements that were sent out
> recently move Vulkan in that direction already as well, and that was always
> one of the goals of the Vulkan external objects extensions.
>
> This is all a really long-winded way of saying yeah I think it would be
> technically feasible to implement GBM on top of the generic allocator
> mechanisms, but I don't think that's a very interesting undertaking. It'd
> just be an ABI-compatibility thing for a bunch of open-source apps, which
> seems unnecessary in the long run since the apps can just be patched
> instead. Maybe it's useful as a transition mechanism though.
>
> However, if the generic allocator is going to be something separate from
> GBM, I think the idea of modernizing & adapting the existing GBM backend
> infrastructure in Mesa to serve as a backend for the allocator is a good
> idea. Maybe it's easier to just let GBM sit on that same updated backend
> beside the allocator API. For GBM, all the interesting stuff happens in the
> backend anyway.
right
BR,
-R