Introduction and updates from NVIDIA
Jasper St. Pierre
jstpierre at mecheye.net
Thu Mar 24 18:43:51 UTC 2016
On Thu, Mar 24, 2016 at 10:06 AM, Andy Ritger <aritger at nvidia.com> wrote:
... snip ...
> eglstreams or gbm or any other implementation aside, is it always _only_
> the KMS driver that knows what the optimal configuration would be?
> It seems like part of the decision could require knowledge of the graphics
> hardware, which presumably the OpenGL/EGL driver is best positioned
> to have.
Why would the OpenGL driver be the best place to know about display
controller configuration? On a lot of ARM SoCs, the two are separate
modules, often provided by separate companies. For instance, the Mali
GPUs don't have display controllers, and the Mali driver is often
provided as a blob to vendors, who must use it with their custom-built
display controllers. Buffer allocation is currently done through DRI2
with the Mali blob, so it's expected that the best allocation is done
server-side in the X server.
I agree that we need somewhere better to hook up smart buffer
allocation, but OpenGL/EGL isn't necessarily the best place. We
decided a little while ago that a separate shared library and
interface designed to do buffer allocation that can be configured on a
per-hardware basis would be a better idea, and that's how gbm started
-- as a generic buffer manager.
> For that aspect: would it be reasonable to execute hardware-specific
> driver code in the drmModeAtomicCommit() call chain between the
> application calling libdrm to make the atomic update, and the ioctl
> into the kernel? Maybe that would be a call to libgbm that dispatches to
> the hardware-specific gbm backend. However it is structured, having
> hardware-specific graphics driver code execute as part of the flip
> request might be one way to let the graphics driver piece and the display
> driver piece coordinate on hardware specifics, without polluting the
> application-facing API with hardware-specifics?
Wait a minute. Once you're in commit, isn't that far too late for
hardware specifics? Aren't we talking about buffer allocation and
such, which would need to happen far, far before the commit? Or did I
miss something here?