Introduction and updates from NVIDIA

Andy Ritger aritger at nvidia.com
Sat Apr 2 00:18:57 UTC 2016


On Thu, Mar 24, 2016 at 11:43:51AM -0700, Jasper St. Pierre wrote:
> On Thu, Mar 24, 2016 at 10:06 AM, Andy Ritger <aritger at nvidia.com> wrote:
> 
> ... snip ...
> 
> > eglstreams or gbm or any other implementation aside, is it always _only_
> > the KMS driver that knows what the optimal configuration would be?
> > It seems like part of the decision could require knowledge of the graphics
> > hardware, which presumably the OpenGL/EGL driver is best positioned
> > to have.
> 
> Why would the OpenGL driver be the best thing to know about display
> controller configuration?

Sorry, I was unclear: I didn't mean exclusively the display controller
configuration in the above.  Rather, I meant the combination of the
display controller configuration and the graphics rendering capabilities.

> On a lot of ARM SoCs, the two are separate
> modules, often provided by separate companies. For instance, the Mali
> GPUs don't have display controllers, and the Mali driver is often
> provided as a blob to vendors, who must use it with their custom-built
> display controller.
> 
> Buffer allocation is currently done through DRI2 with the Mali blob,
> so it's expected that the best allocation is done server-side in your
> xf86-video-* driver.
> 
> I agree that we need somewhere better to hook up smart buffer
> allocation, but OpenGL/EGL isn't necessarily the best place. We
> decided a little while ago that a separate shared library and
> interface designed to do buffer allocation that can be configured on a
> per-hardware basis would be a better idea, and that's how gbm started
> -- as a generic buffer manager.

OK.
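
(As a point of reference, the usage-flag style of allocation that gbm
exposes today looks roughly like the sketch below; the device path,
dimensions, and format are arbitrary, and error handling is omitted.)

    #include <fcntl.h>
    #include <gbm.h>

    static struct gbm_bo *allocate_scanout_buffer(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);
        struct gbm_device *gbm = gbm_create_device(fd);

        /* Ask for a buffer that the GPU can render to and the display
         * controller can scan out; the hardware-specific gbm backend
         * decides what allocation satisfies both. */
        return gbm_bo_create(gbm, 1920, 1080, GBM_FORMAT_XRGB8888,
                             GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);
    }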

> > For that aspect: would it be reasonable to execute hardware-specific
> > driver code in the drmModeAtomicCommit() call chain between the
> > application calling libdrm to make the atomic update, and the ioctl
> > into the kernel?  Maybe that would be a call to libgbm that dispatches to
> > the hardware-specific gbm backend.  However it is structured, having
> > hardware-specific graphics driver code execute as part of the flip
> > request might be one way to let the graphics driver piece and the display
> > driver piece coordinate on hardware specifics, without polluting the
> > application-facing API with hardware specifics?
> 
> Wait a minute. Once you're in commit, isn't that far too late for
> hardware specifics? Aren't we talking about buffer allocation and
> such, which would need to happen far, far before the commit? Or did I
> miss something here?

I think I led the discussion off course with my previous response to
Daniel Vetter.

Definitely, buffer allocation for the current frame can't be altered
at commit time.  But it seems to me there is a class of graphics
hardware specifics that _are_ applicable at commit time: detiling, color
decompression, or any other sort of graphics/display coherency that needs
to be resolved by the graphics driver.  If the graphics driver were in the
commit call chain, it would have the option to perform those sorts of
resolutions at commit time.  This would, in turn, allow the graphics
driver to _not_ perform these resolutions (unnecessarily, and potentially
expensively) when the client-produced buffer is going to be used by
something other than display (e.g., as a texture).
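
To make the idea concrete, a purely hypothetical shape for such a hook
might look like the sketch below.  Aside from drmModeAtomicCommit(),
none of these names exist today; gbm_backend_prepare_commit() is just a
placeholder for wherever the hardware-specific graphics driver code
would be dispatched.

    #include <stdint.h>
    #include <xf86drmMode.h>
    #include <gbm.h>

    /* Hypothetical entry point into the hardware-specific gbm backend;
     * nothing like this exists in gbm today. */
    extern int gbm_backend_prepare_commit(struct gbm_device *gbm,
                                          drmModeAtomicReqPtr req);

    /* Hypothetical wrapper: give the graphics driver a chance to resolve
     * detiling, color decompression, etc. for the buffers referenced in
     * the request, before the ioctl reaches the kernel. */
    static int atomic_commit_with_resolve(int fd, struct gbm_device *gbm,
                                          drmModeAtomicReqPtr req,
                                          uint32_t flags, void *user_data)
    {
        if (gbm_backend_prepare_commit(gbm, req) != 0)
            return -1;

        /* The actual atomic update into the kernel. */
        return drmModeAtomicCommit(fd, req, flags, user_data);
    }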

Thanks,
- Andy


> >> -Daniel
> >> --
> >> Daniel Vetter
> >> Software Engineer, Intel Corporation
> >> http://blog.ffwll.ch
> > _______________________________________________
> > wayland-devel mailing list
> > wayland-devel at lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/wayland-devel
> 
> 
> 
> -- 
>   Jasper

