[PATCH] dma-buf: Use EXPORT_SYMBOL
Robert Morell
rmorell at nvidia.com
Wed Jan 18 17:11:40 PST 2012
On Wed, Jan 18, 2012 at 06:00:54AM -0800, Dave Airlie wrote:
> On Wed, Jan 18, 2012 at 1:55 PM, Ilija Hadzic
> <ihadzic at research.bell-labs.com> wrote:
> > On Wed, 18 Jan 2012, Dave Airlie wrote:
> >> The problem is the x86 nvidia binary driver does sit outside of
> >> subsystems, and I foresee wanting to share buffers with it from the
> >> Intel driver in light of the Optimus hardware. Although nouveau exists
> >> and I'd much rather nvidia get behind that wrt the kernel stuff, I
> >> don't foresee that happening.
Yes, this is one potential use case that I have in mind.
This is digressing a bit, but the binary nvidia driver is the best way
I see for us to support our users with a feature set comparable to
what's available on other operating systems. For technical reasons,
we've chosen to leverage a lot of common code written internally, which
allows us to release support for new hardware and software features much
more quickly than if those of us working on the Linux/FreeBSD/Solaris
drivers wrote it all from scratch. This means that we share a lot with
other NVIDIA drivers but, for better or worse, can't share much
infrastructure like DRI.
> > Please correct me if I'm talking nonsense here, but just the other day we
> > saw a different thread in which it was decided that the user cannot turn
> > on buffer sharing explicitly at compile time; rather, a driver that needs
> > it would turn it on automatically.
> >
> > Doesn't that alone exclude out-of-tree drivers? In other words, if you have
> > two out-of-tree drivers that want to use DMA buffer sharing, and no other
> > enabled driver in the kernel enables it implicitly, then such a kernel won't
> > make it possible for those two drivers to work.
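(For context, the mechanism being referred to is Kconfig's "select": an
in-tree user of the core pulls it in itself, and as I understand the
current plan there is no user-visible prompt. A sketch of what that
looks like; DRM_FOO is a made-up driver symbol here, and
DMA_SHARED_BUFFER is the option from the dma-buf series:

    config DRM_FOO
            tristate "Hypothetical foo GPU driver"
            select DMA_SHARED_BUFFER

So yes, a kernel whose only would-be users are out-of-tree would never
get the option set.)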
>
> Well the use case at least on x86 would be the open x86 driver sharing
> with the closed nvidia driver; if two closed drivers wanted to share, I'd
> expect them to do it internally anyway.
Correct. Right now, that covers Optimus laptops with Intel integrated
graphics and an NVIDIA GPU. We'd only use the dma-buf interface in the
case of interoperating with the Intel device.
I only see such hardware becoming more common. For example, if we can't
agree on using EXPORT_SYMBOL and somebody were to introduce a laptop
that had a Tegra GPU (which uses GPL-compatible open-source Linux
drivers) alongside a GeForce GPU (which is, as described above,
supported by our existing binary driver), then I imagine we'd have no
choice but to implement a separate open-source buffer allocation
mechanism for Tegra that could be used between the two, or just continue
using our existing nvhost code. This, along with every other SoC's
version, is exactly what the dma-buf project was intended to replace.
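To make the technical piece concrete: what we'd be calling from the
binary driver is just the importer half of the dma-buf interface. A
minimal sketch follows, assuming the API from the current dma-buf patch
series (the helper name import_buf is mine, and signatures may still
shift before things settle); every entry point used here is one of the
EXPORT_SYMBOL_GPL symbols this patch is about:

    /* Importer side: get a buffer by fd and map it for 'dev'. */
    #include <linux/dma-buf.h>
    #include <linux/err.h>

    static struct sg_table *import_buf(int fd, struct device *dev,
                                       struct dma_buf_attachment **attach_out)
    {
            struct dma_buf *dmabuf;
            struct dma_buf_attachment *attach;
            struct sg_table *sgt;

            dmabuf = dma_buf_get(fd);       /* fd passed over from the exporter */
            if (IS_ERR(dmabuf))
                    return ERR_CAST(dmabuf);

            attach = dma_buf_attach(dmabuf, dev);   /* register this device as a user */
            if (IS_ERR(attach)) {
                    dma_buf_put(dmabuf);
                    return ERR_CAST(attach);
            }

            /* the exporter hands back a scatterlist mapped for 'dev' */
            sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
            if (IS_ERR(sgt)) {
                    dma_buf_detach(dmabuf, attach);
                    dma_buf_put(dmabuf);
                    return ERR_CAST(sgt);
            }

            *attach_out = attach;
            return sgt;
    }

Teardown is the mirror image: unmap the attachment, detach, and drop the
dma_buf reference.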
> > Frankly, I never understood this "low-level interface" argument that is
> > kicked around whenever the EXPORT_SYMBOL_GPL topic is brought up. My view
> > of EXPORT_SYMBOL vs. EXPORT_SYMBOL_GPL is that it really boils down to the
> > license controversy about binary/proprietary modules in the Linux kernel.
> > To me it's about whether the authors of certain code (for mostly
> > philosophical reasons) agree that their (GPL) code is OK or not OK to link
> > against a non-GPL module.
> >
> > From that angle, I am not sure if it is ethical at all to modify how a
> > symbol is exported without the explicit consent of the original author
> > (regardless of what we think about the GPL/proprietary modules
> > controversy). So if NVidia needs to link DMA buffer sharing against their
> > proprietary driver, they should have explicit permission from the original
> > author to turn its symbols into EXPORT_SYMBOL.
>
> Which is the point of their patch, to ask permission from the author.
Right. I never intended to submit this patch behind anyone's back; I
just wanted to bring this to the authors' attention and ask if the
change could be made so that we could better serve shared-graphics
users.
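For clarity, the change itself is mechanical: each export in the
dma-buf core changes along these lines (one representative hunk shown as
illustration; the patch covers all of the entry points):

    -EXPORT_SYMBOL_GPL(dma_buf_attach);
    +EXPORT_SYMBOL(dma_buf_attach);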
Thanks,
Robert