[PATCH 3/4] drm/exynos: added userptr feature.

Dave Airlie airlied at gmail.com
Wed May 16 02:22:34 PDT 2012


On Tue, May 15, 2012 at 8:34 AM, Rob Clark <rob.clark at linaro.org> wrote:
> On Mon, Apr 23, 2012 at 7:43 AM, Inki Dae <inki.dae at samsung.com> wrote:
>> this feature could be used to access memory regions allocated by
>> malloc() in user mode, as well as mmapped memory regions allocated
>> by other memory allocators. the userptr interface can identify the
>> memory type through the vm_flags value and will get pages or page
>> frame numbers from user space appropriately.
>
> I apologize for being a little late to jump in on this thread, but...
>
> I must confess to not being a huge fan of userptr.  It really is
> opening a can of worms, and seems best avoided if at all possible.
> I'm not entirely sure of the use-case for which you require this, but I
> wonder if there isn't an alternative way? I mean, the main case I
> could think of for mapping userspace memory would be something like
> texture upload.  But could that be handled in an alternative way,
> something like a pwrite or texture_upload API, which could temporarily
> pin the userspace memory, kick off a dma from that user buffer to a
> proper GEM buffer, and then unpin the user buffer when the DMA
> completes?
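
For what it's worth, a minimal sketch of the vm_flags dispatch the
patch description above is talking about; the names and error handling
here are illustrative, not the actual exynos code. VM_PFNMAP/VM_IO
regions have no struct pages behind them, so you resolve PFNs with
follow_pfn(), while ordinary malloc()/anonymous memory gets pinned
with get_user_pages():

#include <linux/mm.h>
#include <linux/sched.h>

/* Illustrative only, not the exynos implementation: resolve a userptr
 * address based on the backing VMA's vm_flags. */
static int example_resolve_userptr(unsigned long uaddr,
                                   struct page **page, unsigned long *pfn)
{
        struct vm_area_struct *vma;
        int ret;

        down_read(&current->mm->mmap_sem);
        vma = find_vma(current->mm, uaddr);
        if (!vma || uaddr < vma->vm_start) {
                up_read(&current->mm->mmap_sem);
                return -EFAULT;
        }

        if (vma->vm_flags & (VM_PFNMAP | VM_IO)) {
                /* Device or remapped memory: no struct page, use the PFN. */
                ret = follow_pfn(vma, uaddr, pfn);
        } else {
                /* Ordinary anonymous/malloc() memory: pin the page. */
                ret = get_user_pages(current, current->mm, uaddr, 1,
                                     1 /* write */, 0 /* force */,
                                     page, NULL);
                ret = (ret == 1) ? 0 : -EFAULT;
        }
        up_read(&current->mm->mmap_sem);

        return ret;
}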
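
And a rough sketch of the pin/copy/unpin flow Rob describes above,
where user memory is only pinned for the duration of one transfer into
a proper GEM buffer; again the function is hypothetical and the DMA
itself is elided:

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/slab.h>

/* Hypothetical pwrite-style upload: pin, transfer, unpin. */
static int example_pwrite(const void __user *uaddr, size_t size)
{
        unsigned long start = (unsigned long)uaddr & PAGE_MASK;
        int npages = DIV_ROUND_UP(((unsigned long)uaddr & ~PAGE_MASK) + size,
                                  PAGE_SIZE);
        struct page **pages;
        int i, pinned, ret = 0;

        pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL);
        if (!pages)
                return -ENOMEM;

        /* Temporarily pin the user buffer (read-only for an upload). */
        pinned = get_user_pages_fast(start, npages, 0, pages);
        if (pinned < 0) {
                kfree(pages);
                return pinned;
        }
        if (pinned < npages)
                ret = -EFAULT;

        /* A real driver would map 'pages' into an sg_table here, kick
         * off the DMA into the GEM object's backing storage, and wait
         * for completion before unpinning. */

        for (i = 0; i < pinned; i++)
                put_page(pages[i]);
        kfree(pages);
        return ret;
}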

I'm with Rob on this: I really hate the userptr idea, and my problem
with letting it into exynos is that it sets a precedent for others to
do things the same way. I'm still not 100% sure how it's going to be
used, even with all your explanations.

Since we've agreed that only the X server can access the interface,
it makes no sense to me for it to exist at all, as the X server can
avoid using malloc'd memory for all of the objects it accesses.

I don't think pixman is at the level where you should be accelerating
it directly. I thought the point of pixman was to be a fast SW engine,
not something to be offloaded to a hw engine; the idea being that you
use cairo and back it onto something.

I know ssp had some ideas for making pixman able to do hw accel, but
userptr doesn't seem like the proper solution; it seems like a hack
that needs a lot more VM work to operate properly. And if it sets a
precedent in one GPU driver, I'll have 20 implementations of this from
ARM vendors and nobody will ever go back and fix things properly.

So I'm really not sure of the best way to move this forward. Maybe a
very clear set of use cases showing where things plug into this, and
why dma-buf or some other method isn't sufficient, would help, but I'm
having trouble getting past the fact that it's setting a dangerous
precedent.
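
For comparison, the dma-buf route already covers buffer sharing from
userspace without any user pointers; a simplified sketch using the
libdrm PRIME calls (device fds and error handling trimmed):

#include <stdint.h>
#include <xf86drm.h>

/* Export a real GEM buffer from one device as a dma-buf fd and import
 * it into another, instead of wrapping malloc()'d memory. */
int share_gem_buffer(int exporter_fd, int importer_fd, uint32_t handle,
                     uint32_t *imported_handle)
{
        int prime_fd;

        if (drmPrimeHandleToFD(exporter_fd, handle, DRM_CLOEXEC, &prime_fd))
                return -1;

        return drmPrimeFDToHandle(importer_fd, prime_fd, imported_handle);
}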

Dave.

