RFC: hardware accelerated bitblt using dma engine

Daniel Vetter daniel at ffwll.ch
Fri Aug 5 07:49:35 UTC 2016


On Fri, Aug 05, 2016 at 06:37:26AM +0200, Enrico Weigelt, metux IT consult wrote:
> On 05.08.2016 01:16, Enrico Weigelt, metux IT consult wrote:
> 
> <snip>
> Seems I've been on a completely wrong path - what I'm looking
> for is dma-buf. So my idea now goes like this:
> 
> * add a new 'virtual GPU' as a render node.
> * the basic operations are:
>   -> create a virtual dumb framebuffer (just in system memory)
>   -> import dma-bufs as BOs
>   -> blit between BOs using the dma-engine (see the sketch below)
> 
> That way, everything should be cleanly separated.
> 
> As the application needs to be aware of that buffer-and-blit approach
> anyway (IOW: allocate two BOs and trigger the blitting when it is done
> rendering), the extra glue needed for opening and talking to the
> render node should be quite minimal.
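
For reference, the copy step from the quoted list boils down to a plain
dmaengine memcpy on the kernel side. The sketch below is only illustrative:
vblt_blit() is a made-up name, error handling is trimmed, it blocks instead
of signalling a fence, and it assumes the driver already has DMA addresses
for both BOs (e.g. from mapping the imported dma-buf attachments):

#include <linux/dmaengine.h>

/* Copy 'len' bytes from 'src' to 'dst' using any memcpy-capable channel. */
static int vblt_blit(dma_addr_t dst, dma_addr_t src, size_t len)
{
        struct dma_async_tx_descriptor *tx;
        struct dma_chan *chan;
        dma_cookie_t cookie;
        dma_cap_mask_t mask;

        /* Request any channel that can do memory-to-memory transfers. */
        dma_cap_zero(mask);
        dma_cap_set(DMA_MEMCPY, mask);
        chan = dma_request_channel(mask, NULL, NULL);
        if (!chan)
                return -ENODEV;

        tx = dmaengine_prep_dma_memcpy(chan, dst, src, len,
                                       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
        if (!tx) {
                dma_release_channel(chan);
                return -EIO;
        }

        cookie = dmaengine_submit(tx);
        dma_async_issue_pending(chan);

        /* Block until the copy completes; good enough for a sketch. */
        dma_sync_wait(chan, cookie);

        dma_release_channel(chan);
        return 0;
}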

Yup, this is pretty much what I've been suggesting ;-) The other bit:
please don't try to make the ioctl/uapi interfaces generic; it will hurt.
Of course, if there's a pile of IP (from the same vendor or whatever) that
all works similarly, then sure, a shared driver makes sense. But pretty
soon it doesn't (usually right when you want something closer to direct
submission to hardware with relocations).
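
Concretely, the application-side glue for such a driver-private uapi could
stay as small as the sketch below. DRM_IOCTL_PRIME_FD_TO_HANDLE is the
standard prime import ioctl; everything named vblt_* is an invented
placeholder for whatever the driver would actually expose:

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/drm.h>

/* Hypothetical driver-private blit request and ioctl number. */
struct vblt_blit {
        uint32_t src_handle;  /* GEM handle of the source BO */
        uint32_t dst_handle;  /* GEM handle of the destination BO */
        uint32_t length;      /* bytes to copy */
};
#define DRM_IOCTL_VBLT_BLIT \
        DRM_IOWR(DRM_COMMAND_BASE + 0x00, struct vblt_blit)

/* Import a dma-buf fd as a GEM handle on the blitter's render node. */
static uint32_t import_dmabuf(int drm_fd, int dmabuf_fd)
{
        struct drm_prime_handle args = { .fd = dmabuf_fd };

        ioctl(drm_fd, DRM_IOCTL_PRIME_FD_TO_HANDLE, &args);
        return args.handle;
}

/* Copy 'len' bytes from one dma-buf into another via the blitter. */
static int blit_dmabufs(int drm_fd, int src_fd, int dst_fd, uint32_t len)
{
        struct vblt_blit req = {
                .src_handle = import_dmabuf(drm_fd, src_fd),
                .dst_handle = import_dmabuf(drm_fd, dst_fd),
                .length     = len,
        };

        return ioctl(drm_fd, DRM_IOCTL_VBLT_BLIT, &req);
}

Opening the render node (e.g. /dev/dri/renderD128) and calling
blit_dmabufs() would then be the whole of the per-application glue.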
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

