[PATCH] drm/vgem: implement virtual GEM
thellstrom at vmware.com
Thu May 21 07:26:57 PDT 2015
On 05/21/2015 04:19 PM, Rob Clark wrote:
> On Thu, May 21, 2015 at 9:49 AM, Thomas Hellstrom <thellstrom at vmware.com> wrote:
>> On 05/21/2015 11:13 AM, Daniel Vetter wrote:
>>> On Thu, Apr 02, 2015 at 10:30:53AM +0200, Thomas Hellstrom wrote:
>>>> On 11/25/2014 02:08 AM, Zachary Reizner wrote:
>>>>> After looking into removing platform_device, I found that using
>>>>> dma_buf_attach with a NULL device always returns an error, thereby
>>>>> preventing me from using VGEM for import and mmap. The solution seems
>>>>> to be to skip using dma_buf_attach, and instead use dma_buf_mmap when
>>>>> user-space tries to mmap a gem object that was imported into VGEM. The
>>>>> drawback to this approach is that most drivers stub their
>>>>> dma_buf_ops->mmap implementation. Presumably mmap could be implemented
>>>>> for the drivers that this would make sense for. Are there any
>>>>> comments, questions, or concerns for this proposed solution?
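For reference, a minimal sketch of the forwarding Zach describes. The
vgem_gem_object wrapper and its field names here are hypothetical, not
the actual vgem code; only dma_buf_mmap() is a real kernel interface:

#include <linux/dma-buf.h>
#include <drm/drmP.h>

/* Hypothetical wrapper: vgem would stash the imported dma-buf at
 * PRIME import time, since dma_buf_attach() with a NULL device
 * fails. */
struct vgem_gem_object {
	struct drm_gem_object base;
	struct dma_buf *import_dmabuf;
};

/* Forward a userspace mmap of an imported object to the exporter.
 * dma_buf_mmap() ends up in the exporter's dma_buf_ops->mmap, which
 * most drivers stub out, so an error return has to be expected. */
static int vgem_mmap_imported(struct vgem_gem_object *vobj,
			      struct vm_area_struct *vma)
{
	if (!vobj->import_dmabuf)
		return -EINVAL;

	return dma_buf_mmap(vobj->import_dmabuf, vma, 0);
}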
>>>> I see now that this driver has entered -next, and I'm sorry this comment
>>>> didn't arrive before. I simply missed this discussion :(
>>>> My biggest concern, as stated many, many times before, is that dma-buf
>>>> mmap is a horrible interface for incoherent drivers and for drivers
>>>> that use odd-format (tiled) dma-bufs, basically because it doesn't
>>>> supply a dirtied region. Therefore (correct me if I'm wrong) there has
>>>> been an agreement that outside of ARM SoC use-cases we should simply
>>>> not implement dma-buf mmap for anything other than internal driver use.
>>>> So assume a real driver implements dma-buf mmap, but it is crawling due
>>>> to coherency or untiling / tiling operations. How do you tell a generic
>>>> user of the vgem driver *NOT* to mmap for performance reasons? Or is
>>>> this driver only intended for ARM SoC systems?
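To make the problem concrete: the exporter-side hook behind dma-buf
mmap receives nothing but the vma, so the exporter never learns which
region userspace is about to read or dirty (abbreviated, from
include/linux/dma-buf.h around this kernel version):

struct dma_buf_ops {
	/* ... */
	int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
	/* ... */
};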
>>> Seconded. Somehow I thought we'd pulled in vgem to support software
>>> rendering like llvmpipe, and I remember that being the original
>>> justification. TIL that's indeed not the case: Google is splattering
>>> their cros tree with dma_buf->mmap implementations, and the intention
>>> really seems to be to use dma_buf->mmap and vgem as the generic
>>> interface to expose buffer objects of real drivers to software
>>> rendering.
>>> Given that neither vgem nor dma_buf->mmap has any sane concept of
>>> handling coherency, I'm really unhappy about this and tempted to just
>>> submit the revert for vgem before 4.1 ships. I'll chat with relevant
>>> people a bit more. Worse, I chatted with Stephane today and he brushed
>>> this off as not-my-problem, saying that if this hurts Intel, Intel
>>> should fix it. That's not how a proper upstream interface gets
>>> designed, and coherency handling is an even more serious problem on
>>> ARM and on virtual hw like vmwgfx.
>> So given how this has turned out, my opinion is that before a usable
>> generic mmap of accelerated buffer objects goes upstream, there should
>> be a proper interface to request that regions be made present and to
>> mark regions dirty (a sketch of what I mean follows below). It seems to
>> me that so far all use-cases are for one- or two-dimensional access, so
>> it should be sufficient to start with that and add other access modes
>> later on. Now, this is no guarantee that people won't request and dirty
>> the whole dma-buf on each access, but at least it would make people
>> think, and if things become slow it's pretty clear where the problem
>> is.
>> I'm all for delaying vgem until we have such an interface in place.
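To be concrete about the kind of interface meant above, a purely
illustrative sketch: none of these structs or ioctls exist, and the
names and numbers are made up.

struct drm_vgem_region {
	__u32 handle;	/* GEM handle of the mapped object */
	__u32 x, y;	/* origin of the 2D region */
	__u32 width;	/* extent in x, in bytes */
	__u32 height;	/* extent in y, in lines */
};

/* PREPARE would ask the driver to make the region present and
 * CPU-accessible (untile, make coherent); DIRTY would flag it as
 * written so the driver knows what to flush / retile. */
#define DRM_IOCTL_VGEM_PREPARE_REGION \
	DRM_IOW(DRM_COMMAND_BASE + 0x10, struct drm_vgem_region)
#define DRM_IOCTL_VGEM_DIRTY_REGION \
	DRM_IOW(DRM_COMMAND_BASE + 0x11, struct drm_vgem_region)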
> so, for the llvmpipe use-case, I think dri2 is sufficient. So why
> don't we just drop the DRIVER_PRIME flag for now..
Fine with me!
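Concretely, dropping PRIME would just mean something like this in the
vgem driver struct (an untested sketch, not a patch):

static struct drm_driver vgem_driver = {
	.driver_features = DRIVER_GEM,	/* DRIVER_PRIME dropped */
	/* ... fops, ioctls and gem callbacks unchanged ... */
};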
>>> On Intel
>>> (well, at least big core, thanks to the huge coherent cache fabric)
>>> this is mostly a non-issue, except that the patch in the cros tree
>>> obviously gets things wrong.
>>> Decently pissed tbh.
>>> Cheers, Daniel