[PATCHv7 03/15] v4l: vb2: add support for shared buffer (dma_buf)

Tomasz Stanislawski t.stanislaws at samsung.com
Thu Aug 2 09:31:13 PDT 2012


Hi Laurent, Hi Dima,

On 06/27/2012 10:40 PM, Laurent Pinchart wrote:
> Hi Dima,
> 
> On Tuesday 26 June 2012 13:53:34 Dima Zavin wrote:
>> On Tue, Jun 26, 2012 at 2:40 AM, Hans Verkuil <hverkuil at xs4all.nl> wrote:
>>> On Tue 26 June 2012 11:11:06 Laurent Pinchart wrote:
>>>> On Tuesday 26 June 2012 10:40:44 Tomasz Stanislawski wrote:
>>>>> Hi Dima Zavin,
>>>>> Thank you for the patch and for a ping reminder :).
>>>>>
>>>>> You are right. The unmap is missing in __vb2_queue_cancel.
>>>>> I will apply your fix into next version of V4L2 support for dmabuf.
>>>>>
>>>>> Please refer to some comments below.
>>>>>
>>>>> On 06/20/2012 08:12 AM, Dima Zavin wrote:
>>>>>> Tomasz,
>>>>>>
>>>>>> I've encountered an issue with this patch when userspace does several
>>>>>> stream_on/stream_off cycles. When the user tries to qbuf a buffer
>>>>>> after doing stream_off, we trigger the "dmabuf already pinned"
>>>>>> warning since we didn't unmap the buffer as dqbuf was never called.
>>>>>>
>>>>>> The below patch adds calls to unmap in queue_cancel, but my feeling
>>>>>> is that we probably should be calling detach too (i.e. put_dmabuf).
>>>>
>>>> According to the V4L2 specification, the "VIDIOC_STREAMOFF ioctl, apart
>>>> of aborting or finishing any DMA in progress, unlocks any user pointer
>>>> buffers locked in physical memory, and it removes all buffers from the
>>>> incoming and outgoing queues".
>>>
>>> Correct. And what that means in practice is that after a streamoff all
>>> buffers are returned to the state they had just before STREAMON was
>>> called.
>>
>> That can't be right. The buffers had to have been returned to the
>> state just *after REQBUFS*, not just *before STREAMON*. You need to
>> re-enqueue buffers before calling STREAMON. I assume that's what you
>> meant?
> 
> Your interpretation is correct.
> 

So now we have to decide what should be changed: the spec or vb2?
Bringing the queue state back to *after REQBUFS* may make the
next STREAMON + QBUF cycle a very costly operation.

Re-attaching and mapping a DMABUF might trigger a DMA allocation and
*will* trigger the creation of new IOMMU mappings. In the user-pointer
case, the next get_user_pages call may cause numerous page faults and
will *create* new IOMMU mappings.
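
For illustration, this is roughly the per-buffer work that would have to be
redone on the next QBUF if the queue really went back to its post-REQBUFS
state. Only dma_buf_attach(), dma_buf_map_attachment() and dma_buf_detach()
are the real dma-buf core calls; the wrapper function and its arguments are
made up for the example:

#include <linux/dma-buf.h>
#include <linux/err.h>

/*
 * Sketch of the per-buffer cost: re-attach and re-map one dma-buf.
 * dma_buf_attach()/dma_buf_map_attachment()/dma_buf_detach() are the
 * dma-buf core calls; the wrapper itself is hypothetical.
 */
static struct sg_table *reacquire_dmabuf(struct dma_buf *dbuf,
					 struct device *dev,
					 struct dma_buf_attachment **attach)
{
	struct sg_table *sgt;

	/* may allocate bookkeeping (or even backing memory) in the exporter */
	*attach = dma_buf_attach(dbuf, dev);
	if (IS_ERR(*attach))
		return ERR_CAST(*attach);

	/* builds brand new IOMMU mappings for the importing device */
	sgt = dma_buf_map_attachment(*attach, DMA_FROM_DEVICE);
	if (IS_ERR(sgt))
		dma_buf_detach(dbuf, *attach);

	return sgt;
}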

Is there any need for such a cleanup when the destruction of the buffers
and all caches can be requested explicitly with REQBUFS(count = 0)?
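
In other words, an application that wants a full teardown can already ask
for it explicitly. A minimal userspace sketch (error handling trimmed,
assuming a capture queue using the DMABUF memory type added by this series):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Stop streaming, then explicitly drop all buffers with REQBUFS(count = 0). */
static int stop_and_free(int fd)
{
	enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	struct v4l2_requestbuffers req;

	if (ioctl(fd, VIDIOC_STREAMOFF, &type) < 0)
		return -1;

	memset(&req, 0, sizeof(req));
	req.count = 0;				/* count = 0 frees the buffers */
	req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	req.memory = V4L2_MEMORY_DMABUF;

	return ioctl(fd, VIDIOC_REQBUFS, &req);
}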

Maybe it would be easier to change the spec by removing the
"apart of ... in physical memory" part?

STREAMOFF should mean that streaming stops and that the resources are no
longer in use. DMABUFs are unmapped, but unmapping does not mean releasing:
the exporter may keep the resource in its caches.
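
Under that reading, something along the lines of Dima's fix fits naturally:
unmap the DMABUF planes in __vb2_queue_cancel(), but keep the attachments,
so only the mapping has to be redone on the next QBUF. A rough sketch
against this series (the unmap_dmabuf op, the dbuf_mapped flag and
call_memop() follow the naming used in the patches; nothing here is final
code):

/*
 * Rough sketch: on queue cancel (STREAMOFF) release the device mappings
 * of DMABUF planes, but keep the attachments, so the next QBUF only has
 * to re-map instead of re-attaching the whole buffer.
 */
static void __vb2_dmabuf_unmap_all(struct vb2_queue *q)
{
	unsigned int buffer, plane;

	if (q->memory != V4L2_MEMORY_DMABUF)
		return;

	for (buffer = 0; buffer < q->num_buffers; ++buffer) {
		struct vb2_buffer *vb = q->bufs[buffer];

		for (plane = 0; plane < vb->num_planes; ++plane) {
			struct vb2_plane *p = &vb->planes[plane];

			if (!p->dbuf_mapped)
				continue;

			/* unmap only; do not detach or put the dma-buf */
			call_memop(q, unmap_dmabuf, p->mem_priv);
			p->dbuf_mapped = 0;
		}
	}
}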

Currently, vb2 does not follow the policy from the spec.
The put_userptr op is called on:
- VIDIOC_REQBUFS
- VIDIOC_CREATE_BUFS
- vb2_queue_release(), which is usually called on the close() syscall

The put_userptr op is not called at streamoff, therefore the user pages
stay locked after STREAMOFF.

BTW. What does 'buffer locked' mean?

Does it mean that a buffer is pinned or referenced by a driver/HW?

Regards,
Tomasz Stanislawski


>>> So after STREAMOFF you can immediately queue all buffers again with QBUF
>>> and call STREAMON to restart streaming. No mmap or other operations
>>> should be required. This behavior must be kept.
>>>
>>> VIDIOC_REQBUFS() or a close() are the only two operations that will
>>> actually free the buffers completely.
>>>
>>> In practice, a STREAMOFF is either followed by a STREAMON at a later time,
>>> or almost immediately followed by REQBUFS or close() to tear down the
>>> buffers. So I don't think the buffers should be detached at streamoff.
>>
>> I agree. I was leaning this way which is why I left it out of my patch
>> and wanted to hear your guys' opinion as you are much more familiar
>> with the intended behavior than I am.
>>
>> Thanks!
> 
> You're welcome. Thank you for reporting the problem and providing a patch.
> 
