[PATCH v7 3/6] iio: core: Add new DMABUF interface infrastructure
Christian König
christian.koenig at amd.com
Mon Mar 4 13:40:31 UTC 2024
On 04.03.24 at 14:28, Nuno Sá wrote:
> On Mon, 2024-03-04 at 13:44 +0100, Christian König wrote:
>> On 23.02.24 at 13:14, Nuno Sa wrote:
>>> From: Paul Cercueil <paul at crapouillou.net>
>>>
>>> Add the necessary infrastructure to the IIO core to support a new
>>> optional DMABUF based interface.
>>>
>>> With this new interface, DMABUF objects (externally created) can be
>>> attached to an IIO buffer, and subsequently used for data transfer.
>>>
>>> A userspace application can then use this interface to share DMABUF
>>> objects between several interfaces, allowing it to transfer data in a
>>> zero-copy fashion, for instance between IIO and the USB stack.
>>>
>>> The userspace application can also memory-map the DMABUF objects, and
>>> access the sample data directly. The advantage of doing this vs. the
>>> read() interface is that it avoids an extra copy of the data between the
>>> kernel and userspace. This is particularly useful for high-speed
>>> devices which produce several megabytes or even gigabytes of data per
>>> second.
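
As an aside to the mmap point above: direct CPU access to a mapped DMABUF is
normally bracketed with DMA_BUF_IOCTL_SYNC so caches stay coherent with the
device. A minimal sketch, assuming the dmabuf fd and its size come from
whatever exporter created the buffer:

#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/dma-buf.h>

static void *map_samples(int dmabuf_fd, size_t size)
{
        struct dma_buf_sync sync;
        void *p;

        p = mmap(NULL, size, PROT_READ, MAP_SHARED, dmabuf_fd, 0);
        if (p == MAP_FAILED)
                return NULL;

        /* Begin a CPU read access window. */
        sync.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_READ;
        ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

        /* ... read sample data directly through p ... */

        /* End the access window before handing the buffer back to DMA. */
        sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_READ;
        ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

        return p;
}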
>>>
>>> As part of the interface, 3 new IOCTLs have been added:
>>>
>>> IIO_BUFFER_DMABUF_ATTACH_IOCTL(int fd):
>>> Attach the DMABUF object identified by the given file descriptor to the
>>> buffer.
>>>
>>> IIO_BUFFER_DMABUF_DETACH_IOCTL(int fd):
>>> Detach the DMABUF object identified by the given file descriptor from
>>> the buffer. Note that closing the IIO buffer's file descriptor will
>>> automatically detach all previously attached DMABUF objects.
>>>
>>> IIO_BUFFER_DMABUF_ENQUEUE_IOCTL(struct iio_dmabuf *):
>>> Request a data transfer to/from the given DMABUF object. Its file
>>> descriptor, as well as the transfer size and flags are provided in the
>>> "iio_dmabuf" structure.
>>>
>>> These three IOCTLs have to be performed on the IIO buffer's file
>>> descriptor, obtained using the IIO_BUFFER_GET_FD_IOCTL() ioctl.
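
For illustration, a minimal userspace sketch of the sequence described above.
The struct layout and ioctl numbers below are taken from this series'
proposed uapi header and may still change before the final version:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

struct iio_dmabuf {
        uint32_t fd;
        uint32_t flags;
        uint64_t bytes_used;
};

#define IIO_BUFFER_DMABUF_ATTACH_IOCTL   _IOW('i', 0x92, int)
#define IIO_BUFFER_DMABUF_ENQUEUE_IOCTL  _IOW('i', 0x94, struct iio_dmabuf)

/*
 * buf_fd is the per-buffer fd from IIO_BUFFER_GET_FD_IOCTL(); dmabuf_fd
 * was created elsewhere (e.g. by a DMA heap or the USB stack).
 */
static int iio_attach_and_enqueue(int buf_fd, int dmabuf_fd, uint64_t nbytes)
{
        struct iio_dmabuf req = {
                .fd = dmabuf_fd,
                .flags = 0,             /* no special flags */
                .bytes_used = nbytes,
        };

        if (ioctl(buf_fd, IIO_BUFFER_DMABUF_ATTACH_IOCTL, &dmabuf_fd) < 0)
                return -1;

        /* Queue one transfer to/from the freshly attached DMABUF. */
        return ioctl(buf_fd, IIO_BUFFER_DMABUF_ENQUEUE_IOCTL, &req);
}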
>> A few nitpicks and one bug below; apart from that it looks good to me.
> Hi Christian,
>
> Thanks for looking at it. I'll just add a comment on the bug below; for
> the other points I hope Paul can step in and reply. My assumption (which
> seems to be wrong) was that the dmabuf bits were already good to go, since
> they should be pretty similar to the USB part.
>
> ...
>
>>> +        if (dma_to_ram) {
>>> +                /*
>>> +                 * If we're writing to the DMABUF, make sure we don't have
>>> +                 * readers
>>> +                 */
>>> +                retl = dma_resv_wait_timeout(dmabuf->resv,
>>> +                                             DMA_RESV_USAGE_READ, true,
>>> +                                             timeout);
>>> +                if (retl == 0)
>>> +                        retl = -EBUSY;
>>> +                if (retl < 0) {
>>> +                        ret = (int)retl;
>>> +                        goto err_resv_unlock;
>>> +                }
>>> +        }
>>> +
>>> +        if (buffer->access->lock_queue)
>>> +                buffer->access->lock_queue(buffer);
>>> +
>>> +        ret = dma_resv_reserve_fences(dmabuf->resv, 1);
>>> +        if (ret)
>>> +                goto err_queue_unlock;
>>> +
>>> +        dma_resv_add_fence(dmabuf->resv, &fence->base,
>>> +                           dma_resv_usage_rw(dma_to_ram));
>> That is an incorrect use of dma_resv_usage_rw(). That function is for
>> retrieving fences, not for adding them.
>>
>> See the function's implementation and comments; when you use it like
>> this you get exactly what you don't want.
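
Presumably the fix on the add side is to name the usage directly instead of
going through the retrieval helper; a sketch of what that could look like
(not spelled out in this mail):

        /*
         * A DMA write to the DMABUF publishes a WRITE fence, a DMA read
         * publishes a READ fence -- the opposite of what
         * dma_resv_usage_rw() returns.
         */
        dma_resv_add_fence(dmabuf->resv, &fence->base,
                           dma_to_ram ? DMA_RESV_USAGE_WRITE
                                      : DMA_RESV_USAGE_READ);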
>>
> Does that mean that the USB stuff [1] is also wrong?
>
> [1]: https://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb.git/tree/drivers/usb/gadget/function/f_fs.c?h=usb-next#n1669
Yes, that's broken as well. The dma_resv_usage_rw() function is meant to be
used when retrieving fences.

In other words, your "if (dma_to_ram) ..." above is incorrect as well.
That one should look more like this:
/*
 * Writes need to wait for other writes and reads, but readers only
 * have to wait for writers.
 */
retl = dma_resv_wait_timeout(dmabuf->resv, dma_resv_usage_rw(dma_to_ram),
                             true, timeout);
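
For reference, the helper in include/linux/dma-resv.h (as of kernels around
this thread) makes that read/write asymmetry explicit:

static inline enum dma_resv_usage dma_resv_usage_rw(bool write)
{
        /*
         * A new write has to wait for all existing reads and writes,
         * while a new read only has to wait for existing writes --
         * hence the apparent inversion.
         */
        return write ? DMA_RESV_USAGE_READ : DMA_RESV_USAGE_WRITE;
}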
Regards,
Christian.
>
> - Nuno Sá
>
>