[PATCH 01/15] iio: buffer-dma: Get rid of incoming/outgoing queues
Paul Cercueil
paul at crapouillou.net
Mon Nov 22 15:27:14 UTC 2021
On Mon, Nov 22 2021 at 16:17:59 +0100, Lars-Peter Clausen
<lars at metafoo.de> wrote:
> On 11/22/21 4:16 PM, Paul Cercueil wrote:
>> Hi Lars,
>>
>> On Mon, Nov 22 2021 at 16:08:51 +0100, Lars-Peter Clausen
>> <lars at metafoo.de> wrote:
>>> On 11/21/21 9:08 PM, Paul Cercueil wrote:
>>>>
>>>>
>>>> On Sun, Nov 21 2021 at 19:49:03 +0100, Lars-Peter Clausen
>>>> <lars at metafoo.de> wrote:
>>>>> On 11/21/21 6:52 PM, Paul Cercueil wrote:
>>>>>> Hi Lars,
>>>>>>
>>>>>> On Sun, Nov 21 2021 at 17:23:35 +0100, Lars-Peter Clausen
>>>>>> <lars at metafoo.de> wrote:
>>>>>>> On 11/15/21 3:19 PM, Paul Cercueil wrote:
>>>>>>>> The buffer-dma code was using two queues, incoming and outgoing,
>>>>>>>> to manage the state of the blocks in use.
>>>>>>>>
>>>>>>>> While this totally works, it adds some complexity to the code,
>>>>>>>> especially since the code only manages 2 blocks. It is much easier
>>>>>>>> to just check each block's state manually, and keep a counter for
>>>>>>>> the next block to dequeue.
>>>>>>>>
>>>>>>>> Since the new DMABUF based API wouldn't use these incoming and
>>>>>>>> outgoing queues anyway, getting rid of them now makes the upcoming
>>>>>>>> changes simpler.
>>>>>>>>
>>>>>>>> Signed-off-by: Paul Cercueil <paul at crapouillou.net>
>>>>>>> The outgoing queue is going to be replaced by fences, but I
>>>>>>> think we need to keep the incoming queue.
>>>>>>
>>>>>> Blocks are always accessed in sequential order, so we now have a
>>>>>> "queue->next_dequeue" that cycles between the buffers
>>>>>> allocated for fileio.
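
For reference, a minimal sketch of that counter-based dequeue (the
helper name is made up for illustration; locking and error handling
omitted):

/* Blocks complete in submission order, so the block at next_dequeue
 * is always the oldest in-flight one; no outgoing list is needed.
 */
static struct iio_dma_buffer_block *
next_dequeue_block(struct iio_dma_buffer_queue *queue)
{
	unsigned int idx = queue->fileio.next_dequeue;
	struct iio_dma_buffer_block *block = queue->fileio.blocks[idx];

	if (block->state != IIO_BLOCK_STATE_DONE)
		return NULL;

	/* Advance the counter, cycling over the fileio blocks. */
	queue->fileio.next_dequeue =
		(idx + 1) % ARRAY_SIZE(queue->fileio.blocks);

	return block;
}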
>>>>>>
>>>>>>>> [...]
>>>>>>>> @@ -442,28 +435,33 @@ EXPORT_SYMBOL_GPL(iio_dma_buffer_disable);
>>>>>>>>  static void iio_dma_buffer_enqueue(struct iio_dma_buffer_queue *queue,
>>>>>>>>  	struct iio_dma_buffer_block *block)
>>>>>>>>  {
>>>>>>>> -	if (block->state == IIO_BLOCK_STATE_DEAD) {
>>>>>>>> +	if (block->state == IIO_BLOCK_STATE_DEAD)
>>>>>>>>  		iio_buffer_block_put(block);
>>>>>>>> -	} else if (queue->active) {
>>>>>>>> +	else if (queue->active)
>>>>>>>>  		iio_dma_buffer_submit_block(queue, block);
>>>>>>>> -	} else {
>>>>>>>> +	else
>>>>>>>>  		block->state = IIO_BLOCK_STATE_QUEUED;
>>>>>>>> -		list_add_tail(&block->head, &queue->incoming);
>>>>>>> If iio_dma_buffer_enqueue() is called with a dmabuf and the buffer
>>>>>>> is not active, it will be marked as queued, but we don't actually
>>>>>>> keep a reference to it anywhere. It will never be submitted to the
>>>>>>> DMA, and it will never be signaled as completed.
>>>>>>
>>>>>> We do keep a reference to the buffers, in the queue->fileio.blocks
>>>>>> array. When the buffer is enabled, all the blocks in that array
>>>>>> that are in the "queued" state will be submitted to the DMA.
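
A sketch of that enable-time walk (simplified from the actual enable
path; locking omitted):

/* On enable, scan the fileio block array and submit every block
 * still marked QUEUED, i.e. everything enqueued while inactive.
 */
static void submit_queued_blocks(struct iio_dma_buffer_queue *queue)
{
	struct iio_dma_buffer_block *block;
	unsigned int i;

	for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
		block = queue->fileio.blocks[i];
		if (block && block->state == IIO_BLOCK_STATE_QUEUED)
			iio_dma_buffer_submit_block(queue, block);
	}
}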
>>>>>>
>>>>> But not when used in combination with the DMA buf changes later
>>>>> in this series.
>>>>>
>>>>
>>>> That's still the case after the DMABUF changes of this series. Or
>>>> can you point me to exactly what you think is broken?
>>>>
>>> When you allocate a DMABUF with the allocate IOCTL and then submit
>>> it with the enqueue IOCTL before the buffer is enabled, it will end
>>> up marked as queued, but not actually be queued anywhere.
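
For comparison, a sketch of what the removed incoming list provided
(simplified; the DEAD case, refcounting and locking omitted):

/* Pre-patch behaviour: a block enqueued while the buffer is inactive
 * is parked on the incoming list, which the enable path later drains.
 * Without the list, a DMABUF block enqueued early is only marked
 * QUEUED and, not being in queue->fileio.blocks, is never found again.
 */
static void enqueue_block(struct iio_dma_buffer_queue *queue,
			  struct iio_dma_buffer_block *block)
{
	if (queue->active) {
		iio_dma_buffer_submit_block(queue, block);
	} else {
		block->state = IIO_BLOCK_STATE_QUEUED;
		list_add_tail(&block->head, &queue->incoming);
	}
}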
>>>
>>
>> Ok, it works for me because I never enqueue blocks before enabling
>> the buffer. I can add a requirement that blocks must be enqueued
>> only after the buffer is enabled.
>
> I don't think that is a good idea. This way you are going to
> potentially drop data at the beginning of your stream when the DMA
> isn't ready yet.
>
You wouldn't drop data, but it could cause an underrun, yes. Is it such
a big deal, knowing that the buffer was just enabled? I don't think you
can disable then re-enable the buffer without causing a discontinuity
in the stream.
-Paul