<div dir="ltr"><div class="gmail_extra"><br><br><div class="gmail_quote">2013/6/19 Lucas Stach <span dir="ltr"><<a href="mailto:l.stach@pengutronix.de" target="_blank">l.stach@pengutronix.de</a>></span><br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid">
Am Mittwoch, den 19.06.2013, 19:44 +0900 schrieb Inki Dae:<br>
<div><div class="h5">><br>
> > -----Original Message-----<br>
> > From: Lucas Stach [mailto:<a href="mailto:l.stach@pengutronix.de">l.stach@pengutronix.de</a>]<br>
> > Sent: Wednesday, June 19, 2013 7:22 PM<br>
> > To: Inki Dae<br>
> > Cc: 'Russell King - ARM Linux'; 'linux-fbdev'; 'Kyungmin Park'; 'DRI<br>
> > mailing list'; 'myungjoo.ham'; 'YoungJun Cho'; linux-arm-<br>
> > <a href="mailto:kernel@lists.infradead.org">kernel@lists.infradead.org</a>; <a href="mailto:linux-media@vger.kernel.org">linux-media@vger.kernel.org</a><br>
> > Subject: Re: [RFC PATCH v2] dmabuf-sync: Introduce buffer synchronization<br>
> > framework<br>
> ><br>
> > Am Mittwoch, den 19.06.2013, 14:45 +0900 schrieb Inki Dae:<br>
> > ><br>
> > > > -----Original Message-----<br>
> > > > From: Lucas Stach [mailto:<a href="mailto:l.stach@pengutronix.de">l.stach@pengutronix.de</a>]<br>
> > > > Sent: Tuesday, June 18, 2013 6:47 PM<br>
> > > > To: Inki Dae<br>
> > > > Cc: 'Russell King - ARM Linux'; 'linux-fbdev'; 'Kyungmin Park'; 'DRI<br>
> > > > mailing list'; 'myungjoo.ham'; 'YoungJun Cho'; linux-arm-<br>
> > > > <a href="mailto:kernel@lists.infradead.org">kernel@lists.infradead.org</a>; <a href="mailto:linux-media@vger.kernel.org">linux-media@vger.kernel.org</a><br>
> > > > Subject: Re: [RFC PATCH v2] dmabuf-sync: Introduce buffer<br>
> > synchronization<br>
> > > > framework<br>
> > > ><br>
> > > > Am Dienstag, den 18.06.2013, 18:04 +0900 schrieb Inki Dae:<br>
> > > > [...]<br>
> > > > ><br>
> > > > > > a display device driver. It shouldn't be used within a single<br>
> > driver<br>
> > > > > > as a means of passing buffers between userspace and kernel space.<br>
> > > > ><br>
> > > > > What I try to do is not really such ugly thing. What I try to do is<br>
> > to<br>
> > > > > notify that, when CPU tries to access a buffer , to kernel side<br>
> > through<br>
> > > > > dmabuf interface. So it's not really to send the buffer to kernel.<br>
> > > > ><br>
> > > > > Thanks,<br>
> > > > > Inki Dae<br>
> > > > ><br>
> > > > The most basic question about why you are trying to implement this<br>
> > sort<br>
> > > > of thing in the dma_buf framework still stands.<br>
> > > ><br>
> > > > Once you imported a dma_buf into your DRM driver it's a GEM object and<br>
> > > > you can and should use the native DRM ioctls to prepare/end a CPU<br>
> > access<br>
> > > > to this BO. Then internally to your driver you can use the dma_buf<br>
> > > > reservation/fence stuff to provide the necessary cross-device sync.<br>
> > > ><br>
> > ><br>
> > > I don't really want that is used only for DRM drivers. We really need<br>
> > > it for all other DMA devices; i.e., v4l2 based drivers. That is what I<br>
> > > try to do. And my approach uses reservation to use dma-buf resources<br>
> > > but not dma fence stuff anymore. However, I'm looking into Radeon DRM<br>
> > > driver for why we need dma fence stuff, and how we can use it if<br>
> > > needed.<br>
> > ><br>
> ><br>
> > Still I don't see the point why you need syncpoints above dma-buf. In<br>
> > both the DRM and the V4L2 world we have defined points in the API where<br>
> > a buffer is allowed to change domain from device to CPU and vice versa.<br>
> ><br>
> > In DRM if you want to access a buffer with the CPU you do a cpu_prepare.<br>
> > The buffer changes back to GPU domain once you do the execbuf<br>
> > validation, queue a pageflip to the buffer or similar things.<br>
> ><br>
> > In V4L2 the syncpoints for cache operations are the queue/dequeue API<br>
> > entry points. Those are also the exact points to synchronize with other<br>
> > hardware thus using dma-buf reserve/fence.<br>
><br>
><br>
> If so, what if we want to access a buffer with the CPU _in V4L2_? We<br>
> should open a drm device node, and then do a cpu_prepare?<br>
><br>
</div></div>Not at all. As I said the syncpoints are the queue/dequeue operations.<br>
When dequeueing a buffer you are explicitly dragging the buffer domain<br>
back from device into userspace and thus CPU domain.<br>
<br>
If you are operating on an mmap of a V4L2 processed buffer it's either<br>
before or after it got processed by the hardware and therefore all DMA<br>
operations on the buffer are bracketed by the V4L2 qbug/dqbuf ioctls.<br>
That is where cache operations and synchronization should happen. The<br>
V4L2 driver shouldn't allow you to dequeue a buffer and thus dragging it<br>
back into CPU domain while there is still DMA ongoing. Equally the queue<br>
ioctrl should make sure caches are properly written back to memory. The<br>
results of reading/writing to the mmap of a V4L2 buffer while it is<br>
enqueued to the hardware is simply undefined and there is nothing<br>
suggesting that this is a valid usecase.</blockquote><div> </div><div>Thanks to comments. However, that's not definitely my point, and you just say the conventional way. My point is to more enhance the conventional way.</div>
<div> </div><div>The conventional way is (sorry but I'm not really a good painter) :</div><div> </div><div>CPU -> DMA,</div><div> ioctl(qbuf command) ioctl(streamon)<br> | |<br>
| |<br> qbuf <- syncpoint start streaming <- dma access</div><div> </div><div>DMA accesses a queued buffer with start streaming if source and destination queues are ready.</div>
<div> </div><div>And DMA -> CPU,<br> ioctl(dqbuf command)<br> |<br> |<br> dqbuf <- syncpoint</div><div> </div><div>Internally, dqbuf waits for until DMA opertion is completed. And if completed then user process can access a dequeued buffer.</div>
<div> </div><div><br>On the other hand, the below shows how we could enhance the conventional way with my approach (just example):</div><div> </div><div>CPU -> DMA,<br> ioctl(qbuf command) ioctl(streamon)<br>
| |<br> | |<br> qbuf <- dma_buf_sync_get start streaming <- syncpoint</div><div> </div>
<div>dma_buf_sync_get just registers a sync buffer(dmabuf) to sync object. And the syncpoint is performed by calling dma_buf_sync_lock(), and then DMA accesses the sync buffer.</div><div> </div><div>And DMA -> CPU,<br>
ioctl(dqbuf command)<br> |<br> |<br> dqbuf <- nothing to do</div><div> </div><div>Actual syncpoint is when DMA operation is completed (in interrupt handler): the syncpoint is performed by calling dma_buf_sync_unlock().<br>
</div><div>Hence, my approach is to move the syncpoints into just before dma access as long as possible. Of course, user process need to call dma-buf interfaces or simliar things for buffer synchronization with DMA side. However, as of now, there is no good idea: I had implemented the user interfaces in dma-buf framework but that was just to show you so ugly. The eventual purpose of my approach is to integrate sync interfaces with dmabuf sync, and this approach can be used commonly for v4l2, drm drivers, user processes, and so on: as I already mentioned in document file, this approach is for DMA devices using system memory as DMA buffer, especially most ARM SoCs.</div>
<div> </div><div>Thanks,</div><div>Inki Dae</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid">
<div class="im">
Regards,<br>
Lucas<br>
<br>
--<br>
Pengutronix e.K. | Lucas Stach |<br>
Industrial Linux Solutions | <a href="http://www.pengutronix.de/" target="_blank">http://www.pengutronix.de/</a> |<br>
Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-5076 |<br>
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |<br>
<br>
</div><div><div class="h5">_______________________________________________<br>
dri-devel mailing list<br>
<a href="mailto:dri-devel@lists.freedesktop.org">dri-devel@lists.freedesktop.org</a><br>
<a href="http://lists.freedesktop.org/mailman/listinfo/dri-devel" target="_blank">http://lists.freedesktop.org/mailman/listinfo/dri-devel</a><br>
</div></div></blockquote></div><br></div></div>