[PATCH 1/3] dma-fence: dma-buf synchronization (v7)

Sumit Semwal sumit.semwal at linaro.org
Tue Aug 7 23:35:45 PDT 2012


Hi Maarten,

On 8 August 2012 00:17, Maarten Lankhorst
<maarten.lankhorst at canonical.com> wrote:
> On 07-08-12 19:53, Maarten Lankhorst wrote:
>> A dma-fence can be attached to a buffer which is being filled or consumed
>> by hw, to allow userspace to pass the buffer without waiting to another
>> device.  For example, userspace can call page_flip ioctl to display the
>> next frame of graphics after kicking the GPU but while the GPU is still
>> rendering.  The display device sharing the buffer with the GPU would
>> attach a callback to get notified when the GPU's rendering-complete IRQ
>> fires, to update the scan-out address of the display, without having to
>> wake up userspace.

Thanks for this patchset! Could you please also update
Documentation/dma-buf-sharing.txt with the relevant bits?

We've tried to keep the corresponding Documentation up-to-date as the
framework has grown and new features have been added to it, and I think
features as important as dma-fence and dmabufmgr certainly warrant a
healthy update.
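
A usage sketch in the documentation would also help; for the page-flip
case described above, I imagine something roughly like the below. To be
clear, the display-side names (struct disp_device, disp_write_scanout)
are made up for illustration, and I may be misremembering the exact
dma_fence_add_callback() signature in this series:

#include <linux/dma-fence.h>    /* header name assumed from this series */

struct flip_pending {
        struct dma_fence_cb cb;         /* callback node, embedded */
        struct disp_device *disp;       /* made-up display device */
        dma_addr_t new_scanout;         /* buffer to scan out next */
};

/* Runs from the GPU's rendering-complete interrupt path. */
static void flip_fence_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
{
        struct flip_pending *flip = container_of(cb, struct flip_pending, cb);

        /* Latch the new scan-out address without waking userspace. */
        disp_write_scanout(flip->disp, flip->new_scanout);
}

static int disp_queue_flip(struct disp_device *disp,
                           struct dma_fence *fence,
                           struct flip_pending *flip)
{
        int ret;

        flip->disp = disp;
        ret = dma_fence_add_callback(fence, &flip->cb, flip_fence_cb);
        if (ret == -ENOENT) {
                /* Fence already signalled: flip immediately. */
                disp_write_scanout(disp, flip->new_scanout);
                ret = 0;
        }
        return ret;
}

Even a skeleton like that would make the intended flow much easier to
pick up from the document than from the patches alone.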
>
> I implemented this for intel and debugged it with intel <-> nouveau
> interaction. Unfortunately the nouveau patches aren't ready at this point,
> but the git repo I'm using is available at:
>
> http://cgit.freedesktop.org/~mlankhorst/linux/
>
> It has the patch series and a sample implementation for intel, based on
> drm-intel-next tree.
>
> I tried to keep it as deadlock- and race-condition-free as possible,
> but the locking gets complicated enough that, if I'm unlucky, something
> might have slipped through regardless.
>
> Especially the locking in i915_gem_reset_requests is screwed up.
> This shows what a real PITA it is to abort callbacks prematurely while
> keeping everything stable. As such, aborting requests should only be
> done in exceptional circumstances; in this case the hardware died and
> things are already locked up anyhow.
>
> ~Maarten
>
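
On the i915_gem_reset_requests point: if the hardware is already dead,
would it be workable to side-step the abort path entirely and instead
signal every outstanding fence, so the callbacks all run through the
one well-tested path? A very rough sketch only -- the request
bookkeeping below is made up, and the fence API names are from memory:

#include <linux/list.h>

struct pending_request {                /* made-up bookkeeping */
        struct dma_fence *fence;        /* fence this request signals */
        struct list_head link;
};

/* On a dead GPU, signal everything instead of unhooking callbacks. */
static void reset_signal_all(struct list_head *pending)
{
        struct pending_request *rq, *tmp;

        list_for_each_entry_safe(rq, tmp, pending, link) {
                dma_fence_signal(rq->fence);    /* fires all callbacks */
                list_del(&rq->link);
        }
}

Waiters would still need some way to learn that the work never actually
completed, so this is at best a partial answer.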

-- 
Thanks and best regards,
Sumit Semwal

