[RFC v2 0/7] drm: asynchronous atomic plane update
Gustavo Padovan
gustavo at padovan.org
Thu Apr 27 18:36:50 UTC 2017
2017-04-27 Ville Syrjälä <ville.syrjala at linux.intel.com>:
> On Thu, Apr 27, 2017 at 12:15:12PM -0300, Gustavo Padovan wrote:
> > From: Gustavo Padovan <gustavo.padovan at collabora.com>
> >
> > Hi,
> >
> > Second take of Asynchronous Plane Updates over Atomic. Here I looked
> > to msm, vc4 and i915 to identify a common pattern to create atomic helpers
> > for async updates. So in patch 1 drm_atomic_async_check() and
> > drm_atomic_helper_async_commit() are introduced along with driver's plane hooks:
> > ->atomic_async_check() and ->atomic_async_commit().
> >
> > For now we only support async update for one plane at a time. Also the async
> > update can't modify the CRTC so no modesets are allowed.
> >
> > Then the other patches add support for it in the drivers. I did virtio mostly
> > for testing. i915 has been converted and I've been using it without any
> > problem. IGT tests seem to be fine, but there are somewhat random failures
> > with or without the async update changes. msm and vc4 are only
> > compile-tested, so I think this needs more testing.
> >
> > I started IGT changes to test the Atomic IOCTL with the new flag:
> >
> > https://git.collabora.com/cgit/user/padovan/intel-gpu-tools.git/
> >
> > v2:
> >
> > Apart from all the comments on v1, one extra change I made was to remove
> > the constraint of only updating the plane if the queued state didn't touch
> > that plane. I believe it was too cautious a change, and furthermore this
> > constraint was affecting throughput negatively on i915.
>
> So you're now allowing reordering the updates? As in update A is
> scheduled before update B, but update B happens before update A.
> That is not a good idea.
That is what already happens with legacy cursor updates: they jump ahead of
the scheduled update and apply the cursor change immediately. What we propose
here is to do the same over atomic when the DRM_MODE_ATOMIC_ASYNC_UPDATE flag
is set. Async page flips should use the same infrastructure in the future.
Gustavo