[RFC] Async Flip in Atomic ioctl corrections
Murthy, Arun R
arun.r.murthy at intel.com
Wed Jun 11 07:04:19 UTC 2025
struct drm_crtc_state {
	/**
	 * @async_flip:
	 *
	 * This is set when DRM_MODE_PAGE_FLIP_ASYNC is set in the legacy
	 * PAGE_FLIP IOCTL. It's not wired up for the atomic IOCTL itself yet.
	 */
	bool async_flip;
In the existing code the async_flip flag was intended for the legacy
PAGE_FLIP IOCTL, but the same flag is also being used for the atomic
IOCTL.
As far as the hardware is concerned, async flip is a plane feature and
should be treated on a per-plane basis, not a per-pipe basis.
For a given hardware pipe, among the multiple hardware planes, one plane
can go with a sync flip while the other two or three go with async flips.
Tearing may be noticeable with this, and any policy around it should be
handled by user space. The KMD should not include any policy such as
allowing async on only one plane for a given pipe; all policy is done in
user space, and the KMD only exposes what the hardware supports.
diff --git a/include/drm/drm_plane.h b/include/drm/drm_plane.h
index 01479dd94e76..53447b4a5ba7 100644
--- a/include/drm/drm_plane.h
+++ b/include/drm/drm_plane.h
@@ -260,6 +260,13 @@ struct drm_plane_state {
* flow.
*/
bool color_mgmt_changed : 1;
+ /**
+ * @async_flip:
+ *
+	 * This is used for the atomic IOCTL. The async_flip in
+	 * crtc_state is used for the legacy PAGE_FLIP IOCTL.
+ */
+ bool async_flip;
};
Adding async_flip to plane_state will allow enabling async flip on a
per-plane basis for the atomic IOCTL.
There would be a number of changes in the atomic path to correct this,
so as to remove the async_flip flag from crtc_state, which was intended
for the legacy page_flip IOCTL.
The changes include removing the existing checks in the atomic path
that reject any other changes (different plane, pipe, connector)
alongside an async flip. These would be replaced with checks that
reject any change on the particular plane where async is enabled
(and reject any change to its pipe/connector, as that would impact
this plane).
Please let us know your opinion on this.
Thanks and Regards,
Arun R Murthy