[Intel-xe] [RFC PATCH 0/3] Xe dma fence handling on atomic commit
Hogander, Jouni
jouni.hogander at intel.com
Mon Oct 16 11:22:17 UTC 2023
On Thu, 2023-09-28 at 19:21 +0300, Ville Syrjälä wrote:
> On Thu, Sep 28, 2023 at 07:10:46PM +0300, Ville Syrjälä wrote:
> > On Thu, Sep 28, 2023 at 08:23:41AM +0000, Hogander, Jouni wrote:
> > > On Wed, 2023-09-27 at 14:35 +0200, Maarten Lankhorst wrote:
> > > > Hey,
> > > >
> > > > On 2023-09-27 12:45, Hogander, Jouni wrote:
> > > > > On Wed, 2023-09-27 at 12:33 +0200, Maarten Lankhorst wrote:
> > > > > > Hey,
> > > > > >
> > > > > > When we wrote the original display support, we purposely
> > > > > > decided not to add i915_sw_fence support.
> > > > > >
> > > > > > In this case, I think a better approach would be to remove
> > > > > > this code from i915 as well, and end up with cleaner display
> > > > > > code for both drivers.
> > > > >
> > > > > Yes, I agree that this should eventually be the goal. I did
> > > > > some experiments here:
> > > > >
> > > > > https://patchwork.freedesktop.org/patch/558982/?series=123898&rev=4
> > > > >
> > > > > That series replaces i915_sw_fence with the same code I'm
> > > > > using for the Xe driver in this patch set. The problem is GPU
> > > > > reset detection. I currently have no good ideas on how to
> > > > > tackle that without compromising i915 functionality in this
> > > > > scenario -> I ended up doing this only for Xe to make sure it
> > > > > doesn't block upstreaming Xe. Would this be acceptable as a
> > > > > temporary solution, to be resolved after upstreaming? Anyway,
> > > > > what I'm doing in these patches is not really a revised
> > > > > i915_sw_fence, but uses dma_fences.
> > > > In atomic, plane_state->fence is set for all fences, and that
> > > > should be enough. I think creating a reservation object is
> > > > massive overkill; we should try to use
> > > > drm_atomic_helper_wait_for_fences() if at all possible.
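
(For reference, if plane_state->fence is populated properly, the commit
side does indeed reduce to the generic helper. A minimal sketch below;
the function name and placement are illustrative only, not the actual
Xe code.)

#include <drm/drm_atomic_helper.h>
#include <drm/drm_print.h>

/* Illustrative only: "xe_atomic_commit_tail" is a made-up name here. */
static void xe_atomic_commit_tail(struct drm_atomic_state *state)
{
        struct drm_device *dev = state->dev;
        int ret;

        /*
         * Waits on plane_state->fence for every plane in the commit,
         * i.e. the explicit uapi fence plus whatever prepare_fb
         * attached from the fb object's dma_resv, then drops the
         * fence references.
         */
        ret = drm_atomic_helper_wait_for_fences(dev, state, false);
        if (ret)
                drm_warn(dev, "fence wait failed (%d)\n", ret);

        /* ... program the hardware, wait for flips, clean up ... */
}
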
> > >
> > > I checked plane_state->fence. To me it looks like it's meant to
> > > be set from userspace? I don't think that is the case for i915,
> > > and for Xe I don't know whether it is done.
> > >
> > > Also, i915 waits for fences on both the old and the new frame
> > > buffer objects. Shouldn't we do the same for Xe as well?
> >
> > xe shouldn't need the old fb fence since userspace there won't be
> > doing any MI_SCANLINE_WAIT stuff.
> >
> > > Anyway, there is currently no mechanism in the dma_fence
> > > implementation to get a common fence for two separate dma_resv
> > > objects (dma_resv_get_singleton) -> I did this by creating a new
> > > dma_resv just for that purpose. If there is no reason to wait for
> > > possible fences on the old fb object, maybe we could just do
> > > dma_resv_get_singleton in prepare planes for Xe and wait for that
> > > and uapi.fence in commit?
> >
> > drm_gem_plane_helper_prepare_fb() supposedly does something that
> > already works for xe. Looks like it does a few things:
> > - populates plane_state->fence with dma_resv_get_singleton() if
> >   using implicit fencing
> > - uses dma_fence_chain internally to add additional fences (kernel
> >   internal ones for memory movement/etc. maybe?) to the uapi
> >   supplied explicit fence
> >
> > Presumably we should be able to glue on the old fb fence using
> > dma_fence_chain as well?
I sent a version to the intel-gfx mailing list which uses
drm_gem_plane_helper_prepare_fb for both the old and the new fb.
Please check:
https://patchwork.freedesktop.org/series/125160/
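
Very roughly, the glue there is along the lines of the sketch below
(this is just an illustration of chaining the old fb fence onto
plane_state->fence with dma_fence_chain, not the actual patch; the
hook name is made up, error paths and multi-plane fbs are simplified):

#include <drm/drm_atomic.h>
#include <drm/drm_gem.h>
#include <drm/drm_gem_atomic_helper.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <linux/dma-fence-chain.h>
#include <linux/dma-resv.h>

static int prepare_plane_fb(struct drm_plane *plane,
                            struct drm_plane_state *new_state)
{
        struct drm_plane_state *old_state =
                drm_atomic_get_old_plane_state(new_state->state, plane);
        struct drm_gem_object *old_obj;
        struct dma_fence *old_fence = NULL;
        struct dma_fence_chain *chain;
        int ret;

        /*
         * Fills new_state->fence from the new fb's dma_resv, chaining
         * it onto any explicit uapi fence with dma_fence_chain
         * internally.
         */
        ret = drm_gem_plane_helper_prepare_fb(plane, new_state);
        if (ret || !old_state || !old_state->fb)
                return ret;

        /* Only plane 0 of the old fb, for brevity. */
        old_obj = drm_gem_fb_get_obj(old_state->fb, 0);

        ret = dma_resv_get_singleton(old_obj->resv, DMA_RESV_USAGE_WRITE,
                                     &old_fence);
        if (ret || !old_fence)
                return ret;

        if (!new_state->fence) {
                new_state->fence = old_fence;
                return 0;
        }

        /* Glue the old fb fence on as one more chain link. */
        chain = dma_fence_chain_alloc();
        if (!chain) {
                dma_fence_put(old_fence);
                return -ENOMEM;
        }

        dma_fence_chain_init(chain, new_state->fence, old_fence, 1);
        new_state->fence = &chain->base;

        return 0;
}
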
>
> Oh and the display vs. GPU reset chicken/egg problem might be
> the hard nut to crack. We need to get the pending display commits
> flushed out before we can let a GPU reset clobber the display, but
> the display commits are waiting on the fences and thus neither the
> display commits nor the GPU reset can proceed, I think. Having to
> wait for some kind of huge fence timeout in the display commit seems
> a bit sub-optimal. Or would the fences get signalled when we detect
> the GPU hang? If not, can we achieve something like that?
I tested this, and it seems that the fences are indeed signaled when a
GPU hang is detected -> no need to handle that separately while
waiting for the fences.
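
(If we ever did need a safety net here instead, a capped wait on the
plane fences would be the obvious fallback. A rough sketch only, with
an arbitrarily chosen timeout:)

#include <drm/drm_plane.h>
#include <drm/drm_print.h>
#include <linux/dma-fence.h>
#include <linux/jiffies.h>

static void wait_plane_fence_capped(struct drm_plane_state *plane_state)
{
        long timeout;

        if (!plane_state->fence)
                return;

        /* Bounded, uninterruptible wait; 5 s is an arbitrary cap. */
        timeout = dma_fence_wait_timeout(plane_state->fence, false,
                                         msecs_to_jiffies(5000));
        if (timeout <= 0)
                DRM_WARN("plane fence wait timed out/failed (%ld)\n",
                         timeout);

        dma_fence_put(plane_state->fence);
        plane_state->fence = NULL;
}
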
BR,
Jouni Högander
>