[RFC][PATCH 4/4] drm: i915: Atomic pageflip WIP

Chris Wilson chris at chris-wilson.co.uk
Fri Sep 14 08:56:00 PDT 2012


On Fri, 14 Sep 2012 18:30:21 +0300, Ville Syrjälä <ville.syrjala at linux.intel.com> wrote:
> On Fri, Sep 14, 2012 at 03:27:05PM +0100, Chris Wilson wrote:
> > On Fri, 14 Sep 2012 17:21:30 +0300, Ville Syrjälä <ville.syrjala at linux.intel.com> wrote:
> > > On Fri, Sep 14, 2012 at 02:57:26PM +0100, Chris Wilson wrote:
> > > > On Wed, 12 Sep 2012 18:47:07 +0300, ville.syrjala at linux.intel.com wrote:
> > > > > +static void intel_flip_finish(struct drm_flip *flip)
> > > > > +{
> > > > > +	struct intel_flip *intel_flip =
> > > > > +		container_of(flip, struct intel_flip, base);
> > > > > +	struct drm_device *dev = intel_flip->crtc->dev;
> > > > > +
> > > > > +	if (intel_flip->old_bo) {
> > > > > +		mutex_lock(&dev->struct_mutex);
> > > > > +
> > > > > +		intel_finish_fb(intel_flip->old_bo);
> > > > 
> > > > So if I understand correctly, this code is called after the flip is
> > > > already complete?
> > > 
> > > Yes.
> > > 
> > > > The intel_finish_fb() exists to flush pending batches and flips on the
> > > > current fb, prior to changing the scanout registers. (There is a
> > > > hardware dependency such that the GPU may be executing a command that
> > > > depends on the current mode configuration.) In the case of flip
> > > > completion, all of those dependencies have already been retired, so the
> > > > finish should be a no-op. It should not be required, nor should the
> > > > changes to intel_finish_fb (which should also have been renamed to
> > > > indicate that it now takes the fb_obj).
> > > 
> > > Actually I'm not quite sure where this intel_finish_fb() call originated.
> > > Based on the name it didn't make sense to me, but I left it there for
> > > now. Hmm. OK, it came from a patch from Imre while I was on vacation.
> > > I suppose he got it from intel_pipe_set_base(), which does call
> > > intel_finish_fb() on the old fb. Why does it do that?
> > 
> > It all boils down to the modeset being asynchronous to the GPU
> > processing the command stream. So we may be currently processing a batch
> > that is waiting on the pipe to go past a particular scanline, and if the
> > modesetting were to disable that pipe, or to change its size, then we
> > risk the WAIT_FOR_EVENT never completing - leading to hangcheck
> > detecting the frozen display and an angry user.
> 
> intel_pipe_set_base() won't disable the pipe or change the size,
> it'll just flip the primary plane. So that doesn't quite explain
> why the call is there, as opposed to being called just from the
> full modeset path.

Hmm, at the time it was a convenient point. Now, it is clearly called too
late in the modeset sequence. Daniel, fix please. :)
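
To make the hazard above concrete, the flavour of scanline-window wait a
client batch can carry looks roughly like the sketch below. The opcode and
bit values are illustrative stand-ins, not the real definitions from
i915_reg.h or the DDX:

/*
 * Illustrative sketch only: a scanline-window wait of the kind a client
 * batch can contain.  If the pipe it targets is disabled or resized by a
 * modeset while this sits in the ring, the wait may never complete and
 * hangcheck fires.  The opcode/bit values below are stand-ins.
 */
#include <stdint.h>

#define MI_LOAD_SCAN_LINES_INCL_SKETCH  (0x12 << 23) /* program the window */
#define MI_WAIT_FOR_EVENT_SKETCH        (0x03 << 23)
#define MI_WAIT_SCAN_LINE_WINDOW_SKETCH (1 << 1)     /* bit position illustrative */

/* Returns the index of the next free dword in the batch. */
static int emit_scanline_wait(uint32_t *batch, int i, int start, int end)
{
	batch[i++] = MI_LOAD_SCAN_LINES_INCL_SKETCH;  /* pipe select omitted */
	batch[i++] = (start << 16) | end;             /* scanline window */
	batch[i++] = MI_WAIT_FOR_EVENT_SKETCH | MI_WAIT_SCAN_LINE_WINDOW_SKETCH;
	return i;
}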

> Also, wouldn't any batch buffer with WAIT_FOR_EVENT be at risk of
> stalling, not just ones related to the old fb?
> 
> > The other aspect is to synchronize the modeset with any outstanding
> > pageflips.
> 
> Right, that does make sense. But doing it from a function called
> intel_finish_fb() is a bit confusing, as the condition really
> shouldn't depend on any specific fb object. But I suppose this is
> just a result of the "only one outstanding flip" policy.

Again, it was a nice convenient point. Calling it an intel_crtc_wait_*()
would probably help (after fixing the ordering).
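
Roughly what I have in mind is the sketch below; the helper name is
invented, and the fields assume the driver as it stands today, so treat it
as an outline rather than the actual patch:

/*
 * Sketch only: the same waits intel_finish_fb() performs, but keyed on
 * the crtc rather than on a particular fb object.  Helper name is made
 * up; pending_flip_queue, pending_flip and i915_gem_object_finish_gpu()
 * are as in the current driver.  Called under struct_mutex.
 */
static int intel_crtc_wait_for_pending_flips(struct drm_crtc *crtc)
{
	struct drm_device *dev = crtc->dev;
	struct drm_i915_private *dev_priv = dev->dev_private;
	struct drm_i915_gem_object *obj;

	if (crtc->fb == NULL)
		return 0;

	obj = to_intel_framebuffer(crtc->fb)->obj;

	/* Wait for any queued pageflip on the current scanout to land. */
	wait_event(dev_priv->pending_flip_queue,
		   atomic_read(&obj->pending_flip) == 0);

	/* Then flush rendering so no in-flight batch still depends on the
	 * pipe configuration we are about to change. */
	return i915_gem_object_finish_gpu(obj);
}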

> BTW regarding this WAIT_FOR_EVENT thing. I got the impression that
> the scanline window wait doesn't work on recent hardware generations
> any more. Is that true? I was thinking that perhaps I could use it
> along with the load register command to perform the flips through
> the command queue.

That impression is pretty accurate. There is a suggestion that some form
of scanline wait was restored for IVB, but driving it seems pretty hit
and miss. Atomic flipping should all be possible with MI_DISPLAY_FLIP,
so presumably you are mostly thinking about atomic modeset? Is the
presumption that it will be an infrequent request and so better kept as
simple as possible?
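
For reference, queueing a flip through the ring is along these lines,
loosely modelled on the existing gen4 queue_flip path; the exact dword
layout (tiling bits, pipe source, etc.) differs per generation, so this is
purely illustrative:

/*
 * Illustrative only: MI_DISPLAY_FLIP emitted into the ring, loosely
 * following the gen4 queue_flip path.  Generation-specific dwords are
 * omitted.
 */
static int queue_flip_sketch(struct intel_ring_buffer *ring,
			     struct intel_crtc *intel_crtc,
			     struct drm_framebuffer *fb,
			     struct drm_i915_gem_object *obj)
{
	int ret;

	ret = intel_ring_begin(ring, 4);
	if (ret)
		return ret;

	intel_ring_emit(ring, MI_DISPLAY_FLIP |
			MI_DISPLAY_FLIP_PLANE(intel_crtc->plane));
	intel_ring_emit(ring, fb->pitches[0]);   /* new scanout stride */
	intel_ring_emit(ring, obj->gtt_offset);  /* new scanout address */
	intel_ring_emit(ring, MI_NOOP);

	intel_ring_advance(ring);
	return 0;
}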
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

