[PATCH v2 5/5] drm/vkms: Implement CRC debugfs API
Haneen Mohammed
hamohammed.sa at gmail.com
Sat Jul 28 18:40:56 UTC 2018
On Wed, Jul 18, 2018 at 02:44:18PM -0400, Sean Paul wrote:
> On Wed, Jul 18, 2018 at 07:24:34PM +0300, Haneen Mohammed wrote:
> > First, just so you have the complete view, this is the missing part in this patch:
> > ------ vkms_crc.c
> > static uint32_t _vkms_get_crc(struct vkms_crc_data *crc_data)
> > {
> >         struct drm_framebuffer *fb = &crc_data->fb;
> >         struct drm_gem_object *gem_obj = drm_gem_fb_get_obj(fb, 0);
> >         struct vkms_gem_object *vkms_obj = drm_gem_to_vkms_gem(gem_obj);
> >         u32 crc = 0;
> >         int i = 0;
> >         unsigned int x = crc_data->src.x1 >> 16;
> >         unsigned int y = crc_data->src.y1 >> 16;
> >         unsigned int height = drm_rect_height(&crc_data->src) >> 16;
> >         unsigned int width = drm_rect_width(&crc_data->src) >> 16;
> >         unsigned int cpp = fb->format->cpp[0];
> >         unsigned int src_offset;
> >         unsigned int size_byte = width * cpp;
> >         void *vaddr = vkms_obj->vaddr;
> >
> >         if (WARN_ON(!vaddr))
> >                 return crc;
> >
> >         for (i = y; i < y + height; i++) {
> >                 src_offset = fb->offsets[0] + (i * fb->pitches[0]) + (x * cpp);
> >                 crc = crc32_le(crc, vaddr + src_offset, size_byte);
> >         }
> >
> >         return crc;
> > }
> >
> > void vkms_crc_work_handle(struct work_struct *work)
> > {
> >         struct vkms_crtc_state *crtc_state = container_of(work,
> >                                                           struct vkms_crtc_state,
> >                                                           crc_work);
> >         struct drm_crtc *crtc = crtc_state->base.crtc;
> >         struct vkms_output *out = drm_crtc_to_vkms_output(crtc);
> >
> >         if (crtc_state && out->crc_enabled) {
> >                 u32 crc32 = _vkms_get_crc(&crtc_state->data);
> >                 unsigned int n_frame = crtc_state->n_frame;
> >
> >                 drm_crtc_add_crc_entry(crtc, true, n_frame, &crc32);
> >         }
> > }
> >
> > int vkms_set_crc_source(struct drm_crtc *crtc, const char *src_name,
> >                         size_t *values_cnt)
> > {
> >         struct vkms_output *out = drm_crtc_to_vkms_output(crtc);
> >
> >         if (!src_name) {
> >                 out->crc_enabled = false;
> >         } else if (strcmp(src_name, "auto") == 0) {
> >                 out->crc_enabled = true;
> >         } else {
> >                 out->crc_enabled = false;
> >                 return -EINVAL;
> >         }
> >
> >         *values_cnt = 1;
> >
> >         return 0;
> > }
> > --- end vkms_crc.c
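
(Side note for anyone testing this: the hook above is driven through the
generic debugfs CRC interface, i.e. writing "auto" to
dri/<minor>/crtc-<n>/crc/control and then reading one entry per frame from
crtc-<n>/crc/data, which is also the path the igt CRC helpers use.)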
>
> /snip
>
> > > > > > > +static void vkms_crtc_atomic_begin(struct drm_crtc *crtc,
> > > > > > > + struct drm_crtc_state *old_crtc_state)
> > > > > > > +{
> > > > > > > + struct vkms_output *vkms_output = drm_crtc_to_vkms_output(crtc);
> > > > > > > +
> > > > > > > + spin_lock_irq(&vkms_output->lock);
> > > > > >
> > > > > > Hmm, you can't lock across atomic_begin/flush. What is this protecting against?
> > > > > >
> > > > >
> > > > > I did this per Daniel's recommendation:
> > > > > 1- spin_lock_irq -> vkms_crtc_atomic_begin
> > > > > 2- update vkms_crc_data and get ref to fb -> vkms_primary_plane_update
> > > > > 3- spin_unlock_irq after vblank event -> vkms_crtc_atomic_flush
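
To make step 3 concrete, the flush side would look roughly like this (a
minimal sketch, assuming the usual vblank event arming pattern; the unlock
pairs with the spin_lock_irq taken in atomic_begin):

static void vkms_crtc_atomic_flush(struct drm_crtc *crtc,
                                   struct drm_crtc_state *old_crtc_state)
{
        struct vkms_output *vkms_output = drm_crtc_to_vkms_output(crtc);

        if (crtc->state->event) {
                spin_lock(&crtc->dev->event_lock);

                /* arm the event so the vblank handler sends it, or send it
                 * right away if we can't get a vblank reference
                 */
                if (drm_crtc_vblank_get(crtc) != 0)
                        drm_crtc_send_vblank_event(crtc, crtc->state->event);
                else
                        drm_crtc_arm_vblank_event(crtc, crtc->state->event);

                spin_unlock(&crtc->dev->event_lock);
                crtc->state->event = NULL;
        }

        /* step 3: drop the lock only after the event is armed */
        spin_unlock_irq(&vkms_output->lock);
}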
> > > >
> > > > I think I'm interpreting his mail a little differently. The point of the
> > > > spinlock is to protect the pointer to the crc_data in vkms_crtc (right now it's
> > > > not a pointer, but it should be).
> > > >
> > > > So I think you need to do the following:
> > > > - Change crc_data to a pointer
> > > > - In plane_update, allocate new memory to hold crc_data and, under the spinlock,
> > > > update the crc_data to point to the new memory (add a WARN_ON if the pointer
> > > > is NULL)
> > > > - In the hrtimer, while holding the spinlock, take the pointer from crc_data
> > > > into a local variable and clear the crtc->crc_data pointer
> > > > - Pass the pointer (from the local variable) to the crc_worker
> > > >
> >
> > The problem with this approach is managing the pointer allocation and
> > deallocation. If I grab it in the hrtimer, then where/when should I free it?
> > I can't free it after the crc worker is done with it, because I noticed
> > that the crc computation is expected to run multiple times after a
> > single plane_atomic_update.
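
Just so we're talking about the same thing, here is how I read the
suggested handoff (purely a sketch; out->crc_data, out->crc_workq and the
work field are hypothetical names):

/* plane_update side: publish the new frame's data under the lock */
static void vkms_publish_crc_data(struct vkms_output *out,
                                  struct vkms_crc_data *new_data)
{
        struct vkms_crc_data *old;

        spin_lock_irq(&out->lock);
        old = out->crc_data;
        out->crc_data = new_data;
        spin_unlock_irq(&out->lock);

        kfree(old);     /* previous data the hrtimer never consumed */
}

/* hrtimer side: take ownership and hand the data to the worker */
        spin_lock(&out->lock);
        data = out->crc_data;
        out->crc_data = NULL;
        spin_unlock(&out->lock);

        if (data)
                queue_work(out->crc_workq, &data->work);
        /* ...but if vblanks keep coming without a new plane_update, the
         * worker still needs this data, so it can't simply kfree() it
         * when it's done - which is exactly the problem above
         */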
> >
> > > > I don't think you need to hold the spinlock across the atomic hooks, just grab
> > > > it in plane_update.
> > >
> >
> > I asked Daniel on IRC about releasing the vkms_output->lock at the beginning
> > of atomic_flush to avoid touching the event_lock, and he mentioned that
> > "output->lock must wrap the event handling code completely since we need to
> > make sure that the flip event and the crc all match up".
> >
> > So maybe I can grab the spinlock in plane_atomic_update and release it at
> > the end of crtc_atomic_flush? Is plane_atomic_update always called
> > before crtc_atomic_flush?
> >
> > Otherwise, if I grab the spinlock in atomic_begin, I won't be able to
> > use a pointer for crc_data and allocate it in plane_atomic_update, since
> > I'll already be holding the spinlock.
> >
> > I based this patch on Daniel's second round of feedback on my email
> > "[PATCH] drm/vkms: Update CRC support after lock review", which I didn't
> > cc to the dri-devel mailing list. Quoting Daniel:
> >
> > "Overall flow:
> > 1. in crtc_funcs->atomic_begin grab the spinlock, to make sure
> > everything is visible completely or not at all to the vblank handler.
> > Also update a pointer to the vkms_crtc_state in vkms_output->crc_data
> > (or something like that).
> > 2. copy the plane data (like you do here) in plane_funcs->atomic_commit.
> > 3. drop the spinlock in crtc_funcs->atomic_flush after the vblank
> > event is armed.
> >
> > In the vblank hrtimer handler:
> > 1. grab the spinlock
> > 2. drm_crtc_handle_vblank() - we must do this while holding the
> > spinlock, otherwise the event and crc might get out of sync. Also grab
> > the current vblank number here and store it somewhere in
> > vkms_crtc_state so that the worker can use the right one.
> > 3. launch the worker. Note that the struct_work needs to be within the
> > vkms_crtc_state structure, otherwise the crc computation worker
> > doesn't really know which data it should compute the crc for - just
> > having one copy in data[1] doesn't really solve this race.
> > 4. drop spinlock
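
Putting those four steps into code, my reading is roughly the following (a
sketch; vblank_hrtimer, period_ns, crc_enabled and n_frame follow what I
have locally, and to_vkms_crtc_state() is the usual subclassing cast):

static enum hrtimer_restart vkms_vblank_simulate(struct hrtimer *timer)
{
        struct vkms_output *output = container_of(timer, struct vkms_output,
                                                  vblank_hrtimer);
        struct drm_crtc *crtc = &output->crtc;
        struct vkms_crtc_state *state;

        spin_lock(&output->lock);                       /* step 1 */

        drm_crtc_handle_vblank(crtc);                   /* step 2 */

        state = to_vkms_crtc_state(crtc->state);
        if (state && output->crc_enabled) {
                /* remember which vblank the worker should report against */
                state->n_frame = drm_crtc_accurate_vblank_count(crtc);
                schedule_work(&state->crc_work);        /* step 3 */
        }

        spin_unlock(&output->lock);                     /* step 4 */

        hrtimer_forward_now(timer, output->period_ns);
        return HRTIMER_RESTART;
}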
>
> Ok, I think I understand what we're trying to do. It seems like the spinlock is
> simulating what we would do in hw by telling the display controller not to latch
> in new addresses until we're ready. By blocking the vblank handler (and, by
> extension, the event), we're delaying sending the event until the crc data is
> complete. In hw, this would be a dropped frame.
>
Hi Sean, now that I've moved the crc_data to plane_state, I think I can
move the spinlock to plane_update.
> I'd probably prefer doing something more similar to hw. Something like simulating
> a latch to tell the vblank handler to skip sending the event until the next
> timer interrupt, instead of blocking it. I think that would avoid having to hold
I didn't completely understand this method, but the best I can think of
is that if I want to try this approach, I should move the spinlock to
plane_update, which means it can be acquired again by vblank_handle()
before we send the vblank event in crtc_atomic_flush(), and thus we need
to delay the call to vblank_handle() until we make sure the event was
sent from atomic_flush?
> the spinlock across begin/<everything>/flush. However Daniel is smarter than I am,
> so I'll go with his suggestion since I'm sure there's a very good reason this is a
> very bad idea :-)
>
> In that case, what you have makes sense to me, so if we could flush the work
> item instead of the whole queue, I think we're there.
>
> Sean
>
>
> >
> > So one big problem we have here still is what happens if the crc
> > worker falls behind with computing crcs. There's 2 cases; let's first
> > look at how to fix the crc worker falling behind atomic updates. The
> > problem there is that the next atomic update might free the
> > vkms_crtc_state structure before the worker is done with it. That's
> > fairly easy to fix:
> > - add a flush_work call to your vkms_crtc_state_destroy function. This
> > means you need to initialize the work already in
> > vkms_crtc_state_duplicate (you'll create both of these when doing the
> > vkms_crtc_state subclassing).
> > - 2nd we need to make sure crc workers are run in order; the easiest fix
> > for that is to create your own private crc work queue (1 per crtc we
> > have) with the WQ_ORDERED flag.
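
In code, I understand those two fixes roughly as follows (a sketch built on
the standard __drm_atomic_helper_crtc_* helpers; the ordered queue itself
would come from alloc_ordered_workqueue(), one per crtc):

static struct drm_crtc_state *
vkms_atomic_crtc_duplicate_state(struct drm_crtc *crtc)
{
        struct vkms_crtc_state *vkms_state;

        if (WARN_ON(!crtc->state))
                return NULL;

        vkms_state = kzalloc(sizeof(*vkms_state), GFP_KERNEL);
        if (!vkms_state)
                return NULL;

        __drm_atomic_helper_crtc_duplicate_state(crtc, &vkms_state->base);

        /* the work must exist before anything can queue or flush it */
        INIT_WORK(&vkms_state->crc_work, vkms_crc_work_handle);

        return &vkms_state->base;
}

static void vkms_atomic_crtc_destroy_state(struct drm_crtc *crtc,
                                           struct drm_crtc_state *state)
{
        struct vkms_crtc_state *vkms_state = to_vkms_crtc_state(state);

        __drm_atomic_helper_crtc_destroy_state(state);

        /* don't free the state from under a still-running crc worker */
        flush_work(&vkms_state->crc_work);
        kfree(vkms_state);
}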
> >
> > The 2nd issue is the vblanks getting ahead of the crc worker. This happens
> > when e.g. there are no atomic updates, but we still want to generate 1
> > vblank per frame. This is probably better done in a follow-up patch,
> > but the rough solution is:
> > - in the vblank hrtimer check whether the work is still scheduled and
> > not yet complete (while holding the spinlock). In that case, simply
> > tell the worker to fill out the same crc for multiple vblanks, by
> > using a vblank_seqno_start/end.
> > - if the worker is not busy, then we tell it to compute a new crc (in
> > case something in the buffers has changed - in the future we might
> > change this behaviour)."
> >
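
For the 2nd issue, I picture the worker then reporting the same crc for the
whole range of vblanks it missed, along these lines (a sketch;
frame_start/frame_end would be hypothetical fields in vkms_crtc_state):

        /* in vkms_crc_work_handle(), after computing the crc */
        u32 crc32 = _vkms_get_crc(&crtc_state->data);
        unsigned int frame;

        for (frame = crtc_state->frame_start;
             frame <= crtc_state->frame_end; frame++)
                drm_crtc_add_crc_entry(crtc, true, frame, &crc32);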
> > > So the more I think about this, I don't think it's quite right. Perhaps I'm
> > > missing an email from Daniel, or (more likely) misunderstanding what you're
> > > trying to do. However, the email I read suggested storing the crc_data
> > > to be consumed by the crc worker in vkms_crtc. However in this patch it's being
> > > stored (along with the work item) in crtc state. Further, the state cannot be
> > > destroyed without completing the crc_work, which kind of defeats the purpose of
> > > the workqueue in the first place.
> > >
> > > Storing a copy of the crc_data in crtc allows you to free the crtc_state, so you
> > > can start working on the next frame while the crc worker is scheduled and picks
> > > up the crc_data from crtc. Storing the crc_data in crtc_state and waiting for
> > > the work to finish is easier since you don't need to worry about copying and
> > > locking (so much). However, it seems like we're doing both in this patch.
> > >
> > > Again, I'm likely just not understanding what the goal of this is, so any
> > > clarification would be greatly appreciated :)
> > >
> > > Sean
> > >
> > > >
> > > > Sean
> > > >
> > > > >
> > > > > > > +}
> > > > > > > +
>
> /snip
>
> --
> Sean Paul, Software Engineer, Google / Chromium OS