[Intel-gfx] [PATCH 1/4] introduce intel_ring_buffer structure

Daniel Vetter daniel at ffwll.ch
Fri May 14 11:51:14 CEST 2010


Hi,

On Fri, May 14, 2010 at 09:39:22AM +0800, Zou, Nanhai wrote:
> Thanks for reviewing the patch.
> 
> To clarify, this patch is used for H.264/VC1 decoding.
> An abstract ring buffer is also needed for our later-generation GPUs,
> because there are more types of ring buffer to come.
> 
> H.264/VC1 HW decoding is a bigger requirement than mpeg2.
> XvMC VLD can only support mpeg2. It renders in the server context
> and lacks post-processing features.
> We are not going to support it going forward; our later media work
> will be based on vaapi.

Ok, let me restate my question: Is there a strict hw requirement to use a
second ring buffer? If not, what does this gain us?

I'm asking because:
- the XvMC implementation can use the media engine without a second ring
  buffer. Yep, I know that XvMC is only mpeg2, but that doesn't change the
  fact that the current kernel interface seems to be enough. And no,
  current XvMC doesn't issue batchbuffers in the Xorg context but in the
  client's process (like direct gl rendering).
- I've strolled through the ironlake docs and I didn't notice anything
  saying that BSD for mpeg4/vc1 needs a special ring buffer. Please
  correct me if I'm wrong.
- Even the old i815 lying around here has multiple ring buffers with
  arbitration between them. The i915 drm module never used them; the
  added complexity simply doesn't pay off.

So please explain the technical reasons why we need this rather complex
beast of code in the kernel.

> Synchronization between the BSD and render rings will be done by the VAAPI client.

As I've said, I wasn't going to like this answer, and I still don't. gem
is supposed to hide the asynchronous execution nature of GPUs. It would
be dead easy to add the required i915_gem_object_wait_for_rendering call
when switching ring buffers. You didn't do it, and you don't deliver any
explanation for not doing so. In other words, your code looks
deliberately broken.
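
Just to illustrate what I have in mind (a very rough sketch only, not
actual driver code; the last_ring bookkeeping and the exact name and
signature of the wait helper are made up for illustration):

	/* Called for every object referenced by an execbuffer before it
	 * is submitted to the target ring: if the object was last used
	 * on a different ring, block until that rendering has retired,
	 * so userspace never has to order the rings itself. */
	static int
	i915_gem_object_sync_to_ring(struct drm_i915_gem_object *obj,
				     struct intel_ring_buffer *to)
	{
		if (obj->last_ring == NULL || obj->last_ring == to)
			return 0;	/* same ring, hw keeps ordering */

		return i915_gem_object_wait_for_rendering(obj);
	}

That's a handful of lines in the execbuffer path plus tracking which
ring last touched an object, nothing userspace should have to care
about.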

> We have run tons of video validation on this patch.
> 
> Our QA will do 1-2 days of regression testing on different chips each
> time I rebase and send out the patch.

Frankly, I don't care what your QA says about your patches. I measure code
quality on the following levels:
1) Is it readable by mere humans? This also includes whether someone could
   make sense of it a few months down the road with git blame & friends.
   IMHO your patches fail at that for the reasons outlined above.
2) Is it bisectable and nicely split up for regression testing? Your
   patches fail this, too.
3) Does it actually run on real hw?
Without 1) and 2) fulfilled, 3) (i.e. your QA work) doesn't matter,
because these changes are not really supportable going forward.

> Let's get the simple code working first, then we can optimize it.

You've probably noticed by now that I'm slightly pissed, so I apologize
upfront for the harsh tone. Let me state my opinion in plain English:

Your patch series as-is is crap.

And I think the issues I've raised are not just "optimizations" but
fundamental problems. So either you ignore me, and I'll stop reviewing
your patches, or you start fixing your stuff and/or deliver decent
rebuttals explaining why I'm wrong. And if you resend patches, please
explain in the changelogs what you've changed since the last submission
(and why) and what you didn't change (and why).

> Thanks
> Zou Nan hai

Yours, Daniel
-- 
Daniel Vetter
Mail: daniel at ffwll.ch
Mobile: +41 (0)79 365 57 48