[Libva] gen7 h264 encode bitrate behaviour

Chris Healy cphealy at gmail.com
Mon Aug 18 02:55:23 PDT 2014


Hi Zhao,

I just tested the new values you gave me.  This is a night-and-day
improvement in bitrate consistency.  Based on the small amount of testing I
have done, this seems to completely address the problem!

I'd like to understand why moving from 15 and 900 to 1 and 60 makes the
bitrate so consistent.  Both pairs reduce to the same ratio (15/900 =
1/60), so given the following comment:  /* Tc = num_units_in_tick /
time_scale */  I have the same Tc in both cases.
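
To spell out the arithmetic, here is a quick check of what the two
settings imply.  The fps formula is my reading of the H.264 VUI timing
semantics, where a progressive frame spans two ticks, so treat that
relation as an assumption on my part:

#include <stdio.h>

/* Quick check of what the two timing settings imply.  The fps formula
 * assumes the H.264 convention that a progressive frame spans two
 * ticks: fps = time_scale / (2 * num_units_in_tick). */
static void show(unsigned num_units_in_tick, unsigned time_scale,
                 unsigned bits_per_second)
{
    double tc  = (double)num_units_in_tick / time_scale; /* tick length */
    double fps = time_scale / (2.0 * num_units_in_tick);
    printf("Tc = %.6f s, fps = %.1f, bits/frame = %.0f\n",
           tc, fps, bits_per_second / fps);
}

int main(void)
{
    show(15, 900, 3700000); /* the old values from my trace */
    show(1,  60,  3700000); /* the values Zhao suggested    */
    return 0;
}

Both calls print identical numbers (Tc = 0.016667 s, fps = 30.0,
bits/frame = 123333), which is exactly why the change in consistency
surprises me.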

How is this changing things for the better, and what is the tradeoff in
using these values?  (There must be some downside, otherwise these values
would have always been 1 and 2 * fps.)
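
For anyone following along, this is roughly how my sequence parameter
setup looks with the new values.  It is a simplified sketch: the field
names come from va/va_enc_h264.h, but the buffer creation and the rest
of the pipeline are omitted, and "fps" is assumed to be the true source
frame rate (30 in my test):

#include <va/va.h>
#include <va/va_enc_h264.h>

/* Sketch: timing/CBR-related fields only; everything else (profile,
 * resolution, buffer creation) is omitted. */
static void init_seq_timing(VAEncSequenceParameterBufferH264 *seq,
                            unsigned int fps, unsigned int target_bps)
{
    seq->intra_period       = fps;        /* one I frame per second */
    seq->intra_idr_period   = fps;
    seq->ip_period          = 1;          /* no B frames            */
    seq->bits_per_second    = target_bps; /* 3700000 in my test     */
    seq->max_num_ref_frames = 2;
    seq->num_units_in_tick  = 1;          /* was 15                 */
    seq->time_scale         = 2 * fps;    /* was 900                */
}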

Regards,

Chris

(PS - Thank you!)
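
(PS2 - Regarding the QP logging I asked about earlier: below is the kind
of env-variable-gated hook I had in mind.  It is only a sketch; the
function name, the LIBVA_LOG_QP variable, and the call site are all my
own invention, since I have not yet found where in libva-intel-driver
the per-frame QP is actually chosen.)

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical hook: call wherever the driver settles on a frame's QP.
 * Gated on an env variable so it costs nothing when disabled. */
static void log_frame_qp(unsigned frame_num, unsigned qp)
{
    static int enabled = -1;

    if (enabled < 0)
        enabled = (getenv("LIBVA_LOG_QP") != NULL);
    if (enabled)
        fprintf(stderr, "frame %u: qp = %u\n", frame_num, qp);
}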


On Mon, Aug 18, 2014 at 1:36 AM, Chris Healy <cphealy at gmail.com> wrote:

> Hi Zhao,
>
> I've done testing with both 30 and 24 fps and received similar results.
>
> I will test with the values you mentioned.  Can you explain how
> num_units_in_tick and time_scale work?  (What is a tick?)
>
> Also, is there a good place in the Intel driver to dump the QP value used
> for each frame?  I'd like to add some QP logging when an env variable is
> set.
>
> Regards,
>
> Chris
>
>
> On Mon, Aug 18, 2014 at 1:30 AM, Zhao, Yakui <yakui.zhao at intel.com> wrote:
>
>> On Mon, 2014-08-18 at 01:13 -0600, Chris Healy wrote:
>> > Hi Zhao,
>> >
>> >
>> > I enabled LIBVA_TRACE recently and grabbed a bunch of output.  Here's
>> > a link to a good-sized fragment of the output:
>> >
>> > http://pastebin.com/KJYzGQAA
>> >
>> >
>> > Here are answers to the specific questions you asked (from the
>> > LIBVA_TRACE output):
>> >
>> > [57113.237423]  intra_period = 30
>> > [57113.237424]  intra_idr_period = 30
>> > [57113.237425]  ip_period = 1
>> > [57113.237427]  bits_per_second = 3700000
>> > [57113.237428]  max_num_ref_frames = 2
>> > [57113.237469]  num_units_in_tick = 15
>> > [57113.237470]  time_scale = 900
>>
>> If the expected fps is 24, the num_units_in_tick/time_scale setting is
>> incorrect.  It would be better to use the following settings in your
>> tool:
>>    num_units_in_tick = 1
>>    time_scale = 2 * fps
>>
>>
>>
>> >
>> > I see avcenc.c, but it's unclear to me whether I am dealing with an
>> > issue in the encoder application or something lower down in libva,
>> > libva-intel-driver, or the HW itself.
>> >
>> >
>> > Am I correct in believing (simplified) that the HW is just given a raw
>> > video frame and a QP, that the HW returns a chunk of encoded data of
>> > "some size", and that it is the responsibility of the SW above the HW
>> > to dynamically adjust the QP to hit the target bitrate, per whatever
>> > the rate control algorithm deems correct?
>> >
>>
>> When CBR mode is used, the driver adjusts the QP dynamically so that the
>> encoded bitrate meets the target bitrate, based on the input encoding
>> parameters (for example: intra_period, ip_period, time_scale,
>> num_units_in_tick, and so on).
>>
>>
>> > If this is the case, where is the code that is dynamically adjusting
>> > the QP?  Also, in the HW, where are the registers and bits that
>> > control the QP?  (I'm looking at the "Intel® OpenSource HD Graphics
>> > Programmer’s Reference Manual (PRM) Volume 2 Part 3: Multi-Format
>> > Transcoder – MFX (Ivy Bridge)", so a reference to the registers might
>> > help me understand better.)
>> >
>> >
>> > Regards,
>> >
>> > Chris
>> >
>> >
>> >
>> > On Sun, Aug 17, 2014 at 11:58 PM, Zhao, Yakui <yakui.zhao at intel.com>
>> > wrote:
>> >         On Sun, 2014-08-17 at 19:27 -0600, Chris Healy wrote:
>> >         > I've done some further analysis with our real stream and we
>> >         > experience the same inconsistent bitrate behaviour as with
>> >         > the test app.  It seems to me that the way the bitrate
>> >         > control works doesn't do a good job of handling certain
>> >         > input video sequences, and the encoded bitrate subsequently
>> >         > spikes as a result.
>> >         >
>> >         > To help understand what I'm dealing with, I've posted a
>> >         > video on youtube showing the video being encoded:
>> >         >
>> >         > www.youtube.com/watch?v=LpYS_9IB0jU
>> >         >
>> >         > I've also posted a bitrate graph online that shows what
>> >         > happens when encoding the video referenced above:
>> >         >
>> >         > http://snag.gy/imvBe.jpg
>> >         >
>> >         > In the above graph, I set the targeted encode bitrate to
>> >         > 3.7Mbps, CBR, and High Profile H.264.  Most of the time the
>> >         > bitrate hovers around 3.7Mbps, but sometimes the bitrate
>> >         > drops very low and then spikes up very high.  I also notice
>> >         > that when the bitrate drops down low and then spikes up very
>> >         > high, the "highness" seems to be a function of how much and
>> >         > how long the bitrate was under 3.7Mbps.  It seems that the
>> >         > rate control logic is taking a 20 second running bitrate
>> >         > average and trying its best to keep the aggregate bitrate at
>> >         > 3.7Mbps, so if the scene complexity drops, the rate control
>> >         > logic reacts by cranking the QP to a very low value (high
>> >         > quality) to bring the bitrate back up.  This behaviour,
>> >         > combined with the fact that the video goes to a simple fixed
>> >         > image and then crossfades to something complex in less than
>> >         > 20 seconds while the QP is at a very low value, results in
>> >         > the massive spike in bitrate.  (This is my naive
>> >         > understanding of what's going on.)
>> >         >
>> >         > The code I'm using to encode and stream is based in large
>> >         > part on libva/test/encode/h264encode.c.  I'm not sure if the
>> >         > logic for doing rate control is in libva, in
>> >         > libva-intel-driver, or supposed to be driven by the code
>> >         > that uses libva.  Am I dealing with an issue with the
>> >         > encoder itself, or is it more likely my code not correctly
>> >         > driving the encoder?
>> >
>> >         Hi, Chris
>> >
>> >             Thank you for reporting the issue.
>> >             Will you please check the encoding parameters required by
>> >         CBR?  (For example: intra_period/ip_period/
>> >         num_units_in_tick/time_scale/bits_per_second in
>> >         VAEncSequenceParameterBufferH264.)
>> >
>> >             Will you please take a look at the example of
>> >         libva/test/encode/avcenc.c and see whether it is helpful?
>> >         (There exist two h264 encoding examples for historical
>> >         reasons.  The avcenc case is more consistent with the
>> >         libva-intel-driver.)
>> >
>> >         Thanks.
>> >             Yakui
>> >
>> >         > What can be changed to keep the actual bitrate from being so
>> >         > bursty, given the video behaviour?
>> >         >
>> >         > Regards,
>> >         >
>> >         > Chris
>> >         >
>> >         > On Fri, Aug 15, 2014 at 6:03 PM, Chris Healy
>> >         > <cphealy at gmail.com> wrote:
>> >         >         I've been encoding h264 content using HD 4000 HW and
>> >         >         am not able to make heads or tails of the way the
>> >         >         encoder is behaving, from the standpoint of the size
>> >         >         of the data coming out of the encoder.
>> >         >
>> >         >         I have a 24 fps 720p video that shows the same image
>> >         >         for ~8 seconds, then a 1.5 second fade to the next
>> >         >         image, followed by another ~8 seconds on that image.
>> >         >         This goes on and on indefinitely.  I would have
>> >         >         expected the bitrate to be pretty low, then spike for
>> >         >         1.5 seconds, then go back to a similarly low value.
>> >         >
>> >         >         When I look at the data coming out of the encoder
>> >         >         with a 4Mb/s bitrate set and CBR, I'm seeing almost
>> >         >         the inverse: most of the time the bitrate is pretty
>> >         >         close to 4Mb/s, then it spikes above 4Mb/s
>> >         >         (presumably for the fade), then it drops down to
>> >         >         ~2Mbps for a second or so before going back up to
>> >         >         ~4Mb/s.
>> >         >
>> >         >         The strangest part is that for the first ~30 seconds
>> >         >         of the encode, across the board, the bitrate is ~2x
>> >         >         the bitrate from second 31 to the end of the encode.
>> >         >         (So I'm hitting a typical rate of 7Mbps and peaking
>> >         >         out at 13Mbps.)
>> >         >
>> >         >         Is this behaviour expected with gen7 HW?  Is there
>> >         >         something I can do in the initial setup that will cap
>> >         >         the MAX bitrate regardless of the impact on encode
>> >         >         quality?
>> >         >
>> >         >         Regards,
>> >         >
>> >         >         Chris
>>
>