Question regarding the way RTP payloading is done in rtph264pay and rtpmp4vpay plugins

Olivier Crête olivier.crete at collabora.com
Fri Dec 7 07:53:02 PST 2012


Hi, 

The only reason the H.264 payloader doesn't merge NALs into a single packet is that no one has done the work to do it intelligently. 

Olivier

Paul d'AUBIGNY <visechelle at gmail.com> wrote:

>Hi guys,
>
>When comparing the rtph264pay and rtpmp4vpay plugins, I noticed a
>difference in the way RTP packets are filled.
>In the case of H.264, one RTP packet seems to contain at most one entire
>NAL unit (which can be split if the MTU is too small). rtpmp4vpay, on the
>other hand, fills the RTP packet with possibly several VOP frames, as long
>as the MTU is not exceeded and no non-VOP frame is received.
>Therefore, with rtph264pay one RTP packet = one frame maximum, while with
>rtpmp4vpay one RTP packet = up to a GOP's worth of frames.
>
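>For instance (just a rough check on my side, swapping in videotestsrc so
>it runs anywhere; the encoder elements are assumed to be installed), the
>size and timing of the payloaded buffers can be watched with a verbose
>fakesink. With rtpmp4vpay you should see fewer, larger buffers than with
>rtph264pay:
>
>gst-launch-1.0 -v videotestsrc num-buffers=60 ! video/x-raw, framerate=30/1 ! x264enc ! rtph264pay ! fakesink silent=false
>
>gst-launch-1.0 -v videotestsrc num-buffers=60 ! video/x-raw, framerate=30/1 ! avenc_mpeg4 ! rtpmp4vpay ! fakesink silent=false
>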
>Why this difference in behavior? I'm wondering because, in the case of
>MPEG-4, gathering several frames into one RTP packet can lead to delayed
>and/or laggy output, which of course matters for live streaming. We can
>still play with the payloader's max-ptime property to limit the maximum
>duration of the packet data, and thereby limit the number of frames per
>RTP packet. But by default the behaviors are different.
>
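>As a rough sketch (the value here is just my guess at one frame's duration
>at 30 fps; max-ptime is in nanoseconds), the sender could limit each packet
>to roughly one frame like this:
>
>gst-launch-1.0 videotestsrc ! avenc_mpeg4 ! rtpmp4vpay config-interval=1 max-ptime=33333333 ! udpsink host=127.0.0.1 port=2222
>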
>To illustrate what I'm saying, here are two examples that show the delayed
>and laggy output in the case of MPEG-4 (in the case of H.264, both output
>streams might be delayed by the encoding time, but they should stay
>synchronized):
>
>
>   - Case of MPEG4:
>
>
>gst-launch-1.0 udpsrc port=2222 ! application/x-rtp, media=video,
>payload=96, clock-rate=90000, encoding-name=MP4V-ES ! decodebin !
>xvimagesink sync=false
>
>gst-launch-1.0 v4l2src ! video/x-raw, format=YUY2, width=640, height=480,
>interlace-mode=progressive, pixel-aspect-ratio=1/1, framerate=30/1 !
>videoconvert ! videorate ! avenc_mpeg4 ! tee name=t  t. ! queue !
>rtpmp4vpay config-interval=1 mtu=65507 ! udpsink host=127.0.0.1 port=2222
>t. ! queue ! decodebin ! autovideosink
>
>
>
>   - Case of H264:
>
>
>gst-launch-1.0 udpsrc port=2222 ! application/x-rtp, media=video,
>payload=96, clock-rate=90000, encoding-name=H264 ! decodebin !
>xvimagesink sync=false
>
>gst-launch-1.0 v4l2src ! video/x-raw, format=YUY2, width=640, height=480,
>interlace-mode=progressive, pixel-aspect-ratio=1/1, framerate=30/1 !
>videoconvert ! videorate ! x264enc ! tee name=t  t. ! queue !
>rtph264pay config-interval=1 mtu=65507 ! udpsink host=127.0.0.1 port=2222
>t. ! queue ! decodebin ! autovideosink
>
>
>*Note*: For each case, run the two pipelines at the same time. I used a
>high MTU to demonstrate the issue, but it can also occur with the default
>MTU if the frame size is small enough.
>
>
>Cheers,
>
>
>Paul HENRYS
>
>

-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.

