Pad negotiation at udpsrc element

debruyn debruynels1 at gmail.com
Thu Nov 17 05:51:07 UTC 2016


So after some probing I found the following regarding the capabilities that are
being set.

/In the following pipeline on the board:/
*rtspsrc | udpsink*
I connect the rtspsrc to the udpsink by listening for the pad-added signal.
When I receive that signal, I print out the current caps on the newly added
rtspsrc src pad. I got number 1 and then number 2:

1) Caps set to : application/x-rtp, *media=(string)video*, payload=(int)96,
clock-rate=(int)90000, encoding-name=(string)H264,
profile-level-id=(string)420029, packetization-mode=(string)1,
sprop-parameter-sets=(string)"Z00AH5plAoAt/4C1AQEBQAAA+gAAF1w6GAG3gAG3eu8uNDADbwADbvXeXCg\=\,aO48gA\=\=",
a-recvonly=(string)"", x-dimensions=(string)"1280\,720",
*ssrc=(uint)1342655012*, clock-base=(uint)1601300418,
seqnum-base=(uint)50561, npt-start=(guint64)0, play-speed=(double)1,
play-scale=(double)1 

2) Caps set to : application/x-rtp, *media=(string)audio*, payload=(int)14,
clock-rate=(int)90000, encoding-name=(string)MPA, a-recvonly=(string)"",
a-Media_header=(string)"MEDIAINFO\=494D4B48010100000400010000200110803E000000FA000000000000000000000000000000000000\;",
a-appversion=(string)1.0, *ssrc=(uint)1146891223*,
clock-base=(uint)1601303400, seqnum-base=(uint)55485, npt-start=(guint64)0,
play-speed=(double)1, play-scale=(double)1

This told me that it is indeed two different sources that the camera sets
up, because I got the pad-added signal twice.
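
For context, here is a minimal sketch of the kind of pad-added handler I'm
describing (element and variable names are illustrative, not my exact code):

    #include <gst/gst.h>

    /* Called once per stream that rtspsrc exposes, so twice here:
       once for video and once for audio. */
    static void
    on_pad_added (GstElement *rtspsrc, GstPad *new_pad, gpointer user_data)
    {
      GstElement *udpsink = GST_ELEMENT (user_data);
      GstPad *sink_pad = gst_element_get_static_pad (udpsink, "sink");
      GstCaps *caps = gst_pad_get_current_caps (new_pad);

      if (caps != NULL) {
        gchar *caps_str = gst_caps_to_string (caps);
        g_print ("Caps set to : %s\n", caps_str);
        g_free (caps_str);
        gst_caps_unref (caps);
      }

      /* udpsink has a single static sink pad, so only the first pad
         that shows up can actually be linked; the second one has
         nowhere to go. */
      if (!gst_pad_is_linked (sink_pad))
        gst_pad_link (new_pad, sink_pad);

      gst_object_unref (sink_pad);
    }

The handler is hooked up with g_signal_connect (rtspsrc, "pad-added",
G_CALLBACK (on_pad_added), udpsink).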
/In the following pipeline on the server:/
*udpsrc | tee | queue | rtph264depay | avdec_h264 | theoraenc | oggmux |
shout2send*

I monitored the current caps on the udpsrc as the state changed to PLAYING.
The only caps I got from the src pad were:
*Caps set to : application/x-rtp, media=(string)video,
clock-rate=(int)90000, encoding-name=(string)H264*

This led me to think that I never even pushed the audio to the server.
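
For reference, this is roughly how I read the caps off the udpsrc src pad
once the pipeline reached PLAYING (again only a sketch; "udpsrc0" is the
auto-generated element name and may differ):

    /* Print whatever caps udpsrc is currently operating with.
       Call this after the pipeline has reached GST_STATE_PLAYING. */
    static void
    print_udpsrc_caps (GstBin *pipeline)
    {
      GstElement *udpsrc = gst_bin_get_by_name (pipeline, "udpsrc0");
      GstPad *src_pad;
      GstCaps *caps;

      g_return_if_fail (udpsrc != NULL);

      src_pad = gst_element_get_static_pad (udpsrc, "src");
      caps = gst_pad_get_current_caps (src_pad);

      if (caps != NULL) {
        gchar *caps_str = gst_caps_to_string (caps);
        g_print ("Caps set to : %s\n", caps_str);
        g_free (caps_str);
        gst_caps_unref (caps);
      }

      gst_object_unref (src_pad);
      gst_object_unref (udpsrc);
    }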

/So I then changed the pipeline on the board to this:/
*rtspsrc | queue | udpsink*

I monitored the src pad of the queue, and the only caps I got were the video
ones. I did not receive any audio caps.

Thus I want to ask whether the following conclusion is sound: that I will
need to parse the video and audio in two different pipelines and send them in
two different streams to the server (sketched below)? Or is there a way to
combine those two SSRCs on the board into one stream and send them together?
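
To make the first option concrete, this is the split I have in mind on the
board side, sketched with gst_parse_launch (the host, port numbers and RTSP
URL are placeholders):

    #include <gst/gst.h>

    int
    main (int argc, char *argv[])
    {
      GError *err = NULL;
      GstElement *pipeline;

      gst_init (&argc, &argv);

      /* One udpsink per stream: the caps filters route the video pad
         to port 5000 and the audio pad to port 5002. */
      pipeline = gst_parse_launch (
          "rtspsrc location=rtsp://CAMERA/stream name=src "
          "src. ! application/x-rtp,media=video ! queue ! "
          "udpsink host=SERVER port=5000 "
          "src. ! application/x-rtp,media=audio ! queue ! "
          "udpsink host=SERVER port=5002",
          &err);
      if (pipeline == NULL) {
        g_printerr ("Parse error: %s\n", err->message);
        return 1;
      }

      gst_element_set_state (pipeline, GST_STATE_PLAYING);
      g_main_loop_run (g_main_loop_new (NULL, FALSE));
      return 0;
    }

The server would then mirror this with two udpsrc elements, one per port,
each with its caps property set to the matching stream.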

Thanks for all the help thus far, Sebastian. I really do appreciate it, and
you have been an immense help.
Regards
DB



