Hi,

I have a pipeline that generates two test patterns with videotestsrc and sends them reciprocally, through two webrtcbin elements, to two osxvideosink elements:

webrtcbin name=webrtcbin1
  videotestsrc name=source1 ! video/x-raw,width=1280,height=720 ! queue !
  x264enc name=video ! rtph264pay ! queue !
  application/x-rtp,media=video,payload=96,encoding-name=H264 ! webrtcbin1.
webrtcbin name=webrtcbin2
  videotestsrc name=source2 pattern=ball ! video/x-raw,width=1280,height=720 ! queue !
  x264enc ! rtph264pay ! queue !
  application/x-rtp,media=video,payload=96,encoding-name=H264 ! webrtcbin2.

I connected the "pad-added" signal on both webrtcbin elements and, in the callback, I link the new pad to the corresponding depayloader and decoder elements; osxvideosink then renders the video. The callback contains this code:

/* build a depay/decode/display branch, add it to the pipeline,
 * and link the newly exposed webrtcbin pad to it */
out = gst_parse_bin_from_description(
    "rtph264depay ! avdec_h264 ! "
    "videoconvert ! queue ! osxvideosink",
    TRUE, NULL );
gst_bin_add( GST_BIN( pipe ), out );
gst_element_sync_state_with_parent( out );
/* first sink pad of the bin: the ghost pad created by gst_parse_bin_from_description() */
sink = (GstPad*)out->sinkpads->data;
gst_pad_link( new_pad, sink );

This works well as long as both videotestsrc elements use the same capabilities, e.g. video/x-raw,width=1280,height=720 as above. However, if I change, for instance, the video resolution and set one of the caps to something like video/x-raw,width=640,height=360, I get an "Internal data stream error" on the pipeline created with gst_parse_launch. Why does this error occur? Is it related to the WebRTC media negotiation, or perhaps to osxvideosink? Is there a way to send the two video streams with different capabilities?

Thanks,
Serhan
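
P.S. In case it is relevant, this is roughly how I attach the callback to both webrtcbin elements; the function and variable names here are simplified from my actual code, and the callback body is the snippet shown above:

#include <gst/gst.h>

/* illustrative callback name; the body is the gst_parse_bin_from_description()
 * snippet from the message above */
static void
on_pad_added( GstElement *webrtc, GstPad *new_pad, gpointer user_data )
{
  GstElement *pipe = GST_ELEMENT( user_data );
  /* ... create the depay/decode/osxvideosink bin and link new_pad to it ... */
}

static void
connect_pad_added_handlers( GstElement *pipe )
{
  GstElement *rtc1 = gst_bin_get_by_name( GST_BIN( pipe ), "webrtcbin1" );
  GstElement *rtc2 = gst_bin_get_by_name( GST_BIN( pipe ), "webrtcbin2" );

  /* "pad-added" fires when webrtcbin exposes a src pad for an incoming stream */
  g_signal_connect( rtc1, "pad-added", G_CALLBACK( on_pad_added ), pipe );
  g_signal_connect( rtc2, "pad-added", G_CALLBACK( on_pad_added ), pipe );

  gst_object_unref( rtc1 );
  gst_object_unref( rtc2 );
}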