Recording Video

Peter Rennert p.rennert at cs.ucl.ac.uk
Fri Dec 14 11:08:11 PST 2012


Your problem might be slightly more complex than my setup, because you 
have to deal with sound as well, but have a look at this blog post [1].

This works for me. However, I am losing one frame each time I switch 
the file location. The reason this happens is described in [2] and has 
to do with the fact that I am dealing with encoded data only. 
Unfortunately, no one has given me an answer to that question yet.

In your case, if you disconnect the pipeline right after the queue 
that follows your video source and replace the entire

! queue ! mux. alsasrc num-buffers=440 ! audioconvert ! 
'audio/x-raw,rate=44100,channels=2' ! queue ! mux. avimux name=mux ! 
filesink

with a new one, you might even be able to avoid the problems I faced.
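Something along these lines should do it -- a rough, untested Python 
sketch of that idea, video only for brevity; the element names, the 
'video_%u' request-pad name and the file naming are my assumptions, 
not taken from your pipeline:

#!/usr/bin/env python3
# Sketch: periodically block the queue's src pad, detach the old
# avimux/filesink branch, let it finish with EOS, and attach a fresh
# branch that writes to the next file.
import itertools
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

pipeline = Gst.parse_launch(
    "v4l2src ! video/x-raw,format=I420,width=352,height=288,framerate=25/1 "
    "! queue name=q ! avimux name=mux ! filesink name=sink location=video-0.avi")

counter = itertools.count(1)

def swap_branch(pad, info):
    # Runs while the queue's src pad is blocked, so no data flows here.
    mux = pipeline.get_by_name('mux')
    sink = pipeline.get_by_name('sink')

    # Detach the old branch and let it write its index/trailer.
    pad.unlink(pad.get_peer())
    mux.send_event(Gst.Event.new_eos())
    mux.set_state(Gst.State.NULL)
    sink.set_state(Gst.State.NULL)
    pipeline.remove(mux)
    pipeline.remove(sink)

    # Build and attach a new branch pointing at the next file.
    new_mux = Gst.ElementFactory.make('avimux', 'mux')
    new_sink = Gst.ElementFactory.make('filesink', 'sink')
    new_sink.set_property('location', 'video-%d.avi' % next(counter))
    pipeline.add(new_mux)
    pipeline.add(new_sink)
    new_mux.link(new_sink)
    pad.link(new_mux.get_request_pad('video_%u'))
    new_mux.sync_state_with_parent()
    new_sink.sync_state_with_parent()
    return Gst.PadProbeReturn.REMOVE   # drop the probe, data flows again

def on_timer():
    q_src = pipeline.get_by_name('q').get_static_pad('src')
    q_src.add_probe(Gst.PadProbeType.BLOCK_DOWNSTREAM, swap_branch)
    return True   # keep switching periodically

pipeline.set_state(Gst.State.PLAYING)
GLib.timeout_add_seconds(60, on_timer)
GLib.MainLoop().run()

Note that a real application should wait until the EOS has actually 
reached the old filesink before setting those elements to NULL, 
otherwise the last file may end up with a missing index.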

By the way, you do not have to write this in C; C++, Python and even 
Java would be alternatives.
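For example, a rough (untested) Python equivalent of your gst-launch 
loop could drop the num-buffers counts and instead send EOS to the 
whole pipeline after a chosen time, so that the audio and the video 
branch stop together -- the 4-second duration and the file names here 
are just placeholders:

#!/usr/bin/env python3
# Sketch: run the same pipeline from Python, without num-buffers, and
# stop it by sending EOS so both sources wind down at the same time.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

for counter in range(1, 4):
    pipeline = Gst.parse_launch(
        "v4l2src norm=PAL "
        "! video/x-raw,format=I420,width=352,height=288,framerate=25/1 "
        "! queue ! mux. "
        "alsasrc ! audioconvert ! audio/x-raw,rate=44100,channels=2 "
        "! queue ! mux. "
        "avimux name=mux ! filesink location=video-%d.avi" % counter)
    pipeline.set_state(Gst.State.PLAYING)

    loop = GLib.MainLoop()

    def send_eos():
        pipeline.send_event(Gst.Event.new_eos())   # like 'gst-launch -e' on Ctrl-C
        return False                               # one-shot timer

    GLib.timeout_add_seconds(4, send_eos)          # stop after ~4 seconds

    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect('message::eos', lambda b, m: loop.quit())
    loop.run()
    pipeline.set_state(Gst.State.NULL)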

[1] 
http://groakat.wordpress.com/2012/12/05/gstreamer-stream-h264-webcam-data-to-series-of-files/
[2] 
http://lists.freedesktop.org/archives/gstreamer-devel/2012-December/038199.html




On 12/14/2012 06:13 PM, Ian Davidson wrote:
> Please can you advise me.
>
> I am attempting to record audio/video which I expect to last up to 1½ 
> hours.  If I try to do that as one single file, storing as AVI, I get 
> a file which is over 2GB and that causes problems. Therefore, I would 
> like to record 3 shorter videos - and in the past I have manually stopped 
> the recording at a 'convenient point' and started a new recording - 
> keeping each recording to a maximum of 40 minutes.
>
> An alternative approach would be to record for a certain time and then 
> automatically stop and start a new recording.  This would mean that 
> the recording break would probably come when someone was speaking, so 
> there would be a little 'hiccup'.  I wrote the script below to see how 
> much of a gap there was between successive recordings (the camera was 
> pointing at a digital clock showing seconds).  The videos did not turn 
> out quite as I expected.
>
>
> #!/bin/bash
> fileNamePart="${HOME}/video"
> v4l2-ctl -i 1
> counter=0
> while [ $counter -lt 3 ]; do
>     let counter=counter+1
>     fullFileName="${fileNamePart}-${counter}.avi"
>     gst-launch-1.0 -e v4l2src norm=PAL num-buffers=100 ! 
> 'video/x-raw,format=(string)I420,width=352,height=288,framerate=(fraction)25/1' 
> ! queue ! mux. alsasrc num-buffers=440 ! audioconvert ! 
> 'audio/x-raw,rate=44100,channels=2' ! queue ! mux. avimux name=mux ! 
> filesink location="$fullFileName"
> done
>
> It would appear that video-1 was fine.  Video-2 then started and there 
> was a slight break in the audio from the end of video-1. However, the 
> video content of video-2 has a 4-second gap in the middle of the clip.
>
> I assume that the 2 sources v4l2src and alsasrc are each set to 
> produce the nominated number of buffers - and that's what they are 
> going to do.  The fact that one source has produced an EOS (I assume) 
> does not stop the other.
>
> Would I be correct in assuming that, if I wrote a C program, rather 
> than using gst-launch, I would be able to close both sources down at a 
> time that suited me?  And also that I cannot do it using gst-launch?
>
> Thanks
>
> Ian
>
> By the way, thanks to those who have already got me this far.


