[gst-devel] GSoc:Video3d : http://gitorious.org/video3d/pages/UseCases
Stefan Kost
ensonic at hora-obscura.de
Fri Jun 11 10:13:40 CEST 2010
hi,
finally a few comments on the use cases:
1.) A user should be able to merge 2 video streams (one with the image
for the left eye and one with the image of the right eye) into one 3D
video. This type of video is the basis of 3D video support in GStreamer.
It is a stream where each frame is actually composed of the left and the
right eye image.
I was thinking of just interleaving left/right buffers and marking them
with a GstBufferFlag. The advantage would be that the operation is very
cheap. The plugin would require both inputs to have the same size,
colorspace and framerate (people can use videoscale, ffmpegcolorspace
and videorate beforehand). The problem is finding buffer flags for the
left/right frames:
gstreamer/gst/gstbuffer.h:

typedef enum {
  GST_BUFFER_FLAG_READONLY   = GST_MINI_OBJECT_FLAG_READONLY,
  GST_BUFFER_FLAG_PREROLL    = (GST_MINI_OBJECT_FLAG_LAST << 0),
  GST_BUFFER_FLAG_DISCONT    = (GST_MINI_OBJECT_FLAG_LAST << 1),
  GST_BUFFER_FLAG_IN_CAPS    = (GST_MINI_OBJECT_FLAG_LAST << 2),
  GST_BUFFER_FLAG_GAP        = (GST_MINI_OBJECT_FLAG_LAST << 3),
  GST_BUFFER_FLAG_DELTA_UNIT = (GST_MINI_OBJECT_FLAG_LAST << 4),
  GST_BUFFER_FLAG_MEDIA1     = (GST_MINI_OBJECT_FLAG_LAST << 5),
  GST_BUFFER_FLAG_MEDIA2     = (GST_MINI_OBJECT_FLAG_LAST << 6),
  GST_BUFFER_FLAG_MEDIA3     = (GST_MINI_OBJECT_FLAG_LAST << 7),
  GST_BUFFER_FLAG_LAST       = (GST_MINI_OBJECT_FLAG_LAST << 8)
} GstBufferFlag;
and
gst-plugins-base/gst-libs/gst/video/video.h:
#define GST_VIDEO_BUFFER_TFF GST_BUFFER_FLAG_MEDIA1
#define GST_VIDEO_BUFFER_RFF GST_BUFFER_FLAG_MEDIA2
#define GST_VIDEO_BUFFER_ONEFIELD GST_BUFFER_FLAG_MEDIA3
We would need
#define GST_VIDEO_BUFFER_LEFT_VIEW GST_BUFFER_FLAG_MEDIA4
#define GST_VIDEO_BUFFER_RIGHT_VIEW GST_BUFFER_FLAG_MEDIA5
and that would require pushing GST_BUFFER_FLAG_LAST up. While this seems
to be safe in practice (there are no known subclasses of GstBuffer that
define their own flags), it might be quite controversial. If we bump
GST_BUFFER_FLAG_LAST, I'd suggest bumping it to a shift of 16 instead of
just the required 10, to leave headroom for future flags.
2.) A user should be able to convert a “normal” video composed of
left/right eye images (see http://www.3dtv.at/Movies/) into a 3D video
stream manipulable by GStreamer. This basically is just a caps
conversion task.
This could be done with the "capssetter" element.
3.) A user should be able to convert a 3D video stream into a regular
video stream that can be viewed on normal devices or encoded by normal
encoders.
4.) A user should be able to generate an anaglyph video stream from a 3D
video stream
5.) A user should be able to output a 3D video stream to various 3D
display devices.
Cases 3, 4 and 5 could be done in one element. Similar to
ffmpegcolorspace, it would convert one form of 3D video into another.
The input needs to be 3D video; the output is determined by a mode
property. The mode property would be an enum covering mono output (left
or right), interleaved output (l,r,l,r,...), frame-packed outputs
(over/under, left/right) and the various anaglyph outputs. One
difficulty in this plugin is the combinatorial complexity. Supporting
all these conversions for various colorspaces would make it quite
difficult, so you will need to constrain that considerably in the
beginning.
Stefan