Audio streaming problems
Michiel Konstapel
michiel at aanmelder.nl
Wed Dec 21 10:21:30 UTC 2022
Nice! My experience with gstreamer is that it's all in the "subtle
differences", so I'm glad I could help :)
WRT the video, you can set the encoder to a super low bit rate so it at
least doesn't needlessly consume bandwidth.
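For example, something along these lines (untested sketch; the exact
numbers are just a guess, and bitrate is in kbit/s):
... ! clockoverlay halignment=right ! \
x264enc bitrate=64 speed-preset=ultrafast ! mux.
That keeps the dummy video track down to a trickle without touching the
audio side.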
On Monday 19 December 2022 14:52:47 (+01:00), Joel Lord wrote:
> Thank you, Michiel, you got it in one.
>
> aacparse converts an AAC audio stream to a "framed" one, and clearly
> that was what mpegtsmux needed. Now that you've pointed that out I see
> it in the documentation, but that was a subtle difference I was missing.
>
> The video part works fine; I guess I'm stuck with a static image I
> didn't really want. Worse things have happened.
>
> -Joel
>
> On 12/19/2022 3:41 AM, Michiel Konstapel wrote:
> > On Monday 19 December 2022 03:38:38 (+01:00), Joel Lord via
> > gstreamer-devel wrote:
> > > I'm trying to take an audio feed from a microphone connected via a
> > > USB interface and (ideally) stream it directly out via HTTP. I want
> > > multiple end users to be able to get to the stream and listen to it
> > > in real time from their phones. I haven't had any luck finding
> > > components that add up to that, but I did find out how to produce an
> > > MPEG-TS stream, so I added a static image and have that streaming
> > > successfully. I also added in a clock to have something changing, to
> > > prove that it was working. But the audio never comes through.
> > >
> > > If I take the mpegtsmux and hlssink off the end of my pipeline and
> > > replace them with a filesink, I can prove that my audio is working
> > > fine, but when I feed it to mpegtsmux and hlssink it seems to get
> > > ignored. If I have no image, it never creates a second segment or the
> > > playlist for the stream, so I can't see anything at all. If I include
> > > the image, the stream works but seems to have no audio. I've tried 3
> > > or 4 different audio formats; it doesn't complain about a format
> > > mismatch and seems to be working, but nothing comes through.
> > >
> > > Using gstreamer1.0 version 1.18.4 on a Raspberry Pi 4.
> > >
> > > gst-launch-1.0 -v \
> > > mpegtsmux name=mux ! hlssink \
> > > playlist-root=http://stream.ek:80/stream_files \
> > > location=/var/www/stream/stream_files/segment%05d.ts \
> > > target-duration=3 \
> > > playlist-location=/var/www/stream/playlist.m3u8 \
> > > filesrc location=/home/pi/EKPB.png ! decodebin ! \
> > > videoconvert ! video/x-raw,format=I420 ! imagefreeze ! \
> > > clockoverlay halignment=right ! x264enc ! mux. \
> > > alsasrc ! audioconvert ! audio/x-raw,channels=1 ! \
> > > level ! avenc_mp2fixed ! queue max-size-buffers=0 max-size-bytes=0 \
> > > max-size-time=1000000000 ! mux.
> > >
> > > If I take the queue out of the audio path it tosses a few warnings
> > > about dropping samples (a lot of samples) and gives me no faith that
> > > it will work. Changing the buffer settings on the queue made no
> > > difference.
> > >
> > > So for anyone who has made it this far: I'm quite open to better
> > > solutions than the one I've found, or to fixing the one I have that
> > > isn't working. I don't actually want anything but the audio feed, and
> > > I want it as close to zero latency as I can get.
> > >
> > > Thanks for your help!
> > >
> > Interesting! hlssink expects video key frames to know when it can
> > start a new segment (each segment has to start with a key frame), so I
> > don't know what happens with an audio-only stream. At least for getting
> > things working, it's probably easiest to indeed have a "dummy" video
> > track in there.
> > I use the following for HLS audio:
> > ... ! avenc_aac ! aacparse ! mpegtsmux
> > Maybe give that a go? The aacparse might be adding information needed
> > by the muxer or the sink.
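> > For example, adapting your audio branch (untested sketch, just
> > swapping the encoder and adding the parser):
> > alsasrc ! audioconvert ! audio/x-raw,channels=1 ! level ! \
> > avenc_aac ! aacparse ! queue ! mux.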
> > For x264enc you may need to specify a keyframe interval
> > (key-int-max=<number of frames>), but hlssink will actively ask its
> > upstream for key frames, so that might not be required. Maybe also
> > specify a framerate in your video caps; I don't know what the default
> > is. After x264enc I also have a parser:
> > ... ! x264enc ! h264parse config-interval=-1 ! ...
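> > Putting that together, the video branch might end up something like
> > this (untested sketch; the framerate and keyframe interval are just
> > example values):
> > ... ! imagefreeze ! video/x-raw,framerate=5/1 ! \
> > clockoverlay halignment=right ! x264enc key-int-max=15 ! \
> > h264parse config-interval=-1 ! mux.
> > At 5 fps, key-int-max=15 gives a key frame every 3 seconds, which lines
> > up with your target-duration=3.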
> > HTH,
> > Michiel
>