[pulseaudio-discuss] [PATCH v3 22/24] echo-cancel: Use webrtc's deinterleaved API

Arun Raghavan arun at accosted.net
Wed Feb 17 13:06:48 UTC 2016


On Tue, 2016-02-16 at 14:20 +0200, Tanu Kaskinen wrote:
> On Mon, 2016-01-18 at 13:06 +0530, arun at accosted.net wrote:
> > This is required to have unequal channel counts on capture in and out
> > streams, which is needed for beamforming to work.
> 
> The commit message should mention why the sample format was changed to
> float.

Okay.
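
For the archives, the short answer: the deinterleaved
ProcessStream()/ProcessReverseStream() overloads take float* const*
buffers, so the canceller now forces the sample specs to float.
Roughly this in the init path (sketch only, the exact variable names
may differ from what's in the patch):

    /* WebRTC's float API wants deinterleaved float samples, so pin
     * the capture and playback specs to float32 */
    rec_ss->format = PA_SAMPLE_FLOAT32NE;
    play_ss->format = PA_SAMPLE_FLOAT32NE;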

> >  void pa_webrtc_ec_play(pa_echo_canceller *ec, const uint8_t *play) {
> >      webrtc::AudioProcessing *apm = (webrtc::AudioProcessing*)ec->params.webrtc.apm;
> > -    webrtc::AudioFrame play_frame;
> >      const pa_sample_spec *ss = &ec->params.webrtc.play_ss;
> > +    int n = ec->params.webrtc.blocksize;
> > +    float **buf = ec->params.webrtc.play_buffer;
> > +    webrtc::StreamConfig config(ss->rate, ss->channels, false);
> >  
> > -    play_frame.num_channels_ = ss->channels;
> > -    play_frame.sample_rate_hz_ = ss->rate;
> > -    play_frame.interleaved_ = true;
> > -    play_frame.samples_per_channel_ = ec->params.webrtc.blocksize;
> > -
> > -    pa_assert(play_frame.samples_per_channel_ <= webrtc::AudioFrame::kMaxDataSizeSamples);
> > -    memcpy(play_frame.data_, play, ec->params.webrtc.blocksize * pa_frame_size(ss));
> > +    pa_deinterleave(play, (void **) buf, ss->channels, pa_sample_size(ss), n);
> >  
> > -    if (apm->ProcessReverseStream(&play_frame) != webrtc::AudioProcessing::kNoError)
> > +    if (apm->ProcessReverseStream(buf, config, config, buf) != webrtc::AudioProcessing::kNoError)
> >          pa_log("Failed to process playback stream");
> > +
> > +    /* FIXME: we need to be able to modify playback samples */
> 
> This comment should say also why we need to be able to modify them (and
> I also don't understand what's preventing us from doing so now).

Reworded this as:

/* FIXME: we need to be able to modify playback samples, which we can't
 * currently do. This is because module-echo-cancel processes playback
 * frames in the source thread, and just stores playback chunks as they
 * pass through the sink. */
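
To make that a bit more concrete: if module-echo-cancel is ever
restructured so the playback chunk can be written back to, the change
here would just be the mirror image of the pa_deinterleave() call
above, along these lines (sketch only; it assumes the play argument
stops being const):

    /* hypothetical write-back of the processed playback frames */
    pa_interleave((const void **) buf, ss->channels, play, pa_sample_size(ss), n);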

> > @@ -573,4 +575,9 @@ void pa_webrtc_ec_done(pa_echo_canceller *ec) {
> >          delete (webrtc::AudioProcessing*)ec->params.webrtc.apm;
> >          ec->params.webrtc.apm = NULL;
> >      }
> > +
> > +    for (i = 0; i < ec->params.webrtc.rec_ss.channels; i++)
> > +        pa_xfree(ec->params.webrtc.rec_buffer[i]);
> > +    for (i = 0; i < ec->params.webrtc.play_ss.channels; i++)
> > +        pa_xfree(ec->params.webrtc.play_buffer[i]);
> 
> I think it's not guaranteed that rec_ss and play_ss are initialized at
> this point. You should check that rec_buffer and play_buffer are non-
> NULL before accessing them.

There shouldn't be a path where rec_buffer and play_buffer are left
uninitialised by the time pa_webrtc_ec_done() runs, but I'll add the
check in case that becomes possible in the future.
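
One way of doing that without depending on rec_ss/play_ss at all
(sketch only; it assumes rec_buffer/play_buffer are fixed-size arrays
in a zeroed params struct, and leans on pa_xfree() being a no-op on
NULL):

    for (i = 0; i < PA_CHANNELS_MAX; i++) {
        pa_xfree(ec->params.webrtc.rec_buffer[i]);
        pa_xfree(ec->params.webrtc.play_buffer[i]);
    }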

-- Arun

