Surround Sound In WebRTC


Dr Alex explains the background and current capabilities of surround sound in libwebrtc and Chrome; his full paper is here

While scalability is often the first excuse given for not using WebRTC (a persistent claim, even though that myth was debunked long ago and many times over), “quality” is certainly the second one.

Opponents say things like “WebRTC cannot do <put a resolution here>, it’s just good enough for webcams, it cannot use more than 2.5 Mbps, ….. “. This is also false.

While all of those previous claims are false, there is one claim that WAS correct: the default audio codecs in WebRTC implementations, and especially in browsers, were not as good, when it comes to spatial information, as their streaming or gaming counterparts… until recently!

Google then started implementing Opus surround sound in 2017, in plain sight. By April 2019 it was in libwebrtc, and by May 2019 in Chrome (around Chrome 78). That’s good to know, but I know what your next question is: how did they do it, and how can I use it?

Since there is no Opus RTP payload specification beyond 2 channels, Google used the next best thing: the Ogg payload specification. In it, especially in section 5.1, you can see that any multichannel implementation is built from a collection of mono and stereo streams, advertised as a total number of streams and a number of coupled (stereo) streams, with a mapping for the spatial location of each channel.
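To make that concrete, here is a sketch of the two surround layouts as described by the Ogg encapsulation spec (RFC 7845, channel mapping family 1). The object name and field names are my own for illustration; the stream counts and mappings come from the RFC's tables:

```javascript
// Each layout is a bundle of coupled (stereo) and uncoupled (mono)
// Opus streams, plus a channel mapping that assigns each output
// speaker (in Vorbis order) to a decoded channel index.
const OPUS_SURROUND_LAYOUTS = {
  // 5.1 speaker order: FL, FC, FR, RL, RR, LFE
  '5.1': { channels: 6, streams: 4, coupledStreams: 2,
           mapping: [0, 4, 1, 2, 3, 5] },
  // 7.1 speaker order: FL, FC, FR, SL, SR, RL, RR, LFE
  '7.1': { channels: 8, streams: 5, coupledStreams: 3,
           mapping: [0, 6, 1, 2, 3, 4, 5, 7] },
};

// Sanity check: each coupled stream carries 2 channels, each
// remaining mono stream carries 1, and the total must match.
for (const [name, l] of Object.entries(OPUS_SURROUND_LAYOUTS)) {
  const derived = l.coupledStreams * 2 + (l.streams - l.coupledStreams);
  console.assert(derived === l.channels, `${name}: inconsistent layout`);
}
```

So 5.1 audio travels as 4 Opus streams (2 stereo pairs for the front and rear pairs, plus 2 mono streams for center and LFE), and 7.1 as 5 streams.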

The implementation in Chrome/libwebrtc uses the “multiopus” codec name to refer to those two configurations of Opus (5.1 and 7.1), and simply “opus” for WebRTC’s mandatory-to-implement stereo mode. ….
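Because “multiopus” is not offered by default, one common way to use it is to munge the SDP before applying it, rewriting the stereo opus line into a 6-channel multiopus line. Here is a minimal sketch; the fmtp parameter names (channel_mapping, num_streams, coupled_streams) are those used by libwebrtc's multiopus code, but the function itself is a simplified illustration, not a production-ready munger:

```javascript
// Sketch: rewrite an SDP string so the payload type bound to stereo
// Opus advertises Chrome's non-standard 6-channel "multiopus" instead.
function enableMultiopus(sdp) {
  // Find the payload type of the stereo opus codec,
  // e.g. "a=rtpmap:111 opus/48000/2"
  const match = sdp.match(/a=rtpmap:(\d+) opus\/48000\/2/);
  if (!match) return sdp; // no opus line found; leave the SDP untouched
  const pt = match[1];
  return sdp
    // Re-advertise that payload type as 6-channel multiopus
    .replace(`a=rtpmap:${pt} opus/48000/2`,
             `a=rtpmap:${pt} multiopus/48000/6`)
    // Append the 5.1 stream layout (4 streams, 2 of them coupled)
    // to the existing fmtp line, if one is present
    .replace(new RegExp(`a=fmtp:${pt} (.*)`),
      `a=fmtp:${pt} $1;channel_mapping=0,4,1,2,3,5;num_streams=4;coupled_streams=2`);
}
```

In a browser you would apply this to the offer or answer between createOffer()/createAnswer() and setLocalDescription(), on both peers.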

For more, click on the link to Dr Alex’s paper here:


For more about libwebrtc, go here: