[openal] 5.1 and 7.1 speaker setups

Chris Robinson chris.kcat at gmail.com
Mon Feb 6 06:38:37 EST 2017

On 02/05/2017 06:19 PM, Ian Reed wrote:
> Hi Chris,
> Thanks for your help in the other thread.
> I have a new question.
> Can you explain how the support for 5.1 and 7.1 speaker systems works?
> I don't see anywhere to tell OpenAL Soft that the user has a 5.1 or 7.1
> speaker setup.

OpenAL Soft will try to detect it from the system. With PulseAudio, 
MMDevAPI, DSound, and CoreAudio, it can query the current channel setup 
and configure itself to match what the system reports (i.e. it will 
automatically use 5.1 or 7.1 output if that's what the system says it 
has, or HRTF if it's told stereo headphones are in use). 
Unfortunately, other APIs like ALSA, OSS, and OpenSL have no way to 
query the current channel setup, so it has to default to stereo for 
them.

The OpenAL API doesn't have a way to explicitly request a channel 
configuration. From what I understand, this is partly deliberate, so 
playback isn't restricted from using what it can by apps not knowing 
what's available (e.g. apps being unable to select hypothetical 10.1 
output because they only know about formats up to 7.1 and no 10.1 enum 
is defined).

For OpenAL Soft's part, there is a config file the user can edit to set 
a specific output configuration (alsoft-config being a GUI front-end for 
that), in case the library can't or doesn't autodetect the optimal one 
for the user's system. That handles not only the channel configuration, 
but also the mode like 'headphones' or 'speakers' which it can use to 
automatically apply HRTF or cross-feed filters as needed, or methods 
like UHJ which encode a surround sound mix in stereo-compatible output.
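
For illustration, a user's config file might look something like the 
sketch below. The option names here follow OpenAL Soft's alsoftrc.sample; 
check that file for the exact spellings and values your version supports:

```ini
# alsoft.conf -- illustrative sketch; see alsoftrc.sample for the
# authoritative option names and values.
[general]
# Force 5.1 output where autodetection isn't possible (e.g. ALSA, OSS).
channels = surround51

# Or, for someone listening on stereo headphones, leave channels at
# stereo and tell the library headphones are in use so it can apply
# HRTF filtering:
#channels = stereo
#stereo-mode = headphones
```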

> I do see that I could supply 6 or 8 channels when creating the buffer,
> and set the format to be Multi51Chn16Ext or Multi71Chn16Ext (OpenTK enum
> names).
> Does OpenAL Soft then use the virtual speakers to place the 6 or 8
> channels, and then mix that down to a single wave form the sound card uses?

OpenAL requires multi-channel buffer formats to automatically be remixed 
to fit the output configuration. If you play a 7.1 buffer with quad 
channel output, for example, the front-center and side channels will be 
virtualized using the available speakers so the user still hears it all.

> Do all sound libraries use virtual speakers to mix down to a single wave
> form when it goes to the sound card, or do sound APIs ever tell the
> sound card directly that they have 5.1/7.1 sound channels?

Most sound libraries have the programmer/user specify the output 
configuration they want. What happens then varies: setup may fail or 
fall back to some default if the requested configuration isn't 
available; the library may pretend you got what you asked for and remix 
it to the channels that are actually available (which can create odd 
results depending on your use case); or it may give you what you asked 
for and simply drop any missing output channels. The behavior depends on 
both the library and the audio system/hardware.

OpenAL takes a different approach. Rather than having the 
programmer/user specify a desired output configuration, it uses the 
configuration that's already been set on the system. If it knows the 
system has 5.1 output, for instance, there's no need to try anything 
else or have the programmer specify anything. Since the API is mostly 
designed for 3D audio, the app doesn't really need to care about the 
output configuration either: it specifies sound positions in virtual 3D 
space, and the library automatically fits them to the output as best it 
can. Multi-channel buffers will likewise be properly remixed to fit the 
output, using the best methods the implementation has available.
