[openal] Buffer layering

Chris Robinson chris.kcat at gmail.com
Sun Dec 17 02:47:00 EST 2017


Hi folks.

Recent changes to OpenAL Soft have laid the groundwork for buffer
layering. Or buffer compositing, or whatever term you want to use for
it (mixing multiple buffers into a single input for the source in
real-time). The idea was sparked after I ran across this talk:
https://www.youtube.com/watch?v=W9UkYTw5J6A

The main purpose is to allow apps to 'build' a sound at run-time from 
multiple sound components. Historically, game sounds would be 
built/composed in a studio using various layers of sounds and saved out 
as a wav or mp3 or something. The game would then load and play back the 
sound with little variation; maybe some slight pitch change or reverb, 
but the sound itself is set and static. You could save out multiple 
variations of the sound and pick one at random to play, but the amount 
of variation increases roughly linearly with the amount of audio data.

These days the layering should be doable in real-time, allowing games
to alter various aspects of the sound each time it's played, either
randomly or based on various criteria in the game, to create a richer
and more varied soundscape. This way, the amount of variation
increases almost exponentially with the amount of audio data: with,
say, four variants for each of three layers, just 12 recordings yield
4^3 = 64 distinct combinations.

Technically speaking, an app could do this using multiple sources:
play them in sync (using alSourcePlayv) and alter their properties
together using batching. However, this is somewhat cumbersome, since
it requires managing multiple sources for one sound and making
identical changes to each to keep the individual layers synchronized.
It also has additional processing costs, since each layer goes through
the full mixing pipeline (resampling, filters, panning (HRTF or
matrixing), etc). And when it comes to streaming, there can be
synchronization issues if one source happens to underrun while the
others keep going.
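
For illustration, here's a rough sketch of that workaround, assuming
base_buffer and layer_buffer already hold sample data and 'context' is
the current ALCcontext; alcSuspendContext/alcProcessContext serve as
the batching mechanism here:

ALuint sources[2];
alGenSources(2, sources);
alSourcei(sources[0], AL_BUFFER, base_buffer);
alSourcei(sources[1], AL_BUFFER, layer_buffer);

/* Start both sources in the same mix update so they begin aligned. */
alSourcePlayv(2, sources);

/* Every later change must be mirrored on each layer, batched so the
 * layers never briefly mismatch. */
alcSuspendContext(context);
alSourcef(sources[0], AL_PITCH, 1.2f);
alSourcef(sources[1], AL_PITCH, 1.2f);
alcProcessContext(context);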

In contrast, the new buffer layering system allows a single source to 
handle multiple buffers simultaneously. The buffers are combined when 
preparing for mixing (which requires them to be the same format, but I 
doubt this will be much of an issue), so it still only invokes a single 
pass through the resampler and filters and such. It also solves the 
potential desynchronization with streaming sources, since each queue 
entry will have all the layers given at once.


OpenAL Soft is currently missing a way for apps to actually declare
buffer layers, which I'm hoping to get feedback on. One idea is to let
buffer objects act as a meta-buffer of sorts: rather than storing
samples directly, a buffer object could instead reference multiple
other buffer objects. When setting or queueing such a buffer onto a
source, it layers the referenced buffers in its place. Or there could
be a new Composite Buffer object type which does the same thing,
usable in place of a buffer for sources, but as a unique type distinct
from a buffer.
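
To make the meta-buffer idea concrete, a purely hypothetical sketch
(alBufferLayersSOFT and its semantics are invented for illustration,
not actual or proposed API):

ALuint layers[2], meta;
alGenBuffers(2, layers);
/* ...fill layers[0] and layers[1] with same-format sample data... */

alGenBuffers(1, &meta);
/* Hypothetical call: 'meta' stores no samples itself, it just
 * references the two layer buffers. */
alBufferLayersSOFT(meta, 2, layers);

/* Setting the meta-buffer layers the referenced buffers in its
 * place. */
alSourcei(source, AL_BUFFER, meta);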

Alternatively, the individual buffers could be set directly on the
source using layered properties. So, for instance:
alSourcei(source, AL_BUFFER0, base_buffer);
alSourcei(source, AL_BUFFER1, layered_buffer);
...etc...

Queueing layered buffers would need new functions. For example,
alSourceQueueLayeredBuffers(source, num_layers, buffers);
would add one queue entry (AL_BUFFERS_QUEUED would only increase by
one, regardless of the number of layers) which layers the given
buffers for that section of the queue.
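
In a streaming loop, that might look something like this (a sketch of
the proposal above, not settled API; base_chunk and layer_chunk stand
in for freshly refilled buffers):

ALuint chunk[2] = { base_chunk, layer_chunk };
/* One queue entry holding both layers: AL_BUFFERS_QUEUED goes up by
 * 1, not 2, and the layers can't drift apart on an underrun. */
alSourceQueueLayeredBuffers(source, 2, chunk);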

There's also the idea that each layer could have a sample offset
relative to the start of the composited result, which would allow a
bit more flexibility when constructing sounds. It would complicate
alSourceQueueLayeredBuffers, though.
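
One hypothetical shape for that, with the extra offsets array being
the complication (the function name and parameters are invented for
illustration):

ALuint layers[2] = { thump_buffer, rattle_buffer };
ALsizei offsets[2] = { 0, 4410 };  /* rattle starts 4410 samples in */
/* Hypothetical variant: each layer is mixed in starting at its
 * offset relative to the start of the composited entry. */
alSourceQueueLayeredBuffersOffsets(source, 2, layers, offsets);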


Anyway, that's a recent thing I've been working on. Feedback is welcome!

