[openal] Synchronizing 3D sources

Chris Robinson chris.kcat at gmail.com
Fri Jan 24 06:34:07 EST 2014

On 01/23/2014 07:46 AM, Doug Binks wrote:
> In brief, the problem is that there is a need to synchronize the playback
> of one 3D positional source with another at a sample accurate (i.e.
> sub-buffer) level. I can see a number of ways of going about this without
> OpenAL alterations, but they're all fairly involved.
> Due to pitch and Doppler variations I don't think it's possible to
> implement an API which guarantees continuous synchronized playback of
> multiple spatial sources, but timing the start of one with a sample
> position of another should be possible.
> My proposal would be a trigger API. Triggers have an event sample position
> (likely best using AL_SAMPLE_OFFSET_LATENCY_SOFT i64v 32.32 int.fract
> format), and a list of sources to play (played all at once when the trigger
> is hit).

I think synchronizing it to the output's sample offset would be a more 
viable option than a source offset. Actually, using a microsecond or 
nanosecond clock would probably be even better. The implementation would 
then just get it as close as possible. To help synchronize with a 
playing source, there'd be a way to get the source's offset, latency, 
and the device clock, all in one "atomic" query. That would allow the 
app a reasonable way of calculating a device clock time that would 
correspond to a source offset.

The benefit of doing it this way is that it could even be emulated using 
the loopback device... use a clock based on how many samples were 
rendered, and start sources whenever you need after rendering the 
appropriate number of samples. The downside to that is you become 
responsible for getting the rendered samples out to a device. There are 
ways to do it, though not without some drawbacks (e.g. difficulty in 
supporting surround sound and auto-detection of a preferred mode).