[openal] Synchronizing 3D sources

Doug Binks doug at enkisoftware.com
Fri Jan 24 06:43:48 EST 2014


That sounds similar to what we were thinking.

For the device-clock approach, one extra issue is that the programmer needs
to know when to submit the play request, to ensure the buffer can be queued
before the device mixes the next output buffer. So we may also need a way to
query the device's output buffer size - I'm not sure whether that's already
in the extension spec.

I think I can get an example implementation going on a branch of OpenAL
Soft in my git fork, and will get back in touch for comments.


On 24 January 2014 12:34, Chris Robinson <chris.kcat at gmail.com> wrote:

> On 01/23/2014 07:46 AM, Doug Binks wrote:
>
>> In brief, the problem is that there is a need to synchronize the playback
>> of one 3D positional source with another at a sample-accurate (i.e.
>> sub-buffer) level. I can see a number of ways of going about this without
>> OpenAL alterations, but they're all fairly involved.
>>
>> Due to pitch and Doppler variations I don't think it's possible to
>> implement an API which guarantees continuous synchronized playback of
>> multiple spatial sources, but timing the start of one with a sample
>> position of another should be possible.
>>
>> My proposal would be a trigger API. Triggers have an event sample position
>> (likely best using the AL_SAMPLE_OFFSET_LATENCY_SOFT i64v 32.32 int.fract
>> format) and a list of sources to play, all started at once when the
>> trigger is hit.
>>
>
> I think synchronizing it to the output's sample offset would be a more
> viable option than a source offset. Actually, using a microsecond or
> nanosecond clock would probably be even better. The implementation would
> then just get it as close as possible. To help synchronize with a playing
> source, there'd be a way to get the source's offset, latency, and the
> device clock, all in one "atomic" query. That would allow the app a
> reasonable way of calculating a device clock time that would correspond to
> a source offset.
>
> The benefit of doing it this way is that it could even be emulated using
> the loopback device: use a clock based on how many samples have been
> rendered, and start sources whenever needed after rendering the appropriate
> number of samples. The downside is that you become responsible for getting
> the rendered samples out to a device. There are ways to do that, though not
> without some drawbacks (e.g. difficulty in supporting surround sound and
> auto-detecting a preferred mode).
> _______________________________________________
> openal mailing list
> openal at openal.org
> http://openal.org/mailman/listinfo/openal
>