<div dir="ltr">That sounds similar to what we were thinking.<div><br></div><div>For device clock, one extra issue is that the programmer needs to know when to submit the play request in order to ensure the buffer can be queued before the device mixes the next output buffer. So we may also need to query the device's output buffer size - I'm not sure whether that's already in the extension spec.</div>
<div><br></div><div>I think I can get an example implementation going on a branch of OpenAL Soft on my git fork, and get back in touch for comments.</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On 24 January 2014 12:34, Chris Robinson <span dir="ltr"><<a href="mailto:chris.kcat@gmail.com" target="_blank">chris.kcat@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im">On 01/23/2014 07:46 AM, Doug Binks wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
In brief, the problem is that there is a need to synchronize the playback<br>
of one 3D positional source with another at a sample accurate (i.e.<br>
sub-buffer) level. I can see a number of ways of going about this without<br>
OpenAL alterations, but they're all fairly involved.<br>
<br>
Due to pitch and Doppler variations I don't think it's possible to<br>
implement an API which guarantees continuous synchronized playback of<br>
multiple spatial sources, but timing the start of one with a sample<br>
position of another should be possible.<br>
<br>
My proposal would be a trigger API. Triggers have an event sample position<br>
(likely best using AL_SAMPLE_OFFSET_LATENCY_SOFT i64v 32.32 int.fract<br>
format), and a list of sources to play (played all at once when the trigger<br>
is hit).<br>
</blockquote>
<br></div>
I think synchronizing it to the output's sample offset would be a more viable option than a source offset. Actually, using a microsecond or nanosecond clock would probably be even better. The implementation would then just get it as close as possible. To help synchronize with a playing source, there'd be a way to get the source's offset, latency, and the device clock, all in one "atomic" query. That would allow the app a reasonable way of calculating a device clock time that would correspond to a source offset.<br>
<br>
The benefit of doing it this way is that it could even be emulated using the loopback device... use a clock based on how many samples were rendered, and start sources whenever you need after rendering the appropriate amount of samples. The downside to that is you become responsible for getting the rendered samples out to a device. There are ways to do it, though not without some drawbacks (e.g. difficulty in supporting surround sound and auto-detection of a preferred mode).<br>
_______________________________________________<br>
openal mailing list<br>
<a href="mailto:openal@openal.org" target="_blank">openal@openal.org</a><br>
<a href="http://openal.org/mailman/listinfo/openal" target="_blank">http://openal.org/mailman/listinfo/openal</a><br>
</blockquote></div><br></div>