<div dir="ltr">The latest code at <a href="https://github.com/dougbinks/openal-soft/commits/Device_Clock">https://github.com/dougbinks/openal-soft/commits/Device_Clock</a> moves to a nanosecond-based clock as requested. I did add an error accumulator to reduce drift, but this can be removed if we don't need sample accuracy over long periods.<div>
<br></div><div>The changes now cope with aligning sources whose frequencies differ from the device's.</div><div><br></div><div>In order to align sources correctly, a developer needs the actual frequency of each source. I have therefore added an alcGetDoublevSOFTX function which can be used (as in the example program) to convert a requested frequency into the actual frequency produced after resampling to the output. This seemed a better approach than exposing the interval computations.</div>
<div><br></div><div>If this looks like something you'd be interested in including, I can make any further changes needed, along with a spec and some documentation.</div><div><br></div><div><br></div></div>
<div class="gmail_extra"><br><br><div class="gmail_quote">On 31 January 2014 11:12, Doug Binks <span dir="ltr"><<a href="mailto:doug@enkisoftware.com" target="_blank">doug@enkisoftware.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
It makes a lot of sense not to encumber the library with any features<br>
which would prevent forward progress on other more important aspects.<br>
<br>
I'll see if I can get some feedback on whether 'almost'<br>
sample-accurate would do.<br>
<br>
The square-wave test isn't something I think can be made to<br>
work generically in all circumstances, but it certainly shows<br>
up any errors well.<br>
<div class="HOEnZb"><div class="h5"><br>
On 31 January 2014 02:18, Chris Robinson <<a href="mailto:chris.kcat@gmail.com">chris.kcat@gmail.com</a>> wrote:<br>
> On 01/30/2014 11:01 AM, Doug Binks wrote:<br>
>><br>
>> The problem is that even with nanosecond precision, the drift will<br>
>> equal one sample after a few hundred seconds.<br>
>><br>
>> The important objective that I (and the other use case I'm aware of)<br>
>> am trying to achieve is per-sample synchronization between two or<br>
>> more samples being played. We don't care (in this context) about<br>
>> absolute timing, so drift in any backend or hardware doesn't matter to<br>
>> us so long as the mixing is accurate to the sample level.<br>
><br>
><br>
> I'm not sure sample-perfect timing should be a reliable feature. The problem<br>
> is when the audio clock isn't directly derived from the number of processed<br>
> samples, but rather, the number of samples processed is derived from the<br>
> audio clock. Or if the returned clock time and source offsets are<br>
> interpolated in some way.<br>
><br>
> Granted, this isn't much of an issue for OpenAL Soft right now, since the<br>
> clock would be based on the number of samples and not be interpolated. But<br>
> I'm not sure if that will always be the case, or if other implementations<br>
> might want to do it differently. I actually have some pretty ambitious (if<br>
> long-term) plans for the mixer that would decouple it from property<br>
> processing, making it completely lockless and perhaps getting really low<br>
> latencies, which would require rethinking how timing works. Though even if<br>
> it doesn't go that far, the way timing works could still change to give<br>
> better granularity than the 20+ms it currently has.<br>
><br>
> So, all said, I'm not sure your example of being able to play one square<br>
> wave, then time-start another square wave so that they perfectly cancel out,<br>
> would be able to work reliably.<br>
><br>
><br>
> Though if anyone else wants to weigh in on the issue, please do. Right now<br>
> I'm mainly working with hypotheticals, trying to avoid making guarantees<br>
> that could prevent future internal improvements.<br>
</div></div></blockquote></div><br></div>