From chris.kcat at gmail.com  Sun Dec  3 19:38:28 2017
From: chris.kcat at gmail.com (Chris Robinson)
Date: Sun, 3 Dec 2017 16:38:28 -0800
Subject: [openal] ALC_SOFT_device_clock proposal
Message-ID:

This extension's been waiting for a while now, so I've added the last two
queries I was planning for it and wrote up an extension spec. Feedback is
welcome! Especially if you have experience doing accurate audio timing and
synchronization with other timing sources; if anything seems to be missing
or could be improved, please let me know.

You can play around with it and test it out in the latest Git.

As far as OpenAL Soft's implementation goes, the one potential gotcha is
that the device clock does not have a high update rate. It essentially
counts the samples written to the backend device, meaning it only updates
every 20ms or so by default. This guarantees the device clock updates in
step with sources and other OpenAL processing steps, but it's certainly
not an alternative to std::chrono::high_resolution_clock or even
std::chrono::steady_clock or the like. The latency can update more
frequently depending on the backend used (PulseAudio, for instance,
interpolates and estimates the output latency in between server info
updates, so it updates constantly), but it still shouldn't be relied on
for very frequent updates.

-------------- next part --------------
Name

    ALC_SOFT_device_clock

Contributors

    Chris Robinson

Contact

    Chris Robinson (chris.kcat 'at' gmail.com)

Status

    In progress

Dependencies

    This extension is written against the OpenAL 1.1 specification.

    This extension requires AL_SOFT_source_latency.

    This extension interacts with ALC_SOFT_pause_device.

Overview

    This extension allows applications to query the timing clock from the
    audio device. This clock lets applications measure the passage of time
    as the audio device sees it, which may be different from how the
    system clock sees it (creating the infamous timer drift).

Issues

    None.

New Primitive Types

    ALC Type       Description                            GL Type
    =============  =====================================  ========
    ALCint64SOFT   Signed 64-bit 2's-complement integer   GLint64
    -------------  -------------------------------------  --------
    ALCuint64SOFT  Unsigned 64-bit integer                GLuint64

New Procedures and Functions

    void alcGetInteger64vSOFT(ALCdevice *device, ALCenum pname,
                              ALsizei size, ALCint64SOFT *values);

New Tokens

    Accepted as the <pname> parameter of alcGetInteger64vSOFT:

        ALC_DEVICE_CLOCK_SOFT                    0x1600
        ALC_DEVICE_LATENCY_SOFT                  0x1601
        ALC_DEVICE_CLOCK_LATENCY_SOFT            0x1602

    Accepted as the <param> parameter of alGetSourcei64vSOFT:

        AL_SAMPLE_OFFSET_CLOCK_SOFT              0x1202

    Accepted as the <param> parameter of alGetSourcedvSOFT:

        AL_SEC_OFFSET_CLOCK_SOFT                 0x1203

Additions to Specification

    Querying the Audio Device Clock

    Applications can query timing properties of the audio pipeline using
    the new 64-bit integer query function,

    void alcGetInteger64vSOFT(ALCdevice *device, ALCenum pname,
                              ALsizei size, ALCint64SOFT *values);

    This query function accepts all the same queries as alcGetIntegerv, in
    addition to some new ones. Note that the size parameter is the number
    of ALCint64SOFT elements in the provided buffer, not the number of
    bytes.

    Table 6-x. 64-bit Integer Query Types.

    Name                           Description
    -----------------------------  ---------------------------------------
    ALC_DEVICE_CLOCK_SOFT          The audio device clock time, expressed
                                   in nanoseconds. NULL is an invalid
                                   device.
    ALC_DEVICE_LATENCY_SOFT        The current audio device latency, in
                                   nanoseconds. This is effectively the
                                   time between OpenAL's processing and
                                   the DAC output. NULL is an invalid
                                   device.
    ALC_DEVICE_CLOCK_LATENCY_SOFT  Expects a destination size of 2, and
                                   provides both the audio device clock
                                   time and the latency, both in
                                   nanoseconds. The two values are
                                   measured atomically with respect to one
                                   another (i.e. the latency value was
                                   measured at the same time the device
                                   clock value was retrieved). NULL is an
                                   invalid device.

    If the ALC_SOFT_pause_device extension is available, the device clock
    does not increment while the device playback is paused. It is
    implementation-defined whether or not the device clock increments
    while no contexts are allocated. The initial clock time value of an
    opened device is also implementation-defined, except that it must not
    be negative and should be low enough to avoid wrapping during program
    execution.

    In addition to the above queries, an application can query the offset
    of a source paired with the device clock using two new source
    attributes.

    Source AL_SAMPLE_OFFSET_CLOCK_SOFT

    Attribute Name               Signature  Values                Default
    ---------------------------  ---------  --------------------  -------
    AL_SAMPLE_OFFSET_CLOCK_SOFT  i64v       {[0, Any], [0, Any]}  N/A

    Description: the playback position, expressed in fixed-point samples,
    along with the device clock, expressed in nanoseconds. This attribute
    is read-only.

    The first value in the returned vector is the sample offset, which is
    a 32.32 fixed-point value. The whole number is stored in the upper 32
    bits, and the fractional component is in the lower 32 bits. The value
    is similar to that returned by AL_SAMPLE_OFFSET, just with more
    precision.

    The second value is the device clock, in nanoseconds. This updates at
    the same rate as the offset, and both are measured atomically with
    respect to one another.

    Source AL_SEC_OFFSET_CLOCK_SOFT

    Attribute Name            Signature  Values                    Default
    ------------------------  ---------  ------------------------  -------
    AL_SEC_OFFSET_CLOCK_SOFT  dv         {[0.0, Any], [0.0, Any]}  N/A

    Description: the playback position, along with the device clock, both
    expressed in seconds. This attribute is read-only.

    The first value in the returned vector is the offset in seconds. The
    value is similar to that returned by AL_SEC_OFFSET, just with more
    precision.

    The second value is the device clock, in seconds. This updates at the
    same rate as the offset, and both are measured atomically with respect
    to one another. Be aware that this value may be subtly different from
    the other device clock queries due to the variable precision of
    floating-point values.

Errors

    An ALC_INVALID_DEVICE error is generated if alcGetInteger64vSOFT is
    called with a NULL device and the ALC_DEVICE_CLOCK_SOFT,
    ALC_DEVICE_LATENCY_SOFT, or ALC_DEVICE_CLOCK_LATENCY_SOFT query.
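As a quick usage illustration (not part of the spec text above), here's a
minimal sketch that opens the default device and reads the clock and
latency together. It assumes a recent OpenAL Soft checkout whose
AL/alext.h provides the ALC_SOFT_device_clock enums and the
LPALCGETINTEGER64VSOFT function pointer typedef; error checking is
omitted.

    #include <stdio.h>
    #include "AL/alc.h"
    #include "AL/alext.h"

    int main(void)
    {
        ALCdevice *device = alcOpenDevice(NULL);
        if(!device) return 1;

        if(alcIsExtensionPresent(device, "ALC_SOFT_device_clock"))
        {
            /* Extension functions are fetched at run-time. */
            LPALCGETINTEGER64VSOFT alcGetInteger64vSOFT =
                (LPALCGETINTEGER64VSOFT)alcGetProcAddress(device,
                    "alcGetInteger64vSOFT");

            /* The size parameter counts ALCint64SOFT elements, not
             * bytes. This query fills the clock and latency atomically
             * with respect to one another. */
            ALCint64SOFT values[2];
            alcGetInteger64vSOFT(device, ALC_DEVICE_CLOCK_LATENCY_SOFT,
                                 2, values);
            printf("clock: %lld ns, latency: %lld ns\n",
                   (long long)values[0], (long long)values[1]);
        }

        alcCloseDevice(device);
        return 0;
    }

The per-source variants work the same way: with a context current, load
alGetSourcei64vSOFT (from the required AL_SOFT_source_latency extension)
via alGetProcAddress and pass AL_SAMPLE_OFFSET_CLOCK_SOFT to get a playing
source's sample offset paired with the device clock.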
From chris.kcat at gmail.com  Sun Dec 17 02:47:00 2017
From: chris.kcat at gmail.com (Chris Robinson)
Date: Sat, 16 Dec 2017 23:47:00 -0800
Subject: [openal] Buffer layering
Message-ID: <9232ec6c-bb9d-0cd7-24a4-876c9c0ea362@gmail.com>

Hi folks.

Recent changes for OpenAL Soft have begun the groundwork for buffer
layering. Or buffer compositing, or whatever term you want to use for it
(mixing multiple buffers into a single input for the source in
real-time). The idea was sparked after I ran across this talk:
https://www.youtube.com/watch?v=W9UkYTw5J6A

The main purpose is to allow apps to 'build' a sound at run-time from
multiple sound components. Historically, game sounds would be
built/composed in a studio using various layers of sounds, then saved out
as a wav or mp3 or something. The game would then load and play back the
sound with little variation; maybe some slight pitch change or reverb,
but the sound itself is set and static. You could save out multiple
variations of the sound and pick one at random to play, but the amount of
variation increases roughly linearly with the amount of audio data.

These days the layering should be doable in real-time, allowing games to
alter various aspects of the sound each time it's played, randomly and/or
based on various criteria in the game, to create a richer and more varied
soundscape. This way, the amount of variation increases almost
exponentially with the amount of audio data.

Technically speaking, an app could do this today using multiple sources:
play them synchronized (using alSourcePlayv) and alter their properties
together using batching. However, this is somewhat cumbersome, since it
requires managing multiple sources for one sound and making identical
changes to each to keep the individual layers synchronized. It also has
additional processing costs, since each layer goes through the full
mixing pipeline (resampling, filters, panning (HRTF or matrixing), etc).
And when it comes to streaming, there can be issues with synchronization
if one source happens to underrun while others keep going.
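To make the comparison concrete, here's roughly what that multi-source
workaround looks like. This is only a sketch: the three buffer handles
are assumed to already hold the individual layers, and source cleanup is
omitted.

    #include "AL/al.h"
    #include "AL/alc.h"

    /* Play three pre-filled buffers as synchronized layers, one source
     * each, then retune them together. */
    static void play_layered(ALCcontext *context,
                             const ALuint layer_buffers[3], float pitch)
    {
        ALuint sources[3];
        alGenSources(3, sources);
        for(int i = 0; i < 3; i++)
            alSourcei(sources[i], AL_BUFFER, (ALint)layer_buffers[i]);

        /* alSourcePlayv starts all the layers on the same mix update. */
        alSourcePlayv(3, sources);

        /* With OpenAL Soft, property updates made between
         * alcSuspendContext and alcProcessContext are batched, so the
         * pitch change hits every layer at once. */
        alcSuspendContext(context);
        for(int i = 0; i < 3; i++)
            alSourcef(sources[i], AL_PITCH, pitch);
        alcProcessContext(context);
    }

Every subsequent change has to be replicated across all the sources like
this, which is exactly the bookkeeping (and per-layer mixing cost) the
layering system is meant to avoid.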
In contrast, the new buffer layering system allows a single source to
handle multiple buffers simultaneously. The buffers are combined when
preparing for mixing (which requires them to be the same format, but I
doubt this will be much of an issue), so it still only invokes a single
pass through the resampler, filters, and such. It also solves the
potential desynchronization with streaming sources, since each queue
entry will have all of its layers given at once.

Currently OpenAL Soft is still missing a way for apps to actually declare
buffer layers, which I'm hoping to get feedback on. One idea is to let
buffer objects act as a meta-buffer of sorts. Rather than storing samples
directly, a buffer object could instead reference multiple other buffer
objects. When setting or queueing such a buffer onto a source, it layers
the referenced buffers in its place. Or there could be a new Composite
Buffer object type which does the same thing, usable in place of a buffer
for sources but as a unique type distinct from a buffer.

Alternatively, the individual buffers could be set directly on the source
using layered properties. So, for instance:

    alSourcei(source, AL_BUFFER0, base_buffer);
    alSourcei(source, AL_BUFFER1, layered_buffer);
    ...etc...

Queueing layered buffers would need new functions. For example,

    alSourceQueueLayeredBuffers(source, num_layers, buffers);

would add one queue entry (AL_BUFFERS_QUEUED would only increase by one
regardless of the number of layers), which layers the given buffers for
that section of the queue.

There's also the idea that each layer could have a sample offset relative
to the start of the composited result, which would allow a bit more
flexibility when constructing sounds. That would complicate
alSourceQueueLayeredBuffers, though.
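To give a feel for the proposals, usage from the application side might
look like the sketch below. AL_BUFFER0/AL_BUFFER1 and
alSourceQueueLayeredBuffers are the hypothetical names from above, not
existing API, and the buffer handles are placeholders:

    /* Hypothetical API, for illustration only; none of this exists in
     * OpenAL Soft yet. Static layering via per-layer source properties: */
    alSourcei(source, AL_BUFFER0, base_buffer);
    alSourcei(source, AL_BUFFER1, layered_buffer);
    alSourcePlay(source);

    /* Streaming: both chunks form a single queue entry, so
     * AL_BUFFERS_QUEUED increases by one, not two. */
    ALuint layers[2] = { chunk_base_buffer, chunk_accent_buffer };
    alSourceQueueLayeredBuffers(source, 2, layers);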
Anyway, that's a recent thing I've been working on. Feedback is welcome!

From demetrioussharpe at netscape.net  Sun Dec 17 03:16:32 2017
From: demetrioussharpe at netscape.net (Dee Sharpe)
Date: Sun, 17 Dec 2017 02:16:32 -0600
Subject: [openal] Buffer layering
In-Reply-To: <9232ec6c-bb9d-0cd7-24a4-876c9c0ea362@gmail.com>
References: <9232ec6c-bb9d-0cd7-24a4-876c9c0ea362@gmail.com>
Message-ID: <3AB00BE9-8C27-4A79-BEBB-CB293700D5E6@netscape.net>

This sounds interesting. Looks like a precursor to sound shaders. Nice!

Apollo D. Sharpe, Sr.

> On Dec 17, 2017, at 1:47 AM, Chris Robinson wrote:
>
> Hi folks.
>
> Recent changes for OpenAL Soft have begun the groundwork for buffer
> layering. Or buffer compositing, or whatever term you want to use for
> it (mixing multiple buffers into a single input for the source in
> real-time). The idea was sparked after I ran across this talk:
> https://www.youtube.com/watch?v=W9UkYTw5J6A
>
> The main purpose is to allow apps to 'build' a sound at run-time from
> multiple sound components. Historically, game sounds would be
> built/composed in a studio using various layers of sounds, then saved
> out as a wav or mp3 or something. The game would then load and play
> back the sound with little variation; maybe some slight pitch change
> or reverb, but the sound itself is set and static. You could save out
> multiple variations of the sound and pick one at random to play, but
> the amount of variation increases roughly linearly with the amount of
> audio data.
>
> These days the layering should be doable in real-time, allowing games
> to alter various aspects of the sound each time it's played, randomly
> and/or based on various criteria in the game, to create a richer and
> more varied soundscape. This way, the amount of variation increases
> almost exponentially with the amount of audio data.
>
> Technically speaking, an app could do this today using multiple
> sources: play them synchronized (using alSourcePlayv) and alter their
> properties together using batching. However, this is somewhat
> cumbersome, since it requires managing multiple sources for one sound
> and making identical changes to each to keep the individual layers
> synchronized. It also has additional processing costs, since each
> layer goes through the full mixing pipeline (resampling, filters,
> panning (HRTF or matrixing), etc). And when it comes to streaming,
> there can be issues with synchronization if one source happens to
> underrun while others keep going.
>
> In contrast, the new buffer layering system allows a single source to
> handle multiple buffers simultaneously. The buffers are combined when
> preparing for mixing (which requires them to be the same format, but I
> doubt this will be much of an issue), so it still only invokes a
> single pass through the resampler, filters, and such. It also solves
> the potential desynchronization with streaming sources, since each
> queue entry will have all of its layers given at once.
>
> Currently OpenAL Soft is still missing a way for apps to actually
> declare buffer layers, which I'm hoping to get feedback on. One idea
> is to let buffer objects act as a meta-buffer of sorts. Rather than
> storing samples directly, a buffer object could instead reference
> multiple other buffer objects. When setting or queueing such a buffer
> onto a source, it layers the referenced buffers in its place. Or there
> could be a new Composite Buffer object type which does the same thing,
> usable in place of a buffer for sources but as a unique type distinct
> from a buffer.
>
> Alternatively, the individual buffers could be set directly on the
> source using layered properties. So, for instance:
>
>     alSourcei(source, AL_BUFFER0, base_buffer);
>     alSourcei(source, AL_BUFFER1, layered_buffer);
>     ...etc...
>
> Queueing layered buffers would need new functions. For example,
>
>     alSourceQueueLayeredBuffers(source, num_layers, buffers);
>
> would add one queue entry (AL_BUFFERS_QUEUED would only increase by
> one regardless of the number of layers), which layers the given
> buffers for that section of the queue.
>
> There's also the idea that each layer could have a sample offset
> relative to the start of the composited result, which would allow a
> bit more flexibility when constructing sounds. That would complicate
> alSourceQueueLayeredBuffers, though.
>
> Anyway, that's a recent thing I've been working on. Feedback is
> welcome!
> _______________________________________________
> openal mailing list
> openal at openal.org
> http://openal.org/mailman/listinfo/openal