From icculus at icculus.org Mon Jan 6 14:31:31 2014 From: icculus at icculus.org (Ryan C. Gordon) Date: Mon, 06 Jan 2014 14:31:31 -0500 Subject: [openal] test ... Message-ID: <52CB0493.6060400@icculus.org> test again! From icculus at icculus.org Mon Jan 6 14:16:59 2014 From: icculus at icculus.org (Ryan C. Gordon) Date: Mon, 06 Jan 2014 14:16:59 -0500 Subject: [openal] test Message-ID: <52CB012B.9050406@icculus.org> testing! --ryan. From icculus at icculus.org Mon Jan 6 14:50:06 2014 From: icculus at icculus.org (Ryan C. Gordon) Date: Mon, 06 Jan 2014 14:50:06 -0500 Subject: [openal] one more test... Message-ID: <52CB08EE.6060107@icculus.org> ...and I think we're good to go. --ryan. From chris.kcat at gmail.com Mon Jan 6 15:07:04 2014 From: chris.kcat at gmail.com (Chris Robinson) Date: Mon, 06 Jan 2014 12:07:04 -0800 Subject: [openal] Hello Message-ID: <52CB0CE8.6000202@gmail.com> Hello. :) Suppose it's good to start directing people here, now? From icculus at icculus.org Tue Jan 7 00:34:31 2014 From: icculus at icculus.org (Ryan C. Gordon) Date: Tue, 07 Jan 2014 00:34:31 -0500 Subject: [openal] Hello In-Reply-To: <52CB0CE8.6000202@gmail.com> References: <52CB0CE8.6000202@gmail.com> Message-ID: <52CB91E7.8050602@icculus.org> On 01/06/2014 03:07 PM, Chris Robinson wrote: > Hello. :) Suppose it's good to start directing people here, now? Yep! We still need to actually rebuild a website and stuff, but we can at least get discussion going again. --ryan. From chris.kcat at gmail.com Wed Jan 8 17:07:08 2014 From: chris.kcat at gmail.com (Chris Robinson) Date: Wed, 08 Jan 2014 14:07:08 -0800 Subject: [openal] MIDI support Message-ID: <52CDCC0C.5080706@gmail.com> Not sure who all is here yet, but I suppose it's a good a time as any to get a conversation going. This will be a long email, as it tries to explain how it's (currently) used, as well as my reasoning. If you have questions, or suggestions for changing something, please do. :) 1. What and why In recent commits with OpenAL Soft, I've been adding support for a MIDI extension (tentatively called ALC_SOFT_midi_interface). I realize the majority of games do and will use things like mp3 or ogg, however I feel there is potential utility in making MIDI available for games and apps. Modern ports/remakes of old game engines would obviously have some benefit of built-in MIDI support, but I also feel even new games could utilize MIDI for a much more dynamic music system compared to what you could get from streaming pre-rendered audio. The amount of memory and CPU power available these days is more than enough to handle software MIDI synths with quality soundfonts, too. As well, a good soundfont can be smaller than a game's collection of compressed music (MIDI files themselves are of course insanely small, so there's little size overhead to adding more music). I'll start off by saying the MIDI extension is based on the SF2 spec. It allows apps to specify their own soundfont so it can have some assurance about the sound it gets, rather than being at the whim of the device or system configuration. The actual quality of the sound may differ some, but that's how it is with standard OpenAL too (different methods for resampling, effects, filters, etc). By having a base spec to work with, I hope to avoid problems many old games would run into with MIDI. Currently OpenAL Soft implements MIDI using FluidSynth, however I hope to be able to handle it all internally (a lot of what's needed is already there, but some more groundwork is still needed). 2. 
API basics

The extension API is designed in a way that tries to avoid messing about with file formats (with one exception, see Soundfonts). So instead of loading a MIDI file into a buffer and playing it via a source or something, you specify timestamped MIDI events. This allows apps flexibility in how they wish to store the sequence data, and the capabilities of the sequence data (loops, dynamic volume/instrument alterations, etc).

void alMidiEventSOFT(ALuint64SOFT time, ALenum event, ALsizei channel,
                     ALsizei param1, ALsizei param2);
void alMidiSysExSOFT(ALuint64SOFT time, const ALbyte *data, ALsizei size);

The time is based on a microsecond clock. Because a 32-bit value would overflow relatively quickly, I decided to make it 64-bit. The current clock time can be retrieved by passing AL_MIDI_CLOCK_SOFT to:

ALint64SOFT alGetInteger64SOFT(ALenum pname);
void alGetInteger64vSOFT(ALenum pname, ALint64SOFT *values);

So for example, if you want to play a note a half-second after the current clock time, for one second, you could do:

ALuint64SOFT tval = alGetInteger64SOFT(AL_MIDI_CLOCK_SOFT);
alMidiEventSOFT(tval + 500000, AL_NOTEON_SOFT, 0, 60, 64);
alMidiEventSOFT(tval + 1500000, AL_NOTEOFF_SOFT, 0, 60, 0);

Specifying multiple events with the same clock time processes them in the order specified. An application could also execute events in "real-time" by always passing a timestamp of 0 (any timestamp before the current clock will execute ASAP; it won't try to pretend the event actually happened at the time specified). Though this obviously limits the event granularity to however often OpenAL processes audio updates -- specifying timestamps after the current clock allows for more precise event processing.

By default, the MIDI engine is not in a playing state. This means the clock will not be incrementing and events will not be processed. To control MIDI processing, you have the functions:

void alMidiPlaySOFT(void);
void alMidiPauseSOFT(void);
void alMidiStopSOFT(void);
void alMidiResetSOFT(void);

alMidiPlaySOFT will start/resume MIDI processing, and the clock will start incrementing.

alMidiPauseSOFT will pause MIDI processing, and the clock will stop incrementing. Any currently playing notes will stay on.

alMidiStopSOFT will stop MIDI processing, and the clock will reset to 0. Any pending MIDI events before the current clock (as of the time it was called) will be processed, and all channels will get an all-notes-off event (which will put any playing notes into a release phase). All other events are flushed.

alMidiResetSOFT will stop MIDI processing, and the clock will reset to 0. All MIDI events will be flushed, and the MIDI engine will be reset to power-on status.

3. Soundfonts

This one may be difficult to properly explain. The structure used to handle soundfonts is heavily distilled from the HYDRA structure described in the SF2 spec. The spec itself mentions that the structure is not optimized for run-time synthesis or on-the-fly editing, so I tried to cut out a lot of the stuff that's unneeded for, or unnecessarily complicates, run-time synthesis. Basically a lot of things got merged, so there's a bit more load-time work for better run-time processing (or so I hope).

Soundfonts are broken up into 3 objects: Soundfont, Preset, and Fontsound (name is debatable, but I'm not sure what else to call it to avoid confusion or clashes). A soundfont contains PCM samples and a collection of presets, which contain a collection of fontsounds, which contain generator properties, modulators, and sample info.
These objects have the standard alGen*, alDelete*, and alIs* functions.

In a break from standard AL API design, a function is provided to load an SF2 format soundfont into a soundfont object. I decided to do this because the SF2 format can be rather difficult to parse, and even more difficult to properly process and load into AL objects. And it gets even more difficult if you want proper error checking. I think it's a bit unfair for apps to rely on 3rd party libs (like Alure) or to create a loader themselves to properly load a soundfont.

void alLoadSoundfontSOFT(ALuint id,
                         size_t(*cb)(ALvoid*,size_t,ALvoid*),
                         ALvoid *user);

Instead of taking a filename, it's given a read callback. This allows apps to store soundfonts however they want and not be restricted to having them on disk for standard IO access (i.e. they can be stored in a resource archive, a zip file or whatever). For example:

size_t read_func(ALvoid *buffer, size_t len, ALvoid *user)
{
    return fread(buffer, 1, len, (FILE*)user);
}
...
FILE *file = fopen("my-soundfont.sf2", "rb");

ALuint sfont;
alGenSoundfontsSOFT(1, &sfont);
alLoadSoundfontSOFT(sfont, read_func, file);
fclose(file);

Selecting a soundfont to use is done with the functions:

void alMidiSoundfontSOFT(ALuint id);
void alMidiSoundfontvSOFT(ALsizei count, const ALuint *ids);

These can only be called when MIDI processing is stopped or reset. Selecting a soundfont will deselect any previously selected. To deselect all soundfonts without selecting a new one, call alMidiSoundfontvSOFT with a count of 0. A soundfont cannot be deleted if it's currently selected.

A default soundfont can be accessed using soundfont ID 0. This soundfont can be whatever the user or system may have configured. Obviously this isn't something that can be relied on, but it can be a useful option if a user has a preferred soundfont and your music uses standard MIDI functionality (i.e. doesn't rely on a non-standard instrument set, or on modulators that significantly alter the behavior of MIDI controllers).

This email is getting a bit long, so I'll probably end it here. There's obviously more to it, and I can attempt to explain the rest of the soundfont stuff if anyone's curious. But for now, I wouldn't mind some feedback on what I've written about so far. :)

Thanks!

From choubeagle at gmail.com Tue Jan 14 10:57:10 2014 From: choubeagle at gmail.com (Beagle Chou) Date: Tue, 14 Jan 2014 23:57:10 +0800 Subject: [openal] Hello Message-ID: Hi there. I am new here, and have waited for two months to join. Glad to see the website reopen. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.kcat at gmail.com Wed Jan 15 20:10:04 2014 From: chris.kcat at gmail.com (Chris Robinson) Date: Wed, 15 Jan 2014 17:10:04 -0800 Subject: [openal] MIDI support In-Reply-To: <52CDCC0C.5080706@gmail.com> References: <52CDCC0C.5080706@gmail.com> Message-ID: <52D7316C.1020907@gmail.com> Just pinging this again for anyone who newly registered, in case they missed it. Or is interest in it just that low? On 01/08/2014 02:07 PM, Chris Robinson wrote: > Not sure who all is here yet, but I suppose it's a good a time as any to > get a conversation going. This will be a long email, as it tries to > explain how it's (currently) used, as well as my reasoning. If you have > questions, or suggestions for changing something, please do. :) > > > 1. What and why > > In recent commits with OpenAL Soft, I've been adding support for a MIDI > extension (tentatively called ALC_SOFT_midi_interface).
I realize the > majority of games do and will use things like mp3 or ogg, however I feel > there is potential utility in making MIDI available for games and apps. > > Modern ports/remakes of old game engines would obviously have some > benefit of built-in MIDI support, but I also feel even new games could > utilize MIDI for a much more dynamic music system compared to what you > could get from streaming pre-rendered audio. The amount of memory and > CPU power available these days is more than enough to handle software > MIDI synths with quality soundfonts, too. As well, a good soundfont can > be smaller than a game's collection of compressed music (MIDI files > themselves are of course insanely small, so there's little size overhead > to adding more music). > > I'll start off by saying the MIDI extension is based on the SF2 spec. It > allows apps to specify their own soundfont so it can have some assurance > about the sound it gets, rather than being at the whim of the device or > system configuration. The actual quality of the sound may differ some, > but that's how it is with standard OpenAL too (different methods for > resampling, effects, filters, etc). By having a base spec to work with, > I hope to avoid problems many old games would run into with MIDI. > > Currently OpenAL Soft implements MIDI using FluidSynth, however I hope > to be able to handle it all internally (a lot of what's needed is > already there, but some more groundwork is still needed). > > > 2. API basics > > The extension API is designed in a way that tries to avoid messing about > with file formats (with one exception, see Soundfonts). So instead of > loading a MIDI file into a buffer and playing it via a source or > something, you specify timestamped MIDI events. This allows apps > flexibility in how they wish to store the sequence data, and the > capabilities of the sequence data (loops, dynamic volume/instrument > alterations, etc). > > void alMidiEventSOFT(ALuint64SOFT time, ALenum event, ALsizei channel, > ALsizei param1, ALsizei param2); > void alMidiSysExSOFT(ALuint64SOFT time, const ALbyte *data, > ALsizei size); > > The time is based on a microsecond clock. Because a 32-bit value would > overflow relatively quickly, I decided to make it 64-bit. The current > clock time can be retrieved with passing AL_MIDI_CLOCK_SOFT to: > > ALint64SOFT alGetInteger64SOFT(ALenum pname); > void alGetInteger64vSOFT(ALenum pname, ALint64SOFT *values); > > So for example, if you want to play a note a half-second after the > current clock time, for one second, you could do: > > ALuint64SOFT tval = alGetInteger64SOFT(AL_MIDI_CLOCK_SOFT); > alMidiEventSOFT(tval + 500000, AL_NOTEON_SOFT, 0, 60, 64); > alMidiEventSOFT(tval + 1500000, AL_NOTEOFF_SOFT, 0, 60, 0); > > Specifying multiple events with the same clock time processes them in > the order specified. An application could also execute events in > "real-time" by always passing a timestamp of 0 (any timestamp before the > current clock will execute ASAP; it won't try to pretend the event > actually happened at the time specified). Though this obviously limits > the event granularity to however often OpenAL processes audio updates -- > specifying timestamps after the current clock allows for more precise > event processing. > > By default, the MIDI engine is not in a playing state. The means the > clock will not be incrementing and events will not be processed. 
To > control MIDI processing, you have the functions: > > void alMidiPlaySOFT(void); > void alMidiPauseSOFT(void); > void alMidiStopSOFT(void); > void alMidiResetSOFT(void); > > alMidiPlaySOFT will start/resume MIDI processing, and the clock will > start incrementing. > > alMidiPauseSOFT will pause MIDI processing, and the clock will stop > incrementing. Any currently playing notes will stay on. > > alMidiStopSOFT will stop MIDI processing, and the clock will reset to 0. > Any pending MIDI events before the current clock (as of the time it as > called) will be processed, and all channels will get an all-notes-off > event (which will put any playing notes into a release phase). All other > events are flushed. > > alMidiResetSOFT will stop MIDI processing, and the clock will reset to > 0. All MIDI events will be flushed, and the MIDI engine will be reset to > power-on status. > > > 3. Soundfonts > > This one may be difficult to properly explain. The structure used to > handle soundfonts is heavily distilled from the HYDRA structure > described in the SF2 spec. The spec itself mentions that the structure > is not optimized for run-time synthesis or on-the-fly editing, so I > tried to cut out a lot of the stuff that's unneeded for, or > unnecessarily complicates, run-time synthesis. Basically a lot of things > got merged, so there's a bit more load-time work for better run-time > processing (or so I hope). > > Soundfonts are broken up into 3 objects. Soundfont, Preset, and > Fontsound (name is debatable, but I'm not sure what else to call it to > avoid confusion or clashes). A soundfont contains PCM samples and a > collection of presets, which contain a collection of fontsounds, which > contain generator properties, modulators, and sample info. These objects > have the standard alGen*, alDelete*, and alIs* functions. > > In a break from standard AL API design, a function is provided to load > an SF2 format soundfont into a soundfont object. I decided to do this > because the SF2 format can be rather difficult to parse, and even more > difficult to properly process and load into AL objects. And it gets even > more difficult if you want proper error checking. I think it's a bit > unfair for apps to rely on 3rd party libs (like Alure) or to create a > loader themselves to properly load a soundfont. > > void alLoadSoundfontSOFT(ALuint id, > size_t(*cb)(ALvoid*,size_t,ALvoid*), > ALvoid *user); > > Instead of taking a filename, it's given a read callback. This allows > apps to store soundfonts however they want and not be restricted to > having them on disk for standard IO access (i.e. they can be stored in a > resource archive, a zip file or whatever). For example: > > size_t read_func(ALvoid *buffer, suze_t len, ALvoid *user) > { > return fread(buffer, 1, len, (FILE*)user); > } > ... > FILE *file = fopen("my-soundfont.sf2", "rb"); > > ALuint sfont; > alGenSoundfontsSOFT(1, &sfont); > alLoadSoundfontSOFT(sfont, read_func, file); > fclose(file); > > Selecting a soundfont to use is done with the functions: > > void alMidiSoundfontSOFT(ALuint id); > void alMidiSoundfontvSOFT(ALsizei count, const ALuint *ids); > > These can only be called when MIDI processing is stopped or reset. > Selecting a soundfont will deselect any previously selected. To deselect > all soundfonts without selecting a new one, call alMidiSoundfontvSOFT > with a count of 0. A soundfont cannot be deleted if it's currently > selected. > > A default soundfont can be accessed using soundfont ID 0. 
This soundfont > can be whatever the user or system may have configured. Obviously this > isn't something that can be relied on, but it can be a useful option if > a user has a preferred soundfont and your music uses standard MIDI > functionality (i.e. doesn't rely on a non-standard instrument set, or on > modulators that significantly alter the behavior of MIDI controllers). > > > This email is getting a bit long, so I'll probably end it here. There's > obviously more to it, and I can attempt to explain the rest of the > soundfont stuff if anyone's curious. But for now, I wouldn't mind some > feedback on what I've written about so far. :) > > Thanks! From simons.philippe at gmail.com Tue Jan 21 06:21:55 2014 From: simons.philippe at gmail.com (Philippe Simons) Date: Tue, 21 Jan 2014 12:21:55 +0100 Subject: [openal] openSL backend Message-ID: I'd like to enable low-latency output on Android. For this, the user should request a specific Frequency and UpdateSize when creating the context. Currently, the OpenSL implementation overrides these with hardcoded values. I've commented this code out to let it use the values set by UpdateDeviceParams(), and it seems to work fine on my devices. My question is: why are there hardcoded values in the OpenSL implementation, and can they be safely removed? Philippe -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.kcat at gmail.com Tue Jan 21 07:20:47 2014 From: chris.kcat at gmail.com (Chris Robinson) Date: Tue, 21 Jan 2014 04:20:47 -0800 Subject: [openal] openSL backend In-Reply-To: References: Message-ID: <52DE661F.70900@gmail.com> On 01/21/2014 03:21 AM, Philippe Simons wrote: > I'd like to enable low-latency output on Android. > > For this, the user should request a specific Frequency and UpdateSize when > creating the context. > > Currently, the OpenSL implementation overrides these with hardcoded values. > I've commented this code out to let it use the values set > by UpdateDeviceParams(), and it seems to work fine on my devices. > > My question is: why are there hardcoded values in the OpenSL implementation, > and can they be safely removed? They were hardcoded to try to avoid compatibility problems. I don't know what Android/OpenSL is capable of, and I can't test the OpenSL code myself to know how to react to incompatible settings (e.g. unknown/unsupported sample rate or buffer size). Resetting the device is particularly touchy because if it doesn't set something usable, the device gets marked as disconnected and basically becomes unusable. The backend's reset method has to be as foolproof as possible, and has to get the underlying device in a known and usable state as best it can, regardless of whatever the initial parameters may have been. If you believe your code is robust enough to deal safely with that, though, I can take a look. In regards to UpdateSize, however, that shouldn't be controlled by user parameters. The device period/mix size is currently linked to the refresh, but that may change. At most, NumUpdates should scale if the playback frequency needs to change -- e.g. if you request 32khz playback with 2 updates, and you get 48khz playback, the number of updates should increase to 3 so that it takes about the same amount of time to play the whole buffer.
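(To make the arithmetic concrete, that rescaling amounts to something like the following -- an illustrative sketch only, not code from the backend, with made-up variable names:)

    /* Sketch: rescale NumUpdates so the total buffered time stays roughly the
     * same when the device frequency differs from the requested one.
     * Matches the 32khz/2-updates -> 48khz/3-updates example above. */
    ALuint reqFreq    = 32000;  /* frequency the app requested */
    ALuint gotFreq    = 48000;  /* frequency the device actually provided */
    ALuint numUpdates = 2;      /* requested update count */

    /* round to the nearest whole update */
    numUpdates = (numUpdates*gotFreq + reqFreq/2) / reqFreq;  /* -> 3 */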
From simons.philippe at gmail.com Tue Jan 21 08:31:56 2014 From: simons.philippe at gmail.com (Philippe Simons) Date: Tue, 21 Jan 2014 14:31:56 +0100 Subject: [openal] openSL backend In-Reply-To: <52DE661F.70900@gmail.com> References: <52DE661F.70900@gmail.com> Message-ID: The only constraint I'm aware on android is on the Frequency, Channels and FrameSize, I'm ok with leaving Channels and FrameSize hardcoded, and I'll add code to check the Frequency value with 44.1 as fallback. about: UpdateSize, in the current openSL implementation, you are already changing it in the _reset_ function. Device->UpdateSize = (ALuint64)Device->UpdateSize * 44100 / Device->Frequency; Device->UpdateSize = Device->UpdateSize * Device->NumUpdates / 2; Device->NumUpdates = 2; Which btw, double the size of the given UpdateSize, since default values are (Frequency = 44100 and NumUpdates = 4) May I change UpdateSize by just a bit ie: UpdateSize is 1024, but I'd like to use 1200? On Tue, Jan 21, 2014 at 1:20 PM, Chris Robinson wrote: > On 01/21/2014 03:21 AM, Philippe Simons wrote: > >> I'd like to enable low-lantency output on android. >> >> For this, the user should request a specific Frequency and UpdateSize when >> creating the context. >> >> Currently the OpenSL implementation, override these with hardcoded values. >> I've commented this code to let it uses the values set >> by UpdateDeviceParams(), and it seems to works fine on my devices. >> >> My question is, why is there hardcoded values in the OpenSL >> implementation, >> and can it be safely removed? >> > > They were hardcoded to try to avoid compatibility problems. I don't know > what Android/OpenSL is capable of, and I can't test the OpenSL code myself > to know how to react to incompatible settings (e.g. unknown/unsupported > sample rate or buffer size). > > Reseting the device is particularly touchy because if it doesn't set > something usable, the device gets marked as disconnected and basically > becomes unusable. The backend's reset method has to be as foolproof as > possible, and has to get the underlying device in a known and usable state > as best it can, regardless of whatever the initial parameters may have been. > > If you believe your code is robust enough to deal safely with that, > though, I can take a look. > > In regards to UpdateSize, however, that shouldn't be controlled by user > parameters. The device period/mix size is currently linked to the refresh, > but that may change. At most, NumUpdates should scale if the playback > frequency needs to change -- e.g. if you request 32khz playback with 2 > updates, and you get 48khz playback, the number of updates should increase > to 3 so that it takes about the same amount of time to play the whole > buffer. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.kcat at gmail.com Tue Jan 21 11:37:38 2014 From: chris.kcat at gmail.com (Chris Robinson) Date: Tue, 21 Jan 2014 08:37:38 -0800 Subject: [openal] openSL backend In-Reply-To: References: <52DE661F.70900@gmail.com> Message-ID: <52DEA252.8080005@gmail.com> On 01/21/2014 05:31 AM, Philippe Simons wrote: > about: UpdateSize, in the current openSL implementation, you are already > changing it in the _reset_ function. Yes, just about any of the device format parameters can be changed as needed to fit the actual device constraints, and/or whatever actually gets set on the device. 
The only real restriction is that NumUpdates should be between 2 and 16, UpdateSize is between 64 and 8192, and Frequency is at least 8khz (maybe 11khz? I think the I3DL2 spec that EFX is based on has a minimum requirement of 11khz or 22khz) and should try to avoid being higher than 96khz (though it would likely work, it could maybe hit some range limits in some cases). I just mean that the UpdateSize should not be updated to reflect what the app specifies for the ALC_REFRESH setting, because the two are logically separate functions -- ALC_REFRESH determines how often the internal mixing parameters refresh, and UpdateSize is how many samples to mix on each update. Currently they're tied together because the internal parameters are updated with each mix update, but that probably won't always be the case (updating internal parameters is a bit costly, so it would benefit the mixer to not waste time doing that and instead just focus on mixing; it will also help in making the mixer lock-less). > May I change UpdateSize by just a bit > ie: UpdateSize is 1024, but I'd like to use 1200? A value like that would certainly work if the device can support it, but I think a power-of-two update size is more advantageous than an exact refresh value at 48 or 24khz. Is there a particular reason for that? From simons.philippe at gmail.com Tue Jan 21 14:04:06 2014 From: simons.philippe at gmail.com (Philippe Simons) Date: Tue, 21 Jan 2014 20:04:06 +0100 Subject: [openal] openSL backend In-Reply-To: <52DEA252.8080005@gmail.com> References: <52DE661F.70900@gmail.com> <52DEA252.8080005@gmail.com> Message-ID: On Tue, Jan 21, 2014 at 5:37 PM, Chris Robinson wrote: > On 01/21/2014 05:31 AM, Philippe Simons wrote: > >> about: UpdateSize, in the current openSL implementation, you are already >> changing it in the _reset_ function. >> > > Yes, just about any of the device format parameters can be changed as > needed to fit the actual device constraints, and/or whatever actually gets > set on the device. The only real restriction is that NumUpdates should be > between 2 and 16, UpdateSize is between 64 and 8192, and Frequency is at > least 8khz (maybe 11khz? I think the I3DL2 spec that EFX is based on has a > minimum requirement of 11khz or 22khz) and should try to avoid being higher > than 96khz (though it would likely work, it could maybe hit some range > limits in some cases). > > I just mean that the UpdateSize should not be updated to reflect what the > app specifies for the ALC_REFRESH setting, because the two are logically > separate functions -- ALC_REFRESH determines how often the internal mixing > parameters refresh, and UpdateSize is how many samples to mix on each > update. Currently they're tied together because the internal parameters are > updated with each mix update, but that probably won't always be the case > (updating internal parameters is a bit costly, so it would benefit the > mixer to not waste time doing that and instead just focus on mixing; it > will also help in making the mixer lock-less). Ok I see, so if I want to enable low-latency on Android, I can change the UpdateSize and Frequency in _reset_. I'll protect this with #ifdef ANDROID since it will involve JNI binding to retrieve the "preferred" values. > > > May I change UpdateSize by just a bit >> ie: UpdateSize is 1024, but I'd like to use 1200? >> > No it was just a sample, but currently I think that for a Nexus 4 the UpdateSize should be 240 (or a multiple of). I'll submit you a patch soon.
> > A value like that would certainly work if the device can support it, but I > think a power-of-two update size is more advantageous than an exact refresh > value at 48 or 24khz. Is there a particular reason for that? > _______________________________________________ > openal mailing list > openal at openal.org > http://openal.org/mailman/listinfo/openal > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at enkisoftware.com Thu Jan 23 10:11:00 2014 From: doug at enkisoftware.com (Doug Binks) Date: Thu, 23 Jan 2014 16:11:00 +0100 Subject: [openal] MIDI support Message-ID: MIDI support looks interesting, but I was wondering if this could potentially be used as a positional source? Cheers! -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at enkisoftware.com Thu Jan 23 10:46:26 2014 From: doug at enkisoftware.com (Doug Binks) Date: Thu, 23 Jan 2014 16:46:26 +0100 Subject: [openal] Synchronizing 3D sources Message-ID: I had an interesting (but short) twitter discussion with Leonard Ritter (@paniq) about synchronizing 3D sources in OpenAL, and was wondering if folk on this thread had ideas / input. In brief, the problem is that there is a need to synchronize the playback of one 3D positional source with another at a sample-accurate (i.e. sub-buffer) level. I can see a number of ways of going about this without OpenAL alterations, but they're all fairly involved. Due to pitch and Doppler variations I don't think it's possible to implement an API which guarantees continuous synchronized playback of multiple spatial sources, but timing the start of one with a sample position of another should be possible. My proposal would be a trigger API. Triggers have an event sample position (likely best using AL_SAMPLE_OFFSET_LATENCY_SOFT i64v 32.32 int.fract format), and a list of sources to play (played all at once when the trigger is hit). Sources are modified to add a list of triggers, which are ordered when added. Processing of the triggers would appear to be best done in mixer.c, at the end of MixSource, or in alu.c during aluMixData where the source processing takes place (which would potentially make it easier to add the sources). This would be low overhead: in the case of no triggers, one conditional branch per source, and only one extra integer test per source with a trigger.

/* apply triggers, example for just before "Update source info" in mixer.c */
while( Source->trigger )
{
    ALuint timeToGo = Source->trigger->time - Source->position;
    if( timeToGo < DataPosInt ) // using int pos as example
    {
        // fire off event
        EnqueueSourceDelayed( Source->trigger->source, // src to play
                              Source,                  // src of event
                              timeToGo );              // delay
        Source->trigger = Source->trigger->next;
    }
    else
    {
        break;
    }
}

The EnqueueSourceDelayed function would add the sources to the list of playing sources with a delay which would attempt to time the beginning of the source with the sample offset required. I'd be interested to know if people think that this would be useful, or if there is already some functionality I've missed which could achieve a similar objective. I'd be happy to take a look at making these changes for contribution to OpenAL, first of all detailing the API in greater depth for feedback. Thanks for reading! twitter: @dougbinks web: http://www.enkisoftware.com/about.html#doug -------------- next part -------------- An HTML attachment was scrubbed...
URL: From doug at enkisoftware.com Thu Jan 23 13:22:00 2014 From: doug at enkisoftware.com (Doug Binks) Date: Thu, 23 Jan 2014 19:22:00 +0100 Subject: [openal] Synchronizing 3D sources In-Reply-To: References: Message-ID: For syncing to sources without Doppler and where pitch changes are not to be expected a simpler API would be to permit sources to be played back at an absolute time, coupled with an API to get the current output sample time. An additional requirement might be to be able to get the current time and a sources sample offset at the same time. I can specify this in more detail if there's any interest. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.kcat at gmail.com Fri Jan 24 03:40:10 2014 From: chris.kcat at gmail.com (Chris Robinson) Date: Fri, 24 Jan 2014 00:40:10 -0800 Subject: [openal] MIDI support In-Reply-To: References: Message-ID: <52E226EA.8080506@gmail.com> On 01/23/2014 07:11 AM, Doug Binks wrote: > MIDI support looks interesting, but I was wondering if this could > potentially be used as a positional source? I think the individual notes would be more analogous to sources, rather than the mixed output. Each note can be panned, attenuated, pitch shifted, etc, individually, which then gets mixed to dry and wet paths. From xavier at tinybigstory.be Fri Jan 24 06:24:23 2014 From: xavier at tinybigstory.be (Xavier Wielemans) Date: Fri, 24 Jan 2014 12:24:23 +0100 Subject: [openal] Real-time interactive 3D audio landscape Message-ID: Hi everyone, ?My name is Xavier, I am a free-lance interactive multimedia designer/developer, mainly active in the museal/artistic/cultural domain. I am looking into OpenAL in the context of an art installation I'm working on, on behalf of Constantin Dupuis, student at Le Fresnoy . [ Le Fresnoy is a french contemporary art studio that is also a very high level school (2 years program for a selection of 25 students per year. The school program is focused on each student creating and producing at the end of each year a piece of contemporary art (installation or performance) involving in most cases quite a lot of interactive media technologies. All these creations are integrated in the Panoramaexhibition that opens each year in June. This year will be the 16th edition, and Panorama has now gained much public awareness and critic recognition. It's hard to beleive that all the great work showcased in the exhibition is "only student work". ] Constantin being an experimental musician and sound designer, his idea for this year is to offer visitors the possibility to navigate through a virtual 3D scene, in real-time. Nothing very original so far, except that in his case the 3D scene is made of sound only. The user/visitor trying the installation will be blindfolded, seated in a custom-designed chair fitted with a navigation pad, and given a pair of stereo headphones, nothig else. He/she will walk through the virtural environment using the pad controls and perceive a 3D spatialized sonic landscape. By moving around, he/she will gradually build an internal representation of that sonic world and be able to orient him/herself. Basically, the scene will be composed of static audio sources/clips, each of them mono and positionned in a fixed point of the 3D scene. Only the listener will move through the scene, getting closer or further away from the sources. 
(Only exception: a virtual "character" consisting of a moving audio source will also be present in the scene, moving in reaction to the user's motion, following or guiding him in the virtual world.) Binaural 3D audio rendering is essential, of course, to give the user a clear perception of the audio sources's 3D positions. Constantin asked me to develop for him a sort of "editor" allowing him to integrate sources in the 3D scene and to position them precisely, and a "player" allowing the users to navigate through the resulting 3D scene (without being able to edit/modify it). Given that the project has a very tight budget, I would like to use an existing 3D engine such as Unity3D or Unreal as the base of both the editor and the player. If not possible, though, I'm OK to develop an editor and player from scratch, most probably in C# or C++, then. Any help/tips appreciated... Is there an existing OpenAL implementation on Windows, working with Unity (preferably) or Unreal? Or otherwise, a good quality windows implementation coming with an SDK or some example code to help me start developing my own editor/player in C++ or C# ? I've spoken with Blue Ripple Sound (Rapture3D implementation vendor) already, they are most willing to assist me but for a development case like mine they depend on Creative's OpenAL SDK, that they are not allowed to redistribute - And Creative has closed down the OpenAL.org website, it seems - which you certainly all know very well already... :) Most important aspect for us is the precision of the 3D spatialization of the sound sources. Audio quality comes second, as long as the user is able to precisely perceive where the sound is coming from around him/her... Im am looking forward to (3D-)hearing from you! Kind regards, Xavier -- tiny*big*story sprl Avenue de la Paix 44 B-1420 Braine-l'Alleud Belgique +32 472 98 56 97 xavier at tinybigstory.be skype: xavier.wielemans TVA BE 0501.800.004 ING 363-1136573-64 >* watch Showreel 2013 * [image: vimeo.com/tinybigstory/tinybigstoryshowreel2013] -- tiny*big*story sprl Avenue de la Paix 44 B-1420 Braine-l'Alleud Belgique +32 472 98 56 97 xavier at tinybigstory.be skype: xavier.wielemans TVA BE 0501.800.004 ING 363-1136573-64 >* watch Showreel 2013 * [image: vimeo.com/tinybigstory/tinybigstoryshowreel2013] -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at enkisoftware.com Fri Jan 24 06:30:54 2014 From: doug at enkisoftware.com (Doug Binks) Date: Fri, 24 Jan 2014 12:30:54 +0100 Subject: [openal] Real-time interactive 3D audio landscape In-Reply-To: References: Message-ID: Unity 3D should have all you need without further audio libraries such as OpenAL. Unity's audio solution is powered by FMOD, see here: http://unity3d.com/unity/quality/audio Note that FMOD has a free license for non commercial usage http://www.fmod.org/sales/, I'm not sure how the Unity plugin works from a licensing perspective (Unity may cover the commercial license costs, but I don't know). On 24 January 2014 12:24, Xavier Wielemans wrote: > Hi everyone, > > ?My name is Xavier, I am a free-lance interactive multimedia > designer/developer, mainly active in the museal/artistic/cultural domain. > > I am looking into OpenAL in the context of an art installation I'm working > on, on behalf of Constantin Dupuis, student at Le Fresnoy > . > > [ Le Fresnoy is a french contemporary art studio that is also a very high > level school (2 years program for a selection of 25 students per year. 
The > school program is focused on each student creating and producing at the end > of each year a piece of contemporary art (installation or performance) > involving in most cases quite a lot of interactive media technologies. > > All these creations are integrated in the Panoramaexhibition that opens each year in June. This year will be the 16th > edition, and Panorama has now gained much public awareness and critic > recognition. It's hard to beleive that all the great work showcased in the > exhibition is "only student work". ] > > Constantin being an experimental musician and sound designer, his idea for > this year is to offer visitors the possibility to navigate through a > virtual 3D scene, in real-time. Nothing very original so far, except that > in his case the 3D scene is made of sound only. The user/visitor trying the > installation will be blindfolded, seated in a custom-designed chair fitted > with a navigation pad, and given a pair of stereo headphones, nothig else. > He/she will walk through the virtural environment using the pad controls > and perceive a 3D spatialized sonic landscape. By moving around, he/she > will gradually build an internal representation of that sonic world and be > able to orient him/herself. > > Basically, the scene will be composed of static audio sources/clips, each > of them mono and positionned in a fixed point of the 3D scene. Only the > listener will move through the scene, getting closer or further away from > the sources. (Only exception: a virtual "character" consisting of a moving > audio source will also be present in the scene, moving in reaction to the > user's motion, following or guiding him in the virtual world.) > > Binaural 3D audio rendering is essential, of course, to give the user a > clear perception of the audio sources's 3D positions. > > Constantin asked me to develop for him a sort of "editor" allowing him to > integrate sources in the 3D scene and to position them precisely, and a > "player" allowing the users to navigate through the resulting 3D scene > (without being able to edit/modify it). > > Given that the project has a very tight budget, I would like to use an > existing 3D engine such as Unity3D or Unreal as the base of both the editor > and the player. If not possible, though, I'm OK to develop an editor and > player from scratch, most probably in C# or C++, then. > > Any help/tips appreciated... Is there an existing OpenAL implementation on > Windows, working with Unity (preferably) or Unreal? Or otherwise, a good > quality windows implementation coming with an SDK or some example code to > help me start developing my own editor/player in C++ or C# ? > > I've spoken with Blue Ripple Sound (Rapture3D implementation vendor) > already, they are most willing to assist me but for a development case like > mine they depend on Creative's OpenAL SDK, that they are not allowed to > redistribute - And Creative has closed down the OpenAL.org website, it > seems - which you certainly all know very well already... :) > > Most important aspect for us is the precision of the 3D spatialization of > the sound sources. Audio quality comes second, as long as the user is able > to precisely perceive where the sound is coming from around him/her... > > Im am looking forward to (3D-)hearing from you! 
> > Kind regards, > > Xavier > -- > > tiny*big*story sprl > > Avenue de la Paix 44 > B-1420 Braine-l'Alleud > Belgique > +32 472 98 56 97 > xavier at tinybigstory.be > skype: xavier.wielemans > > TVA BE 0501.800.004 > ING 363-1136573-64 > > > >* watch Showreel 2013 > * > [image: vimeo.com/tinybigstory/tinybigstoryshowreel2013] > > > > -- > > tiny*big*story sprl > > Avenue de la Paix 44 > B-1420 Braine-l'Alleud > Belgique > +32 472 98 56 97 > xavier at tinybigstory.be > skype: xavier.wielemans > > TVA BE 0501.800.004 > ING 363-1136573-64 > > > >* watch Showreel 2013 > * > [image: vimeo.com/tinybigstory/tinybigstoryshowreel2013] > > _______________________________________________ > openal mailing list > openal at openal.org > http://openal.org/mailman/listinfo/openal > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.kcat at gmail.com Fri Jan 24 06:34:07 2014 From: chris.kcat at gmail.com (Chris Robinson) Date: Fri, 24 Jan 2014 03:34:07 -0800 Subject: [openal] Synchronizing 3D sources In-Reply-To: References: Message-ID: <52E24FAF.8020904@gmail.com> On 01/23/2014 07:46 AM, Doug Binks wrote: > In brief, the problem is that there is a need to synchronize the playback > of one 3D positional source with another at a sample accurate (i.e. > sub-buffer) level. I can see a number of ways of going about this without > OpenAL alterations, but they're all fairly involved. > > Due to pitch and Doppler variations I don't think it's possible to > implement an API which guarantees continuous synchronized playback of > multiple spatial sources, but timing the start of one with a sample > position of another should be possible. > > My proposal would be a trigger API. Triggers have an event sample position > (likely best using AL_SAMPLE_OFFSET_LATENCY_SOFT i64v 32.32 int.fract > format), and a list of sources to play (played all at once when the trigger > is hit). I think synchronizing it to the output's sample offset would be a more viable option than a source offset. Actually, using a microsecond or nanosecond clock would probably be even better. The implementation would then just get it as close as possible. To help synchronize with a playing source, there'd be a way to get the source's offset, latency, and the device clock, all in one "atomic" query. That would allow the app a reasonable way of calculating a device clock time that would correspond to a source offset. The benefit of doing it this way is that it could even be emulated using the loopback device... use a clock based on how many samples were rendered, and start sources whenever you need after rendering the appropriate amount of samples. The downside to that is you become responsible for getting the rendered samples out to a device. There are ways to do it, though not without some drawbacks (e.g. difficulty in supporting surround sound and auto-detection of a preferred mode). From doug at enkisoftware.com Fri Jan 24 06:43:48 2014 From: doug at enkisoftware.com (Doug Binks) Date: Fri, 24 Jan 2014 12:43:48 +0100 Subject: [openal] Synchronizing 3D sources In-Reply-To: <52E24FAF.8020904@gmail.com> References: <52E24FAF.8020904@gmail.com> Message-ID: That sounds similar to what we were thinking. For device clock, one extra issue is that the programmer needs to know when to submit the play request in order to ensure the buffer can be queued before the device mixes the next output buffer. So we may also need to query the device output buffer size - I'm not sure if that's already in the extension spec. 
I think I can get an example implementation going on a branch of OpenAL Soft on my git fork, and get back in touch for commenting. On 24 January 2014 12:34, Chris Robinson wrote: > On 01/23/2014 07:46 AM, Doug Binks wrote: > >> In brief, the problem is that there is a need to synchronize the playback >> of one 3D positional source with another at a sample accurate (i.e. >> sub-buffer) level. I can see a number of ways of going about this without >> OpenAL alterations, but they're all fairly involved. >> >> Due to pitch and Doppler variations I don't think it's possible to >> implement an API which guarantees continuous synchronized playback of >> multiple spatial sources, but timing the start of one with a sample >> position of another should be possible. >> >> My proposal would be a trigger API. Triggers have an event sample position >> (likely best using AL_SAMPLE_OFFSET_LATENCY_SOFT i64v 32.32 int.fract >> format), and a list of sources to play (played all at once when the >> trigger >> is hit). >> > > I think synchronizing it to the output's sample offset would be a more > viable option than a source offset. Actually, using a microsecond or > nanosecond clock would probably be even better. The implementation would > then just get it as close as possible. To help synchronize with a playing > source, there'd be a way to get the source's offset, latency, and the > device clock, all in one "atomic" query. That would allow the app a > reasonable way of calculating a device clock time that would correspond to > a source offset. > > The benefit of doing it this way is that it could even be emulated using > the loopback device... use a clock based on how many samples were rendered, > and start sources whenever you need after rendering the appropriate amount > of samples. The downside to that is you become responsible for getting the > rendered samples out to a device. There are ways to do it, though not > without some drawbacks (e.g. difficulty in supporting surround sound and > auto-detection of a preferred mode). > _______________________________________________ > openal mailing list > openal at openal.org > http://openal.org/mailman/listinfo/openal > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.kcat at gmail.com Fri Jan 24 07:29:46 2014 From: chris.kcat at gmail.com (Chris Robinson) Date: Fri, 24 Jan 2014 04:29:46 -0800 Subject: [openal] Synchronizing 3D sources In-Reply-To: References: <52E24FAF.8020904@gmail.com> Message-ID: <52E25CBA.5000708@gmail.com> On 01/24/2014 03:43 AM, Doug Binks wrote: > That sounds similar to what we were thinking. > > For device clock, one extra issue is that the programmer needs to know when > to submit the play request in order to ensure the buffer can be queued > before the device mixes the next output buffer. So we may also need to > query the device output buffer size - I'm not sure if that's already in the > extension spec. It should be enough for the app to know the update/period time and add that to the retrieved clock time (i.e. a time or frequency derived from UpdateSize). So even if the clock updates just after you read it, you will still specify an equal or larger clock time to get accurate timing. Currently you could get that info with the ALC_REFRESH value (update time = clock_res / refresh), although like I noted in another message, that's not the best idea because the refresh is not a logical product of the update size (it currently is, but that's an implementation detail that could change). 
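For illustration, the scheduling pattern that implies would look roughly like the sketch below -- the device-clock query and the play-at-time call are hypothetical placeholders for whatever the extension ends up defining (assuming an open device and a generated source), while ALC_REFRESH and alcGetIntegerv are existing ALC API:

    ALCint refresh = 0;
    alcGetIntegerv(device, ALC_REFRESH, 1, &refresh);   /* updates per second */

    /* one update period, using a nanosecond clock resolution as an example */
    ALuint64SOFT updateTime = 1000000000ull / refresh;

    ALint64SOFT clock = 0;
    alcGetInteger64vSOFT(device, ALC_DEVICE_CLOCK_SOFT, 1, &clock);  /* hypothetical query */

    /* request a start time at least one full update after "now", so the play
     * request is guaranteed to be seen before the mixer reaches that point */
    alSourcePlayTimeSOFT(clock + updateTime, source);   /* hypothetical play-at-time call */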
From doug at enkisoftware.com Fri Jan 24 07:34:59 2014 From: doug at enkisoftware.com (Doug Binks) Date: Fri, 24 Jan 2014 13:34:59 +0100 Subject: [openal] Synchronizing 3D sources In-Reply-To: <52E25CBA.5000708@gmail.com> References: <52E24FAF.8020904@gmail.com> <52E25CBA.5000708@gmail.com> Message-ID: Makes sense. Since this needs new code for atomically getting a device timer etc., also adding code for this shouldn't be too much of an extra problem although it does add to device maintenance costs. On 24 January 2014 13:29, Chris Robinson wrote: > On 01/24/2014 03:43 AM, Doug Binks wrote: > >> That sounds similar to what we were thinking. >> >> For device clock, one extra issue is that the programmer needs to know >> when >> to submit the play request in order to ensure the buffer can be queued >> before the device mixes the next output buffer. So we may also need to >> query the device output buffer size - I'm not sure if that's already in >> the >> extension spec. >> > > It should be enough for the app to know the update/period time and add > that to the retrieved clock time (i.e. a time or frequency derived from > UpdateSize). So even if the clock updates just after you read it, you will > still specify an equal or larger clock time to get accurate timing. > > Currently you could get that info with the ALC_REFRESH value (update time > = clock_res / refresh), although like I noted in another message, that's > not the best idea because the refresh is not a logical product of the > update size (it currently is, but that's an implementation detail that > could change). > > _______________________________________________ > openal mailing list > openal at openal.org > http://openal.org/mailman/listinfo/openal > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.kcat at gmail.com Fri Jan 24 07:44:26 2014 From: chris.kcat at gmail.com (Chris Robinson) Date: Fri, 24 Jan 2014 04:44:26 -0800 Subject: [openal] Synchronizing 3D sources In-Reply-To: References: <52E24FAF.8020904@gmail.com> <52E25CBA.5000708@gmail.com> Message-ID: <52E2602A.30509@gmail.com> On 01/24/2014 04:34 AM, Doug Binks wrote: > Makes sense. Since this needs new code for atomically getting a device > timer etc., also adding code for this shouldn't be too much of an extra > problem although it does add to device maintenance costs. It should be fine to just add another field to ALCdevice, something like ALuint64 SamplesMixed; which would be initialized to 0 and incremented in aluMixData. Then when the clock time is queried, convert it using the device frequency. From metalcaedes at gmail.com Fri Jan 24 07:56:56 2014 From: metalcaedes at gmail.com (Daniel Gibson) Date: Fri, 24 Jan 2014 13:56:56 +0100 Subject: [openal] Real-time interactive 3D audio landscape In-Reply-To: References: Message-ID: <52E26318.3090902@gmail.com> Am 24.01.2014 12:24, schrieb Xavier Wielemans: > > I've spoken with Blue Ripple Sound (Rapture3D implementation vendor) > already, they are most willing to assist me but for a development case > like mine they depend on Creative's OpenAL SDK, that they are not > allowed to redistribute - And Creative has closed down the OpenAL.org > website, it seems - which you certainly all know very well already... :) > OpenAL soft is also available for Windows and provides an "SDK", i.e. 
headers to use and libs to link against: http://kcat.strangesoft.net/openal.html Engine-wise the Doom3 engine could also be used, there is an OpenAL supporting version available: https://github.com/dhewm/dhewm3/ The level could be created with http://darkradiant.sourceforge.net/ (The level would probably just be a huge box with some sound sources in it) Cheers, Daniel From doug at enkisoftware.com Fri Jan 24 08:14:43 2014 From: doug at enkisoftware.com (Doug Binks) Date: Fri, 24 Jan 2014 14:14:43 +0100 Subject: [openal] Synchronizing 3D sources In-Reply-To: <52E2602A.30509@gmail.com> References: <52E24FAF.8020904@gmail.com> <52E25CBA.5000708@gmail.com> <52E2602A.30509@gmail.com> Message-ID: Sounds good - presumably there are no current plans to add a device output frequency change to the spec, but can always change the code at that point if needed. On 24 January 2014 13:44, Chris Robinson wrote: > On 01/24/2014 04:34 AM, Doug Binks wrote: > >> Makes sense. Since this needs new code for atomically getting a device >> timer etc., also adding code for this shouldn't be too much of an extra >> problem although it does add to device maintenance costs. >> > > It should be fine to just add another field to ALCdevice, something like > ALuint64 SamplesMixed; > which would be initialized to 0 and incremented in aluMixData. Then when > the clock time is queried, convert it using the device frequency. > > _______________________________________________ > openal mailing list > openal at openal.org > http://openal.org/mailman/listinfo/openal > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.kcat at gmail.com Fri Jan 24 08:22:25 2014 From: chris.kcat at gmail.com (Chris Robinson) Date: Fri, 24 Jan 2014 05:22:25 -0800 Subject: [openal] Synchronizing 3D sources In-Reply-To: References: <52E24FAF.8020904@gmail.com> <52E25CBA.5000708@gmail.com> <52E2602A.30509@gmail.com> Message-ID: <52E26911.2050601@gmail.com> On 01/24/2014 05:14 AM, Doug Binks wrote: > Sounds good - presumably there are no current plans to add a device output > frequency change to the spec, but can always change the code at that point > if needed. Hmm, that's a good point, actually. The playback frequency can change if a new context is created and specifies an ALC_FREQUENCY attribute. In that case, a better option would be to store the clock time in the device, instead of the samples mixed. Then you'd increase the clock time in aluMixData by converting the number of samples to mix into a time increment. From doug at enkisoftware.com Fri Jan 24 08:24:20 2014 From: doug at enkisoftware.com (Doug Binks) Date: Fri, 24 Jan 2014 14:24:20 +0100 Subject: [openal] Real-time interactive 3D audio landscape In-Reply-To: <52E26318.3090902@gmail.com> References: <52E26318.3090902@gmail.com> Message-ID: The project will also need to track the listener in real space, so presumably the Kinect or similar will be used, for which Unity has a plugin and it's been used for a few art installations. Unity also has the advantage of some great tutorials available online, it should do what's required out of the box (except for the kinect or other tracker plugin). 
On 24 January 2014 13:56, Daniel Gibson wrote: > Am 24.01.2014 12:24, schrieb Xavier Wielemans: > > >> I've spoken with Blue Ripple Sound (Rapture3D implementation vendor) >> already, they are most willing to assist me but for a development case >> like mine they depend on Creative's OpenAL SDK, that they are not >> allowed to redistribute - And Creative has closed down the OpenAL.org >> website, it seems - which you certainly all know very well already... :) >> >> > OpenAL soft is also available for Windows and provides an "SDK", i.e. > headers to use and libs to link against: > http://kcat.strangesoft.net/openal.html > > Engine-wise the Doom3 engine could also be used, there is an OpenAL > supporting version available: > > https://github.com/dhewm/dhewm3/ > > The level could be created with > http://darkradiant.sourceforge.net/ > > (The level would probably just be a huge box with some sound sources in it) > > Cheers, > Daniel > > _______________________________________________ > openal mailing list > openal at openal.org > http://openal.org/mailman/listinfo/openal > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at enkisoftware.com Fri Jan 24 08:39:03 2014 From: doug at enkisoftware.com (Doug Binks) Date: Fri, 24 Jan 2014 14:39:03 +0100 Subject: [openal] Synchronizing 3D sources In-Reply-To: <52E26911.2050601@gmail.com> References: <52E24FAF.8020904@gmail.com> <52E25CBA.5000708@gmail.com> <52E2602A.30509@gmail.com> <52E26911.2050601@gmail.com> Message-ID: I'll have a think - this would accumulate a small amount of error but it should be OK. Another potential would be to recalculate the number of samples when the frequency is changed into the new sample rate. On 24 January 2014 14:22, Chris Robinson wrote: > On 01/24/2014 05:14 AM, Doug Binks wrote: > >> Sounds good - presumably there are no current plans to add a device output >> frequency change to the spec, but can always change the code at that point >> if needed. >> > > Hmm, that's a good point, actually. The playback frequency can change if a > new context is created and specifies an ALC_FREQUENCY attribute. > > In that case, a better option would be to store the clock time in the > device, instead of the samples mixed. Then you'd increase the clock time in > aluMixData by converting the number of samples to mix into a time increment. > > _______________________________________________ > openal mailing list > openal at openal.org > http://openal.org/mailman/listinfo/openal > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at enkisoftware.com Mon Jan 27 12:24:29 2014 From: doug at enkisoftware.com (Doug Binks) Date: Mon, 27 Jan 2014 18:24:29 +0100 Subject: [openal] Synchronizing 3D sources In-Reply-To: References: <52E24FAF.8020904@gmail.com> <52E25CBA.5000708@gmail.com> <52E2602A.30509@gmail.com> <52E26911.2050601@gmail.com> Message-ID: [Just an FYI, TL;DR: wait until I finish the prototype.] I've started a prototype implementation, with temporary extension name AL_SOFT_device_clock on a branch on my github fork of OpenAL (otherwise unchanged from the main repo). 
https://github.com/dougbinks/openal-soft/commits/Device_Clock Currently the functionality is just for getting the required data atomically, as per example I've added here: https://github.com/dougbinks/openal-soft/commit/6ede7493ef42eb355f52436aee8ca99cf51153f3 The extension add three source enums, only the first two of which work as yet: AL_SAMPLE_OFFSET_DEVICE_CLOCK_SOFT, AL_SAMPLE_OFFSET_LATENCY_DEVICE_CLOCK_SOFT and AL_PLAY_ON_DEVICE_CLOCK_SOFT. AL_SAMPLE_OFFSET_DEVICE_CLOCK_SOFT returns source offset, output sample count, output frequency, and output update size. AL_SAMPLE_OFFSET_LATENCY_DEVICE_CLOCK_SOFT returns source offset in 32.32 format as per AL_SAMPLE_OFFSET_LATENCY along with latency, output sample count, output frequency and output update size. These should be sufficient for deciding when to play a source with the new (not implemented yet) AL_PLAY_ON_DEVICE_CLOCK_SOFT timing. After some thought the device clock is just in output samples for now. This is sufficient for the use cases I am aware of for now, and we can add a more generally applicable clock later, but we'll likely need sub-sample timing precision if we're to support synchronization of sources with different frequencies from the output clock. On 24 January 2014 14:39, Doug Binks wrote: > I'll have a think - this would accumulate a small amount of error but it > should be OK. Another potential would be to recalculate the number of > samples when the frequency is changed into the new sample rate. > > > On 24 January 2014 14:22, Chris Robinson wrote: > >> On 01/24/2014 05:14 AM, Doug Binks wrote: >> >>> Sounds good - presumably there are no current plans to add a device >>> output >>> frequency change to the spec, but can always change the code at that >>> point >>> if needed. >>> >> >> Hmm, that's a good point, actually. The playback frequency can change if >> a new context is created and specifies an ALC_FREQUENCY attribute. >> >> In that case, a better option would be to store the clock time in the >> device, instead of the samples mixed. Then you'd increase the clock time in >> aluMixData by converting the number of samples to mix into a time increment. >> >> _______________________________________________ >> openal mailing list >> openal at openal.org >> http://openal.org/mailman/listinfo/openal >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.kcat at gmail.com Mon Jan 27 13:43:31 2014 From: chris.kcat at gmail.com (Chris Robinson) Date: Mon, 27 Jan 2014 10:43:31 -0800 Subject: [openal] Synchronizing 3D sources In-Reply-To: References: <52E24FAF.8020904@gmail.com> <52E25CBA.5000708@gmail.com> <52E2602A.30509@gmail.com> <52E26911.2050601@gmail.com> Message-ID: <52E6A8D3.7050805@gmail.com> On 01/27/2014 09:24 AM, Doug Binks wrote: > AL_SAMPLE_OFFSET_DEVICE_CLOCK_SOFT returns source offset, output > sample count, output frequency, and output update size. > > AL_SAMPLE_OFFSET_LATENCY_DEVICE_CLOCK_SOFT returns source offset in > 32.32 format as per AL_SAMPLE_OFFSET_LATENCY along with latency, > output sample count, output frequency and output update size. It shouldn't be necessary to return the output frequency here. Returning the output sample count and output update size as a micro- or nanosecond time value removes the need for it. It also makes more sense for the update time to be a separate device-level (alcGetIntegerv) query... it's a relatively non-volatile value that doesn't need to be retrieved with the others, as it can only change during alcCreateContext. 
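A minimal sketch of the conversion being described here, assuming a 64-bit sample count and a known device frequency; the helper names are illustrative only and not part of any proposed extension. With the clock and update size reported in nanoseconds, a caller can move between samples and time on its own, so the query itself doesn't have to return the frequency:

#include <cstdint>

// Convert a mixed-sample count to a nanosecond clock value. Splitting into
// whole seconds plus a remainder keeps the intermediate multiplication from
// overflowing for large sample counts.
static int64_t samples_to_ns(int64_t samples, int64_t freq)
{
    return (samples / freq) * 1000000000 + ((samples % freq) * 1000000000) / freq;
}

// Recover a sample count from the nanosecond clock value.
static int64_t ns_to_samples(int64_t ns, int64_t freq)
{
    return (ns / 1000000000) * freq + ((ns % 1000000000) * freq) / 1000000000;
}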
> These should be sufficient for deciding when to play a source with the > new (not implemented yet) AL_PLAY_ON_DEVICE_CLOCK_SOFT timing. I think functions would be better than a property. void alSourcePlayTimeSOFT(ALuint64SOFT time, ALuint id); void alSourcePlayTimevSOFT(ALuint64SOFT time, ALsizei count, const ALuint *ids); This makes more sense to me, for the same reason you'd use alSourcePlay instead of setting the AL_SOURCE_STATE property to AL_PLAYING. From jan at jdrabner.eu Mon Jan 27 14:09:49 2014 From: jan at jdrabner.eu (Jan Drabner) Date: Mon, 27 Jan 2014 20:09:49 +0100 Subject: [openal] FFmpeg + OpenAL - playback streaming sound from video won't work Message-ID: <52E6AEFD.8060805@jdrabner.eu> Hey, I am decoding an OGG video (theora & vorbis as codecs) and want to show it on the screen (using Ogre 3D) while playing its sound. I can decode the image stream just fine and the video plays perfectly with the correct frame rate, etc. However, I cannot get the sound to play at all with OpenAL. Here is how I decode audio packets (in a background thread, the equivalent works just fine for the image stream of the video file): |//------------------------------------------------------------------------------ int decodeAudioPacket( AVPacket& p_packet, AVCodecContext* p_audioCodecContext, AVFrame* p_frame, FFmpegVideoPlayer* p_player, VideoInfo& p_videoInfo) { // Decode audio frame int got_frame= 0; int decoded= avcodec_decode_audio4(p_audioCodecContext, p_frame, &got_frame, &p_packet); if (decoded< 0) { p_videoInfo.error= "Error decoding audio frame."; return decoded; } // Frame is complete, store it in audio frame queue if (got_frame) { int bufferSize= av_samples_get_buffer_size(NULL, p_audioCodecContext->channels, p_frame->nb_samples, p_audioCodecContext->sample_fmt, 0); int64_t duration= p_frame->pkt_duration; int64_t dts= p_frame->pkt_dts; if (staticOgreLog) { staticOgreLog->logMessage("Audio frame bufferSize / duration / dts:" + boost::lexical_cast(bufferSize) + " /" + boost::lexical_cast(duration) + " /" + boost::lexical_cast(dts), Ogre::LML_NORMAL); } // Create the audio frame AudioFrame* frame= new AudioFrame(); frame->dataSize= bufferSize; frame->data= new uint8_t[bufferSize]; memcpy(frame->data, p_frame->data, bufferSize); double timeBase= ((double)p_audioCodecContext->time_base.num) / (double)p_audioCodecContext->time_base.den; frame->lifeTime= duration* timeBase; p_player->addAudioFrame(frame); } return decoded; } | So, as you can see, I decode the frame, memcpy it to my own struct, AudioFrame. 
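(The AudioFrame type itself is never shown in the thread; judging from the members used above, namely data, dataSize and lifeTime, it is presumably a small holder along these lines. This is a sketch of what it might look like, not the actual definition:)

#include <stdint.h>

struct AudioFrame
{
    uint8_t* data;      // raw decoded samples copied out of the AVFrame
    int      dataSize;  // size of data in bytes
    double   lifeTime;  // duration of the frame in seconds (pkt_duration * time base)

    AudioFrame() : data(0), dataSize(0), lifeTime(0.0) {}
    ~AudioFrame() { delete[] data; }
};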
Now, when the sound is played, I use these audio frame like this: |//------------------------------------------------------------------------------|| int numBuffers= 4; ALuint buffers[4]; alGenBuffers(numBuffers, buffers); ALenum success= alGetError(); if(success!= AL_NO_ERROR) { CONSOLE_LOG("Error on alGenBuffers :" + Ogre::StringConverter::toString(success) + alGetString(success)); return; } // Fill a number of data buffers with audio from the stream std::vector audioBuffers; std::vector audioBufferSizes; unsigned int numReturned= FFMPEG_PLAYER->getDecodedAudioFrames(numBuffers, audioBuffers, audioBufferSizes); // Assign the data buffers to the OpenAL buffers for (unsigned int i= 0; i< numReturned; ++i) { alBufferData(buffers[i], _streamingFormat, audioBuffers[i]->data, audioBufferSizes[i], _streamingFrequency); success= alGetError(); if(success!= AL_NO_ERROR) { CONSOLE_LOG("Error on alBufferData :" + Ogre::StringConverter::toString(success) + alGetString(success) + " size:" + Ogre::StringConverter::toString(audioBufferSizes[i])); return; } } // Queue the buffers into OpenAL alSourceQueueBuffers(_source, numReturned, buffers); success= alGetError(); if(success!= AL_NO_ERROR) { CONSOLE_LOG("Error queuing streaming buffers:" + Ogre::StringConverter::toString(success) + alGetString(success)); return; } } alSourcePlay(_source);| The format and frequency I give to OpenAL are AL_FORMAT_STEREO16 (it is a stereo sound stream) and 48000 (which is the sample rate of the AVCodecContext of the audio stream). And during playback, I do the following to refill OpenAL's buffers: |//------------------------------------------------------------------------------|||| ALint numBuffersProcessed; // Check if OpenAL is done with any of the queued buffers alGetSourcei(_source, AL_BUFFERS_PROCESSED, &numBuffersProcessed); if(numBuffersProcessed<= 0) return; // Fill a number of data buffers with audio from the stream std::vector audioBuffers; std::vector audioBufferSizes; unsigned int numFilled= FFMPEG_PLAYER->getDecodedAudioFrames(numBuffersProcessed, audioBuffers, audioBufferSizes); // Assign the data buffers to the OpenAL buffers ALuint buffer; for (unsigned int i= 0; i< numFilled; ++i) { // Pop the oldest queued buffer from the source, // fill it with the new data, then re-queue it alSourceUnqueueBuffers(_source, 1, &buffer); ALenum success= alGetError(); if(success!= AL_NO_ERROR) { CONSOLE_LOG("Error Unqueuing streaming buffers:" + Ogre::StringConverter::toString(success)); return; } alBufferData(buffer, _streamingFormat, audioBuffers[i]->data, audioBufferSizes[i], _streamingFrequency); success= alGetError(); if(success!= AL_NO_ERROR) { CONSOLE_LOG("Error on re- alBufferData:" + Ogre::StringConverter::toString(success)); return; } alSourceQueueBuffers(_source, 1, &buffer); success= alGetError(); if(success!= AL_NO_ERROR) { CONSOLE_LOG("Error re-queuing streaming buffers:" + Ogre::StringConverter::toString(success) + "" + alGetString(success)); return; } } // Make sure the source is still playing, // and restart it if needed. ALint playStatus; alGetSourcei(_source, AL_SOURCE_STATE, &playStatus); if(playStatus!= AL_PLAYING) alSourcePlay(_source);| As you can see, I do quite heavy error checking. But I do not get any errors. What I hear somewhat resembles the actual audio from the video, but VERY high pitched and stuttering VERY much. Also, it seems to be playing on top of TV noise. Very strange. Plus, it is playing much slower than the correct audio would. 
But it is very hard to hear anything specific in that mess. The video itself is not broken, it can be played fine on any player. OpenAL can also play *.wav files just fine in the same application, so it is also working. Any ideas what could be wrong here or how to do this correctly? My only guess is that somehow, FFmpeg's decode function does not produce data OpenAL can read. But this is as far as the FFmpeg decode example goes, so I don't know what's missing. As I understand it, the decode_audio4 function decodes the frame to raw data. And OpenAL should be able to work with RAW data (or rather, doesn't work with anything else).

P.S.: You can also find this question on StackOverflow; figured it wouldn't hurt to ask in multiple places: http://stackoverflow.com/questions/21386135/ffmpeg-openal-playback-streaming-sound-from-video-wont-work -------------- next part -------------- An HTML attachment was scrubbed... URL:

From doug at enkisoftware.com Mon Jan 27 17:16:40 2014 From: doug at enkisoftware.com (Doug Binks) Date: Mon, 27 Jan 2014 23:16:40 +0100 Subject: [openal] Synchronizing 3D sources In-Reply-To: <52E6A8D3.7050805@gmail.com> References: <52E24FAF.8020904@gmail.com> <52E25CBA.5000708@gmail.com> <52E2602A.30509@gmail.com> <52E26911.2050601@gmail.com> <52E6A8D3.7050805@gmail.com> Message-ID:

I've updated the code to add playback. This should help understand the use cases a bit more. In the example I play back two square waves of the same frequency, timing the second wave to cancel the first without clicks. The main use case is for rhythm-action style synchronization of spatial sources with music. Feedback sounds good; some further comments on this below.

* Output frequency - indeed I think this could disappear from the return as it's relatively non-volatile. However I'd like to leave the option to get samples rather than time as it makes the code easier to use. Adding a way to get time rather than samples, as per latency, makes sense though for some cases.

* Output Sample Count - yes, now I realise this is non-volatile I'll remove it, and move it to an alcGetIntegerv if it's not already.

* Play functions - these make sense, I used a parameter for now as it made it easier to write the prototype. It may still be necessary to have a get for this, as otherwise the 'length' of the sample makes no sense if developers are writing code which keeps all state in OpenAL.

Please be as mean as you can with the code, even though it's just a prototype to show this can work. Meanwhile I'll look into the proposed changes and clean up the example a bit. I'm trying to stay away from using SDL for now to reduce dependencies with the example I wrote, hence the slight repetition of code.

On 27 January 2014 19:43, Chris Robinson wrote: > On 01/27/2014 09:24 AM, Doug Binks wrote: > >> AL_SAMPLE_OFFSET_DEVICE_CLOCK_SOFT returns source offset, output >> sample count, output frequency, and output update size. >> > > > > AL_SAMPLE_OFFSET_LATENCY_DEVICE_CLOCK_SOFT returns source offset in > > 32.32 format as per AL_SAMPLE_OFFSET_LATENCY along with latency, > > output sample count, output frequency and output update size. > > It shouldn't be necessary to return the output frequency here. Returning > the output sample count and output update size as a micro- or nanosecond > time value removes the need for it. It also makes more sense for the update > time to be a separate device-level (alcGetIntegerv) query...
it's a > relatively non-volatile value that doesn't need to be retrieved with the > others, as it can only change during alcCreateContext. > > > These should be sufficient for deciding when to play a source with the >> new (not implemented yet) AL_PLAY_ON_DEVICE_CLOCK_SOFT timing. >> > > I think functions would be better than a property. > > void alSourcePlayTimeSOFT(ALuint64SOFT time, ALuint id); > void alSourcePlayTimevSOFT(ALuint64SOFT time, ALsizei count, > const ALuint *ids); > > This makes more sense to me, for the same reason you'd use alSourcePlay > instead of setting the AL_SOURCE_STATE property to AL_PLAYING. > > _______________________________________________ > openal mailing list > openal at openal.org > http://openal.org/mailman/listinfo/openal > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.kcat at gmail.com Mon Jan 27 19:19:30 2014 From: chris.kcat at gmail.com (Chris Robinson) Date: Mon, 27 Jan 2014 16:19:30 -0800 Subject: [openal] Synchronizing 3D sources In-Reply-To: References: <52E24FAF.8020904@gmail.com> <52E25CBA.5000708@gmail.com> <52E2602A.30509@gmail.com> <52E26911.2050601@gmail.com> <52E6A8D3.7050805@gmail.com> Message-ID: <52E6F792.5060904@gmail.com> On 01/27/2014 02:16 PM, Doug Binks wrote: > * Output frequency - indeed I think this could disappear from the return as > it's relatively non volatile. However I'd like to leave the option to get > samples rather than time as it makes the code easier to use. Adding a get > time rather than samples as per latency makes sense though for some cases. It's possible to convert the clock time (and update time) back to sample count if needed (at microsecond resolution you should be able to convert back and forth just fine up to 500Khz, and nanosecond should handle up to 500Mhz). By having the clock in microseconds or nanoseconds, an app can also convert it directly to/from the source buffer's sample rate if it needs to, and avoid the device's rate altogether... you can largely ignore the fact that the device has a sample rate and just consider the sources individually. > Please be as mean as you can with the code, even though it's just a > prototype to show this can work. Meanwhile I'll look into the proposed > changes and clean up the example a bit. I'm trying to stay away from using > SDL for now to reduce dependencies with the example I wrote hence the > slight repeat of code. Aside from stuff already mentioned, in-progress extensions should have their tokens and things added to the private alMain.h instead of the public alext.h. The extension string should also use SOFTX instead of SOFT while it's being worked on and is subject to changes. And since it will probably be adding new device-level queries and methods (I imagine there will be a new alGetInteger64vSOFT function to get the device clock on its own), it should be an ALC extension. From chris.kcat at gmail.com Mon Jan 27 20:56:31 2014 From: chris.kcat at gmail.com (Chris Robinson) Date: Mon, 27 Jan 2014 17:56:31 -0800 Subject: [openal] FFmpeg + OpenAL - playback streaming sound from video won't work In-Reply-To: <52E6AEFD.8060805@jdrabner.eu> References: <52E6AEFD.8060805@jdrabner.eu> Message-ID: <52E70E4F.70405@gmail.com> On 01/27/2014 11:09 AM, Jan Drabner wrote: > Hey, > > I am decoding an OGG video (theora & vorbis as codecs) and want to show > it on the screen (using Ogre 3D) while playing its sound. I can decode > the image stream just fine and the video plays perfectly with the > correct frame rate, etc. 
> > However, I cannot get the sound to play at all with OpenAL. > > Here is how I decode audio packets (in a background thread, the > equivalent works just fine for the image stream of the video file): The main thing that sticks out to me is that you seem to be returning one frame per buffer, and you're limited to only 4 buffers. The frame size can be fairly small depending on the format, so it's likely the queue underruns a lot. Beyond that, it's difficult to say with just the provided code snippets. Is there more available to view online anywhere? If not, it may help to make a small test app that still exhibits the problem (easier said than done, I know :/). FWIW, I do have some code available that plays movies using Ogre, OpenAL, and FFMPEG (and boost), and even has somewhat proper A/V synchronization and aspect ratio correction. However the audio code is heavily abstracted to the point where you don't directly see OpenAL in the player code, so may not be terribly useful for someone not familiar with the audio interface. If you're interested anyway, you can see it here as part of the OpenMW code base: From jan at jdrabner.eu Tue Jan 28 03:19:44 2014 From: jan at jdrabner.eu (Jan Drabner) Date: Tue, 28 Jan 2014 09:19:44 +0100 Subject: [openal] FFmpeg + OpenAL - playback streaming sound from video won't work In-Reply-To: <52E70E4F.70405@gmail.com> References: <52E6AEFD.8060805@jdrabner.eu> <52E70E4F.70405@gmail.com> Message-ID: <52E76820.5050601@jdrabner.eu> Thanks for the answer. Is yours based on this player: https://github.com/scrawl/ogre-ffmpeg-videoplayer/blob/master/VideoPlayer.cpp ? This is the one I used for reference partly. But it is based on the dranger tutorials, which makes the code very hard to read and also uses SDL for audio which makes that part rather useless to me, too. SDL seems to call a callback, requesting a number of bytes/buffers to be filled. But with OpenAL you need to put the complete audio stream in there and OpenAL does the skipping itself (otherwise that whole multi-buffering it does wouldn't make much sense). So you don't really need to do any audio synching with OpenAL if I got that correctly. Also, I do not see where you use the swr_convert I was pointed to by Eugen. I am trying to use that (similar to sws_scale for image frames), but it just keeps crashing (without error messages, of course, that would be helpful). My decoder fills single frames to the player. But the player fills the data of those frames into OpenGL buffers. That is what |getDecodedAudioFrames| does. It is supposed to place all decoded frames it has evenly into the passed number of buffers. But the result was just as bad as placing each frame directly. So I decided to just put one frame into each buffer. Which also didn't work. Here is the function, with both ways of doing it (the concatening way is commented out): //------------------------------------------------------------------------------ int FFmpegVideoPlayer::getDecodedAudioFrames(unsigned int p_numBuffers, std::vector& p_audioBuffers, std::vector& p_audioBufferSizes) { boost::mutex::scoped_lock lock(*_playerMutex); // Get the actual number of buffers to fill unsigned int numBuffers = _audioFrames.size() >= p_numBuffers? 
p_numBuffers : _audioFrames.size(); if (numBuffers == 0) return 0; AudioFrame* frame; for (unsigned int i = 0; i < numBuffers; ++i) { frame = _audioFrames.front(); _audioFrames.pop_front(); uint8_t* buffer = new uint8_t[frame->dataSize]; memcpy(buffer, frame->data, frame->dataSize); _currentAudioStorage -= frame->lifeTime; p_audioBuffers.push_back(buffer); p_audioBufferSizes.push_back(frame->dataSize); delete frame; } // // Calculate the number of audio frames per buffer, and special for the // // last buffer as there may be a rest // int numFramesPerBuffer = _audioFrames.size() / numBuffers; // int numFramesLastBuffer = _audioFrames.size() % numBuffers; // // // Fill each buffer // double totalLifeTime; // AudioFrame* frame; // std::vector frames; // for (unsigned int i = 0; i < numBuffers; ++i) // { // // Get all frames for this buffer to count the lifeTime and data size // unsigned int dataSize = 0; // totalLifeTime = 0.0; // for (unsigned int j = 0; j < numFramesPerBuffer; ++j) // { // frame = _audioFrames.front(); // _audioFrames.pop_front(); // frames.push_back(frame); // // totalLifeTime += frame->lifeTime; // dataSize += frame->dataSize; // } // _currentAudioStorage -= totalLifeTime; // // // Create the buffer // uint8_t* buffer = new uint8_t[dataSize]; // // // Concatenate frames into a single memory target // uint8_t* destination = buffer; // for (unsigned int j = 0; j < numFramesPerBuffer; ++j) // { // memcpy(destination, frames[j]->data, frames[j]->dataSize); // destination += frames[j]->dataSize; // } // // // Delete used frames // for (unsigned int j = 0; j < numFramesPerBuffer; ++j) // { // delete frames[j]; // } // frames.clear(); // // // Store buffer and size in return values // p_audioBuffers.push_back(buffer); // p_audioBufferSizes.push_back(dataSize); // } // We got at least one new frame, wake up the decoder for more decoding _decodingCondVar->notify_all(); return numBuffers; } Am 28.01.2014 02:56, schrieb Chris Robinson: > On 01/27/2014 11:09 AM, Jan Drabner wrote: >> Hey, >> >> I am decoding an OGG video (theora & vorbis as codecs) and want to show >> it on the screen (using Ogre 3D) while playing its sound. I can decode >> the image stream just fine and the video plays perfectly with the >> correct frame rate, etc. >> >> However, I cannot get the sound to play at all with OpenAL. >> >> Here is how I decode audio packets (in a background thread, the >> equivalent works just fine for the image stream of the video file): > > The main thing that sticks out to me is that you seem to be returning > one frame per buffer, and you're limited to only 4 buffers. The frame > size can be fairly small depending on the format, so it's likely the > queue underruns a lot. > > Beyond that, it's difficult to say with just the provided code > snippets. Is there more available to view online anywhere? If not, it > may help to make a small test app that still exhibits the problem > (easier said than done, I know :/). > > FWIW, I do have some code available that plays movies using Ogre, > OpenAL, and FFMPEG (and boost), and even has somewhat proper A/V > synchronization and aspect ratio correction. However the audio code is > heavily abstracted to the point where you don't directly see OpenAL in > the player code, so may not be terribly useful for someone not > familiar with the audio interface. 
> > If you're interested anyway, you can see it here as part of the OpenMW > code base: > > > > _______________________________________________ > openal mailing list > openal at openal.org > http://openal.org/mailman/listinfo/openal > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at enkisoftware.com Tue Jan 28 05:02:21 2014 From: doug at enkisoftware.com (Doug Binks) Date: Tue, 28 Jan 2014 11:02:21 +0100 Subject: [openal] Synchronizing 3D sources In-Reply-To: <52E6F792.5060904@gmail.com> References: <52E24FAF.8020904@gmail.com> <52E25CBA.5000708@gmail.com> <52E2602A.30509@gmail.com> <52E26911.2050601@gmail.com> <52E6A8D3.7050805@gmail.com> <52E6F792.5060904@gmail.com> Message-ID: Re: time vs samples I'm more worried about the extra complexity for the main use cases I know of than precision, but it's true that this does make the code completely device frequency neutral. I do need to check out the sample mixing process more in depth to ensure the mix can align two same frequency samples when they're on a different clock, which will require sub-sample accurate timing. Currently I just offset the write output which makes things really easy, but won't work for different frequencies. Thanks for the info on alMain, SOFTX etc., will get to work on all that! Many thanks for all the feedback! On 28 January 2014 01:19, Chris Robinson wrote: > On 01/27/2014 02:16 PM, Doug Binks wrote: > >> * Output frequency - indeed I think this could disappear from the return >> as >> it's relatively non volatile. However I'd like to leave the option to get >> samples rather than time as it makes the code easier to use. Adding a get >> time rather than samples as per latency makes sense though for some cases. >> > > It's possible to convert the clock time (and update time) back to sample > count if needed (at microsecond resolution you should be able to convert > back and forth just fine up to 500Khz, and nanosecond should handle up to > 500Mhz). By having the clock in microseconds or nanoseconds, an app can > also convert it directly to/from the source buffer's sample rate if it > needs to, and avoid the device's rate altogether... you can largely ignore > the fact that the device has a sample rate and just consider the sources > individually. > > > Please be as mean as you can with the code, even though it's just a >> prototype to show this can work. Meanwhile I'll look into the proposed >> changes and clean up the example a bit. I'm trying to stay away from using >> SDL for now to reduce dependencies with the example I wrote hence the >> slight repeat of code. >> > > Aside from stuff already mentioned, in-progress extensions should have > their tokens and things added to the private alMain.h instead of the public > alext.h. The extension string should also use SOFTX instead of SOFT while > it's being worked on and is subject to changes. And since it will probably > be adding new device-level queries and methods (I imagine there will be a > new alGetInteger64vSOFT function to get the device clock on its own), it > should be an ALC extension. > > _______________________________________________ > openal mailing list > openal at openal.org > http://openal.org/mailman/listinfo/openal > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jan at jdrabner.eu Tue Jan 28 05:12:58 2014 From: jan at jdrabner.eu (Jan Drabner) Date: Tue, 28 Jan 2014 11:12:58 +0100 Subject: [openal] FFmpeg + OpenAL - playback streaming sound from video won't work In-Reply-To: <52E6AEFD.8060805@jdrabner.eu> References: <52E6AEFD.8060805@jdrabner.eu> Message-ID: <52E782AA.9040309@jdrabner.eu> I made some progress by setting the OpenAL format to AL_FORMAT_STEREO_FLOAT32 instead of STEREO16. Makes sense, as the decoded frames are in FLTP format. Also, I went back to not use swr_convert. It simply won't work and only crashes without any indication why. And FLTP seems to be the correct format already when openAL uses STEREO_FLOAT32. At least now, the sound is played at the correct speed. However, it is still very high pitched, so something is still very much broken. Am 27.01.2014 20:09, schrieb Jan Drabner: > > Hey, > > I am decoding an OGG video (theora & vorbis as codecs) and want to > show it on the screen (using Ogre 3D) while playing its sound. I can > decode the image stream just fine and the video plays perfectly with > the correct frame rate, etc. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan at jdrabner.eu Tue Jan 28 06:45:44 2014 From: jan at jdrabner.eu (Jan Drabner) Date: Tue, 28 Jan 2014 12:45:44 +0100 Subject: [openal] FFmpeg + OpenAL - playback streaming sound from video won't work In-Reply-To: <52E782AA.9040309@jdrabner.eu> References: <52E6AEFD.8060805@jdrabner.eu> <52E782AA.9040309@jdrabner.eu> Message-ID: <52E79868.9020305@jdrabner.eu> And I finally figured it out. Have a look at the stackoverflow post: http://stackoverflow.com/questions/21386135/ffmpeg-openal-playback-streaming-sound-from-video-wont-work/21404721#21404721 From chris.kcat at gmail.com Tue Jan 28 12:11:33 2014 From: chris.kcat at gmail.com (Chris Robinson) Date: Tue, 28 Jan 2014 09:11:33 -0800 Subject: [openal] FFmpeg + OpenAL - playback streaming sound from video won't work In-Reply-To: <52E76820.5050601@jdrabner.eu> References: <52E6AEFD.8060805@jdrabner.eu> <52E70E4F.70405@gmail.com> <52E76820.5050601@jdrabner.eu> Message-ID: <52E7E4C5.2030904@gmail.com> Glad you figured it out. :) I actually wish I could include an ffmpeg video player example with OpenAL Soft, but ffmpeg's constant API breaks makes that too impractical to do. :/ As for this bit: > SDL seems to call a callback, requesting a number of bytes/buffers to be > filled. But with OpenAL you need to put the complete audio stream in > there and OpenAL does the skipping itself (otherwise that whole > multi-buffering it does wouldn't make much sense). So you don't really > need to do any audio synching with OpenAL if I got that correctly. What I mean by synchronization is that the correct video frame is shown with the correct audio frame being heard. OpenAL will correctly play the samples you give it, but the timing of the audio stream may not be the same as the main system/video. Particularly when resampling is involved, the audio time can ever-so-slightly "drift", and that drift accumulates over time. This is further complicated by how much delay there can be between queueing an audio frame onto a source and it being heard, or even the delay between the source offset and what's being heard (this was the main reason behind AL_SOFT_source_latency, to get more accurate positioning of what's being heard; it also tends to give a bit better granularity than OpenAL Soft's default ~22ms updates). 
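To make the A/V-sync point concrete, here is a rough sketch of the kind of query being referred to, using the AL_SOFT_source_latency extension. It assumes the extension is present and that alGetSourcedvSOFT has been obtained via alGetProcAddress (or that AL_ALEXT_PROTOTYPES is defined); 'source' is the streaming source, and a real player would also add the duration of buffers already unqueued before comparing against video timestamps:

#include "AL/al.h"
#include "AL/alext.h"

// Returns an estimate of the audio time actually being heard right now.
static double currentAudioClock(ALuint source)
{
    ALdouble values[2] = {0.0, 0.0};
    alGetSourcedvSOFT(source, AL_SEC_OFFSET_LATENCY_SOFT, values);
    // values[0]: playback offset in seconds within the queued buffers.
    // values[1]: how long until that offset is actually heard (the latency).
    return values[0] - values[1];
}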
From doug at enkisoftware.com Thu Jan 30 12:16:50 2014 From: doug at enkisoftware.com (Doug Binks) Date: Thu, 30 Jan 2014 18:16:50 +0100 Subject: [openal] Synchronizing 3D sources In-Reply-To: References: <52E24FAF.8020904@gmail.com> <52E25CBA.5000708@gmail.com> <52E2602A.30509@gmail.com> <52E26911.2050601@gmail.com> <52E6A8D3.7050805@gmail.com> <52E6F792.5060904@gmail.com> Message-ID: The prototype implementation including the proposed changes is now up on my github fork https://github.com/dougbinks/openal-soft/commits/Device_Clock The code currently doesn't handle sub-sample accuracy, so you cannot align two sources with the same frequency but one which is different from the device output to sample precision. I'm now going to work on that aspect. Device time uses samples output plus an offset internally. This way we do not accumulate errors, which quickly add up even with a nanosecond accurate clock. The offset stays at 0 unless a frequency change occurs, at which point the offset is set to the current time and the output sample count reset to 0. I've not been able to test device frequency changes as yet though. On 28 January 2014 11:02, Doug Binks wrote: > Re: time vs samples I'm more worried about the extra complexity for the main > use cases I know of than precision, but it's true that this does make the > code completely device frequency neutral. > > I do need to check out the sample mixing process more in depth to ensure the > mix can align two same frequency samples when they're on a different clock, > which will require sub-sample accurate timing. Currently I just offset the > write output which makes things really easy, but won't work for different > frequencies. > > Thanks for the info on alMain, SOFTX etc., will get to work on all that! > > Many thanks for all the feedback! > > > On 28 January 2014 01:19, Chris Robinson wrote: >> >> On 01/27/2014 02:16 PM, Doug Binks wrote: >>> >>> * Output frequency - indeed I think this could disappear from the return >>> as >>> it's relatively non volatile. However I'd like to leave the option to get >>> samples rather than time as it makes the code easier to use. Adding a get >>> time rather than samples as per latency makes sense though for some >>> cases. >> >> >> It's possible to convert the clock time (and update time) back to sample >> count if needed (at microsecond resolution you should be able to convert >> back and forth just fine up to 500Khz, and nanosecond should handle up to >> 500Mhz). By having the clock in microseconds or nanoseconds, an app can also >> convert it directly to/from the source buffer's sample rate if it needs to, >> and avoid the device's rate altogether... you can largely ignore the fact >> that the device has a sample rate and just consider the sources >> individually. >> >> >>> Please be as mean as you can with the code, even though it's just a >>> prototype to show this can work. Meanwhile I'll look into the proposed >>> changes and clean up the example a bit. I'm trying to stay away from >>> using >>> SDL for now to reduce dependencies with the example I wrote hence the >>> slight repeat of code. >> >> >> Aside from stuff already mentioned, in-progress extensions should have >> their tokens and things added to the private alMain.h instead of the public >> alext.h. The extension string should also use SOFTX instead of SOFT while >> it's being worked on and is subject to changes. 
And since it will probably >> be adding new device-level queries and methods (I imagine there will be a >> new alGetInteger64vSOFT function to get the device clock on its own), it >> should be an ALC extension. >> >> _______________________________________________ >> openal mailing list >> openal at openal.org >> http://openal.org/mailman/listinfo/openal > > From chris.kcat at gmail.com Thu Jan 30 13:39:23 2014 From: chris.kcat at gmail.com (Chris Robinson) Date: Thu, 30 Jan 2014 10:39:23 -0800 Subject: [openal] Synchronizing 3D sources In-Reply-To: References: <52E24FAF.8020904@gmail.com> <52E25CBA.5000708@gmail.com> <52E2602A.30509@gmail.com> <52E26911.2050601@gmail.com> <52E6A8D3.7050805@gmail.com> <52E6F792.5060904@gmail.com> Message-ID: <52EA9C5B.7010605@gmail.com> On 01/30/2014 09:16 AM, Doug Binks wrote: > Device time uses samples output plus an offset internally. This way we > do not accumulate errors, which quickly add up even with a nanosecond > accurate clock. The offset stays at 0 unless a frequency change > occurs, at which point the offset is set to the current time and the > output sample count reset to 0. Note that with a default update size of 1024 samples, with the default playback rate of 44100hz, converting samples to nanoseconds and adding that to the clock time (rather than keeping samples and converting to nanoseconds on request) will only introduce an error/drift of about -28ns per second. I'm not sure this is that big of a deal, all considered. I imagine the timing drift from the actual audio hardware, and any resampling from the underlying device software (e.g. alsa's dmix, pulseaudio, dsound) would be far more. Storing it internally as a nanosecond clock also avoids overflow problems when you try to convert large sample counts; samples*1000000000/44100 would overflow even a 64-bit integer if samples was left to accumulate for about 2 days. From doug at enkisoftware.com Thu Jan 30 14:01:01 2014 From: doug at enkisoftware.com (Doug Binks) Date: Thu, 30 Jan 2014 20:01:01 +0100 Subject: [openal] Synchronizing 3D sources In-Reply-To: <52EA9C5B.7010605@gmail.com> References: <52E24FAF.8020904@gmail.com> <52E25CBA.5000708@gmail.com> <52E2602A.30509@gmail.com> <52E26911.2050601@gmail.com> <52E6A8D3.7050805@gmail.com> <52E6F792.5060904@gmail.com> <52EA9C5B.7010605@gmail.com> Message-ID: The problem is that even with nanosecond precision, the drift will equal one sample after a few hundred seconds. The important objective I (and the other use case I'm aware of) is trying to achieve is to get per-sample synchronization with two or more samples being played. We don't care (in this context) about the absolute timing, so drift of any backend or hardware doesn't matter to us so long as the mixing is accurate to the sample level. The offset clock is in nanoseconds with a wrap-around time of 58 years, and the sample clock is in samples, with a far larger wraparound time. If we can drop the requirement to cope with frequency changes, other than resetting the sample clock, then this can be made a lot simpler since we can stick to output sample units - however the current code works with the exception of sub-sample accurate mixing which is required when the sources have a different frequency than the output. On 30 January 2014 19:39, Chris Robinson wrote: > On 01/30/2014 09:16 AM, Doug Binks wrote: >> >> Device time uses samples output plus an offset internally. This way we >> do not accumulate errors, which quickly add up even with a nanosecond >> accurate clock. 
The offset stays at 0 unless a frequency change >> occurs, at which point the offset is set to the current time and the >> output sample count reset to 0. > > > Note that with a default update size of 1024 samples, with the default > playback rate of 44100hz, converting samples to nanoseconds and adding that > to the clock time (rather than keeping samples and converting to nanoseconds > on request) will only introduce an error/drift of about -28ns per second. > > I'm not sure this is that big of a deal, all considered. I imagine the > timing drift from the actual audio hardware, and any resampling from the > underlying device software (e.g. alsa's dmix, pulseaudio, dsound) would be > far more. Storing it internally as a nanosecond clock also avoids overflow > problems when you try to convert large sample counts; > samples*1000000000/44100 would overflow even a 64-bit integer if samples was > left to accumulate for about 2 days. > > _______________________________________________ > openal mailing list > openal at openal.org > http://openal.org/mailman/listinfo/openal From chris.kcat at gmail.com Thu Jan 30 20:18:55 2014 From: chris.kcat at gmail.com (Chris Robinson) Date: Thu, 30 Jan 2014 17:18:55 -0800 Subject: [openal] Synchronizing 3D sources In-Reply-To: References: <52E24FAF.8020904@gmail.com> <52E25CBA.5000708@gmail.com> <52E2602A.30509@gmail.com> <52E26911.2050601@gmail.com> <52E6A8D3.7050805@gmail.com> <52E6F792.5060904@gmail.com> <52EA9C5B.7010605@gmail.com> Message-ID: <52EAF9FF.4030009@gmail.com> On 01/30/2014 11:01 AM, Doug Binks wrote: > The problem is that even with nanosecond precision, the drift will > equal one sample after a few hundred seconds. > > The important objective I (and the other use case I'm aware of) is > trying to achieve is to get per-sample synchronization with two or > more samples being played. We don't care (in this context) about the > absolute timing, so drift of any backend or hardware doesn't matter to > us so long as the mixing is accurate to the sample level. I'm not sure sample-perfect timing should be a reliable feature. The problem is when the audio clock isn't directly derived from the number of processed samples, but rather, the number of samples processed is derived from the audio clock. Or if the returned clock time and source offsets are interpolated in some way. Granted this isn't much of an issue for OpenAL Soft right now since the clock would be based on the number of samples and not be interpolated. But I'm not sure if that will always be the case, or if other implementations might want to do it differently. I actually have some pretty ambitious (if long-term) plans for the mixer that would decouple it from property processing, making it completely lockless and perhaps getting really low latencies, which would require rethinking how timing works. Though even if it doesn't go that far, the way timing works could still change to give better granularity than the 20+ms it currently has. So all said, I'm not sure your example of being able to play one square wave, then time-start another square wave so that they perfectly cancel out, would be able to reliably work. Though if anyone else wants to weigh in on the issue, please do. Right now I'm mainly working with hypotheticals, trying to avoid making guarantees that could prevent future internal improvements. 
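For reference, the drift and overflow figures quoted a couple of messages up (1024-sample updates at a 44100Hz playback rate) can be checked with a few lines of arithmetic. This is purely illustrative and is not code from the branch:

#include <cstdint>
#include <cstdio>

int main()
{
    const int64_t update = 1024, freq = 44100, NS = 1000000000;

    // Nanoseconds credited to the clock per update when truncated, vs. exact.
    double  exactNs = double(update) * NS / freq;   // ~23219954.65 ns
    int64_t truncNs = update * NS / freq;           //  23219954 ns
    double  updatesPerSec = double(freq) / update;  // ~43.07
    printf("drift: %.1f ns/s\n", (truncNs - exactNs) * updatesPerSec); // about -28

    // samples * 1000000000 overflows a signed 64-bit integer once samples
    // have accumulated for roughly 2.4 days at 44100Hz.
    double secondsToOverflow = double(INT64_MAX) / NS / freq;
    printf("overflow after %.1f days\n", secondsToOverflow / 86400.0);
    return 0;
}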
From doug at enkisoftware.com Fri Jan 31 05:12:36 2014 From: doug at enkisoftware.com (Doug Binks) Date: Fri, 31 Jan 2014 11:12:36 +0100 Subject: [openal] Synchronizing 3D sources In-Reply-To: <52EAF9FF.4030009@gmail.com> References: <52E24FAF.8020904@gmail.com> <52E25CBA.5000708@gmail.com> <52E2602A.30509@gmail.com> <52E26911.2050601@gmail.com> <52E6A8D3.7050805@gmail.com> <52E6F792.5060904@gmail.com> <52EA9C5B.7010605@gmail.com> <52EAF9FF.4030009@gmail.com> Message-ID: It makes a lot of sense not to encumber the library with any features which would prevent forward progress on other more important aspects. I'll see if I can get some feedback on whether 'almost' sample accurate would do. The square wave test isn't something I think can be made to generically work in all circumstances, but it's certainly one which shows any errors up well. On 31 January 2014 02:18, Chris Robinson wrote: > On 01/30/2014 11:01 AM, Doug Binks wrote: >> >> The problem is that even with nanosecond precision, the drift will >> equal one sample after a few hundred seconds. >> >> The important objective I (and the other use case I'm aware of) is >> trying to achieve is to get per-sample synchronization with two or >> more samples being played. We don't care (in this context) about the >> absolute timing, so drift of any backend or hardware doesn't matter to >> us so long as the mixing is accurate to the sample level. > > > I'm not sure sample-perfect timing should be a reliable feature. The problem > is when the audio clock isn't directly derived from the number of processed > samples, but rather, the number of samples processed is derived from the > audio clock. Or if the returned clock time and source offsets are > interpolated in some way. > > Granted this isn't much of an issue for OpenAL Soft right now since the > clock would be based on the number of samples and not be interpolated. But > I'm not sure if that will always be the case, or if other implementations > might want to do it differently. I actually have some pretty ambitious (if > long-term) plans for the mixer that would decouple it from property > processing, making it completely lockless and perhaps getting really low > latencies, which would require rethinking how timing works. Though even if > it doesn't go that far, the way timing works could still change to give > better granularity than the 20+ms it currently has. > > So all said, I'm not sure your example of being able to play one square > wave, then time-start another square wave so that they perfectly cancel out, > would be able to reliably work. > > > Though if anyone else wants to weigh in on the issue, please do. Right now > I'm mainly working with hypotheticals, trying to avoid making guarantees > that could prevent future internal improvements. From doug at enkisoftware.com Fri Jan 31 12:54:30 2014 From: doug at enkisoftware.com (Doug Binks) Date: Fri, 31 Jan 2014 18:54:30 +0100 Subject: [openal] Synchronizing 3D sources In-Reply-To: References: <52E24FAF.8020904@gmail.com> <52E25CBA.5000708@gmail.com> <52E2602A.30509@gmail.com> <52E26911.2050601@gmail.com> <52E6A8D3.7050805@gmail.com> <52E6F792.5060904@gmail.com> <52EA9C5B.7010605@gmail.com> <52EAF9FF.4030009@gmail.com> Message-ID: The latest code at https://github.com/dougbinks/openal-soft/commits/Device_Clock moves to a nanosecond based clock as requested. I did add an error accumulator to reduce drift, but this can be removed if we don't need sample accuracy over long periods. 
The changes now cope with aligning sources with different frequencies than the device. In order for a developer to correctly align sources, they need the actual frequency of the source. I have thus added an alcGetDoublevSOFTX function which can be used (as in the example program) to convert a frequency into the actual frequency as resampled into the output. This seemed a better approach than exposing the interval computations. If you think this is looking like something you'd be interested in including, I can do any further changes needed along with a spec and some documentation. On 31 January 2014 11:12, Doug Binks wrote: > It makes a lot of sense not to encumber the library with any features > which would prevent forward progress on other more important aspects. > > I'll see if I can get some feedback on whether 'almost' sample > accurate would do. > > The square wave test isn't something I think can be made to > generically work in all circumstances, but it's certainly one which > shows any errors up well. > > On 31 January 2014 02:18, Chris Robinson wrote: > > On 01/30/2014 11:01 AM, Doug Binks wrote: > >> > >> The problem is that even with nanosecond precision, the drift will > >> equal one sample after a few hundred seconds. > >> > >> The important objective I (and the other use case I'm aware of) is > >> trying to achieve is to get per-sample synchronization with two or > >> more samples being played. We don't care (in this context) about the > >> absolute timing, so drift of any backend or hardware doesn't matter to > >> us so long as the mixing is accurate to the sample level. > > > > > > I'm not sure sample-perfect timing should be a reliable feature. The > problem > > is when the audio clock isn't directly derived from the number of > processed > > samples, but rather, the number of samples processed is derived from the > > audio clock. Or if the returned clock time and source offsets are > > interpolated in some way. > > > > Granted this isn't much of an issue for OpenAL Soft right now since the > > clock would be based on the number of samples and not be interpolated. > But > > I'm not sure if that will always be the case, or if other implementations > > might want to do it differently. I actually have some pretty ambitious > (if > > long-term) plans for the mixer that would decouple it from property > > processing, making it completely lockless and perhaps getting really low > > latencies, which would require rethinking how timing works. Though even > if > > it doesn't go that far, the way timing works could still change to give > > better granularity than the 20+ms it currently has. > > > > So all said, I'm not sure your example of being able to play one square > > wave, then time-start another square wave so that they perfectly cancel > out, > > would be able to reliably work. > > > > > > Though if anyone else wants to weigh in on the issue, please do. Right > now > > I'm mainly working with hypotheticals, trying to avoid making guarantees > > that could prevent future internal improvements. > -------------- next part -------------- An HTML attachment was scrubbed... URL: