[openal] Looking for tips on streaming music.
support at blindaudiogames.com
Sat Aug 9 12:50:51 EDT 2014
Thanks Chris. Your response was very informative.
I think I'll start with just music and a long background ambience using
queued buffers and leave all the other sounds loaded normally.
The 64KB buffer sounds like a good starting point.
It's smaller than I originally would have tried, but I see the wisdom
in using smaller buffers but more of them, since each individual buffer
fill takes less time.
Good point about the benefit of sharing the normal buffers between
sources. I'm already doing this, so it would be good to keep it working.
On 8/8/2014 10:05 PM, Chris Robinson wrote:
> On 08/08/2014 11:14 AM, Ian Reed wrote:
>> I'd like to use alSourceQueueBuffers so I can decode just the first 5
>> seconds or so and get it playing, then take my time in subsequent game
>> loops to load the rest of the file.
>> My first thought was that it would be nice to stop worrying about
>> queueing and further decoding overhead once the entire file had been
>> loaded into buffers.
>> Then I'd just let the normal looping property on a source take care of
>> looping it, assuming that works with queued buffers.
>> And I'd be able to release the original file handle.
>> But I've just done some testing and found these results:
>> About 5 seconds of this file would be 95KB of Ogg Vorbis data.
>> Decoded in full, the file is about 36MB.
>> So 5 seconds is about 871KB of raw data.
>> Due to the 36 megs of uncompressed data I now wonder if it would be
>> better to keep the file handle open and repeatedly decode the data into
>> buffers, including handling looping myself by starting decoding at the
>> beginning of the file once I've reached the end.
>> Does anyone have experience with this?
>> I'd appreciate some advice or thoughts on pros and cons of the 2 ideas.
> The first method, loading bits of the audio into an ever-growing
> queue, would have less buffer management overhead, but it'd have more
> memory overhead and make underruns problematic to handle (if you don't
> get the next chunk of audio ready in time, you'll end up prematurely
> replaying everything you've already decoded unless you can keep track
> of where you were).
> The second method is the usual way to stream a large audio file.
> You'll only have a handful of relatively small, reusable buffers, and
> underrun recovery is fairly simple. This method is also a good way
> to handle audio sources that may not have an explicit length (e.g. if
> the music is generated in real-time, or continuous voice comm streams).
>> Also, is there an optimal buffer size?
>> How many buffers should I have queued per source?
>> Offhand it seems like 2 buffers: one that is currently playing and one
>> that is pre-loaded.
> I'd have at least 3 buffers: one playing, one ready to play, and one
> to refill. There isn't really an optimal size, although 64KB per
> buffer isn't a bad starting point (~370ms for 44.1kHz 16-bit stereo).
>> And once I have queued buffers working does it make sense for me to use
>> them on all my sound files? Or just certain ones based on size of file
>> and whether it needs to be uncompressed?
>> If just certain ones, what rules should I use to choose whether to load
>> the whole file or use queued buffers?
> Just certain ones. Sound effects are typically short and reused a
> lot, so it makes sense to load each one into a single buffer once and
> share that buffer with every instance of the sound.
> As for the rules, that depends on what criteria you have to work with.
> The simplest setup is to just stream background music, and load
> everything else normally. If your app is voice-heavy, with lots of
> unique voice clips that are several seconds or longer and aren't often
> repeated, it may make sense to stream those, too.