[openal] MIDI support

Chris Robinson chris.kcat at gmail.com
Tue Mar 25 04:45:42 EDT 2014

On 03/24/2014 08:13 AM, Xerxes Rånby wrote:
> Thanks Chris and Ryan for keeping the OpenAL.org and OpenAL spec
> community driven and vibrant.
> I am working on an enhancement bugreport for JogAmp JOAL to support
> your OpenAL-soft MIDI extensions from Java.
> https://jogamp.org/bugzilla/show_bug.cgi?id=1002

Cool. Thanks for the feedback. :)

> It would be great if the OpenAL-soft MIDI extension can handle
> external general MIDI input and output devices as well. For many
> games and apps it would be good to use all kinds of MIDI devices
> ranging all from keyboards to guitars and "3D" multibutton DJ
> equipment for input. Using OpenAL to orchestrate hardware
> synthesizers with spatialisation is all welcome as well.

It should be possible to handle external MIDI devices. As far as I 
recall, the MIDI specification does have methods to load sample data 
onto devices, so presuming the AL implementation can talk back and forth 
with it to figure out its capabilities, it can decide whether it's a 
suitable MIDI device (since a part of this extension is to guarantee a 
certain baseline, we wouldn't want the device to silently ignore the 
soundfont samples sent to it and make the songs sound all wrong).

> It would be awesome if the OpenAL-soft MIDI extension had a callback
> api for real time audio generation for use to add a custom MIDI synth
> implemented using software or hardware DSP.
> http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing

That page actually explains pretty well why I don't want to implement 
callbacks in OpenAL Soft's mixer. A long-term goal I have is to make the 
mixer completely lockless, and move all source/effect parameter 
processing out of the mixer thread, so that it can run in a real-time 
(or otherwise really low latency) context. But adding user callbacks 
complicates that because they could do anything from locking a mutex to 
allocating memory, or worse.

What I'd like to do is get playback latency low enough, and provide 
efficient access for updating playing buffers, so user code can play 
reasonably low-latency audio on its own without needing callbacks into 
the mixer.
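To illustrate the idea, one common way to keep a mixer thread free of
locks and allocation is to feed it parameter changes through a wait-free
single-producer/single-consumer ring. This is only a sketch of the
general technique, not OpenAL Soft's actual code, and the ParamUpdate
fields are made up for the example:

```cpp
#include <atomic>
#include <cstddef>

// Hypothetical parameter message; the fields are illustrative only.
struct ParamUpdate { unsigned voice; float gain; };

// Minimal single-producer/single-consumer ring. The mixer (consumer)
// never locks or allocates; the control thread (producer) does all the
// expensive work before pushing.
template<typename T, size_t N>
class SpscRing {
    T buf[N];
    std::atomic<size_t> head{0}, tail{0};
public:
    bool push(const T &v) {  // control thread
        size_t t = tail.load(std::memory_order_relaxed);
        size_t next = (t + 1) % N;
        if(next == head.load(std::memory_order_acquire))
            return false;  // full; caller retries later
        buf[t] = v;
        tail.store(next, std::memory_order_release);
        return true;
    }
    bool pop(T &v) {  // mixer thread
        size_t h = head.load(std::memory_order_relaxed);
        if(h == tail.load(std::memory_order_acquire))
            return false;  // empty, nothing to apply
        v = buf[h];
        head.store((h + 1) % N, std::memory_order_release);
        return true;
    }
};
```

The point is that a user callback running inside the mixer could block
on anything, whereas a scheme like this bounds what the real-time thread
can ever wait on.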

> Is the alMidi* operating on the current bound OpenAL context?

Yes, all al* functions operate on the current context.

> How can I manipulate the OpenAL Source for the Midi engine?

There is no such source. Each MIDI note that gets played is its own 
voice that gets mixed directly to the output buffers as appropriate; 
there's no internal concept of a source here.
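As a rough sketch of what that means (purely illustrative, not OpenAL
Soft's internals), each note can be thought of as a small self-contained
voice that the mixer accumulates straight into the output buffer, with
no source object in between:

```cpp
#include <vector>
#include <cstddef>

// Illustrative voice for one playing MIDI note. Field names are made
// up for the example.
struct Voice {
    const float *samples;  // fontsound sample data
    size_t pos, len;       // playback position and sample length
    float gain;            // per-note gain from the preset's generators
};

// Mix every active voice directly into the output, mono for simplicity.
void MixVoices(std::vector<Voice> &voices, float *out, size_t frames)
{
    for(auto &v : voices)
    {
        for(size_t i = 0;i < frames && v.pos < v.len;++i)
            out[i] += v.samples[v.pos++] * v.gain;
    }
}
```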

> How can I use multiple Midi engines?

I'm still thinking of ways to allow apps to enumerate available MIDI 
ports, and a way to "select" one for a context. I'm also considering 
ways to select multiple ports, to be able to provide more than 16 MIDI 
channels.

> Is there a function to un-load the loaded soundfonts?

Not in a single call, no. But you can retrieve the list of presets in 
the soundfont, and the list of fontsounds in each preset, and then 
delete the IDs you get. So given a soundfont, you would do:

vector<ALuint> presets, sounds;

// Get the presets in the soundfont
ALint num_presets;
alGetSoundfontivSOFT(sfont, AL_PRESETS_SIZE_SOFT, &num_presets);
presets.resize(num_presets);
alGetSoundfontivSOFT(sfont, AL_PRESETS_SOFT, &presets[0]);

for(auto preset : presets)
{
     // Get the fontsounds in each preset
     ALint num_sounds;
     alGetPresetivSOFT(preset, AL_FONTSOUNDS_SIZE_SOFT, &num_sounds);

     size_t current = sounds.size();
     sounds.resize(current + num_sounds);
     alGetPresetivSOFT(preset, AL_FONTSOUNDS_SOFT, &sounds[current]);
}

// Delete the objects (soundfont first so it releases the presets,
// then presets so it releases the fontsounds).
alDeleteSoundfontsSOFT(1, &sfont);
alDeletePresetsSOFT(presets.size(), &presets[0]);
alDeleteFontsoundsSOFT(sounds.size(), &sounds[0]);

Though there's a bit of a complication here (isn't there always), and 
it's actually something I'm hoping to get some feedback on.

The SF2 spec has the concept of "stereo" samples. Other than mono, a 
sample header can be specified as 'left' or 'right' (though they're not 
implicitly left or right panned; that's controlled via generators), and 
contains a reference to another sample for the other side. Each side 
references the other. What this means for OpenAL is that a fontsound for 
one side would reference another fontsound for the other, and obviously 
you can't delete a fontsound if it's still referenced by something. 
There are ways to work around that, but the optimal method would depend 
on how exactly stereo samples work.
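For what it's worth, the mutual-link structure itself can be checked
mechanically. This sketch uses the sample-header fields from the sf2
spec (sfSampleType, wSampleLink, with 2 = right and 4 = left), though
the validation policy is just one possible approach on my part:

```cpp
#include <cstdint>
#include <vector>

// Trimmed-down sf2 sample header, keeping only the linking fields.
struct SampleHeader {
    uint16_t sfSampleType;  // 1 = mono, 2 = right, 4 = left
    uint16_t wSampleLink;   // index of the paired sample header
};

// A left/right pair is only usable if each side names the other and
// the types are complementary.
bool IsValidStereoPair(const std::vector<SampleHeader> &shdrs, uint16_t idx)
{
    const SampleHeader &a = shdrs[idx];
    if(a.sfSampleType != 2 && a.sfSampleType != 4)
        return false;  // mono (or ROM) samples have no valid pairing
    if(a.wSampleLink >= shdrs.size())
        return false;  // dangling link
    const SampleHeader &b = shdrs[a.wSampleLink];
    return b.wSampleLink == idx &&
        ((a.sfSampleType == 4 && b.sfSampleType == 2) ||
         (a.sfSampleType == 2 && b.sfSampleType == 4));
}
```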

Now, something that confuses me a bit here is that a sample header 
definition in the sf2 spec does not contain enough information to play 
its paired sample (it contains a base MIDI key, pitch correction, sample 
rate, and the start/stop/loop points, which, aside from a single offset 
for the start/stop/loop point, should be the same). According to the SF2 
spec, the zone generators for stereo samples work independently... that 
is, the left and right channels can have different volumes, pans, 
filters, etc. The exception is that the pitch for the left sample is 
taken from the right so they play in sync.

As far as I can guess, the two sample headers would be referenced by two 
different zones in the same instrument, and those zones would be set to 
trigger on the same MIDI key/velocity range. However, the spec doesn't 
say if they need to use the same range, so I don't know whether I need 
to account for the possibility that they don't (if they don't, it can 
complicate things a bit). The spec is also silent on what happens if the 
two samples have different lengths, different relative loop points, 
different loop modes, etc (I guess such samples would be rejected or 
ignored, but again, it doesn't say).
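One defensive policy for that ambiguity would be to pair two zones only
when their trigger ranges match exactly, and otherwise fall back to
playing the samples as unlinked mono. A trivial sketch, with a made-up
ZoneRange struct standing in for the zones' keyRange/velRange
generators:

```cpp
// Hypothetical trigger range for a zone; mirrors sf2's keyRange and
// velRange generators.
struct ZoneRange { int keyLo, keyHi, velLo, velHi; };

// Only treat two linked zones as a stereo pair when they trigger on
// exactly the same key/velocity ranges.
bool RangesMatch(const ZoneRange &l, const ZoneRange &r)
{
    return l.keyLo == r.keyLo && l.keyHi == r.keyHi &&
           l.velLo == r.velLo && l.velHi == r.velHi;
}
```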

So, I'm wondering if anyone has any insight on how stereo samples are 
supposed to work in sf2 soundfonts. What do soundfonts do, and what 
behavior do they expect?
