MIDI Driven Animation Using CoreMIDI in Objective-C

In this post I'm going to explain how to produce MIDI driven animation on OS X or iOS using the CoreMIDI and CoreAudio frameworks. When I first started trying to do this I thought it would be easy: just register a callback with the MIDI player which is called every time a MIDI message is played. Unfortunately the API doesn't support this, and I ended up spending three long days figuring out an alternative from the limited documentation available. Hopefully this post will save someone some time!

Project files

A fully working Xcode project can be downloaded here.

The Goal

In this guide I will explain how to do the following:

  • Load and play a MIDI sequence from a file using a MusicPlayer
  • Play the MIDI notes with an instrument effect (SoundFont) using an AUGraph
  • Create a virtual endpoint to intercept and display the MIDI messages in real time



Load and play a MIDI Sequence

The following tasks are needed to load and play a MIDI file:

  • Create a MusicSequence to hold the MIDI information
  • Get an NSURL holding the path to the MIDI file
  • Load the MIDI file into the sequence using MusicSequenceFileLoad
  • Create a new MusicPlayer, attach the sequence, and play it

Now here's the code. You will need to link the CoreAudio, CoreMIDI and AudioToolbox frameworks, and add the import <AudioToolbox/MusicPlayer.h>.

// Create a new music sequence
MusicSequence s;
// Initialise the music sequence
NewMusicSequence(&s);

// Get a string to the path of the MIDI file which
// should be located in the Resources folder.
// I'm using a simple test MIDI file which is included in the download bundle at the end of this document
NSString *midiFilePath = [[NSBundle mainBundle]
                          pathForResource:@"simpletest"
                          ofType:@"mid"];

// Create a new URL which points to the MIDI file
NSURL *midiFileURL = [NSURL fileURLWithPath:midiFilePath];

// Load the file into the sequence (note the bridge cast from NSURL to CFURLRef)
MusicSequenceFileLoad(s, (__bridge CFURLRef) midiFileURL, 0, 0);

// Create a new music player
MusicPlayer p;
// Initialise the music player
NewMusicPlayer(&p);

// Load the sequence into the music player
MusicPlayerSetSequence(p, s);
// Called to do some MusicPlayer setup. This just
// reduces latency when MusicPlayerStart is called
MusicPlayerPreroll(p);
// Starts the music playing
MusicPlayerStart(p);

// Get the length of the track so that we know how long to kill time for
MusicTrack t;
MusicTimeStamp len;
UInt32 sz = sizeof(MusicTimeStamp);
MusicSequenceGetIndTrack(s, 1, &t);
MusicTrackGetProperty(t, kSequenceTrackProperty_TrackLength, &len, &sz);

while (1) { // kill time until the music is over
    usleep(3 * 1000 * 1000);
    MusicTimeStamp now = 0;
    MusicPlayerGetTime(p, &now);
    if (now >= len)
        break;
}

// Stop the player and dispose of the objects
MusicPlayerStop(p);
DisposeMusicSequence(s);
DisposeMusicPlayer(p);

Hopefully you will have heard a rather mechanical scale followed by a chromatic scale. It's basic, but at least it's a start. The next step is to create an AUGraph so that we can play our MIDI file with an instrument effect.

Creating an AUGraph

When I first started reading about AUGraphs I thought they sounded horribly incomprehensible and opaque. In reality they're not too bad, just a bit fiddly to set up.

An AUGraph is a container which holds a collection of AUNodes. Each AUNode wraps an audio unit (an instrument, effect or I/O unit supplied by Apple). Really it's just like wiring up music units in real life. Say you have a MIDI keyboard and you want to output the sound as a trumpet with an echo effect: you would plug your keyboard into a box which translates MIDI messages into trumpet sounds, plug that box into an echo unit, and plug the echo unit into the speakers.

Choosing your AUNodes

In CoreAudio you choose the type of AUNode you need using three properties (defined by enums):

  • componentManufacturer: The author of the audio unit. In this case we will be using units from Apple: kAudioUnitManufacturer_Apple
  • componentType: The unit type
  • componentSubType: The unit sub-type

The unit type and sub-type constants can be found in the Apple documentation or in the header file AUComponent.h. In practice it's often easiest to track down the audio unit you need with a quick web search. But say I want a high pass filter: I look in the AUComponent.h header and find kAudioUnitSubType_HighPassFilter - this is the sub-type. The header groups each sub-type under the unit type it belongs to; for the high pass filter that's kAudioUnitType_Effect. Now I have my manufacturer, type and sub-type and I can use the audio unit.
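
Here's a minimal sketch (my own illustration, not from the original project) of how those three properties describe the high pass filter unit:

// These three fields are all Core Audio needs to locate an audio unit
AudioComponentDescription cd = {};
cd.componentManufacturer = kAudioUnitManufacturer_Apple;   // manufacturer
cd.componentType = kAudioUnitType_Effect;                  // type
cd.componentSubType = kAudioUnitSubType_HighPassFilter;    // sub-type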

For this example we will be using the following two Audio Units:

  • Sampler: This unit converts MIDI messages into sounds defined in a SoundFont or AUPreset; it's available from iOS 5
  • RemoteIO: This unit lets us output sound to the device's speakers

So here's the code, adapted from an example provided by Apple but with extra comments.

- (BOOL) createAUGraph {

    // Each Core Audio call returns an OSStatus. This means that we
    // can see if there have been any errors in the setup
    OSStatus result = noErr;

    // Create two nodes: one sampler and one IO
    AUNode samplerNode, ioNode;

    // Specify the common portion of an audio unit's identity, used for both audio units
    // in the graph.
    // Set up the manufacturer - in this case Apple
    AudioComponentDescription cd = {};
    cd.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Instantiate an audio processing graph
    result = NewAUGraph (&_processingGraph);
    NSCAssert (result == noErr, @"Unable to create an AUGraph object. Error code: %d '%.4s'", (int) result, (const char *)&result);

    // Specify the Sampler unit, to be used as the first node of the graph
    cd.componentType = kAudioUnitType_MusicDevice; // type - music device
    cd.componentSubType = kAudioUnitSubType_Sampler; // sub-type - sampler to convert our MIDI

    // Add the Sampler unit node to the graph
    result = AUGraphAddNode (self.processingGraph, &cd, &samplerNode);
    NSCAssert (result == noErr, @"Unable to add the Sampler unit to the audio processing graph. Error code: %d '%.4s'", (int) result, (const char *)&result);

    // Specify the Output unit, to be used as the second and final node of the graph
    cd.componentType = kAudioUnitType_Output; // Output
    cd.componentSubType = kAudioUnitSubType_RemoteIO; // Output to speakers

    // Add the Output unit node to the graph
    result = AUGraphAddNode (self.processingGraph, &cd, &ioNode);
    NSCAssert (result == noErr, @"Unable to add the Output unit to the audio processing graph. Error code: %d '%.4s'", (int) result, (const char *)&result);

    // Open the graph
    result = AUGraphOpen (self.processingGraph);
    NSCAssert (result == noErr, @"Unable to open the audio processing graph. Error code: %d '%.4s'", (int) result, (const char *)&result);

    // Connect the Sampler unit to the output unit
    result = AUGraphConnectNodeInput (self.processingGraph, samplerNode, 0, ioNode, 0);
    NSCAssert (result == noErr, @"Unable to interconnect the nodes in the audio processing graph. Error code: %d '%.4s'", (int) result, (const char *)&result);

    // Obtain a reference to the Sampler unit from its node
    result = AUGraphNodeInfo (self.processingGraph, samplerNode, 0, &_samplerUnit);
    NSCAssert (result == noErr, @"Unable to obtain a reference to the Sampler unit. Error code: %d '%.4s'", (int) result, (const char *)&result);

    // Obtain a reference to the I/O unit from its node
    result = AUGraphNodeInfo (self.processingGraph, ioNode, 0, &_ioUnit);
    NSCAssert (result == noErr, @"Unable to obtain a reference to the I/O unit. Error code: %d '%.4s'", (int) result, (const char *)&result);

    return YES;
}

Next we need to create a function to start the AUGraph running. This is equivalent to turning on the physical devices.

// Starting with the instantiated audio processing graph, configure its
// audio units, initialize it, and start it.
- (void) configureAndStartAudioProcessingGraph: (AUGraph) graph {

    OSStatus result = noErr;
    if (graph) {

        // Initialize the audio processing graph.
        result = AUGraphInitialize (graph);
        NSAssert (result == noErr, @"Unable to initialize AUGraph object. Error code: %d '%.4s'", (int) result, (const char *)&result);

        // Start the graph
        result = AUGraphStart (graph);
        NSAssert (result == noErr, @"Unable to start audio processing graph. Error code: %d '%.4s'", (int) result, (const char *)&result);

        // Print out the graph to the console
        CAShow (graph);
    }
}

So, now we've created a new audio graph with a sampler and an output unit. We've connected the sampler unit to the output unit and we've started the graph. Finally, we need to set up the instrument effect, connect the music sequence and play.

Set up the sound effect

// Load a sound effect from a SoundFont file
-(OSStatus) loadFromDLSOrSoundFont: (NSURL *)bankURL withPatch: (int)presetNumber {

    OSStatus result = noErr;

    // Fill out a bank preset data structure
    AUSamplerBankPresetData bpdata;
    bpdata.bankURL = (__bridge CFURLRef) bankURL;
    bpdata.bankMSB = kAUSampler_DefaultMelodicBankMSB;
    bpdata.bankLSB = kAUSampler_DefaultBankLSB;
    bpdata.presetID = (UInt8) presetNumber;

    // Set the kAUSamplerProperty_LoadPresetFromBank property
    result = AudioUnitSetProperty(self.samplerUnit,
                                  kAUSamplerProperty_LoadPresetFromBank,
                                  kAudioUnitScope_Global,
                                  0,
                                  &bpdata,
                                  sizeof(bpdata));

    // Check for errors
    NSCAssert (result == noErr,
               @"Unable to set the preset property on the Sampler. Error code:%d '%.4s'",
               (int) result,
               (const char *)&result);

    return result;
}

This code takes a SoundFont NSURL and a preset number as input. The NSURL should point to the SoundFont file in your Resources directory. SoundFonts can hold a number of instrument effects, so the presetNumber defines which one should be used.

Now we just repeat what we did before, but with a few added lines (marked by stars).

// Create a new music player
MusicPlayer p;
// Initialise the music player
NewMusicPlayer(&p);

// ************* Tell the music sequence to output through our new AUGraph
MusicSequenceSetAUGraph(s, self.processingGraph);

// ************* Load the SoundFont from file
NSURL *presetURL = [[NSURL alloc] initFileURLWithPath:[[NSBundle mainBundle] pathForResource:@"Gorts_Filters" ofType:@"sf2"]];

// ************* Initialise the SoundFont
[self loadFromDLSOrSoundFont:presetURL withPatch:10];

// Load the sequence into the music player
MusicPlayerSetSequence(p, s);
// Called to do some MusicPlayer setup. This just
// reduces latency when MusicPlayerStart is called
MusicPlayerPreroll(p);
// Starts the music playing
MusicPlayerStart(p);

// Get the length of the track so that we know how long to kill time for
MusicTrack t;
MusicTimeStamp len;
UInt32 sz = sizeof(MusicTimeStamp);
MusicSequenceGetIndTrack(s, 1, &t);
MusicTrackGetProperty(t, kSequenceTrackProperty_TrackLength, &len, &sz);

while (1) { // kill time until the music is over
    usleep(3 * 1000 * 1000);
    MusicTimeStamp now = 0;
    MusicPlayerGetTime(p, &now);
    if (now >= len)
        break;
}

// Stop the player and dispose of the objects
MusicPlayerStop(p);
DisposeMusicSequence(s);
DisposeMusicPlayer(p);

From the sample project you should understand how to play a MIDI file with a SoundFont effect. The final step is to get real-time access to the messages being parsed by the MusicPlayer. To do this we need to add an extra step to our chain. Currently it looks like this:

MIDI File -> Sequence -> Sampler -> IO Unit -> Speakers

We want it to look like this:

MIDI File -> Sequence -> Callback function to read messages -> Sampler -> IO Unit -> Speakers

With this system we will receive the messages in real time before passing them on to the Sampler unit. This can be achieved by creating a new MIDI endpoint. A MIDI endpoint is a destination where MIDI messages can be sent. This could be another MIDI app on your iPhone, an external MIDI instrument or, in this case, a callback function.




Creating a new MIDI endpoint

In order to capture the MIDI messages we need a destination that they can be sent to. This can be done by creating a MIDI endpoint:

// Create a client
// This provides general information about the state of the MIDI engine to the callback MyMIDINotifyProc
MIDIClientRef virtualMidi;
result = MIDIClientCreate(CFSTR("Virtual Client"),
                          MyMIDINotifyProc,
                          NULL,
                          &virtualMidi);

NSAssert( result == noErr, @"MIDIClientCreate failed. Error code: %d '%.4s'", (int) result, (const char *)&result);

// Create an endpoint
// Here we pass our client, a name (Virtual Destination),
// a callback function which will receive the MIDI packets (MyMIDIReadProc),
// a reference to the sampler unit for use within our callback,
// and a pointer to our endpoint (virtualEndpoint)
MIDIEndpointRef virtualEndpoint;
result = MIDIDestinationCreate(virtualMidi, CFSTR("Virtual Destination"), MyMIDIReadProc, self.samplerUnit, &virtualEndpoint);

NSAssert( result == noErr, @"MIDIDestinationCreate failed. Error code: %d '%.4s'", (int) result, (const char *)&result);

We also need to implement the callbacks in our code. This example will log each note as it's played:

// Get general MIDI notifications
void MyMIDINotifyProc (const MIDINotification *message, void *refCon) {
    printf("MIDI Notify, messageId=%d,", (int) message->messageID);
}

// Get the MIDI messages as they're sent
static void MyMIDIReadProc(const MIDIPacketList *pktlist,
                           void *refCon,
                           void *connRefCon) {

    // Cast the refCon back to the sampler audio unit
    AudioUnit player = (AudioUnit) refCon;

    MIDIPacket *packet = (MIDIPacket *)pktlist->packet;
    for (int i = 0; i < pktlist->numPackets; i++) {
        Byte midiStatus = packet->data[0];
        Byte midiCommand = midiStatus >> 4;

        // If the command is note-on, log the note letter in a readable format
        if (midiCommand == 0x09) {
            Byte note = packet->data[1] & 0x7F;
            int noteNumber = ((int) note) % 12;
            static NSString *noteNames[12] = { @"C", @"C#", @"D", @"D#", @"E", @"F",
                                               @"F#", @"G", @"G#", @"A", @"Bb", @"B" };
            NSLog(@"%@: %i", noteNames[noteNumber], noteNumber);
        }

        // Use MusicDeviceMIDIEvent to send the MIDI message on to the sampler
        // to be played. Note we forward every message, not just note-on, so
        // that note-offs reach the sampler too
        OSStatus result = noErr;
        result = MusicDeviceMIDIEvent(player, midiStatus, packet->data[1], packet->data[2], 0);

        packet = MIDIPacketNext(packet);
    }
}

The final step is to modify our main function to set the MusicSequence destination to our new endpoint:

OSStatus result = noErr;

self.graphSampleRate = 44100.0;

[self createAUGraph];
[self configureAndStartAudioProcessingGraph: self.processingGraph];

// Create a client
MIDIClientRef virtualMidi;
result = MIDIClientCreate(CFSTR("Virtual Client"),
                          MyMIDINotifyProc,
                          NULL,
                          &virtualMidi);

NSAssert( result == noErr, @"MIDIClientCreate failed. Error code: %d '%.4s'", (int) result, (const char *)&result);

// Create an endpoint
MIDIEndpointRef virtualEndpoint;
result = MIDIDestinationCreate(virtualMidi, CFSTR("Virtual Destination"), MyMIDIReadProc, self.samplerUnit, &virtualEndpoint);

NSAssert( result == noErr, @"MIDIDestinationCreate failed. Error code: %d '%.4s'", (int) result, (const char *)&result);

// Create a new music sequence
MusicSequence s;
// Initialise the music sequence
NewMusicSequence(&s);

// Get a string to the path of the MIDI file which
// should be located in the Resources folder
NSString *midiFilePath = [[NSBundle mainBundle]
                          pathForResource:@"simpletest"
                          ofType:@"mid"];

// Create a new URL which points to the MIDI file
NSURL *midiFileURL = [NSURL fileURLWithPath:midiFilePath];

MusicSequenceFileLoad(s, (__bridge CFURLRef) midiFileURL, 0, 0);

// Create a new music player
MusicPlayer p;
// Initialise the music player
NewMusicPlayer(&p);

// ************* Set the endpoint of the sequence to be our virtual endpoint
MusicSequenceSetMIDIEndpoint(s, virtualEndpoint);

// Load the SoundFont from file
NSURL *presetURL = [[NSURL alloc] initFileURLWithPath:[[NSBundle mainBundle] pathForResource:@"Gorts_Filters" ofType:@"sf2"]];

// Initialise the SoundFont
[self loadFromDLSOrSoundFont:presetURL withPatch:10];

// Load the sequence into the music player
MusicPlayerSetSequence(p, s);
// Called to do some MusicPlayer setup. This just
// reduces latency when MusicPlayerStart is called
MusicPlayerPreroll(p);
// Starts the music playing
MusicPlayerStart(p);

// Get the length of the track so that we know how long to kill time for
MusicTrack t;
MusicTimeStamp len;
UInt32 sz = sizeof(MusicTimeStamp);
MusicSequenceGetIndTrack(s, 1, &t);
MusicTrackGetProperty(t, kSequenceTrackProperty_TrackLength, &len, &sz);

while (1) { // kill time until the music is over
    usleep(3 * 1000 * 1000);
    MusicTimeStamp now = 0;
    MusicPlayerGetTime(p, &now);
    if (now >= len)
        break;
}

// Stop the player and dispose of the objects
MusicPlayerStop(p);
DisposeMusicSequence(s);
DisposeMusicPlayer(p);

So there you have it! Play your MIDI file through a nice reedy SoundFont while collecting the messages to drive your animation. I hope this saves you the three days it took me to figure it out! Here's the link again to the project files in case you missed it at the top of the guide: Project Files.




Update:

It's been pointed out to me that several resource files are missing from the project: a MIDI file called simpletest.mid and a SoundFont file called Gorts_Filters.SF2. These files can be downloaded here. To add them to the project you need to right click on the Resources folder in Xcode and click "Add Files". As a side note, this code should work with any MIDI file and any SoundFont file. The only thing to watch with SoundFont files is that the preset/patch that you're requesting exists.
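
Since a missing patch is the most common failure with third-party SoundFont files, here's a minimal sketch of guarding the preset load instead of crashing. It assumes the NSCAssert inside loadFromDLSOrSoundFont: is removed (or disabled in release builds) so the error actually propagates; the patch numbers are illustrative:

// Try the requested patch first; fall back to patch 0 if the bank lacks it
OSStatus result = [self loadFromDLSOrSoundFont:presetURL withPatch:10];
if (result != noErr) {
    NSLog(@"Patch 10 not found in this SoundFont (error %d), trying patch 0", (int) result);
    result = [self loadFromDLSOrSoundFont:presetURL withPatch:0];
}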

If you want to ask a general question about CoreAudio or discuss a CoreAudio issue, please post in the CoreAudio section of the forum.


Comments

Hi Ben,
I've downloaded your files and the extra MIDI and SF2 files, but when I launch the program I receive:

2012-05-07 23:17:50.138 MidiTest[5660:707] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[NSURL initFileURLWithPath:]: nil string parameter'

I've checked the path and it seems right.
Why?

The problem is in line 275 of AudioTest.m
NSURL *presetURL = [[NSURL alloc] initFileURLWithPath:[[NSBundle mainBundle] pathForResource:@"Gorts_Filters" ofType:@"sf2"]];
You need to change the extension from sf2 to SF2 (caps); that will fix it:
NSURL *presetURL = [[NSURL alloc] initFileURLWithPath:[[NSBundle mainBundle] pathForResource:@"Gorts_Filters" ofType:@"SF2"]];

To fix the problem:

*** Assertion failure in -[AudioTest midiTest], /Users/rzilibowitz/Documents/sourcecode/MidiTest/MidiTest/AudioTest.m:245
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'MIDIDestinationCreate failed. Error code: -10844 '§’ˇˇ''

You need to do this http://www.deluge.co/?q=comment/477#comment-477

I just ran a quick test. I tried to load a SoundFont file which didn't exist. I get the same error as you. Double check the path. Check that it's as it should be - just the name of the SoundFont file and then the extension separately.

If you're sure the path's correct, have you added the SoundFont file to your Xcode project? You have to right click the folder where you store your resources and click "Add Files to [name of project]...". You should be able to see the files in the browser to the left of the screen. After you've done this, let me know if you're still having problems.

Hi Ben,
I solved it by adding the files manually in Build Phases.
Now a second problem:

objc[3592]: Object 0x6891f90 of class __NSCFString autoreleased with no pool in place - just leaking - break on objc_autoreleaseNoPool() to debug
objc[3592]: Object 0x681d620 of class __NSCFString autoreleased with no pool in place - just leaking - break on objc_autoreleaseNoPool() to debug
2012-05-08 00:13:25.305 MidiTest[3592:16503] C: 0

Thanks

I don't get that error when I run the project. I would recommend profiling it with Instruments (Xcode -> Product -> Profile, then choose the leaks profile). It will give you a more detailed idea of where the leak is happening. Once you know that you can add an autorelease pool. For ARC enabled code you use the following:

@autoreleasepool {
    // Code
}
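
In this project the warnings most likely come from MyMIDIReadProc, which CoreMIDI invokes on its own thread where no pool is in place. A minimal sketch of the fix, wrapping the body of the callback shown earlier:

static void MyMIDIReadProc(const MIDIPacketList *pktlist,
                           void *refCon,
                           void *connRefCon) {
    @autoreleasepool {
        // ... process the packets exactly as shown earlier; any autoreleased
        // objects created while handling them are drained once per callback ...
    }
}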

This is really good work and I think I'm beginning to understand it, but I can't get it to compile under iOS 5.0.
The error I get from compiling your downloaded project is:
Automatic Reference Counting Issue
No known instance method for selector 'initAudioTest'

Any ideas?

Thanks

Chris

Hi Chris,
To me this sounds like somewhere in the code there is a call which uses a selector, i.e. @selector(initAudioTest), where the initAudioTest method doesn't exist. I'd search the project for selectors and investigate any call like this. Or you could debug and see exactly where the error happens. I didn't have any problem running the code on iOS 5, so it's not an ARC problem. If you're still having trouble let me know.
Ben

Hi Ben,
It bombs in the second line of code:

+ (id) audioTest {
    return [[self alloc] initAudioTest]; // RIGHT HERE
}

- (id) initAudioTest {
    if ((self = [self init])) {

    }
    return self;
}

That looks fine. Could you send me a copy of the interface file? It would throw a 'no known selector' error if the method were not defined in the interface. If you email me a zipped version of the project it would be much easier for me to debug. My email is: bensmiley45@hotmail.com.

Hi Ben,
Found the issue and fixed it. I should have looked at the build settings more carefully. For some reason ARC was turned on and garbage collection off. I turned ARC off as I have never used it anyway. Also, despite my changing the target to iOS 5 there was still one spot with 5.1; I changed that as well and it works fine.

Thanks Again!

Chris

Hi Ben - thanks for this tutorial, looks really helpful.

Just trying out the code with a more complex midi file featuring multiple tracks. Without changing any code beyond file names, if I load in the file with the default SF2 font file provided, I can hear the lead guitar track fine, and another drum-specific font file lets me hear the drum track. I'm a bit of a MIDI newbie - how does the MIDI playback interact with the sound font? Is it just a case of it, say, requesting a 'drum' sample, and if we happen to have a valid one loaded then it'll use that? Does it fall back to a default if it can't find the exact instrument requested by the MIDI file?

Also, is there a way to load in multiple instruments so that I can hear both the guitar and drum tracks? Presumably I'd need to do something like create multiple sampler units in my graph and then call loadFromDLSOrSoundFont to initialise each?

thanks
-Hugh

Hi Hugh,

I'm in the process of writing an advanced MIDI tutorial which will cover manually sequencing the MIDI track and playing multi-instrument tracks using different SoundFonts.

In the meantime, here's a brief description of how to achieve the result you're looking for. The sound gets played by the line:

MusicDeviceMIDIEvent (sampler, status, note, velocity, 0);

The sampler is a unit which converts a MIDI note into a sound using the SoundFont. To play a different instrument you would need to set up a number of samplers, each with a different font (I use a dictionary to store a list of pointers to the samplers), and then use a mixer unit to mix the outputs together. However, I'm not sure how you would find out which track each message came from.
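
Here's a sketch of that topology (my own illustration, not from the original project). It assumes the graph has already been created and opened as in createAUGraph, and for more than a handful of samplers you may need to raise the mixer's input bus count via kAudioUnitProperty_ElementCount:

#import <AudioToolbox/AudioToolbox.h>

// Build samplerCount sampler nodes feeding a multichannel mixer,
// which in turn feeds the I/O node
static OSStatus AddSamplersWithMixer(AUGraph graph, AUNode ioNode,
                                     UInt32 samplerCount, AUNode *samplerNodes) {
    AudioComponentDescription cd = {};
    cd.componentManufacturer = kAudioUnitManufacturer_Apple;

    // The mixer node
    cd.componentType = kAudioUnitType_Mixer;
    cd.componentSubType = kAudioUnitSubType_MultiChannelMixer;
    AUNode mixerNode;
    OSStatus result = AUGraphAddNode(graph, &cd, &mixerNode);
    if (result != noErr) return result;

    // One sampler per instrument, each wired to its own mixer input bus
    cd.componentType = kAudioUnitType_MusicDevice;
    cd.componentSubType = kAudioUnitSubType_Sampler;
    for (UInt32 bus = 0; bus < samplerCount; bus++) {
        result = AUGraphAddNode(graph, &cd, &samplerNodes[bus]);
        if (result != noErr) return result;
        result = AUGraphConnectNodeInput(graph, samplerNodes[bus], 0, mixerNode, bus);
        if (result != noErr) return result;
    }

    // The mixer's output feeds the I/O unit
    return AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0);
}

Each sampler then gets its own loadFromDLSOrSoundFont: call with its own SoundFont.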

Thanks for the effort! It's a shame Apple does not have better documentation for this incredible framework and libraries available. Cheers.

Hi,

I wanted to do something similar and tested your code. It worked great with some adaptations, thank you for that!
Unfortunately I couldn't get the C MIDI callback function to push anything out to other Objective-C objects when a MIDI note arrives.

In the first paragraph of this blog you said about your initial idea: "Unfortunately this is not possible..."
I guess you were talking just about the above being not possible, right?

Does somebody know why this is so? I guess it has something to do with the C run loop not being accessible from Objective-C? Just guessing, as I'm relatively new to the "pro programming" world.

What we're talking about is the ability to register a callback with the music player to notify us when a MIDI message is parsed. This isn't possible simply because the API doesn't support it (i.e. the framework doesn't provide a way to register this callback)! There are, however, two ways to get around this. The first is what I've explained here, using a virtual endpoint. The second is to write your own music player and parse the MIDI commands manually. This is more complex but provides maximum flexibility. I'm in the process of writing a tutorial on how to do this. If you want to be notified when it's ready, follow me on Twitter.

ahh ok got it.

My problem is: I want to start animations of notes in a score, driven by the MIDI events. Every MIDI event should trigger an animation of the corresponding note.

I tried to call my Objective-C "animateNote:noteType" method in MyMIDIReadProc above, at the point where MusicDeviceMIDIEvent is called, but nothing happened.

I'm stuck there and not sure why it did not work, but would you say this should basically be possible?

Thanks in advance!!

I tried to comment many times but it didn't show. Your project is very useful for me, but can you help me show a countdown clock (e.g. 3:20) while playing? I also don't know how to show the lyrics of a .mid file during playback. In MyMIDIReadProc I can't call any function or use self. For example, if a MIDI file is 42,350 bytes, how do I get the current byte position while playing and call a parser function to get the lyric at that byte? Thank you so much.

My problem is how to show the lyrics when playing a .mid file. Your example is very useful for me, but I don't know how to show the lyrics during playback. Can you help me please? Thank you so much.

I've recently published a new starter kit showing how to do karaoke on the iPhone. It's available here: Karaoke Starter Kit

I like this project a lot. I wanted to use it within a Mac OS X project and, with some small changes, got it to the point where it would have been calling MusicDeviceMIDIEvent repeatedly, except for the fact that I had temporarily commented it out since it was my one remaining link problem. I had included the necessary frameworks (AudioToolbox, CoreMIDI, CoreGraphics, CoreAudio), but this link could not be resolved. Perhaps it was the 64-bit environment. I tried to change to 32-bit, but that got other errors, perhaps because I wasn't doing it right.

Any ideas are welcome, of course. Thanks, Bruce

I'm using Mountain Lion and I don't get any problems. Could you tell me which version of iOS you're using? Problems have been reported with iOS 6 where you have to set UIBackgroundModes to audio. Also, could you tell me the exact error message you're getting?

Running this on the iOS 5 Simulator it works. Running on the iOS 6 Simulator it crashes before playing anything with the following console output.

2012-10-01 09:07:06.495 MidiTest[1074:c07] *** Assertion failure in -[AudioTest midiTest], /Users/rzilibowitz/Documents/sourcecode/MidiTest/MidiTest/AudioTest.m:245
2012-10-01 09:07:06.498 MidiTest[1074:c07] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'MIDIDestinationCreate failed. Error code: -10844 '§’ˇˇ''

To get rid of this error add the following key to your app's .plist file: "Required background modes". This will create an array. Add an entry to the array called "App plays audio".

Hello Ben,

Thank you for your tutorial, it seems that it could help me a lot.
I had the same problem as Ruben Zilibowitz, with the same error message. (*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'MIDIDestinationCreate failed. Error code: -10844 '§’ˇˇ'')
I added "Required background modes" and the entry "App plays audio"and now I have this error message when I try to run the App:
2013-05-11 22:25:06.693 MidiTest[20025:c07] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[NSURL initFileURLWithPath:]: nil string parameter'
I'm on Mountain Lion and the SDK is set to iOS 6.1.

Would you have any idea that could help me fix this?

Thank you

Thanks Ben,

Great job. Saved me a lot of time.

THANK YOU for this great writeup. I was able to get the project working when using your SF2 file, Gorts_Filters.SF2. However, when using any number of other SF2 files, downloaded from various places, the app crashes. The console says:

2012-10-19 01:42:21.188 MidiTest[64999:fb03] DLSBankManager::AddBank: Bank load failed
2012-10-19 01:42:21.189 MidiTest[64999:fb03] GlobalState::LoadInstrumentFromDLSCollection: Bank load failed
2012-10-19 01:42:21.189 MidiTest[64999:fb03] *** Assertion failure in -[AudioTest loadFromDLSOrSoundFont:withPatch:](), /Users/doconnor/Downloads/MidiTest/MidiTest/AudioTest.m:219
2012-10-19 01:42:21.190 MidiTest[64999:fb03] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Unable to set the preset property on the Sampler. Error code:-10871 'â’ˇˇ''

or

2012-10-19 01:48:18.715 MidiTest[65165:fb03] BankEntry::LoadInstrument: Unable to find patch 10 bank 0x79/0
2012-10-19 01:48:18.716 MidiTest[65165:fb03] GlobalState::LoadInstrumentFromDLSCollection: Bank load failed
2012-10-19 01:48:18.717 MidiTest[65165:fb03] *** Assertion failure in -[AudioTest loadFromDLSOrSoundFont:withPatch:](), /Users/doconnor/Downloads/MidiTest/MidiTest/AudioTest.m:219
2012-10-19 01:48:18.717 MidiTest[65165:fb03] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Unable to set the preset property on the Sampler. Error code:-10851 'ù’ˇˇ''

Do these errors mean that the SF2 files are no good? I've tried probably 6-7 different ones from different sources.

Dennis

Let's look at how a SoundFont works. Essentially, a SoundFont is a way of storing music samples. Each sample contains all the information needed to produce all the notes supported by MIDI. A SoundFont can contain a large number of samples, and each sample is indexed and retrieved by its patch number. So, for example, patch 1 could be a piano sound and patch 2 could be a flute.

However, there's another level of organisation with SoundFonts in that patches can be arranged into banks. For example, you could have a piano bank which contained a grand piano, an electric piano, a harpsichord and so on. The load function I've written uses the default melodic bank, but you could set the parameter in the function to the percussion bank or any other bank.
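
For example, here's a minimal sketch of pointing the bank preset data at the default percussion bank instead; compared with the loadFromDLSOrSoundFont: code in the guide, only the bank MSB changes:

AUSamplerBankPresetData bpdata;
bpdata.bankURL = (__bridge CFURLRef) bankURL;
bpdata.bankMSB = kAUSampler_DefaultPercussionBankMSB; // percussion rather than melodic
bpdata.bankLSB = kAUSampler_DefaultBankLSB;
bpdata.presetID = (UInt8) presetNumber;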

The reason you're getting these errors is that you're trying to access a patch that doesn't exist, for example accessing patch 5 when there are only two patches.

To solve the issue, use another piece of software to analyze the Sound Font and then choose a patch which exists.

Thanks Ben, your tutorial was very helpful!

But I'm a bit puzzled by the fact that MyMIDIReadProc is called with two 'note on' events per event, where only one is expected. Can you explain that? And is there a workaround to fix this?

All the best.

Hi Kenneth, I'm moving this question to its own thread in the forum: MyMIDIReadProc called twice per event

I want to say again that I really appreciate this thread. You pointed me in the right direction on the SoundFont issue, above, and I've got that fixed.

The problem now is how to make sure that playback occurs at the right tempo. I have two MIDI files of the same piece of Bach music (2-part Invention #1). In both files the time signature is identical (04 02 18 08). In File #1 the microseconds per beat are 500,000 and in File #2 the microseconds per beat are 722,892. Yet File #2 plays much *faster* than File #1. This is true when using your code (adapted) or using another OS X MIDI player called Rondo.

I now suspect that this problem is due to a different timebase/PPQN/tempo resolution. File #1 has a timebase of 384, but File #2 has a timebase of 192. I think that the MusicPlayer has a certain pulse rate and is using only 192 pulses per quarter note when playing File #2.

First, is it possible to arrange things so that the MusicPlayer plays the correct number of microseconds per quarter note, that is, so that it adjusts properly for the file's timebase? I guess that to do this I'd have to set the pulse rate of the MusicPlayer.

If this is not possible, is there a way to programmatically determine the timebase of a MIDI file (apart from parsing it separately myself) so that I can adjust the tempo accordingly? This seems like a hack but I'd take it if necessary.

I earlier asked about tempo differences between different MIDI files of the same piece of music. It turns out that the "slower" version had twice as many bars in the file. So that explains it, not some far-out problem with ticks or timings.

My audio code is now working great on the simulator, thanks to you.

However, playback is terrible on the actual device. It seems to be taking too many system resources: the audio can be heard playing, but there are frequent brief pauses in the music, and user interaction is essentially suspended (extremely slow) while audio is playing. When playback ends, the system becomes responsive again.

Has anybody else run into this problem?

When I get the data in MyMIDIReadProc, I want to show it in a text view.
