30 Apr 12
16:20

iOS: encoding to AAC with the Extended Audio File Services gotchas

I recently implemented encoding to AAC from raw PCM in memory (as opposed to from a file) using the Extended Audio File Services, whose calls are prefixed with ExtAudioFile*. The ExtAudioFile API basically wraps a converter around the standard AudioFile functionality. It was a bit hard to track down the correct documentation, but it’s not that far off from the regular AudioFile code you’d write when saving a WAV file, for instance.
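At a high level, the write side boils down to a short call sequence. This is just a skeleton of the full listing further down (the url and format variables here are placeholders):

   ExtAudioFileRef outFile;
   // 1. create the output file, describing the compressed (AAC) destination format
   ExtAudioFileCreateWithURL(url, kAudioFileM4AType, &aacFormat, NULL,
                             kAudioFileFlags_EraseFile, &outFile);
   // 2. describe the PCM you will hand it; the wrapped converter does PCM -> AAC
   ExtAudioFileSetProperty(outFile, kExtAudioFileProperty_ClientDataFormat,
                           sizeof(pcmFormat), &pcmFormat);
   // 3. push PCM frames at it in a loop
   ExtAudioFileWrite(outFile, numFrames, &bufferList);
   // 4. finish up
   ExtAudioFileDispose(outFile);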

Then, a few days later, AAC encoding suddenly stopped working, without my code changing at all. Debugging led me to find that this only happened on my iPhone 4S, and not my iPod or 3GS, nor the simulator. The problem existed with apps from other developers too, including the helpful Michael Tyson’s TPAAC converter project on GitHub. The call would block for a long time (10-30 seconds) and then fail, sometimes at ExtAudioFileSetProperty when setting kExtAudioFileProperty_ClientDataFormat, and otherwise at the ExtAudioFileWrite call. I have no clue why the ExtAudioFileSetProperty would sometimes go through. In all cases, though, audio in my app was dead after the failed call until I restarted the app. This made me think it was something to do with audio sessions (which, as Michael Tyson points out, can cause issues), but it turns out this wasn’t the case for me. FYI, my OSStatus error code was 268451843, which doesn’t even translate to anything in Core Audio.

After reverting to swearing at Core Audio’s notorious API documentation, and exploring various dead ends such as modifying my audio session interruption handler and running the export on the main thread just in case there was some race condition, I came across this life-saving Stack Overflow post by a mystery user1021430, whom I am feeling very fond of (since he has an ambiguous handle I can imagine whatever I want). His post mentions that on certain dual-core devices like the 4S the hardware encoder is finicky, and that it is possible to switch to a software encoder to fix these issues. I had no clue about this from the docs, and I really think the guy who posted the answer has the most undervalued reputation score (1) on the site.
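The fix itself is a single property set on the ExtAudioFileRef, done right after creating the file and before setting the client data format (it also appears in the full listing below):

   // force the software AAC codec; the hardware encoder on dual-core devices
   // like the iPhone 4S is what was hanging and then failing
   UInt32 codecManf = kAppleSoftwareAudioCodecManufacturer;
   ExtAudioFileSetProperty(outFile, kExtAudioFileProperty_CodecManufacturer, sizeof(UInt32), &codecManf);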

Anyway, here’s my finalized (and unpolished) code that exports to AAC from a buffer; it includes progress bar updates that you should remove before reusing it. The FilterSound and FilterAudioBuffer classes are omitted, but they basically just help fill out a buffer and shouldn’t be hard to understand.
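For reference, this is roughly the shape of the interfaces the code below assumes for those two classes. The usage is taken straight from the listing, but the declarations themselves are my reconstruction, not the real headers:

@interface FilterAudioBuffer : NSObject {
@public
   SInt16 *buffer;   // interleaved PCM, channels * frames samples
}
- (id)initWithChannels:(int)channels frames:(int)frames shouldZero:(BOOL)shouldZero;
@end

@interface FilterSound : NSObject
- (uint64_t)approxTotalSamples;   // rough total length, used only for the progress bar
- (BOOL)shouldBeRemoved;          // YES once the sound has been fully rendered
- (size_t)writeToBuffer:(SInt16*)buffer samples:(int)samples;   // returns frames written
@end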


-(BOOL)saveAACFileWithSound:(FilterSound*)s filename:(NSString*)filename progressView:(UIProgressView*)progView
{
   OSStatus err = noErr;

   NSString *path = [self pathForAudioFileWithFilename:filename];
   NSURL *url = [NSURL fileURLWithPath:path];

   char errCode[5];

   uint64_t doneSamples = 0;
   uint64_t totalSamples = [s approxTotalSamples];   
   
   AudioStreamBasicDescription format;
   ExtAudioFileRef outFile;
   memset(&format, 0, sizeof(format));

   format.mFormatID = kAudioFormatMPEG4AAC;
   //format.mFormatFlags =  kMPEG4Object_AAC_Main;
   //format.mSampleRate = kFilterSampleRate; // the encoder for compressed formats uses a different sample rate (see docs)
   format.mChannelsPerFrame = 2;
   
   err = ExtAudioFileCreateWithURL((CFURLRef)url,
                                kAudioFileM4AType,
                                &format,
                                   NULL,
                                kAudioFileFlags_EraseFile,
                                &outFile);

   if (err != noErr) {
      strncpy(errCode,(const char*) &err, 4);
      errCode[4] = 0;
      
      NSLog(@"AudioFileCreateWithURL path=%@ Error %li = %s", path, err, errCode);

      return NO;
   }

   // Dual-core devices like the iPhone 4S will sometimes freak out on the next set-property
   // call (or the later write) unless you do this, and audio stays broken until the app restarts.
   // Forcing the software codec sidesteps the flaky hardware encoder. No idea why this is.
   UInt32 codecManf = kAppleSoftwareAudioCodecManufacturer;
   ExtAudioFileSetProperty(outFile, kExtAudioFileProperty_CodecManufacturer, sizeof(UInt32), &codecManf);

   // setup the client format since we want to convert from PCM
   AudioStreamBasicDescription clientFormat;
   memset(&clientFormat, 0, sizeof(format));

   clientFormat.mFormatID = kAudioFormatLinearPCM;
   // packed, signed-integer, little-endian PCM (no kAudioFormatFlagIsBigEndian)
   clientFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger |
         kLinearPCMFormatFlagIsPacked;
   clientFormat.mSampleRate = kFilterSampleRate;
   clientFormat.mBitsPerChannel = sizeof(AudioSampleType) * 8; // AudioSampleType == 16 bit signed ints
   clientFormat.mChannelsPerFrame = 2;
   clientFormat.mFramesPerPacket = 1;
   clientFormat.mBytesPerFrame = (clientFormat.mBitsPerChannel / 8) * clientFormat.mChannelsPerFrame;
   clientFormat.mBytesPerPacket = clientFormat.mBytesPerFrame * clientFormat.mFramesPerPacket;
   
   err = ExtAudioFileSetProperty(outFile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);
   if (err != noErr){
      NSLog(@"ExtAudioFileSetProperty error %li", err);
      return NO;
   }
   
   size_t outNumPackets = 0;
   size_t numBytes = 0;

   FilterAudioBuffer* audio_buf;
   audio_buf = [[FilterAudioBuffer alloc] initWithChannels:2
                                                    frames:kSaveWavBufferSamples
                                                shouldZero:NO];

   AudioBufferList writeBuffer;
   while (![s shouldBeRemoved]) {
      // we use the same buffer each iteration, so we need to wipe it clean.
      memset(audio_buf->buffer, 0, sizeof(SInt16) * 2 * kSaveWavBufferSamples);
      // have the sound object fill out the next sound
      // for this function the term 'packet' is a frame for uncompressed formats.
      // fortunately writeToBuffer returns the number of frames written
      // (This can be shorter for buffer playback at the end when the sound size doesnt
      // divide evenly into kSaveWavBufferSamples
      outNumPackets = [s writeToBuffer:(SInt16*)audio_buf->buffer samples:kSaveWavBufferSamples];
      numBytes = outNumPackets * sizeof(SInt16) * 2;

      writeBuffer.mNumberBuffers = 1;
      writeBuffer.mBuffers[0].mNumberChannels = clientFormat.mChannelsPerFrame;
      writeBuffer.mBuffers[0].mDataByteSize = numBytes;
      writeBuffer.mBuffers[0].mData = audio_buf->buffer;

      err = ExtAudioFileWrite(outFile, (UInt32)outNumPackets, &writeBuffer);

      doneSamples += outNumPackets;
      float prog;
      prog = doneSamples / ((float) totalSamples);
      [self performSelectorOnMainThread:@selector(updateProgress:)
                             withObject:[NSArray arrayWithObjects:[NSNumber numberWithFloat:prog],
                                                 progView, nil]
                          waitUntilDone:NO];
      if (err != noErr){
         NSLog(@"AudioFileWritePackets error %li", err);
         break;
      }
   }
   [audio_buf release];
   ExtAudioFileDispose(outFile);
   return err == noErr;
}
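If it helps, here’s a minimal way I’d call it; sound and progressView are placeholders for objects you already have. Since the export loop blocks (progress updates are already marshalled back to the main thread via performSelectorOnMainThread), it belongs on a background queue:

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
   // sound is a FilterSound you've already built; progressView is a UIProgressView in your UI
   BOOL ok = [self saveAACFileWithSound:sound filename:@"mytrack.m4a" progressView:progressView];
   NSLog(@"AAC export %@", ok ? @"succeeded" : @"failed");
});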

21 Apr 12
17:06

SoundCloud iOS integration gotchas

SoundCloud's iOS SDK

Last year when I was living in Berlin I had the opportunity to visit the SoundCloud campus, where Henrik Lenberg and Eric Wahlforss were nice enough to talk to me about a range of things, from Audacity to how SoundCloud functions as a development team. I’ve been to quite a few corporate campuses and a few startup offices, but SoundCloud’s place really felt different. Located in the middle of the design-savvy Mitte district, with a couple floors of really fashionable-looking workspaces and employees, I was kind of surprised to find myself a little self-conscious about my clothes at a tech company. SoundCloud’s public image is also of interest to me because a few people have told me SoundCloud is ‘open source’, and while that claim doesn’t hold up or really even make sense, I understand the association one might make with a freemium service as inviting as SoundCloud.

As it were, there was no work I could do with SoundCloud at that time. Lately, however, I’ve been integrating SoundCloud into an iOS project, following the instructions from one of their many GitHub pages (CocoaSoundCloudAPI). There are a few other projects, like the CocoaAPIWrapper, which seems to be a wrapper, but I chose not to use it because their SDK page points to the other.

The directions in the readme on their GitHub are pretty good, but they didn’t get me up and running in 15 minutes. They would have, if I’d had the following extra notes for getting around some gotchas in the CocoaSoundCloudAPI readme, which I’ll share in case they save other people time:

The “In XCode” section, Step 1 says:

1. Create a Workspace containing all those submodules added above.

This glosses over a lot. First, there are many ways to create an Xcode workspace. Since they used the word ‘create’, I wrongly went to File->New->Workspace… .
The next nit is that they should specify adding just the project files rather than the subdirectories, which is what the wording sort of implies.
So the correct thing to do here is to drag the 5 .xcodeproj files into your existing Xcode project (the one you want to add SoundCloud integration to). The first one you add will ask if you want to convert to a workspace, which is kind of like a Microsoft Visual Studio solution file for managing multiple projects. I just named my new workspace with the same file prefix as my Xcode project, saved it in the same place the .xcodeproj file was, and from now on I open the workspace instead of the project file.

To be honest I had no idea what an Xcode workspace was, or how it interacts with project files. I recently upgraded to Xcode 4, but I don’t remember seeing workspaces in Xcode 2 or 3, even though I worked on some apps back then that spanned multiple projects.

2. To be able to find the Headers, you still need to add ../** (or ./** depending on your setup) to the Header Search Path of the main project.

For their step 2 I had to use ./** in User Header Search Paths (not Header Search Paths) in the build settings for the project. If you are adding the submodules inside the top-level folder of the project, I don’t see how you would need ../**, since that goes outside all of your projects. But maybe new Xcode projects do something weird like stick the project file two levels down, in which case you may need the ../**.

That one only held me up for a little while, but the next item in the “Usage” section tripped me up for much longer:

+ (void)initialize;
{
    [SCSoundCloud  setClientID:@""
                        secret:@""
                   redirectURL:[NSURL URLWithString:@""]];
}

The client ID and secret are provided to you when you register your app on SoundCloud. The redirect URL is something you have to come up with yourself.
At first I thought it was just some support webpage URL and not crucial, so I just entered my webpage.
After finishing all their steps, the SoundCloud login view was appearing, but my sounds would not upload, with the log messages:

[NXOAuth2PostBodyStream open] Stream has been reopened after close
Cancelled

each time I tried to upload a WAV track. Nothing would appear on the SoundCloud account I was uploading to. So it seems that although I could enter my password, the transaction was being cancelled midway.

I spent a long time looking into the first message, but it turns out that was completely unrelated. My app’s SoundCloud upload now works, and that “Stream has been reopened after close” message is still there.
Eventually I looked at the redirect URL and realized it is a callback the iOS app uses to handle messages from the SoundCloud API, via some kind of URL handler mechanism. I just changed the URL from http://company.com to myapp://soundcloud and then everything worked. I did not implement any kind of URL handler for this. This one’s still kind of a mystery to me, but maybe it is some common standard, since I don’t deal with web services all that often.
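For concreteness, here’s roughly what my initialize call ended up looking like once I switched to a custom-scheme redirect URL. The client ID and secret are placeholders for the values from your SoundCloud app registration, and myapp://soundcloud is just a scheme I made up; in OAuth terms the redirect URL is where the login flow hands control back, which is presumably why a plain web page doesn’t work:

+ (void)initialize;
{
    // placeholder credentials: use the values from your app's registration on soundcloud.com
    [SCSoundCloud  setClientID:@"YOUR_CLIENT_ID"
                        secret:@"YOUR_CLIENT_SECRET"
                   redirectURL:[NSURL URLWithString:@"myapp://soundcloud"]];
}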

One other note that will save newer git users who are already managing their projects with git a few minutes of googling on how to integrate these submodules. Since the instructions tell you to run the ‘git submodule’ commands within your project folder, this creates something like a weak reference: when you commit and push, and then pull from another computer, you will have empty folders where the submodules should be. You will need to run ‘git submodule init; git submodule update’ on the other computer to fill them out.

So, that’s it. All in all I’d say it took 4 hours to do a simple integration of a SoundCloud upload without knowing these details. Now that I know them, getting the same basic functionality should be pretty quick next time. <plug>So if you’re looking for someone to do contract work integrating SoundCloud into your project, and you don’t want to pay them to learn the SoundCloud API, consider hiring me through Roughsoft.</plug>