21 Dec 12
18:46

XCode 4 with a networked drive: fixing ‘null character(s) ignored’ and ‘missing @end’ warnings

RIP command-option-up header/source switching, format indent

XCode 4 had a lot of changes over 3.x. Some seemed kinda okay, others seemed horrible. When it came out I did… absolutely nothing, and held on to the 3.x version for as long as it was possible to do so.
Since I had to switch to the iTunes-ized Xcode 4, I’ve been using it only as a compiler/debugger, doing all my text editing in emacs on my linux box and using synergy to swap over to my mac to compile. I want my editor to be on the same box as the source files in case something weird happens with the network or the filesharing service, so the project I compile on my mac is served over smb from my linux box. This may sound needlessly complex, but I like having two boxes, even after trying a dual-screen mac. Also, I had already set up smb to my liking before trying this out, and I was already experimenting with this setup while XCode 3 was current.

With XCode 4, I noticed something weird happening when I made multiple saves in a short span, e.g. two quick saves in under a minute. The saved changes wouldn’t make it into the build for whatever reason. I remember a similar bug with XCode 3, but there you could just wait a second, recompile, and it would be okay. With XCode 4, sometimes the new changes would never get propagated across the network and compiled, even after waiting a long time. I eventually realized that if I saved the file on my linux box again by making a small edit, XCode would pick it up and compile it, but oddly enough, only if that save happened a minute or so after the first.

However, after inspecting the fileshared source code on my mac with other editors, it was clear that this wasn’t a filesharing issue, but simply XCode caching the file. The “Touch command” that existed in XCode 3 would presumably fix this (although I would be afraid to use it if caching is the suspect), but it was inexplicably removed with XCode 4.
When the caching is an issue, you get a warning that looks like ‘null character(s) ignored’ or an error that says ‘missing @end’ and sometimes the file will be cut in half in XCode.

After dealing with this for over a year, I just found out you can prevent these by making sure you have no source files open in XCode. It seems the editor uses its own cache, and the compiler compiles from that cache instead of from the disk. Note that if you click on an error in the left side panel, this opens up the editor. So now I just create a new tab with a .png file in it and close all other XCode tabs. When googling this issue over the last year I saw no mention of this bug, so maybe no one else uses XCode with a networked drive. Noting it here just in case.

14 Oct 12
20:31

Fixing guake transparency with metacity slow alt-tabbing (ubuntu)

I like guake as my terminal because it does full screen and half screen prettier than the gnome terminal (and I can hide the tabs too).
One part that is important for prettiness is transparency. Some will say this is just eye candy, but it helps me monitor chatrooms while I program in full screen at 90 percent opacity.
I can only get guake transparency to work by enabling metacity’s compositor with

gconftool-2 -s '/apps/metacity/general/compositing_manager' --type bool true

But there’s a catch. Enabling the metacity compositor makes alt-tabbing take a few seconds on my machine. This happens even after you go into compiz settings and set the display delay in the application switcher to zero (possibly the most useless feature that ever existed). It turns out the issue is in metacity.

This page goes over the bug and has a patch that has not been applied (at least not as of natty, which I’m still on).
I’m copying it here for convenience. You can save it to a file called metacityalttab.patch and put that in the metacity directory you will create in the next step.

Index: metacity-2.30.3/src/core/screen.c
===================================================================
--- metacity-2.30.3.orig/src/core/screen.c 2011-08-30 22:55:19.000000000 +0400
+++ metacity-2.30.3/src/core/screen.c 2011-08-30 23:02:19.000000000 +0400
@@ -1285,7 +1285,6 @@
       entries[i].key = (MetaTabEntryKey) window->xwindow;
       entries[i].title = window->title;

-      win_pixbuf = get_window_pixbuf (window, &width, &height);
       if (win_pixbuf == NULL)
         entries[i].icon = g_object_ref (window->icon);
       else

You can then download the source, patch it, build, and install with this:

#patch an apt-get package (metacity example)
mkdir metacity
cd metacity/
apt-get source metacity
#the patch paths start with metacity-2.30.3/, so -p0 works from this dir
#(you may still need to play with -p0/-p1)
patch -l -p0 < metacityalttab.patch
sudo apt-get build-dep metacity
#dpkg-buildpackage needs to run from inside the extracted source dir
cd metacity-2.30.3/
dpkg-buildpackage
cd ..
#if you are running 64 bit use the 64 bit deb that ends up in the same dir
sudo dpkg -i metacity_2.30.3-0ubuntu8_i386.deb

After this, you won’t see the changes until you restart.
If you’re like me and hate restarting more than you fear a crash, you can try killing metacity and restarting it.
I don’t know if there’s a better way to do this, or if this is even dangerous, but I seem to have survived.
Killing metacity will, however, make your windows look really weird for a second.
You might also want to disable the compositor first, to be safe.

gconftool-2 -s '/apps/metacity/general/compositing_manager' --type bool false
killall -9 metacity
nohup metacity > /dev/null &
gconftool-2 -s '/apps/metacity/general/compositing_manager' --type bool true

After that, alt-tabbing should be as fast as it should be, and you should have transparency.
And all was well again.

05 May 12
18:10

Use NSMutableData with NSCoder if you will move the data or die

I recently implemented dynamic loading and unloading of audio buffers so I could do real-time effects/recording of unlimited length in my app. (The Core Data stuff wouldn’t cut it.)

It was working well, except that my save functions got all screwed up, and the saves ended up sounding all glitchy and out of order, like some kind of accidental Fennesz. That’s pretty useful and was almost as fun as the time I discovered Yasunao Tone’s Wounded Man stuff, but I prefer to have save working.
I was using NSData to encode via encodeDataObject for my NSCoder. The thing is, as you can see in the code sample below, I am dealing with large amounts of data (>100 MB), so I load and unload the data before and after the NSData creation and its encodeDataObject call. I thought this would be fine since I expected the encoder to write immediately. Instead, it appears it does not write immediately, and there is no flush method to force it to write the data, meaning that if you modify the bytes backing the NSData even after it is out of scope, it will end up writing something other than what you intended.

If you use NSMutableData, however, it writes immediately.
I understand that NSData is immutable. However, since I was creating it with dataWithBytesNoCopy: it was my own buffer, and I thought the encoder would be done with it by the end of encodeDataObject.

Next.

- (void)encodeWithCoder:(NSCoder *)encoder
{
   [encoder encodeInt:channels forKey:@"channels"];
   [encoder encodeInt:numFrames forKey:@"frames"];
   [encoder encodeFloat:averagePower forKey:@"power"];
   // bufsize is the size of the sample buffer in bytes:
   // channels * frames * bytes per sample
   size_t bufsize = channels * numFrames * sizeof(*buffer);
   // wrap the buffer in NSMutableData (not NSData; see above) and do not free it when the data object is released
   [self fetchBuffer];
   NSData *data = [NSMutableData dataWithBytesNoCopy:buffer length:bufsize freeWhenDone:NO];
   [encoder encodeDataObject:data];
   [self unfetchBuffer];
}
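
For completeness, here’s roughly what the matching decode side can look like. This is a hedged sketch rather than my exact code (the real thing goes back through the dynamic load/unload path), assuming the same ivars as above and a plain malloc’d sample buffer:

- (id)initWithCoder:(NSCoder *)decoder
{
   if ((self = [super init])) {
      channels = [decoder decodeIntForKey:@"channels"];
      numFrames = [decoder decodeIntForKey:@"frames"];
      averagePower = [decoder decodeFloatForKey:@"power"];
      // decodeDataObject mirrors encodeDataObject; copy the bytes into our own buffer
      NSData *data = [decoder decodeDataObject];
      buffer = malloc([data length]);
      memcpy(buffer, [data bytes], [data length]);
   }
   return self;
}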

30 Apr 12
16:20

iOS: encoding to AAC with the Extended Audio File Services gotchas

I recently implemented encoding to AAC from raw PCM in memory (as opposed to from a file) using the Extended Audio File Services, whose calls are prefixed with ExtAudioFile*. The ExtAudioFile stuff basically wraps a converter around the standard AudioFile functionality. It was a bit hard to track down the correct documentation, but it’s not that far off from the regular AudioFile stuff you’d do when writing a WAV file, for instance.

Then, a few days later, AAC encoding suddenly stopped working, without my code changing at all. Debugging led me to find that this only happened on my iPhone 4S, and not my iPod or 3GS, nor the simulator. This problem existed with apps from other developers too, including the helpful Michael Tyson’s TPAAC converter project on github. I was getting a long block (10-30 seconds) and then an error, sometimes from ExtAudioFileSetProperty on kExtAudioFileProperty_ClientDataFormat, and if not there, from the ExtAudioFileWrite call. I have no clue why ExtAudioFileSetProperty would sometimes go through. In all cases, though, the audio in my app was dead after the failing call until I restarted the app. This made me think it was something to do with AudioSessions (which, as Michael Tyson points out, can cause issues), but it turns out that wasn’t the case for me. FYI, my OSStatus error code was 268451843, which doesn’t even translate to anything in Core Audio.

After resorting to swearing at the notoriously bad API documentation, and exploring various dead ends such as modifying my AudioSession interruption handler and trying the conversion on the main thread just in case there was some race condition, I came across this life-saving stack overflow post by a mystery user1021430, who I am feeling very fond of (since he has an ambiguous handle I can imagine what I want). His post mentioned that on certain dual-core devices like the 4S the hardware encoder seems to be finicky, and that it is possible to switch to a software encoder to fix these issues. I had no clue about this from the docs, and really think the guy that posted the answer probably has the most undervalued reputation (1).
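
In case you just want the punchline, the fix boils down to setting the codec manufacturer to the software codec right after creating the output file, before setting the client data format. These are the same two lines that appear in the full listing below (outFile is the ExtAudioFileRef you just created):

// force the software AAC encoder; the hardware one is what misbehaves on
// dual-core devices like the iPhone 4S
UInt32 codecManf = kAppleSoftwareAudioCodecManufacturer;
ExtAudioFileSetProperty(outFile, kExtAudioFileProperty_CodecManufacturer, sizeof(UInt32), &codecManf);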

Anyway, here’s my finalized (and unpolished, including progress bar updates that you should remove) code that exports to AAC from a buffer. The FilterSound and FilterAudioBuffer classes are omitted, but they basically just help fill out a buffer of samples and shouldn’t be hard to understand (there’s a rough sketch of what the code assumes about them after the listing).


-(BOOL)saveAACFileWithSound:(FilterSound*)s filename:(NSString*)filename progressView:(UIProgressView*)progView
{
   OSStatus err = noErr;

   NSString *path = [self pathForAudioFileWithFilename:filename];
   NSURL *url = [NSURL fileURLWithPath:path];

   char errCode[5];

   uint64_t doneSamples = 0;
   uint64_t totalSamples = [s approxTotalSamples];   
   
   AudioStreamBasicDescription format;
   ExtAudioFileRef outFile;
   memset(&format, 0, sizeof(format));

   format.mFormatID = kAudioFormatMPEG4AAC;
   //format.mFormatFlags =  kMPEG4Object_AAC_Main;
   //format.mSampleRate = kFilterSampleRate; // the encoder for compressed formats uses a different sample rate (see docs)
   format.mChannelsPerFrame = 2;
   
   err = ExtAudioFileCreateWithURL((CFURLRef)url,
                                kAudioFileM4AType,
                                &format,
                                   NULL,
                                kAudioFileFlags_EraseFile,
                                &outFile);

   if (err != noErr) {
      strncpy(errCode,(const char*) &err, 4);
      errCode[4] = 0;
      
      NSLog(@"AudioFileCreateWithURL path=%@ Error %li = %s", path, err, errCode);

      return NO;
   }

   // sometimes dual core devices like the iPhone 4S will freak out on the next set property call unless you do this
   // (force the software codec). What's more, audio stays broken afterwards until the app is restarted.
   // No idea why this is.
   UInt32 codecManf = kAppleSoftwareAudioCodecManufacturer;
   ExtAudioFileSetProperty(outFile, kExtAudioFileProperty_CodecManufacturer, sizeof(UInt32), &codecManf);

   // setup the client format since we want to convert from PCM
   AudioStreamBasicDescription clientFormat;
   memset(&clientFormat, 0, sizeof(clientFormat));

   clientFormat.mFormatID = kAudioFormatLinearPCM;
   // the PCM client data is little endian (don't set kAudioFormatFlagIsBigEndian)
   clientFormat.mFormatFlags =  kLinearPCMFormatFlagIsSignedInteger | 
         kLinearPCMFormatFlagIsPacked;//~kAudioFormatFlagIsBigEndian;
   clientFormat.mSampleRate = kFilterSampleRate;
   clientFormat.mBitsPerChannel = sizeof(AudioSampleType) * 8; // AudioSampleType == 16 bit signed ints
   clientFormat.mChannelsPerFrame = 2;
   clientFormat.mFramesPerPacket = 1;
   clientFormat.mBytesPerFrame = (clientFormat.mBitsPerChannel / 8) * clientFormat.mChannelsPerFrame;
   clientFormat.mBytesPerPacket = clientFormat.mBytesPerFrame * clientFormat.mFramesPerPacket;
   
   err = ExtAudioFileSetProperty(outFile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);
   if (err != noErr){
      NSLog(@"ExtAudioFileSetProperty error %li", err);
      return NO;
   }
   
   size_t outNumPackets = 0;
   size_t numBytes = 0;

   FilterAudioBuffer* audio_buf;
   audio_buf = [[FilterAudioBuffer alloc] initWithChannels:2
                                                    frames:kSaveWavBufferSamples
                                                shouldZero:NO];

   AudioBufferList writeBuffer;
   while (![s shouldBeRemoved]) {
      // we use the same buffer each iteration, so we need to wipe it clean.
      memset(audio_buf->buffer, 0, sizeof(SInt16) * 2 * kSaveWavBufferSamples);
      // have the sound object fill out the next chunk of audio.
      // For ExtAudioFileWrite, a 'packet' is a frame for uncompressed client formats.
      // Fortunately writeToBuffer returns the number of frames written
      // (this can be shorter on the last pass, when the sound length doesn't
      // divide evenly into kSaveWavBufferSamples).
      outNumPackets = [s writeToBuffer:(SInt16*)audio_buf->buffer samples:kSaveWavBufferSamples];
      numBytes = outNumPackets * sizeof(SInt16) * 2;

      writeBuffer.mNumberBuffers = 1;
      writeBuffer.mBuffers[0].mNumberChannels = clientFormat.mChannelsPerFrame;
      writeBuffer.mBuffers[0].mDataByteSize = numBytes;
      writeBuffer.mBuffers[0].mData = audio_buf->buffer;

      err = ExtAudioFileWrite(outFile, outNumPackets, &writeBuffer);

      doneSamples += outNumPackets;
      float prog;
      prog = doneSamples / ((float) totalSamples);
      [self performSelectorOnMainThread:@selector(updateProgress:)
                             withObject:[NSArray arrayWithObjects:[NSNumber numberWithFloat:prog],
                                                 progView, nil]
                          waitUntilDone:NO];
      if (err != noErr){
         NSLog(@"AudioFileWritePackets error %li", err);
         break;
      }
   }
   [audio_buf release];
   ExtAudioFileDispose(outFile);
   return err == noErr;
}
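
Since FilterSound and FilterAudioBuffer are omitted, here is a hypothetical sketch of the minimal interface the function above assumes about them. These are not the real classes, just the parts the save code touches:

// hypothetical stand-ins for the omitted classes; only what saveAACFileWithSound: uses
@interface FilterAudioBuffer : NSObject {
@public
   void *buffer;   // interleaved SInt16 samples, channels * frames of them
}
- (id)initWithChannels:(int)channels frames:(int)frames shouldZero:(BOOL)shouldZero;
@end

@interface FilterSound : NSObject
- (uint64_t)approxTotalSamples;                                 // total frames to expect
- (BOOL)shouldBeRemoved;                                        // YES once fully written out
- (size_t)writeToBuffer:(SInt16 *)buf samples:(size_t)samples;  // returns frames written
@end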

21 Apr 12
17:06

SoundCloud iOS integration gotchas

SoundCloud's iOS SDK

Last year when I was living in Berlin I had the opportunity to visit the SoundCloud campus, where Henrik Lenberg and Eric Wahlforss were nice enough to talk to me about a range of things, from Audacity to how SoundCloud functions as a development team. I’ve been to quite a few corporate campuses and a few startup offices, but SoundCloud’s place really felt different. Located in the middle of the design-savvy Mitte district, with a couple floors of really fashionable-looking workspaces and employees, I was kind of surprised to find myself a little self-conscious about my clothes at a tech company. SoundCloud’s public image is also of interest to me because a few people have told me SoundCloud is ‘open-source’, and while that claim doesn’t hold or really even make sense, I understand the association one might make with freemium services as inviting as SoundCloud.

As it was, there was no work I could do with SoundCloud at that time. However, I’ve been integrating SoundCloud into an iOS project lately, following the instructions from one of their many github pages (CocoaSoundCloudAPI). There are a few other projects, like the CocoaAPIWrapper, which also seems to be a wrapper, but I chose not to use it because their SDK page points to the other one.

The directions in the readme on their github are pretty good, but they didn’t get me up and running in 15 minutes, as would have been possible if I’d had the following extra notes for getting around some gotchas in the CocoaSoundCloudAPI readme. I’m sharing them in case they save other people time:

The “In XCode” section, Step 1 says:

1. Create a Workspace containing all those submodules added above.

This is glossing over a lot of stuff. First, there are many ways to create an xcode workspace. Since they used the word ‘create’, I wrongly went to File->New->Workspace….
The next nit is that they should specify to add just the project files instead of the subdirectories, which is what the wording sort of implies.
So the correct thing to do here is actually to drag the 5 .xcodeproj files into your existing XCode project (the one you want to add SoundCloud integration to). The first one you add will ask you if you want to convert to a workspace, which is kind of like a Microsoft Visual Studio solution file for managing multiple projects. I just named my new workspace with the same file prefix as my xcode project, saved it in the same place the xcodeproj file was, and from now on I open the workspace instead of the project file.

To be honest I had no idea what an xcode workspace was, or how it interacts with project files in XCode. I recently upgraded to XCode 4, but I don’t remember seeing workspaces in XCode 2 or 3, and I did work with some projects that used multiple projects.

2. To be able to find the Headers, you still need to add ../** (or ./** depending on your setup) to the Header Search Path of the main project.

For their step 2 I had to use ./** in User Search Paths (not Header Search Paths) in the build settings for the project. If you are adding the submodules inside of the top level folder of the project, I don’t see how you would need ../**, since that goes outside all of your projects. But maybe new xcode projects do something weird like stick the project file two levels down, in which case you may need the ../**.

That one only held me up for a little while, but the next item in the “Usage” section tripped me up for much longer:

+ (void)initialize;
{
    [SCSoundCloud  setClientID:@""
                        secret:@""
                   redirectURL:[NSURL URLWithString:@""]];
}

The client ID and secret are provided to you when you register your app on SoundCloud. The redirect URL is something you have to come up with yourself.
At first I thought it was just some support webpage URL, and that it was not crucial, so I just entered my webpage.
After finishing all their steps, the SoundCloud login view was appearing, but my sounds would not upload, with the log messages:

[NXOAuth2PostBodyStream open] Stream has been reopened after close
Cancelled

each time I would try to upload a wav track. Nothing would show up in the SoundCloud account I was uploading to. So it seems that although I could enter my password, the transaction was being cancelled midway.

I spent a long time looking into the first message, but it turns out that was completely unrelated. My app’s SoundCloud upload now works, but that “Stream has been reopened after close” message is still there.
Eventually I looked at the redirect URL and realized it is a callback the SoundCloud API uses to hand control back to the iOS app, via some kind of URL handler mechanism. I just changed the URL from http://company.com to myapp://soundcloud and then everything worked. I did not implement any kind of URL handler for this. This one’s still kind of a mystery to me; presumably it’s the standard OAuth2 redirect URI and the SDK’s login view intercepts it itself, which would explain why I never needed my own handler, but I don’t deal with web services all that often.
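
For reference, here’s roughly what my initialize ended up looking like once the redirect URL was a custom scheme. The client ID/secret strings are placeholders for whatever SoundCloud gives you, and myapp://soundcloud is just an example scheme, not anything special:

+ (void)initialize;
{
    [SCSoundCloud  setClientID:@"YOUR_CLIENT_ID"
                        secret:@"YOUR_CLIENT_SECRET"
                   redirectURL:[NSURL URLWithString:@"myapp://soundcloud"]];
}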

One other item of note that will save newer git users who are already managing their projects with git a few minutes of googling on how to integrate these submodules: since the instructions tell you to run the ‘git submodule’ commands within your project folder, this creates something like a weak reference, so when you commit and push, and then pull from another computer, you will have empty folders where these submodules should be. You will need to run ‘git submodule init; git submodule update’ to fill them out on the other computer.

So, that’s it. All in all I’d say it took 4 hours for a simple integration of a SoundCloud upload without knowing these details. Now that I know them, getting the same basic functionality should be pretty quick next time. <plug>So if you’re looking for someone to do contract work integrating SoundCloud into your project, someone you don’t have to pay to learn the SoundCloud API first, consider hiring me through Roughsoft.</plug>