A tale of two cameras

Julian James

I’m going to share some of the thinking and code behind the Picle iPhone app, starting with the camera functionality. This will involve showing some code snippets and describing classes and frameworks found in the Apple iOS SDK.

At the heart of the Picle app is the ability to use the iPhone camera and microphone in quick succession. The user experience of this functionality is critical to the feel of the app, so plenty of work has been done on it both before and after the initial release. Indeed, the next release will have it completely reworked.

In iOS there are two distinct ways to handle the camera. One with the UIImagePickerController (iOS 2.0 onwards) and the other using the newer AV Foundation framework (iOS 4.0 onwards). The initial release of Picle uses the older UIImagePickerController whilst the next release will use the AV Foundation framework.

UIImagePickerController

Those familiar with the iPhone camera will have seen that when the camera is opened or a picture is taken, an iris shutter animation is played. As we use the UIImagePickerController in the current version of Picle, users will be familiar with the view below.

We lay our own graphics over the top to give us our own controls for the camera. The picker initialisation is shown below. We do the usual Objective-C allocation and initialisation. We set the source to the camera as opposed to the camera roll or library and explicitly turn off the standard camera controls.

 _imagePicker = [[UIImagePickerController alloc] init];
 _imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;
 _imagePicker.showsCameraControls = NO;

We can, however, still control the flash through the cameraFlashMode property, which can be set to auto, on or off. We can also choose which camera is used through the cameraDevice property: front or rear. We add our own buttons for selecting these options, though they don’t apply to the 3GS, which has neither a flash nor a front-facing camera. These controls sit in a custom view on top of the picker view (a sketch of the property assignments follows the presentation calls below). Displaying the picker is simply a case of calling

[self presentModalViewController:_imagePicker animated:NO];

The picker is dismissed with

[self dismissModalViewControllerAnimated:NO];
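The flash and camera selection mentioned above are just property assignments on the picker. Here is a minimal sketch, assuming we first check that the hardware actually supports each option (the availability checks are standard UIImagePickerController class methods):

 // Only offer the flash and front-camera buttons when the hardware supports them
 // (on a 3GS, for example, neither is available).
 if ([UIImagePickerController isFlashAvailableForCameraDevice:UIImagePickerControllerCameraDeviceRear]) {
     _imagePicker.cameraFlashMode = UIImagePickerControllerCameraFlashModeAuto;
 }
 if ([UIImagePickerController isCameraDeviceAvailable:UIImagePickerControllerCameraDeviceFront]) {
     // Switch to the front camera, e.g. when the user taps our camera toggle button.
     _imagePicker.cameraDevice = UIImagePickerControllerCameraDeviceFront;
 }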

Using the UIImagePickerController is pretty simple and doesn’t require much code. However, the problem with disabling the standard camera controls is that we lose functionality such as tap to focus and tap to set exposure. We also don’t really want the iris animation, so together these became the motivation for rewriting the camera functionality using the AV Foundation framework.

AV Foundation

Using the AV Foundation framework gives us considerably more control over how the camera is configured. We wanted to eliminate the iris animation, have a faster start and faster capture. The AV Foundation framework enables you to get input streams from multiple devices and to manipulate video during real-time capture. There is, however, more code required, and one downside is that when taking a still image the video camera is still active. We take a frame grab of the video to place in a view above the video camera while we process the still image; we cannot stop the video until the still image has been captured.

Here is a snippet of the code that initialises the capture session.


//  Init the capture session
AVCaptureSession *newCaptureSession = [[AVCaptureSession alloc] init];
newCaptureSession.sessionPreset = AVCaptureSessionPresetPhoto;
 
// Init the device inputs

AVCaptureDeviceInput *newVideoInput =
[[AVCaptureDeviceInput alloc] initWithDevice:[self backFacingCamera] error:nil];

// Setup the still image output
AVCaptureStillImageOutput *newStillImageOutput = [[AVCaptureStillImageOutput alloc] init];
[newStillImageOutput setOutputSettings:[[NSDictionary alloc]
    initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil]];

 // Setup the video grab output
 AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];

 // Specify the pixel format
 [captureOutput setVideoSettings:[NSDictionary dictionaryWithObject:
                                [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] 
                                forKey:(id)kCVPixelBufferPixelFormatTypeKey]];

    
 [captureOutput setAlwaysDiscardsLateVideoFrames:YES]; 

 // Create a serial GCD queue on which the video frame callbacks will be delivered.
 dispatch_queue_t queue;
 queue = dispatch_queue_create("cameraQueue", NULL);
 [captureOutput setSampleBufferDelegate:self queue:queue];
 dispatch_release(queue);

 // Add inputs and output to the capture session
 if ([newCaptureSession canAddInput:newVideoInput]) {
     [newCaptureSession addInput:newVideoInput];
 }

 if ([newCaptureSession canAddOutput:captureOutput]) {
     [newCaptureSession addOutput:captureOutput];
 }

 if ([newCaptureSession canAddOutput:newStillImageOutput]) {
     [newCaptureSession addOutput:newStillImageOutput];
 }

In essence, the code above creates a single capture session made up of three parts:

  • A video input - that’s what the user sees in the view and can interact with.
  • A video data output - that’s what we grab frames from.
  • A still image output - that’s for when the user takes a picture (see the sketch after this list).
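
Taking a picture then becomes a matter of asking the still image output for a frame asynchronously. The sketch below is illustrative rather than our exact code; it assumes the newStillImageOutput created earlier has been kept in an ivar named _stillImageOutput:

 // Find the video connection that feeds the still image output.
 AVCaptureConnection *videoConnection = nil;
 for (AVCaptureConnection *connection in _stillImageOutput.connections) {
     for (AVCaptureInputPort *port in connection.inputPorts) {
         if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
             videoConnection = connection;
             break;
         }
     }
     if (videoConnection) break;
 }

 // Ask for a still; the JPEG data arrives in the completion handler.
 [_stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
     completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
         if (imageDataSampleBuffer != NULL) {
             NSData *jpegData =
                 [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
             UIImage *stillImage = [UIImage imageWithData:jpegData];
             // Hand stillImage on to the rest of the app here.
         }
     }];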

We have to add the video preview layer to the layer of the view we wish to run the video in and then call [newCaptureSession startRunning] to start the video.
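
The preview layer itself is an AVCaptureVideoPreviewLayer. Here is a minimal sketch, assuming previewView is the UIView we want the live video to appear in:

 // Create a preview layer tied to the session and size it to fill the view.
 AVCaptureVideoPreviewLayer *previewLayer =
     [AVCaptureVideoPreviewLayer layerWithSession:newCaptureSession];
 previewLayer.frame = previewView.bounds;
 previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
 [previewView.layer addSublayer:previewLayer];

 // Start the flow of data from the inputs to the outputs (and to the preview layer).
 [newCaptureSession startRunning];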

We make the class containing this capture session a delegate of the capture and still outputs so methods get called when each frame is shown and when a still image is taken. A copy of the last video frame is kept so when we take a still we have an image to cover the video input. We also animate a flash effect over the whole view to visually indicate the taking of a still image.
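
For the frame grab, the class adopts the AVCaptureVideoDataOutputSampleBufferDelegate protocol and implements the callback below. This shows one common way of turning the 32BGRA sample buffer into a UIImage rather than our exact code, and the lastFrame property it stores the result in is assumed for illustration:

 - (void)captureOutput:(AVCaptureOutput *)output
 didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
        fromConnection:(AVCaptureConnection *)connection
 {
     // Convert the BGRA pixel buffer into a UIImage and keep it as the most recent frame.
     CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
     CVPixelBufferLockBaseAddress(imageBuffer, 0);

     void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
     size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
     size_t width = CVPixelBufferGetWidth(imageBuffer);
     size_t height = CVPixelBufferGetHeight(imageBuffer);

     CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
     CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
         colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
     CGImageRef cgImage = CGBitmapContextCreateImage(context);
     UIImage *frame = [UIImage imageWithCGImage:cgImage];

     CGImageRelease(cgImage);
     CGContextRelease(context);
     CGColorSpaceRelease(colorSpace);
     CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

     // Frames arrive on the camera queue, so hop to the main thread before touching UIKit state.
     dispatch_async(dispatch_get_main_queue(), ^{
         self.lastFrame = frame;
     });
 }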

Key to the new camera was also adding tap to focus. The capture device has the properties whiteBalanceMode, focusMode and exposureMode. If the device supports those properties we set all three when the user taps the camera view. We also render a custom focus box to show visual feedback.
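
A sketch of the tap handler is below. It assumes the tap location has already been converted into the capture device’s 0–1 point-of-interest coordinate space and that _device holds the back-facing camera:

 - (void)focusAtPoint:(CGPoint)pointOfInterest
 {
     NSError *error = nil;
     // The device must be locked for configuration before any of these properties are changed.
     if ([_device lockForConfiguration:&error]) {
         if ([_device isFocusPointOfInterestSupported] &&
             [_device isFocusModeSupported:AVCaptureFocusModeAutoFocus]) {
             _device.focusPointOfInterest = pointOfInterest;
             _device.focusMode = AVCaptureFocusModeAutoFocus;
         }
         if ([_device isExposurePointOfInterestSupported] &&
             [_device isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure]) {
             _device.exposurePointOfInterest = pointOfInterest;
             _device.exposureMode = AVCaptureExposureModeContinuousAutoExposure;
         }
         if ([_device isWhiteBalanceModeSupported:AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance]) {
             _device.whiteBalanceMode = AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance;
         }
         [_device unlockForConfiguration];
     }
 }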

Conclusion

The end result of using the AV Foundation framework is that the camera starts and takes still images more quickly than with the UIImagePickerController. We also get the tap-to-focus functionality we required. Clearly there is a huge amount more code involved in using the camera this way, but I hope this little taste of what we’ve done at least shows the depth development has to go to in order to deliver the desired functionality. This is also only a small part of the overall app.

References:

http://developer.apple.com/library/ios/#DOCUMENTATION/UIKit/Reference/UIImagePickerController_Class/UIImagePickerController/UIImagePickerController.html

http://developer.apple.com/library/ios/#DOCUMENTATION/AudioVideo/Conceptual/AVFoundationPG/Articles/00_Introduction.html#//apple_ref/doc/uid/TP40010188
