How to create a Stop Motion animation camera using AVFoundation in iOS

Building a camera is one of the most common tasks when developing a social media iOS app. In this article, we will build an app with a camera that creates a stop-motion animation effect. The stop-motion effect is achieved by playing a series of images in fast sequence to create the illusion of movement.

Stop motion created using AVFoundation. By the end of the tutorial you should have an app doing this :)
First of all, camera functionality in an iOS app can be built in two different ways: either with a UIImagePickerController or through the AVFoundation framework.

What is AVFoundation?

AVFoundation is an iOS framework used for building powerful audiovisual functionality into your app. Using this framework, you can control the capturing, processing, editing, importing and exporting of audio and video assets.

UIImagePickerController vs AVFoundation:

With UIImagePickerController, we can perform all the basic tasks such as capturing photos, toggling the flash, switching cameras, and adjusting focus and exposure. It also lets you access the photo library to store and share images, and it is easier to implement than AVFoundation.

However, AVFoundation provides full control over camera settings and media manipulation. It lets you do everything a UIImagePickerController can, and in addition you can process raw capture data, handle playback, create thumbnails and still images from video, edit media, and change settings such as focus and exposure for both still and video capture.
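
For comparison, here is a minimal sketch of the UIImagePickerController approach. It is only a sketch; the class name and the comments are illustrative, while the picker API and delegate method shown are standard UIKit:

import UIKit

class QuickCameraViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    // Present the system camera UI. No capture session setup is needed.
    func showCamera() {
        guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return }
        let picker = UIImagePickerController()
        picker.sourceType = .camera
        picker.delegate = self
        present(picker, animated: true)
    }

    // Called by the picker when the user takes a photo.
    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
        let image = info[.originalImage] as? UIImage
        picker.dismiss(animated: true)
        // use `image` here
    }
}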

How to use AVFoundation?

AVFoundation has 5 major classes:

  • AVCaptureDevice: represents a physical capture device such as the camera or microphone. Every iPhone has a microphone (audio input) and front and back cameras (visual input). You use this class to configure an input device's settings before passing it to the capture session.
  • AVCaptureDeviceInput: provides the media from a capture device to a capture session.
  • AVCaptureOutput: represents the output from a capture session. The output can be one of the following: AVCaptureMovieFileOutput, AVCaptureVideoDataOutput, AVCaptureAudioDataOutput or AVCapturePhotoOutput. AVCapturePhotoOutput represents a still image output; for it we can configure output settings such as the preset, data format and raw data representation.
  • AVCaptureSession: acts as a coordinator between the AV inputs and outputs. You initialize a capture session and add the capture input and output to it. Then you call startRunning() on the session to create the flow of data between the input and output (see the wiring sketch after this list).
  • AVCaptureVideoPreviewLayer: used to give the user a real-time preview of what is being captured or recorded.
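
Before diving into the app, here is a rough, minimal sketch of how these five pieces fit together. Device discovery and error handling are simplified here; the rest of the article builds the full version step by step:

import AVFoundation
import UIKit

// Rough wiring of the five classes described above.
func makeCaptureSession(previewOn view: UIView) throws -> AVCaptureSession {
    let session = AVCaptureSession()

    // 1. Device -> 2. Input
    guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) else {
        return session
    }
    let input = try AVCaptureDeviceInput(device: camera)
    if session.canAddInput(input) { session.addInput(input) }

    // 3. Output
    let output = AVCapturePhotoOutput()
    if session.canAddOutput(output) { session.addOutput(output) }

    // 4. Preview layer driven by the session
    let preview = AVCaptureVideoPreviewLayer(session: session)
    preview.frame = view.bounds
    view.layer.insertSublayer(preview, at: 0)

    // 5. Start the flow of data from input to output
    session.startRunning()
    return session
}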

Now, with these basics covered, let's start building the app.

In order to achieve the stop-motion effect, we will let the user take a series of photos and create a GIF file using the AVFoundation framework. The app will require two UIViewController classes and a class that abstracts all the camera functionality we will use:

  • ViewController: the default initial view controller generated when you create a new single view app. We will use it to show the live camera feed, with buttons to change camera settings, take a photo and create the stop-motion media.
  • PreviewViewController: shows a preview of the stop motion, with an option to save it to the photo library.
  • CameraSetup: an abstraction over all the camera functionality we use. This is the class where we will use the AVFoundation framework. The ViewController class will contain an instance of CameraSetup.

TL;DR: you can find the working code in my GitHub repository.

1. Let's start by setting up the storyboard. Create a project for a single view app. In the initial ViewController, add another view and pin its edges to the edges of the superview. Add 4 buttons in total, as shown below, for switching the camera, toggling the flash, capturing a photo, and a Done button for creating the stop-motion GIF from the set of captured images.

2. Create a new Cocoa Touch class file that is a subclass of UIViewController. Name it "PreviewViewController", as this is going to show the final preview of the created stop-motion GIF.

3. Add another view controller to the storyboard. Include an image view and pin its edges to those of the superview. Add two buttons here: one for saving the GIF to the photo library and one for dismissing without saving. In its identity inspector, set the class to PreviewViewController.

4. Now create a segue from the ViewController to the PreviewViewController and, in its attributes inspector, set the segue identifier to "showPreview". We will use this identifier in code to perform the transition from the first to the second view controller.

5. Create a new Swift class called "CameraSetup" in which we will implement the AVFoundation code. We will initialize an object of this class in the ViewController to perform camera actions.

Let’s focus on the CameraSetup class where we will be using AVFoundation.

import AVFoundation

class CameraSetup {
    ....
}

In the class file, let’s first declare the list of variables we would require.

// the camera capture session
var captureSession = AVCaptureSession()

// capture devices needed to take a still image
var frontCam: AVCaptureDevice?
var rearCam: AVCaptureDevice?
var currentCam: AVCaptureDevice?

// input and output for the capture
var captureInput: AVCaptureDeviceInput?
var captureOutput: AVCapturePhotoOutput?

// flash mode used when capturing (toggled later from the view controller)
var flashMode = AVCaptureDevice.FlashMode.off

// camera preview layer
var previewLayer: AVCaptureVideoPreviewLayer?

Now let's create some functions to configure and run the capture session, starting with captureDevice().

func captureDevice() {
    let discoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera],
                                                             mediaType: AVMediaType.video,
                                                             position: .unspecified)
    for d in discoverySession.devices {
        if d.position == .front {
            frontCam = d
        }
        else if d.position == .back {
            rearCam = d
            do {
                try rearCam?.lockForConfiguration()
                rearCam?.focusMode = .autoFocus
                rearCam?.exposureMode = .autoExpose
                rearCam?.unlockForConfiguration()
            }
            catch let error {
                print(error)
            }
        }
    }
}

A DiscoverySession query provides us with the list of physical input devices that match the specified criteria. Here we ask for devices with a built-in wide-angle camera that can capture video. We then assign the device to frontCam if its position is front, and to rearCam if its position is back.

With rearCam, I have also configured its focus and exposure modes. These are some of the features that can be configured through AVFoundation while setting up a camera for capture.
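
Note that a device raises an exception if you assign a focus or exposure mode it does not support, so a slightly more defensive version of that configuration block (a sketch, using the same rearCam property) would check the capabilities first:

if let cam = rearCam {
    do {
        try cam.lockForConfiguration()
        // only set modes the hardware actually supports
        if cam.isFocusModeSupported(.autoFocus) {
            cam.focusMode = .autoFocus
        }
        if cam.isExposureModeSupported(.autoExpose) {
            cam.exposureMode = .autoExpose
        }
        cam.unlockForConfiguration()
    }
    catch {
        print(error)
    }
}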

We now have frontCam and rearCam, which we can add as capture inputs. Let's create a function to configure captureInput and add the input to the session.

func configureCaptureInput() {
    currentCam = rearCam!
    do {
        captureInput = try AVCaptureDeviceInput(device: currentCam!)
        if captureSession.canAddInput(captureInput!) {
            captureSession.addInput(captureInput!)
        }
    }
    catch let error {
        print(error)
    }
}

Here we check whether the input can be added to the session and, if so, we add it. Similarly, we need to configure the capture output and add it to the session, so let's add the following function to the class.

func configureCaptureOutput() {
    captureOutput = AVCapturePhotoOutput()
    captureOutput!.setPreparedPhotoSettingsArray([AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])],
                                                 completionHandler: nil)
    if captureSession.canAddOutput(captureOutput!) {
        captureSession.addOutput(captureOutput!)
    }
    captureSession.startRunning()
}

setPreparedPhotoSettingsArray tells the photo output which capture settings to prepare resources for (here, JPEG output). Once the captureOutput has been added to the session, it is time to start the session, which we do by calling startRunning().

Finally, what is left of the five classes is AVCaptureVideoPreviewLayer. Let's go ahead and add this last part.

func configurePreviewLayer(view: UIView) {
    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)

    previewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
    previewLayer?.connection?.videoOrientation = .portrait

    view.layer.insertSublayer(previewLayer!, at: 0)
    previewLayer?.frame = view.frame
}

We pass this function the UIView we created in the first ViewController. AVCaptureVideoPreviewLayer is a subclass of CALayer (Core Animation) that displays the video from the given captureSession as it is being captured.
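
Since the layer's frame is set only once here, it can drift out of sync if the view is resized (for example on rotation). Once the view controller is wired up in the next step, one way to keep it in sync is a small override like this (a sketch, assuming the container view outlet is named camView as in the rest of the tutorial):

override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    // keep the preview layer sized to the container view
    cameraSetup?.previewLayer?.frame = camView.bounds
}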

6. Go to the ViewController and connect the view, the buttons and their actions to the view controller from the storyboard.

Add a function called initialize() to call all the configuration functions from CameraSetup.

var cameraSetup: CameraSetup!
...
func initialize() {
    cameraSetup = CameraSetup()
    cameraSetup.captureDevice()
    cameraSetup.configureCaptureInput()
    cameraSetup.configureCaptureOutput()
    cameraSetup.configurePreviewLayer(view: camView)
}

Note: you cannot run this in the simulator; run the app on your iPhone. You should then be able to see the camera preview.
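
The article does not show where initialize() is called, so here is a minimal sketch of the ViewController at this point; the outlet names camView and flashButton match the ones used elsewhere in this tutorial. Also remember to add the NSCameraUsageDescription key to your Info.plist, otherwise iOS will terminate the app when it tries to access the camera.

import UIKit

class ViewController: UIViewController {

    // connected in the storyboard
    @IBOutlet weak var camView: UIView!
    @IBOutlet weak var flashButton: UIButton!

    // cameraSetup and initialize() as defined above
    var cameraSetup: CameraSetup!

    override func viewDidLoad() {
        super.viewDidLoad()
        // configure the capture session and live preview as soon as the view loads
        initialize()
    }
}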

Notice that when we tap the capture button, nothing happens. That is because we have not yet set up that functionality, so let's add it now in the view controller.

Capturing an image is an asynchronous action. This means that you have to pass a callback function to AVFoundation that will be called once the image has been captured.

var previewImage = [UIImage]()
...
@IBAction func captureAction(_ sender: Any) {
    cameraSetup.captureImage { (image, error) in
        guard let image = image else {
            print(error ?? "Image capture error")
            return
        }
        self.previewImage.append(image)
    }
}

We create a UIImage array to store all the captured images that will make up the stop motion. In captureAction we call the captureImage function of CameraSetup with a closure as a parameter; the image returned in this closure is appended to the array.

In the CameraSetup class, add the following function

var photoCaptureCompletionBlock: ((UIImage?, Error?) -> Void)?
...
func captureImage(completion: @escaping (UIImage?, Error?) -> Void) {
    let settings = AVCapturePhotoSettings()
    settings.flashMode = self.flashMode

    // store the completion handler before triggering the capture
    self.photoCaptureCompletionBlock = completion
    self.captureOutput?.capturePhoto(with: settings, delegate: self as AVCapturePhotoCaptureDelegate)
}

photoCaptureCompletionBlock stores the closure passed in from captureAction, so that the delegate callback can invoke it once the photo has been processed.

capturePhoto captures a still photograph with the specified settings, such as the flash option and data format. This call requires an AVCapturePhotoCaptureDelegate to be implemented. Notice that we pass self as the delegate, which means CameraSetup must be declared as a subclass of NSObject that conforms to the protocol (class CameraSetup: NSObject, AVCapturePhotoCaptureDelegate). Let's add a function to satisfy the delegate protocol.

public func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    if let error = error {
        self.photoCaptureCompletionBlock?(nil, error)
    }
    else if let data = photo.fileDataRepresentation(), let image = UIImage(data: data) {
        self.photoCaptureCompletionBlock?(image, nil)
    }
}

didFinishProcessingPhoto provides us with the processed final image, which we pass back through the completion block and store in our array.

7. The Done button of the ViewController prepares and performs the segue. Remember the "showPreview" segue we created; it is triggered here when the Done button is tapped. The definition for the Done button:

@IBAction func doneAction(_ sender: Any) {
    self.performSegue(withIdentifier: "showPreview", sender: self)
}

override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    if let destination = segue.destination as? PreviewViewController {
        destination.pImg = self.previewImage
    }
}

8. Open your PreviewViewController and connect the UIImageView to the view controller. Create a UIImage array called "pImg", along with a timer and a counter, as sketched below.
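
A minimal sketch of these declarations, matching the names used in the code that follows (the outlet name preview is taken from how it is used below):

import UIKit

class PreviewViewController: UIViewController {

    // image view connected in the storyboard
    @IBOutlet weak var preview: UIImageView!

    // images handed over from ViewController via the segue
    var pImg = [UIImage]()

    var timer = Timer()
    var counter = 0
}

Then, in viewDidLoad(), add the following.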

override func viewDidLoad() {
    super.viewDidLoad()
    preview.image = pImg[0]
    timer = Timer.scheduledTimer(timeInterval: 0.5, target: self, selector: #selector(self.display), userInfo: nil, repeats: true)
}

@objc func display() {
    if !(pImg.count == counter) {
        preview.image = pImg[counter]
        counter += 1
    }
    else {
        counter = 0
    }
}

A simple timer runs the preview of the captured images in a loop by calling display() every half second.

We are almost done. Let's add the action blocks for the save and close buttons. Add closeAction(), where we invalidate the timer and dismiss the PreviewViewController.

@IBAction func closeAction(_ sender: Any) {
    timer.invalidate()
    dismiss(animated: true, completion: nil)
}

Now, to save the set of images as a GIF in the photo library, do the following.

@IBAction func saveAction(_ sender: Any) {
    createGIF(images: pImg)
    dismiss(animated: true, completion: nil)
}

The createGIF function builds the GIF using kCGImagePropertyGIFDictionary, writes the image data to a file URL, and then uses PHPhotoLibrary to create an asset from that URL and add it to the photo library.

import Photos
import MobileCoreServices
import ImageIO
...
func createGIF(images: [UIImage]) {
    // GIF file and per-frame properties: loop forever, 0.5 s per frame
    let fileProperties: CFDictionary = [kCGImagePropertyGIFDictionary as String: [kCGImagePropertyGIFLoopCount as String: 0]] as CFDictionary
    let frameProperties: CFDictionary = [kCGImagePropertyGIFDictionary as String: [kCGImagePropertyGIFUnclampedDelayTime as String: 0.5]] as CFDictionary

    let documentsDirectoryURL: URL? = try? FileManager.default.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: true)
    let fileURL: URL? = documentsDirectoryURL?.appendingPathComponent("animated.gif")

    if let url = fileURL as CFURL? {
        if let destination = CGImageDestinationCreateWithURL(url, kUTTypeGIF, images.count, nil) {
            CGImageDestinationSetProperties(destination, fileProperties)
            for image in images {
                if let cgImage = image.cgImage {
                    CGImageDestinationAddImage(destination, cgImage, frameProperties)
                }
            }
            if !CGImageDestinationFinalize(destination) {
                print("Failed to finalize the image destination")
            }
            print("Url = \(fileURL!)")
        }
    }

    // Request creating an asset from the GIF file and saving it to the photo library.
    PHPhotoLibrary.shared().performChanges({
        PHAssetChangeRequest.creationRequestForAssetFromImage(atFileURL: fileURL!)
    }, completionHandler: nil)
}
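
Saving to the photo library also needs permission: add the NSPhotoLibraryAddUsageDescription key (or NSPhotoLibraryUsageDescription on older iOS versions) to your Info.plist. You can also request authorization explicitly before performing the change; a small sketch that would replace the performChanges call at the end of createGIF:

// ask for photo library access before saving the GIF
PHPhotoLibrary.requestAuthorization { status in
    guard status == .authorized else {
        print("Photo library access was not granted")
        return
    }
    PHPhotoLibrary.shared().performChanges({
        PHAssetChangeRequest.creationRequestForAssetFromImage(atFileURL: fileURL!)
    }, completionHandler: nil)
}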

And we are done.

All that is left now are the camera-toggle and flash functionalities, which are optional. To toggle the camera: if the session currently uses rearCam, we remove all inputs from the session, add frontCam as the new captureInput and make frontCam the currentCam, and vice versa.

func toggleCam() {
    captureSession.beginConfiguration()
    let newCam = (currentCam?.position == .front) ? rearCam : frontCam

    for input in captureSession.inputs {
        captureSession.removeInput(input as! AVCaptureDeviceInput)
    }

    currentCam = newCam
    do {
        captureInput = try AVCaptureDeviceInput(device: currentCam!)
        if captureSession.canAddInput(captureInput!) {
            captureSession.addInput(captureInput!)
        }
    }
    catch let error {
        print(error)
    }

    captureSession.commitConfiguration()
}

In the view controller, call this function from CameraSetup in the cameraToggle action.

@IBAction func cameraToggle(_ sender: Any) {
    cameraSetup.toggleCam()
}

Similarly, for the flash toggle, just change the value of the flashMode variable we declared in CameraSetup.

@IBAction func flashToggle(_ sender: Any) {
    if cameraSetup.flashMode == .off {
        flashButton.setImage(UIImage(named: "flash_on"), for: .normal)
        cameraSetup.flashMode = .on
    }
    else {
        flashButton.setImage(UIImage(named: "flash_off"), for: .normal)
        cameraSetup.flashMode = .off
    }
}

That's it, we now have a fully functional app. Run it, and the output should look like the stop motion shown at the beginning of this article.

Images for the demo taken from: https://vignette.wikia.nocookie.net/moana/images/d/df/Moana-ca-hair-sketches.jpg/revision/latest?cb=20170218194719

The full project is available via the GitHub link mentioned below. I welcome all feedback and questions. Go create your own custom camera with AVFoundation. Happy coding :)