How to access the Back and Front camera in Swift

Hi there, this is one of those tutorials where, at the end, you say ‘Thank god I found this’. I’m a noob in iOS development, so when I was trying to learn how to manage the camera in Swift I realized it’s hard to find something good without dying in the attempt.

So, as a superhero of the noobs, I will try to explain this in an easy way. Let’s get started.

Things you will need to understand before you touch the keyboard and start coding like crazy:

  • device: Every iPhone or iPad exposes three capture devices: the back camera, the front camera, and the mic. In this tutorial we only care about the two cameras.
  • session: It’s the object in charge of coordinating everything between the device and the preview layer. We have to provide two things to this session: an input and an output.
  • input: It’s a wrapper around the device. I mean, we have to insert the device into the input; hopefully with the code you can catch what I’m trying to say.
  • output: The output is the object in charge of configuring some things like which codec we are using, the quality of the video we want, and other stuff.
  • preview layer: Basically, this is the layer where we are going to preview all the frames that the device (back or front camera) captures.

What I’m trying to say is this (just a rough sketch of how the pieces connect; every piece gets real code below):
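device (back or front camera)
   → input (wraps the device)
      → session (coordinates everything)
         → output (codec, quality)
         → previewLayer (draws the frames on our UIView)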

Well, if you didn’t understand my weird explanation, let’s start with the fun and understandable part: code, code, code!

Create a new project.

At this point I guess you should know how to set up the view and things like that, but if you are a poor soul who has lived in a cave all your life, then I will explain.

Select Single View Application.
Name your app.
Search for a View in the Object Library and drag it onto the default View Controller you already have.

Give some constraints to the UIView

Depending on how you would like to show the camera, you will need to give a size and position to this new UIView. In my case I will make the UIView fullscreen, so I pinned its edges to the superview; here is how I did it.

Add more constraints

Finally, update the frames.

We are done with our basic interface.
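By the way, if you prefer doing layout in code instead of Interface Builder, a fullscreen view can be pinned with layout anchors. This is just a sketch (it assumes iOS 9+ for the anchor API) and is not required if you use the outlet approach below:

let cameraView = UIView()
cameraView.translatesAutoresizingMaskIntoConstraints = false
view.addSubview(cameraView)
//Pin all four edges so the view fills the screen
NSLayoutConstraint.activateConstraints([
   cameraView.topAnchor.constraintEqualToAnchor(view.topAnchor),
   cameraView.bottomAnchor.constraintEqualToAnchor(view.bottomAnchor),
   cameraView.leadingAnchor.constraintEqualToAnchor(view.leadingAnchor),
   cameraView.trailingAnchor.constraintEqualToAnchor(view.trailingAnchor)
])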

Code code code code!

Create an outlet in ViewController.swift for the UIView we just added:

@IBOutlet weak var cameraView: UIView!

In order to use all the types and methods we need, we will need a system framework called AVFoundation, so import it.

import AVFoundation

Then we have to declare the variables we will use (we will actually initialize session and output later, in viewDidLoad):

var session: AVCaptureSession?
var input: AVCaptureDeviceInput?
var output: AVCaptureStillImageOutput?
var previewLayer: AVCaptureVideoPreviewLayer?

Remember that we need a device? Let’s build a function that returns the device we want.

//Get the device (Front or Back)
func getDevice(position: AVCaptureDevicePosition) -> AVCaptureDevice? {
    //devices() returns every capture device: cameras and mic
    for item in AVCaptureDevice.devices() {
        let device = item as! AVCaptureDevice
        if device.position == position {
            return device
        }
    }
    return nil
}

As you can see, this function accepts an argument of type AVCaptureDevicePosition and returns an optional AVCaptureDevice. There are just two positions we are going to use in this tutorial:

  • AVCaptureDevicePosition.Front
  • AVCaptureDevicePosition.Back

So if we want to use this function, we just do it like this:

let camera = getDevice(.Back)
//or
let camera = getDevice(.Front)

Now we need to insert this device into the input:

do {
   input = try AVCaptureDeviceInput(device: camera)
} catch let error as NSError {
   print(error)
   input = nil
}

Before we add the input to the session, we can use the canAddInput() method to verify whether this input can be added to the session.

if session?.canAddInput(input) == true {
    //Add the input to the session
    session?.addInput(input)
}

Before we add the output to the session, we can configure some settings like the codec we want to use or the quality. I’m just going to set a codec for testing purposes, but this is not necessary.

output?.outputSettings = [AVVideoCodecKey : AVVideoCodecJPEG]

Then add the output to the session in the same way we added the input:

if session?.canAddOutput(output) == true {
    session?.addOutput(output)
}

We are done with the input and output, now we have to take care of the previewLayer.

First we need to pass the session to this previewLayer:

previewLayer = AVCaptureVideoPreviewLayer(session: session)

Optional

We can configure things in the previewLayer like the aspect fill, the video orientation, etc.

previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
previewLayer?.connection.videoOrientation = AVCaptureVideoOrientation.Portrait

Now we have to give the previewLayer a frame and add the layer to the UIView we created at the beginning (cameraView).

//previewLayer will have the same size and position of the cameraView
previewLayer?.frame = cameraView.bounds
cameraView.layer.addSublayer(previewLayer!)
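
One caveat: in viewDidLoad the cameraView may not have its final size yet, so the layer can end up with a stale frame after rotation or other layout changes. A small optional safeguard is to update the frame every time layout happens:

override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    //Keep the preview layer glued to the view's current size
    previewLayer?.frame = cameraView.bounds
}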

Finally, we just have to start running the session:

session?.startRunning()
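
Heads up: startRunning() is a blocking call and can take a moment, so Apple recommends not calling it on the main thread. A quick sketch of how you could push it to a background queue (old-style GCD syntax, matching the Swift version used here):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
    self.session?.startRunning()
}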

Done

Note: you will only be able to see the result if you run the app on an actual device; the iOS Simulator has no camera.
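
One more thing that can bite you: the first time your app touches the camera, iOS asks the user for permission. If you want to control when that prompt appears (or react to a denial), AVFoundation has an authorization API. A minimal sketch:

AVCaptureDevice.requestAccessForMediaType(AVMediaTypeVideo) { granted in
    //granted is true if the user allowed camera access
    if !granted {
        print("Camera access denied")
    }
}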

Final code

This is the final code

import UIKit
import AVFoundation

class ViewController: UIViewController {

   @IBOutlet weak var cameraView: UIView!

   var session: AVCaptureSession?
   var input: AVCaptureDeviceInput?
   var output: AVCaptureStillImageOutput?
   var previewLayer: AVCaptureVideoPreviewLayer?

   override func viewDidLoad() {
      super.viewDidLoad()
      //Initialize the session and output variables; this is necessary
      session = AVCaptureSession()
      output = AVCaptureStillImageOutput()
      //Grab the back camera and wrap it in an input
      let camera = getDevice(.Back)
      do {
         input = try AVCaptureDeviceInput(device: camera)
      } catch let error as NSError {
         print(error)
         input = nil
      }
      if session?.canAddInput(input) == true {
         session?.addInput(input)
         output?.outputSettings = [AVVideoCodecKey : AVVideoCodecJPEG]
         if session?.canAddOutput(output) == true {
            session?.addOutput(output)
            previewLayer = AVCaptureVideoPreviewLayer(session: session)
            previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
            previewLayer?.connection.videoOrientation = AVCaptureVideoOrientation.Portrait
            previewLayer?.frame = cameraView.bounds
            cameraView.layer.addSublayer(previewLayer!)
            session?.startRunning()
         }
      }
   }

   //Get the device (Front or Back)
   func getDevice(position: AVCaptureDevicePosition) -> AVCaptureDevice? {
      for item in AVCaptureDevice.devices() {
         let device = item as! AVCaptureDevice
         if device.position == position {
            return device
         }
      }
      return nil
   }
}

Build and run the app.
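
Bonus: the title promises both cameras, so here is one possible way to flip between them at runtime. This is my own sketch, not part of the original steps; it reuses the getDevice function and the variables from above:

func switchCamera() {
   guard let session = session else { return }
   guard let currentInput = input else { return }
   session.beginConfiguration()
   //Remove the camera we are currently using...
   session.removeInput(currentInput)
   //...and look for the one on the opposite side
   let newPosition: AVCaptureDevicePosition = currentInput.device.position == .Back ? .Front : .Back
   if let newDevice = getDevice(newPosition) {
      do {
         let newInput = try AVCaptureDeviceInput(device: newDevice)
         if session.canAddInput(newInput) {
            session.addInput(newInput)
            input = newInput
         }
      } catch let error as NSError {
         print(error)
      }
   }
   session.commitConfiguration()
}

Hook it up to a button and the preview will jump between the back and front camera.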