iOS — How to Integrate Camera APIs using SwiftUI


Introduction

Today, we’ll show you how to seamlessly connect SwiftUI with the Camera APIs, simplifying the process of creating camera apps. We’ve focused on making the implementation approachable, so that developers, whether new or experienced, can follow this guide with ease.

This blog will serve as a comprehensive guide, starting from attaching a camera preview to your UI, along with covering essential features like flash support, focus control, zoom and camera switch capabilities, and the key process of capturing and saving preview images to your device.

This guide may appear lengthy, but I’m sure you’ll find it a valuable resource for crafting camera-enabled applications with SwiftUI.


Get Started

We’re going to create a SwiftUI app to understand the basic camera APIs.

Here is the basic UI of the main camera screen that we’ll design throughout this blog.

I’ve taken the UI inspiration from this blog and tried to optimize the Camera API implementation to make it a more user-friendly and simplified experience with SwiftUI.

The source code is available on GitHub.

Display Camera Preview

First, add a new Swift file named CameraPreview.swift and add a struct for the camera preview view to it.

Here is the code for the preview UI.

import SwiftUI
import AVFoundation // To access the camera-related classes and methods

// Attaches an AVCaptureVideoPreviewLayer to a SwiftUI View
struct CameraPreview: UIViewRepresentable {

    let session: AVCaptureSession

    // Creates and configures a UIKit-based video preview view
    func makeUIView(context: Context) -> VideoPreviewView {
        let view = VideoPreviewView()
        view.backgroundColor = .black
        view.videoPreviewLayer.session = session
        view.videoPreviewLayer.videoGravity = .resizeAspect
        view.videoPreviewLayer.connection?.videoOrientation = .portrait
        return view
    }

    // Updates the video preview view
    func updateUIView(_ uiView: VideoPreviewView, context: Context) { }

    // UIKit-based view for displaying the camera preview
    class VideoPreviewView: UIView {

        // Specifies the layer class used by this view
        override class var layerClass: AnyClass {
            AVCaptureVideoPreviewLayer.self
        }

        // Retrieves the AVCaptureVideoPreviewLayer for configuration
        var videoPreviewLayer: AVCaptureVideoPreviewLayer {
            return layer as! AVCaptureVideoPreviewLayer
        }
    }
}

UIViewRepresentable acts as a bridge to UIKit’s UIView.

AVCaptureSession is an object that manages real-time capture of audio and video. It represents the session used to capture the live camera feed.

The remaining code sets up a UIKit-based view in SwiftUI, specifically a video preview view with customized settings.
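If you want to sanity-check the bridge in isolation, you can drop the preview into any SwiftUI hierarchy. The standalone session below is just a placeholder; in this app, it will come from the camera manager we build next:

import SwiftUI
import AVFoundation

// Hypothetical smoke test for CameraPreview; the bare session is a stand-in
struct PreviewSmokeTest: View {
    let session = AVCaptureSession() // placeholder; normally owned by CameraManager

    var body: some View {
        CameraPreview(session: session)
            .ignoresSafeArea()
    }
}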

Add Camera Manager

The manager will be the main component that connects to the iPhone’s camera, captures those amazing photos, and handles the many other things you have been waiting for.

I’ve tried to follow standard coding practices, using the MVVM architecture and state management.

Let’s start by creating a manager file named CameraManager.swift and replacing the default code with the following:

import AVFoundation
import Combine

// This class conforms to ObservableObject to make it easier to use with Combine
class CameraManager: ObservableObject {

    // Represents the camera's status
    enum Status {
        case configured
        case unconfigured
        case unauthorized
        case failed
    }

    // Observes changes in the camera's status
    @Published var status = Status.unconfigured

    // Publishes when an error alert should be shown, along with its details
    @Published var shouldShowAlertView = false
    var alertError: AlertError!

    // The camera position in use; referenced when setting up the video input
    var position: AVCaptureDevice.Position = .back

    // AVCaptureSession manages the camera settings and the data flow between capture inputs and outputs.
    // It can connect one or more inputs to one or more outputs
    let session = AVCaptureSession()

    // AVCapturePhotoOutput for capturing photos
    let photoOutput = AVCapturePhotoOutput()

    // AVCaptureDeviceInput for handling video input from the camera.
    // Basically provides a bridge from the device to the AVCaptureSession
    var videoDeviceInput: AVCaptureDeviceInput?

    // Serial queue to ensure thread safety when working with the camera
    private let sessionQueue = DispatchQueue(label: "com.demo.sessionQueue")

    // Method to configure the camera capture session
    func configureCaptureSession() {
        sessionQueue.async { [weak self] in
            guard let self, self.status == .unconfigured else { return }

            // Begin session configuration
            self.session.beginConfiguration()

            // Set session preset for high-quality photo capture
            self.session.sessionPreset = .photo

            // Add video input from the device's camera
            self.setupVideoInput()

            // Add the photo output configuration
            self.setupPhotoOutput()

            // Commit session configuration
            self.session.commitConfiguration()

            // Start capturing if everything is configured correctly
            self.startCapturing()
        }
    }

    // Method to set up video input from the camera
    private func setupVideoInput() {
        do {
            // Get the default wide-angle camera for video capture.
            // AVCaptureDevice is a representation of the hardware device to use
            let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: position)

            guard let camera else {
                print("CameraManager: Video device is unavailable.")
                status = .unconfigured
                session.commitConfiguration()
                return
            }

            // Create an AVCaptureDeviceInput from the camera
            let videoInput = try AVCaptureDeviceInput(device: camera)

            // Add video input to the session if possible
            if session.canAddInput(videoInput) {
                session.addInput(videoInput)
                videoDeviceInput = videoInput
                status = .configured
            } else {
                print("CameraManager: Couldn't add video device input to the session.")
                status = .unconfigured
                session.commitConfiguration()
                return
            }
        } catch {
            print("CameraManager: Couldn't create video device input: \(error)")
            status = .failed
            session.commitConfiguration()
            return
        }
    }

    // Method to configure the photo output settings
    private func setupPhotoOutput() {
        if session.canAddOutput(photoOutput) {
            // Add the photo output to the session
            session.addOutput(photoOutput)

            // Configure photo output settings
            photoOutput.isHighResolutionCaptureEnabled = true
            photoOutput.maxPhotoQualityPrioritization = .quality // works on iOS 15.6 and older versions
            // photoOutput.maxPhotoDimensions = .init(width: 4032, height: 3024) // for iOS 16.0+

            // Update the status to indicate successful configuration
            status = .configured
        } else {
            print("CameraManager: Could not add photo output to the session")
            // Set an error status and return
            status = .failed
            session.commitConfiguration()
            return
        }
    }

    // Method to start capturing
    private func startCapturing() {
        if status == .configured {
            // Start running the capture session
            self.session.startRunning()
        } else if status == .unconfigured || status == .unauthorized {
            DispatchQueue.main.async {
                // Handle errors related to unconfigured or unauthorized states
                self.alertError = AlertError(
                    title: "Camera Error",
                    message: "Camera configuration failed. Either your device camera is not available or it's missing permissions",
                    primaryButtonTitle: "OK",
                    secondaryButtonTitle: nil,
                    primaryAction: nil,
                    secondaryAction: nil
                )
                self.shouldShowAlertView = true
            }
        }
    }

    // Method to stop capturing
    func stopCapturing() {
        // Ensure thread safety using `sessionQueue`
        sessionQueue.async { [weak self] in
            guard let self else { return }

            // Check if the capture session is currently running
            if self.session.isRunning {
                // Stops the capture session and any associated device inputs
                self.session.stopRunning()
            }
        }
    }
}

It may seem like there’s a lot to digest here 😅 but don’t worry: as you go through the code and its comments, you’ll gain a clear understanding of each component step by step.
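One thing the excerpt doesn’t show is the AlertError type that the manager (and, shortly, the view model and the view) rely on. Here is a minimal sketch whose fields are inferred from the call sites above; treat it as an assumption rather than the original definition:

// Minimal AlertError sketch; fields inferred from how it's used in this post
struct AlertError {
    var title: String = ""
    var message: String = ""
    var primaryButtonTitle: String = "OK"
    var secondaryButtonTitle: String?
    var primaryAction: (() -> Void)?
    var secondaryAction: (() -> Void)?
}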

Add a View Model

To keep our code organized and clear, it’s advisable to create a separate view model. This view model will handle the core logic and provide the necessary data to our view for display.

We’ll eventually add some fairly intensive business logic around what will be displayed on the screen of the ContentView to this view model class.

Requesting camera access permission

Privacy is one of Apple’s most touted pillars. Since Apple cares about users’ privacy, it only makes sense that the user needs to grant an app permission to use the camera.

So make sure to add a camera usage description (the NSCameraUsageDescription key) to the project’s Info.plist file.
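If you edit the raw Info.plist as source code, the entry looks like this (the description string below is just an example; word it for your own app):

<key>NSCameraUsageDescription</key>
<string>This app uses the camera to capture photos.</string>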

Let’s create a new Swift file named CameraViewModel.swift. Then, replace the contents of that file with the following code:

import AVFoundation
import Combine

class CameraViewModel: ObservableObject {

    // Reference to the CameraManager (a plain property; @ObservedObject only works inside a View)
    private let cameraManager = CameraManager()

    // Published properties to trigger UI updates
    @Published var isFlashOn = false
    @Published var showAlertError = false
    @Published var showSettingAlert = false
    @Published var isPermissionGranted: Bool = false

    var alertError: AlertError!

    // Reference to the AVCaptureSession
    var session: AVCaptureSession = .init()

    // Cancellable storage for Combine subscribers
    private var cancellables = Set<AnyCancellable>()

    init() {
        // Initialize the session with the cameraManager's session
        session = cameraManager.session
    }

    deinit {
        // Stop capturing when the ViewModel is deallocated
        cameraManager.stopCapturing()
    }

    // Set up Combine bindings to handle the publishers' emitted values
    func setupBindings() {
        cameraManager.$shouldShowAlertView.sink { [weak self] value in
            // Update alertError and showAlertError based on cameraManager's state
            self?.alertError = self?.cameraManager.alertError
            self?.showAlertError = value
        }
        .store(in: &cancellables)
    }

    // Check for camera device permission
    func checkForDevicePermission() {
        let videoStatus = AVCaptureDevice.authorizationStatus(for: AVMediaType.video)
        if videoStatus == .authorized {
            // If permission is granted, configure the camera
            isPermissionGranted = true
            configureCamera()
        } else if videoStatus == .notDetermined {
            // If the user has not been asked yet, request permission,
            // then configure the camera once access is granted
            AVCaptureDevice.requestAccess(for: AVMediaType.video) { [weak self] granted in
                DispatchQueue.main.async {
                    self?.isPermissionGranted = granted
                    if granted {
                        self?.configureCamera()
                    }
                }
            }
        } else if videoStatus == .denied {
            // If permission is denied, show a settings alert
            isPermissionGranted = false
            showSettingAlert = true
        }
    }

    // Configure the camera through the CameraManager to show a live camera preview
    func configureCamera() {
        cameraManager.configureCaptureSession()
    }
}

This view model will manage all the camera-related operations for SwiftUI views. For now,

  • It handles camera permissions, errors, and session configuration.
  • It sets up Combine bindings to update the UI based on the manager’s state changes.
  • It ensures proper camera management and access.

Now, let’s merge this data with the UI.

Add Camera Screen Design

We are going to design the screen like the default Camera app. For that, let’s update the code in the ContentView.swift file:

import SwiftUI

struct ContentView: View {

    // Owned by the view with @StateObject so it survives view updates
    @StateObject private var viewModel = CameraViewModel()

    var body: some View {
        GeometryReader { geometry in
            ZStack {
                Color.black.edgesIgnoringSafeArea(.all)

                VStack(spacing: 0) {
                    Button(action: {
                        // Call method to turn the flashlight on/off
                    }, label: {
                        Image(systemName: viewModel.isFlashOn ? "bolt.fill" : "bolt.slash.fill")
                            .font(.system(size: 20, weight: .medium, design: .default))
                    })
                    .accentColor(viewModel.isFlashOn ? .yellow : .white)

                    CameraPreview(session: viewModel.session)

                    HStack {
                        PhotoThumbnail()
                        Spacer()
                        CaptureButton {
                            // Call the capture method
                        }
                        Spacer()
                        CameraSwitchButton {
                            // Call the camera switch method
                        }
                    }
                    .padding(20)
                }
                // Attached to the VStack rather than the ZStack: SwiftUI only
                // honors one alert per view, so the two alerts need different hosts
                .alert(isPresented: $viewModel.showSettingAlert) {
                    Alert(title: Text("Warning"), message: Text("Application doesn't have all permissions to use camera and microphone; please change privacy settings."), dismissButton: .default(Text("Go to settings"), action: {
                        self.openSettings()
                    }))
                }
            }
            .alert(isPresented: $viewModel.showAlertError) {
                Alert(title: Text(viewModel.alertError.title), message: Text(viewModel.alertError.message), dismissButton: .default(Text(viewModel.alertError.primaryButtonTitle), action: {
                    viewModel.alertError.primaryAction?()
                }))
            }
            .onAppear {
                viewModel.setupBindings()
                viewModel.checkForDevicePermission()
            }
        }
    }

    // Used to open the app's Settings page
    func openSettings() {
        let settingsUrl = URL(string: UIApplication.openSettingsURLString)
        if let url = settingsUrl {
            UIApplication.shared.open(url, options: [:])
        }
    }
}

Let’s add the implementations of the other views we used in the HStack of the design above.

struct PhotoThumbnail: View {

    // The most recently captured image, if any
    var image: UIImage?

    var body: some View {
        Group {
            // If we have an image, show it
            if let image {
                Image(uiImage: image)
                    .resizable()
                    .aspectRatio(contentMode: .fill)
                    .frame(width: 60, height: 60)
                    .clipShape(RoundedRectangle(cornerRadius: 10, style: .continuous))
            } else {
                // Otherwise just show a black placeholder
                Rectangle()
                    .frame(width: 50, height: 50, alignment: .center)
                    .foregroundColor(.black)
            }
        }
    }
}

struct CaptureButton: View {

    var action: () -> Void

    var body: some View {
        Button(action: action) {
            Circle()
                .foregroundColor(.white)
                .frame(width: 70, height: 70, alignment: .center)
                .overlay(
                    Circle()
                        .stroke(Color.black.opacity(0.8), lineWidth: 2)
                        .frame(width: 59, height: 59, alignment: .center)
                )
        }
    }
}

struct CameraSwitchButton: View {

    var action: () -> Void

    var body: some View {
        Button(action: action) {
            Circle()
                .foregroundColor(Color.gray.opacity(0.2))
                .frame(width: 45, height: 45, alignment: .center)
                .overlay(
                    Image(systemName: "camera.rotate.fill")
                        .foregroundColor(.white)
                )
        }
    }
}

With these views in place, the core camera screen is complete, and you can run the app to see the live preview.
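This excerpt stops before wiring up the capture button; the flash, focus, zoom, switch, and save features live in the full version. To give a rough idea of where the `// Call the capture method` placeholder leads, here is a hedged sketch of a capture path through AVCapturePhotoOutput; the type and method names are my own illustrations, not the post’s final code:

import AVFoundation
import UIKit

// Hypothetical sketch, not the post's final code.
// A delegate object that receives the finished photo; it must subclass NSObject.
final class PhotoCaptureProcessor: NSObject, AVCapturePhotoCaptureDelegate {

    private let completion: (UIImage?) -> Void

    init(completion: @escaping (UIImage?) -> Void) {
        self.completion = completion
    }

    // Called by AVCapturePhotoOutput when the photo data is ready
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        guard error == nil,
              let data = photo.fileDataRepresentation(),
              let image = UIImage(data: data) else {
            completion(nil)
            return
        }
        completion(image)
    }
}

// Add inside CameraManager (sketch):

// Keeps capture delegates alive until their callbacks fire
private var inProgressCaptures = [PhotoCaptureProcessor]()

func capturePhoto(completion: @escaping (UIImage?) -> Void) {
    sessionQueue.async { [weak self] in
        guard let self else { return }

        let settings = AVCapturePhotoSettings()
        let processor = PhotoCaptureProcessor(completion: completion)
        // Hold a strong reference so the delegate outlives the async capture;
        // a production version would also remove it once the capture completes
        self.inProgressCaptures.append(processor)
        self.photoOutput.capturePhoto(with: settings, delegate: processor)
    }
}

On the UI side, the placeholder would then become something like CaptureButton { viewModel.capturePhoto() }, with the view model forwarding to the manager and publishing the returned UIImage so PhotoThumbnail can display it.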

This blog post was originally published on canopas.com, where you can read the full version.
