Working With Video in iOS: AVFoundation and CoreMedia

Tim Beals 🎸 · Published in Swift2Go · Oct 14, 2018 · 10 min read

This article will cover a number of important concepts for working with video in iOS. It is divided into two parts. In Part 1 we will create a VideoService which sets up and launches a UIImagePickerController responsible for recording and saving a movie file to our Photos application. Then in Part 2 we will create a custom VideoPlayerView to play back our video. Additionally, VideoPlayerView will provide functionality for pausing, scrubbing through the video file with a slider, and displaying the duration and current position of playback. Along the way, this article will explain the frameworks being used. Let’s get started!

You can fork the demo project that accompanies this article here:

Recording and Saving Video with VideoService

Step 1: Create Singleton

We can only record and save a single video at a time, and it’s very likely that the entity that achieves these two tasks will need to be stateful in order to hold a reference to the saved video. With this in mind, our VideoService is a prime candidate for the Singleton pattern. In the snippet below, the initializer is made private (line 4) and a single instance is stored in a static property (line 3). This ensures that no other VideoService instances will exist in our system, and that our single instance is globally accessible.
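Here is a minimal sketch of what that singleton shell might look like (the code in the accompanying project may differ slightly):

```swift
class VideoService {

    // The single, globally accessible instance of the service.
    static let instance = VideoService()

    // A private initializer prevents any other part of the app
    // from creating additional VideoService instances.
    private init() { }
}
```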

Step 2: Get Permissions, Perform Device Checks and Launch Video Recorder

Before we can think about launching our video recorder, we need to seek the user’s permission to access the device hardware, specifically the microphone and the camera. In typical iOS fashion, we are required to declare our intent in the Info.plist. Open the file and add two new keys: Microphone Usage Description (NSMicrophoneUsageDescription) and Camera Usage Description (NSCameraUsageDescription). Each needs a String value attached: a brief explanation to the user of why the application wants access to that hardware.

Now it’s time to perform some checks and set up our video recorder, which is actually an instance of UIImagePickerController. In the snippet below, we have two private methods. First, isVideoRecordingAvailable (lines 3–13) returns a Boolean to indicate whether the device has a camera available and whether the UIImagePickerController has the movie type available to it. Notice that the return value for media (line 9) is an array of String. To check that the identifying String for the movie type is available, we can use the constant kUTTypeMovie, which becomes accessible when we import MobileCoreServices at the top of our file. This constant is of type CFString, which means it needs to be bridged to String with a cast.

Our second private method setupRecordingPicker (lines 15–21) returns an instance of UIImagePickerController which has been configured to record video. Notice that we specify the media type by again referencing the kUTTypeMovie constant.
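As a rough sketch, assuming the method names mentioned above, the two private helpers might look something like this inside VideoService:

```swift
// At the top of VideoService.swift
import UIKit
import MobileCoreServices

// Inside VideoService:

// Returns true only if the device has a camera and the picker
// can capture movies (kUTTypeMovie) with it.
private func isVideoRecordingAvailable() -> Bool {
    guard UIImagePickerController.isSourceTypeAvailable(.camera),
        let media = UIImagePickerController.availableMediaTypes(for: .camera),
        media.contains(kUTTypeMovie as String) else {
            return false
    }
    return true
}

// Configures a picker to record video rather than take photos.
private func setupRecordingPicker() -> UIImagePickerController {
    let picker = UIImagePickerController()
    picker.sourceType = .camera
    picker.mediaTypes = [kUTTypeMovie as String]
    picker.cameraCaptureMode = .video
    return picker
}
```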

Now we can create a public method launchVideoRecorder(in vc: completion:) (lines 23–34) which calls the previous two methods. In it, we check if our device is an iPhone and if it is, we get our instance of UIViewController to present the picker modally. For convenience, I created a completion block in case the view controller needs to execute code after the segue animation has completed.
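A sketch of the public launch method, assuming the helpers above:

```swift
// Inside VideoService: the public entry point that the view controller calls.
func launchVideoRecorder(in vc: UIViewController, completion: (() -> Void)?) {
    // Only present the recorder on iPhone and when recording is actually possible.
    guard UIDevice.current.userInterfaceIdiom == .phone,
        isVideoRecordingAvailable() else { return }

    let picker = setupRecordingPicker()
    vc.present(picker, animated: true, completion: completion)
}
```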

Now we can launch our video recorder from our view controller with one simple line:

VideoService.instance.launchVideoRecorder(in: self, completion: nil)

To test our code, we need to use an actual device as opposed to a simulator. Build and run, and when you point your camera at some kid art that is next to your work station you should see something like this…

Check out that cotton candy…

Step 3: Conform to UIImagePickerControllerDelegate

If you tap the record button, you will see that the expected default recording behaviour is available to us; however, we haven’t specified how we want to save the video yet. The action that triggers our save is the user tapping ‘Use Video’ after the recording has been captured. This causes the picker to access its delegate and call the method imagePickerController(_:didFinishPickingMediaWithInfo:). So now let’s adopt and conform to UIImagePickerControllerDelegate.

First, notice that UIImagePickerControllerDelegate inherits from NSObjectProtocol, so our VideoService needs to subclass NSObject (line 1). This means that our initializer now overrides NSObject’s initializer and needs to be marked as such (line 4). The compiler also informs us that we need to adopt the UINavigationControllerDelegate protocol, which contains several optional methods that we won’t require in our demo.

Once this is done, we need to set our VideoService as the picker delegate. We can do this in the setupRecordingPicker() method from the previous snippet by setting picker.delegate = self.

To check that this works, we can add a print statement to the delegate method (line 11), and build and run our app once again.
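Put together, the conformance might look something like the following sketch; the print statement is there only to confirm the callback fires:

```swift
// VideoService now subclasses NSObject and adopts both delegate protocols.
class VideoService: NSObject, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    static let instance = VideoService()

    private override init() {
        super.init()
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        print("Picker finished. Media info: \(info)")
    }
}
```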

Step 4: Save Video to Photos Album

Once we have confirmed that our delegate method is being called successfully, we can go about saving the captured video to the Photos application.

Let’s start by getting the video url in the didFinishPickingMediaWithInfo delegate method (lines 18–21). When the picker selects some media (image or video) the method is called and an info dictionary is passed in. You can access all kinds of useful information about the selected media by subscripting with one of the InfoKey constants, a set of keys nested in UIImagePickerController. In our example we want the URL of the video we captured, so we subscript with the .mediaURL key and cast the value (which is stored as Any) to our desired URL type (line 19).

Once we have this url, we can create a new private method saveVideo(at mediaUrl:) (lines 3–9) and pass in the url. Again we need to seek permission from the user to store media in their library, so we return to our Info.plist and create a new key, Photo Library Additions Usage Description (NSPhotoLibraryAddUsageDescription), with an accompanying string which will be presented to the user. With that done, saving is made incredibly simple by two global UIKit functions: UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(_:) returns a Boolean indicating whether the video can be saved to a photo album, and UISaveVideoAtPathToSavedPhotosAlbum(_:_:_:_:) performs the save and calls a #selector method to report whether the save was successful. This #selector method has been created and will be used in the next step (lines 11–13).
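Building on the previous sketch, the delegate method can now extract the url and hand it to a save helper; the completion selector must match UIKit’s required video:didFinishSavingWithError:contextInfo: signature, while everything else here is approximate:

```swift
// Inside VideoService
private var videoUrl: URL?   // keeps a reference to the most recently recorded video

func imagePickerController(_ picker: UIImagePickerController,
                           didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
    // The recorded movie's location on disk is stored under the .mediaURL key.
    guard let mediaUrl = info[.mediaURL] as? URL else { return }
    saveVideo(at: mediaUrl)
    picker.dismiss(animated: true, completion: nil)
}

private func saveVideo(at mediaUrl: URL) {
    videoUrl = mediaUrl
    let path = mediaUrl.path
    // Only attempt the save if the movie is compatible with the Photos album.
    if UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(path) {
        UISaveVideoAtPathToSavedPhotosAlbum(path, self,
            #selector(video(_:didFinishSavingWithError:contextInfo:)), nil)
    }
}

// UIKit calls this once the save attempt completes (used in the next step).
@objc func video(_ videoPath: String, didFinishSavingWithError error: Error?, contextInfo: UnsafeRawPointer) {
    print("Save finished. Error: \(String(describing: error))")
}
```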

Step 5: Pass Video URL with Delegate Pattern

Once we have saved the video we want to be able to pass its url to our view controller. To do this, we will use the delegate pattern. First, we create a protocol VideoServiceDelegate and create a method videoDidFinishSaving(error: url:) (lines 1–3). This method will pass an optional error if the save was unsuccessful, and an optional url if the save was successful. We create a delegate property of type VideoServiceDelegate? and use that property to pass the error and url in the #selector method we created in the previous snippet (lines 16–19).
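The protocol and the updated completion selector might look roughly like this, assuming the stored videoUrl property from the previous sketch:

```swift
protocol VideoServiceDelegate {
    // Passes an error if the save failed, or the saved video's url if it succeeded.
    func videoDidFinishSaving(error: Error?, url: URL?)
}

// Inside VideoService
var delegate: VideoServiceDelegate?

@objc func video(_ videoPath: String, didFinishSavingWithError error: Error?, contextInfo: UnsafeRawPointer) {
    // Hand back the url only when the save succeeded.
    delegate?.videoDidFinishSaving(error: error, url: error == nil ? videoUrl : nil)
}
```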

Now in the view controller, we can adopt VideoServiceDelegate, implement the protocol method and set VideoService’s delegate property using VideoService.instance.delegate = self. In the snippet below, we are confirming that the save was successful and showing an alert informing the user. If we build and run, we should find that our save is successful! Now we can also open our Photos application and see the newly created movie in ‘All Photos’.
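In the view controller, the adoption could look something like this sketch (the class name ViewController and the alert wording are placeholders):

```swift
extension ViewController: VideoServiceDelegate {

    func videoDidFinishSaving(error: Error?, url: URL?) {
        let title = (error == nil) ? "Success" : "Error"
        let message = (error == nil) ? "Your video was saved to Photos." : "Your video could not be saved."
        let alert = UIAlertController(title: title, message: message, preferredStyle: .alert)
        alert.addAction(UIAlertAction(title: "OK", style: .default, handler: nil))
        present(alert, animated: true, completion: nil)
    }
}

// And somewhere in setup, e.g. viewDidLoad():
// VideoService.instance.delegate = self
```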

Playing Back Video with VideoPlayerView

In the past, video playback was achieved with an MPMoviePlayerController which provided some handy default functionality in a few lines of setup code. It was deprecated in iOS 9 and replaced by AVFoundation’s AVPlayer which requires more in terms of setup, but allows for greater customization of your player.

In the next section, we will create a custom VideoPlayerView, which is a subclass of UIView and contains all of the UI components for interacting with the video. In the screenshot below you can see that there are two labels and a slider at the bottom of the view to show the current position and overall duration of the video. This article will not address how these components are created and added to their superview, but will instead focus on setting up the AVPlayer and handling the tracking of video progress using AVFoundation and CoreMedia.

Step 1: Create AVPlayer and AVPlayerLayer

In order to show a video, we first need to import AVFoundation and then set up two properties: an AVPlayer and an AVPlayerLayer. Notice that we want to initialize our custom class with a frame and videoURLString (line 6), which we pass into a method setupVideoPlayerWith(path:). Here we create the AVPlayer: the playback engine which handles playing, pausing, and tracking progress. This can be done on a single line with the video url (line 19). Once we have our player set up, we need to create the player layer. Every UIView has a layer, and typically we interact with it to create borders, corner radius, and even layer animations. In this case the layer acts as our player screen. We need to initialize the layer with the player (line 20), then set its frame and add it as a sublayer of our view (lines 22–23). With all of this setup done, we can begin playback with the play() method (line 14).
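A sketch of that setup; the failable coder initializer and other boilerplate are assumptions around the player and layer creation described above:

```swift
import AVFoundation
import UIKit

class VideoPlayerView: UIView {

    private var player: AVPlayer?
    private var playerLayer: AVPlayerLayer?

    init(frame: CGRect, videoURLString: String) {
        super.init(frame: frame)
        setupVideoPlayerWith(path: videoURLString)
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    private func setupVideoPlayerWith(path: String) {
        guard let url = URL(string: path) else { return }

        // The player is the playback engine: play, pause, seek, track progress.
        player = AVPlayer(url: url)

        // The player layer is the "screen" that renders the video frames.
        playerLayer = AVPlayerLayer(player: player)
        playerLayer?.frame = bounds
        if let playerLayer = playerLayer {
            layer.addSublayer(playerLayer)
        }

        player?.play()
    }
}
```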

Now, we can test it out by taking the url from our VideoServiceDelegate method and passing it into a method playMovie(with url:) (lines 15–19) that initializes the VideoPlayerView with the video url and adds it as a subview of our ViewController view. When we build and run, we should see that the video starts playing on its own.
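In the view controller this might look roughly like the following; the frame calculation is only illustrative:

```swift
// Inside the view controller
func playMovie(with url: URL) {
    // A 16:9 player pinned to the top of the screen (illustrative sizing).
    let playerFrame = CGRect(x: 0, y: 0, width: view.frame.width, height: view.frame.width * 9 / 16)
    let videoPlayerView = VideoPlayerView(frame: playerFrame, videoURLString: url.absoluteString)
    view.addSubview(videoPlayerView)
}
```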

Step 2: Tap Gesture Controls Play and Pause

Let’s now control play and pause by adding a tap gesture to our VideoPlayerView. We initialize the tap gesture and add it to our custom class in one line (line 9) using handleTapGesture(sender:) as our #selector method. This method checks a boolean isSettingPlay (line 3) to determine whether play should be started or stopped and then changes the value of that same property (line 18). Build and run again to check that the behaviour is as we would expect.
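A sketch of the toggle, assuming isSettingPlay starts as true because playback begins automatically:

```swift
// Inside VideoPlayerView
private var isSettingPlay = true   // true while the video is playing

private func setupTapGesture() {
    // Called from the initializer.
    addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(handleTapGesture(sender:))))
}

@objc private func handleTapGesture(sender: UITapGestureRecognizer) {
    if isSettingPlay {
        player?.pause()
    } else {
        player?.play()
    }
    isSettingPlay.toggle()
}
```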

Step 3: Track Video Progress with Labels and Slider

So far we have been able to successfully play and pause our video, but our time indication labels and slider haven’t changed at all. In our VideoPlayerView we create a new method trackVideoProgress() in order to update our current time label and slider (lines 25–33). We begin by creating an instance of CMTime (line 26). CM is the prefix for Core Media, the framework responsible for sample processing, synchronization, queues, and time representation in AVFoundation. CMTime is initialized with a value and a timescale, and can be converted into many different time measurement units; seconds is always value / timescale. With this in mind, we can set our desired time interval of 0.5 seconds. Now we observe our AVPlayer instance and receive a CMTime object with the current time at the frequency of our interval using addPeriodicTimeObserver(forInterval:queue:using:). We update the label and the slider inside the callback block.
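A sketch of the periodic observation; the currentTimeLabel and slider properties, along with the String helper and setSliderValue(for:progress:), are placeholder names that match the descriptions in this section:

```swift
// Inside VideoPlayerView
private var timeObserverToken: Any?

private func trackVideoProgress() {
    // Fire every half second: seconds = value / timescale = 1 / 2.
    let interval = CMTime(value: 1, timescale: 2)

    timeObserverToken = player?.addPeriodicTimeObserver(forInterval: interval, queue: .main) { [weak self] currentTime in
        guard let self = self else { return }
        self.currentTimeLabel.text = String.constructTimeString(from: currentTime)
        if let player = self.player {
            self.setSliderValue(for: player, progress: currentTime)
        }
    }
}
```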

In order to get the total length of our video, we can override the NSObject instance method observeValue(forKeyPath:of:change:context:) (lines 25–41), after registering our view as a key-value observer of the player’s currentItem.loadedTimeRanges key path. The observation fires once the AVPlayer has loaded the video from the URL and before it begins playback. In this method we can guard that the loadedTimeRanges are available (line 37). Provided they are, we will be able to access the duration of our video (line 39).
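A sketch of that override; where the registration happens is an assumption (shown in the comment), and durationLabel is a placeholder name:

```swift
// Inside VideoPlayerView
// Registered during setup, for example:
// player?.addObserver(self, forKeyPath: "currentItem.loadedTimeRanges", options: .new, context: nil)

override func observeValue(forKeyPath keyPath: String?, of object: Any?,
                           change: [NSKeyValueChangeKey: Any]?, context: UnsafeMutableRawPointer?) {
    // Once at least one time range has loaded, the item's duration is available.
    guard keyPath == "currentItem.loadedTimeRanges",
        let duration = player?.currentItem?.duration else { return }

    durationLabel.text = String.constructTimeString(from: duration)
}
```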

To properly display the time in our labels, we can create an extension on String that accepts a CMTime argument and converts it to a String showing minutes and seconds in our desired format, 00:00 (lines 1–10). We can convert the CMTime value to a Float64 number of seconds using CMTimeGetSeconds(_:). Then it is a simple matter of getting the number of minutes by dividing the total seconds by 60 and getting the remaining seconds with the modulo operator % (lines 5–6). This method is called to update the label text in both of the previous methods (line 28 and line 40).
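A sketch of that helper; the name constructTimeString(from:) is a placeholder:

```swift
import CoreMedia

extension String {

    // Converts a CMTime into a "mm:ss" display string.
    static func constructTimeString(from time: CMTime) -> String {
        let totalSeconds = CMTimeGetSeconds(time)
        guard !totalSeconds.isNaN else { return "00:00" }

        let minutes = Int(totalSeconds) / 60
        let seconds = Int(totalSeconds) % 60
        return String(format: "%02d:%02d", minutes, seconds)
    }
}
```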

Finally, we can update the slider in our trackVideoProgress method (line 30) using setSliderValue(for player:, progress:) (lines 14-19). In it, we access the duration of the video from the AVPlayer and the current progress and then divide the progress by the duration. The slider needs to be set up with minimum and maximum values of 0.0 and 1.0. Once again we can build and run.
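A sketch of the slider update, assuming a slider property configured with a 0.0 to 1.0 range:

```swift
// Inside VideoPlayerView
private func setSliderValue(for player: AVPlayer, progress: CMTime) {
    guard let duration = player.currentItem?.duration else { return }

    let durationSeconds = CMTimeGetSeconds(duration)
    let progressSeconds = CMTimeGetSeconds(progress)
    guard durationSeconds > 0 else { return }

    // progress / duration gives a fraction between 0.0 and 1.0.
    slider.value = Float(progressSeconds / durationSeconds)
}
```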

Step 4: Scrub to Different Points in Video Using Slider

We have the playback of the video driving the movement of the slider, but we can also do the opposite and have the user move the slider to determine the progress of the video. In order to do this we need to add a #selector method to our slider with slider.addTarget(self, action: #selector(handleSliderChangedValue(sender:)), for: .valueChanged). In this method we get the duration (CMTime) of our video from the player (line 4) and convert it to seconds with CMTimeGetSeconds (which, you will remember, returns a Float64). We then calculate the seek position in seconds by multiplying the slider value by the total seconds, convert that back into a CMTime using a timescale of 1 (whole seconds), and pass it into the AVPlayer method seek(to:completionHandler:).
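A sketch of the scrubbing handler described above:

```swift
// Inside VideoPlayerView
@objc private func handleSliderChangedValue(sender: UISlider) {
    guard let duration = player?.currentItem?.duration else { return }

    let totalSeconds = CMTimeGetSeconds(duration)
    guard !totalSeconds.isNaN else { return }

    // The slider value (0.0 to 1.0) maps directly onto the video's length in seconds.
    let seekSeconds = Float64(sender.value) * totalSeconds

    // A timescale of 1 expresses the value in whole seconds.
    let seekTime = CMTime(value: CMTimeValue(seekSeconds), timescale: 1)
    player?.seek(to: seekTime, completionHandler: { _ in })
}
```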

Wrap Up

And that’s it! We have managed to implement a whole lot of functionality in our two custom classes and provided the user with a range of typical features for working with media. An important thing to remember is that many of these concepts are transferable. For example, UIImagePickerController is able to work with images, and AVFoundation is a powerful framework for working with audio, so with little adjustment you could create a totally different user experience with a totally different media type. Please share this article and build something cool. Thanks for reading!
