Learning AVFoundation — Part 2 (Understanding and Using AVAssets)

Divya Nayak
3 min read · Dec 5, 2019


Continued from Part 1: https://medium.com/@divya.nayak/learning-avfoundation-part-1-c761aad183ad

Now we understand, in brief, what an AVAsset is. We can get all the information about a piece of media through AVAsset, but not all of that information is available instantly; for example, working out the exact duration of the media takes some time. Once we have an AVAsset object we can generate thumbnails, convert it to another format, trim its contents, and so on.

In this blog, we will learn:

1. Creating an AVURLAsset

2. Loading the properties of an asset in order to use them

3. Generating thumbnails for an asset

4. Trimming and exporting the asset to a different file format

1. Creating an AVURLAsset

Let us understand how we create an AVAsset from a URL using an example.

Create a simple single-view Xcode project -> drag any video file, say “sample.mov”, into the project bundle -> add the AVFoundation framework under the General settings of the project target -> import AVFoundation in the class where you want to use it.
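A minimal sketch of that setup in code, assuming the file is bundled as “sample.mov” and naming the asset sampleAsset as it is referred to below:

```swift
import AVFoundation

// Locate the bundled video and create an asset for it.
// (Assumes "sample.mov" was added to the project bundle as described above.)
guard let sampleURL = Bundle.main.url(forResource: "sample", withExtension: "mov") else {
    fatalError("sample.mov not found in the bundle")
}

let sampleAsset = AVURLAsset(url: sampleURL)

// duration is a CMTime; the value may be approximate until the "duration"
// key has actually been loaded (see the next section).
print("Approximate duration: \(sampleAsset.duration.seconds) seconds")
```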

sampleAsset is the AVAsset object for sample.mov, and sampleAsset.duration gives an approximate duration of the video file. The asset duration is of type CMTime, a struct from the Core Media framework. Initializing the asset this way does not necessarily mean that all of its data is available right away: the exact duration of the video and all of its tracks take time to calculate. Hence we should not block the main thread; instead, the details of an AVAsset should be loaded asynchronously.

2. Load duration asynchronously for an asset
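A minimal sketch using loadValuesAsynchronously(forKeys:), assuming the sampleAsset created earlier; the key string “duration” names the property being loaded:

```swift
import AVFoundation

let durationKey = "duration"

// Ask the asset to load the key off the main thread; the completion
// handler may be called on an arbitrary queue.
sampleAsset.loadValuesAsynchronously(forKeys: [durationKey]) {
    var error: NSError?
    let status = sampleAsset.statusOfValue(forKey: durationKey, error: &error)

    switch status {
    case .loaded:
        // The exact duration is now available.
        print("Duration: \(sampleAsset.duration.seconds) seconds")
    case .failed, .cancelled:
        print("Could not load duration: \(String(describing: error))")
    default:
        break
    }
}
```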

Based on the requirements we can load the respective keys. If we want an asset for playback, then we should load the tracks property of the asset: replace “duration” with “tracks” in the code above.

3. Generating thumbnails for an asset

AVFoundation has the AVAssetImageGenerator class, which helps in generating still images for an asset. An asset can be video or audio, so an asset need not have any images in it at all. Initializing an AVAssetImageGenerator with such an asset would still succeed, so before generating anything we should check whether the asset actually contains visual content.
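A quick sketch of that check, assuming the “tracks” key of sampleAsset has already been loaded asynchronously as in the previous section:

```swift
import AVFoundation

// Only generate thumbnails if the asset actually has visual content.
let hasVideo = !sampleAsset.tracks(withMediaType: .video).isEmpty
print(hasVideo ? "Asset has video tracks — thumbnails can be generated"
               : "Audio-only asset — nothing to generate thumbnails from")
```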

The image generator gives a single image at a specific time, or a series of images for a specific set of times. AVFoundation understands time in terms of Core Media’s CMTime. We also have control over the dimensions of the images through the maximumSize property of the image generator, and we can capture the actual time at which each thumbnail is generated.
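For a single thumbnail, a sketch using copyCGImage(at:actualTime:); the one-second request time and the 200-point width cap are arbitrary choices:

```swift
import AVFoundation
import UIKit

let imageGenerator = AVAssetImageGenerator(asset: sampleAsset)
imageGenerator.appliesPreferredTrackTransform = true       // respect the video's orientation
imageGenerator.maximumSize = CGSize(width: 200, height: 0) // cap the width, keep the aspect ratio

let requestedTime = CMTime(seconds: 1, preferredTimescale: 600)
var actualTime = CMTime.zero

do {
    // Synchronously copy a single frame near the requested time.
    let cgImage = try imageGenerator.copyCGImage(at: requestedTime, actualTime: &actualTime)
    let thumbnail = UIImage(cgImage: cgImage)
    print("Requested \(requestedTime.seconds)s, got a frame at \(actualTime.seconds)s, size \(thumbnail.size)")
} catch {
    print("Thumbnail generation failed: \(error)")
}
```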

In order to generate many thumbnails, an array of CMTime values wrapped in NSValue should be passed as a parameter to the generateCGImagesAsynchronously(forTimes:completionHandler:) method. The result parameter of the completion closure tells us whether image generation succeeded or failed, and it also gives us the requested and actual times of each thumbnail along with the CGImageRef.
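A sketch of batch generation, continuing with the imageGenerator created above; the request times of 1, 2 and 3 seconds are arbitrary:

```swift
import AVFoundation

// The API expects the request times wrapped in NSValue.
let times = [1.0, 2.0, 3.0].map {
    NSValue(time: CMTime(seconds: $0, preferredTimescale: 600))
}

imageGenerator.generateCGImagesAsynchronously(forTimes: times) { requestedTime, cgImage, actualTime, result, error in
    switch result {
    case .succeeded:
        if let cgImage = cgImage {
            print("Requested \(requestedTime.seconds)s, got \(actualTime.seconds)s: \(cgImage)")
        }
    case .failed:
        print("Thumbnail failed: \(String(describing: error))")
    case .cancelled:
        print("Thumbnail generation was cancelled")
    @unknown default:
        break
    }
}
```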

As image generation is time-consuming and runs asynchronously, there is also a way to cancel the process: the cancelAllCGImageGeneration() method.

4. Trim and export the asset

Using AVAssetExportSession we can transcode from one format to another and, at the same time, specify a range of CMTime to which the asset will be trimmed in the exported media. Before initializing the session, one must know which export presets the asset is compatible with. An asset can be video or audio, so we should not specify a video preset for an audio asset; we first check the compatible presets for an asset using the exportPresetsCompatibleWithAsset: method.
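A sketch of that check; in Swift the same call is spelled exportPresets(compatibleWith:), and the medium-quality preset picked here is just an example:

```swift
import AVFoundation

// List every preset this particular asset can be exported with.
let compatiblePresets = AVAssetExportSession.exportPresets(compatibleWith: sampleAsset)
print("Compatible presets: \(compatiblePresets)")

// Only use a preset the asset actually supports.
let preset = AVAssetExportPresetMediumQuality
guard compatiblePresets.contains(preset) else {
    fatalError("Asset does not support preset \(preset)")
}
```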

Set the output URL, the compatible preset and the output file type, and export the asset asynchronously. Handle the export status once the export is done. We now have a trimmed asset in a different file format (.mov to .mp4).
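A minimal sketch of the trim-and-export step, reusing the preset checked above; the output location in the temporary directory and the 1–4 second trim range are illustrative:

```swift
import AVFoundation

guard let exportSession = AVAssetExportSession(asset: sampleAsset, presetName: preset) else {
    fatalError("Could not create an export session")
}

// Where the trimmed .mp4 will be written (the file must not already exist).
let outputURL = FileManager.default.temporaryDirectory.appendingPathComponent("trimmed.mp4")
exportSession.outputURL = outputURL
exportSession.outputFileType = .mp4

// Trim: keep only the portion from 1s to 4s of the original movie.
exportSession.timeRange = CMTimeRange(start: CMTime(seconds: 1, preferredTimescale: 600),
                                      duration: CMTime(seconds: 3, preferredTimescale: 600))

exportSession.exportAsynchronously {
    switch exportSession.status {
    case .completed:
        print("Exported trimmed asset to \(outputURL)")
    case .failed, .cancelled:
        print("Export did not finish: \(String(describing: exportSession.error))")
    default:
        break
    }
}
```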

Continued in Part 3: https://medium.com/@divya.nayak/learning-avfoundation-part-3-playback-playing-assets-e11b6010d6c5

Thank you!
