🍎 WWDC 18: Integrate FaceTime features within your app with Agora.io 📱

Sid Sharma
Published in Agora.io
Jun 8, 2018 · 4 min read

At WWDC, Apple touted new features for FaceTime, creating new buzz for developers interested in real-time communications. FaceTime now supports group video calls with up to 32 participants, a considerable update. The service is also integrated with the iMessage app, so users can move seamlessly from text chat to video chat. Extensive filters and emojis add further engagement options for both users and developers, and they help Apple stay on par with competing services.

Another new FaceTime feature is the ability to gauge which participant is currently speaking. To provide a better user experience, the video window of the active speaker is enlarged. The design is similar to Google Hangouts, but cleaner and closer to Apple’s design ethos.

Excitement has been brewing at numerous conferences in recent years over hot new features associated with video calling. Today we’re going to run through some of the most interesting ones and explain how we can implement them using the Agora RTC SDK.

FaceTime Video Window Automatically Adjusts

Users may be hesitant about this new group FaceTime feature, because following a conversation among that many people seems challenging. However, like Google Hangouts, FaceTime can now automatically select and highlight the user who is speaking and enlarge their window.

Facetime WWDC 2018 Features

This feature will be incredibly useful, and it is surprisingly straightforward to implement. You can make this happen with the Agora RTC SDK in just a few steps, without having to worry about AI integrations.

First, implement the multi-party video call function in the SDK. Second, use the interface that returns the speaking user’s uid (unique identifier) to identify which user is speaking. Then adjust the UI to maximize that user’s window. The specific interface calls include:
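Below is a minimal Swift sketch, assuming the 2.x iOS SDK: enableAudioVolumeIndication turns on speaker-volume reporting, and the activeSpeaker delegate callback then delivers the uid of the loudest speaker. The enlargeVideoWindow(for:) helper is hypothetical UI code.

```swift
import UIKit
import AgoraRtcEngineKit

class GroupCallViewController: UIViewController, AgoraRtcEngineDelegate {
    var agoraKit: AgoraRtcEngineKit!

    func setupEngine() {
        agoraKit = AgoraRtcEngineKit.sharedEngine(withAppId: "YourAppId", delegate: self)
        agoraKit.enableVideo()
        // Report speaker volumes every 500 ms so the engine can tell who is talking.
        agoraKit.enableAudioVolumeIndication(500, smooth: 3)
    }

    // Delivers the uid of the loudest speaker over the recent reporting interval.
    func rtcEngine(_ engine: AgoraRtcEngineKit, activeSpeaker speakerUid: UInt) {
        enlargeVideoWindow(for: speakerUid)
    }

    // Hypothetical UI helper: bring this user's video view forward and scale it up.
    func enlargeVideoWindow(for uid: UInt) {
        // Reorder and resize the corresponding video view here.
    }
}
```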

FaceTime: Video Preview

FaceTime has always had video preview functionality. Users can see who is FaceTiming them before they answer, and their own local video is rendered as soon as a call is initiated.

From an implementation point of view, this is quite easy to handle in your code logic. The only step is to render the local video data onto the interface before you begin sending audio and video streams.
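A rough Swift sketch (the channel name and view are placeholders; setupLocalVideo and startPreview are the relevant SDK calls):

```swift
import UIKit
import AgoraRtcEngineKit

func showPreviewThenJoin(agoraKit: AgoraRtcEngineKit, localVideoView: UIView) {
    let canvas = AgoraRtcVideoCanvas()
    canvas.view = localVideoView
    canvas.renderMode = .hidden
    agoraKit.enableVideo()
    agoraKit.setupLocalVideo(canvas)
    agoraKit.startPreview() // renders locally; nothing is transmitted yet

    // Audio and video streams start flowing only once the channel is joined.
    agoraKit.joinChannel(byToken: nil, channelId: "demoChannel",
                         info: nil, uid: 0, joinSuccess: nil)
}
```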

Google Duo: Knock Knock

At Google I/O 2016, Google released an instant messaging tool, Allo, along with a video calling application, Google Duo. Two years later, the Allo project was suspended; Duo, however, has kept growing.

Within Duo, there is a “Knock Knock” function that users seem to really like. Before the video call is connected, the party receiving the call can see the caller’s video stream, but the caller cannot see the person they are calling. This works like a knock at the door: you look through the peephole and see who’s there, but they can’t see you.

Knock, Knock — Google Duo

To implement this functionality with the Agora RTC SDK, we need to control the transmission logic of the audio and video streams. Before the video call is answered, the calling and receiving parties call the interfaces as follows.

Caller:
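A minimal caller-side sketch using the SDK’s mute methods (assuming agoraKit is your AgoraRtcEngineKit instance; the channel name is a placeholder):

```swift
// Publish audio and video, but receive nothing yet, so the caller
// cannot see or hear the receiving side before the call is answered.
agoraKit.muteAllRemoteAudioStreams(true)
agoraKit.muteAllRemoteVideoStreams(true)
agoraKit.joinChannel(byToken: nil, channelId: "knockKnock",
                     info: nil, uid: 0, joinSuccess: nil)
```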

Receiving Party:
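And the matching receiver-side sketch:

```swift
// Receive and render the caller's stream, but send nothing, so the
// receiver stays invisible behind the "peephole".
agoraKit.muteLocalAudioStream(true)
agoraKit.muteLocalVideoStream(true)
agoraKit.joinChannel(byToken: nil, channelId: "knockKnock",
                     info: nil, uid: 0, joinSuccess: nil)
```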

After the call is answered, we need to modify some of the previous parameters:

Caller:
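The caller simply starts receiving:

```swift
// Unmute the remote streams once the call is accepted.
agoraKit.muteAllRemoteAudioStreams(false)
agoraKit.muteAllRemoteVideoStreams(false)
```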

Receiving Party:
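And the receiver starts publishing:

```swift
// Unmute the local streams once the call is accepted.
agoraKit.muteLocalAudioStream(false)
agoraKit.muteLocalVideoStream(false)
```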

Snapchat: Real-Time Stickers

Adding a variety of stickers and effects to your video call is not new, but when Snapchat began featuring them in live broadcasts, they were an instant hit! Let’s add these to a live broadcast/video call, again using the Agora platform.

Start by pre-processing the video through self-developed or third-party libraries, then render and transmit it via the Agora RTC SDK. Developers can call the interface in one of the following two ways.

Method One: Using self-collection API
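A minimal push-mode sketch, assuming the SDK’s setExternalVideoSource and pushExternalVideoFrame APIs (the format value for CVPixelBuffer input follows the SDK docs):

```swift
import CoreMedia
import AgoraRtcEngineKit

// Tell the engine that the app will supply its own video frames.
func enablePushMode(agoraKit: AgoraRtcEngineKit) {
    agoraKit.setExternalVideoSource(true, useTexture: true, pushMode: true)
}

// Call this for every pre-processed (e.g. sticker-rendered) frame.
func push(pixelBuffer: CVPixelBuffer, at timestamp: CMTime, with agoraKit: AgoraRtcEngineKit) {
    let frame = AgoraVideoFrame()
    frame.format = 12              // 12 = CVPixelBuffer input on iOS, per the SDK docs
    frame.textureBuf = pixelBuffer
    frame.time = timestamp
    agoraKit.pushExternalVideoFrame(frame)
}
```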

Method Two: Custom Video Source (Supported by Video Call SDK 2.1 and above)

Step 1: Implement the AgoraVideoSourceProtocol and build a custom Video Source class (a skeleton is sketched after this list) in which you:

  • Specify the type of buffer used for video capture (bufferType)
  • Prepare the system environment, initialize parameters, etc. when the video source is initialized (shouldInitialize)
  • Start capturing video data when the source starts (shouldStart)
  • Pass the captured data to the Media Engine through the interface defined by the AgoraVideoFrameConsumer protocol
  • Stop capturing video data (shouldStop)
  • Release hardware and other system resources (shouldDispose)
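A skeleton of such a class might look like this (capture details are elided; only the protocol surface is shown):

```swift
import CoreMedia
import AgoraRtcEngineKit

class StickerVideoSource: NSObject, AgoraVideoSourceProtocol {
    // Media Engine supplies this consumer; we feed it processed frames.
    var consumer: AgoraVideoFrameConsumer?

    func bufferType() -> AgoraVideoBufferType {
        return .pixelBuffer        // we will deliver CVPixelBuffers
    }

    func shouldInitialize() -> Bool {
        // Prepare the capture session, check camera permissions, etc.
        return true
    }

    func shouldStart() {
        // Begin capturing; for each processed frame, call:
        // consumer?.consumePixelBuffer(buffer, withTimestamp: ts, rotation: .rotationNone)
    }

    func shouldStop() {
        // Pause capturing video data.
    }

    func shouldDispose() {
        // Release the camera and other system resources.
    }
}
```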

Step 2: Create a Custom Video Source Object

Step 3: Set the custom video source object on the Media Engine through its setVideoSource method
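Steps 2 and 3 together are only two lines:

```swift
let videoSource = StickerVideoSource() // the class sketched above
agoraKit.setVideoSource(videoSource)   // hand the source to the Media Engine
```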

Step 4: The Media Engine calls the AgoraVideoSourceProtocol methods implemented by your custom video source at the appropriate times

Want to implement FaceTime-style features yourself? Download the Agora SDK and try it within your app, or fork this 1-to-1 Video Call demo and add the features above!

Note: The snippets above use iOS as the example for the interface invocation logic and implementation; the same can be done on other platforms (Android, macOS, Windows, etc.).

Please shoot me an email or DM if you have any questions!
