Getting Started with CameraX in Jetpack Compose
Part 1 of Unlocking the Power of CameraX in Jetpack Compose
This blog post is a part of Camera and Media Spotlight Week. We’re providing resources — blog posts, videos, sample code, and more — all designed to help you uplevel the media experiences in your app.
To learn more about what Spotlight Week has to offer and how it can benefit you, be sure to read our overview blog post.
We’ve heard from you that you love the power that both the CameraX and Jetpack Compose libraries give you, but that you’d like idiomatic Compose APIs for building camera UIs. This year, our engineering teams worked on two new Compose artifacts: the low-level viewfinder-compose and the high-level camera-compose. Both are now available as alpha releases 🚀🚀🚀.
In this blog post series, we’ll show you how to integrate the camera-compose APIs in your app. But more excitingly, we’ll show you some of the ✨ delightful UI experiences that integration with Compose unlocks. All the amazing Compose features, like adaptive APIs and animation support, integrate seamlessly with the camera preview!
Here’s a short summary of what each post will contain:
- 🧱 Part 1 (this post): Building a basic camera preview using the new camera-compose artifact. We’ll cover permission handling and basic integration.
- 👆 Part 2: Using the Compose gesture system, graphics, and coroutines to implement a visual tap-to-focus.
- 🔎 Part 3: Exploring how to overlay Compose UI elements on top of your camera preview for a richer user experience.
- 📂 Part 4: Using adaptive APIs and the Compose animation framework to smoothly animate to and from tabletop mode on foldable phones.
With all of these in action, our final app will look as follows:
In addition, it will smoothly move to and from tabletop mode:
By the end of this first post, you’ll have a functional camera viewfinder, ready to be expanded upon in the subsequent parts of the series. Please do code along; it’s the best way to learn!
Add the library dependencies
I’m assuming that you already have Compose set up in your app. If you want to follow along, simply create a new app in Android Studio. I typically use the latest Canary version, because it has the latest Compose templates (and because I like living on the edge 😀).
Add the following to your libs.versions.toml file:
[versions]
..
camerax = "1.5.0-alpha03"
accompanist = "0.36.0" # or whatever matches your Compose version
[libraries]
..
# Contains the basic camera functionality such as SurfaceRequest
androidx-camera-core = { module = "androidx.camera:camera-core", version.ref = "camerax" }
# Contains the CameraXViewfinder composable
androidx-camera-compose = { module = "androidx.camera:camera-compose", version.ref = "camerax" }
# Allows us to bind the camera preview to our UI lifecycle
androidx-camera-lifecycle = { group = "androidx.camera", name = "camera-lifecycle", version.ref = "camerax" }
# The specific camera implementation that renders the preview
androidx-camera-camera2 = { module = "androidx.camera:camera-camera2", version.ref = "camerax" }
# The helper library to request the camera permission
accompanist-permissions = { module = "com.google.accompanist:accompanist-permissions", version.ref = "accompanist" }
Next, add these to your module’s build.gradle.kts dependencies block:
dependencies {
    ..
    implementation(libs.androidx.camera.core)
    implementation(libs.androidx.camera.compose)
    implementation(libs.androidx.camera.lifecycle)
    implementation(libs.androidx.camera.camera2)
    implementation(libs.accompanist.permissions)
}
With these dependencies added, we can request the camera permission and then actually display the camera preview. Next, let’s look at the permission handling.
Grant camera permissions
The Accompanist permissions library allows us to easily request the camera permission. First, we need to set up the AndroidManifest.xml:
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
    <uses-feature android:name="android.hardware.camera" android:required="true" />
    <uses-permission android:name="android.permission.CAMERA" />
    ..
</manifest>
Now, we can simply follow the library’s instructions to request the permission:
class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        enableEdgeToEdge()
        setContent {
            MyApplicationTheme {
                CameraPreviewScreen()
            }
        }
    }
}

@OptIn(ExperimentalPermissionsApi::class)
@Composable
fun CameraPreviewScreen(modifier: Modifier = Modifier) {
    val cameraPermissionState = rememberPermissionState(android.Manifest.permission.CAMERA)
    if (cameraPermissionState.status.isGranted) {
        CameraPreviewContent(modifier)
    } else {
        Column(
            modifier = modifier.fillMaxSize().wrapContentSize().widthIn(max = 480.dp),
            horizontalAlignment = Alignment.CenterHorizontally
        ) {
            val textToShow = if (cameraPermissionState.status.shouldShowRationale) {
                // If the user has denied the permission but the rationale can be shown,
                // then gently explain why the app requires this permission
                "Whoops! Looks like we need your camera to work our magic! " +
                    "Don't worry, we just wanna see your pretty face (and maybe some cats). " +
                    "Grant us permission and let's get this party started!"
            } else {
                // If it's the first time the user lands on this feature, or the user
                // doesn't want to be asked again for this permission, explain that the
                // permission is required
                "Hi there! We need your camera to work our magic! ✨\n" +
                    "Grant us permission and let's get this party started! \uD83C\uDF89"
            }
            Text(textToShow, textAlign = TextAlign.Center)
            Spacer(Modifier.height(16.dp))
            Button(onClick = { cameraPermissionState.launchPermissionRequest() }) {
                Text("Unleash the Camera!")
            }
        }
    }
}

@Composable
private fun CameraPreviewContent(modifier: Modifier = Modifier) {
    // TODO: Implement
}
With this, we get a nice UI that allows the user to grant the camera permission before showing the camera preview:
Create a ViewModel
It is good practice to separate our business logic from our UI. We can do this by creating a view model for our screen. This view model sets up the CameraX Preview use case. Note that use cases in CameraX represent configurations of the various workflows you can implement with the library, such as previewing, capturing, recording, and analyzing. The view model also binds the UI to the camera provider:
class CameraPreviewViewModel : ViewModel() {
    // Used to set up a link between the Camera and your UI.
    private val _surfaceRequest = MutableStateFlow<SurfaceRequest?>(null)
    val surfaceRequest: StateFlow<SurfaceRequest?> = _surfaceRequest

    private val cameraPreviewUseCase = Preview.Builder().build().apply {
        setSurfaceProvider { newSurfaceRequest ->
            _surfaceRequest.update { newSurfaceRequest }
        }
    }

    suspend fun bindToCamera(appContext: Context, lifecycleOwner: LifecycleOwner) {
        val processCameraProvider = ProcessCameraProvider.awaitInstance(appContext)
        processCameraProvider.bindToLifecycle(
            lifecycleOwner, DEFAULT_FRONT_CAMERA, cameraPreviewUseCase
        )

        // Cancellation signals we're done with the camera
        try { awaitCancellation() } finally { processCameraProvider.unbindAll() }
    }
}
There’s quite a bit going on here! The code defines a CameraPreviewViewModel class, responsible for managing the camera preview. It uses the CameraX Preview builder to configure how the preview should be bound to the UI. The bindToCamera function initializes the camera, binds it to the provided LifecycleOwner so that the camera only runs when the lifecycle is at least started, and starts the preview stream.
The camera, which is part of the internals of the camera libraries, needs to render to a surface provided by the UI, so the library needs a way to request one. That’s exactly what the SurfaceRequest is for! Whenever the camera indicates that it needs a surface, a new SurfaceRequest is emitted. You then forward that request to the UI, where it can pass a surface to the request object.
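To make that contract concrete, here’s a minimal sketch of what fulfilling a SurfaceRequest by hand could look like. The provideSurfaceTo helper name is our own invention, and you won’t need to write this yourself, since CameraXViewfinder handles it for you:
import android.view.Surface
import androidx.camera.core.SurfaceRequest
import java.util.concurrent.Executor

// Hypothetical helper illustrating the SurfaceRequest contract.
// CameraXViewfinder does the equivalent of this under the hood.
fun provideSurfaceTo(request: SurfaceRequest, surface: Surface, executor: Executor) {
    // Hand the UI's surface to the camera. The result listener fires once the
    // camera no longer needs the surface, which is the safe point to release it.
    request.provideSurface(surface, executor) { result ->
        result.surface.release()
    }
}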
Finally, we suspend until the calling coroutine is cancelled, since cancellation signals that the UI is done with the camera, and then unbind all use cases to release the camera resources and avoid leaks.
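As an aside, bindToLifecycle accepts multiple use cases, so the same view model could later bind photo capture alongside the preview. Here’s a sketch, assuming a hypothetical imageCaptureUseCase property that isn’t otherwise used in this post:
// Sketch: binding a hypothetical ImageCapture use case alongside the preview.
private val imageCaptureUseCase = ImageCapture.Builder().build()

suspend fun bindToCamera(appContext: Context, lifecycleOwner: LifecycleOwner) {
    val processCameraProvider = ProcessCameraProvider.awaitInstance(appContext)
    processCameraProvider.bindToLifecycle(
        lifecycleOwner, DEFAULT_FRONT_CAMERA,
        cameraPreviewUseCase, imageCaptureUseCase
    )
    try { awaitCancellation() } finally { processCameraProvider.unbindAll() }
}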
Implement the camera preview UI
Now that we have a view model, we can implement our CameraPreviewContent composable. It reads the surface request from the view model, binds to the camera while the composable is in the composition tree, and calls the CameraXViewfinder from the library:
@Composable
fun CameraPreviewContent(
    viewModel: CameraPreviewViewModel,
    modifier: Modifier = Modifier,
    lifecycleOwner: LifecycleOwner = LocalLifecycleOwner.current
) {
    val surfaceRequest by viewModel.surfaceRequest.collectAsStateWithLifecycle()
    val context = LocalContext.current
    LaunchedEffect(lifecycleOwner) {
        viewModel.bindToCamera(context.applicationContext, lifecycleOwner)
    }

    surfaceRequest?.let { request ->
        CameraXViewfinder(
            surfaceRequest = request,
            modifier = modifier
        )
    }
}
As mentioned in the previous section, the SurfaceRequest allows the camera library to request a surface when it needs one to render to. In this piece of code, we collect those SurfaceRequest instances and forward them to the CameraXViewfinder, which is part of the camera-compose artifact.
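One detail to watch: this final CameraPreviewContent takes a CameraPreviewViewModel parameter, unlike the earlier TODO stub. Here’s a sketch of one way to supply it from CameraPreviewScreen, assuming the androidx.lifecycle:lifecycle-viewmodel-compose artifact is added as a dependency:
import androidx.lifecycle.viewmodel.compose.viewModel

// Inside CameraPreviewScreen, the earlier call site becomes:
if (cameraPermissionState.status.isGranted) {
    CameraPreviewContent(viewModel = viewModel(), modifier = modifier)
}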
Result
And with that, we have a working full-screen viewfinder! You can find the full code snippet here. In the next blog post, we’ll use the Compose gesture system, graphics, and coroutines to implement a visual tap-to-focus. Stay tuned!
The code snippets in this blog have the following license:
// Copyright 2024 Google LLC. SPDX-License-Identifier: Apache-2.0
Many thanks to Don Turner, Trevor McGuire, Nick Butcher, Caren Chang, and Lauren Ward for reviewing and providing feedback. Made possible through the hard work of Yasith Vidanaarachchi.