How to Create a Simple Camera App Using Android CameraX Library

Clint Paul · Published in The Startup
10 min read · Dec 15, 2020
Photo by Angela Compagnone on Unsplash

I can only remember a handful of APIs from the Android team that are as effortless and advanced as this one. It all started with the inception of Android Jetpack. You can divide Android development into before Jetpack and after Jetpack: so many modern changes arrived in a short period after the introduction of AndroidX and Jetpack, and all for the better. I’m pleased that the Android team is serving developers so well. Many of these APIs are now stable, a few are still in beta or alpha, and the team is asking for our feedback to make them even better.

CameraX is one such API, and it has changed the way Android developers handle the camera. I want to congratulate everyone who worked on it; it is built with remarkable clarity and ease of use.

Backward compatibility

Like most Jetpack libraries, CameraX is backward compatible down to Android 5.0 (API level 21).

Lifecycle aware

You don’t have to worry about when to open the camera, when to stop or shut it down, or when to create a capture session.

Use-case based approach

CameraX lets you configure the camera through use cases. Currently, there are three use cases you can configure.

  1. Preview: gets the real-time camera feed onto the display.
  2. Image analysis: gives you access to each incoming frame, so you can compute values such as luminosity or send the data to tools like Firebase ML Kit for face detection and other machine-learning tasks.
  3. Image capture: captures a high-quality image and saves it.

Finally, you combine these use cases by attaching listeners to them and binding them to a lifecycle, then decide what to do with the final output.


Let’s create a camera app now

The core CameraX libraries are in beta stage. Beta releases are functionally stable and have a feature-complete API surface. They are ready for production use but may contain bugs.

Add the gradle dependencies

def camerax_version = "1.0.0-beta12"
// CameraX core library using camera2 implementation
implementation "androidx.camera:camera-camera2:$camerax_version"
// CameraX Lifecycle Library
implementation "androidx.camera:camera-lifecycle:$camerax_version"
// CameraX View class
implementation "androidx.camera:camera-view:1.0.0-alpha19"

We are adding three dependencies.

androidx.camera:camera-camera2: the CameraX core, built on the same Camera2 implementation to provide backward compatibility.

androidx.camera:camera-lifecycle: makes CameraX lifecycle aware.

androidx.camera:camera-view: provides view-layer classes such as PreviewView, plus a camera controller that bundles most CameraX features.

Add the necessary permissions

<uses-feature android:name="android.hardware.camera.any" />
<uses-permission android:name="android.permission.CAMERA" />

Adding android.hardware.camera.any declares that the app requires a device with a camera; the .any suffix means it can be either a front or a back camera.

Create the viewfinder layout

<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <Button
        android:id="@+id/camera_capture_button"
        android:layout_width="100dp"
        android:layout_height="100dp"
        android:layout_marginBottom="50dp"
        android:elevation="2dp"
        android:scaleType="fitCenter"
        android:text="@string/take_photo"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintRight_toRightOf="parent" />

    <androidx.camera.view.PreviewView
        android:id="@+id/viewFinder"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

</androidx.constraintlayout.widget.ConstraintLayout>

androidx.camera.view.PreviewView is a custom view that displays the camera feed for CameraX’s Preview use case.

Set up the MainActivity

In your onCreate method, make sure you request the necessary permissions from the user; here, that means the CAMERA permission. I’m using the EasyPermissions library to handle permission requests on Android M and above. Then create and initialize the ImageCapture, File, and ExecutorService objects, and set a click listener on the camera capture button.

private var imageCapture: ImageCapture? = null
private lateinit var outputDirectory: File
private lateinit var cameraExecutor: ExecutorService

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_main)
    requestPermission()

    // Set up the listener for take photo button
    camera_capture_button.setOnClickListener { takePhoto() }

    outputDirectory = getOutputDirectory()

    cameraExecutor = Executors.newSingleThreadExecutor()
}

Create a new function getOutputDirectory() which creates the output directory if it doesn’t exist already.

private fun getOutputDirectory(): File {
    val mediaDir = externalMediaDirs.firstOrNull()?.let {
        File(it, resources.getString(R.string.app_name)).apply { mkdirs() }
    }
    return if (mediaDir != null && mediaDir.exists()) mediaDir else filesDir
}

Also, let’s not forget to shut down the cameraExecutor service in onDestroy. Calling shutdown() stops the executor from accepting new tasks and lets already-submitted work finish; use shutdownNow() if you need to interrupt running tasks.

override fun onDestroy() {
    super.onDestroy()
    cameraExecutor.shutdown()
}
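The shutdown behaviour is easy to verify off-device: a task submitted before shutdown() still runs to completion, and the executor then refuses new work. A small JVM-only sketch:

```kotlin
import java.util.concurrent.Callable
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

fun main() {
    val executor = Executors.newSingleThreadExecutor()
    val result = executor.submit(Callable { 21 * 2 }) // submitted before shutdown
    executor.shutdown()                               // no new tasks accepted...
    executor.awaitTermination(1, TimeUnit.SECONDS)    // ...but this one completes
    println(result.get())        // 42
    println(executor.isShutdown) // true
}
```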

Now, create a new function startCamera() where we will write all the use-case configurations and listeners and bind them to the lifecycle of the activity. Make sure you call this function only if the required permissions are granted. We will fill in both takePhoto() and startCamera() shortly.

Now, run the code; you should see the Take photo button. In the next step, we will add the preview.

Implement preview use case

Remember what I said earlier? CameraX takes a use-case-based approach: we configure the use cases, attach listeners to them, and bind them to the activity lifecycle. Let’s implement the preview use case first. Add the following code to the startCamera() function.

Since the approach is lifecycle aware, you don’t need to think about opening and closing the camera; it opens when the activity starts and closes when the activity is destroyed. ProcessCameraProvider handles this for us. Create a new instance of it:

val cameraProviderFuture = ProcessCameraProvider.getInstance(this)

Add a listener to cameraProviderFuture with two parameters: a Runnable and ContextCompat.getMainExecutor(this), which returns an executor that runs on the main thread.

cameraProviderFuture.addListener(Runnable {}, ContextCompat.getMainExecutor(this))

Inside the Runnable, get the ProcessCameraProvider, which binds the camera to the LifecycleOwner within the application’s process.

val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()

Now, let’s create our preview object and configure it. Build a Preview, then get the surface provider from the viewfinder and set it on the preview.

val preview = Preview.Builder()
    .build()
    .also {
        it.setSurfaceProvider(viewFinder.surfaceProvider)
    }

Next, create a cameraSelector object, which selects either the back or the front camera.

val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA

Finally, create a try block and inside that make sure nothing is bound to your cameraProvider and then bind the cameraSelector and the preview objects to the cameraProvider.

try {
    cameraProvider.unbindAll()
    cameraProvider.bindToLifecycle(this, cameraSelector, preview)
}

Don’t forget to add the catch block, so we don’t lose any exception details.

catch (exc: Exception) {
    Log.e(TAG, "Use case binding failed", exc)
}

Your final code in the startCamera() will look like this.

private fun startCamera() {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)

    cameraProviderFuture.addListener({
        // Used to bind the lifecycle of cameras to the lifecycle owner
        val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()

        // Preview
        val preview = Preview.Builder()
            .build()
            .also {
                it.setSurfaceProvider(viewFinder.surfaceProvider)
            }

        // Select back camera as a default
        val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA

        try {
            // Unbind use cases before rebinding
            cameraProvider.unbindAll()
            // Bind use cases to camera
            cameraProvider.bindToLifecycle(this, cameraSelector, preview)
        } catch (exc: Exception) {
            Log.e(TAG, "Use case binding failed", exc)
        }
    }, ContextCompat.getMainExecutor(this))
}

Now, run the code. You will see the camera preview in action.

Implement ImageCapture use case

ImageCapture is the use case that captures an image and saves it to the device. As before, we will create an ImageCapture object, add the required configuration, and bind it to the cameraProvider. We will fill in the takePhoto() function, which is called when the Take photo button is pressed.

First, get a reference to the imageCapture object and check that it is not null before continuing; if the Take photo button is tapped before imageCapture is initialized, the app would otherwise crash.

val imageCapture = imageCapture ?: return

Then, create a file to hold the captured image. It is good practice to make the file name unique, so we add a timestamp to make it distinctive.

// Create time-stamped output file to hold the image
val photoFile = File(
    outputDirectory,
    SimpleDateFormat(FILENAME_FORMAT, Locale.US)
        .format(System.currentTimeMillis()) + ".jpg"
)

Create an OutputFileOptions object. This class configures the save location and metadata: the save location can be a File, a MediaStore entry, or an OutputStream, and the metadata is stored with the saved image.

// Create output options object which contains file + metadata
val outputOptions = ImageCapture.OutputFileOptions.Builder(photoFile).build()

Call takePicture() on the imageCapture object, passing outputOptions, an executor, and a callback that is invoked when the picture has been saved.

imageCapture.takePicture(
    outputOptions,
    ContextCompat.getMainExecutor(this),
    object : ImageCapture.OnImageSavedCallback {
        override fun onError(exc: ImageCaptureException) {
            Log.e(TAG, "Photo capture failed: ${exc.message}", exc)
        }

        override fun onImageSaved(output: ImageCapture.OutputFileResults) {
            val savedUri = Uri.fromFile(photoFile)
            val msg = "Photo capture succeeded: $savedUri"
            Toast.makeText(baseContext, msg, Toast.LENGTH_SHORT).show()
            Log.d(TAG, msg)
        }
    })

The callback has two methods. onError is invoked if the capture fails for some reason; onImageSaved is invoked once the image has been written, so you know the capture succeeded. The complex task of capturing and saving the image is done in a few lines of code.
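The success/error shape of OnImageSavedCallback is a plain two-method callback, with exactly one of the methods fired per call. A stripped-down, framework-free illustration; SaveCallback and saveImage are hypothetical names, not CameraX API:

```kotlin
// Hypothetical stand-ins for ImageCapture.OnImageSavedCallback and takePicture().
interface SaveCallback {
    fun onSaved(path: String)
    fun onError(e: Exception)
}

fun saveImage(bytes: ByteArray, path: String, callback: SaveCallback) {
    if (bytes.isEmpty()) {
        callback.onError(IllegalArgumentException("empty image")) // failure path
    } else {
        callback.onSaved(path) // success path
    }
}

fun main() {
    saveImage(byteArrayOf(1, 2, 3), "/tmp/photo.jpg", object : SaveCallback {
        override fun onSaved(path: String) = println("saved: $path")
        override fun onError(e: Exception) = println("error: ${e.message}")
    })
    // prints: saved: /tmp/photo.jpg
}
```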

Don’t forget: we have to add two more lines to the startCamera() function to make this work. Initialize the imageCapture use case and then bind it to the cameraProvider object.

imageCapture = ImageCapture.Builder()
    .build()
cameraProvider.bindToLifecycle(
    this, cameraSelector, preview, imageCapture)

Your final takePhoto() method will look like this.

private fun takePhoto() {
    // Get a stable reference of the modifiable image capture use case
    val imageCapture = imageCapture ?: return

    // Create time-stamped output file to hold the image
    val photoFile = File(
        outputDirectory,
        SimpleDateFormat(FILENAME_FORMAT, Locale.US)
            .format(System.currentTimeMillis()) + ".jpg"
    )

    // Create output options object which contains file + metadata
    val outputOptions = ImageCapture.OutputFileOptions.Builder(photoFile).build()

    // Set up image capture listener, which is triggered after photo has been taken
    imageCapture.takePicture(
        outputOptions,
        ContextCompat.getMainExecutor(this),
        object : ImageCapture.OnImageSavedCallback {
            override fun onError(exc: ImageCaptureException) {
                Log.e(TAG, "Photo capture failed: ${exc.message}", exc)
            }

            override fun onImageSaved(output: ImageCapture.OutputFileResults) {
                val savedUri = Uri.fromFile(photoFile)
                val msg = "Photo capture succeeded: $savedUri"
                Toast.makeText(baseContext, msg, Toast.LENGTH_SHORT).show()
                Log.d(TAG, msg)
            }
        })
}

Your startCamera() function will now look like this.

private fun startCamera() {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)

    cameraProviderFuture.addListener({
        // Used to bind the lifecycle of cameras to the lifecycle owner
        val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()

        // Preview
        val preview = Preview.Builder()
            .build()
            .also {
                it.setSurfaceProvider(viewFinder.surfaceProvider)
            }

        imageCapture = ImageCapture.Builder()
            .build()

        // Select back camera as a default
        val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA

        try {
            // Unbind use cases before rebinding
            cameraProvider.unbindAll()

            // Bind use cases to camera
            cameraProvider.bindToLifecycle(
                this, cameraSelector, preview, imageCapture)
        } catch (exc: Exception) {
            Log.e(TAG, "Use case binding failed", exc)
        }
    }, ContextCompat.getMainExecutor(this))
}

Run the app again and press the Take photo button. You will see a toast with the saved location of the captured image; navigate to that location to find the file.

Implement the ImageAnalysis use case

Finally, let’s implement the ImageAnalysis use case. As we discussed earlier, it helps when you want to integrate an image-analysis tool (Firebase ML Kit, Amazon Rekognition, and so on) into your app. It delivers the incoming frames one by one so you can compute details such as luminosity.

Create a new inner class called LuminosityAnalyzer that implements the ImageAnalysis.Analyzer interface; it will log the average luminosity of each frame. It takes a LumaListener, which is simply a type alias for a `(luma: Double) -> Unit` callback.

private class LuminosityAnalyzer(private val listener: LumaListener) : ImageAnalysis.Analyzer {

    private fun ByteBuffer.toByteArray(): ByteArray {
        rewind()    // Rewind the buffer to zero
        val data = ByteArray(remaining())
        get(data)   // Copy the buffer into a byte array
        return data // Return the byte array
    }

    override fun analyze(image: ImageProxy) {
        val buffer = image.planes[0].buffer
        val data = buffer.toByteArray()
        val pixels = data.map { it.toInt() and 0xFF }
        val luma = pixels.average()

        listener(luma)

        image.close()
    }
}
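The only subtle line in the analyzer is `it.toInt() and 0xFF`: Kotlin’s Byte is signed, so the byte 0xFF reads back as -1, and the mask recovers the unsigned 0 to 255 pixel value before averaging. The helper and the math can be verified off-device with a hand-made buffer:

```kotlin
import java.nio.ByteBuffer

// Same extension as in LuminosityAnalyzer: copy a ByteBuffer into a ByteArray.
fun ByteBuffer.toByteArray(): ByteArray {
    rewind()
    val data = ByteArray(remaining())
    get(data)
    return data
}

fun main() {
    // Three "pixels": black (0x00), mid grey (0x80) and white (0xFF)
    val buffer = ByteBuffer.wrap(byteArrayOf(0x00, 0x80.toByte(), 0xFF.toByte()))
    val pixels = buffer.toByteArray().map { it.toInt() and 0xFF }
    println(pixels)                          // [0, 128, 255]
    println("%.2f".format(pixels.average())) // 127.67
}
```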

Now, all we need to do is create an ImageAnalysis use-case object, configure it, and bind it to the cameraProvider.

val imageAnalyzer = ImageAnalysis.Builder()
    .build()
    .also {
        it.setAnalyzer(cameraExecutor, LuminosityAnalyzer { luma ->
            Log.d(TAG, "Average luminosity: $luma")
        })
    }
// Bind use cases to camera
cameraProvider.bindToLifecycle(
    this, cameraSelector, preview, imageCapture, imageAnalyzer
)

The full method will now look like this.

private fun startCamera() {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)

    cameraProviderFuture.addListener({
        // Used to bind the lifecycle of cameras to the lifecycle owner
        val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()

        // Preview
        val preview = Preview.Builder()
            .build()
            .also {
                it.setSurfaceProvider(viewFinder.surfaceProvider)
            }

        imageCapture = ImageCapture.Builder()
            .build()

        val imageAnalyzer = ImageAnalysis.Builder()
            .build()
            .also {
                it.setAnalyzer(cameraExecutor, LuminosityAnalyzer { luma ->
                    Log.d(TAG, "Average luminosity: $luma")
                })
            }

        // Select back camera as a default
        val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA

        try {
            // Unbind use cases before rebinding
            cameraProvider.unbindAll()

            // Bind use cases to camera
            cameraProvider.bindToLifecycle(
                this, cameraSelector, preview, imageCapture, imageAnalyzer
            )
        } catch (exc: Exception) {
            Log.e(TAG, "Use case binding failed", exc)
        }
    }, ContextCompat.getMainExecutor(this))
}

Run the app again and check if you can see the Image analysis logs.

Even though the CameraX API is still in beta, many apps have already started using it in production. A stable release can reasonably be expected next year, and while the final API may vary slightly from the current codebase, the basics and the core will remain the same.

I hope you enjoyed reading this post. Please try it yourself and check if everything is working as expected. You can find the complete codebase here. Also, please check this codelab if you need more information. I wish you all a great Christmas holiday. Have fun coding. Stay safe.

:)

Article was originally posted at clintpauldev.com.
