Illustration by Virginia Poltrack

What’s new in CameraX

How to add advanced camera controls to your app.

Xi Zhang
Feb 24, 2020 · 6 min read

Co-authored with Caren Chang, Developer Programs Engineer

This article is based on a presentation from the Android Dev Summit 2019, with updates to reflect the current state of the CameraX API.

The Camera2 API is powerful, but it can be tricky to get the most out of it, especially given the variety of camera capabilities, such as HDR or night mode, offered by different devices. To address this, at last year's Google I/O we announced CameraX, a new Jetpack library designed to take the frustration out of adding camera features to apps.

As part of an effort to help developers more easily integrate camera features into their applications, the CameraX team focused on these key aspects:

  • New capabilities and APIs, to help you effortlessly enable more camera features in your apps. These now include support for tap-to-focus, zoom control, and device rotation information, and they make it easier to handle per-lens capabilities: for example, you can query whether a given camera lens has a flash unit, and a lot more.
  • Widening the range of devices that support extension functions, so apps can make use of camera features such as night mode or HDR on more devices. At the time of writing, we have compatibility with phones from Samsung, LG, OPPO, Xiaomi, and Motorola (from Android 10).
  • Testing, focusing in particular on API consistency and stability, using a lab with 52 different device models ranging from low end to high end and representing over 200 million active devices.

As part of this work, the CameraX team is working closely with the Lens Go team to understand how the library performs in the wild. Lens Go is an app that lets users point their camera at something, such as an airport sign, have the image analyzed, and get feedback in real time, such as a translation of the sign. This cooperation proved to be a good way to test how the CameraX library works, particularly on low-end devices, a key device segment for Lens Go. And with millions of users using Lens Go every month across hundreds of devices, seeing how the CameraX library performs in Lens Go has helped a lot in delivering a more stable library.

One of the biggest benefits that the Lens Go team saw from integrating CameraX was a smaller APK size, because CameraX has been heavily optimized for performance and size. They were also able to ship features faster without having to maintain their own camera code.

Using CameraX

CameraX is built around three use cases:

  • Preview, enabling you to include a viewfinder showing a live camera feed in your app.
  • Image Analysis, enabling you to access camera frame data to implement features such as object detection and augmented reality.
  • Image Capture, enabling you to take a picture and save it to disk.

For each use case, there are three setup steps: configuration, binding, and interaction.

To illustrate how this works, let’s take the example of image capture: taking a picture in your app.

The first step is to create an image capture use case, where you can specify parameters such as the resolution of the picture. You don't have to worry about whether the requested resolution is available on your user's device: if the device doesn't support it, CameraX simply falls back to the nearest supported resolution. This means that configuration always succeeds.

The second step is binding. There are several lifecycles to consider, such as the activity lifecycle and the lifecycles of the camera and the capture session. By binding the use case to a LifecycleOwner, you let CameraX manage all of these lifecycles, so you don't have to manage the state machines yourself. For example, the camera is opened when it's needed and released once it's done.

The final step is interaction: when you call takePicture, your app snaps a picture.

So, with just a few lines of code, you have an image capture pipeline.
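Put together, the three steps might look like the following sketch. This assumes a Fragment with the CameraX 1.0-style `ProcessCameraProvider` API; the file name, resolution, and callback bodies are illustrative, not prescribed by the article.

```kotlin
import android.util.Size
import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageCapture
import androidx.camera.core.ImageCaptureException
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.core.content.ContextCompat
import java.io.File

fun setUpImageCapture(fragment: androidx.fragment.app.Fragment) {
    // Configuration: request a target resolution; CameraX falls back to the
    // nearest supported size if the device can't provide this one.
    val imageCapture = ImageCapture.Builder()
        .setTargetResolution(Size(1280, 720))
        .build()

    val context = fragment.requireContext()
    val providerFuture = ProcessCameraProvider.getInstance(context)
    providerFuture.addListener({
        val cameraProvider = providerFuture.get()

        // Binding: tie the use case to the Fragment's lifecycle so CameraX
        // opens and releases the camera at the right times.
        cameraProvider.unbindAll()
        cameraProvider.bindToLifecycle(
            fragment.viewLifecycleOwner,
            CameraSelector.DEFAULT_BACK_CAMERA,
            imageCapture
        )

        // Interaction: take a picture and save it to disk.
        val output = ImageCapture.OutputFileOptions
            .Builder(File(context.filesDir, "photo.jpg"))
            .build()
        imageCapture.takePicture(
            output,
            ContextCompat.getMainExecutor(context),
            object : ImageCapture.OnImageSavedCallback {
                override fun onImageSaved(results: ImageCapture.OutputFileResults) {
                    // Picture saved; update the UI here.
                }
                override fun onError(exception: ImageCaptureException) {
                    // Handle the failure, e.g. log or retry.
                }
            }
        )
    }, ContextCompat.getMainExecutor(context))
}
```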

Advanced features

Implementing tap to focus

If you implement this with the Camera2 API, you need to work out the transformation between the UI (viewfinder) coordinates and the camera sensor (image) coordinates yourself, and specify the size of the focus area.

This is how you do it with CameraX:

First, you transform the coordinates by creating a DisplayOrientedMeteringPointFactory, which takes in a Display, a CameraSelector, and the viewfinder's width and height. Then use it to convert a metering point in UI coordinates into normalized sensor coordinates.

Next, you create an action. To focus and meter at the same point, use FocusMeteringAction, passing the normalized metering point.

Finally, you give this action to cameraControl and CameraX handles the rest of the work.
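Those steps can be sketched as follows, assuming `previewView` is the viewfinder and `camera` is the object returned by `bindToLifecycle()`. Note that in recent CameraX releases the factory takes a CameraInfo rather than a CameraSelector, which is what this sketch uses.

```kotlin
import android.view.MotionEvent
import androidx.camera.core.Camera
import androidx.camera.core.DisplayOrientedMeteringPointFactory
import androidx.camera.core.FocusMeteringAction
import androidx.camera.view.PreviewView

fun enableTapToFocus(previewView: PreviewView, camera: Camera) {
    previewView.setOnTouchListener { view, event ->
        if (event.action == MotionEvent.ACTION_UP) {
            // Step 1: convert the touch position from UI coordinates
            // into normalized sensor coordinates.
            val factory = DisplayOrientedMeteringPointFactory(
                view.display,
                camera.cameraInfo,
                view.width.toFloat(),
                view.height.toFloat()
            )
            val point = factory.createPoint(event.x, event.y)

            // Step 2: build an action that focuses and meters at that point.
            val action = FocusMeteringAction.Builder(point).build()

            // Step 3: hand the action to cameraControl; CameraX does the rest.
            camera.cameraControl.startFocusAndMetering(action)
            view.performClick()
        }
        true
    }
}
```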

Implementing pinch to zoom

This is how you do it with CameraX:

To implement pinch to zoom, we need two values: a base value and a delta value. The base value is the current zoom ratio, and the delta value is how much the user's pinch gesture changes it.

To get the delta value, create a ScaleGestureDetector. This Android class converts a touch event into a scale factor, and that scale factor is the delta value.

Then, the base value is obtained from cameraInfo, the API for getting the status of camera features such as zoom ratio, flash availability, and sensor rotation degrees.

With these two values, multiply the base by the delta to get the new zoom ratio and set it through the camera control. CameraX figures out the crop region and sends the request to the camera, and that's it: you have implemented pinch to zoom.
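A minimal sketch of this, again assuming `camera` comes from `bindToLifecycle()` and `previewView` receives the touch events:

```kotlin
import android.content.Context
import android.view.ScaleGestureDetector
import androidx.camera.core.Camera
import androidx.camera.view.PreviewView

fun enablePinchToZoom(context: Context, previewView: PreviewView, camera: Camera) {
    val listener = object : ScaleGestureDetector.SimpleOnScaleGestureListener() {
        override fun onScale(detector: ScaleGestureDetector): Boolean {
            // Base value: the current zoom ratio, read from cameraInfo.
            val base = camera.cameraInfo.zoomState.value?.zoomRatio ?: 1f
            // Delta value: the scale factor from the pinch gesture.
            camera.cameraControl.setZoomRatio(base * detector.scaleFactor)
            return true
        }
    }
    val scaleDetector = ScaleGestureDetector(context, listener)
    previewView.setOnTouchListener { view, event ->
        view.performClick()
        scaleDetector.onTouchEvent(event)
    }
}
```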

Implementing a zoom slider


A naive implementation maps the slider position directly to a zoom ratio, but the resulting zoom doesn't feel linear: most of the visible change is squeezed into one end of the slider. This isn't the best user experience, which is why CameraX includes the setLinearZoom API. This API takes a slider value between 0 and 1 and does the necessary transformation to deliver a linear zoom effect.

Implementing a zoom slider can, therefore, be done with one line of code:
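For example, wired to a SeekBar (a sketch; the SeekBar with `max = 100` and the `camera` handle are assumptions, not part of the article):

```kotlin
import android.widget.SeekBar
import androidx.camera.core.Camera

fun bindZoomSlider(seekBar: SeekBar, camera: Camera) {
    seekBar.setOnSeekBarChangeListener(object : SeekBar.OnSeekBarChangeListener {
        override fun onProgressChanged(bar: SeekBar, progress: Int, fromUser: Boolean) {
            // Map the slider position (0..100) to [0, 1]; the one line that
            // matters is setLinearZoom, which handles the transformation.
            camera.cameraControl.setLinearZoom(progress / 100f)
        }
        override fun onStartTrackingTouch(bar: SeekBar) {}
        override fun onStopTrackingTouch(bar: SeekBar) {}
    })
}
```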

Learn more

If you have any questions or feedback, post them on the CameraX Google group.

