In this blog post, we will cover the basic principles behind the CameraX Jetpack Library in addition to a few ways in which the library has changed since its announcement at Google I/O 2019. For up-to-date resources, check out the documentation, review the official sample, and join our online developer community.
In line with many other Jetpack libraries, one of the core properties of CameraX is lifecycle awareness. Instead of having to manage the opening and closing of camera devices and sessions, CameraX takes care of this on our behalf as long as we provide a lifecycle owner. When the lifecycle starts, so does the camera; and when the lifecycle stops because our app, activity, or fragment is going away, we don’t have to worry about closing either because CameraX does that for us. Nice!
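As a quick sketch of what this looks like in practice (based on the alpha-era CameraX API, so exact names may differ in later releases), binding a use case to an activity's lifecycle is a single call, and there is no camera open/close code in any lifecycle callback:

```kotlin
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.camera.core.CameraX
import androidx.camera.core.Preview
import androidx.camera.core.PreviewConfig

class CameraActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        val preview = Preview(PreviewConfig.Builder().build())

        // CameraX opens the camera when this lifecycle starts and
        // closes it when the lifecycle stops -- no onResume/onPause
        // camera handling is needed on our side.
        CameraX.bindToLifecycle(this /* lifecycleOwner */, preview)
    }
}
```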
Another core principle of the CameraX Jetpack Library is its use-case-driven approach. CameraX tries to find the sweet spot between having a very low-level abstraction, like Camera2 with its capture request templates, and an over-simplistic API, like the deprecated Camera API.
The main use cases provided by CameraX are:
- Preview: Used to display a viewfinder of what the camera is pointing at.
- ImageAnalysis: Used to parse information from the camera feed, such as to detect faces in the frame.
- ImageCapture: Used to take a high-quality photo, which can be done at either high resolution or low latency depending on the application requirements.
Use cases are built using configuration objects. The API is consistent across use cases and developers can implement this in a simple three-step process:
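At the time of writing, the three steps look roughly like this for the preview use case (builder method names are taken from the alpha artifacts and may change as the API evolves):

```kotlin
import androidx.camera.core.CameraX
import androidx.camera.core.Preview
import androidx.camera.core.PreviewConfig

// 1. Build a configuration object for the use case
val previewConfig = PreviewConfig.Builder()
    .setLensFacing(CameraX.LensFacing.BACK)
    .build()

// 2. Create the use case from its configuration
val preview = Preview(previewConfig)

// 3. Bind the use case to a lifecycle owner
CameraX.bindToLifecycle(lifecycleOwner, preview)
```

The same three steps apply to ImageAnalysis and ImageCapture, each with its own config builder.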
Note that each use case is fully independent. This means that, in theory, it is possible to set a different resolution, aspect ratio, or even lifecycle owner for each of our active use cases if we want to. In our official sample we keep things simple and use the same parameters for all use cases.

Even though we could technically have different lifecycle owners for different use cases, it is recommended that we bind our use cases to the lifecycle via CameraX in a single call and using the same lifecycle owner:
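With the alpha API, that single call looks roughly like this (use case construction elided; exact signatures may change):

```kotlin
import androidx.camera.core.CameraX

// Recommended: bind everything in one call so CameraX can negotiate
// a configuration that satisfies all use cases at once.
CameraX.bindToLifecycle(lifecycleOwner, preview, imageCapture, imageAnalysis)

// Discouraged: binding incrementally adds constraints to a session
// that is already running, which can glitch active streams:
// CameraX.bindToLifecycle(lifecycleOwner, preview)
// CameraX.bindToLifecycle(lifecycleOwner, imageCapture)
```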
The reason behind this is that CameraX will attempt to perform graceful degradation and provide the closest possible configuration if what we asked for is not feasible. By providing all the use cases at once, we give CameraX an opportunity to find a compromise such that all use cases can run. Providing use cases incrementally adds constraints while the camera session is already active, possibly requiring the session to be restarted, which can result in a glitch for ongoing camera streams.
There are many reasons why a specific configuration may not be feasible, depending on what the Camera2 API can provide, which differs across devices. One of the main constraints is related to the limitation of camera streams, which we explained in a previous post. Unfortunately, in CameraX, there is currently no good way to know if a combination of configurations will succeed without some amount of trial and error.
Talking about compatibility on Android typically refers to API levels. Fortunately, the Camera2 APIs that CameraX is built on top of have been fairly stable since their introduction in API level 21, and CameraX supports devices running API level 21 and above — which represents about 90% of all Android devices (as of August 2019).
However, even within the same API level, there are quite a few things that can be different across devices under the hood; for example the HAL level and varying support for different pixel formats can have a big impact on performance.
To tackle this issue, the CameraX team has invested significantly in automated testing, including building a dedicated test lab with a range of devices — all with the goal of ensuring that, when we pick one of the tested combinations of use case configurations, it works for all of our users.
Making CameraX work across many different devices is no small task. To help with this matter, Google collaborates with a number of OEM partners to develop extensions to the core use cases described above. The documentation describes it best:
CameraX provides an API for accessing device-specific vendor effects, such as bokeh, HDR, and additional functionality. The API enables you to query whether a particular extension is available on the current device and to enable the extension preferentially. That is, if the extension is available on that device, it will be enabled, and will degrade gracefully if it is not.
Enabling an extension for a use case, like image capture, potentially modifies all other active use cases. In other words, enabling the HDR extension potentially modifies the preview and image analysis use cases, not only the image capture use case. Note that, currently (as of alpha01 version of the extensions module), this is enforced by making sure that all active use cases have an extension enabled. We can set an error listener using an ExtensionsErrorListener, although this design might change in the future.
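As a sketch of the alpha01 extensions API (class names come from the androidx.camera.extensions artifact; this surface is explicitly subject to change), enabling HDR for image capture means querying availability and enabling the extension on the config builder before the use case is created:

```kotlin
import androidx.camera.core.ImageCapture
import androidx.camera.core.ImageCaptureConfig
import androidx.camera.extensions.HdrImageCaptureExtender

val builder = ImageCaptureConfig.Builder()

// Query the vendor implementation; only enable HDR if this device
// actually supports it. If it is not available, the use case simply
// degrades gracefully to its regular behavior.
val hdrExtender = HdrImageCaptureExtender.create(builder)
if (hdrExtender.isExtensionAvailable) {
    hdrExtender.enableExtension()
}

val imageCapture = ImageCapture(builder.build())
```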
Recent API changes
Since the announcement at Google I/O, a few new APIs have been released and some others have changed:
- Vendor extensions are now available. To include the artifacts in our app, we add the following to our Gradle dependencies:
implementation "androidx.camera:camera-extensions:1.0.0-alpha01" (or higher)
- Use cases now accept executors instead of handlers. Simply call setBackgroundExecutor on the use case configuration object.
- Image analysis now allows for non-blocking operation, removing the need for throttling at the app level to avoid resource starvation.
- Camera properties will be available via a new CameraInfo object. This is particularly helpful since the alternative was falling back to Camera2 APIs and guessing which camera device corresponded to the selected lens facing property.
- Camera controls such as zoom and focus, including some very handy tap-to-focus utilities, are available via the CameraControl class.
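The executor and non-blocking analysis changes can be sketched together like this (names from the alpha versions; the API may change before a stable release):

```kotlin
import java.util.concurrent.Executors
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageAnalysisConfig

val analysisExecutor = Executors.newSingleThreadExecutor()

val analysisConfig = ImageAnalysisConfig.Builder()
    // Executors replace handlers for background work
    .setBackgroundExecutor(analysisExecutor)
    // Non-blocking mode: analyze only the latest frame and drop older
    // ones, so a slow analyzer no longer needs app-level throttling
    .setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
    .build()

val imageAnalysis = ImageAnalysis(analysisConfig)
```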
To learn more about CameraX, check out the documentation and the official sample, or join our online developer community. Stay tuned for future blog posts and updates as CameraX makes its way from alpha to a production-ready library.