Build a Dog Camera using Flutter and TensorFlow

Bo Hellgren
Jan 13 · 11 min read

The Dog Camera app is similar to the regular Android camera app. But if there is a dog in view, its breed can be displayed.

When a dog is detected, it gets surrounded by a yellow frame. If the white button is tapped then, a snapshot is taken and saved to the gallery, as with the regular Camera app. But before it is saved, the dog’s breed is written in the image.

Regular Samsung Camera app — Dog Camera detects dog — Dog Camera snapshot showing breed

This article describes how to build the Dog Camera app using Flutter and TensorFlow Lite. The reader is assumed to have some Flutter experience. You should know how to create a Flutter project, how to test an app on a mobile phone attached to the computer, and how to use a Flutter plug-in (import, pubspec.yaml). If you are new to Flutter, start with the Get started guide at flutter.dev.

The code is available as gists on GitHub, links below. The completed app is available in the Google Play Store.

Part 1. Display the camera preview

Create a Flutter project. Update pubspec.yaml to include the camera plug-in. Replace main.dart with the following code:
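The full file is in the gist linked below; its core, the camera setup, looks roughly like this (a sketch only — the field names and details are my assumptions, not necessarily what the gist uses):

CameraController _controller;
bool _cameraInitialized = false;

@override
void initState() {
  super.initState();
  // Let the app draw in the area normally reserved for system information.
  SystemChrome.setEnabledSystemUIOverlays([]);
  _initializeApp();
}

void _initializeApp() async {
  List<CameraDescription> cameras = await availableCameras();
  _controller = CameraController(cameras[0], ResolutionPreset.medium);
  await _controller.initialize();
  setState(() { _cameraInitialized = true; });
}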

The initState statements let the app use the area at the top normally reserved for system information.

The resolution is set to medium. Using low causes the preview to look pixelated.

The CameraPreview image usually has a different aspect ratio than the mobile screen. To cover the whole height of the screen without stretching circles into ovals, the CameraPreview widget is wrapped in an AspectRatio widget. But this makes the image wider than the screen, which would be an error if the OverflowBox widget were not used.

child: OverflowBox(
    maxWidth: double.infinity,
    child: AspectRatio(
        aspectRatio: _controller.value.aspectRatio,
        child: CameraPreview(_controller))))

Do a flutter run and check that it works as expected.

Part 2. Add buttons and shutter sound

To mimic the look of the regular camera app, replace the CameraPreview widget with

Stack(fit: StackFit.expand, children: <Widget>[
  CameraPreview(_controller), // the preview, as before
  CustomPaint(painter: ButtonsPainter(null)),
])

The ButtonsPainter first paints the transparent black area at the bottom, and then the three buttons. It can be called with an image as argument, which will be drawn on the left button. This will be used in Part 6. The ButtonsPainter looks like this:
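The gist is not reproduced here, but a minimal sketch of such a painter could look as follows (the button sizes, positions and colors are my assumptions):

class ButtonsPainter extends CustomPainter {
  final ui.Image buttonImage; // drawn on the left button, may be null
  ButtonsPainter(this.buttonImage);

  @override
  void paint(Canvas canvas, Size size) {
    // Transparent black area over the bottom fifth of the screen.
    canvas.drawRect(
        Rect.fromLTWH(0, size.height * 0.8, size.width, size.height * 0.2),
        Paint()..color = Colors.black54);
    double y = size.height * 0.9;
    // White shutter button in the middle, plain circle on the right.
    canvas.drawCircle(
        Offset(size.width * 0.5, y), 28, Paint()..color = Colors.white);
    canvas.drawCircle(
        Offset(size.width * 0.8, y), 24, Paint()..color = Colors.white70);
    if (buttonImage != null) {
      // The latest snapshot, minimized, on the left button.
      canvas.drawImage(
          buttonImage, Offset(size.width * 0.2 - 20, y - 20), Paint());
    } else {
      canvas.drawCircle(
          Offset(size.width * 0.2, y), 24, Paint()..color = Colors.white70);
    }
  }

  @override
  bool shouldRepaint(ButtonsPainter old) => old.buttonImage != buttonImage;
}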

Wrap the Container with a GestureDetector widget to make it possible to act when a button is tapped:

Widget build(BuildContext context) {
  return Scaffold(
      body: GestureDetector(
          onTapDown: (TapDownDetails details) async {
            double mediaHeight = MediaQuery.of(context).size.height;
            if (details.localPosition.dy < mediaHeight * 0.8) return;
            double mediaWidth = MediaQuery.of(context).size.width;
            double xTap = details.localPosition.dx;

            if (xTap < mediaWidth * 0.35) {
              print('Left button tapped');
            } else if (xTap < mediaWidth * 0.65) {
              print('Middle button tapped');
            } else {
              print('Right button tapped');
            }
          },
          child: Container(

The app plays the sound of a camera shutter when the middle button is tapped. For this to work, import the Soundpool plugin and add these lines to _initializeApp:

_pool = Soundpool(streamType: StreamType.notification);
_soundId = await rootBundle
    .load('assets/shutter.mp3') // asset file name assumed; use your own
    .then((ByteData soundData) {
  return _pool.load(soundData);
});

The camera shutter sound was recorded by user Snapper4298 on freesound.org. Download it and put it in the project assets folder.

The complete main.dart for part 2 can be found here. Replace main.dart with this code and do a flutter run. You should hear a shutter sound when you tap the white button.

Part 3. Make snapshot from CameraImage

To process images from the camera, add this statement to _initializeApp, right after _cameraInitialized = true:

await _controller.startImageStream((CameraImage image) => _processCameraImage(image));

Flutter will now stream images from the camera at a high speed and invoke the following function for each image:

void _processCameraImage(CameraImage image) async {
  if (_isDetecting) return;
  _isDetecting = true;
  // Detecting a dog will be done here.
  setState(() {
    _savedImage = image;
    _isDetecting = false;
  });
}

In an app like this, one has to deal with a number of image formats, of which CameraImage is one. The image plug-in has lots of functions for manipulating images, such as resizing, cropping and rotating. Import the image plugin as imglib:

import 'package:image/image.dart' as imglib;

Then qualify images and methods with imglib to make it clear what type of image it is.

The image plugin is very comprehensive, but it does not support the CameraImage format. The following function converts a CameraImage to an imglib.Image:
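The gist is not shown here; a sketch of the conversion, based on a widely circulated snippet for Android's three-plane YUV420 format:

imglib.Image _convertCameraImage(CameraImage image) {
  int width = image.width;
  int height = image.height;
  var img = imglib.Image(width, height);
  final int uvRowStride = image.planes[1].bytesPerRow;
  final int uvPixelStride = image.planes[1].bytesPerPixel;
  for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
      final int uvIndex =
          uvPixelStride * (x / 2).floor() + uvRowStride * (y / 2).floor();
      final int index = y * width + x;
      final yp = image.planes[0].bytes[index];
      final up = image.planes[1].bytes[uvIndex];
      final vp = image.planes[2].bytes[uvIndex];
      // YUV -> RGB, clamped to 0..255.
      int r = (yp + vp * 1436 / 1024 - 179).round().clamp(0, 255);
      int g = (yp - up * 46549 / 131072 + 44 - vp * 93604 / 131072 + 91)
          .round()
          .clamp(0, 255);
      int b = (yp + up * 1814 / 1024 - 227).round().clamp(0, 255);
      img.data[index] = 0xFF000000 | (b << 16) | (g << 8) | r;
    }
  }
  return imglib.copyRotate(img, 90); // landscape to portrait
}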

Most of this code is from a GitHub issue. Credit to Alejandro Pirola.

The CameraImage by default has a landscape orientation. The Dog Camera app is designed to be used in portrait mode only. The final imglib.copyRotate(img, 90) fixes that.

We can now create a _snapShot when the white button is tapped and display it instead of the CameraPreview by modifying the widget tree:

child: _showSnapshot
    ? Stack(fit: StackFit.expand, children: <Widget>[
        _snapShot != null
            ? Image.memory(_snapShot)
            : Text('wait'),
        CustomPaint(painter: ButtonsPainter(null)),
      ])
    : Stack(fit: StackFit.expand, children: <Widget>[
        CustomPaint(painter: ButtonsPainter(null)),
      ])

The _snapShot has yet another image format: a Uint8List of bytes, such as can be written to or read from a .png file. This is what the Image.memory widget accepts. To create the snapshot from the converted CameraImage and display it for four seconds, run the following code when the white button is tapped:

double mediaHeight = MediaQuery.of(context).size.height;
imglib.Image convertedImage = _convertCameraImage(_savedImage);
imglib.Image fullImage = imglib.copyResize(convertedImage,
    height: mediaHeight.round());
_snapShot = imglib.encodePng(fullImage);
setState(() {_showSnapshot = true;});
Future.delayed(const Duration(seconds: 4), () {
  setState(() {_showSnapshot = false;});
});

The complete main.dart for part 3 can be found here. Replace main.dart with this code and do a flutter run. When you tap the white button, you should hear a shutter sound and the display should “freeze” for four seconds.

Part 4. Detect dogs

TensorFlow is an open source platform for machine learning. TensorFlow Lite is the lightweight version for deploying on mobile and embedded devices. Using TensorFlow Lite you can do object detection and image classification in an app. For Flutter apps, the easiest way to do this is to use the tflite plug-in.

In Part 4, we will detect if there is a dog in the CameraImage, and in that case paint a yellow frame around it in the CameraPreview widget.

Replace the _processCameraImage function with this:

void _processCameraImage(CameraImage image) async {
  if (_isDetecting) return;
  _isDetecting = true;
  Future findDogFuture = _findDog(image);
  List results = await Future.wait(
      [findDogFuture, Future.delayed(Duration(milliseconds: 500))]);
  setState(() {
    _savedImage = image;
    _savedRect = results[0];
    _isDetecting = false;
  });
}

The _findDog future examines a CameraImage and returns a rectangle if there is a dog, or null otherwise. The Future.wait is used to limit the _findDog calls to two calls per second. This gives a smoother user experience. The _findDog future looks like this:
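Since the gist is not reproduced here, a sketch of what _findDog might look like (the accepted classes follow the description below; the confidence threshold is an assumption):

Future<Map> _findDog(CameraImage image) async {
  List resultList = await Tflite.detectObjectOnFrame(
    // Concatenate the three image planes into one list of byte lists.
    bytesList: image.planes.map((plane) => plane.bytes).toList(),
    imageHeight: image.height,
    imageWidth: image.width,
    numResultsPerClass: 1,
  );
  Map biggestRect;
  double biggestArea = 0.0;
  for (var obj in resultList) {
    if (obj['confidenceInClass'] > 0.5 &&
        ['dog', 'cat', 'bear', 'teddy bear', 'sheep']
            .contains(obj['detectedClass'])) {
      // Keep the dog with the biggest bounding rectangle.
      double area = obj['rect']['w'] * obj['rect']['h'];
      if (area > biggestArea) {
        biggestArea = area;
        biggestRect = obj['rect'];
      }
    }
  }
  return biggestRect; // null if no dog was found
}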

The Tflite.detectObjectOnFrame method accepts a CameraImage as input (almost: you have to concatenate its three image planes into a long list of bytes). It uses a TensorFlow model called SSD MobileNet. You must download the model file ssd_mobilenet.tflite and its list of objects, ssd_mobilenet.txt, and put them in the assets directory. In addition, include the following statement in the _initializeApp function:

await Tflite.loadModel(
model: "assets/ssd_mobilenet.tflite",
labels: "assets/ssd_mobilenet.txt");

The Tflite.detectObjectOnFrame method returns a list of detected objects, each in a Map, where “detectedClass” is the object type, like “person”, “bicycle”, “car” or “dog” (open the .txt file to see all possible types), and ”rect” is a Map giving the object’s place in the image: “x” and “y” define the upper left corner of the rectangle, and “w” and “h” its width and height. These are expressed as fractions of the image width and height, not as pixels.

After the method completes, the result is examined. In my experience, a dog is sometimes reported as a cat, a bear, a teddy bear or even a sheep, so all of these are accepted. There may be more than one dog in the image; in that case the dog with the biggest rectangle is chosen and its rectangle is returned. If no dog is found, the function returns null.

To paint a yellow frame around the detected dog, another painter is used:
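A sketch of such a painter (the stroke width is an assumption):

class RectPainter extends CustomPainter {
  final Map rect; // 'x', 'y', 'w', 'h' as fractions of the image size
  RectPainter(this.rect);

  @override
  void paint(Canvas canvas, Size size) {
    if (rect == null) return; // no dog detected
    // Scale the fractional rectangle up to screen pixels.
    canvas.drawRect(
        Rect.fromLTWH(rect['x'] * size.width, rect['y'] * size.height,
            rect['w'] * size.width, rect['h'] * size.height),
        Paint()
          ..style = PaintingStyle.stroke
          ..strokeWidth = 3.0
          ..color = Colors.yellow);
  }

  @override
  bool shouldRepaint(RectPainter old) => old.rect != rect;
}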

Now all that remains is to add the RectPainter to the CameraPreview stack:

: Stack(fit: StackFit.expand, children: <Widget>[
    CustomPaint(painter: ButtonsPainter(null)),
    CustomPaint(painter: RectPainter(_savedRect)),
  ])

The complete main.dart for part 4 can be found here. You also need to update app/build.gradle to make sure the tflite model is not compressed. Here is the complete defaultConfig section:

defaultConfig {
    applicationId "se.ndssoft.dog_camera"
    minSdkVersion 23
    targetSdkVersion 28
    versionCode flutterVersionCode.toInteger()
    versionName flutterVersionName
    testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
}

aaptOptions {
    noCompress 'tflite'
}

The reason for minSdkVersion 23 (Android 6.0 Marshmallow) is that some older mobile phones, like the Samsung Galaxy S4, have a different CameraImage format, and the app does not account for this.

Replace main.dart and do a flutter run. Aim the Dog Camera at a dog, holding the mobile upright (portrait mode). You should see a yellow frame around the dog. If there is no dog nearby, you can display a picture of a dog on your computer, or print one on a piece of paper. Using the computer screen often results in interference patterns and reflections, so paper is better, and a real dog is best.

No yellow frame? Try changing ResolutionPreset.medium to ResolutionPreset.low. This is required for the app to work on the OnePlus 7, for some reason that I haven’t had the time to investigate.

Part 5. Get the dog’s breed

TensorFlow Lite can also be used for image classification. For example, an image of a dog can be classified as a poodle, a pug, a beagle, etc. To create a model for this, one needs lots of images of dogs with known breeds. The Stanford Dogs Dataset is very useful here. It contains some 20,000 images in 120 categories, i.e. breeds. Still, it is missing a few popular breeds like Dachshund, Jack Russell Terrier and Cockerpoo. I downloaded around a hundred images of each from the Internet and added them. I then used transfer learning to adapt a pre-trained image classification model called MobileNet to do dog classification. This was a fairly complex process which I might describe in another Medium post if asked. The resulting model, dogs.tflite, and the corresponding text file with all the breeds, dog_labels.txt, are stored in the assets directory.

The model was trained using cropped images, i.e. images where everything outside the bounding rectangle is removed. In our app, when the white button is tapped, we crop the camera image using the corresponding rectangle, and use the cropped image as input to the _classifyDog future. This returns a text string with the most likely breed:

imglib.Image convertedImage = _convertCameraImage(_savedImage);
double x, y, w, h;
x = (_savedRect["x"] * convertedImage.width);
y = (_savedRect["y"] * convertedImage.height);
w = (_savedRect["w"] * convertedImage.width);
h = (_savedRect["h"] * convertedImage.height);
imglib.Image croppedImage = imglib.copyCrop(
convertedImage, x.round(), y.round(), w.round(), h.round());
_topText = await _classifyDog(croppedImage);

The _classifyDog future is somewhat complicated:
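A sketch of how it could be written, following the description below (the model swapping details, the temp-file handling and the confidence cutoff are assumptions):

Future<String> _classifyDog(imglib.Image croppedImage) async {
  // Wait until no other Tflite call is in progress.
  while (_tfliteBusy) {
    await Future.delayed(Duration(milliseconds: 50));
  }
  _tfliteBusy = true;
  // runModelOnImage wants a file path, so write the crop to a temp .png.
  await File(_tempPath).writeAsBytes(imglib.encodePng(croppedImage));
  await Tflite.loadModel(
      model: 'assets/dogs.tflite', labels: 'assets/dog_labels.txt');
  List results = await Tflite.runModelOnImage(path: _tempPath, numResults: 2);
  // Restore the detection model used by the preview stream.
  await Tflite.loadModel(
      model: 'assets/ssd_mobilenet.tflite',
      labels: 'assets/ssd_mobilenet.txt');
  _tfliteBusy = false;
  int pct0 = (results[0]['confidence'] * 100).round();
  String text = '${results[0]["label"]} ($pct0%)';
  // Add the runner-up when the top confidence is low.
  if (results.length > 1 && results[0]['confidence'] < 0.8) {
    int pct1 = (results[1]['confidence'] * 100).round();
    text += '\n${results[1]["label"]} ($pct1%)';
  }
  return text;
}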

As you can see, we use another Tflite method here, Tflite.runModelOnImage, which requires that we first load the new Tflite model. Due to the async nature of this app, we must take precautions not to run the two Tflite methods at the same time. This is taken care of by the _tfliteBusy flag, which now also is used in _findDog.

Tflite.runModelOnImage wants the input image to be a .png file, so we encode the croppedImage and write it to a temporary file. The file path is obtained in _initializeApp using the path_provider plugin:

Directory tempDir = await getTemporaryDirectory();
_tempPath = tempDir.path + '/tempfile.png';

_classifyDog returns a string like Collie (95%), where the percentage is the confidence level, i.e. a measure of how sure the model is that the image really depicts a Collie. When the confidence level is lower, two such strings, separated by a newline character, can be returned. See for example the rightmost image above.

We can now “print” the breed(s) on the image to be displayed (and, in the next part, saved to the Gallery):

_topText = await _classifyDog(croppedImage);
imglib.Image fullImage = imglib.copyResize(convertedImage,
    height: mediaHeight.round());
int marginToScreen = ((fullImage.width - mediaWidth) / 2).round();
List breeds = _topText.split('\n');
imglib.drawString(
    fullImage, imglib.arial_24, marginToScreen, 20, breeds[0]);
if (breeds.length > 1)
  imglib.drawString(
      fullImage, imglib.arial_24, marginToScreen, 44, breeds[1]);
_snapShot = imglib.encodePng(fullImage);
_snapShot = imglib.encodePng(fullImage);

The complete main.dart for part 5 can be found here.

Part 6. Gallery and Wikipedia

To the left of the white button in the standard Samsung Camera app is a button which takes you to the latest snapshot in the Gallery (see the left image above). The button is a minimized copy of the snapshot. To achieve the same in Dog Camera, prepare a button image, have the ButtonsPainter paint it on the button, and use the image_gallery_saver plugin when the white button is pressed.

Run the following code when the white button is pressed, just before the setState. The imglib.Image is converted to a ui.Image which the ButtonsPainter can use:

imglib.Image button = imglib.copyResizeCropSquare(croppedImage, 40);
Uint8List buttonPng = imglib.encodePng(button);
ui.Codec codec = await ui.instantiateImageCodec(buttonPng);
ui.FrameInfo fi = await codec.getNextFrame();
_buttonImage = fi.image;
await ImageGallerySaver.saveImage(_snapShot);

Then change CustomPaint(painter: ButtonsPainter(null)) to CustomPaint(painter: ButtonsPainter(_buttonImage)).

Use the intent plug-in to invoke the Gallery app when the left button is tapped:

var intent = android_intent.Intent();
intent.startActivity().catchError((e) => print(e));

This takes you to the Gallery, where you may have to tap Recent to display the dog image. The standard Camera app shows the full image directly, but I have not been able to figure out how to do that. Any hints are welcome.

Writing to the Gallery requires WRITE_EXTERNAL_STORAGE permission. To handle this, the permission_handler plug-in is used. Add the following statement to _initializeApp:

await PermissionHandler()
    .requestPermissions(<PermissionGroup>[PermissionGroup.storage]);

Using an Android intent like this obviously will not work on an iPhone. So this part of the app is Android only.

The right button in Dog Camera is used to display the Wikipedia page about the current breed. This is done in breedinfo.dart using the webview_flutter plugin:
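A sketch of what breedinfo.dart might contain (the Wikipedia URL construction and the page title are assumptions):

import 'package:flutter/material.dart';
import 'package:webview_flutter/webview_flutter.dart';

class BreedInfo extends StatelessWidget {
  final String breed;
  BreedInfo({this.breed});

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text(breed)),
      body: WebView(
        // e.g. 'Golden Retriever' -> en.wikipedia.org/wiki/Golden_Retriever
        initialUrl: 'https://en.wikipedia.org/wiki/' +
            breed.trim().replaceAll(' ', '_'),
        javascriptMode: JavascriptMode.unrestricted,
      ),
    );
  }
}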

Run the following when the right button is pressed:

_showingWiki = true;
String breed = _topText.split('(')[0];
await Navigator.push(context,
MaterialPageRoute(builder: (context) =>
BreedInfo(breed: breed)));
_showingWiki = false;

The _showingWiki flag is used to avoid calling _findDog while the user is looking at the Wikipedia page. This makes scrolling the page a little smoother.

void _processCameraImage(CameraImage image) async {
if (_showingWiki) return;
if (_isDetecting) return;

The final main.dart can be found here.

Install the app from the Google Play Store. Search for NdsSoft or use this link. Try it, give it a rating, and a review if you want.

Google Play Store screenshot

Closing comments

Thanks for reading this rather long post. I hope you enjoyed it, and that you learned something.

I think you will agree that Flutter is a great tool: it is easy to use, and there are plug-ins for basically anything you could want. We used eight in this app.

Of course, to make this a “production strength” app one would have to spend more time on it, for example to

  • Make it work in landscape mode.
  • Make it work on iPhone.
  • Test it on a large number of phone models and make it work on all, including Samsung Galaxy S4 and OnePlus 7.
  • Improve error handling.
  • Add a Settings feature which lets you change things which are now hard coded, like resolution, snapshot show time, shutter sound.
  • Maybe re-architect it to use the provider architecture.

And I am sure there are things in the current code which could be done smarter and better. If you have ideas, please write a response to this article, and/or comment the relevant GitHub gist. Thank you.

Using the same code base, you could easily make a Cat Camera. Or an Apple Camera, or a Tree Camera, or a Bird Camera, or a Flower Camera … the possibilities are endless, but you will need a lot of images with labels for the transfer learning. One could even merge all these apps and make a Smart Camera. The Smart Camera app surrounds an object with a yellow frame if it is something it knows about, and classifies it when you take a snapshot.


Written by Bo Hellgren

At 75, probably the oldest Flutter developer in Sweden. Develops apps as a hobby and to prove that you are never too old to learn new things.

The Startup
