How to Save Images to Supabase Storage from Expo Camera — React Native

William Schulte
8 min read · Feb 13, 2024


Hey Folks!

Today we’ll look at how to save images captured with expo-camera to Supabase Storage. The app that we’ll build today is a simplified model of a camera app, with the single function of uploading captured images. For this tutorial, it will be helpful to already have some knowledge of React Native with Expo, as well as some familiarity with Supabase, a Firebase alternative for building Postgres backends with minimal configuration.

Let’s get started!

Project Setup

First, we’ll create a new React Native Expo project:

npx create-expo-app my-camera-app

Next, we’ll install the Expo Camera package:

npm install expo-camera

Then, we’ll install react-native-vector-icons, in order to give the app access to the camera icon in the MaterialCommunityIcons library:

npm install react-native-vector-icons

Finally, we’ll need to set up the Supabase client and Supabase Storage in the project. If you’re new to setting up Supabase Storage with React Native, please see the Supabase Storage quick setup guide I put together. Once you’ve completed that setup, you can proceed to finish setting up the camera.
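If you’ve followed that guide, you’ll have a supabaseClient.js file at the project root that looks roughly like the sketch below. This assumes @supabase/supabase-js v2, and the URL and key are placeholders — substitute your own project’s values:

```javascript
// supabaseClient.js — a minimal sketch, assuming @supabase/supabase-js v2.
// SUPABASE_URL and SUPABASE_ANON_KEY are placeholders for your project's values.
import { createClient } from '@supabase/supabase-js';

const SUPABASE_URL = 'https://your-project-ref.supabase.co';
const SUPABASE_ANON_KEY = 'your-anon-key';

export const supabase = createClient(SUPABASE_URL, SUPABASE_ANON_KEY);
```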

Setting up the Camera Component

Back in the project, open App.js and replace the existing code so the file looks like the following:

import { Camera } from 'expo-camera';
import { useState } from 'react';
import { Button, StyleSheet, Text, TouchableOpacity, View } from 'react-native';
import MaterialCommunityIcons from 'react-native-vector-icons/MaterialCommunityIcons';
import { supabase } from "./supabaseClient.js";

export default function App() {
  const [permission, requestPermission] = Camera.useCameraPermissions();
  const [camera, setCamera] = useState(null);

  if (!permission) {
    // Camera permissions are still loading
    return <View />;
  }

  if (!permission.granted) {
    // Camera permissions are not granted yet
    return (
      <View style={styles.container}>
        <Text style={{ textAlign: 'center' }}>We need your permission to show the camera</Text>
        <Button onPress={requestPermission} title="grant permission" />
      </View>
    );
  }

  const captureImage = async () => {
    if (permission.granted) {
      const photo = await camera.takePictureAsync({ base64: true });
      // handle capturing of images
    }
  }

  return (
    <View style={styles.container}>
      <Camera
        style={styles.camera}
        ref={ref => {
          setCamera(ref);
        }}
      >
        <View style={styles.buttonContainer}>
          <TouchableOpacity style={styles.button} onPress={captureImage}>
            <MaterialCommunityIcons name="camera" size={36} color="black" />
          </TouchableOpacity>
        </View>
      </Camera>
    </View>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
  },
  camera: {
    flex: 1,
  },
  buttonContainer: {
    flex: 1,
    flexDirection: 'row',
    backgroundColor: 'transparent',
    justifyContent: 'center',
    margin: 32,
  },
  button: {
    width: 70,
    height: 70,
    borderRadius: 35,
    backgroundColor: '#fff',
    justifyContent: 'center',
    alignSelf: 'flex-end',
    alignItems: 'center',
  },
  text: {
    fontSize: 24,
    fontWeight: 'bold',
    color: 'white',
  },
});

The above code is a basic template for capturing images with the Expo Camera component. It handles requesting permission to access the camera, and the back camera is used by default when the app runs (for camera toggling, see the Expo docs).
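To sketch what toggling would involve: the front/back choice is just a piece of state passed to the Camera component’s type prop. Here is the core switch logic as a standalone function — CameraType below is a stand-in object for the expo-camera export, which uses the string values 'back' and 'front':

```javascript
// Minimal sketch of front/back toggle logic. CameraType is a stand-in for
// the expo-camera export; in the app you'd keep the current value in state
// (e.g. useState(CameraType.back)) and pass it to <Camera type={...}>.
const CameraType = { back: 'back', front: 'front' };

function toggleCameraType(current) {
  return current === CameraType.back ? CameraType.front : CameraType.back;
}

console.log(toggleCameraType(CameraType.back));  // 'front'
console.log(toggleCameraType(CameraType.front)); // 'back'
```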

After implementing the above code, we’ll run the app to make sure everything is working so far. I recommend running the camera app in the Expo Go app on either iOS or Android. To do this, simply run npx expo start in the terminal and scan the QR code to open the app in Expo Go on your phone.

Now that our camera app is up and running, we’ll proceed to enable capturing and storing!

Enabling Capturing and Storing of Images

Before setting up the camera app to push the image to Supabase Storage, let’s first run console.log() in the captureImage() function to see what happens when capturing an image.

const captureImage = async () => {
  if (permission.granted) {
    const photo = await camera.takePictureAsync({ base64: true });
    console.log("photo", photo);
  }
}

The result of logging “photo” is an object containing several properties, one of which is the image URI.

The URI is the source from which we’ll extract our image data, before storing it in Supabase storage.
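For illustration, the logged object looks roughly like this — the values below are made up, and the exact uri path is device-specific (with base64: true, a long base64 string property is also included):

```javascript
// Hypothetical example of the object returned by takePictureAsync().
// width, height, and uri are documented properties; the values are made up.
const photo = {
  width: 1080,
  height: 1920,
  uri: 'file:///path/to/app-cache/Camera/example.jpg',
};

console.log(photo.uri); // 'file:///path/to/app-cache/Camera/example.jpg'
```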

The next step is to create a new function called uploadImage(), which we’ll add below the captureImage() function.

const captureImage = async () => {
  if (permission.granted) {
    const photo = await camera.takePictureAsync({ base64: true });
    console.log("photo", photo);
  }
}

const uploadImage = async () => {
  // upload image functionality here
}

After calling captureImage() and retrieving the “photo” object, we’ll then pass the URI to uploadImage():

const captureImage = async () => {
  if (permission.granted) {
    const photo = await camera.takePictureAsync({ base64: true });
    uploadImage(photo.uri);
  }
}

const uploadImage = async (uri) => {
  // upload image functionality here
}

Inside uploadImage(), we’ll use fetch() to retrieve the image from the URI. fetch() returns a promise that resolves to a Response object:

const captureImage = async () => {
  if (permission.granted) {
    const photo = await camera.takePictureAsync({ base64: true });
    uploadImage(photo.uri);
  }
}

const uploadImage = async (uri) => {
  const response = await fetch(uri);
}
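To see this fetch-then-read flow in isolation, here’s a runnable Node 18+ sketch. The data: URI is just a stand-in for the device file path that the camera produces:

```javascript
// Node 18+ sketch: fetch() resolves to a Response whose body we can read.
// The data: URI stands in for the camera's file:// path.
const uri = 'data:text/plain,hello';

fetch(uri)
  .then((response) => response.blob())
  .then((blob) => {
    console.log(blob.size); // 5 — the bytes of "hello"
  });
```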

The process for storing images in Supabase Storage requires that image data be represented in binary format. To achieve this, we’ll need to return the image data as a blob.

If you’re not familiar, a blob (Binary Large Object) holds raw binary data representing files like images, audio, video, or even large text files. In the context of our camera app, the blob represents the binary data of the image that was fetched from the URI. This binary data is then processed further for uploading to the Supabase storage service.
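A quick standalone sketch of what a blob carries — raw bytes plus a MIME type (runnable in Node 18+, where Blob is built in; a string stands in for image data):

```javascript
// A Blob is raw bytes plus a MIME type; a string stands in for image data here.
const blob = new Blob(['fake image bytes'], { type: 'image/jpeg' });

console.log(blob.size); // 16 — number of bytes
console.log(blob.type); // 'image/jpeg'
```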

To get the blob from the URI, we’ll call a .blob() method on the response object. This method reads the response body and returns a promise that resolves with a Blob object representing the binary data of the image.

const captureImage = async () => {
  if (permission.granted) {
    const photo = await camera.takePictureAsync({ base64: true });
    uploadImage(photo.uri);
  }
}

const uploadImage = async (uri) => {
  const response = await fetch(uri);
  const blob = await response.blob();
}

This fulfills the first stage of getting the binary data we need to store our captured images. However, at this stage we still cannot upload the image to Storage, because the Supabase upload method (which we’ll implement momentarily) does not accept a blob directly. To resolve this, we’ll convert the blob to an ArrayBuffer, a low-level binary representation that the upload method does accept:

const captureImage = async () => {
  if (permission.granted) {
    const photo = await camera.takePictureAsync({ base64: true });
    uploadImage(photo.uri);
  }
}

const uploadImage = async (uri) => {
  const response = await fetch(uri);
  const blob = await response.blob();
  const arrayBuffer = await new Response(blob).arrayBuffer();
}

For debugging purposes, note that if you run console.log() on arrayBuffer directly, it will appear to return an empty array. That does not necessarily mean it holds no data: an ArrayBuffer is a low-level binary representation of the image data, and the console does not display its contents in a human-readable format. We can still confirm whether or not it actually contains data by logging its byteLength:
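For example (a standalone Node 18+ sketch, with a stand-in blob in place of the fetched image):

```javascript
// Confirm the ArrayBuffer actually holds data by logging its byteLength.
const blob = new Blob(['fake image bytes'], { type: 'image/jpeg' });

new Response(blob).arrayBuffer().then((arrayBuffer) => {
  console.log('byteLength:', arrayBuffer.byteLength); // 16
});
```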

If the byteLength displayed in the log is greater than 0, then the arrayBuffer has data.

After implementing the arrayBuffer, we’ll set up a fileName for each image by combining the target folder in Supabase Storage with a Date.now() timestamp:

const captureImage = async () => {
  if (permission.granted) {
    const photo = await camera.takePictureAsync({ base64: true });
    uploadImage(photo.uri);
  }
}

const uploadImage = async (uri) => {
  const response = await fetch(uri);
  const blob = await response.blob();
  const arrayBuffer = await new Response(blob).arrayBuffer();
  const fileName = `public/${Date.now()}.jpg`;
}

Next, we’ll use the Supabase client to access the methods needed to handle uploading to Storage, as well as to facilitate error handling:

const captureImage = async () => {
  if (permission.granted) {
    const photo = await camera.takePictureAsync({ base64: true });
    uploadImage(photo.uri);
  }
}

const uploadImage = async (uri) => {
  const response = await fetch(uri);
  const blob = await response.blob();
  const arrayBuffer = await new Response(blob).arrayBuffer();
  const fileName = `public/${Date.now()}.jpg`;
  const { error } = await supabase
    .storage
    .from('MySupabaseStorageBucket')
    .upload(fileName, arrayBuffer, { contentType: 'image/jpeg', upsert: false });
  if (error) {
    console.error('Error uploading image: ', error);
  }
}

At this point, the App.js file should look like the following:

import { Camera } from 'expo-camera';
import { useState } from 'react';
import { Button, StyleSheet, Text, TouchableOpacity, View } from 'react-native';
import MaterialCommunityIcons from 'react-native-vector-icons/MaterialCommunityIcons';
import { supabase } from "./supabaseClient.js";

export default function App() {
  const [permission, requestPermission] = Camera.useCameraPermissions();
  const [camera, setCamera] = useState(null);

  if (!permission) {
    // Camera permissions are still loading
    return <View />;
  }

  if (!permission.granted) {
    // Camera permissions are not granted yet
    return (
      <View style={styles.container}>
        <Text style={{ textAlign: 'center' }}>We need your permission to show the camera</Text>
        <Button onPress={requestPermission} title="grant permission" />
      </View>
    );
  }

  const captureImage = async () => {
    if (permission.granted) {
      const photo = await camera.takePictureAsync({ base64: true });
      uploadImage(photo.uri);
    }
  }

  const uploadImage = async (uri) => {
    const response = await fetch(uri);
    const blob = await response.blob();
    const arrayBuffer = await new Response(blob).arrayBuffer();
    const fileName = `public/${Date.now()}.jpg`;
    const { error } = await supabase
      .storage
      .from('MySupabaseStorageBucket')
      .upload(fileName, arrayBuffer, { contentType: 'image/jpeg', upsert: false });
    if (error) {
      console.error('Error uploading image: ', error);
    }
  }

  return (
    <View style={styles.container}>
      <Camera
        style={styles.camera}
        ref={ref => {
          setCamera(ref);
        }}
      >
        <View style={styles.buttonContainer}>
          <TouchableOpacity style={styles.button} onPress={captureImage}>
            <MaterialCommunityIcons name="camera" size={36} color="black" />
          </TouchableOpacity>
        </View>
      </Camera>
    </View>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
  },
  camera: {
    flex: 1,
  },
  buttonContainer: {
    flex: 1,
    flexDirection: 'row',
    backgroundColor: 'transparent',
    justifyContent: 'center',
    margin: 32,
  },
  button: {
    width: 70,
    height: 70,
    borderRadius: 35,
    backgroundColor: '#fff',
    justifyContent: 'center',
    alignSelf: 'flex-end',
    alignItems: 'center',
  },
  text: {
    fontSize: 24,
    fontWeight: 'bold',
    color: 'white',
  },
});

After reloading the camera app, try taking a picture.

Then, check Supabase Storage to confirm that the image was properly stored.

…and there you have it!

I hope you found this tutorial helpful! Please comment below if you have any questions or feedback. Also, be sure to follow me on Medium and X (formerly Twitter) to catch additional content. Take care!

Cover Photo by Redd F on Unsplash
