QR code scanner in React Native

Varun Kukade
11 min read · Mar 17, 2024


In this article, we will see how to create a QR code scanner in React Native.

We will use the following libraries:
1. Camera: react-native-vision-camera. I also tried and tested the react-native-camera-kit library for QR code scanning. However, I found Vision Camera to be the fastest and most reliable of the two. Huge thanks to https://github.com/mrousavy for all of his hard work.
2. Permissions: react-native-permissions
3. Center square hole for QR code scanning: react-native-hole-view (quick install commands below)
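Assuming you use Yarn, installing the last two is quick; the detailed setup for each is covered in Steps 2 and 3:

yarn add react-native-permissions react-native-hole-view
cd ios && pod install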

Step 1: Set up the react-native-vision-camera library
For the Vision Camera, here is the installation guide. The installation process may change in the future, so it's better to follow the official documentation.

As of writing this article, Vision Camera only supports Android SDK version 26 or higher and iOS 12 or higher. If you are going to use this library in production, ensure that you handle the scenario of the app being used on older versions.
You can either implement another library that also supports older versions, or change your settings so that your app only supports Android SDK version 26 or higher and iOS 12 or higher.

To ensure this on Android, android/app/build.gradle should contain minSdkVersion = 26.
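Depending on your React Native template, this value may instead live in the ext block of android/build.gradle. A typical sketch (the compile/target values are examples from a recent template, not requirements of this article):

// android/build.gradle
buildscript {
    ext {
        minSdkVersion = 26 // Vision Camera needs 26 or higher
        compileSdkVersion = 34
        targetSdkVersion = 34
    }
}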

For iOS, open the project in Xcode, select the project from the left tab, go to Targets, and select the main target. In the General tab, you will find “Minimum Deployments”. Ensure that it's set to iOS 12 or higher.

Also, check your Swift version. VisionCamera requires a minimum Swift version of 5.2.

  1. Open project.pbxproj in a text editor. It is present in the ios/Project_name.xcodeproj folder and contains the project configuration and build settings for Xcode.
  2. If the LIBRARY_SEARCH_PATHS value is set, make sure there is no explicit reference to Swift-5.0. If there is, remove it. See this StackOverflow answer.
  3. If the SWIFT_VERSION value is set, make sure it is set to 5.2 or higher (see the snippet below).
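For reference, the relevant entries in a buildSettings block of project.pbxproj look roughly like this (exact quoting varies per project):

/* ios/Project_name.xcodeproj/project.pbxproj */
LIBRARY_SEARCH_PATHS = (
  "\"$(SDKROOT)/usr/lib/swift\"",
  "\"$(TOOLCHAIN_DIR)/usr/lib/swift-5.0/$(PLATFORM_NAME)\"", /* remove this explicit Swift-5.0 reference */
  "\"$(inherited)\"",
);
SWIFT_VERSION = 5.2;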

Make sure to remove project.pbxproj from .gitignore and push the new project.pbxproj changes to the remote repo.

Run the following commands to install the vision camera.

Note: As of writing this article, react-native-vision-camera version 3.9.0 has been released. Previous versions had a black screen issue after opening the camera on Android, which is fixed in 3.9.0. However, when I tried 3.9.0, the build failed for me on Android with React Native versions 0.73.4 and 0.72.2.
You can install the newest version, and if you don't face any black screen issue, feel free to skip Step 7: Fix for black screen issue.
All other steps are the same irrespective of the version.

Hence, for react-native-vision-camera, I switched back to 3.6.17 and fixed the black screen issue with a workaround, which worked as expected.

yarn add react-native-vision-camera@3.6.17
cd ios && pod install

Step 2: To install and set up react-native-permissions, follow the setup instructions here.
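Whichever version you install, the camera permission itself must be declared natively on both platforms (Vision Camera requires both entries; the description string below is my placeholder, so adjust it to your app):

<!-- ios/Project_name/Info.plist -->
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to scan QR codes.</string>

<!-- android/app/src/main/AndroidManifest.xml -->
<uses-permission android:name="android.permission.CAMERA" />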

Step 3: To install and set up react-native-hole-view, follow the documentation here.

Step 4: Get the necessary permissions from the user
First, we need to get camera permission from the user.
For that, I created the following custom hook, which I can reuse for other permissions as well.

Create a new file named usePermissions.ts and add the following code to it:

import {useCallback} from 'react';
import {PERMISSIONS, RESULTS, request} from 'react-native-permissions';
import {isAndroid, isIos} from './helpers'; //adjust the path to wherever your helpers.ts (shown below) lives

export type TUsePermissionsReturnType = {
  isError?: boolean;
  type: (typeof RESULTS)[keyof typeof RESULTS];
  errorMessage?: string;
};

export enum EPermissionTypes {
  CAMERA = 'camera',
}

export const usePermissions = (typeOfPermission: EPermissionTypes) => {
  const getPermission = useCallback(() => {
    //check if typeOfPermission exists in EPermissionTypes
    if (
      !typeOfPermission ||
      !Object.values(EPermissionTypes).includes(typeOfPermission)
    ) {
      throw new Error('Unsupported Type of permission.');
    }
    if (isIos) {
      switch (typeOfPermission) {
        case EPermissionTypes.CAMERA:
          return PERMISSIONS.IOS.CAMERA;
        default:
          return PERMISSIONS.IOS.CAMERA;
      }
    }

    if (isAndroid) {
      switch (typeOfPermission) {
        case EPermissionTypes.CAMERA:
          return PERMISSIONS.ANDROID.CAMERA;
        default:
          return PERMISSIONS.ANDROID.CAMERA;
      }
    }

    throw new Error('Unsupported Operating System.');
  }, [typeOfPermission]);

  const askPermissions =
    useCallback(async (): Promise<TUsePermissionsReturnType> => {
      return new Promise<TUsePermissionsReturnType>(async (resolve, reject) => {
        //ask permissions from the user
        //if an error occurs, reject with the error details
        try {
          await request(getPermission()).then(result => {
            switch (result) {
              case RESULTS.UNAVAILABLE:
                return reject({
                  type: RESULTS.UNAVAILABLE,
                });
              case RESULTS.DENIED:
                return reject({
                  type: RESULTS.DENIED,
                });
              case RESULTS.GRANTED:
                return resolve({
                  type: RESULTS.GRANTED,
                });
              case RESULTS.BLOCKED:
                return reject({
                  type: RESULTS.BLOCKED,
                });
              case RESULTS.LIMITED:
                return resolve({
                  type: RESULTS.LIMITED,
                });
            }
          });
        } catch (e: any) {
          //catch variables can only be typed as any or unknown in TypeScript
          return reject({
            isError: true,
            errorMessage:
              e?.data?.message ||
              e?.message ||
              'Something went wrong while asking for permissions.',
          });
        }
      });
    }, [getPermission]);

  return {
    askPermissions,
  };
};

As you can see, I accept the type of permission to ask for and then request it using the request function.

In askPermissions, I return a resolved or rejected promise depending on the result of the request function.

I use this hook as follows: on pressing the “SCAN QR” button, I call the following takePermissions function.

CameraParent.tsx (your parent file, from which you open the camera by pressing a button)

const {askPermissions} = usePermissions(EPermissionTypes.CAMERA);
const [cameraShown, setCameraShown] = useState(false);

const takePermissions = async () => {
  askPermissions()
    .then(response => {
      //permission given for camera
      if (
        response.type === RESULTS.LIMITED ||
        response.type === RESULTS.GRANTED
      ) {
        setCameraShown(true);
      }
    })
    .catch(error => {
      //permission is denied/blocked or camera feature not supported
      if ('isError' in error && error.isError) {
        Alert.alert(
          error.errorMessage ||
            'Something went wrong while taking camera permission',
        );
      }
      if ('type' in error) {
        if (error.type === RESULTS.UNAVAILABLE) {
          Alert.alert('This feature is not supported on this device');
        } else if (
          error.type === RESULTS.BLOCKED ||
          error.type === RESULTS.DENIED
        ) {
          Alert.alert(
            'Permission Denied',
            'Please give permission from settings to continue using camera.',
            [
              {
                text: 'Cancel',
                onPress: () => console.log('Cancel Pressed'),
                style: 'cancel',
              },
              {text: 'Go To Settings', onPress: () => goToSettings()},
            ],
          );
        }
      }
    });
};

If a user gives permission successfully, I show the camera.

According to the Android documentation, starting from Android 11 (API level 30), if a user denies a specific permission more than once during the app's lifetime of installation on a device, the user doesn't see the system permission dialog when your app requests that permission again. In this case, the permission is said to be blocked.
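If you ever need to read the current permission status without triggering the dialog, react-native-permissions also provides a check function. A minimal sketch (isIos comes from the helpers file shown below):

import {PERMISSIONS, RESULTS, check} from 'react-native-permissions';

const isCameraBlocked = async () => {
  const status = await check(
    isIos ? PERMISSIONS.IOS.CAMERA : PERMISSIONS.ANDROID.CAMERA,
  );
  //BLOCKED means the user can only re-enable the permission from settings
  return status === RESULTS.BLOCKED;
};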

If the user blocks the permission, I show an alert saying “Permission Denied”. If the user wants to continue using the camera, they can tap the “Go To Settings” button in the alert and grant the permission from the settings.

Here is my helper file.

helpers.ts

import {Dimensions, Linking, Platform} from 'react-native';

export const isIos = Platform.OS === 'ios';
export const isAndroid = Platform.OS === 'android';

//used later to position the scanner hole (Step 8)
export const getWindowWidth = () => Dimensions.get('window').width;
export const getWindowHeight = () => Dimensions.get('window').height;

export const goToSettings = () => {
  if (isIos) {
    Linking.openURL('app-settings:');
  } else {
    Linking.openSettings();
  }
};

As you can see, for Android I used the openSettings function from the Linking core React Native API, and for iOS I used the openURL function. Read more about it here. This will open the app settings, where the user can grant the permission or change any app settings.
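As an aside, react-native-permissions ships its own cross-platform openSettings helper, so you could use that instead of the Linking-based helper above:

import {openSettings} from 'react-native-permissions';

openSettings().catch(() => {
  console.warn('Cannot open settings');
});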

Step 5: Code the camera scanner

Create a new file named CameraScanner.tsx and add the following code.

import React, {useRef} from 'react';
import {Alert, Modal, SafeAreaView, StyleSheet} from 'react-native';
import {
  Camera,
  CameraRuntimeError,
  useCameraDevice,
  useCodeScanner,
} from 'react-native-vision-camera';

export interface ICameraScannerProps {
  setIsCameraShown: (value: boolean) => void;
  onReadCode: (value: string) => void;
}

export const CameraScanner = ({
  setIsCameraShown,
  onReadCode,
}: ICameraScannerProps) => {
  const device = useCameraDevice('back');
  const camera = useRef<Camera>(null);

  if (device == null) {
    Alert.alert('Error!', 'Camera could not be started');
    return null; //nothing to render without a camera device
  }

  const onError = (error: CameraRuntimeError) => {
    Alert.alert('Error!', error.message);
  };

  const codeScanner = useCodeScanner({
    codeTypes: ['qr'],
    onCodeScanned: codes => {
      if (codes.length > 0) {
        if (codes[0].value) {
          const value = codes[0].value;
          //hand the scanned value to the parent after a short delay
          setTimeout(() => onReadCode(value), 500);
        }
      }
      return;
    },
  });

  return (
    <SafeAreaView style={styles.safeArea}>
      <Modal presentationStyle="fullScreen" animationType="slide">
        <Camera
          ref={camera}
          onError={onError}
          photo={false}
          style={styles.fullScreenCamera}
          device={device}
          codeScanner={codeScanner}
        />
      </Modal>
    </SafeAreaView>
  );
};

export const styles = StyleSheet.create({
  safeArea: {
    position: 'absolute',
    width: '100%',
    height: '100%',
  },
  fullScreenCamera: {
    position: 'absolute',
    width: '100%',
    height: '100%',
    flex: 1,
    zIndex: 100,
  },
});

Here, we first added a SafeAreaView to display the camera within the safe boundaries of the device, avoiding overlap with device sensors or audio outputs.
Then I wrapped the Camera component in a Modal to open the camera as a sliding full-screen presentation.
I also specified the back camera using useCameraDevice('back').
Also, I added the codeScanner props: codeTypes as 'qr' and the onCodeScanned callback, which gives us the array of detected codes after reading the QR code.
Please note that to use the QR code scanning feature, we must disable the photo feature of the camera by passing photo={false} as a prop.

Step 6: Some more checks for the camera

1. I needed to check whether the app was in the foreground or not, because we should show the camera only when the app is in the foreground. If the user opens the camera for a QR code scan and the app goes to the background (the user is not using the app, but it is still in memory), the camera would drain the user's battery very fast. Hence, to save some memory and battery, we make these changes.

2. We need to show and open the camera only after it is initialized properly. Hence, I took one state, isCameraInitialized, and set it to true in the onInitialized callback prop provided by the library.

3. Also, I made sure that the camera is active only when the current screen is focused in the navigation stack. I checked that using the useIsFocused hook provided by react-navigation. If you use a different library for navigation, you might need to look for similar functionality there.

4. Finally, I made sure that the camera is active only when all the above conditions are met, and passed them to the isActive prop of the library.

useAppStateListener.ts

import {useEffect, useRef} from 'react';
import {AppState, AppStateStatus} from 'react-native';

export const useAppStateListener = (
  onForeground?: () => void,
  onBackground?: () => void,
) => {
  //appStateRef holds the current app state.
  //Possible app states - 'active', 'background', 'inactive', 'unknown', 'extension'
  const appStateRef = useRef(AppState.currentState);
  const onForegroundRef = useRef(onForeground);
  const onBackgroundRef = useRef(onBackground);

  // setting refs to avoid passing the functions as dependencies to useEffect
  onForegroundRef.current = onForeground;
  onBackgroundRef.current = onBackground;

  useEffect(() => {
    const handleAppStateChange = (nextAppState: AppStateStatus) => {
      if (nextAppState === 'active') {
        onForegroundRef.current?.();
      } else if (nextAppState.match(/inactive|background/)) {
        onBackgroundRef.current?.();
      }
      appStateRef.current = nextAppState;
    };
    const subscription = AppState.addEventListener(
      'change',
      handleAppStateChange,
    );

    return () => {
      if (subscription?.remove) {
        subscription.remove();
      }
    };
  }, []);
  return {
    appState: appStateRef.current,
  };
};

First, I created an app state listener hook using the core AppState API to detect whether the app is in the foreground or background.
It also accepts two callbacks, onForeground and onBackground. onBackground is called when the app goes to the background; onForeground is called when the app becomes active again in the foreground.
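The callbacks are optional; here's a hypothetical usage that simply logs the transitions:

const {appState} = useAppStateListener(
  () => console.log('app came to the foreground'),
  () => console.log('app went to the background'),
);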

I used appState and the new states as follows:

CameraScanner.tsx

//new imports at the top of CameraScanner.tsx
import {useEffect, useState} from 'react';
import {useIsFocused} from '@react-navigation/native';
import {isIos} from '../../helpers';
import {useAppStateListener} from './useAppStateListener'; //adjust the path to your hook

//inside the CameraScanner component:
const [isCameraInitialized, setIsCameraInitialized] = useState(isIos);
const [isActive, setIsActive] = useState(isIos);
const [flash, setFlash] = useState<'on' | 'off'>(isIos ? 'off' : 'on');
const isFocused = useIsFocused();
const {appState} = useAppStateListener();

useEffect(() => {
  let timeout: NodeJS.Timeout | undefined;

  if (isCameraInitialized) {
    timeout = setTimeout(() => {
      setIsActive(true);
      setFlash('off');
    }, 0);
  }
  setIsActive(false);
  return () => {
    clearTimeout(timeout);
  };
}, [isCameraInitialized]);

const onInitialized = () => {
  setIsCameraInitialized(true);
};

if (isFocused && device) {
  return (
    <SafeAreaView style={styles.safeArea}>
      <Modal presentationStyle="fullScreen" animationType="slide">
        <Camera
          torch={flash}
          onInitialized={onInitialized}
          ref={camera}
          onError={onError}
          photo={false}
          style={styles.fullScreenCamera}
          device={device}
          codeScanner={codeScanner}
          isActive={
            isActive &&
            isFocused &&
            appState === 'active' &&
            isCameraInitialized
          }
        />
      </Modal>
    </SafeAreaView>
  );
}

Step 7: Fix for black screen issue
I needed to manually turn the flash on at start and then off again to make the camera work properly. I also took one more state, isActive, to store whether the camera should be active or not.
Hence, as you can see in the useEffect, after the camera is initialized, I waited and then turned off the flash and set isActive to true. Here I used the concept of setTimeout with 0 milliseconds.

setTimeout with 0 ms: I will explain this in detail in another article, but in short, a callback passed to setTimeout with 0 ms is not executed immediately.
There are four things at play here: the microtask queue, the macrotask queue, the JS call stack, and the event loop. Synchronous code is put on the JS call stack and executed there by the JS runtime. An asynchronous task like setTimeout is put into a separate queue called the macrotask queue, while async/await API calls and pending promises are put into the microtask queue. While executing code, JS first runs whatever is on the call stack. When the call stack is empty, it gives priority to the microtask queue and checks whether there is something to resolve there; only after that does it check the macrotask queue.
In our case, we used setTimeout with 0 ms, so the callback is put into the macrotask queue. Once the JS call stack is empty, all synchronous camera-configuration work is done, and the camera is ready to start, the event loop reaches the macrotask queue and executes our callback.
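A quick sketch that demonstrates this ordering (run it in any JS/TS runtime):

console.log('1: synchronous, runs first');

setTimeout(() => {
  console.log('4: macrotask, runs after the microtask queue is drained');
}, 0);

Promise.resolve().then(() => {
  console.log('3: microtask, runs before any macrotask');
});

console.log('2: synchronous, still ahead of both queues');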

Now, if you run the app, you will be able to scan QR codes and read the message inside them. You can add more code types to the codeTypes: ['qr'] array. Here are the possible types: "code-128" | "code-39" | "code-93" | "codabar" | "ean-13" | "ean-8" | "itf" | "upc-e" | "qr" | "pdf-417" | "aztec" | "data-matrix"
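For example, to also scan EAN-13 and Code 128 barcodes, the scanner config would look like this:

const codeScanner = useCodeScanner({
  codeTypes: ['qr', 'ean-13', 'code-128'],
  onCodeScanned: codes => {
    //every detected code carries its type and value
    codes.forEach(code => console.log(code.type, code.value));
  },
});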

Step 8: Add a square hole in the center of the camera screen
Finally, let's improve the UI a little by adding a square hole in the center so that the user can scan QR codes through it.

Here’s the code for that.

CameraScanner.tsx

import {RNHoleView} from 'react-native-hole-view';
import {getWindowHeight, getWindowWidth} from '../../helpers';


<SafeAreaView style={styles.safeArea}>
  <Modal presentationStyle="fullScreen" animationType="slide">
    <View style={[styles.cameraControls, {backgroundColor: undefined}]} />
    <Camera
      torch={flash}
      onInitialized={onInitialized}
      ref={camera}
      onError={onError}
      photo={false}
      style={styles.fullScreenCamera}
      device={device}
      codeScanner={codeScanner}
      isActive={
        isActive &&
        isFocused &&
        appState === 'active' &&
        isCameraInitialized
      }
    />
    <RNHoleView
      holes={[
        {
          x: getWindowWidth() * 0.1,
          y: getWindowHeight() * 0.28,
          width: getWindowWidth() * 0.8,
          height: getWindowHeight() * 0.4,
          borderRadius: 10,
        },
      ]}
      style={[styles.rnholeView, styles.fullScreenCamera]}
    />
  </Modal>
</SafeAreaView>


export const styles = StyleSheet.create({
  safeArea: {
    position: 'absolute',
    width: '100%',
    height: '100%',
  },
  camera: {
    width: '100%',
    height: 200,
  },
  fullScreenCamera: {
    position: 'absolute',
    width: '100%',
    height: '100%',
    flex: 1,
    zIndex: 100,
  },
  rnholeView: {
    alignSelf: 'center',
    alignItems: 'center',
    justifyContent: 'center',
    backgroundColor: 'rgba(0,0,0,0.5)',
  },
  cameraControls: {
    height: '10%',
    top: 15,
    position: 'absolute',
    flexDirection: 'row',
    width: '100%',
    alignItems: 'center',
    justifyContent: 'space-between',
    paddingHorizontal: 24,
    zIndex: 1000,
  },
});

Additionally, you can add your own UI icons to handle the flash on/off. You can add them inside the following view; I have already added the styles and views for this in the code above.

<View style={[styles.cameraControls, {backgroundColor: undefined}]} />

You just need to use the following state and pass it to the torch prop.

const [flash, setFlash] = useState<'on' | 'off'>(isIos ? 'off' : 'on');
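A minimal sketch of such a toggle (using a plain Text label where you would place your icon):

<View style={[styles.cameraControls, {backgroundColor: undefined}]}>
  <TouchableOpacity
    onPress={() => setFlash(current => (current === 'on' ? 'off' : 'on'))}>
    <Text style={{color: 'white'}}>
      {flash === 'on' ? 'Turn flash off' : 'Turn flash on'}
    </Text>
  </TouchableOpacity>
</View>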

You can use this CameraScanner.tsx in your CameraParent.tsx as follows:

const [cameraShown, setCameraShown] = useState(false);
const [qrText, setQrText] = useState('');

const handleReadCode = (value: string) => {
  setQrText(value);
  setCameraShown(false);
};

<View style={styles.container}>
  <TouchableOpacity
    onPress={takePermissions}
    activeOpacity={0.5}
    style={styles.itemContainer}>
    <Text style={styles.itemText}>SCAN QR</Text>
  </TouchableOpacity>
  {cameraShown && (
    <CameraScanner
      setIsCameraShown={setCameraShown}
      onReadCode={handleReadCode}
    />
  )}
</View>

Step 9: Handle the Android back button
Also, in the same component, CameraParent.tsx, when the camera is open on Android and the back button is pressed, you can handle the back button click and close the camera by setting setCameraShown to false.

import {BackHandler} from 'react-native';

function handleBackButtonClick() {
  if (cameraShown) {
    setCameraShown(false);
  }
  return false;
}

useEffect(() => {
  //re-subscribe whenever cameraShown changes so the handler never sees a stale value;
  //addEventListener returns a subscription, which replaces the deprecated removeEventListener
  const subscription = BackHandler.addEventListener(
    'hardwareBackPress',
    handleBackButtonClick,
  );
  return () => {
    subscription.remove();
  };
}, [cameraShown]);

That’s it for this article. You can find the complete source code here — https://github.com/varunkukade/VisionCamera

If you found this tutorial helpful, don't forget to give this post 50 claps 👏 and follow 🚀 if you want to see more. Your enthusiasm and support fuel my passion for sharing knowledge in the tech community.

I have already covered a wide range of topics on React Native. You can find more such articles on my profile -> https://medium.com/@varunkukade999

Stay tuned for more in-depth tutorials and insights on React Native development.
