Developing driver assistance systems using Android powered devices

Smartphones… what about smart vehicles?

Razvan
Aug 19, 2014

Mobile devices are getting more powerful and smarter, yet vehicles seem to be left behind. We found a way to bridge this gap by using mobile technologies in everyday traffic scenarios.

Smartphones + vehicles

The continuous evolution of smartphone software and hardware in recent years has made it possible to build a driving assistant system that scores well on mobility, portability, and cost. The Android platform has proved to be an affordable and widely available alternative to an integrated, embedded solution. This post was originally published on MobileWay.

Screenshot from DriveAssist app.

At MobileWay we have developed DriveAssist, an Android-based (for now) driving assistant that uses the mobile phone’s camera to alert the driver in case of an imminent crash. We have developed an efficient algorithm that can detect the obstacles found in everyday road traffic: bicyclists, pedestrians, other vehicles (cars, trucks, buses), animals, etc.

Computer vision

Let’s begin by explaining what computer vision and image processing are. According to a definition from Wikipedia: “image processing is any form of signal processing for which the input is an image; the output of image processing may be an image or a set of characteristics related to the image”.

Computer vision is considered a subfield of artificial intelligence within computer science and is sometimes referred to as “the emulation of human vision by a machine”.
Some popular libraries and tools include OpenCV and MATLAB.

ADAS? What is it?

The acronym ADAS stands for Advanced Driver Assistance Systems: systems that help the driver during the driving process. Some examples include:

- Adaptive cruise control (ACC)
- Lane change assistance
- Collision avoidance system (pre-crash system)
- Traffic sign recognition
- Vehicular communication systems

Example of sensors used in Advanced Driver Assistance Systems, image source: Bosch

You can read more information here.

ADAS systems based on computer vision can be split into two main categories:

- single camera = monocular systems
- two cameras (or more) = stereo-based, binocular systems

More complex, high-end systems combine a variety of sensors with the cameras to produce accurate and robust results. Video cameras capture the images, while other sensors measure proximity/distance to objects: for example, radar, lidar, or sonar.

High-end, luxury automobiles already provide these safety features; for example, the 2014 Mercedes S-Class uses three radars together with a stereo camera setup: www.youtube.com/watch?v=oU4XQvxO10k

DriveAssist app logo

DriveAssist

There are very few mobile and tablet applications that perform the entire processing locally (on the device): from image acquisition, through image processing, to displaying the results. Existing approaches mostly offer lane detection and very basic, limited obstacle detection that works mainly in highway scenarios.

The proposed ADAS can be integrated easily into any vehicle without additional costs. The only requirements are a smartphone and a windshield mount for the device.
DriveAssist is one of the first mobile apps to detect almost any kind of obstacle: pedestrians, bicyclists, and motorcyclists, besides the usual vehicles in traffic. Its main purpose is to aid the driver in traffic scenarios where he or she is tired or, for a brief moment, not paying attention to the road. It acts much like the pre-crash systems found on high-end, expensive cars, but with the advantage that it can be retrofitted to any existing car.

The app is available as a public beta and will be released to the public later this year.

Technical details about the implementation

The entire algorithm is written in C++ and uses JNI to communicate between the C++ and Java parts of the Android app. Switching to native code drastically improved performance: from a maximum of 5–8 fps on our test device up to 15–25 fps or even higher, depending on the scene and the number of detected objects. High-end devices will deliver even better frame rates. In some of our tests, a relatively old device, the Samsung Galaxy S3, averaged above 25 fps.

Below you can find a chart with test results comparing a dual-core (HTC One S) and a quad-core (Samsung Galaxy S3) device on the same input video.

Performance test results comparing quad core vs. dual core devices

Android NDK

Passing data between the native C++ code and Java proved a bit tricky initially, but fortunately there are quick examples and solutions available. For example, passing a simple string from C++ to Java can be done as described below.
Code from the .cpp file:

#include <jni.h>

// Returns a string created on the native side to the Java caller.
extern "C" JNIEXPORT jstring JNICALL
Java_net_mobileway_driveassist_MainActivity_stringFromJNI(JNIEnv* env, jobject thiz)
{
    const char* strUTF = "Hello from C++";  // native string to hand over
    return env->NewStringUTF(strUTF);       // convert it to a Java String
}

Note: net.mobileway.driveassist is the package name of the app.
The usage in Java is:

// Declaration of the native method (implemented in the C++ library):
public native String stringFromJNI();

// Usage:
String stringNative = stringFromJNI();

A better example, showing how to pass more complex data to native code, is provided here.
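For illustration, a common pattern with OpenCV on Android (just a sketch; the class and method names here are hypothetical, not DriveAssist’s actual code) is to pass the native memory address of a Mat across JNI instead of copying pixel data:

import org.opencv.core.Mat;

public class NativeBridge {
    static {
        // Hypothetical library name; must match LOCAL_MODULE in Android.mk.
        System.loadLibrary("drive_assist");
    }

    // Hypothetical native method: the C++ side casts each jlong back to a
    // cv::Mat* with reinterpret_cast, so no pixel data crosses the JNI boundary.
    public static native void detectObstacles(long inputMatAddr, long outputMatAddr);

    public static Mat process(Mat inputFrame) {
        Mat annotated = new Mat();
        // Pass the frames by address; both sides operate on the same buffers.
        detectObstacles(inputFrame.getNativeObjAddr(), annotated.getNativeObjAddr());
        return annotated;
    }
}

The appeal of this approach is that even full-resolution camera frames cost nothing to hand over: only two 64-bit addresses cross the JNI boundary per call.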

Logging and displaying data via logcat can be achieved fairly simply from native code. You have to include log.h:

#include <android/log.h>

__android_log_print(ANDROID_LOG_DEBUG, "LOG_TAG", "This is the printf statement from jni");
__android_log_print(ANDROID_LOG_DEBUG, "LOG_TAG", "for(%d) = %d", i, array[i]);

You can use any of the other log levels instead of debug: ANDROID_LOG_ERROR, ANDROID_LOG_WARN, ANDROID_LOG_INFO, ANDROID_LOG_VERBOSE.
A good tutorial for starting out with JNI can be found here.

OpenCV

For some basic image processing techniques and filters we used the open source OpenCV library; there was no point in re-inventing the wheel and writing algorithms that are already optimized and freely available. Thankfully, the OpenCV library supports the Android NDK, which also helped with development. A different approach would have been to use the FastCV library, provided for free by Qualcomm, the company behind the Snapdragon processors found in most mobile devices today. There is also the option to develop for Nvidia Tegra devices using the Tegra Android Development Kit, but then you cover only Nvidia-based mobile devices.
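As a small example of what those building blocks look like through the Java bindings (illustrative only, not our actual pipeline), a typical grayscale/blur/edge pass takes just a few calls:

import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class Preprocess {
    // A minimal sketch of common preprocessing steps; DriveAssist's real
    // detection pipeline is not published, so treat this as illustrative.
    public static Mat edges(Mat rgbaFrame) {
        Mat gray = new Mat();
        Mat blurred = new Mat();
        Mat edges = new Mat();
        Imgproc.cvtColor(rgbaFrame, gray, Imgproc.COLOR_RGBA2GRAY); // grayscale
        Imgproc.GaussianBlur(gray, blurred, new Size(5, 5), 0);     // denoise
        Imgproc.Canny(blurred, edges, 80, 160);                     // edge map
        return edges;
    }
}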

One great resource to help you start developing with OpenCV on Android is this Stanford class, which has good information on setting up the Android environment as well as the OpenCV SDK, and on running code samples and example projects. The OpenCV documentation is also packed with good material, and there is a good tutorial on running native code with OpenCV.

OpenCV static initialization

Usually, when an Android app uses OpenCV, the end user is prompted to install the OpenCV Manager app from the Google Play Store. It is, however, possible to bundle all the library functions into your app so that the separate OpenCV Manager app is not needed.

In our case, using this approach together with JNI, the initialization of the library is done in the launcher activity:

static {
    if (!OpenCVLoader.initDebug()) {
        Log.e(TAG, "Static linking failed");
    } else {
        System.loadLibrary("opencv_java");
        System.loadLibrary("drive_assist");
        Log.i(TAG, "Static linking success");
    }
}

In the Android.mk file we added:

OPENCV_CAMERA_MODULES := on
OPENCV_INSTALL_MODULES := on

The LOCAL_MODULE name from this file must be the same name used in the loadLibrary call in the launcher activity! For a more detailed tutorial, check out the official OpenCV guide for static initialization of the library: here
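For reference, a minimal Android.mk along these lines might look as follows (the OpenCV SDK path and source file name are assumptions, not our actual build file):

LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)

# Bundle the OpenCV native modules into the .apk (static initialization).
OPENCV_CAMERA_MODULES := on
OPENCV_INSTALL_MODULES := on
# Path to the OpenCV Android SDK is an assumption; adjust it for your setup.
include ../../sdk/native/jni/OpenCV.mk

LOCAL_SRC_FILES := drive_assist.cpp
LOCAL_LDLIBS += -llog
# Must match the System.loadLibrary("drive_assist") call shown above.
LOCAL_MODULE := drive_assist

include $(BUILD_SHARED_LIBRARY)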

Other issues

There were odd issues with the preview screen on some devices that needed extra error handling. For some reason, on certain devices the preview images were stretched. This seems to be a general problem when developing Android apps that use the camera; usually the issue comes from not using the optimal preview size for the device screen. A good solution can be found in this Stack Overflow answer: camera preview stretched.
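A minimal sketch of that fix, using the classic android.hardware.Camera API of the time (this is the generic solution, not necessarily our exact code): pick the supported preview size whose aspect ratio best matches the target surface.

import java.util.List;
import android.hardware.Camera;

public class PreviewSizeHelper {
    // Choose the supported preview size whose aspect ratio is closest to
    // the surface's ratio, which avoids a stretched preview image.
    public static Camera.Size bestPreviewSize(Camera.Parameters params,
                                              int surfaceWidth, int surfaceHeight) {
        double targetRatio = (double) surfaceWidth / surfaceHeight;
        List<Camera.Size> sizes = params.getSupportedPreviewSizes();
        Camera.Size best = null;
        double minDiff = Double.MAX_VALUE;
        for (Camera.Size size : sizes) {
            double diff = Math.abs((double) size.width / size.height - targetRatio);
            if (diff < minDiff) {
                minDiff = diff;
                best = size;
            }
        }
        return best;
    }
}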

One cool feature of the DriveAssist app is an ‘offline’ mode, used mostly in the early phases of developing the algorithm. Instead of going out in a car with a laptop and testing in real-world traffic, we could load a set of videos (recorded while driving) into the app and use them as input for the algorithm. This saved us a lot of fuel ☺. Another good approach is to play a video on a computer display and mount a mobile phone in front of it, but we decided to load the videos directly. Due to limitations of OpenCV on Android, the videos had to be split into images, which we then loaded consecutively in a simple for loop to feed the algorithm.
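A minimal sketch of such a loop, assuming the videos were already split into numbered JPEG frames beforehand (the path pattern and the processFrame helper are hypothetical):

import org.opencv.core.Mat;
import org.opencv.highgui.Highgui;

public class OfflineRunner {
    // Feed pre-recorded frames to the detection pipeline instead of the
    // live camera preview.
    public static void run(String dir, int frameCount) {
        for (int i = 0; i < frameCount; i++) {
            String path = String.format("%s/frame_%04d.jpg", dir, i);
            Mat frame = Highgui.imread(path);   // OpenCV 2.4 Java bindings
            if (frame.empty()) continue;        // skip missing/corrupt frames
            // processFrame(frame);             // hand the frame to the detector
        }
    }
}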

Distance computation

The DriveAssist app also computes an approximate distance to the detected obstacles. For now, the distance is expressed in meters; an option to change the unit will be added in a settings menu. Because a single camera is used, the computed distance can sometimes be off by a certain margin of error. This will be further improved before the first public release.

Example of distance computation to the detected obstacle.
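For the curious, one common monocular approximation (a sketch based on the pinhole camera model, not necessarily the exact formula we use) estimates distance from the known real-world size of an obstacle and its apparent size in pixels:

public class DistanceEstimator {
    // Pinhole camera model: distance = (focalLengthPx * realHeightM) / pixelHeight.
    // The obstacle's real-world height must be assumed (e.g. ~1.5 m for a car),
    // which is exactly why single-camera estimates carry a margin of error.
    public static double estimateDistanceMeters(double focalLengthPx,
                                                double realObjectHeightM,
                                                double objectHeightPx) {
        return (focalLengthPx * realObjectHeightM) / objectHeightPx;
    }
}

For example, with a focal length of 1200 px and a car roughly 1.5 m tall appearing 90 px high in the frame, the estimate is 1200 × 1.5 / 90 = 20 m.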

Because of limitations of the new Gradle build system and Android Studio, the project is stuck in good old Eclipse. At the moment we don’t use any external libraries other than OpenCV.

There will be a lot of new features built into the app; the detection part is only the beginning. On our to-do list is an extension for Android Wear and maybe Google Glass. The possibilities are endless!

Live demo:

https://www.youtube.com/watch?v=aYXgI5Br0cc

Download beta:

http://bit.ly/1nEZt0V

Contact

Make driving safer by having a context-aware, intelligent car! Get the augmented driving app DriveAssist or contact us by email: contact /at/ driveassistapp.com

Feel free to contact us by email or by using the social media accounts listed on the app website!
