# Autonomous Wheelchair using NVIDIA Jetson Nano

Kabilankb
4 min read · Jul 31, 2023

Autonomous Wheelchair

Welcome to the Autonomous Wheelchair project using NVIDIA Jetson Nano! This project aims to develop an intelligent and autonomous wheelchair using the powerful NVIDIA Jetson Nano platform. The goal is to enhance the mobility and independence of users with mobility impairments by enabling the wheelchair to navigate and avoid obstacles autonomously.

Table of contents

  1. Introduction
  2. Features
  3. Hardware Requirements
  4. Software Requirements
  5. Hardware Architecture
  6. Hardware Workflow
  7. Conclusion

Introduction

The Autonomous Wheelchair project leverages the NVIDIA Jetson Nano, a high-performance, low-power AI platform, to process sensor data, perform real-time object detection, and make intelligent decisions for autonomous navigation. The project integrates a range of sensors, including depth sensors, LiDAR, and cameras, to perceive the environment and provide accurate obstacle detection and avoidance capabilities.

The wheelchair’s autonomous navigation system is built upon a combination of perception, path planning, and control algorithms. Deep Learning models are employed for object recognition and localization, allowing the wheelchair to detect obstacles and create a dynamic map of the surrounding environment.

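As a rough illustration, the sketch below shows how such a perception pipeline could be wired together as a single ROS Noetic node, with the camera and LiDAR topics feeding separate callbacks. The topic names (/camera/image_raw, /scan) are assumptions and depend on the drivers actually in use.

```python
#!/usr/bin/env python3
# Minimal perception-node sketch (ROS Noetic, rospy).
# Topic names (/camera/image_raw, /scan) are assumptions; match them to the
# camera and RPLIDAR drivers actually in use.
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image, LaserScan


class PerceptionNode:
    def __init__(self):
        self.bridge = CvBridge()
        rospy.Subscriber('/camera/image_raw', Image, self.image_cb, queue_size=1)
        rospy.Subscriber('/scan', LaserScan, self.scan_cb, queue_size=1)

    def image_cb(self, msg):
        # Convert the ROS image to an OpenCV frame for the object detector.
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        # ... run YOLO detection on `frame` here ...

    def scan_cb(self, msg):
        # Valid LiDAR returns feed obstacle perception and mapping.
        ranges = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
        # ... update the local obstacle map from `ranges` here ...


if __name__ == '__main__':
    rospy.init_node('wheelchair_perception')
    PerceptionNode()
    rospy.spin()
```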

Features

  1. Real-time obstacle detection and avoidance using deep learning models.
  2. Integration of multiple sensors for robust environmental perception.
  3. Dynamic mapping and path planning algorithms for smooth navigation.
  4. User-friendly interface for manual and autonomous control modes.
  5. Extensible architecture, allowing for easy integration of additional features.

Hardware Requirements

  1. NVIDIA Jetson Nano Developer Kit
  2. Depth Sensors (e.g., Intel RealSense Depth Camera)
  3. RPLidar sensor
  4. Cameras (e.g., USB webcams)
  5. Motorized Wheelchair
  6. Power Supply for Jetson Nano and other components

Software Requirements

  1. NVIDIA Jetson Nano Developer Kit with Ubuntu 20.04
  2. ROS (Robot Operating System) Noetic
  3. Python 3
  4. OpenCV
  5. YOLO object detection

Hardware Architecture

This section outlines the hardware architecture for the project, which combines the NVIDIA Jetson Nano, an Arduino, a webcam, and an RPLIDAR sensor. The goal of this architecture is to create an intelligent robotic system capable of real-time perception, navigation, and environment mapping.

Hardware Components

1. NVIDIA Jetson Nano: The Jetson Nano serves as the main processing unit and runs sophisticated AI algorithms for perception, object detection, and path planning. Its GPU capabilities enable fast and efficient deep learning computations.

2. Webcam: The webcam is used for visual perception and object detection. It captures a real-time video feed, which is then processed by the Jetson Nano to identify and track objects in the environment (a detection sketch follows this list).

3. RPLIDAR Sensor: The RPLIDAR sensor is a 360-degree laser scanner that provides high-resolution 2D maps of the surrounding environment. It allows the robot to perceive obstacles and plan collision-free paths.

4. Arduino: The Arduino acts as the low-level motor controller. It receives navigation commands from the Jetson Nano over a serial link and drives the wheelchair’s motors accordingly.
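
To make the webcam detection step concrete, here is a minimal sketch of YOLO object detection on webcam frames using OpenCV's DNN module. The model file names (yolov4-tiny.cfg, yolov4-tiny.weights, coco.names) are assumptions; substitute the files for whichever YOLO variant is actually deployed. On a Jetson, an OpenCV build with CUDA support can push inference onto the GPU.

```python
# Sketch: YOLO object detection on webcam frames with OpenCV's DNN module.
# The model files (yolov4-tiny.cfg, yolov4-tiny.weights, coco.names) are
# assumptions; use the files for whichever YOLO variant you deploy.
import cv2

net = cv2.dnn.readNetFromDarknet('yolov4-tiny.cfg', 'yolov4-tiny.weights')
# On a Jetson, an OpenCV build with CUDA support can run inference on the GPU:
# net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
# net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

with open('coco.names') as f:
    classes = f.read().splitlines()

cap = cv2.VideoCapture(0)  # default USB webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    class_ids, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
    for cid, score, box in zip(class_ids, scores, boxes):
        x, y, w, h = box
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        label = f'{classes[int(cid)]} {float(score):.2f}'
        cv2.putText(frame, label, (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow('detections', frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```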

Hardware Workflow

  1. Sensor Data Acquisition: The webcam captures real-time video, and the RPLIDAR scans the environment to collect point cloud data.
  2. Object Detection: The Jetson Nano processes the video feed from the webcam using deep learning-based computer vision models to detect and recognize objects of interest in the environment.
  3. Obstacle Perception: The Jetson Nano analyzes the scan data from the RPLIDAR to identify obstacles and generate a 2D map of the surroundings (steps 3-5 are combined in the sketch after this list).
  4. Path Planning and Navigation: Based on the object detection and obstacle perception results, the Jetson Nano plans collision-free paths and sends control commands to the Arduino to navigate the robot safely.
  5. Motor Control and Actuation: The Arduino receives control commands from the Jetson Nano and actuates the motors to execute the planned navigation and physical actions.
  6. Feedback and Localization: The robot’s sensors, such as encoders and IMU, provide feedback to the Arduino and Jetson Nano for accurate localization and position updates during navigation.
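
To tie the workflow together, the sketch below combines steps 3-5: a simple stop/go decision derived from the RPLIDAR scan, relayed to the Arduino over USB serial with pyserial. The port name, baud rate, and one-byte command protocol are illustrative assumptions; the real firmware would define its own message format.

```python
# Sketch of workflow steps 3-5: obstacle perception from the RPLIDAR scan and
# a stop/go command relayed to the Arduino over USB serial (pyserial).
# The port name, baud rate, and one-byte protocol ('S' stop / 'F' forward)
# are illustrative assumptions.
import math

import rospy
import serial
from sensor_msgs.msg import LaserScan

SAFE_DISTANCE_M = 0.6  # stop if anything is closer than this in front

arduino = serial.Serial('/dev/ttyUSB0', 115200, timeout=0.1)


def scan_cb(msg):
    # Keep returns within roughly +/- 30 degrees of straight ahead,
    # assuming the scan is published with 0 rad pointing forward.
    front = []
    for i, r in enumerate(msg.ranges):
        angle = msg.angle_min + i * msg.angle_increment
        if abs(angle) < math.radians(30) and msg.range_min < r < msg.range_max:
            front.append(r)
    blocked = bool(front) and min(front) < SAFE_DISTANCE_M
    # The Arduino firmware maps these bytes to motor commands.
    arduino.write(b'S' if blocked else b'F')


rospy.init_node('obstacle_stop')
rospy.Subscriber('/scan', LaserScan, scan_cb, queue_size=1)
rospy.spin()
```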

Depth sensor output (Intel RealSense Depth Camera)
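
For reference, the depth stream behind this output can be read in Python with Intel's pyrealsense2 wrapper; the minimal sketch below samples the distance at the image center. The stream resolution and frame rate are assumptions.

```python
# Sketch: sampling depth from an Intel RealSense camera with pyrealsense2.
# The stream resolution and frame rate below are assumptions.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # Distance in meters at the center pixel of the depth image.
    print(f'Center-pixel distance: {depth.get_distance(320, 240):.2f} m')
finally:
    pipeline.stop()
```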

Video: teleop, mapping, and autonomous navigation of the wheelchair

NVIDIA Omniverse Isaac ROS

Google Drive link for the wheelchair video:

https://drive.google.com/drive/folders/1RTVol_MbjYQ5BfKElGUN9-NjobtHKXvq?usp=drive_link

Conclusion

The hardware architecture combining the NVIDIA Jetson Nano, Arduino, webcam, and RPLIDAR enables the creation of a versatile and intelligent robotic system capable of real-time perception, navigation, and environment mapping. The powerful AI capabilities of the Jetson Nano, along with the Arduino’s control and actuation abilities, form a robust foundation for building advanced robotics applications. This architecture can be expanded and customized for various robotic projects, such as autonomous vehicles, surveillance robots, or mapping drones.
