Autonomous Cars & Pedestrian Safety
Building an AI model in Unity for pedestrians' socio-visual behavior
Problem
- According to the National Highway Traffic Safety Administration, approximately 6,000 pedestrians die every year in the United States
- Distracted walking, i.e., walking without paying proper attention, is one of the major causes of these deaths
- Communication with pedestrians is one of the major challenges for self-driving cars
Objective
The present study aims to collect, analyze, and interpret data on pedestrians' socio-visual behavior while crossing roads
- Develop an understanding of distractions to improve human behavior and reduce losses due to accidents
- Assist self-driving technology in communicating effectively with pedestrians
Experiment Design
- Subjects navigate a city-like virtual environment and cross a busy road junction that includes a number of distractions
- Subjects must communicate with cars and other pedestrians while crossing the road without being hit by a car or bumping into other people (a minimal collision-handling sketch follows this list)
- Subjects' head and gaze movements are recorded and analyzed to understand pedestrians' socio-visual behavior
- Subjects’ brain activity is recorded to associate the mental state with the socio-visual behavior
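The study does not publish how accident events were detected in the simulation; the snippet below is a minimal sketch, assuming the subject's rig carries a trigger collider and that cars and virtual pedestrians are tagged "Car" and "Pedestrian" in the scene. The class name and event callback are hypothetical.

```csharp
using UnityEngine;

// Hedged sketch: attach to the subject's rig (with a trigger collider and a Rigidbody).
// Tags "Car"/"Pedestrian" and the OnAccident callback are assumptions, not the study's code.
public class AccidentDetector : MonoBehaviour
{
    public System.Action<string, float> OnAccident;   // (what was hit, timestamp)

    private void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Car") || other.CompareTag("Pedestrian"))
        {
            float t = Time.time;
            Debug.Log($"Accident with {other.tag} at t={t:F2}s");
            OnAccident?.Invoke(other.tag, t);
        }
    }
}
```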
Simulating the city in Unity
Virtual pedestrians with human-like characteristics and emotions were coded to examine how people behave on the road while crossing it.
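The agent code itself is not part of the poster; as an illustrative sketch, a virtual pedestrian could be driven by Unity's NavMeshAgent, walking between waypoints at a per-agent speed. The class name, waypoint setup, and speed range below are assumptions.

```csharp
using UnityEngine;
using UnityEngine.AI;

// Hedged sketch of a virtual pedestrian NPC; not the study's actual implementation.
// Requires a baked NavMesh and a NavMeshAgent on the same GameObject.
[RequireComponent(typeof(NavMeshAgent))]
public class VirtualPedestrian : MonoBehaviour
{
    public Transform[] waypoints;                 // sidewalks, crossing points, etc. (assumed setup)
    public float minSpeed = 1.0f, maxSpeed = 1.8f;

    private NavMeshAgent agent;

    private void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        agent.speed = Random.Range(minSpeed, maxSpeed);   // vary walking pace per agent
        PickNextWaypoint();
    }

    private void Update()
    {
        // Choose a new destination once the current one is (nearly) reached.
        if (!agent.pathPending && agent.remainingDistance < 0.5f)
            PickNextWaypoint();
    }

    private void PickNextWaypoint()
    {
        if (waypoints != null && waypoints.Length > 0)
            agent.SetDestination(waypoints[Random.Range(0, waypoints.Length)].position);
    }
}
```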
Study Components
Data Collection
- Head position and movement
- Visual behavior
- Triggered events (see the logging sketch after this list)
- Brainwaves (EEG)
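The recording pipeline is not shown in the study; the sketch below logs head pose, an approximate gaze direction, and triggered events to a CSV file each frame. The file name, CSV layout, and the use of the HMD camera's forward vector as a gaze proxy (no eye-tracking API is specified in the poster) are all assumptions. EEG is captured separately by the headset's own software.

```csharp
using System.IO;
using UnityEngine;

// Hedged sketch of a per-frame logger; names and format are assumptions, not the study's code.
public class SubjectLogger : MonoBehaviour
{
    public Transform head;                 // HMD / main camera transform
    private StreamWriter writer;

    private void Start()
    {
        string path = Path.Combine(Application.persistentDataPath, "subject_log.csv");
        writer = new StreamWriter(path);
        writer.WriteLine("time,pos_x,pos_y,pos_z,gaze_x,gaze_y,gaze_z,event");
    }

    private void Update()
    {
        Vector3 p = head.position;
        Vector3 g = head.forward;          // gaze approximated by head orientation
        writer.WriteLine($"{Time.time:F3},{p.x:F3},{p.y:F3},{p.z:F3},{g.x:F3},{g.y:F3},{g.z:F3},");
    }

    // Call from event scripts (e.g., an accident detector) to record triggered events.
    public void LogEvent(string name)
    {
        writer.WriteLine($"{Time.time:F3},,,,,,,{name}");
        writer.Flush();
    }

    private void OnDestroy()
    {
        writer?.Close();
    }
}
```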
Data Analysis: Background of Subjects
- Total: 10 subjects
- Age (Average: 26.6; Median: 24; Standard deviation: 7.13)
- 5 males and 5 females with a strong engineering background
- VR Experience (Average: 4.1; Median: 2; Standard deviation: 3.45)
- No major physical issues with VR
Data Analysis: EEG
For most participants, beta-band activity was lower at the time of an accident, suggesting that the subject was less alert or distracted at that moment
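The EEG analysis pipeline is not described in detail. As a minimal sketch, assuming the low-cost headset exports a beta-band power time series (many consumer EEG devices do), the activity around an accident could be compared against the session baseline as below; the class, method, sample rate, and window size are hypothetical.

```csharp
using System;
using System.Linq;

// Hedged sketch: compares mean beta-band power in a window around an accident
// against the whole-session mean. Assumes the device already exports beta power.
public static class EegAnalysis
{
    public static (double windowMean, double baselineMean) BetaAroundAccident(
        double[] betaPower, double sampleRateHz, double accidentTimeSec, double windowSec = 2.0)
    {
        int center = (int)(accidentTimeSec * sampleRateHz);
        int half   = (int)(windowSec * sampleRateHz / 2.0);
        int start  = Math.Max(0, center - half);
        int end    = Math.Min(betaPower.Length, center + half);

        double windowMean   = betaPower.Skip(start).Take(end - start).Average();
        double baselineMean = betaPower.Average();
        return (windowMean, baselineMean);   // windowMean < baselineMean suggests lower alertness
    }
}
```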
Data Analysis: Visual Behavior
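One hedged way to derive visual behavior from the logged gaze data is to raycast the gaze direction against tagged scene objects and count which categories are looked at; the tags ("Car", "Pedestrian", "TrafficLight") and the 50 m range below are assumptions, not details from the study.

```csharp
using UnityEngine;

// Hedged sketch: classify the current gaze target by raycasting the head's forward vector.
public class GazeTargetClassifier : MonoBehaviour
{
    public Transform head;           // HMD / main camera transform
    public float maxDistance = 50f;

    public string CurrentTarget()
    {
        if (Physics.Raycast(head.position, head.forward, out RaycastHit hit, maxDistance))
        {
            // Expected tags in the scene: "Car", "Pedestrian", "TrafficLight", ...
            return hit.collider.tag;
        }
        return "None";
    }
}
```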
Data Analysis: Post-Experiment Feedback
- Duration of experiment: 3–5 minutes
- Locomotive skills (Average rating: 8.1; Median: 8; Standard deviation: 0.83)
- Traffic rules (Average rating: 7.3; Median: 7; Standard deviation: 0.9)
- VR Experience (Average rating: 8; Median: 8; Standard deviation: 0.77)
- No major physical issues with VR (the limited cable length restricted walking around)
- Subjects reported being distracted by other pedestrians, the cars passing by, the traffic signals, and the surroundings
Summary
- The experiment was conducted to understand pedestrians' socio-visual behavior
- Brainwave activity and head and gaze movements were recorded, and subjects were surveyed before and after the experiment
- The subjects were young male and female adults with a strong engineering education and good locomotive skills
- EEG data indicated that most participants were less alert or distracted at the time of an accident
- Visual behavior analysis indicated that most participants were looking at pedestrians, passing cars, and traffic lights during this period; these observations were corroborated by the post-experiment feedback
Experiment Video
Subject’s view
Future Work
- The experiment was conducted with subjects from a strong engineering background; a large-scale experiment with a more diverse population is required for reliable results
- The subjects' body motion during their interaction with the VR environment was not captured and should be included in future work
- The VR environment did not simulate drivers inside the cars, so communication between drivers and pedestrians was not considered
- For more accurate results, a more sophisticated EEG device could be used instead of the low-cost tool employed in this study