Digital twins: An on-ramp to autonomous driving
Autonomous driving may seem far away, but our vehicles are already equipped with some of its enabling technologies. Systems like adaptive cruise control and lane-keeping assistance, available on many new vehicle models, relieve human drivers of tedious driving tasks such as cruising on an empty freeway or creeping through stop-and-go rush-hour congestion.
These capabilities can only be categorized as partially automated driving, and there are numerous challenges to achieving fully autonomous driving in the real world. For example, complicated scenarios like merging from an on-ramp into high-speed traffic flow on the highway, or making an unprotected left turn at an intersection (where left turns are permitted but there is no dedicated left-turn traffic signal) require the automated vehicle to make accurate predictions of surrounding vehicles’ behaviors. This is extremely difficult owing to the large differences among vehicle operators’ driving styles, which introduce numerous uncertainties into the calculations.
A digital twin — a virtual representation of a physical object or system — enables us to leverage not only real-time information but also historical data to facilitate decision making for an automated vehicle. Stored on a cloud server, a digital twin can learn from all the sampled data and build accurate models to make more informed predictions than it could by simply relying on real-time, on-board information.
The digital twin’s content is built up from an automated vehicle’s “perception sensors” — such as cameras, radar, and light detection and ranging (LiDAR) — which sense the vehicle’s surrounding environment and collect data. These data are labeled and categorized into groups such as highway versus urban driving, car following versus lane changing, or daytime versus nighttime driving. Advanced machine learning algorithms can then learn the behaviors of surrounding objects (including vehicles and pedestrians) in each scene, and the learned models are used in real time to predict those behaviors.
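As a rough illustration of this labeling step, the Python sketch below groups hypothetical perception samples into per-scene buckets that a model could later be trained on. The field names and category taxonomy are illustrative assumptions, not the lab’s actual pipeline:

```python
from collections import defaultdict

def scene_key(sample):
    """Build a category key such as ('highway', 'car_following', 'day')."""
    return (sample["road"], sample["maneuver"], sample["lighting"])

def categorize(samples):
    """Group raw perception samples into per-scene buckets for training."""
    buckets = defaultdict(list)
    for s in samples:
        buckets[scene_key(s)].append(s)
    return buckets

# Toy samples; a real system would attach sensor frames to each record.
samples = [
    {"road": "highway", "maneuver": "car_following", "lighting": "day"},
    {"road": "urban",   "maneuver": "lane_change",   "lighting": "night"},
    {"road": "highway", "maneuver": "car_following", "lighting": "day"},
]

buckets = categorize(samples)
print(len(buckets[("highway", "car_following", "day")]))  # 2
```

A separate behavior model can then be trained on each bucket, so that predictions for, say, nighttime urban lane changes come from data sampled in that same kind of scene.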
Besides perception sensors on the vehicle, the Digital Twin Lab at Purdue that I lead uses wireless communication technologies to allow automated vehicles to transmit information among one another or with traffic infrastructure, making them “connected and automated vehicles.” This enables vehicles to see farther down the road, even detecting objects beyond their line of sight.
For instance, vehicles on a freeway on-ramp can communicate with other vehicles on the freeway, adjusting their positions and speeds long before they actually can see one another. Vehicles traveling toward an intersection also can communicate with an embedded roadside unit installed at that intersection, so the real-time traffic information from all directions can be shared with these vehicles before they arrive at the junction.
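The intersection scenario can be sketched as a simple message exchange with a roadside unit. The class names and message fields below are illustrative assumptions; real deployments use standardized V2X message sets (for example, SAE J2735) rather than ad hoc JSON:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class VehicleReport:
    vehicle_id: str
    approach: str      # e.g. "northbound"
    speed_mps: float
    distance_m: float  # distance to the stop line

class RoadsideUnit:
    """Toy roadside unit (RSU) that fuses reports from all approaches."""
    def __init__(self):
        self.reports = {}

    def receive(self, msg: str):
        """Ingest a JSON-encoded report from an approaching vehicle."""
        r = VehicleReport(**json.loads(msg))
        self.reports[r.vehicle_id] = r

    def broadcast(self) -> str:
        """Share the fused traffic picture with every connected vehicle."""
        return json.dumps([asdict(r) for r in self.reports.values()])

rsu = RoadsideUnit()
rsu.receive(json.dumps({"vehicle_id": "A", "approach": "northbound",
                        "speed_mps": 12.0, "distance_m": 80.0}))
rsu.receive(json.dumps({"vehicle_id": "B", "approach": "eastbound",
                        "speed_mps": 9.0, "distance_m": 45.0}))
picture = json.loads(rsu.broadcast())
print(len(picture))  # 2
```

Each vehicle thus receives the positions and speeds of vehicles on conflicting approaches before any of them are visible to its own sensors.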
We have proposed a comprehensive mobility digital twin (MDT) framework to enhance these mobility systems in the areas of safety, efficiency, and environmental sustainability. It is an artificial intelligence (AI)-based, data-driven, device-edge-cloud framework that leverages advanced technologies like machine learning, cloud/edge computing, and mixed reality.
The framework has three physical building blocks: human, vehicle, and traffic. Human includes all human beings involved in the transportation system — not only drivers but also passengers, pedestrians, and cyclists. Vehicle is the core of the MDT framework: it “hosts” drivers and passengers and is the fundamental component of traffic, encompassing vehicles with automation and/or communication capabilities. Traffic comprises intelligent traffic infrastructure, such as traffic signals and road signs.
Additionally, the framework has three digital building blocks: human digital twins, vehicle digital twins, and traffic digital twins. The whole framework provides end-to-end coverage: The physical building blocks are responsible for data sampling and command actuation, while the digital building blocks manage all in-between processes, including data storage, modeling, learning, simulation, and prediction.
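This division of labor might be sketched as follows, with the physical entity responsible only for sampling and actuation while its cloud-hosted twin handles storage, modeling, and prediction. The class names and the trivial averaging “model” are illustrative assumptions, not the MDT framework’s actual implementation:

```python
class DigitalTwin:
    """Cloud side: data storage, modeling, and prediction."""
    def __init__(self, entity_id):
        self.entity_id = entity_id
        self.history = []          # data storage

    def ingest(self, sample):
        self.history.append(sample)

    def predict_speed(self):
        """Toy 'model': predict the mean of observed speeds."""
        if not self.history:
            return None
        return sum(self.history) / len(self.history)

class PhysicalVehicle:
    """Physical side: data sampling and command actuation only."""
    def __init__(self, twin):
        self.twin = twin

    def sample_and_upload(self, speed):
        self.twin.ingest(speed)            # device -> cloud

    def actuate(self):
        return self.twin.predict_speed()   # cloud -> device command

twin = DigitalTwin("veh-1")
veh = PhysicalVehicle(twin)
for v in (10.0, 12.0, 14.0):
    veh.sample_and_upload(v)
print(veh.actuate())  # 12.0
```

The point of the split is that the twin can draw on the entity’s full history (and, in the real framework, on data from many other entities) rather than on the single on-board snapshot the physical vehicle holds at any moment.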
Above all, we are working toward human-centric connected and automated vehicles that truly understand people and satisfy their needs. We have constructed digital twins for individual drivers based on their driving behaviors, head and gaze movements, facial emotions, and haptic feedback, aiming to better model their behaviors and predict their future intentions. Machine learning algorithms have helped us achieve strong results in predicting driver behavior.
A major hurdle involves the amount of data we can collect to train our machine learning models. We are tackling this challenge by generating more data on a simulation platform and collaborating with mobility industry players like Toyota and Volkswagen, which are supporting our investigation of digital twins and connected and automated vehicles. We hope to develop the next generation of technologies that can better understand other vehicles’ intentions, enabling connected and automated vehicles to drive safely and smoothly.
At Purdue, we are working closely with the Institute for Control, Optimization and Networks (ICON) to help us overcome obstacles in complex autonomous and connected systems. ICON enables us to take advantage of the expertise of, and opportunities for interdisciplinary collaboration with, some 75 ICON-affiliated Purdue faculty across more than 12 disciplines, schools and departments, plus experts in industry and government agencies.
As research has become increasingly interdisciplinary, drawing on knowledge from many fields, ICON has provided a wonderful platform to allow my students and me to connect with fellow researchers at Purdue. We have benefited substantially from weekly seminars, where we learn about various research achievements from multiple faculty speakers and share our research progress on digital twins.
Ultimately, the utopia for connected and automated vehicles is a fully connected world in which all humans, vehicles, and infrastructure exchange data with one another in real time. All of these physical entities will have their own cloud-based digital twins, which will understand every detail of their physical counterparts and provide them with accurate recommendations.
As more data are generated each day, more personalized services can be provided to each mobility participant through what is learned from those data. In addition, the productivity of our whole society will increase as the mobility industry advances, bringing reductions in traffic accidents, traffic congestion, and pollutant emissions.
Ziran Wang, PhD
Assistant Professor, Lyles School of Civil Engineering
Director, Purdue Digital Twin Lab
Faculty Contributor, Autonomous and Connected Systems (ACS) Initiative
Faculty Contributor, Institute for Control, Optimization and Networks (ICON)
College of Engineering
ICON Seminar in Autonomy: Professor Ziran Wang: ‘Mobility digital twin for connected and automated vehicles’