Unity as a Testbed for Autonomy Development
Dylan Chua developed an autonomy testbed as part of his internship at C3 Development, DSTA, under the mentorship of Loke Yu Juan and Jerry Tamilchelvamani.
Introduction
Testing autonomy developments in the field is challenging for various reasons. The test conditions may not be readily available, may be opportunistic in nature, or may be difficult to control. Additionally, tests may pose risks to personnel and prototypes, and there may be limitations on the availability of test facilities and on test duration.
Background
I participated in the testing of an autonomy development for a drone-versus-drone concept. The concept was to use drones to scan for and detect unauthorized drones in the sky, and to hover over these errant drones until a response team arrives at the scene. We named our drone the ‘Eagle,’ and I was tasked to test the limits of its autonomous tracking and hovering capability. To do this, I developed a testbed using the Unity game engine and tested the performance of the capability under different scenarios.
Unity Drone Simulator
The Unity Drone Simulator testbed simulated the physical environment, the ‘Eagle’ drone, any arbitrary number of unauthorized drones, and all interfaces with external systems.
In our setup, the ‘Eagle’ drone transmitted its video feeds to a Ground Control Station (GCS) and received navigation instructions from the GCS in real time. It used the Real-Time Streaming Protocol (RTSP) to transmit the videos and ZeroMQ to receive the navigation instructions from the GCS.
These features were preserved in the simulator. The camera perspective of the ‘Eagle’ drone was synthetically generated and transmitted to the GCS, and navigation instructions from the GCS were received and applied to the simulated drone. The simulator also adhered to all the protocols of the actual setup.
Lastly, to test, retest, and test again, the WASD commands to the unauthorized drone were recorded or pre-scripted so that they could be replayed over and over again as needed.
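As an illustration, the sketch below shows one way such a recording could be captured in Unity: the WASD axis values are time-stamped every frame so that the same run can be replayed later. The component and field names are hypothetical and do not reflect the simulator's actual recording format.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch of recording time-stamped WASD inputs for later replay.
// The struct fields and component name are hypothetical, not the simulator's format.
[System.Serializable]
public struct InputSample
{
    public float time;        // seconds since the recording started
    public float horizontal;  // A/D axis value
    public float vertical;    // W/S axis value
}

public class WasdRecorder : MonoBehaviour
{
    public List<InputSample> samples = new List<InputSample>();
    float startTime;

    void OnEnable() { startTime = Time.time; }

    void Update()
    {
        // Capture one sample per frame; replay feeds these values back to the
        // unauthorized drone's controller in the same order and at the same timestamps.
        samples.Add(new InputSample
        {
            time = Time.time - startTime,
            horizontal = Input.GetAxis("Horizontal"),   // A/D
            vertical = Input.GetAxis("Vertical")        // W/S
        });
    }
}
```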
Testbed Objective
I developed the simulator to effectively replace the real-life aspects of the pipeline, namely the drones and the actual field. This allowed us to experiment without having to travel to the field and fly the drone, and it permitted us to overcome several limitations, such as long and/or costly travel, the drone's short flight duration, and the limited variety of drones available to test.
Building the Environment
To model the 3D world in the simulator, I used Unity’s High-Definition Render Pipeline (HDRP) to enhance the quality and realism of the terrain and lighting effects.
I leveraged assets from the Unity Asset Store and elements from the demo scene to construct the 3D world. I took care to ensure that the application loaded quickly and ran as smoothly as possible.
I modeled our test site in the simulator, taking care to accurately represent details such as the color and texture of the grass and sand, which were critical for testing the autonomous tracking capability. Additionally, I added roads, trees, and lampposts to the 3D world to test the tracking capability over more complex terrains. To create the roads, I utilized the EasyRoads3D asset store package.
Streaming Video via RTSP
I used the Unity plugin FFmpegOut for the recording and live streaming of the Unity cameras via RTSP.
I made modifications to the plugin to match the streaming performance of the actual setup. I experimented with different encoder presets to optimize the stream quality and latency. I recommended the H.264 NVIDIA preset, though it requires an NVIDIA GPU. On my laptop, I was able to stream at 4K (3840 x 2160) @ 30 fps and FHD (1920 x 1080) @ 120 fps.
The synthetic video feeds could be recorded or live streamed, and latency could also be artificially induced to simulate network effects.
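To give a sense of the modification, the snippet below sketches the kind of ffmpeg argument string the plugin could hand to ffmpeg to push raw Unity frames to an RTSP endpoint using NVIDIA's hardware H.264 encoder. FFmpegOut's actual internals, the bitrate, and the endpoint URL are assumptions, and encoder option names vary between ffmpeg builds.

```csharp
// Hypothetical helper for building the ffmpeg arguments used to live-stream
// raw RGBA frames piped out of Unity to an RTSP endpoint via NVENC.
public static class RtspStreamArgs
{
    public static string Build(int width, int height, int fps, string rtspUrl)
    {
        return "-f rawvideo -pixel_format rgba "
             + $"-video_size {width}x{height} -framerate {fps} -i - "  // raw frames read from stdin
             + "-c:v h264_nvenc -b:v 8M -pix_fmt yuv420p "             // hardware H.264 encoding (needs an NVIDIA GPU)
             + $"-f rtsp {rtspUrl}";                                   // e.g. rtsp://<gcs-ip>:8554/eagle
    }
}
```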
Receiving Inputs via ZeroMQ
I implemented ZeroMQ in the simulator to adhere to the actual setup.
There were many examples on GitHub of ZeroMQ implementations, so this was done quite easily.
Configuring ZeroMQ required details such as the port number and IP address, and I enabled these to be specified via a GUI at startup of the simulation application.
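A minimal sketch of such a receiver is shown below, using NetMQ, a common C# binding for ZeroMQ. The SUB socket pattern, the string-framed messages, and the class name are assumptions; the IP address and port correspond to the values entered in the startup GUI.

```csharp
using System;
using System.Threading;
using NetMQ;
using NetMQ.Sockets;
using UnityEngine;

// Hypothetical receiver for navigation instructions sent by the GCS over ZeroMQ.
public class NavInstructionReceiver
{
    readonly string endpoint;
    Thread worker;
    volatile bool running;

    public NavInstructionReceiver(string ip, int port) => endpoint = $"tcp://{ip}:{port}";

    public void Start()
    {
        running = true;
        worker = new Thread(() =>
        {
            AsyncIO.ForceDotNet.Force();            // needed for NetMQ under Unity's .NET runtime
            using (var sub = new SubscriberSocket())
            {
                sub.Connect(endpoint);
                sub.SubscribeToAnyTopic();
                while (running)
                {
                    // Poll with a timeout so the thread can shut down cleanly.
                    if (sub.TryReceiveFrameString(TimeSpan.FromMilliseconds(100), out string msg))
                    {
                        Debug.Log($"Navigation instruction: {msg}");
                        // In the simulator, the message would be parsed and queued
                        // for the main Unity thread to move the 'Eagle'.
                    }
                }
            }
            NetMQConfig.Cleanup();
        }) { IsBackground = true };
        worker.Start();
    }

    public void Stop()
    {
        running = false;
        worker?.Join();
    }
}
```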
Automated Movement
To test, retest, and test again, the scenario needed to be scripted. I implemented three ways to script the movement of the ‘Eagle’ and unauthorized drones in the simulator: Input Relay, Circling, and Waypoint.
Input Relay is a live recording of the WASD commands issued to the unauthorized drone that can be saved and reloaded. While it can be configured manually, it was best set using the recording function provided in the simulator.
The Circling method generates a set of 3D points about an arbitrary location from a specified speed and radius; this configuration was best created manually and loaded into the simulator (a sketch of the point generation appears below).
The Waypoint method involves pre-determining a set of 3D points and a speed for both the ‘Eagle’ and the unauthorized drone. These were best created manually and loaded into the simulator.
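The sketch below illustrates the Circling point generation: points are sampled at a fixed interval around a circle of the given radius so that traversing them at the given speed completes one loop. The method name, sampling interval, and fixed altitude are illustrative choices rather than the simulator's exact implementation.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative generator for the Circling method: 3D points on a circle about a
// centre, spaced so that one loop is completed at the requested linear speed.
public static class CirclingPath
{
    public static List<Vector3> Generate(Vector3 centre, float radius, float speed,
                                         float sampleInterval = 0.1f)
    {
        var points = new List<Vector3>();
        float angularSpeed = speed / radius;             // rad/s for a constant linear speed
        float period = 2f * Mathf.PI / angularSpeed;     // time to complete one full circle
        for (float t = 0f; t < period; t += sampleInterval)
        {
            float angle = angularSpeed * t;
            points.Add(centre + new Vector3(radius * Mathf.Cos(angle),
                                            0f,                          // keep the centre's altitude
                                            radius * Mathf.Sin(angle)));
        }
        return points;
    }
}
```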
Perception Testing
The AI model, YOLOv5, failed to detect the drones correctly at times. False positives and false negatives were encountered due to the complexity of the background and poor contrast between the drone and the background.
To address the false positives, we used the Waypoint method to fly the ‘Eagle’ and the unauthorized drone near lampposts, which triggered false positives as the lampposts were mistaken for drones. We configured a filtering algorithm to reject the false positives and made iterative improvements, testing it against a set of control and treatment scenarios to arrive at the best filtering algorithm.
To address the false negatives, we used the Waypoint method to fly the ‘Eagle’ and the target drone over white sandy areas, which triggered false negatives because contrast was lowest when the white unauthorized drones flew over the white sand. To solve this, we tuned a state motion estimator that used trajectory information to estimate the position of the drone when contrast was low. We used the simulator to tune the accuracy of the state motion estimator and reduce the false negatives.
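As an illustration of the idea, the sketch below implements a simple constant-velocity (alpha-beta style) estimator in pixel space: it predicts the drone's position forward each frame and corrects the prediction whenever the detector reports a position, so the prediction alone serves as the guess when contrast is low. The gains and structure are illustrative; the actual state motion estimator may differ.

```csharp
using UnityEngine;

// Illustrative constant-velocity estimator for the detected drone, in pixel space.
public class DroneMotionEstimator
{
    Vector2 position;                 // estimated pixel position of the drone
    Vector2 velocity;                 // estimated pixel velocity
    public float alpha = 0.85f;       // position correction gain (placeholder)
    public float beta = 0.005f;       // velocity correction gain (placeholder)

    public Vector2 Update(Vector2? detection, float dt)
    {
        position += velocity * dt;    // predict using the last velocity estimate

        if (detection.HasValue)
        {
            Vector2 residual = detection.Value - position;
            position += alpha * residual;          // pull the estimate toward the detection
            velocity += (beta / dt) * residual;    // refresh the velocity estimate
        }
        // With no detection (e.g. low contrast), the prediction alone is the guess.
        return position;
    }
}
```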
Real-time Control Testing
We used the simulator to compare the behavior of a P Controller and a PID Controller. We considered two scenarios.
In the first scenario, we considered a stationary unauthorized drone that was off-center by 600 pixels. We found that the P Controller, represented by the blue line, overshot and oscillated before eventually centering on the unauthorized drone. The PID Controller, represented by the orange line, had a smaller overshoot and no oscillations, but it took a longer time to center on the unauthorized drone.
In the second scenario, we considered an unauthorized drone starting from the center but moving away at a constant velocity. The P Controller, represented by the blue line, was not able to center on the unauthorized drone and had a steady-state error of 100 pixels. The PID Controller, represented by the orange line, was able to center on the unauthorized drone, but it first allowed the drone to pull away by 250 pixels, compared with 150 pixels for the P Controller.
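For reference, the sketch below shows the structure of the controllers being compared, acting on the pixel offset between the image center and the detected drone; setting the integral and derivative gains to zero reduces it to the P Controller. The gains shown in the usage comment are placeholders, not the tuned values used in these tests.

```csharp
// Illustrative P/PID controller acting on the drone's pixel offset from the image center.
public class PidController
{
    readonly float kp, ki, kd;
    float integral, previousError;

    public PidController(float kp, float ki = 0f, float kd = 0f)
    {
        this.kp = kp; this.ki = ki; this.kd = kd;
    }

    // error: pixel offset of the drone from the image center
    // dt:    time since the last update, in seconds
    public float Update(float error, float dt)
    {
        integral += error * dt;
        float derivative = (error - previousError) / dt;
        previousError = error;
        return kp * error + ki * integral + kd * derivative;   // P-only when ki = kd = 0
    }
}

// Usage with placeholder gains:
//   var p   = new PidController(0.8f);                 // P Controller
//   var pid = new PidController(0.8f, 0.1f, 0.3f);     // PID Controller
```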
Graphical User Interface
I worked to make the Unity Drone Simulator as user-friendly as possible, to enable even non-developers to use it for testing purposes.
Future Work
I hope the Unity Drone Simulator will be scaled to support more concepts, and that its realism and physics will be improved so that it becomes a digital twin of the actual environment.