Rapid Iterative Design and Testing with TRIKart

Toyota Research Institute
Apr 20, 2022

Fail Quick, Fail Often, Fail Safely at 1/10th Scale

By Velin Dimitrov, Paul Drews, Thomas Balch, Xiongyi Cui, Anas Abou Allaban, Guy Rosman, Stephen McGill

The TRIKart 1/10 scale testing platform.

Despite the tremendous hype of the 2010s, the deployment of fully autonomous vehicles has been slower than expected due to various technical and regulatory challenges, and many drivers remain reluctant to give up driving altogether. At TRI, however, we believe that the delayed deployment has created an opportunity: we can augment and amplify human driving ability with subsystems and approaches that have reached technological maturity for integration into novel advanced driver-assistance systems (ADAS). While we work towards various advances in autonomy, we can also develop rich joint human/AV situational representations and vehicle interactions that encourage safer driving, reduce monotony, and make driving more fun. We believe this approach will result in a quicker path towards trust and acceptance of autonomy in vehicles through the gradual introduction of progressively more complex autonomy.

Existing ADAS design and validation approaches offer incomplete coverage of the concerns that need to be addressed when testing next-generation human-centered ADAS. Holistic development of next-generation ADAS will require an additional intermediate stage of testing that offers more realism than simulation while requiring less infrastructure than full-scale track testing. In the SAE Demo Days Survey Report, 73% of respondents preferred to share control with a self-driving vehicle after taking a demo ride. For “driving” to become less about actuating a steering wheel and more about directing a (semi-)autonomous vehicle where to go, the ability to quickly and safely test new interfaces, algorithms, and control paradigms will be instrumental.

A graph comparing the tradeoffs for testing and development between realism and effort to set up, at 1/10th scale and at full scale. Simulation has a very low barrier to entry with little effort required to set up, but there is a point of diminishing returns. Full-scale testing has excellent realism, but there is significant effort to secure track space, develop safety/test protocols, and gather and analyze data. 1/10th scale testing requires an initial effort greater than simulation, overlaps somewhat with simulation, and begins to approach full scale in terms of both effort and realism.

By using semi-autonomous vehicles built on 1/10th scale RC chassis coupled with a teleoperation station, we can fill the coverage gaps in the testing and development of next-generation ADAS. This approach captures several of the difficult-to-simulate elements of full-scale testing, spanning high-level concepts such as nuanced human reactions involving multiple vehicles down to low-level challenges like tire contact and temperature dynamics. At the same time, 1/10th scale vehicles raise significantly fewer safety concerns, which reduces test-preparation overhead and enables algorithms and concepts to be tested earlier in the development timeline. This combination allows quicker iteration on unproven concepts, saving design and engineering resources by identifying problems and solutions early.

Testing of semi-autonomous intersection navigation with TRIKart.

Platform Description

The TRIKart platform is a modified version of the F1Tenth (F1/10) platform, an open-source small-scale autonomous cyber-physical platform based on a Traxxas Slash chassis, used for affordable, rapid, low-risk autonomous vehicle experimentation. TRIKart improves upon the existing F1/10 design with the addition of the following features:

  • Smaller ZED Mini stereo camera
  • Wide angle rear-facing IMX219 MIPI camera
  • New powerboard with improved safety features
  • MicroStrain 3DM-CX5 AHRS
  • Additional magnetic encoders on the motor output shaft for improved low-speed motor commutation

The sensor suite is flexible and can be adapted depending on experimentation requirements.

A VESC provides PWM commands to the steering servo and controls the electric drive motor. The associated VESC Tool has a built-in auto-tuning utility that generates the initial set of low-level motor control parameters. We then increased the maximum current parameters to 120 A, raised the speed proportional gain from 0.001 to 0.008, zeroed the speed integral gain, and increased ramping to 100,000. These parameters yield very stiff low-level control, making as much of the vehicle’s dynamics as possible available to the higher-level controllers. In addition, an NVIDIA Jetson Xavier NX provides onboard computation and ROS capabilities, interfaces with the onboard sensors, and provides network connectivity through a WiFi module.
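The speed-control tuning above can be summarized as a small configuration sketch. The values come from the text; the key names are descriptive only and do not follow VESC Tool’s actual configuration schema.

```python
# TRIKart low-level VESC speed-control tuning (values as stated in the post).
# Key names are our own shorthand, not VESC Tool's config-file schema.
VESC_TUNING = {
    "max_current_a": 120.0,   # maximum current parameters raised to 120 A
    "speed_kp": 0.008,        # proportional gain, up from the auto-tuned 0.001
    "speed_ki": 0.0,          # integral gain zeroed
    "ramping": 100_000,       # ramping increased (units as shown in VESC Tool)
}

# Stiff, proportional-only low-level speed control leaves as much of the
# vehicle's dynamics as possible to higher-level controllers.
```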

A view of the testing arena with OptiTrack motion capture cameras highlighted in red.

Finally, an array of retro-reflective tracking markers is arranged in a unique combination on each TRIKart so that the OptiTrack motion capture system can provide accurate pose estimates for each vehicle. Our motion capture system consists of 12 OptiTrack Prime 13 cameras, mounted to provide an 8.5 m x 7.3 m trackable space. Pose estimates for any agents in this space are published at 120 Hz with roughly 0.5 mm residuals.
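Downstream consumers receive these poses as standard ROS pose messages, and a common first step is extracting the planar heading of each kart from the pose quaternion. A minimal pure-Python sketch (the x, y, z, w component ordering follows the ROS geometry_msgs convention):

```python
import math

def quat_to_yaw(qx: float, qy: float, qz: float, qw: float) -> float:
    """Planar heading (yaw, radians) from a unit quaternion.

    Component order (x, y, z, w) follows ROS geometry_msgs/Quaternion,
    which is how motion-capture poses typically arrive on the ROS side.
    """
    return math.atan2(2.0 * (qw * qz + qx * qy),
                      1.0 - 2.0 * (qy * qy + qz * qz))

# A kart rotated 90 degrees about the vertical axis:
yaw = quat_to_yaw(0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4))
```

With 120 Hz pose updates, differentiating successive headings like this also gives a low-latency yaw-rate estimate for the controllers described below.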

Low-Latency Synthetic Side Mirrors

Enabling human-in-the-loop testing requires streaming low-latency, high-quality video from the TRIKart platform. We have extended the built-in ROS tools for compressing video and transporting it over a network stream for use on embedded systems. Specifically, we use NVIDIA’s hardware-accelerated GStreamer plugins to efficiently compose multiple camera views onboard the Xavier NX and compress them for transmission over the WiFi network.

This allows us to generate synthetic side mirror views (from the rear-facing wide-angle IMX219 camera) overlaid on the main forward-looking feed (from the ZED Mini camera), as shown in the figure below. The overlay and compression of this view at full HD resolution and 21 FPS consume approximately 40% of one of the six Xavier NX CPU cores, highlighting the efficiency of this approach. To also make the video data available in ROS, we use the gscam package, which can wrap various user-generated GStreamer pipelines. The code example below shows the nvcompositor and tee functionality used to generate this view.
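A sketch of such a pipeline, assembled as the launch string one would hand to gscam or Gst.parse_launch. The element parameters here (device paths, resolutions, overlay positions, bitrate, and the ground-station address) are illustrative placeholders, not TRI’s exact values.

```python
# Sketch of the onboard compositing/streaming pipeline (Gst.parse_launch
# syntax). All numeric parameters and addresses are illustrative only.
PIPELINE = " ".join([
    # Forward-looking ZED Mini feed -> compositor background
    "v4l2src device=/dev/video0 ! videoconvert ! nvvidconv !",
    "video/x-raw(memory:NVMM),width=1920,height=1080 ! comp.sink_0",
    # Rear-facing IMX219 via the Jetson ISP, scaled down for the mirror inset
    "nvarguscamerasrc sensor-id=0 ! nvvidconv !",
    "video/x-raw(memory:NVMM),width=480,height=270 ! comp.sink_1",
    # Hardware compositor overlays the mirror view on a corner of the main feed
    "nvcompositor name=comp",
    "sink_0::xpos=0 sink_0::ypos=0 sink_1::xpos=40 sink_1::ypos=770 !",
    # tee splits the composed view: one branch streams, one feeds ROS
    "tee name=split",
    "split. ! queue ! nvv4l2h264enc bitrate=8000000 ! h264parse !",
    "rtph264pay ! udpsink host=192.168.0.100 port=5000",
    "split. ! queue ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert",
])
```

The final branch is left producing raw frames so that gscam can append its own sink and republish the composed view as a ROS image topic, while the other branch handles hardware H.264 encoding and RTP streaming to the teleoperation station.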

GStreamer-based synthetic mirror views, shown in the bottom corners of the main forward looking camera.

Control at the Limits

Drifting is a driving technique in which a vehicle intentionally loses traction while maintaining control. Expert human drivers are capable of incredible control while drifting, but most drivers cannot maintain control after losing traction. This makes autonomous drifting an excellent arena for demonstrating control at the limits of handling, and it builds our knowledge of operating in the conditions that arise during the emergency scenarios autonomous vehicles may encounter.

For example, we are investigating this capability with Model Predictive Path Integral (MPPI) control, highlighted here and here, which combines model predictive control with sampling to find optimal paths; the algorithm scales easily with parallelization on embedded GPUs, such as those in commercially available vehicles, to increase performance. It works by forward-simulating thousands of possible control sequences in parallel, computing each sequence’s cost, and averaging the sequences weighted by their costs. MPPI allows us to use neural networks to learn an accurate dynamics model from data collected while driving the system and then push the vehicle to the limits of its dynamics.
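The sample-and-average update described above can be sketched in a few lines of NumPy. A toy double integrator stands in for the learned vehicle model, and the sample count, noise scale, and temperature are illustrative, not values from TRI’s implementation.

```python
import numpy as np

def mppi_step(x0, u_nom, dynamics, cost, K=1024, sigma=0.5, lam=1.0, seed=0):
    """One MPPI update: sample K perturbed control sequences, roll each one
    out through the dynamics model, and return the cost-weighted average."""
    T = len(u_nom)
    noise = np.random.default_rng(seed).normal(0.0, sigma, size=(K, T))
    costs = np.zeros(K)
    for k in range(K):                # rolled out in parallel on the GPU in practice
        x = x0
        for t in range(T):
            x = dynamics(x, u_nom[t] + noise[k, t])
            costs[k] += cost(x)
    w = np.exp(-(costs - costs.min()) / lam)   # exponentiated costs (stabilized)
    w /= w.sum()
    return u_nom + w @ noise                   # cost-weighted average of samples

# Toy stand-in for the learned vehicle model: a double integrator whose
# position should be driven to zero (the real system learns dynamics with
# a neural network).
dyn = lambda x, u: np.array([x[0] + 0.1 * x[1], x[1] + 0.1 * u])
cst = lambda x: x[0] ** 2 + 0.1 * x[1] ** 2

def rollout_cost(u, x0=np.array([2.0, 0.0])):
    x, c = x0, 0.0
    for t in range(len(u)):
        x = dyn(x, u[t])
        c += cst(x)
    return c

u = np.zeros(20)
for i in range(5):                             # a few MPPI iterations
    u = mppi_step(np.array([2.0, 0.0]), u, dyn, cst, seed=i)
```

Each iteration nudges the nominal control sequence toward the low-cost samples, which is why the same update can track a racing line or recover from a bump when run in a receding-horizon loop.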

A video showing TRIKart driving in an oval autonomously, highlighting MPPI’s ability to handle disturbances such as a bump to stabilize the vehicle in a dynamic situation.

Vehicles at the limits of their dynamics are exciting when racing, but can be dangerous when they lose control. The combination of TRIKart, low-latency video, and a teleoperation station allows us to iteratively develop and test algorithms at the intersection of autonomous controls and human interfaces. We envision a future where vehicles can make an average driver safely feel the thrill and rush of a high speed corner. That same future may help a new teenage driver recover from their first understeer on an ice patch and avoid the snow bank.

Teleoperation station which enables human-in-the-loop testing of semi-autonomous control modalities in a safe yet visceral and engaging manner.
