Behind the Scenes: Interning at Level 5

Woven Planet Level 5
8 min read · Nov 5, 2019

Five college students help us build the future of transportation

The Level 5 internship program invites the country’s most innovative minds to help us solve self-driving challenges. We believe in investing in students who want to expand their knowledge and be part of building the future of transportation.

Interns work closely with their mentors and team members to own and solve unique problems in the self-driving space. Months of iteration in a production environment give interns the opportunity to participate in tech talks, hackathons, and science fairs. They not only pick up technical skills, but also improve soft skills (such as public speaking) that set them up for inspiring work during the rest of their education and in their upcoming careers.

Meet Linda, Shane, Davon, Ali, and Joseph — five recent Level 5 interns who spent their summers contributing to one of the greatest engineering challenges of our time: self-driving vehicles.

Improving monocular depth estimation in self-driving vehicles

Linda Wang, Perception Team Intern

Depth estimation is a key part of the perception system in a self-driving car: it helps the car figure out how far away the surrounding objects are. Monocular depth estimation tackles this challenge by creating dense depth maps from a single camera alone. Linda Wang, one of our interns on the perception team, spent her internship implementing and validating different deep neural networks for monocular depth estimation.

Currently, lidar is used to capture the depth of a scene. However, at long distances, the points returned can be sparse — which makes it difficult to distinguish distant objects. The goal of monocular depth estimation is to return a depth for each pixel captured by a single camera, thus generating a dense depth map. This is similar to a human trying to determine an object’s distance with just one eye. Estimating depth using a camera also provides redundancy, which can help in cases where lidar may not provide ideal readings (for instance, when there’s dust or smoke).

Linda worked on both “supervised” and “unsupervised” methods to predict depth. In the case of unsupervised methods, there were no ground truth values to compare the network’s predicted values against. This required the network to learn by gathering feedback another way. One common way is to process a sequence of consecutive images from a camera and utilize the movement across images to provide this feedback.
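To make the idea concrete, here is a minimal sketch (in TensorFlow, and not Level 5’s actual code) of that kind of self-supervised signal: warp the previous camera frame into the current view using the predicted depth and relative camera motion, then penalize the photometric difference. The warp_fn argument is a hypothetical differentiable warping op standing in for the view-synthesis step.

```python
# A minimal sketch of an unsupervised photometric loss (illustrative only;
# warp_fn is a hypothetical differentiable warping op, not a Level 5 API).
import tensorflow as tf

def photometric_loss(curr_frame, prev_frame, pred_depth, pred_pose, warp_fn):
    # curr_frame, prev_frame: [batch, H, W, 3] consecutive camera images
    # pred_depth: [batch, H, W, 1] depth predicted for curr_frame
    # pred_pose: predicted relative camera motion between the two frames
    reconstructed = warp_fn(prev_frame, pred_depth, pred_pose)  # prev frame re-rendered from the current viewpoint
    return tf.reduce_mean(tf.abs(curr_frame - reconstructed))   # mean L1 photometric error as the training signal
```

If the predicted depth (and pose) are accurate, the reconstructed frame closely matches the current frame, so the network can improve its depth estimates without any ground truth labels.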

Linda implemented several monocular depth estimation models in TensorFlow, a deep learning framework. She also conducted experiments to evaluate the efficacy of each monocular depth network, which deepened the team’s understanding of monocular depth estimation. In the future, we plan to explore improving depth estimation models by training them on in-house data at scale and by fusing depth information from cameras and lidar to produce a better representation of the environment.

About Linda

Linda is earning her master’s in computer vision and deep learning at the University of Waterloo. She has presented her work at the Conference on Computer Vision and Pattern Recognition (CVPR) and has backpacked through Southeast Asia.

In Linda’s words:

I appreciated the open, resourceful, and inclusive dynamic of the Level 5 team. I stretched both my technical skills and my presentation skills, presenting to a range of audiences across a variety of opportunities, including a literature review, a paper reading session, Lyft recruiting events, a science fair, project progress updates during team meetings, and the Level 5 intern science fair.

Prototyping and building a new trajectory generation system

Shane Barratt, Planning Team Intern

Trajectory generation is a critical piece of the autonomous vehicle stack: it helps the self-driving car decide where it will go next. Its inputs come from perception, tracking, prediction, behavior planning, and localization. Broadly speaking, trajectory generation involves taking the current state of the car, the state (and predicted states) of all its surrounding obstacles, and a high-level goal, then coming up with a dynamically feasible trajectory for the car to follow over the next several seconds.

Shane Barratt, one of our planning interns, spent his summer experimenting with a new approach to trajectory generation. He came up with a new method of incorporating hard constraints to solve some of the limitations of using finite-valued costs as penalties. Hard constraints are sometimes necessary to limit where the car can go (e.g. it has to stay on the road) and what it can do (its throttle and steering). The use of finite-valued costs did not guarantee that all such constraints were being met.
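To illustrate the difference, here is a minimal, hypothetical sketch (in Python using the cvxpy modeling library, nothing like the production planner Shane ultimately wrote in C++) of a toy one-dimensional trajectory optimization in which actuator and speed limits are enforced as hard constraints rather than folded into the cost as penalties. The horizon, dynamics, and limits are all illustrative assumptions.

```python
# A toy constrained trajectory optimization (illustrative only).
import cvxpy as cp

T, dt = 20, 0.1                      # planning horizon (steps) and timestep (s)
pos = cp.Variable(T + 1)             # 1-D position along a lane
vel = cp.Variable(T + 1)             # speed
acc = cp.Variable(T)                 # control input (throttle/brake)

goal = 10.0                          # desired position from a high-level goal
cost = cp.sum_squares(pos - goal) + 0.1 * cp.sum_squares(acc)  # smooth progress toward the goal

constraints = [pos[0] == 0, vel[0] == 0]            # start at rest
for t in range(T):
    constraints += [pos[t + 1] == pos[t] + vel[t] * dt,   # simple kinematics
                    vel[t + 1] == vel[t] + acc[t] * dt]
constraints += [cp.abs(acc) <= 3.0,                  # hard actuator limit
                vel >= 0, vel <= 15.0]               # hard speed limits

cp.Problem(cp.Minimize(cost), constraints).solve()
print(pos.value)                     # a dynamically feasible trajectory to follow
```

Because the limits enter as constraints rather than penalty terms in the cost, any trajectory the solver returns is guaranteed to respect them, which is exactly what finite-valued penalties cannot promise.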

Throughout the summer, Shane went from a very basic implementation in Python, to one in C++, to one that worked with the full stack in simulation. Finally, he ran the implementation on a real self-driving car.

About Shane

Shane is earning a PhD in electrical engineering at Stanford University, where he focuses his research on optimization algorithms and their application to machine learning and control. He loves to build, and built a mini self-driving car for the Level 5 Hackathon that still roams the halls of the office. (We may or may not have given a small dog a ride in the hallway.)

In Shane’s words:

I was impressed with the diversity and ownership opportunities across the Level 5 team; each team member had their own project or part of the system that they actually improved. During my internship, the team started a bi-weekly tech talk series where we learned about various parts of the current Trajectory Generation system in detail, along with very useful tips and tricks for Linux/C++.

Creating a characterization dashboard to improve scenario behavior detector performance

Davon Prewitt, Scenarios Team Intern

To create a reliable simulation platform for testing our autonomous vehicle stack, our Simulation Team needs large amounts of meaningful data from the scenarios our self-driving car encounters in real life.

It is the responsibility of our Scenarios Team to collect this data so that the Simulation Team can iterate quickly and increase the productivity of the simulation pipeline. The more useful data they get, the better this process works! Davon Prewitt spent his internship on the Scenarios Team building a tool to better detect, track, and extract scenarios.

Davon developed a framework to benchmark different versions of our scenario behavior detectors on their precision and recall (P/R) so that we could compare and improve their performance. He then integrated this detector performance framework into our workflow by automating P/R metrics for high-recall datasets, something that was previously done manually.
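At its core, this kind of benchmarking comes down to comparing what a detector flags against what human reviewers labeled. Here is a minimal sketch of that precision/recall computation (illustrative only; the scenario IDs are made up, and this is not Davon’s actual framework).

```python
# A minimal precision/recall computation for one scenario behavior detector
# (illustrative only; not Level 5's actual tooling).
def precision_recall(detected_ids, labeled_ids):
    detected, labeled = set(detected_ids), set(labeled_ids)
    true_positives = len(detected & labeled)
    precision = true_positives / len(detected) if detected else 0.0  # how many flagged scenarios were real
    recall = true_positives / len(labeled) if labeled else 0.0       # how many real scenarios were flagged
    return precision, recall

# Hypothetical example: one detector version against human-labeled ground truth.
detected = {"cut_in_012", "cut_in_034", "cut_in_051"}  # flagged by the detector
labeled = {"cut_in_012", "cut_in_034", "cut_in_077"}   # human-labeled ground truth
print(precision_recall(detected, labeled))             # (0.666..., 0.666...)
```

Running the same computation over each detector version makes it straightforward to see whether a change actually improved performance.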

As part of this project, he created a customized characterization dashboard: a Mode Analytics dashboard, extended with JavaScript, whose parameter selection dynamically updates the data visualization. This provides a consistent and reliable way to access scenario parameter distributions, which in turn helps create diversified simulation content.

This framework produces a greater quantity of meaningful scenarios for the Simulation Team to test on, enabling them to make the simulation platform more reliable for testing our autonomous vehicle stack.

About Davon

Davon is studying computer science at the Georgia Institute of Technology. He originally wanted to be an aerospace engineer, but his high school physics teacher (and former NASA engineer) explained how important computer science is to help guide planes. So Davon started to code! When he isn’t coding, you can find him hiking; his favorite spots are the trails around Mt. Rainier.

In Davon’s words:

I appreciated the opportunity to own projects end to end and the independence the Level 5 team gave me to act as a full-time engineer. The team made me feel like my opinion mattered, which gave me the confidence to overcome imposter syndrome and focus more on making an impact (#MakeItHappen).

Benchmarking neural network accelerators for a more efficient self-driving system

Joseph DeChicchis (left) and Ali Toyserkani (right), Compute Team Interns

The compute platform in a self-driving car consumes a lot of power, resulting in a significant thermal profile. Together, Ali Toyserkani and Joseph DeChicchis worked with our Compute Team to investigate how hardware specialized for neural network model inference could make our future compute platform more energy-efficient and less thermally taxing, all while reducing the latency of model inference.

To answer this question, Ali and Joseph collaborated to benchmark neural network accelerators. They evaluated and compared performance characteristics using both macrobenchmarks (running full models) and microbenchmarks (running individual kernels and specific layer types), ultimately informing our next-generation compute platform.
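As a rough illustration of what a macrobenchmark harness looks like, here is a minimal sketch that times repeated inferences of a model on a candidate accelerator and reports latency statistics. The run_inference callable is a hypothetical stand-in for whatever vendor runtime executes the model; this is not the interns’ actual framework.

```python
# A minimal latency macrobenchmark sketch (illustrative only; run_inference is
# a hypothetical callable that executes one model inference on the target
# accelerator and blocks until the result is ready).
import time
import statistics

def benchmark_latency(run_inference, warmup=10, iters=100):
    for _ in range(warmup):                 # warm up caches, compilation, and clocks
        run_inference()
    latencies_ms = []
    for _ in range(iters):
        start = time.perf_counter()
        run_inference()
        latencies_ms.append((time.perf_counter() - start) * 1e3)
    latencies_ms.sort()
    return {
        "mean_ms": statistics.mean(latencies_ms),
        "p50_ms": statistics.median(latencies_ms),
        "p99_ms": latencies_ms[int(0.99 * (iters - 1))],  # approximate 99th percentile
    }
```

Collecting the same statistics for each accelerator, alongside accuracy and power measurements, is what makes an apples-to-apples comparison possible.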

Ali focused on benchmarking low-power embedded accelerators, while Joseph focused on high-performance datacenter accelerators. Across the set of accelerators evaluated, the two gathered accuracy, performance, and power metrics for various models of interest. The team worked hands-on with pre-production hardware, built relationships with hardware vendors, and developed a deep understanding of these vendors’ capabilities and their products’ suitability for use in a self-driving vehicle.

Ultimately, Ali and Joseph successfully built a framework for evaluating specialized neural network accelerators and charted a path toward a more power-efficient, lower-latency self-driving system.

About Ali

Ali is studying mechatronics engineering with an option in artificial intelligence at the University of Waterloo. He previously ran the university’s student self-driving car team (WATonomous), which is where he first learned about Level 5.

In Ali’s words:

The team provided me with full ownership of the project, but also allowed me to get involved in smaller projects on different teams to expand my technical depth. I also loved the team’s positive spirit; people at Level 5 are enthusiastic and passionate about their work.

About Joseph

Joseph is studying computer science at Duke University and pursuing a minor in philosophy. He also enjoys reading and mixology, and DJs for various events at Duke.

In Joseph’s words:

I received great support and guidance from my Level 5 mentor and manager. They made me feel fully responsible for my project, helped me understand technical details and business goals, and supported my pursuit of various lines of investigation throughout the summer. They also helped me grow as a team member (not just as an engineer) by giving me advice on how to effectively navigate a growing organization. I felt like a contributing member of the team by attending weekly meetings, talking to vendors, and helping make the decisions that go into defining the future compute platform roadmap.

Linda, Shane, Davon, Ali, and Joseph are just a small group out of nearly 50 interns who did amazing work solving self-driving challenges this summer. While we’re sad to see them go, we are always looking to bring new talent to the table.

Interested in an internship? Level 5 is hiring interns and new grads across multiple teams and locations. Explore our University Programs page to learn more and view all open roles.

Woven Planet Level 5

Level 5, part of Woven Planet, is developing autonomous driving technology to create safe mobility for everyone. Formerly part of Lyft. Acquired July 2021.