How to fall in love with Computer Vision

…with or without $1M on the line!

Francisco 'Cisco' Zabala
12 min read · May 3, 2022
PBS’ NOVA has a fantastic documentary on the DARPA Grand Challenges!

In prehistoric times—as far as Deep Learning is concerned—a series of contests dubbed “The Grand Challenge(s)” was organized by the Defense Advanced Research Projects Agency (DARPA) and took place in March 2004, October 2005, and November 2007. Collectively, these events marked a major milestone in the history of autonomous driving, as they attracted significant attention to the field and accelerated both the research and the innovations that have occurred since.

A Brief History of The DARPA Grand Challenges

The first two challenges featured a race of autonomous vehicles through a desert with dirt roads, cliffs, ditches, boulders, underpasses, open water, and any other “outdoor object” one can think of. You see, DARPA challenged anyone and everyone to develop autonomous vehicles that could navigate within a corridor specified in a Route Data Definition File (RDDF). The file, which was to be provided to competitors two hours before the race, contained a list of waypoint corridor coordinates, as well as associated corridor widths and speed limits.
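
To make the corridor format concrete, here is a minimal sketch of how one might represent and load such a file. Note that the exact column layout, the units, and the `load_rddf` helper below are illustrative assumptions on my part, not DARPA’s actual specification:

```python
from dataclasses import dataclass

@dataclass
class CorridorWaypoint:
    """One RDDF-style corridor entry (field layout and units assumed)."""
    lat: float             # latitude, decimal degrees
    lon: float             # longitude, decimal degrees
    corridor_width: float  # lateral boundary offset from centerline, meters
    speed_limit: float     # maximum allowed speed within this segment, m/s

def load_rddf(path: str) -> list[CorridorWaypoint]:
    """Parse a hypothetical CSV-style RDDF with 'lat,lon,width,speed' rows."""
    waypoints = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            lat, lon, width, speed = map(float, line.split(","))
            waypoints.append(CorridorWaypoint(lat, lon, width, speed))
    return waypoints
```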

Bob, Team Caltech’s vehicle in the 2004 DARPA Grand Challenge (image credit: Henrik Kjellander)

The catch? The vehicles had to not only find a “road”, but also avoid obstacles using any sensors onboard. Aaaannnddd, although obstacles would presumably be static, there was no guarantee that two competing vehicles wouldn’t come into close proximity with one another during the race.

–“Good luck, y’all”, DARPA officials unofficially said.

The reward? An EZ Mil ($1M USD) of the cash kind, not the rapper kind.

So, the first team to pass a series of qualification tests held at the California Speedway and complete a “course” through the Mojave Desert under the prescribed ten-hour time limit would receive a $1M prize.

Naturally, even though off-the-shelf vehicles were the platform of choice for most participants, some teams took the challenge to the proverbial next level. For instance, a crowd-favorite contestant, Ghostrider, was a two-wheeled (for the most part) motorcycle that could right itself (for the most part) autonomously!

Ghostrider, an autonomous motorcycle, was a crowd favorite contestant!

Although 15 vehicles went on to pass the qualifying rounds—Ghostrider not being one of them )-:—the rugged Mojave desert course combined with the technological hurdles involved (e.g., loss of GPS signal) proved to be too challenging, pun intended. Consequently, on March 13, 2004, no team successfully completed the designated Barstow, CA, to Primm, NV, route, and the prize went unclaimed.

In a somewhat surprising turn of events, DARPA doubled down and launched a follow-up contest for double the prize. That’s right, a not-too-shabby $2M USD purse for the winner, if any.

Just one day after the first contest ended, DARPA announced a second Grand Challenge that would take place the following year. And so, on October 8, 2005, another “Robot Race” was held in the desert near the California/Nevada state line. In this second event, things went a little better: 5 of the original 195 teams completed the 212 km (132 mi) course, with the “Stanford Racing Team’s” vehicle finishing in 6 hours and 53 minutes, and thus collecting the coveted $2M prize.

A few noteworthy things about the second Grand Challenge:

  • The Stanford team, composed mostly of Computer Scientists with Sebastian Thrun at the helm, had their vehicle custom-built by Volkswagen (aka The Vehicle Group in their Technical Report), and as you can imagine, it was a top-shelf piece of engineering.
Learn more about “Stanley” in Stanford’s Technical Report for the contest
  • Carnegie Mellon University’s team, composed of an “army” (their term, not mine) of grad students, scientists, and researchers, as well as seasoned industry engineers with Red Whittaker at the helm, manually adjusted the prescribed velocities in the RDDF for (almost) every meter of the route—DARPA’s plan to prevent this by providing route definition files only two hours before the race proved unsuccessful. The adjustments leveraged GIS topographic information to prescribe their vehicles’ (plural, as they had two entries) velocities for the race.
Learn more about Red Whittaker’s “Red Army” here: link
  • My school’s team (Caltech) was composed of over 50 undergrad students with Richard M. Murray and Joel W. Burdick at the helm. After “Alice” performed a phenomenal set of runs leading up to the race, the onboard software reached an unstable state during the event itself, triggered by GPS signal loss. The resulting overcorrections in the steering control system caused the vehicle to go over one of the concrete barriers on the side of the road—see the full video below for maximum entertainment!
Full video of “Alice’s” run available here: link

“The fresh thinking they brought was the spark that has triggered major advances in the development of autonomous robotic ground vehicle technology in the years since.”

–Lt. Col. Scott Wadle, DARPA’s liaison to the U.S. Marine Corps

The DARPA Urban Grand Challenge

Building upon the success of the 2004 and 2005 Grand Challenges and the increased interest in the research community to further develop driverless technologies, DARPA announced a third contest for 2007 dubbed The Urban (Grand) Challenge (DUGC).

This time around, participants were tasked with developing driverless vehicles capable of navigating a complex course in a staged city environment. Specifically, the vehicles needed to be capable of driving autonomously in single- and double-lane traffic, performing complex maneuvers such as merging, passing, U-turns, parking, and queuing at intersections alongside other human-driven and autonomous vehicles.

“Alice”, Team Caltech’s vehicle entry for both the 2005 and 2007 DARPA Grand and Urban Challenges

Whereas the previous races, held in a harsh desert environment, were more physically and mechanically demanding on the vehicles, the DUGC presented numerous other challenges in terms of the navigation software. To successfully complete the course, teams had to build vehicles that were not only able to drive autonomously following waypoints, but also to obey California traffic laws and avoid obstacles that included other vehicles on the course — a rare occurrence in the previous two challenges.

A personal challenge, joining Team Caltech:

One of my Senior Design Projects at CSUF prior to joining Team Caltech

As an incoming Senior at Cal State Fullerton in the 2006–07 academic year, with an interest in Robotics and Computer Vision, I was very much aware of the DARPA Grand Challenges. So, it was to my utmost delight to hear that a new contest had been announced. As I started searching for nearby teams that I could join, I was beyond glad that Caltech—a school some 40 miles away from my own—would be entering the competition. I emailed one of the Team Leads (Prof. Richard Murray) and asked if he would accept me as a volunteer. I explained that I had participated in the Autonomous Lawnmower and Intelligent Ground Vehicle competitions, and much to my surprise, he quickly obliged.

Joining a team of researchers at a world-renowned institution was certainly intimidating, but imposter syndrome aside, I was in!

Myself and the rest of Team Caltech at the Santa Anita Westfield Mall parking lot during the DARPA site visit

The DUGC competition format:

After the DUGC announcement in May 2006, the teams had to go through the qualification process, including a site visit in July 2007 and the National Qualification Event (NQE) in October of that same year. If both went well, the teams would proceed to the Urban Challenge Final Event (UFE) in November 2007. This gave each team about a year to implement the Basic Navigation and Basic Traffic functionalities (as described in the Technical Evaluation Criteria), and around six additional months to implement the advanced capabilities for both the NQE and the UFE.

“Alice” performing a U-turn at the Santa Anita Westfield Mall parking lot during the DARPA site visit

The specific location (George Air Force Base, Victorville, CA) of the NQE and UFE was kept secret until August 2007, a little under 3 months before the NQE. Furthermore, the teams did not have access to the site, so it was not possible to build a prior map like those used by some in the previous edition.

For both the NQE and UFE, the route and mission were specified by DARPA using two file formats: the Route Network Definition File (RNDF) and the Mission Data File (MDF). The RNDF consisted of a digital street map specifying accessible road segments and free-travel zones, and providing information such as waypoints, checkpoints, parking spots, stop signs, lane widths, lane boundaries, connections between lanes, zone boundaries, and entry and exit points to each zone. The MDF, on the other hand, specified the sequence of checkpoints to be visited by the vehicle as well as the minimum and maximum speed limits for each segment in the RNDF.
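
To give a feel for how a team might represent these two files in software, here is a minimal sketch. The class names and fields are simplifications I chose for illustration; the real formats carried considerably more detail:

```python
from dataclasses import dataclass, field

@dataclass
class Lane:
    """One lane of a road segment (simplified for illustration)."""
    width: float                            # lane width, meters
    waypoints: list[tuple[float, float]]    # (lat, lon) points, guaranteed in-lane
    exits: list[str] = field(default_factory=list)  # IDs of reachable entry waypoints

@dataclass
class RNDF:
    """Digital street map: lanes, free-travel zones, and checkpoints."""
    lanes: dict[str, Lane]                          # lane ID -> lane
    zones: dict[str, list[tuple[float, float]]]     # zone ID -> perimeter polygon
    checkpoints: dict[str, str]                     # checkpoint ID -> waypoint ID

@dataclass
class MDF:
    """One mission: checkpoints to visit, in order, plus speed limits."""
    checkpoint_sequence: list[str]                  # checkpoint IDs to visit
    speed_limits: dict[str, tuple[float, float]]    # segment ID -> (min, max) m/s
```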

Some of the participants during the DUGC final event (credit: “The Great Robot Race”, PBS)

A few noteworthy things about the UFE:

  • The mission length and complexity specified in the MDF varied across runs. The final event included 3 missions that covered approximately 60 miles. During the missions, interactions were expected between participants and other autonomous vehicles, as well as with human-driven ones. The main objective was to complete the missions as quickly as possible while complying with the Technical Evaluation Criteria, with infractions leading to potential (and somewhat arbitrary) time penalties or disqualification.
  • The waypoints specified in the RNDF were guaranteed to be within their associated lanes, but a straight line connecting them might go off-road. In addition, connectivity of waypoints was not guaranteed, as the road could be completely or partially blocked. Finally, parts of the free-travel zones might not be drivable.
  • Whereas the RNDF was provided at least 24 hours in advance, the MDF was given to the team right before each run. After receiving the MDF, the team had only 5 minutes to load it and get the vehicle ready for an autonomous run. Again, this was to avoid (almost to an extreme) some of the precise annotation that was seen in the previous challenge.

Myself and the rest of Team Caltech at the DUGC National Qualifying Event (NQE)

A few noteworthy things about Team Caltech’s participation:

After our successful demonstration during the site visit, we proceeded to implement and test the advanced functionality needed for the NQE and the UFE. At the NQE, somewhat akin to what happened in the previous challenge, the complexity of system integration had its final say.

Subtle design bugs hampered one of our runs during the NQE, which prevented us from advancing to the final event.

As with most other participants, “Alice’s” navigation stack included:

  1. a path planner, which generated a path for “Alice” to follow,
  2. a safety system, which controlled acceleration based on obstacle-avoidance and deviations from the planned path, and
  3. a steering system, which limited steering rates at low speeds to protect the vehicle’s hardware.

These systems were tested extensively under a wide range of evaluation parameters; however, during the NQE, “Alice” experienced a set of conditions never seen before. In particular, the vehicle had to make a tight turn while merging into traffic, with a concrete barrier next to the road. This meant that the planned path contained a sharp turn to be executed while accelerating from a low speed, which the steering controller was unable to follow due to speed-dependent constraints. As a result, “Alice” deviated from the path and moved closer to the concrete barrier, triggering the safety system to slow down the vehicle, which in turn led to stricter limits on the steering rate. This cycle continued, causing “Alice” to be stuck at the corner of a sharp turn, dangerously stuttering in the middle of an intersection.¹
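
To illustrate the vicious cycle (and only that: every number, gain, and “dynamics” model below is invented purely for illustration, not taken from “Alice’s” actual software), here is a toy simulation of how a speed-dependent steering-rate limit and a clearance-based safety slowdown can feed each other:

```python
# Toy model of the failure mode described above; all values are made up.
DEMANDED_RATE = 0.35  # rad/s of steering demanded by the sharp planned turn

def max_steering_rate(speed: float) -> float:
    """Hardware-protection limit: less steering authority at low speed (assumed)."""
    return 0.05 + 0.1 * speed  # rad/s

def safety_speed(clearance: float) -> float:
    """Safety system: slow down as the barrier clearance shrinks (assumed)."""
    return max(0.2, clearance)  # m/s, with a small crawl-speed floor

speed, clearance, heading_error = 2.0, 2.0, 0.0
for step in range(8):
    # Tracking error grows whenever the turn demands more steering rate
    # than the limiter allows at the current speed.
    heading_error += max(0.0, DEMANDED_RATE - max_steering_rate(speed))
    clearance = max(0.0, clearance - 0.5 * heading_error)  # drift toward barrier
    # The safety system slows the vehicle down, which lowers the
    # steering-rate limit on the next pass: a self-reinforcing loop.
    speed = safety_speed(clearance)
    print(f"step {step}: speed={speed:.2f} m/s, "
          f"clearance={clearance:.2f} m, error={heading_error:.2f} rad")
```

Running it shows the speed collapsing to its floor while the clearance shrinks toward zero: the stuck, stuttering behavior in a nutshell.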

One of “Alice’s” successful NQE runs (video credit: Nok Wongpiromsarn, Team Caltech)

Let me highlight that not every unexpected scenario was unsolvable. For instance, the video above is from a successful run during the NQE, recorded from one of “Alice’s” onboard, roof-mounted cameras. The clip showcases:

“Around the 34th second, “Alice” was paused as it was heading towards a concrete barrier. DARPA then asked the team whether to let her continue the mission, a decision that was very difficult to make without any access to the vehicle — we did not even know whether the sensors or the computers were still running. We decided to take the risk, and “Alice” successfully completed the mission with quite a bit of difficulty to satisfy the obstacle clearance requirement due to her size.”

–Nok Wongpiromsarn¹, Systems Lead, Team Caltech

Takeaways and lessons learned

Although immensely fun, these competitions are particularly tough on students, who have to worry about a full academic workload on top of it all. In particular, the 18-month timeframe for the DUGC felt quite short, which resulted in most teams extending pre-existing approaches rather than inventing radically new ones. A notable exception was the development of the now-retired Velodyne HDL-64E sensor, which ended up being used by 5 out of the 6 vehicles that finished the race. Somewhat in keeping with “extending [our] pre-existing approaches”, and given the sensor’s low availability and high demand at the time, we chose not to include it in our perception stack, which relied on planar LIDARs and stereo cameras instead.

Raw data from the Velodyne HDL-64E 3D-LIDAR sensor

Teams’ approaches to perception varied far more than their approaches to planning and control. Like ours, most planning subsystems comprised a mission planner, a behavioral planner, and a trajectory planner. Broadly speaking, the mission planner computed a high-level route for the vehicle to complete the mission. The behavioral planner was a finite-state machine handling the expected states of operation during a particular mission: stay in a lane, proceed through an intersection, perform a U-turn, etc. The trajectory planner translated those states into an actual path for the vehicle to follow, and included different approaches for different driving situations. The paths were usually generated using variations of optimization-based (e.g., MPC) and graph-based (e.g., RRT, PRM) approaches.¹ These were then translated into low-level steering and acceleration commands by a controller typically based on pure pursuit and PID. More details about planning and control algorithms can be found in the article by Paden et al.
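
Since pure pursuit came up, it is compact enough to sketch in a few lines. The version below is the textbook bicycle-model form, not any team’s actual controller, and the example numbers (wheelbase, lookahead distance, target point) are arbitrary:

```python
import math

def pure_pursuit_steering(x: float, y: float, yaw: float,
                          target: tuple[float, float],
                          wheelbase: float, lookahead: float) -> float:
    """Textbook pure-pursuit steering law for a bicycle-model vehicle.

    Returns the front-wheel steering angle (rad) that drives the rear axle
    along a circular arc through the lookahead point `target` = (tx, ty).
    """
    tx, ty = target
    # Angle between the vehicle's heading and the line to the lookahead point.
    alpha = math.atan2(ty - y, tx - x) - yaw
    # Arc curvature through the target is kappa = 2*sin(alpha)/lookahead;
    # the bicycle model maps curvature to steering via atan(wheelbase * kappa).
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)

# Example: vehicle at the origin facing +x, lookahead point 5 m ahead
# and 1 m to the left (all numbers arbitrary).
angle = pure_pursuit_steering(0.0, 0.0, 0.0, (5.0, 1.0),
                              wheelbase=2.85, lookahead=6.0)
print(f"steering angle: {math.degrees(angle):.1f} deg")
```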

On November 3, 2007, at the former George AFB in Victorville, Calif., “Boss”, the entry from Carnegie Mellon’s “Tartan Racing” team, crossed the finish line with a run time of just over four hours. Nineteen minutes later, Stanford University’s entry, “Junior”, crossed the finish line. With four other robotic vehicles also finishing, it was proven to the world that autonomous urban driving could become a reality. This event showcased (arguably) for the first time the interaction between autonomous vehicles and other autonomous and human-driven vehicles in an urban environment.

DUGC final event’s highlights (video credit: DARPA)

Although it isn’t easy to quantify the effects of these DARPA challenges on the development and deployment of autonomous vehicle technology, in the time since these races, different defense and commercial applications have proliferated. The rapid evolution of the technology and rules for how to deploy it are being driven by the information technology and automotive industries, academic and research institutions, the Defense Department and its contractors, and federal and state transportation agencies.

For more info about the three races, I encourage you to read: “The DARPA Grand Challenge: Ten Years Later”. For the more technologically inclined, the article by Campbell et al. gives a very good overview of the approaches and challenges faced by participants from Cornell, Georgia Tech, MIT, and Caltech.

¹ Wongpiromsarn N., “The Journey of Autonomous Vehicles”

I want to take a moment to thank professors Richard M. Murray and Joel W. Burdick for giving me (someone with little prior research experience) a chance to join this wonderful adventure. I, of course, would need to state my appreciation to every. single. member. of Team Caltech for their kindness and hard work, but I will make an exception and highlight two individuals (grad students at the time) who made a world of difference in my career path:

Dr. Tichakorn (Nok) Wongpiromsarn and Dr. Noel du Toit

They were instrumental in getting me started in academia and research, and in cementing my interest in Computer Vision, Autonomous Vehicles, and Robotics.

And so, now you all know why this project represents how I truly fell in love with these fields of study!

Feel encouraged to give the article a 👏 and/or follow me here on Medium.
