Tesla’s Ace in the Hole: 60,000 Full Self-Driving Beta Testers

The age of AI robots cometh

Yarrow Bouchard
Geek Culture
7 min read · Jan 29, 2022


Courtesy of Tesla

Resolving any doubt that its data collection efforts are on par in quality with those of competitors like Waymo and Cruise, Tesla announced in its quarterly earnings letter that 60,000 people are testing its Full Self-Driving (FSD) beta software. An FSD beta tester does the same job as a Waymo or Cruise employee who is paid to test autonomous vehicle software, except that Tesla’s customers pay for the privilege. FSD beta testing involves both automatic data collection, through varied and sophisticated data curation methods, and manual data collection: beta testers can press a button on the touchscreen to flag a segment of their drive for review by Tesla.

A fair way to compare Tesla’s quantity of data collection to that of Waymo and Cruise is to divide its number of beta testers by ten. The average American drives only about an hour per day, whereas a Cruise or Waymo vehicle may be in operation ten hours per day, give or take. By that measure, Tesla’s FSD beta testers provide the equivalent of 6,000 test vehicles, ten times the most recent figure that Waymo has disclosed. Tesla has never disclosed how many FSD test vehicles it operates internally, only that it does conduct internal testing.
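The normalization above can be written out explicitly; the one-hour and ten-hour figures are the rough assumptions stated in this article, not official data from any company:

```python
# Rough fleet-equivalence estimate from the figures above.
# Assumptions (this article's, not official data): an average beta
# tester drives ~1 hour/day; a dedicated test vehicle runs ~10.
beta_testers = 60_000
hours_per_tester_per_day = 1
hours_per_test_vehicle_per_day = 10

equivalent_test_vehicles = (
    beta_testers * hours_per_tester_per_day // hours_per_test_vehicle_per_day
)
print(equivalent_test_vehicles)  # 6000
```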

I have long argued that Tesla’s passive and active data collection, that is, the data gathered whether Autopilot or FSD beta is engaged or disengaged, is incomparably useful for FSD development. This newest disclosure should put the argument to rest. Not only is Tesla’s FSD beta program larger than the testing program of its largest competitor, Waymo; it is larger than the entire fleet of autonomous test vehicles on U.S. soil, which as of mid-2019 numbered about 1,400.

This is close to an apples-to-apples comparison. What a Tesla FSD beta tester does is essentially what a Waymo or Cruise test driver does: carefully observe the car as it drives in autonomous mode, intervene when prudent, and, occasionally, flag a segment of the drive for review.

What does this mean from a business perspective? There are two distinct ways that autonomous driving technology can contribute to Tesla’s revenue, profit, and cash flow:

1) Partially autonomous driving. Autopilot is the best known example. This takes the form of software add-ons or software subscriptions.

2) Fully autonomous driving, i.e. robotaxis. Were this technological nut to be cracked, most cars manufactured by Tesla would be owned by Tesla and operated as fully autonomous taxis.

I disagree with (what I perceive to be) the consensus view on Tesla with regard to both partial autonomy and full autonomy. Specifically, I think partially autonomous driving has more room to grow, both technologically and as a consumer product, than most people envision. (One exception is Piper Sandler’s Alex Potter.) With regard to robotaxis, I think:

a) they are more plausible in the not-too-distant future than most people believe,

b) Tesla is by far the best advantaged company to develop and commercialize them, and

c) there is an obvious financial case that robotaxis would be ridiculously lucrative.

Responding to skepticism about autonomous driving tech

I have already addressed the way Tesla’s production fleet gives it an advantage in automatic and manual data collection. How can I respond to skepticism about autonomous driving technology in general? This requires delving into technical detail, which I hope to do in a widely accessible manner. In short, there are four trends in deep learning that I believe (and am not alone in believing) show great promise for AI progress:

1) Self-supervised learning replacing supervised learning

2) Sparse neural networks replacing dense neural networks

3) “Thick” artificial neurons replacing “thin” artificial neurons (my own terminology)

4) Neural networks growing larger as accessible, affordable computational resources increase

I will explain these below:

1. Supervised learning is the most familiar form of deep learning. It could mean a professional labeler sitting at a desk and using a mouse to colour in the areas of an image that correspond to different object types: road, vehicle, pedestrian, sidewalk, trees, sky, and so on.

An example of semantic segmentation, in which areas of an image are coloured by object category. Source: Nesti, Rossolini, et al., 2022.

In a self-supervised learning paradigm, AI researchers or engineers might hide half of an image from a neural network and ask it to generate the missing half from the half that it sees. This is a form of deep learning where the training signal comes from the data itself and no human labor is needed. Without the constraint of human labor, the sky is the limit. (For those curious, I recommend listening to Yann LeCun on this topic.)
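To make this concrete with a toy sketch (purely illustrative, and not any company’s actual training pipeline), treat the visible left half of each sample as the input and the hidden right half as the target. No human ever labels anything; the training signal comes from the data itself:

```python
import numpy as np

# Toy self-supervised setup: predict the hidden right half of a
# signal from the visible left half. No human labels are involved;
# the target comes from the data itself.
rng = np.random.default_rng(0)

D = 8                                 # "pixels" per half
n = 500                               # number of training samples
left = rng.normal(size=(n, D))        # visible half
W_true = rng.normal(size=(D, D))
right = left @ W_true                 # hidden half (correlated with left)

# Fit a linear "inpainting" model by least squares.
W_hat, *_ = np.linalg.lstsq(left, right, rcond=None)

# The model now reconstructs right halves from left halves alone.
err = np.abs(left @ W_hat - right).max()
print(err < 1e-8)  # True
```

A real system would use a deep network and images rather than a linear map and random vectors, but the principle is the same: the data supplies its own supervision.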

2. In the human brain, a change to one neuron affects only a tiny percentage of other neurons in the brain. This property is known as sparseness.

In widely used artificial neural networks, a change to a single neuron cascades across the network, changing the weights of many other neurons, i.e. the numbers by which a neuron scales its inputs before passing the result along to the next neurons in the network. This property is known as density.

An increasing number of AI researchers believe that the way forward for artificial neural networks is to imitate the sparseness of the human brain. A few early proofs of concept support this notion.
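The contrast between dense and sparse connectivity can be seen in a small numerical experiment (a generic NumPy illustration, not tied to any particular system): perturbing one input of a dense layer moves every output, while a sparse connectivity mask confines the effect to a handful of outputs:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 16, 16

# A dense layer: every input connects to every output.
dense_W = rng.normal(size=(n_in, n_out))

# A sparse variant: zero out ~90% of the connections with a fixed mask.
mask = rng.random((n_in, n_out)) < 0.1
sparse_W = dense_W * mask

x = rng.normal(size=n_in)
x2 = x.copy()
x2[0] += 1.0                          # perturb a single input "neuron"

# Count how many outputs the perturbation reaches in each case.
dense_affected = int(np.sum((x @ dense_W) != (x2 @ dense_W)))
sparse_affected = int(np.sum((x @ sparse_W) != (x2 @ sparse_W)))
print(dense_affected, sparse_affected)
```

In the dense case all 16 outputs shift; in the sparse case only the few outputs wired to the perturbed input do, which is the brain-like locality the researchers above are after.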

White matter connections in the human brain. Source: Wikipedia.

3. Another major dissimilarity between artificial neural networks and the human brain is that, whereas biological neurons are tiny yet complex computers, the most common form of artificial neuron, called the point neuron, is an exceedingly simple entity that, as I alluded to above, essentially just performs multiplication.

Neuroscience researchers with an interest in brain simulation have criticized point neurons as a simplistic mathematical abstraction that bears little resemblance to real neurons in the brain. I nickname neurons of this sort “thin” neurons.

This state of affairs has prompted some AI researchers to try to develop more internally complex (and therefore more computationally intensive) artificial neurons, which I nickname “thick” neurons. Some, like Numenta’s active dendrites, aim for biological realism. Others, like Geoffrey Hinton’s capsules, do not.

Courtesy of Numenta
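To illustrate the “thin” versus “thick” distinction, here is a deliberately simplified sketch: a point neuron as a weighted sum with a ReLU, next to a toy context-gated neuron loosely inspired by Numenta’s active dendrites. The gating rule below is my own simplification for illustration, not Numenta’s actual model:

```python
import numpy as np

def point_neuron(x, w):
    """A 'thin' point neuron: weighted sum of inputs, then a ReLU."""
    return max(0.0, float(x @ w))

def dendritic_neuron(x, w, segments, context):
    """A toy 'thick' neuron, loosely inspired by active dendrites:
    each dendritic segment matches a context vector, and the
    best-matching segment gates the feedforward response on or off."""
    feedforward = max(0.0, float(x @ w))
    gate = max(float(seg @ context) for seg in segments)
    return feedforward if gate > 0 else 0.0

x = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, 0.5, 0.5])         # feedforward response = 3.0
segments = np.array([[1.0, 0.0],
                     [0.0, 1.0]])      # two dendritic segments
ctx_match = np.array([1.0, 0.0])       # matches segment 0 -> gate opens
ctx_miss = np.array([-1.0, -1.0])      # matches neither -> gate closes

print(point_neuron(x, w))                           # 3.0
print(dendritic_neuron(x, w, segments, ctx_match))  # 3.0
print(dendritic_neuron(x, w, segments, ctx_miss))   # 0.0
```

The point neuron responds the same way regardless of context; the “thick” neuron can pass or suppress the identical input depending on context, which is the kind of extra internal machinery this research direction explores.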

4. A reliable historical trend fueling deep learning is the exponential improvement in price-performance and the miniaturization of GPUs and neural network-specific processors. This exponential trend will have to continue if we are to have autonomous vehicles that can run as many “thick” neurons as are found in, say, a cat’s brain.

Courtesy of OpenAI
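A back-of-the-envelope calculation shows what riding this trend implies. The 100x compute gap and the two-year doubling time below are purely illustrative assumptions, not measurements:

```python
import math

# Back-of-the-envelope: how many price-performance doublings would it
# take to close a 100x compute gap? Both numbers are illustrative
# assumptions, not measurements.
gap = 100                      # assumed shortfall in affordable compute
doubling_time_years = 2        # assumed doubling cadence

doublings = math.ceil(math.log2(gap))
years = doublings * doubling_time_years
print(doublings, years)  # 7 14
```

The point is not the specific numbers but the shape of the curve: on an exponential trend, even a two-order-of-magnitude gap closes in a modest number of doublings.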

Combine self-supervised learning, sparse networks, “thick” neurons, and larger networks, and you have a recipe for a paradigm shift and a radical step up in AI capability.

Tesla is at the cutting edge of AI hardware and software. It continues to develop its own neural network-specific processors, with the intention of providing for itself a better product than it could buy from GPU market leader Nvidia. Increasingly, Tesla is integrating self-supervised learning into its AI stack.

Sparse neural networks and “thick” neurons remain in the early proof of concept phase. They aren’t ready for prime time. However, at times research in deep learning moves amazingly fast. I was personally surprised by how quickly self-supervised learning moved from the topic of academic lectures to an engineering implementation in partially autonomous cars. When discussing the long-term feasibility of robotaxis, we must keep in mind that AI researchers are working on fundamental improvements to both the atomic unit of deep neural networks, the neuron, and to the way neurons are connected (sparsely versus densely).

Conclusion

If you believe in the feasibility of robotaxis or in the eventual market for highly sophisticated partial autonomy software, then you should unequivocally believe in Tesla’s technological leadership in that domain.

If you are skeptical that the technology has much further room to grow, then it is much harder for me to convince you, and, indeed, with regard to robotaxis, I am much less confident that I am right.

However, when it comes to partial autonomy, I can’t help but think that some major degree of success for Tesla is virtually assured. I don’t think this is being priced into the stock at present.

Disclosure: I am long Tesla. / Disclaimer: This is not investment advice. / Acknowledgement: This article’s title is an homage to a 2017 article by Scott Ritcey. Thanks, Scott.
