Why we need Tesla cars

Sofía Rincón
5 min read · Jun 2, 2018


Artificial intelligence applied to the real world, as in Tesla's Autopilot cars or Google Maps, has three levels of action:

  • Contingency
  • Obedience
  • Evaluation

One of the most important contributions of Aristotle's philosophy was the observation of this fact: there is no science of accidents. The accident is the non-being of a possible. To understand this phrase, you must think about the meaning of "possible":

Something is possible because its existence is given in potency. When we talk about the possible, we are not yet talking in terms of necessity. When necessity occurs, it is because something is no longer in potency: it has become a fact. So the "non-being of a possible" means that the final fact of the event was an unexpected necessity.

When our intelligence contemplates the different agents and consequences of something in order to make a decision, it weighs the different possibles it can discern (facts in potency) so as to choose the act with the best consequence. If something happens that we could not have known how to avoid, it is a non-being of a possible of the situation: obviously it is (it is a fact), but it was not within our parameters of action. Here lies the dividing line between accident and negligence.
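A minimal sketch may make this concrete. Everything below is hypothetical illustration, not anything drawn from a real autopilot: an agent enumerates the possibles it can discern, scores each one, and picks the action with the best evaluated consequence. Whatever it cannot enumerate stays, by definition, outside its parameters of action — the realm of the accident.

```python
# A hypothetical sketch of deliberation: enumerate the "possibles"
# the agent can discern, score each one, and pick the action with
# the best expected consequence.

def choose_action(possibles, evaluate):
    """Return the discernible action with the best evaluated outcome.

    possibles: iterable of candidate actions (facts "in potency")
    evaluate:  function mapping an action to a numeric score
    """
    best_action, best_score = None, float("-inf")
    for action in possibles:
        score = evaluate(action)
        if score > best_score:
            best_action, best_score = action, score
    return best_action

# Example: a driving agent weighing the options it can discern
# (the actions and scores here are invented for illustration).
actions = ["brake", "steer_left", "steer_right", "keep_course"]
scores = {"brake": 0.9, "steer_left": 0.4, "steer_right": 0.2, "keep_course": 0.1}
print(choose_action(actions, scores.get))  # -> "brake"
```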

One of the great goals of an AI is the capacity to contemplate the greatest number of possibles, faster and more accurately than humans. But, as I explain in my introductory article on this matter (here: http://www.desdeelexilio.com/2018/06/02/por-que-financiar-tesla-motors/), people distrust machines, and that is because of contingency. As Joyce said, we live in an epiphanic world, constantly open to contingency; so if we, as humans, are exposed to it, machines are too, because machines coexist with us.

My defense of machines with regard to accidents is this: the circumstances are the same, but machines are faster and more accurate than we are.

Elon Musk, JB Straubel and Martin Eberhard. Founders of Tesla Motors.

The second point: obedience.

Machines are faster and more accurate: yes. BUT it is also true that they follow our instructions; concretely, the program's instructions. If something was not foreseen because it was a contingency, that is not a problem: it is part of life. But if the reason for not reaching the target was a lack of foresight, that is a question of negligence. Machines never fail; humans, or chance, are the agents behind a machine's failure. The machine, by itself, never fails when it comes to execution, basically because machines ONLY know about execution. There may be many different types of programming, but all of them end in an imperative basis.
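A tiny, invented example of this distinction: the rule below is wrong, but the machine executes it perfectly. The threshold and the function are mine, made up for illustration; the point is that the "failure" belongs to the author of the instruction, never to the executor.

```python
# The machine only knows execution: it applies the rule it was given,
# perfectly and indifferently. Suppose the human author negligently
# wrote "<" where "<=" was intended -- the bug is the programmer's,
# and the machine executes it flawlessly.

BRAKING_THRESHOLD_M = 30  # hypothetical value, chosen for illustration

def should_brake(obstacle_distance_m):
    # Intended rule: brake when the obstacle is 30 m away or closer.
    # Rule as written: strict "<" misses the boundary case.
    return obstacle_distance_m < BRAKING_THRESHOLD_M

print(should_brake(29.0))  # True  -- executed exactly as instructed
print(should_brake(30.0))  # False -- the human's bug, faithfully executed
```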

And that is why I have written that third point: evaluation. That is our role. Humans must evaluate. Instructions are useless if there is no evaluation of them. And if there is an evaluation, there is a creation. Our role is to create structures of thought and to evaluate them, and machines are one of the ways to do it.

Machines give us a relentless exposure of our failures (when the instruction concerns machines specifically): if a program does not work, it is because it is badly written. The machine gives humanity a wonderful way to exercise self-criticism, and the reason for this is simple: we cannot blame the machines. Blaming the machines would be not only illogical but also stupid.

The creation of Tesla Motors was a way to develop and improve our knowledge of mental structures. The objective of a car is simple: to go from a point A to a point B, and, as Patrick Winston says, "simple ideas are the most powerful". The way to reach the car's objective was broken into a great quantity of agents, choices, conditional appreciations… This simple idea of "going from A to B" was transformed into a world of complexity, and to understand complexity we need to find the simple ideas that compose it.
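To see how quickly "A to B" fans out into choices, consider a toy route search over a road graph I have invented for the purpose (nothing here resembles Tesla's actual software). Breadth-first search reduces the grand objective to one simple, repeated decision: from here, which neighbor do I try next?

```python
from collections import deque

# A toy model of "going from A to B": a hypothetical road graph and a
# breadth-first search that decomposes the objective into many small,
# identical choices.

ROADS = {  # invented road graph, for illustration only
    "A": ["C", "D"],
    "C": ["B"],
    "D": ["C", "E"],
    "E": ["B"],
    "B": [],
}

def route(start, goal):
    """Return a list of waypoints from start to goal, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbor in ROADS.get(path[-1], []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

print(route("A", "B"))  # -> ['A', 'C', 'B']
```

Even this trivial model already contains agents, choices, and conditional appreciations; a real car multiplies each of them by orders of magnitude.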

The way to do it is the creation of models, and if humans focus on solving this kind of "simple" problem without negligence (badly written AI programs), humanity will have laid the first stone of the castle of this new era at the logical level. New science-thought-structures will be built in the deepest abstract areas.

Steve Wozniak at the Nordic Business Forum

Obviously, all of this is too big for us not to be impatient, but to keep learning and innovating in this way we need temperance. This is what Steve Wozniak reminded Elon Musk of at the Nordic Business Forum, and I think he is right. I think, as I have argued in this article, that humans need to meet the goals Tesla has proposed for its Autopilot cars, but I also think that hurrying to do it could be harmful, for two reasons:

  • First, deploying badly written autopilot cars in society (the fruit of haste) would cost human lives.
  • Second, being in a hurry is the opposite of the attitude we need in order to learn and innovate. If we can learn quickly, that is wonderful; but if we must slow the launch of a product to make it (quasi)perfect, keeping an impatient attitude is a mistake.

Don’t forget that intelligence is building models; models must be carefully made, because intelligence is creating structures, and a bad structure of thought ends up collapsing.
