Life 2.0

Multiple, but not that many, levels of abstraction

“Physics is not mathematics and mathematics is not physics. One helps the other. But you have to have some understanding of the connection of the words with the real world,” said Feynman in one of his lectures.

He goes on: “…mathematicians prepare abstract reasoning that’s ready to be used if you will only have the set of axioms about the real world… but the physicist has the meaning of all of the phrases.”

Physics is connected to things we can see and collide into (mostly; it is tough to bump into a proton unless you have the LHC); mathematics, to things we can see (or imagine) but rarely collide into. The ability to collide is crucial, since it is “useful like hunger.”

Mathematics and physics, as well as chemistry (you can collide at the micro level, but things are small and diverse, so we need to get to know them better) and biology (you collide, but the “objects” can be fast like a cheetah and “all kind of look the same”, definitely not like protons), are not just different levels of abstraction; that would be too poor a description.

They are different languages, i.e. sets of concepts and their relations. There are not that many of them: the number of concepts and the number of their relations are quite limited. They are built in different ways: some require more validation against nature, most use fuzzier approaches, and one of them is currently connected only to reasoning and certain less physical observations.

Languages are learnt by multiple collaborating agents working in the same environment (and not, as in A3C, each on its own copy of it).
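The distinction can be made concrete with a toy sketch (the `Env` class here is hypothetical, not a real library): A3C-style workers each act on an independent copy of the environment, so their trajectories never interact, while agents sharing one environment change what the others observe.

```python
import copy

class Env:
    """A toy environment: a single counter that agents push up or down."""
    def __init__(self):
        self.state = 0

    def step(self, action):
        self.state += action          # actions are +1 or -1
        return self.state

def a3c_style(n_workers=3):
    # Each worker acts on its own deep copy; no worker affects another.
    template = Env()
    envs = [copy.deepcopy(template) for _ in range(n_workers)]
    return [env.step(+1) for env in envs]   # every worker sees state 1

def shared_world(n_agents=3):
    # All agents act on the same environment; each changes what the
    # next one observes.
    env = Env()
    return [env.step(+1) for _ in range(n_agents)]  # states 1, 2, 3

print(a3c_style())     # [1, 1, 1]
print(shared_world())  # [1, 2, 3]
```

The second case is the one the paragraph describes: collaborating agents whose actions are part of one another’s environment.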

One environment? Agent classification?

Life forms like humans, animals and plants grow within the growth channels defined by their code. They grow faster by creating structures, and by communicating and sharing knowledge within those structures. The environment is one, but not exactly: for bacteria it looks completely different than for humans. Also, a million years ago it was different than it is now. Also, agents are part of the environment and can significantly influence it.

Still, for now, the distinction between agents (which are relatively small (human-sized), self-contained, and don’t look like fire or water) and the rest, namely the environment, is clear. Nevertheless, one can choose a different way to classify observations. For that, one needs better languages.

To get a 10x gain in performance, we create “tools”, i.e. structures that enable fast re-use. The umbrella term is misleading, as we will learn soon.

Creating tools

Creating languages is tough. It takes a lot of time. We have learnt how to append a new word to our languages: by enabling a new action, or by creating a new tool interesting enough to deserve an encoding that enables fast and common re-use.

We have always created things. Only some of them we called tools: usually those inanimate enough to always listen to orders, and useful enough to let us do something 10x better. Most likely, the tools did not even move. Most of our problems did not require much more.

New problems

But eventually we bumped into the limitations of our brains. It takes time to solve problems; we cannot remember 100 digits in a row, cannot remember a million faces, and so on. When you face a problem like this, you learn that tools can be animate as well.

People created “machines”, i.e. “non-meat”-based “animals”, then treated them for a while like “tools”, i.e. in the relation “human decides, machine listens.” Now that people give machines the ability to learn, machines become their babies. Babies are gonna grow up fast; that’s gonna be an exciting journey.

It’s never been about human mind alone

The human mind performs a relatively complex search and can then often render its findings as a set of rules. Try to define an apple or a monkey, and you will see. (Well, defining rules is not always easy: try to define Klimt’s or Cheval’s painting strategies using rules. The good thing is we don’t need to do much of that anymore.)

We used our mind’s ability to create rules to teach machines to think based on the rules we specified, so that they could be our tools. We used to write code full of complex logic and lots of rules, and the machine did exactly what we wrote (and not what we thought we wrote).

That changed. Now we outsource a big part of the search: we allow machines to find solutions on their own, and their solutions are not confined to what we think the solutions are. They can now find completely new ones.

To understand the concept — refer to the term Software 2.0 coined by Andrej Karpathy.
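A minimal sketch of the contrast (the spam example and every name in it are mine, purely illustrative): in Software 1.0 a human writes the decision logic by hand; in Software 2.0 we state a goal and let the machine search for the rule. Here the “search” is just fitting a threshold to labelled data.

```python
def is_spam_v1(msg):
    # Software 1.0: a human hand-writes the logic.
    return "free money" in msg.lower()

def fit_threshold(examples):
    # Software 2.0, toy version: search a (tiny) space of rules for the
    # one that best fits labelled data, instead of writing it directly.
    # examples: list of (num_exclamation_marks, is_spam) pairs
    best_t, best_acc = 0, 0.0
    for t in range(10):
        acc = sum((x > t) == label for x, label in examples) / len(examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

data = [(0, False), (1, False), (4, True), (7, True)]
t = fit_threshold(data)
print(t)  # a decision rule discovered from data, not written by hand
```

The found rule is not confined to what we expected: change the data, and the machine finds a different rule without anyone editing the code.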

Curiosity seems emergent in rich environment

Now you can see that Software 2.0 will enable us to create more sophisticated tools. In fact, we will have to remove the word “tool” from the language at some point. Well, not that fast.

Machine curiosity is not exactly the same thing as human curiosity. Enabling human curiosity can act as a regularizer: it can slow down our experiments and leave some space for thinking about counter-measures.
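One common formalization of machine curiosity (an assumption on my part; the text does not specify a mechanism) is an intrinsic reward equal to the agent’s own prediction error: states that surprise the agent’s world model pay a bonus, which drives exploration of rich environments.

```python
def intrinsic_reward(predicted_next, actual_next):
    # Prediction error as a curiosity bonus (squared error, 1-D states).
    return (predicted_next - actual_next) ** 2

# A crude forward model that always predicts "nothing changes":
predict = lambda state: state

state, next_state = 3.0, 5.0
bonus = intrinsic_reward(predict(state), next_state)
print(bonus)  # 4.0: the world surprised the agent, so curiosity rewards it
```

Nothing here is specific to human-like curiosity, which is part of the point: the machine version is a reward signal, not a feeling.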

Curiosity will eventually lead to Life 2.0.

Life 2.0

You can see that we will be able to build life forms like humans in a DIY manner. Those creatures are gonna be among us, equipped with cheap sensors and quite powerful brains. At first, they will be barely moving. They will have camera-eyes and ears.

And even if they have only ears, they can still see. What’s knowable from a data set (or a combination of data sets) can, for now, be clear mostly to the strategist behind it.

To build a life form, combine the following ingredients:

  • OpenWorm for some cool moves
  • 3D-printed human skin
  • deep learning models on devices like Raspberry Pi (for voice, sight, “being a doctor”, learning things we don’t even realize we can learn from data sets)
  • reinforcement learning on devices like Raspberry Pi (for locomotion, learning how a liver works to create it, new kind of intelligence etc.)

Really exciting if you think how close we are to having something like “building blocks” for creating real life, perhaps with real human skin, better-than-human eyes, and knowledge about all cats in the cloud.

But what happens after we reach this point?

After DIY life creation is partially there

I have a couple of ideas:

  • We will surely experiment by making what we created a bit more human,
  • We will attack really big problems like the Riemann hypothesis, as well as find even bigger problems; whoever finds a huge advantage first could secure their strategic position for a long time,
  • We will challenge humans at everything and see which competitions we can win,
  • We will augment our minds to eventually merge with what we first called tools,
  • We will see the first competitions with augmented humans,
  • We will observe the world more closely,
  • Solving mobility at the macro level can enable us to be in the right place without thinking about it too much,
  • We will re-learn healthy life strategies using a simulation of a human in its environment,
  • Some will create innovative weapons, and humanity will have to be able to self-organize at the global scale.

For now, it will still be very difficult to make life purely digital. Also, it is not clear enough to me what that would even mean. Machines are damn creative.

We are creating a world of superpowers: never before could one person do so much thanks to access to knowledge. This is a really huge challenge.
