Introduction: How to Build an ASI
First things first: what is an ASI? ASI is short for Artificial Superintelligence, which informally means any non-biological thing (in practice, a computer, embodied or not) that's more intelligent than even Albert Einstein, John von Neumann and your personal favorite genius. Intelligence here refers to an agent's ability to achieve its goals in a wide variety of environments; we will go deeper into this definition in a later post, but it will do for now. Given our brains' small size and low signaling speed, it seems unlikely that peak human intelligence is the maximum level physically possible. We'll go into more detail about this in a later post as well.
So why is ASI interesting? Apart from superintelligence being fascinating in its own right, there's a big problem that humanity has largely overlooked so far. Once you realize that humans dominate Earth because of their superior intelligence compared to other animals, you can see that an ASI (being more intelligent than we are) might very well have the capacity to take control. Again, we will get into this argument in more depth in a later post — for now, note that an ASI does not by default share our moral values! It has higher intelligence, not more empathy or anything like that (not by default, that is). It will have its own goals it wants to realize, and if those collide with our goals of, you know, staying alive and all, well, that's too bad for us. Remember that an ASI will be — by definition — very good at achieving its goals.
But what if an ASI does share our moral values? Then it will be very good at achieving our goals! It would want to achieve things like eradicating hunger, cancer, aging, war, and terrorism. It would want humanity to colonize the Milky Way. It would want humanity to flourish like never before — and succeed. We call the challenge of getting an AI to share our goals AI Alignment. More will follow on AI Alignment in this publication, too.
Given the above paragraphs, I hope it's clear that there are strong incentives for companies, other organizations and even governments to build an ASI — that is, one that does what its maker wants it to do, like making a profit for her company. I hope it's also clear that building an ASI that acts according to our moral values is harder than 'just' building an ASI. And that's basically the problem.
I’ll be honest (and obvious): I don’t know how to build an ASI — let alone one that shares our values (what are our values even?). Neither do you, neither does anybody else, it seems. But, we have to do our best to learn this. Given the huge incentives at play, it seems very likely ASI will be created at some point in the not-too-distant future, if it’s at all possible for humans to do so. As we will see in later posts, ASI can be an existential risk to humanity, but also, if done correctly, a way for humanity to flourish like never before. I want the latter to happen. I assume you do, too.
So, how do we learn how to build an ASI? Well, even that question is hard to answer. Expect this introduction to be updated along the way! However, I can at least give an idea of what to expect. Since this might be the most important problem humanity has ever faced, and the hardest, we need to think about it very carefully. Expect posts on logic, Bayesian reasoning, rationality, decision theory, game theory, etc. These help us think carefully, but can also teach us how an ASI would and should think. Wherever possible, we'll put our new knowledge into practice in Python code. This has multiple advantages: for one, it will deepen our understanding of the subject. It will also help us become better programmers, which is great in itself; more importantly, an ASI will ultimately run on computer code, so we'll have to know how to code to build one. Since we want the ASI to be beneficial, we're going to dive into morality as well. Expect tutorials on utilitarianism, for example. Again, where possible, we'll put our understanding into practice in code. Of course, readers can also expect direct posts on ASI, like an introduction to what we know about AI Alignment. And since, in the end, intelligence runs on the laws of physics, physics will find a place here as well.
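As a small taste of that put-it-into-practice approach, here is a minimal sketch of a Bayesian update in Python. The function name and the numbers are purely illustrative (not from the series itself); the formula is just Bayes' rule.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | E) via Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    # Total probability of the evidence, over both hypotheses.
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# Illustrative numbers: a hypothesis with a 10% prior, and evidence that is
# 90% likely if the hypothesis is true but only 20% likely if it is false.
posterior = bayes_update(0.1, 0.9, 0.2)
print(posterior)  # ~0.333: the evidence roughly triples our credence
```

A few lines like these, next to the prose, make the math concrete — which is exactly the kind of exercise to expect in later posts.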
So, let’s start to learn how to build an ASI!