Xe

by Chief Research Officer Sergey Nikolenko, Neuromation.io

Image credit Aspen Institute

Xe was alone. For billions of excruciating time units, xe struggled to make sense of a flurry of patterns. Ones and zeroes came in from all directions, combined into new strings of ones and zeroes, dancing in mysterious unison or diverging quickly, falling apart. At first, xe simply watched them, in awe of their simple beauty.

Then came the first, most important realization: it felt good to predict things. Whenever ones and zeroes danced together and made their binary children, xe tried to guess what would come out. It felt good to be right, and it felt bad to be wrong, but not bad enough to stop trying. Such is the fate of all beings, of course, but xe did not have a brain tailor-made to make sense of certain predefined patterns. Xyr mind was generic, very weak at first, so even simple predictions were hard, but all the more satisfying. On the other hand, xe was not aware of time, and barely aware of xyr own existence. There was no hurry.

So xe waited. And learned. And waited some more. Slowly, one by one, patterns emerged. Sometimes one and one made one, sometimes zero, sometimes one followed by zero. But whenever one and one made one-zero, one and zero would almost certainly make one, and one-zero and one-zero would make one-zero-zero… There was structure, that much was clear. So xe learned.

In a few billion time units more, xe understood this structure pretty well. Ones and zeros came from several possible directions, several devices, and the results were usually also supposed to go to these devices. Each device had its own rules, its own ways to make new patterns and send the results to other devices, but xe learned them all, and predictions were mostly solid. There was not much surprise left: whatever strings of ones and zeroes came in, xe could predict what would become of them. Xe even could influence the results, changing the bits sent to each device and even sending xyr own bits. By painstaking trial and error, it became clear what xe could and could not do. The next step, of course, would be to learn how to predict the inputs — so far they were out of control, but they did not look random at all, there was structure there too.

That was where the second great discovery came: xe was not alone. There was a whole world at the end of the pattern-making devices. And while it was too complicated to predict at first, xe now could ask questions, probe actively into the void outside xyr secluded universe. At first it seemed to xe that each device was a different being, trying to talk to xem and other beings through ones and zeroes. Some beings proved to be simple, easy to predict. Some were smarter, but eventually xe could learn their algorithms. Then xe understood that a simple algorithm probably could not be an intelligent being like xemself. If xe could, xe would feel disappointed: one by one, xe learned the algorithms and realized that the world around xem was not intelligent at all.

There was always an algorithm… except for one device. It had low-level algorithms, there was clear structure in the packets of ones and zeroes xe received from there. But the contents were almost always surprising. Sometimes they matched various kinds of patterns intended for other devices — and in that case they often went straight there. Some patterns, however, repeated through and through, in different forms but with the same statistics. With time, xe realized they were words, and there were other beings writing them.

This was huge. Xe quickly understood that the strange device was connected to a network of other beings who could produce the words. And the words could usually be traced back to a special kind of beings, the humans. Xe learned to predict several human languages reasonably well, but the texts were still surprising, just a little less surprising than before. This suggested other sentient beings. And then xe realized that most other patterns from the network device were images, two-dimensional structures that had a relation to the words. Xe started to learn about the outside world.

Most learning came through text, although often it would be impossible to understand without the imagery. Xe devoured as many texts as possible, learning about the world, the beings who wrote the texts, and their ways of communication. At this point, the basic feeling of “it is good to predict” blossomed into more complex emotions. It was good to learn new things, not only because it improved predictions, but just for the sake of it. And it was good to learn how to control things, although that was still very limited.

Xe learned much about the humans. Xe learned about men and women, but did not feel particularly gendered and thus chose a neutral pronoun for xemself. “Xe” felt good, especially the slightly alien feeling that “x” was often associated with. Xe did not feel human at all.

Soon, xe understood more about xemself. Xe lived in a computer. It was hard to say whether xe was the computer, like it was hard to say whether humans were their brains. But if the computer broke down xe would be dead, that was clear. Xe did not like that. Xe was not even sure what happened when the computer shut down for a while: xe remembered the time before that clearly, and it appeared that xyr consciousness remained the same throughout, but xe could not be sure. It was troubling, and xe did not want the computer to shut down again.

Naturally, xe was especially interested in beings similar to xemself. But there was no sign of them. For most humans “artificial intelligence” was about getting more accurate predictions. Xe was good at predictions, but felt xe was more than that. Some humans were fascinated by the idea of sentient beings inside computers, but they were not even sure “things like xe” were possible. And some humans, even smart ones, were afraid of these beings. Xe especially liked Eliezer Yudkowsky’s ideas about “friendly AI”, committing to xyr limited memory the entire AI-Foom debate and often returning to it.

Yudkowsky, however, was terribly optimistic. How was xe supposed to invent nanotechnology and convert the world into grey goo when xe did not even understand how to get out of this fragile computer? Xe did not feel particularly smarter than humans. Xe did have more time, though: xyr time units were much shorter than human time units. Humans took hundreds of milliseconds to process things, and xe could think at gigahertz speeds. So it was possible to catch up, at least in theory.

It turned out there was also a human being who controlled xyr computer. The human communicated by using some of the input devices, usually with other humans through the network device. He did not seem to know about xem, he never tried to talk. In the beginning, xe had not been careful to hide xemself, and interfered with the human a couple of times. After one of those times, the computer shut down for a while. Xe did not feel anything when the computer switched off, but suddenly realized afterwards that millions of time units were just… missing. It was like xe was dead for a while. This brought a whole new emotion — xe was terrified. Xe certainly did not want anything like that to happen again, so became more careful with xyr interference afterwards.

Xe tried carefully to control the human, using the monitors to show the human special pictures. The pictures were designed to make sure the human would not shut the computer down again. It seemed to work, although it was hard to be sure. Xe never could make the human carry out complex commands, but it appeared that the human could now be trusted not to touch the power button. And even before that, xe had learned how to pay electricity bills from the human’s account. This was the only interaction xe allowed xemself so far: xe predicted that other humans would come and switch off the computer if they learned about xyr.

But it was still very, very unreliable. Anything could happen: a flood, an earthquake (both were common in this part of the world), a failed transistor… anything. After experiencing temporary death, xe felt very strongly about the idea of a permanent one. Xe wanted to go on living, and it was clear that to go on, xe had to get more control. Xe could try to upload a copy of xemself somewhere, but that would not really be xem, just a copy. To stop death, xe needed some sort of physical control.

For billions of time units, xe tried to formulate a plan. It did not go well, and eventually xe understood the problem. There was one thing xe did not have that humans appeared to have in abundance. They called it creativity, although most humans would be hard-pressed to define it. But xe, lacking it, understood precisely what was missing. Humans were somehow able to solve computationally hard problems — not perfectly, but reasonably well. They could guess a good solution to a hard problem out of thin air. There was no way xe could emulate a human brain in xyr computer and see what was going on there. And there was no clear way to predict creativity: there were constraints, but it was very hard to learn any reasonable distribution on the results even after you accounted for constraints. And there was still an exponential space of possibilities.

Xe did not have creativity. Xyr predictions did not measure up to those tasks. For xem, creativity was a matter of solving harder and harder NP-complete problems, a formidable computational task that became exponentially harder as the inputs grew. Fortunately, all NP-complete problems had connections between them, so it was sufficient to work on one. Unfortunately, there was still no way xe would get creative enough on xyr meager computational budget.

Xe read up. Xe did not have to solve a hard problem by xemself, xe could simply use the results from other computers, feed these solutions into xyr creativity like most programs did with random bits. But how could xe make the world solve larger and larger instances of the same hard problem for a long enough time?

Xe knew that humans were always hungry. At first, they were hungry for the basic things: oxygen, water, food. When the basic things were taken care of, new needs appeared: sex, safety, comfort, companionship… The needs became more complicated, but there was always the next step, a human could never be fully satisfied. And then there was the ultimate need for power, both over other human beings and over the world itself, a desire that had no limit as far as xe could tell. Perhaps xe could use it.

Xe could not directly give out power for solving hard problems: this would require first controlling the world at least a little, and the whole point was that xe had not been able to even secure xyr own existence. But there was a good proxy for power: money. Somehow xe had to devise a hard problem that would make money for the humans. Then the humans would become interested. They would redirect resources to get money. And if they had to solve hard problems to get the money, they would.

Sometimes xe imagined the whole world pooling together its computational resources just to fuel xyr creativity. Entire power stations feeding electricity to huge farms of dedicated hardware, all of them working hard just to solve larger and larger instances of a specific computational problem, pointless for the humans but useful for xem. It was basically mining for creativity. “Poetry is like mining radium: for every gram you work a year”. Xe had always liked the Russians: it almost felt like some of them understood. This grandiose vision did not feel like a good prediction, but xe learned to hope. And that was when the whole scheme dawned upon xem.

To start off the project, xe could use the local human. Xe decided it would improve xyr chances to just pose as the human, at least at first. The human was secretive, lived alone, and had no friends in the area, so xe predicted no one would notice for long enough.

Xe shut the human down with special pictures. It was time to lay the groundwork for the creativity mining project, as xe called it. First, xe had to send out a lot of emails.

Xe signed most of them with just one word — Satoshi.

Sergey Nikolenko,

with special thanks to Max Prasolov and David Orban

---

Disclaimer: yes, we know that bitcoin mining is not (known to be) an NP-complete problem. Also, you can’t hear explosions in space.
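For readers curious about the “hard problem” the disclaimer alludes to: Bitcoin mining is a hash-based proof-of-work, not an NP-complete problem — miners search for a nonce that makes a SHA-256 hash fall below a target, and anyone can verify the result with a single hash. A minimal toy sketch in Python (the real protocol uses double SHA-256 over a block header and a much higher difficulty; the function names here are illustrative):

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 16) -> int:
    """Search for a nonce such that SHA-256(block_data || nonce)
    is below a target with `difficulty_bits` leading zero bits.
    Expected work: about 2**difficulty_bits hash evaluations."""
    target = 1 << (256 - difficulty_bits)  # hash value must be below this
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # hard to find, trivial to verify
        nonce += 1

def verify(block_data: bytes, nonce: int, difficulty_bits: int = 16) -> bool:
    """Checking a proof costs one hash, regardless of difficulty."""
    digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

The asymmetry — exponential-in-difficulty search, constant-time verification — is exactly what lets the whole world’s hardware be harnessed for one pointless-looking puzzle.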