SingularityNET interview: Ben Goertzel

Giulio Prisco
Published in ChainRift Research
Mar 8, 2019

SingularityNET recently announced the launch of the mainnet Beta version of its platform on the Ethereum Network, and released an update of the SingularityNET white paper.

SingularityNET is an open and decentralized network of Artificial Intelligence (AI) services made accessible through blockchain technology. The concept of operation is that AI developers will publish their services as decentralized applications in the SingularityNET network, where they can be used by anyone and paid with the AGI token. The project raised $36 million with an AGI Initial Coin Offering (ICO) in December 2017.
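That publish-discover-pay flow can be caricatured in a few lines of Python. This is a toy simulation only: every class and method name below is invented for illustration and does not correspond to the real SingularityNET SDK or smart contracts.

```python
# Toy sketch of the SingularityNET concept of operation: a developer
# publishes a service with a price in AGI, and a caller discovers it,
# pays, and invokes it. All names here are hypothetical.

class Registry:
    """Stand-in for the on-chain registry of published services."""
    def __init__(self):
        self.services = {}

    def publish(self, name, endpoint, price_agi):
        self.services[name] = {"endpoint": endpoint, "price": price_agi}

    def lookup(self, name):
        return self.services[name]


class Wallet:
    """Stand-in for an AGI token wallet."""
    def __init__(self, balance):
        self.balance = balance

    def pay(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient AGI balance")
        self.balance -= amount


def call_service(registry, wallet, name, request):
    entry = registry.lookup(name)
    wallet.pay(entry["price"])          # pay per call in AGI tokens
    return entry["endpoint"](request)   # invoke the published service


# Example: an image-tagging service priced at 2 AGI per call.
registry = Registry()
registry.publish("tag-image", lambda img: ["cat"], price_agi=2)
wallet = Wallet(balance=10)
tags = call_service(registry, wallet, "tag-image", "photo.png")
```

The point of the sketch is only the division of roles: the registry makes services discoverable, the token meters usage, and the service itself is ordinary code behind an endpoint.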

The AGI token is currently being traded on some exchanges, but SingularityNET doesn’t encourage this practice. “Speculative secondary trading is against the spirit of the AGI token and SingularityNET project,” notes the SingularityNET FAQ.

Some initial SingularityNET AI services, including chess bots and tools for analyzing text, images, and video, are listed on the beta platform website. But the project has much more ambitious goals. SingularityNET Chief AI Officer Cassio Pennachin says:

We believe that the global mind cloud that will emerge with the rise of AI should not be owned and governed by any single entity. With this launch of the world’s first decentralized network of AI on the Ethereum Network, we mark the birth of decentralized artificial intelligence.

The SingularityNET platform “was initially conceived by Ben Goertzel, Simone Giacomelli and David Hanson in a series of brainstorming sessions at Hanson Robotics in Hong Kong during the first half of 2017,” reads a project outline posted by OpenCog, an open source AI initiative.

I contacted SingularityNET CEO and Chief Scientist Ben Goertzel to find out more. Ben, whom I am honored to call a friend, is an AI theorist and a deep Cosmist thinker interested in everything under the stars, including (among other things) transhumanism, the technological singularity, and radically new models of reality.

Ben distinguishes between “narrow AI” built for specific application domains and Artificial General Intelligence (AGI). According to Ben, AGI will someday (perhaps soon) equal and then exceed (perhaps dramatically) human intelligence. But let’s start simple and reach these awesome heights at the end. Here is Ben:

Why do you discourage trading AGI on exchanges?

If someone wants some AGI tokens to use for getting AI services, now or in the future, then buying these on an exchange is a reasonable thing for that person to do, particularly given that we haven’t yet provided alternatives that are equally simple to use. But if someone is buying and selling tokens on an exchange purely for the purpose of trying to profit via price speculation, this doesn’t really help what we’re trying to do, and it actually creates trouble by giving the false impression that the purpose of our token is for speculative trading rather than for actual utilization to get AI services.

Could you name one or two impressive applications that are already live, or coming soon, in SingularityNET?

With the SingularityNET beta platform we’ve put a bunch of example AI agents on the network. Mostly these are well known AI algorithms that aren’t our inventions, doing various things such as recognizing objects in images, translating between languages, or predicting the future of time series. The main point here is to show AI developers how to use the platform and show potential developer-users what kinds of things the platform can do. However, a number of these example services are things that commercial AI websites are now charging money for.

Some of the more unique applications we’ve released initially are in the biology domain. We have a service for automated annotation of genomics datasets, which uses a combination of multiple biological ontologies created using [the OpenCog platform] to add semantic meta-data to a set of genes identified as important in the analysis of some genetics data. So if you analyze the DNA of a bunch of people with lung cancer, for instance, and find that a certain list of 20 genes is highly relevant to lung cancer, then this annotation service will tell you what is known about the function and context of these 20 genes.
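Conceptually, annotation means joining a gene list against curated ontology data. The sketch below is purely illustrative (a hand-made dictionary standing in for real biological ontologies), not the actual OpenCog service:

```python
# Toy gene-annotation lookup. The ontology entries here are invented
# placeholders, not real curated data.

GENE_ONTOLOGY = {
    "TP53": {"function": "tumor suppression", "pathway": "apoptosis"},
    "EGFR": {"function": "growth signaling", "pathway": "MAPK cascade"},
}

def annotate(genes):
    """Attach whatever semantic metadata the ontology holds to each gene."""
    return {g: GENE_ONTOLOGY.get(g, {"function": "unknown"}) for g in genes}

# A gene list as it might come out of an upstream analysis step.
hits = annotate(["TP53", "EGFR"])
```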

We will also be releasing, in a couple weeks, a service that analyzes genomics datasets using OpenCog’s MOSES evolutionary-learning tool — finding patterns of combination of genes or gene variations that explain why a person has a certain disease or characteristic. We’ve been using this service in-house to analyze the DNA of a group of 40 supercentenarians — people living to age 105 or older — with some interesting results that we’re going to start publicizing soon.
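The kind of search involved can be caricatured in a few lines. Real MOSES evolves whole program trees over gene combinations; this greedy single-gene scorer, with invented gene names and made-up variant data, is only a toy showing the shape of the problem:

```python
# Toy version of "find the gene variants that best explain a trait".
# dataset: gene name -> list of 0/1 variant calls, one per sample.

def score(gene_column, labels):
    """Fraction of samples where carrying the variant matches the trait."""
    return sum(g == y for g, y in zip(gene_column, labels)) / len(labels)

def best_gene(dataset, labels):
    """Pick the single gene whose variant calls best track the labels."""
    return max(dataset, key=lambda g: score(dataset[g], labels))

samples = {
    "GENE_A": [1, 1, 0, 0],
    "GENE_B": [1, 0, 1, 0],
}
disease = [1, 1, 0, 0]  # which of the four samples have the trait
top = best_gene(samples, disease)  # GENE_A matches all four samples
```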

A little later on in 2019 we will start rolling out AI services based on some of the deeper AI tools that have been under development in the OpenCog project for some time, and that have been taken under the wing of the SingularityNET AI development effort — including tools for probabilistic logical reasoning, for pattern mining in hypergraphs, and for combining symbolic reasoning with deep neural networks for image understanding and language understanding. These are going to be more back-end AI agents that will be accessed by other AI agents running in the SingularityNET to improve their intelligence and their ability to generalize and abstract.

Where do you see SingularityNET in five years?

The goal is to make the SingularityNET platform as pervasive as, say, the TCP/IP protocol, or BitTorrent, or Linux, or SQL. We want to create a widespread global protocol for AIs to communicate with customers and with one another, and to band together to form meta-AI with synergetic intelligence.

The SingularityNET Foundation should then be contributing a minority of the code that improves the SingularityNET platform, and certainly a small minority of the AI code running in the platform.

Governments and large corporations are spending billions on AI applications — how can “democratic and decentralized AI” compete?

Similarly to how Linux has competed successfully in key markets with Windows and Apple OS’s — the power of community, together with a superior underlying architecture.

But SingularityNET has tokenomic incentive mechanisms on its side which Linux has not had, which makes the story potentially even more interesting…

So do you REALLY think we’ll have human level AI someday soon? When?

My stock answer lately is: 5 to 30 years. ;-)

What role will SingularityNET play at that point?

Our intention is that SingularityNET should serve as part of the code guts behind the first AGI, and also part of the mechanism providing incentives for people to contribute to it.

What about smarter than human AI and superintelligence?

It should come within a few years of human-level AGI, I would guess. Unless the AGI intentionally slows down its own development for its own ethical or psychological reasons. Which I doubt will happen, but it can’t be ruled out.

Do you share Elon Musk’s fear of superintelligent AI as a dangerous existential risk? What can SingularityNET do to mitigate that risk?

Musk has clearly thought a lot about AI from a variety of perspectives, but I think he’s taking an overly limited view. He seems very tied in to Nick Bostrom’s view of AI, as expressed in his excellent book “Superintelligence” [*] — Bostrom assumes that an AI will be a kind of rational utility maximizer, seeking relentlessly to fulfill whatever goals have been programmed into it in an extremely literal-minded way. Since human values are messy and intangible, any attempt to program them into an AI as goals is bound to miss the point in critical ways, leading a rigorously goal-maximizing AI to do things humans will consider nasty.

However, I think this perspective represents a very limited vision of intelligence, and of how the universe works. I start from a different view, in which consciousness and compassion are key features of the universe, at a very basic level. And goals are things that intelligences may adopt temporarily to guide their behavior, but they aren’t the essence of intelligence — goal-achievement is just one thing that an intelligence can do, and changing goals and objectives is also a natural aspect of intelligent life.

My view is that if we create AIs that can richly interact with humans on an emotional and social level as well as an intellectual and practical level, and then raise up these AIs in the context of doing positive, compassionate, beneficial things — then it’s likely that these AIs will grow up to be compassionate, beneficial beings and evolve in a way that is closely coupled to human evolution and is radically positive for humans and other sentient beings.

But for sure there are no guarantees in this business. There is radical uncertainty here. If this makes you uncomfortable, then the next few decades as [the Singularity] approaches are going to be very uncomfortable for you.

[*] See my review of “Superintelligence: Paths, Dangers, Strategies” (2014), by Nick Bostrom.

Cover picture from Wikimedia Commons: Ben Goertzel, left, and Sophia The Robot of Hanson Robotics, Web Summit 2018.


Giulio Prisco
ChainRift Research

Writer, futurist, sometime philosopher. Author of “Tales of the Turing Church” and “Futurist spaceflight meditations.”