AI & Nature: Are They Interlinked?

Matt Fitzgerald
DataSeries
Published in
10 min read · Oct 3, 2019

Machine learning and artificial intelligence are among the most significant technological advancements in recent history. Few fields promise to “disrupt” (to borrow a favored term) life as we know it quite like machine learning, yet many applications of ML technology go unseen.

AI is usually conceived of as a manifestation of non-human intelligence, defined in opposition to “natural” intelligence.

Human beings are not the only creatures on Earth capable of demonstrating intelligence. Fortunately for us, human intelligence is usually considered by far the highest found in nature.

Nevertheless, other living creatures such as animals and insects are known to demonstrate forms of organization and “collective” intelligence that researchers in artificial intelligence have sought to imitate.

Present-day artificial intelligence is built to imitate nature; the main pursuit of the field is to replicate in a computer the decision-making prowess that humankind produces biologically.

For the last three decades, most brain-inspired development in artificial intelligence has centered on “neural networks,” a term borrowed from neurobiology that describes machine thought as the movement of data through interconnected mathematical functions called neurons. But nature has other good ideas, too.
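To make “data moving through interconnected mathematical functions” concrete, here is a minimal sketch of a forward pass through a tiny two-layer network. The weights below are arbitrary made-up numbers, not taken from any trained model:

```python
import math

def forward(x, w1, b1, w2, b2):
    # Hidden layer: each "neuron" is a weighted sum passed through tanh.
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    # Output layer: a single weighted sum of the hidden activations.
    return sum(wi * hi for wi, hi in zip(w2, hidden)) + b2

# A fixed 2-input, 2-hidden-unit, 1-output network (weights are illustrative).
w1 = [[0.5, -0.2], [0.3, 0.8]]
b1 = [0.1, -0.1]
w2 = [1.0, -1.0]
b2 = 0.0

print(forward([1.0, 2.0], w1, b1, w2, b2))
```

A real network would have thousands of such neurons and learned (not hand-picked) weights, but the data flow is the same.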

Nowadays, many computer scientists are revisiting an older idea: putting artificial intelligence through evolutionary processes, like those that slowly molded the human brain over millennia, could help us create smarter and more efficient algorithms.

But first, back to middle-school biology class. The theory of evolution, proposed by Charles Darwin and refined by countless scientists since, holds that a random slight change in an organism's genetic makeup gives it either an advantage or a disadvantage in the wild. If the mutation allows the organism to survive and reproduce, it is passed on; if not, it dies along with the organism. In the world of algorithms, this is known as neuroevolution. Whereas artificial neural networks (ANNs) replicate the process of learning individual concepts, neuroevolution seeks to recreate the process that built the brain itself: a process in which only the strongest (or the smartest) survive.
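The article gives no code, but the mutate-and-select loop it describes fits in a few lines of Python. Here the “fitness” is a made-up toy function (closeness to a target vector) standing in for real-world performance, so this only illustrates the dynamics, not any production system:

```python
import random

random.seed(0)

def fitness(weights, target):
    # Toy stand-in for performance: closer to the target vector is fitter.
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def evolve(target, pop_size=20, generations=100, sigma=0.1):
    # Start from a random population of "genomes" (weight vectors).
    population = [[random.uniform(-1, 1) for _ in target] for _ in range(pop_size)]
    for _ in range(generations):
        # Score everyone and keep the fitter half ("survival of the fittest").
        population.sort(key=lambda w: fitness(w, target), reverse=True)
        survivors = population[: pop_size // 2]
        # Each survivor produces one slightly mutated "child".
        children = [[w + random.gauss(0, sigma) for w in parent] for parent in survivors]
        population = survivors + children
    return population[0]

best = evolve(target=[0.5, -0.3, 0.8])
print(best)
```

Because survivors are carried over unchanged, the best genome never gets worse; mutation supplies the random variation that selection then filters.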

Though neuroevolution has been around since the 1980s, the concept is receiving renewed attention as researchers dig into the archives for fresh perspectives on machine learning. Recently, the non-profit OpenAI and Google Brain each published papers on the topic: Google's on image recognition, and OpenAI's on using “worker” algorithms to train a master algorithm.

Bringing biological evolution into an already complex area of artificial intelligence research can get confusing, so to keep it simple, think of algorithms as horses. Horses learn throughout their lifetimes, but they are evaluated on a single metric: how fast they run. Accuracy in image recognition is similarly easy to express as a single number, like the time it takes a horse to run around a track. But what makes one horse fast is incredibly complex: a vast network of DNA that enables lung capacity, muscle growth, even intelligence. That complexity mirrors an algorithm's underlying parameters, which determine how good (or bad) it is at image recognition. So if you get lost somewhere in this article, just take a deep breath and think “horses.” (It is also good life advice.)

For their research, the Google team created 1,000 image-recognition algorithms, which were trained using modern deep neural networks to identify a specific set of images. Then each of 250 computers selected two algorithms at random and tested their accuracy at identifying an image. The more accurate algorithm survived, while the one that performed worse was “killed.” The survivor was then copied, and its clone (or “child”) was altered slightly, just as human DNA randomly changes during reproduction. But instead of blue eyes or a widow's peak, the mutation slightly changed how the new algorithm interprets its training data. The clones were then trained on the same data as their parents and put back into the pool of 1,000 algorithms to start the process over.
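The pick-two, kill-the-loser tournament described above can be sketched as follows. Each “algorithm” is reduced to a single made-up number and its “accuracy” to a toy function, so this illustrates only the selection dynamics, not Google's actual system:

```python
import random

random.seed(1)

def accuracy(genome):
    # Toy proxy for image-recognition accuracy (peaks when genome == 0.7).
    return 1.0 - abs(genome - 0.7)

# A population of 1,000 random "algorithms" (here: single numbers).
population = [random.random() for _ in range(1000)]

for _ in range(20000):
    # Pick two algorithms at random and compare their accuracy.
    i, j = random.sample(range(len(population)), 2)
    winner, loser = (i, j) if accuracy(population[i]) >= accuracy(population[j]) else (j, i)
    # The loser is "killed"; a slightly mutated copy of the winner replaces it.
    population[loser] = population[winner] + random.gauss(0, 0.01)

best = max(population, key=accuracy)
print(accuracy(best))
```

Over many tournaments, fitter genomes spread through the population while poor ones are replaced, which is the same dynamic the Google experiment relies on at far larger scale.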

The Google researchers found that neuroevolution could cultivate an algorithm with about 94.6% accuracy, and they reported similar (though not identical) results across four repetitions of the experiment. Mutations that enhanced an algorithm's image-recognition skill were rewarded (i.e., those algorithms survived), while mutations that hurt performance were killed off. Just like in nature.

The variation between the five runs illustrated a lingering problem. Google researcher Esteban Real says the algorithms tended to get stuck partway through the process, and it is unclear whether further mutation would have carried them past the sticking point. A parallel in nature, Real says, might be the evolution of wings. “A half wing cannot help you much,” he says, “but with full wings, you can fly.”

The team at Google is now working on coaxing its evolutionary models through those intermediate mutations (toward “full wings”). But that gets difficult: the team wants to mutate the algorithms only in limited ways, so they do not end up carrying a pile of extra code that serves no purpose.

“The worst would be many half-feathers,” Real explains.

By focusing primarily on image recognition, Google tested both neuroevolution's fidelity to the biological brain and its ability to solve a modern problem. OpenAI, on the other hand, used a purer form of evolution to tackle a different task.

Instead of training thousands of algorithms to get better at one thing, the OpenAI team wanted to use “worker” algorithms to train a master algorithm to accomplish a task it had not seen before, such as playing a video game or walking in a 3D simulator. The technique is not so much a primary way to teach machines to make decisions as a way to help them learn more efficiently from specific information, explains co-author and OpenAI researcher Tim Salimans. The evolutionary algorithm monitors how its workers are learning and essentially learns to learn, extracting more knowledge from the same amount of data.

To conduct its research, the OpenAI team set 1,440 worker algorithms to the task of playing Atari. They played until game over and reported their scores to the master. As in Google's research, the algorithms with the best scores were copied, and the copies were randomly mutated. The mutated workers then went back into rotation and the process repeated: profitable mutations were rewarded, and bad ones were killed off.
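OpenAI's published technique, evolution strategies, actually aggregates every worker's score into one update to the master rather than copying a single winner. A toy sketch under that assumption, with a made-up score function standing in for an Atari game:

```python
import random

random.seed(2)

def score(params):
    # Toy stand-in for a game score (higher is better, peaks at [1.0, -2.0]).
    return -((params[0] - 1.0) ** 2 + (params[1] + 2.0) ** 2)

theta = [0.0, 0.0]              # the master's parameters
sigma, alpha, n_workers = 0.1, 0.02, 50

for _ in range(300):
    noises, scores = [], []
    for _ in range(n_workers):
        # Each worker plays with a randomly perturbed copy of the master.
        eps = [random.gauss(0, 1) for _ in theta]
        noises.append(eps)
        scores.append(score([t + sigma * e for t, e in zip(theta, eps)]))
    # The master moves toward the perturbations that earned high scores.
    mean = sum(scores) / n_workers
    for d in range(len(theta)):
        grad = sum((s - mean) * eps[d] for s, eps in zip(scores, noises)) / (n_workers * sigma)
        theta[d] += alpha * grad

print(theta)
```

Note that the workers never compute gradients themselves; they only report scores, which is exactly the limitation discussed next.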

This approach has its limitations, though, chief among them that each worker reports only a single number, its high score, back to the master. The highest-scoring algorithms survive, but making the master aware of which specific moves led to success would require far more computing power. (A biological parallel might be an ant colony: workers go out and find the best paths, while the queen serves as the central hub of information.) In other words, the master learned a lot about success, but little about how it was achieved.

In the 1980s, neuroevolution and neural networks were similarly sized fields of study, says Kenneth Stanley, an associate professor at the University of Central Florida who most recently joined Uber's AI team through the company's acquisition of Geometric Intelligence, which he co-founded.

“There was a small community of people who wondered how brains got to be the way they are, since the brain is really the only proof of concept for intelligence we have in nature,” says Stanley. “Some thought that perhaps the most direct way to get there would be to create an evolutionary, Darwin-like process in computers that works on little artificial brains.”

Three computer scientists, Geoffrey Hinton, David Rumelhart, and Ronald Williams, published a 1986 paper describing backpropagation, an algorithm that greatly extended how neural networks learn from their mistakes. The findings dramatically improved the efficacy of hand-built neural nets, but an AI winter soon set in: funding slowed due to a perceived lack of progress, hindering further development. It was not until years later, when Hinton and company began publishing papers showing that backpropagation allows neural networks to grow much larger and, in turn, to grasp far more complex ideas, that neural networks became too enticing for the larger computer science community to resist. These larger networks were dubbed “deep,” and the deep neural network became the most popular flavor of modern artificial intelligence.
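Backpropagation itself fits in a few lines for a one-hidden-unit network. This toy example (all numbers illustrative) trains the unit to imitate tanh(2x), a function it can represent exactly, by pushing the output error backwards through each function:

```python
import math
import random

random.seed(3)

# A one-hidden-unit network y = w2 * tanh(w1 * x + b1) + b2,
# trained by backpropagation to imitate the target function tanh(2x).
w1, b1 = random.uniform(0.1, 1.0), 0.0
w2, b2 = random.uniform(0.1, 1.0), 0.0
lr = 0.05

for _ in range(5000):
    x = random.uniform(-1, 1)
    target = math.tanh(2 * x)
    # Forward pass.
    h = math.tanh(w1 * x + b1)
    y = w2 * h + b2
    # Backward pass: propagate the error back through each function in turn.
    dy = 2 * (y - target)          # gradient of squared error w.r.t. y
    dw2, db2 = dy * h, dy
    dh = dy * w2
    dpre = dh * (1 - h * h)        # tanh'(z) = 1 - tanh(z)**2
    dw1, db1 = dpre * x, dpre
    # Gradient-descent update.
    w2 -= lr * dw2; b2 -= lr * db2
    w1 -= lr * dw1; b1 -= lr * db1

print(w1, w2)
```

Contrast this with neuroevolution: here the correction is computed analytically from the error, not discovered by random mutation.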

“Because of this, there was some loss of awareness for neuroevolution, which was this parallel thread of developing the mind,” says Stanley.

Back in 2002, early in his career, Stanley wrote an algorithm called NEAT, which allowed neural networks to evolve into larger and more complex versions of themselves over time. The corresponding paper has over 1,600 citations on Google Scholar and has been referenced widely in neural-network and neuroevolution research since its publication. In 2006, Stanley published HyperNEAT, an algorithm that let neuroevolution operate at much greater scale, inspired by DNA's ability to serve as the blueprint for billions upon billions of biological connections despite containing only about 30,000 genes. (Fun fact: HyperNEAT's full name is Hypercube-based NeuroEvolution of Augmenting Topologies. I challenge anyone to name a better algorithmic acronym.) Today, Stanley says, it is gratifying to see the work of his career re-enter the zeitgeist.

Like Stanley's, the OpenAI and Google projects are two different ideas drawn from the same wellspring. Google's hybrid approach combines classic neuroevolution with techniques such as backpropagation that have made deep learning so powerful today: teach an algorithm how to operate in the world, let it evolve, and its children will inherit the knowledge it acquired. OpenAI's approach was truer to how evolution works in biology.

The team let each generation's mutations be purely random, keeping or discarding networks based only on whether they improved or failed; correction came through random mutation alone. Both efforts had very clear goals, though: recognizing an image, or achieving a high score in a game (or running a horse fast). How the algorithms got there was left to nature.

Stanley says of OpenAI's work: “Individuals are born with the weights in their brains that they keep for their entire lives. It's as if we raised you and your children and your children's children, and then they were born knowing calculus.”

Why is this important for software development companies?

As machine learning technology makes its way into more business software, software development companies face the challenge of implementing the technology efficiently and safely.

Historically, technologists have often looked to nature for inspiration. Here are some ways companies can use evolutionary thinking to understand the potential impact of artificial intelligence:

Divergent Evolution

It is harder than it looks to move to adjacent problems, even with closely related data sets. The fact that you have trained a model on ImageNet for object recognition does not mean it will master video recognition or facial recognition.

Convergent Evolution

Always be on the lookout for people solving essentially the same problem, even with a different data set. Think about how Google uses search-query data to build a better spell checker: it keeps track of what users type, and when it notices that millions of people have spelled something a particular way, it suggests you do the same. A happy accident of data collected for another purpose.
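That query-frequency trick can be sketched as a toy. Here the counts and the naive “closeness” test are made up for illustration; a real spell checker would use proper edit distance over billions of logged queries:

```python
from collections import Counter

# Counts of how users actually spelled a word (stand-in for query logs).
observed = Counter({"definitely": 9120, "definately": 310, "definatly": 45})

def suggest(word):
    # If this spelling is already the most common one, leave it alone.
    if word in observed and observed[word] == max(observed.values()):
        return word
    # Naive closeness check: same first letter and similar length.
    candidates = [w for w in observed
                  if w[0] == word[0] and abs(len(w) - len(word)) <= 2]
    # Suggest the variant users typed most often.
    return max(candidates, key=lambda w: observed[w], default=word)

print(suggest("definately"))  # → definitely
```

The "intelligence" here comes entirely from the behavior of other users, which is the convergent-evolution point: someone else's data solved your problem.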

Predators and parasites or prey and co-evolved hosts

Interesting things can happen when two AIs co-evolve. Various cybersecurity companies (such as Bromium and Cylance) are developing machine learning solutions that continuously retrain their systems to detect new threats as those threats evolve.

There are a handful of brilliant artificial intelligence companies that help us work more efficiently (in the DCM portfolio, x.ai helps us manage our busy lives, debut helps us organize the web wisely, etc.), but these applications are still in their infancy, and the fundamental change they represent is arriving faster than we predict. Perhaps it is better to place them in the context of a process we already understand: evolution.

There is great opportunity in artificial intelligence, and natural evolution provides a framework for studying and preparing for the future of machine development. In the meantime, it is important that company leadership seriously consider its strategy for artificial intelligence and invest in the talent and infrastructure needed to turn its data into transformational solutions.

Let’s Wrap Up

I hope this article has helped you see how AI and nature are interlinked. There is no doubt that AI is changing the world for the better. Many software development companies are already using AI to power their app development services, and if you want an AI-powered app of your own, you can hire developers from such companies.


Working at xicom.ae | Business Analyst (12+ years) | Technical Writer | Tech Geek | Tech Enthusiast