The Intersection of AI and the Human Brain

Published in KTH AI Society · Feb 1, 2024

By Alice Dombos

A longstanding aspiration of researchers has been to create Artificial General Intelligence (AGI): computational machines capable of performing any intellectual task that humans can undertake. Although AGI remains out of reach, significant strides have been made in its pursuit. These advancements can partly be attributed to the field of brain-inspired artificial intelligence (AI), where insights from neuroscience, computer science, and psychology converge to drive the development of more sophisticated systems. In a recent paper [1], Lin Zhao et al. discuss how AI and AGI have been inspired by the functions and architecture of the human brain. This article explores some of the ways in which the influence of the human brain is evident in AI, with a specific focus on AGI.

DALL·E’s creation: A fusion of human brain and AI, illustrating the symbiotic blend of neurons and digital networks.

The human brain is one of the most complex information-processing systems in the world, and its network of billions of neurons has come to serve as a blueprint for Artificial Neural Networks (ANNs). The roots of ANNs date back to 1943, when Warren McCulloch and Walter Pitts proposed the first mathematical model of an artificial neuron. The next breakthrough occurred in the late 20th century with the invention and popularization of backpropagation, an efficient method for training neural networks. Backpropagation mimics the way the brain modifies the strength of connections between neurons, but instead of modifying synaptic connections it adjusts the weights between artificial neurons. Backpropagation, coupled with the development of graphics processing units (GPUs) and tensor processing units (TPUs), has facilitated the development of increasingly sophisticated neural networks. This has taken us closer to attaining AGI, since neural networks are major components of its infrastructure.
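To make this lineage concrete, here is a minimal sketch in Python (using NumPy; the inputs, weights, and learning rate are illustrative assumptions, not taken from the paper): first a McCulloch-Pitts-style threshold neuron, then the gradient-based weight update at the heart of backpropagation, applied to a single sigmoid neuron.

```python
import numpy as np

# A McCulloch-Pitts-style neuron: binary inputs, fixed weights, and a hard
# threshold. The weights and threshold here are illustrative choices that
# make the neuron behave like a logical AND gate.
def mp_neuron(inputs, weights, threshold):
    return int(np.dot(inputs, weights) >= threshold)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mp_neuron([a, b], [1, 1], threshold=2))

# The core idea of backpropagation on a single sigmoid neuron: compute the
# gradient of the loss with respect to each weight via the chain rule, then
# nudge the weights against it -- the artificial analogue of strengthening
# or weakening synaptic connections.
x = np.array([0.5, -1.2])  # one training input (made up for illustration)
y = 1.0                    # its target output
w = np.zeros(2)            # weights to be learned
lr = 0.5                   # learning rate

for _ in range(100):
    pred = 1.0 / (1.0 + np.exp(-(w @ x)))       # forward pass (sigmoid)
    grad = (pred - y) * pred * (1 - pred) * x   # backward pass (chain rule)
    w -= lr * grad                              # gradient-descent update

print("learned weights:", w, "prediction:", pred)
```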

As mentioned, numerous principles in artificial intelligence draw inspiration from the human brain. One such example is Convolutional Neural Networks (CNNs), used for processing visual information. These networks resemble the hierarchical organization of the visual cortex: they consist of multiple layers of neurons, with each layer recognizing increasingly complex features in images. While the early layers might only recognize edges, the deeper layers identify forms and textures. Furthermore, evidence suggests that biological and artificial networks share common optimization principles, such as small-world architecture. In biological neural networks, this architecture, with its short path length and high clustering, facilitates efficient communication between brain regions. In computational applications, the concept refers to networks where the typical distance between two randomly chosen nodes grows proportionally to the logarithm of the number of nodes in the network. A recent study [2] has shown that neural networks based on Watts-Strogatz (WS) random graphs with small-world properties demonstrate competitive performance compared to hand-designed and NAS-optimized models, as the sketch below illustrates.
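The small-world trade-off is easy to see in code. A hedged sketch, assuming the networkx library (the graph sizes and rewiring probability are illustrative, not those of the cited study): rewiring a small fraction of a regular ring lattice's edges, as in the Watts-Strogatz model, sharply shortens average path lengths while keeping clustering high.

```python
import networkx as nx

n, k = 100, 6  # number of nodes, neighbors per node (illustrative)

lattice = nx.watts_strogatz_graph(n, k, p=0.0)                 # no rewiring
small_world = nx.connected_watts_strogatz_graph(n, k, p=0.1)   # 10% rewired

for name, g in [("ring lattice", lattice), ("small-world", small_world)]:
    # Short paths enable efficient global communication; high clustering
    # reflects tightly connected local neighborhoods.
    print(f"{name:12s}"
          f" avg path length: {nx.average_shortest_path_length(g):.2f}"
          f" clustering: {nx.average_clustering(g):.2f}")
```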

The scale of animals' biological neural networks closely aligns with their cognitive abilities, and a similar correlation exists for artificial neural networks. Recently, GPT-4, whose parameter count has not been disclosed but is widely believed to far exceed that of its predecessors, outperformed GPT-3 and GPT-2, which have 175 billion and 1.5 billion parameters, respectively. GPT-4 not only exhibited improved performance in advanced mathematics and reasoning, but also excelled at standardized tests like the GMAT, SAT, and USMLE. It is important to note that other factors, such as the quality of data and the architecture of a model, also contribute to performance.

In order to create systems that exceed human intelligence, they have to be able to acquire and integrate knowledge from various modalities, much as humans do. Ultimately, all modalities should intersect through universal concepts. For instance, the concept of a cat should be the same regardless of how it is represented in different modalities. Current multimodal LLMs, including GPT-4, have shown improved performance on cross-modality tasks (e.g., text-to-image, image-to-text, and video-language modeling) as well as on single-modality tasks. Overall, multimodality can be seen as a game-changer for AGI development.
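As a concrete illustration of a shared "concept space" across modalities, here is a hedged sketch using OpenAI's CLIP model, which embeds images and text into the same vector space. The `clip` package (github.com/openai/CLIP) and the local file "cat.jpg" are assumptions for illustration; this is not the mechanism used by GPT-4.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Embed one image and several candidate captions into the shared space.
# "cat.jpg" is a hypothetical local file, assumed for this example.
image = preprocess(Image.open("cat.jpg")).unsqueeze(0).to(device)
texts = clip.tokenize(["a cat", "a dog", "a car"]).to(device)

with torch.no_grad():
    image_emb = model.encode_image(image)
    text_emb = model.encode_text(texts)

# Normalize and compare: if the concept of "cat" is truly shared across
# modalities, the cat photo should sit closest to the caption "a cat".
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
print((100 * image_emb @ text_emb.T).softmax(dim=-1))
```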

Furthermore, large-scale models pre-trained on massive multimodal datasets have demonstrated increased learning capabilities. By enabling systems to harness knowledge from previous experience, akin to the human brain, they can comprehend and execute novel tasks more rapidly without the need for extensive labeled data for fine-tuning. In the context of AGI, in-context learning refers to a model's capacity to adapt to a new task when provided with only a few input-output examples. This ability is especially beneficial in fields like medicine and robotics, where labeled data is often limited. Additionally, in-context learning mitigates the risk of overfitting to downstream labeled training data. Despite increased computational costs, e.g. those incurred by massive parameter scale, greater adaptability represents a significant stride towards AGI.
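A minimal sketch of what in-context learning looks like in practice: the "training" happens entirely inside the prompt. The task and examples below are invented for illustration, and the resulting prompt could be sent to any sufficiently capable large language model.

```python
# In-context (few-shot) learning: instead of fine-tuning on labeled data,
# we show the model a handful of input-output pairs inside the prompt and
# let it infer the task. Examples here are made up for illustration.
few_shot_examples = [
    ("The movie was fantastic!", "positive"),
    ("I want my money back.", "negative"),
    ("Best meal I've had in years.", "positive"),
]
query = "The service was painfully slow."

prompt = "Classify the sentiment of each review.\n\n"
for text, label in few_shot_examples:
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += f"Review: {query}\nSentiment:"

print(prompt)  # a capable LLM completing this prompt should answer "negative"
```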

The pursuit of human-level machine intelligence is challenging, and there are various limitations. For instance, our knowledge of the human brain is fragmented, which makes it difficult to replicate human intelligence. Nevertheless, the field of brain-inspired AI can be expected to continue to propel the development of AGI, a transformative technology that will change our lives.

Author
Alice Dombos
is a member of the KTH AI Society and a student in Computer Science at KTH Royal Institute of Technology. You can reach her on LinkedIn or by email at alice@kthais.com.

References

1. Lin Zhao, Lu Zhang, Zihao Wu, et al. When Brain-inspired AI Meets AGI. arXiv preprint arXiv:2303.15935 (2023).


2. Saining Xie, Alexander Kirillov, Ross Girshick, et al. Exploring Randomly Wired Neural Networks for Image Recognition. arXiv preprint arXiv:1904.01569 (2019).


Image credit: DALL·E, OpenAI’s AI model.


KTH AI Society
This is the official account of the KTH AI Society. We write blog posts and provide insights into all sorts of interesting topics in AI!