The Computer as a Mind and the Development of Artificial Intelligence

Twinkle Bansal
Published in Neurotech@Davis
11 min read · Jun 17, 2023

This article explores the parallels between artificial intelligence and the human mind, tracing the historical development of AI and its impact on various fields while addressing the ethical concerns and growing dependence on technology.

Written by: Twinkle Bansal

Edited by: Jack Thomson

Alongside the growth of artificial intelligence in recent decades comes a conversation about the parallels between the brain and AI, and the long history of discourse framing the mind as a computer. Artificial intelligence was originally developed with the goal of augmenting human intelligence. For example, the way many models optimize themselves with use is very much akin to our brain’s neural networks strengthening over time, a phenomenon known as neuroplasticity. Some of these similarities live in the rhetoric of the terms themselves, such as machine “learning,” where AI optimizes and reasons much as our frontal lobes do every day, helping it make better and more informed decisions.¹ “Smart” devices are used increasingly in our daily lives, shifting some of our dependence away from our own brains and onto external technological systems. Even many of the methods we use to interpret neurological data are performed by artificial intelligence systems, thanks to their ability to process large amounts of unstructured data that would be difficult to interpret without an algorithm.

Many earlier AI models were even formulated to replicate the brain; in 1943, researchers McCulloch and Pitts outlined the first formal model of an elementary neural network and demonstrated that it could be used to implement propositional logic,⁵ meaning they were able to show how their simplified neural network model could perform certain logical operations (like true/false and and/or statements) and thus act similarly to a computer.⁶ Their ideas helped develop many fundamental concepts in computer science and neurobiology, and their simple model led to common representations of pattern recognition, classification, and learning schemes like machine learning and deep learning. This start to the development of artificial intelligence came from the simple yet powerful idea that small units could work together in a logical network to perform complex computations, with every network encoding some logical proposition.⁶ As time went on, the first neurocomputer was built in 1954, and in 1958 the first neural network architecture, the perceptron, allowed for dynamically modifying interneural connections. Within the last two decades, there has been massive growth in the development of artificial intelligence systems.
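To make the idea concrete, here is a minimal sketch (my own illustration, not taken from the sources above) of a McCulloch-Pitts-style unit: binary inputs, fixed weights, and a firing threshold are enough to implement simple logical operations like AND and OR.

```python
# A McCulloch-Pitts-style unit: inputs are 0 or 1, weights are fixed,
# and the unit "fires" (outputs 1) only when the weighted sum of its
# inputs reaches the threshold.

def mcp_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs meets the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Logical AND: both inputs must be active (threshold of 2).
logical_and = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=2)

# Logical OR: a single active input is enough (threshold of 1).
logical_or = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} {b}  AND={logical_and(a, b)}  OR={logical_or(a, b)}")
```

Networks of such units, wired together, are what McCulloch and Pitts showed could encode arbitrary logical propositions.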

According to experts, “There is no commonly accepted definition of AI. It is normally referred to as the ability of a machine to learn from experience, adjust to new inputs and perform human-like tasks.”³ Early applications of AI drew heavily on biological inspiration: neural networks were used to build algorithms for tasks such as predicting social patterns, and genetic algorithms were developed to estimate the likelihood that certain heritable factors would be differentially expressed. Algorithms such as decision trees, random forests, and k-means clustering were developed for fields such as genomics, where high-throughput data must be visualized and interpreted to make sense of incomprehensibly large sets. The need to process large amounts of raw, unstructured data has been a key driver of AI’s development in recent decades: “The recent explosion of data is making it more and more challenging to extract meaningful information and gather knowledge by using standard algorithms, due to the increasing complexity of analysis”.³ This increase in the amount and complexity of data creates the need for intelligent approaches that can identify solutions to dynamic and complex problems.
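As a rough illustration of the kind of algorithm named above (a sketch of my own, using made-up data rather than anything from the cited work), a random forest can classify samples described by far more features than a person could inspect by hand, which is the typical situation with high-throughput data.

```python
# Illustrative only: classify synthetic "high-throughput" data with many more
# features than samples, where only a handful of features actually matter.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))             # 200 samples, 500 features (e.g. expression levels)
y = (X[:, :5].sum(axis=1) > 0).astype(int)  # label driven by only a few informative features

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```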

AI for large and complex data sets is increasingly being applied in the field of neurotechnology, where devices built for neuromodulation and combinatorial neurotech gather and interpret brain data, using AI to draw conclusions and inferences from complex neurological signals and to replicate neural signals in real time. This has enabled treatments such as deep brain stimulation for alleviating depression and OCD, which you can read about in my separate article here. These advancements show how our understanding of the human brain is increasingly facilitated and built by artificial intelligence, a clever solution to a complex problem.

It is worth noting, however, that this strength is also one of the primary limitations of AI and machine learning models; some areas of application may struggle to acquire datasets large enough for an algorithm to learn from and build patterns and inferences upon. Some developments in machine learning allow algorithms to build on unstructured data; these were designed for scenarios with a cold-start problem, where there is little or no data from which to initially train an algorithm.² Even so, while a standard algorithm with structured learning may be able to work from a small set, more complex algorithms will always require a large amount of data.⁷
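A toy example of this point (my own illustration, not drawn from the cited sources): the same moderately complex model is trained on progressively larger subsets of a synthetic dataset, and held-out accuracy generally improves as more training data becomes available.

```python
# Illustrative only: fit a small neural network on growing training subsets
# and report its held-out accuracy to show how data volume affects learning.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=5000, n_features=30, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for n in (50, 200, 1000, len(X_train)):
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    model.fit(X_train[:n], y_train[:n])
    print(f"trained on {n:4d} examples -> test accuracy {model.score(X_test, y_test):.3f}")
```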

This usage of machine learning algorithms without human annotations (often referred to as unsupervised learning) has been increasingly adopted in marketing and financial management, especially for understanding consumer needs and offering relevant products and services.² Applying AI to find solutions to complex problems has eventually made it very useful for replicating human decision-making. This has allowed AI-enabled systems to transform businesses by using data to enhance decision-making, reinvent business models and ecosystems, remake customer experiences, and reduce the overall cost of performing such tasks.⁴ In this way, AI performs human-like tasks not only by replicating decision-making, but also by building models that can predict the most suitable course of action based on the patterns and logic of previous data. Of course, this level of intervention from artificial intelligence still necessitates human involvement, but the insight offered by machine learning can minimize the effort and anxiety it takes to interpret consumer behaviors and offer solutions to them. The solutions themselves must be taken in the context of how they were derived, and developing an integration strategy for these technologies is vital to avoiding potential consequences in their implementation.² In fact, AI is rapidly becoming a fundamental part of our everyday lives, such as in modern chatbots like ChatGPT.
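For a concrete (and entirely hypothetical) picture of unsupervised learning in a marketing setting, the sketch below groups synthetic customers by purchase behavior with no human-provided labels; the features and number of segments are assumptions made for illustration, not taken from the cited studies.

```python
# Illustrative only: cluster made-up customer behavior data into segments
# that could then be matched with relevant products or services.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Columns: annual spend, purchase frequency, days since last purchase (all invented).
groups = [
    rng.normal([1200, 24, 10], [200, 5, 5], size=(100, 3)),   # frequent big spenders
    rng.normal([400, 8, 40], [100, 3, 10], size=(100, 3)),    # occasional shoppers
    rng.normal([100, 2, 120], [50, 1, 30], size=(100, 3)),    # lapsed customers
]
customers = np.vstack(groups)

segments = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(
    StandardScaler().fit_transform(customers)
)
for k in range(3):
    print(f"segment {k}: mean profile {np.mean(customers[segments == k], axis=0).round(1)}")
```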

ChatGPT, a machine learning AI by OpenAI, is a chatbot open to the general public that uses a database of available response sets to map user input questions to a conversational format.² It uses natural language processing (NLP) to do this in such a way that the service feels like having a conversation with another individual, and the AI mimics human reasoning and inferential cues to produce responses that can reject inappropriate requests, give advice, plan trips, and write essays or code, along with much more. Though it has been available for less than a year, it is already widely known and has been applied to a wide variety of uses; I personally have played around with it plenty, many of my peers have used it to assist with schoolwork, and one of my friends even had it plan a trip to Spain for them. The AI often indicates when it might be beneficial to consult additional sources for more accurate answers, and it can also fix its own mistakes when prompted. In fact, ChatGPT is quite prone to mistakes, especially when it misunderstands the context of a question and gives an answer that is relevant but not quite what one is looking for. This caveat makes it risky to blindly accept a response from the chatbot as fact without first ensuring that the information is comprehensive or suited to one’s specific needs. In addition, since the chatbot is trained to change its response when told by the user that a previous answer was wrong, it is even possible for the AI to change a right answer to a wrong one just to please the user. I have even seen cases where people manipulated the AI into making incorrect statements (like 2 + 2 = 5) by telling it that its previous responses were incorrect.
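For readers curious how this conversational format looks programmatically, here is a minimal sketch of querying a chat model through the openai Python library as it existed around the time of writing (the pre-1.0 interface); the model name, system prompt, and key handling are assumptions for illustration, not details from the article.

```python
# Illustrative only: send a conversational prompt to a hosted chat model.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; never hard-code a real key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful travel-planning assistant."},
        {"role": "user", "content": "Plan a three-day itinerary for Madrid."},
    ],
)
print(response["choices"][0]["message"]["content"])
```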

While error-prone, ChatGPT is still a very powerful tool, and many experts claim that generative AI’s greatest limitation is the way it drastically increases society’s dependence on technology; if an individual or even an organization becomes too reliant on generative AI, it becomes increasingly difficult to function on one’s own in the absence of that technology.² Many issues with plagiarism have come up in the last year as students use it to complete homework assignments, especially since its output can be tuned to varying levels of quality to match what a student might plausibly produce. This ties in directly with the idea that relying on another brain-like system for cognitive work such as coding homework, assessing solutions to problems, or conducting research would greatly limit our own ability to build those neural pathways through experiential learning. Further, dependency creates an issue where our own personal self-expression and autonomy become intertwined with the existence of an artificial intelligence (a risk that grows even larger when we take into account the rapid development of BCIs, or brain-computer interfaces). There is great potential for generative AI to benefit society, but there must also be widespread acknowledgment of the abuses and limitations it poses as a powerful new realm of technology. Authenticity, authorship, reliability of information (both for factual info and for ‘advice’), and accountability when the information is faulty are all areas of concern regarding artificial intelligence.

With this growing dependence on brain-like technologies, many questions arise, such as who is meant to take accountability for the errors a machine makes (which, statistically, will always have some chance of occurring, no matter how small). AI operates on data, and data is not always perfect. How, for instance, is artificial intelligence accessing relevant data? Is there potential bias based on what information is readily available and which databases feed the models we rely on to make educated decisions? Is there a scenario in which the machine determines on its own that it does not know enough to craft an unbiased and informative response? The answers to these ethical questions are not being developed and solidified into legislation as quickly as the technology itself is developing, which could cause a range of issues in areas where AI is used to make important decisions in society.

Some of the concerns regarding AI dependence have been studied in the context of how much cognitive capacity we retain when our smartphones are nearby. A study published in a University of Chicago Press journal revealed that “the mere presence of one’s own smartphones may induce a ‘brain drain’ by occupying limited capacity of cognitive resources for purposes of attentional control”.⁴ The key finding is that attentional resources may be reduced even when the smartphone is not being used and consumers are consciously controlling the orientation of their attention. These cognitive costs were found to be highest in those with smartphone dependence; the capacity of our working memory is not large enough to process the unprecedented amount of information that is readily available to us at all times, so our cognitive systems become limited in how much information they can attend to at any given moment. This makes it harder to focus on the task at hand, or to maintain sustained focus, for individuals who have grown accustomed to having a portion of their attention oriented toward their phones.⁴

This reveals that, in the process of replicating human intelligence, many forms of modern artificial intelligence affect our own neural networks and nervous system by changing how we process and react to information on a day-to-day basis. This may affect subsequent performance in daily tasks and our ability to complete them without external assistance. For example, do you find it extremely difficult to focus in a lecture hall or even a long movie when you aren’t able to check your phone every 10 minutes? Is there an itch to go back to short-form media or look up a quick way to do an assignment when you are struggling with a problem? Immediate solutions to daily problems can take away the ability to be bored, a vital skill for pushing through to the end of a fulfilling experience. In a society that grows ever more fast-paced, we are rewiring the computer that is our brain to be on the go all the time, and we thus struggle to read books, engage in experiential learning, or learn about concepts that may not seem immediately beneficial (e.g. culture, art history). Why do so when the information is available whenever you need it? In doing this, you have effectively outsourced key parts of your intellectual needs to AI.

The final point of this article lies on the brighter side of this debate about whether these new technologies cause more harm than good. The concern that increased access to external sources of information makes people more reliant on technology isn’t entirely new; it was also a key worry during the internet boom and the growing popularity of search engines.⁸ These theories implied that the ability to outsource so much of one’s informational needs to search engines would make people dumber, but instead many members of younger generations found new ways to use these technologies and push them toward higher-level tasks, often in ways the creators of the technology did not anticipate. The same prediction may apply to modern chatbots; there is potential for many new uses of these technologies that people are still discovering, and some are already writing books on how to use different forms of AI most efficiently for optimized results. Applications like these represent opportunities where AI can indeed augment human intelligence, especially when used with a proper understanding of how they are built and how the relevant algorithms work.

Leading experts have very different takes on what the rise of AI means for humanity. Some, like former IBM CEO Ginni Rometty, say that AI is poised to create a partnership between man and machine that will “make us better and allow us to do what the human condition is best able to do”.² Others, like Stephen Hawking, noted that “the development of full artificial intelligence could spell the end of the human race”, and Bill Gates and Elon Musk have also warned against the threats of rapidly growing artificial intelligence systems.² While it is debatable whether these technologies will eventually cause more harm than good, it is clear that extensive care must go into creating systems capable of outperforming or out-reasoning humans while still being vulnerable to errors and bugs. The growing reliance we have on technologies we may not fully understand is also a deeply important subject, one that requires us to critically examine the ways in which our brains interact with technology every day.

At the end of the day, intelligence, whether human or artificial, is constantly changing and optimizing itself; brains and computers similarly learn based on patterns and logic, and artificial systems originated from models that aim to replicate the neuroplasticity of the brain. Our ability to create technologies that can replicate our decision-making, logic, and behavior is an unprecedented point in human history, and it is quite exciting to think of what we could achieve while coevolving with intelligent systems so similar to our own.

Source 1: https://cacm.acm.org/magazines/2023/3/270210-ai-and-neurotechnology/fulltext

Source 2: https://www.sciencedirect.com/science/article/pii/S0268401223000233

Source 3: https://www.sciencedirect.com/science/article/abs/pii/S0268401219300581

Source 4: https://www.journals.uchicago.edu/doi/full/10.1086/691462

Source 5: https://www.sciencedirect.com/topics/social-sciences/neural-network

Source 6: https://mind.ilstu.edu/curriculum/mcp_neurons/mcp_neuron1.html

Source 7: https://postindustria.com/how-much-data-is-required-for-machine-learning/#:~:text=The%20complexity%20of%20the%20learning,of%20data%20will%20be%20enough.

Source 8: https://www.pewresearch.org/internet/2018/07/03/the-positives-of-digital-life/
