Interview with cognitive scientist Newton Howard on AI

Christoph Auer-Welsbach
Applied Artificial Intelligence
8 min read · Jan 30, 2019


Newton Howard, founder of ni2o.com

“I want to push science. I always have. I want us to be able to understand consciousness: what is it? Why do we have it? When I say that, you might be wondering, why consciousness? But to me, consciousness, or intention, is what makes us human; it is the mysterious magic of our brain that makes us who we are. And I think the brain is the key to medicine and the future of AI.”

1. Who are you, Newton? What’s your story? And what influenced you most in becoming active in the field you are in today?

Science has always been part of who I am, even when I was young. My personal story starts in the military. Seeing the consequences of battle during and after a conflict was hard to watch. I believe we’ll have an international crisis on our hands from the PTSD (post-traumatic stress disorder) and TBI (traumatic brain injury) resulting from the various conflicts we see globally, and I experienced the effects of such injuries first hand.

I was badly injured by an IED (improvised explosive device) and suffered a serious traumatic brain injury in the prefrontal lobe. This motivated me to direct my efforts into understanding the brain and its principles, so that I could help others suffering from similar brain injuries to heal and recover as efficiently as possible, and to expedite the search for a cost-effective solution.

I want to leave this world better than when I came into it. So I left the army and founded the Center for Advanced Defense Studies in DC, a think tank with a mission to advance peace through the study of global warfare and transnational security, using data analysis and innovation.

I started focusing more on artificial intelligence, neuroscience, medicine and technology in academic and commercial settings. I became a Professor of Psychiatry at George Washington University and worked on my HDR in France (a degree similar to a PhD). A couple of years later, I founded the Mind Machine Project at MIT in an effort to reform artificial intelligence (AI) to a level of practical importance for both research and the market. In addition, I invested in several technology start-ups which eventually turned into familiar tools like Google Translate, Skype and Google Earth, and I also took on consulting and advisory roles.

2. You’ve had a storied career, achieving academic and business success across a plethora of notable fields. Of these successes, what do you see as your standout contributions to the field of AI?

The Fundamental Code Unit (FCU) of the brain is, for sure, the most important thing I have done. It is a framework to decode the “Language of the Brain,” a breakthrough that would bring innumerable insights to the field and likely transform medicine as we know it.

The FCU reflects the need to conceptually unify insights from multiple fields into the phenomena that drive cognition. Specifically, it is proposed as a means to better quantify the intelligent thought process at multiple levels of analysis. The FCU quantifies the chemical and physical processes within the brain, resulting from linguistic and behavioural inputs, that drive cognition and consciousness. The proposed method efficiently models the most complex decision-making processes performed by the brain by analyzing the different mediums of brain function in a mathematically uniform manner.

The FCU framework attempts to map not only brain structure but also the cognitive and behavioural outputs the brain produces; it also attempts to map structural and functional networks to a theoretical system, bridging the gap between the mind, the brain and behaviour. Most models of brain function take neurons, or the interactions between them, as the most basic unit of cognition; the FCU instead uses cortical minima and cognitive minima to reconcile these levels into one coding schema.

The synergy of intention and the FCU will allow us to understand the smallest elements of cognition, showing us where cognition is situated in the brain and how neural communication is built from a circuit point of view, which would in turn show us how we can reproduce this process in AI. Essentially, it tackles the problem from both ends of the spectrum: behavioural, functional and structural.

3. You have a unique viewpoint in light of your extensive expertise on both the biological and computational sides of the cognitive sciences. What can both sides learn from one another? And do you see any glaring misconceptions held by either side regarding the importance or influence of the other to the development of AI?

First of all, I think both sides need to admit that we have a lot to learn and we need each other to figure it out.

Most researchers see the AI world in just three dimensions: structural, functional and behavioural. Most efforts have gone into making sense of these three domains individually. However, if we took the liberty of moving forward and looking at these constructs as a one-dimensional metafold, we would open up new doors and see things differently.

The brain is not a 3-dimensional structure, it’s a quantum machine.

Many of my colleagues and I have questioned the human-computer conundrum, which began as a revolutionary initiative in the field of artificial intelligence to forward-engineer a mind, rather than reverse-engineer a brain, in order to create intelligent machines. I don’t think it is currently possible to reverse-engineer a brain; we must first better understand how the brain works before we can think about synthesizing one.

4. You are an expert in the form and function of the human brain, but as someone who also holds great expertise in AI, how distinct is current cutting-edge AI from a human brain? What will prompt any significant closing of the gap between them?

The gap is enormous, infinite, and in a sense this will always be the case, because our brain right now, as we are sitting here, is changing, exhibiting plasticity. Each individual human brain is like its own galaxy, its own universe… it’s infinite.

The conventional understanding of AI is to reverse engineer consciousness or intelligence or to mimic the human brain using machines, but I have always disagreed with this classic approach.

I think Artificial intelligence is more about first trying to understand how natural systems produce consciousness!

The most advanced intelligent system we know, the Homo sapiens system, is not composed of independent information-processing units that interface with each other via representations. Rather, the system is composed of independent, parallel producers of activity which all interface directly with the world through perception and action, rather than interfacing exclusively with each other. From this perspective, the notions of central and peripheral systems evaporate: everything is both central and peripheral.

That’s why my current lab at MIT is called the Synthetic Intelligence Lab. Instead of simply trying to make a machine do what the human brain can do, which in my opinion cannot be done, we focus on discovering fundamental principles of brain operation that contribute to intelligence, in order to empower powerful new forms of artificial intelligence, enable the treatment of brain disorders, and augment human intelligence.

AI isn’t new; it has a long history that began with big breakthroughs in the 1950s by computer science pioneers such as Alan Turing, John von Neumann, Herbert Simon, Allen Newell, and other greats…

Many people and companies have recently become extremely excited about the potential of AI. I think the emergence of self-driving cars and human-like personal assistants like Siri or Alexa has brought many to the realization that it may be possible to create machines that can think like humans. Although Siri and Alexa do indeed exhibit some human characteristics, and self-driving cars will illustrate a new level of reliable autonomy for machines in society, there is still much more work to be done if we are to truly understand and mimic the capacities and faculties of the human brain. The somewhat anticlimactic fact is that, right now, we are nowhere near creating machines that can think on the level of humans. Nevertheless, we are seeing rapid growth in AI, particularly with regard to achievements like Google DeepMind’s systems beating world champions at multiple games roughly ten years ahead of predictions; in that regard especially, the public hype is understandable…

5. Advances have been made in the computer-driven reconstruction of the human brain, such as the Blue Brain Project’s discovery of 11-dimensional structures within the brain. If such a reconstruction were to exist, would it possess the capacity for consciousness? And what ethical and epistemological implications does this have for humanity?

I agree with this multidimensionality, and I believe that consciousness is a collective form. I also believe that the whole body is an extension of the brain: contrary to how we currently understand the system, I do not see the brain as isolated from its peripherals (the gut, the heart, and so on).

I think consciousness is something we will never be able to reproduce.

I think it’s paradoxical when movies, or individuals like Dmitry Itskov, insinuate that we will disembody our conscious minds and upload them, especially as early as 2045 (as Itskov’s plan suggests). We are humans, mortal creatures, who barely possess a basic understanding of the brain or mind, yet we want to upload our consciousness onto a server so we can live forever… We can’t even solve critical problems like dementia, strokes or obesity. I think it’s egomania in some sense, as if we had supreme control over our lives, our thoughts, our minds, when we really don’t in any sense.

6. As the founder of the MIT Mind Machine Project and Director of the Synthetic Intelligence Lab at MIT, what advice do you have for European institutions (be they academic or not) in building infrastructure to support the development of AI?

Work together and not in silos; banish the sense of self-importance attached to any specific domain of knowledge; and understand that every little element of information contributes to the larger understanding of what ‘self’ and ‘consciousness’ are, and of brain-mind duality.

Thanks to … for his great work editing this interview!
