Taro Toyoizumi, a RIKEN Brain Science Institute Team Leader, was recently awarded a Commendation for Science and Technology for Young Scientists by the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT) for his theoretical and experimental research on learning principles in neural networks. The award is given to young scientists under 40 years old for their outstanding achievements in conducting highly innovative research in science and technology. We asked Toyoizumi about his career path, experience abroad, and the future of computational neuroscience and artificial intelligence (AI).
How did you become interested in theoretical neuroscience?
My undergraduate major at the Tokyo Institute of Technology was in physics. As a senior, I studied combinatorial optimization problems in Prof. Nishimori’s lab, where people were applying methods from physics. We studied a textbook on neural networks, and I was fascinated to learn that a network of simple computational elements could solve a variety of problems, and that its performance could be ‘calculated’ using physics techniques. For this reason, I later joined Prof. Aihara’s lab at the University of Tokyo as a graduate student to study the actual brain.
The Aihara lab was very exciting. You could study anything. However, despite my high motivation to study the brain, I struggled and almost gave up pursuing neuroscience, because the brain is so complex and diverse, and I was at a loss as to how to apply my skills from physics directly. Nevertheless, when I entered the doctoral course, I decided to continue with the challenge of brain science.
“Are there fundamental laws for brain science? If so, I wanted to find them”
Is the application of physics to brain science what drew you to Switzerland?
I spent one year of my PhD thesis work in Dr. Gerstner’s lab at École Polytechnique Fédérale de Lausanne (EPFL). Dr. Gerstner is a computational neuroscientist, also coming from physics, and a leading researcher who has written a well-known textbook on theoretical neuroscience.
I brought my own research project and some new ideas from Japan. In physics, we have fundamental laws such as Newton’s law of universal gravitation. Are there fundamental laws for brain science? If so, I wanted to find them. That was my starting point. I was influenced by the information theory proposed by Dr. Amari, the director of RIKEN Brain Science Institute (BSI) at the time. Combining this with my experience in the Nishimori and Aihara labs, I thought I could find fundamental laws for neuroscience from the angle of optimizing information.
The experience during my PhD of being able to do what I wanted in the place where I wanted to work was precious and valuable, although it was not easy for a graduate student who did not yet know the research field well. At the time, I had strong mathematical skills but was still too inexperienced to place my work in an appropriate context or to determine which problems were the most important to solve. Dr. Gerstner taught me how to tackle problems and encouraged me by saying that my work was very elegant! Under his guidance, I wrote my first paper on plasticity and learning in the brain by means of optimization, a research interest that I am still pursuing.
“Grasping cultural differences in communication is necessary for international scientific collaboration”
What inspired your postdoc in New York, where you studied visual plasticity?
After coming back to Japan to finish my PhD, I felt a strong motivation to work on subjects that were less abstract and closer to reality, which is why I decided to join Dr. Larry Abbott’s lab at Columbia University in New York, where experts from different fields were bridging experimental studies with theoretical models.
Among the many experts there, I frequently discussed my work with Dr. Ken Miller, who has studied visual plasticity for many years. Our discussions eventually led us to a project exploring the role of neural plasticity in establishing the critical period for ocular (eye) dominance. Critical periods are windows during brain development that limit when neural networks remain flexible; we hypothesized that plasticity changes more during these windows than at any other time. Dr. Takao Hensch, a pioneer in this field who used to work with Dr. Miller in Michael Stryker’s lab, also joined this project.
The most exciting experiences at Columbia were the discussions I had with other researchers. These were often about conceptual problems or the big picture in research, both of which are sometimes missing in the Japanese research environment. I learned a lot from researchers who thought deeply about both experimental and theoretical questions. I also learned about cultural differences in how people communicate, a skill that is absolutely necessary when collaborating with scientists in other countries. Building a network with top-class researchers in NYC was one of my treasures as a young researcher.
What are your group’s current research objectives?
The field of theoretical modeling I work in seeks new ways to understand the brain as a system in which different components and functions interact with each other. We recently became aware of an apparent paradox: studying the brain alone is not enough to understand brain function. In the past, we considered the environment merely a source of sensory input to the brain. Now we understand that the same brain can process the same sensory input differently depending on the environment. Thus, our new models of brain function now take into account the environment and its control of brain state.
I think the same concept may apply to learning theory. Most established learning models learn in isolation, like rote memorization from textbooks. Instead, we need more dynamic learning models that can interact with an environment or a society. Current machine learning theories describe what a single machine can achieve, but interactions between machines, or between machines and brains, may enable more efficient learning. If we can create such models, robots could learn from humans, humans could learn from robots, or society itself could learn more efficiently from itself in the future. Our group would like to make basic discoveries that may inspire new brain-like learning systems.
“In the future, even the boundary between the brain and AI may fade out”
How can neuroscience and artificial intelligence (AI) interact?
There are two directions: from AI to brain science, and from brain science to AI. First, AI may contribute to data analysis in neuroscience, enabling the extraction of biological features from large volumes of experimental data. Conversely, discoveries in brain science are actively contributing to improvements in AI. A paper that we recently published describes one such finding, establishing an improved independent component analysis (ICA) algorithm based on the brain’s learning rules and applicable to engineering.
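For readers unfamiliar with ICA, a minimal sketch of the classic FastICA-style fixed-point algorithm (the standard textbook technique, not the brain-inspired variant from the paper mentioned above) illustrates what blind source separation does: given only mixtures of signals, it recovers the original independent sources.

```python
import numpy as np

# Classic ICA demo: unmix two signals from their linear mixtures.
# This is a sketch of the standard FastICA fixed-point method, shown
# only to illustrate the ICA problem the interview refers to.

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * np.pi * t)              # source 1: sine wave
s2 = np.sign(np.sin(3 * np.pi * t))     # source 2: square wave
S = np.vstack([s1, s2])                 # true sources, shape (2, n)

A = np.array([[1.0, 0.5], [0.5, 1.0]])  # "unknown" mixing matrix
X = A @ S                               # observed mixtures

# Center and whiten the observations
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Xw = (E @ np.diag(d ** -0.5) @ E.T) @ X

# FastICA fixed-point iteration with a tanh nonlinearity
W = rng.standard_normal((2, 2))
for _ in range(200):
    g = np.tanh(W @ Xw)
    g_prime = 1.0 - g ** 2
    W_new = (g @ Xw.T) / Xw.shape[1] - np.diag(g_prime.mean(axis=1)) @ W
    # Symmetric decorrelation keeps the unmixing rows orthonormal
    U, _, Vt = np.linalg.svd(W_new)
    W = U @ Vt

Y = W @ Xw  # recovered sources (up to sign and ordering)

# Each recovered component should correlate strongly with one true source
corr = np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:])
print(corr.max(axis=1))  # best-match correlations, close to 1
```

ICA recovers sources only up to sign and permutation, which is why the check above matches each output against its best-correlated source rather than assuming a fixed order.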
In the future, I expect that even the boundary between the brain and AI may fade out. Brain-machine interfaces (BMIs) are currently developing at an exponential rate and may eventually connect the brain directly to AI devices, enabling AI to become a part of our brain. In another scenario, if we are able to culture whole brains, we may be able to use their intrinsic circuitry as a computing device to solve problems given appropriate inputs. Of course, this is still fiction, but these possibilities might lead to an eventual understanding of what consciousness in a computer would be, and may become part of the AI singularity.
This interview has been edited for clarity and length.