Imagine a machine with a hundred dials, each of which controls a single parameter. The machine performs "okay," but we know it could perform better if we could find the right way to turn the dials. Optimization techniques help us explore this high-dimensional space intelligently to find a better setting. The result is likely not "the perfect solution," but we know it is better than the original initial conditions.
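The dial-turning process can be sketched as a simple random search. Everything here is a hypothetical stand-in: a machine whose score peaks when each of 100 dials sits at an (unknown to us) ideal setting, and a loop that keeps any random nudge of the dials that helps.

```python
import random

random.seed(0)

N = 100
ideal = [i / (N - 1) * 2 - 1 for i in range(N)]  # the hidden best settings

def score(dials):
    # Hypothetical machine: higher is better, peaks when every dial
    # matches its ideal setting.
    return -sum((d, t)[0] ** 2 - 2 * d * t + t ** 2 for d, t in zip(dials, ideal))

dials = [0.0] * N              # the original initial conditions
best = score(dials)
for _ in range(5000):
    candidate = [d + random.gauss(0, 0.1) for d in dials]
    s = score(candidate)
    if s > best:               # keep any turn of the dials that helps
        dials, best = candidate, s

# We rarely land on the perfect solution, but we end up better
# than where we started.
```

This is the laborious part of the story: every accepted step improves things a little, but nothing learned here transfers when the system changes.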
Let’s pose a slightly different situation. The machine is performing fine, but the system has changed in some way, and you need to adjust the dials to fit the new task. We can start the optimization process again, but it is laborious to traverse all those dials. Even more problematic, the system changes constantly, and by the time we have learned the new settings, we need to start over.
Meta-learning recognizes that the systems influencing the machine are related, and therefore the dials are connected too. As we learn each new system, we begin to see the connections between dials: in effect, there exists a meta-dial that controls and restricts the parameter spaces of all the other dials. Finding this meta-dial lets us reach the right model more efficiently and speeds up learning.
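The meta-dial idea can be made concrete with a Reptile-style sketch (one of several meta-learning algorithms; the task structure, learning rates, and loop counts below are all illustrative assumptions). Each task's optimum varies slightly around a shared hidden value; the meta-dial is a starting point that is nudged toward wherever each task's adaptation ends up, so a brand-new task needs only a step or two of adjustment.

```python
import random

random.seed(1)

META_CENTER = 3.0   # hidden structure shared by all tasks (assumed)

def new_task():
    # Each "system" is a small variation on the shared structure.
    return META_CENTER + random.gauss(0, 0.2)

def adapt(start, target, steps, lr=0.4):
    # Plain gradient descent on the loss (theta - target)^2:
    # turning one dial to fit one specific system.
    theta = start
    for _ in range(steps):
        theta -= lr * 2 * (theta - target)
    return theta

# Reptile-style meta-learning: repeatedly adapt to a sampled task,
# then nudge the meta-dial toward the adapted setting.
meta = 0.0
for _ in range(200):
    adapted = adapt(meta, new_task(), steps=5)
    meta += 0.1 * (adapted - meta)

# Facing a brand-new task, the meta-learned start adapts in one step
# far better than starting from scratch.
task = new_task()
from_meta = abs(adapt(meta, task, steps=1) - task)
from_zero = abs(adapt(0.0, task, steps=1) - task)
```

The meta-dial here is just the learned initialization: it encodes what all the tasks have in common, so only the small task-specific residue remains to be learned.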
Ideally, faster optimization would not be the only advantage. Our machine learning models typically move through a changing world. If we could find that meta-dial, we could better understand the rules that govern our models.
AI has ventured into the science world:
CASP13 is an initiative that provides independent evaluation of current algorithms for protein structure modeling, a long-studied problem in computational biology. Scalable solutions would improve drug discovery and help us better understand disease. The teams that enter CASP typically come from labs that have been studying this area for decades.
Last year, DeepMind participated in CASP for the first time and rocked the community by outperforming the established groups. Among the targets where no homologous structures were known, DeepMind’s AlphaFold produced the best prediction in 25 of 43 cases; the second-place finisher won three of the same 43. A number of scientists pointed out that the AlphaFold algorithm builds on some of the best-performing existing methods.
The DeepMind team will probably move on to the next problem, but I hope not. I spent a number of years studying protein sequence-structure-function back in my academic days, and this work only scratches the surface. We often forget these structures are not rocks that never change shape. Proteins move about, unfolding and refolding in new ways (often to bind DNA, RNA, and other proteins). If we had a model that could represent many, if not most, proteins, we could dive in and ask new questions, and possibly learn the meta-dial that makes everything possible (literally). After spending 20 years in physics and then biology, I cannot imagine making the next steps in scientific discovery without AI.
The above was inspired by reading this some time ago.