There exists a fundamental mechanism in nature with its roots in causality.
All physical systems in the universe are causal (effects follow their causes),
including brains, robots, computers, and AI machines. Causality leads to
symmetry, and symmetry to binding. If, in addition, the system loses entropy, then binding takes place and creates structures, invariance, and meaning. The mechanism is well understood and has long been known in physics, but it has been ignored in AI.
The mechanism can be simulated by a simple algorithm that uses a causal set as the mathematical model of the system. Causal sets have a vast array of algebraic properties, which are precisely the ones most sought after in AI. This is not a coincidence. An action functional is minimized, and basic causal-set algebra is used to calculate the emergent structures. The process is recursive, resulting in hierarchies of invariants, and the invariants have a physical meaning. Limited tests using the black-box brain-experiment technique indicate agreement with actual results obtained by humans. The simulation can explain machine learning, structure, invariance, meaning, data compression, and much more.
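The algorithm and the action functional are not spelled out here, but the underlying mathematical object is standard: a causal set is a finite set of events with a "precedes" relation that is transitive and acyclic. A minimal sketch of that data structure, with names of my own choosing (`CausalSet`, `precedes` are illustrative, not the author's):

```python
class CausalSet:
    """A finite set of events with a transitive, acyclic causal order."""

    def __init__(self, events, precedes):
        # `precedes` is a set of (a, b) pairs meaning "a causes b".
        self.events = set(events)
        self.relation = self._transitive_closure(set(precedes))
        self._check_acyclic()

    def _transitive_closure(self, rel):
        # Causality is transitive: if a < b and b < c, then a < c.
        closure = set(rel)
        changed = True
        while changed:
            changed = False
            for (a, b) in list(closure):
                for (c, d) in list(closure):
                    if b == c and (a, d) not in closure:
                        closure.add((a, d))
                        changed = True
        return closure

    def _check_acyclic(self):
        # Effects follow causes: no event may precede itself.
        for a in self.events:
            if (a, a) in self.relation:
                raise ValueError(f"causal cycle through event {a}")

# Four events; b depends on a, c on b, d on a.
cs = CausalSet("abcd", {("a", "b"), ("b", "c"), ("a", "d")})
print(("a", "c") in cs.relation)  # True: a < c follows by transitivity
```

Any structure-finding step of the kind described above would operate on `relation`; how the action functional ranks candidate structures is left to the cited paper.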
All this means that any system, such as a neural network, has an underlying causal set, and that any kind of optimization that happens to reduce the
entropy of the system, even incidentally, as back-propagation does, will
produce some of the structures of the underlying causal set. If, in addition,
the optimization is recursive, then hierarchies may be obtained. It also means that nature already has its own universal "interlingua," except that it is not a "lingua": it is the causal set.
There are published papers. See, for example: "On the Future of Information: Reunification, Computability, Adaptation, Cybersecurity, Semantics," IEEE
Access, vol. 4, pp. 1117–1140, 2016. Other references are included in that paper.
Unfortunately, you need to be a physicist to understand the science.
Fortunately, I am here: I am a physicist, and I can explain it better.