Yeah that’s absolutely true.
Jieren Chen

Heh, sorry for maybe implying “malicious or selfish intent” there, didn’t mean that :) There’s nothing wrong with playing a good game in the attention arena. And the idea of pushing people towards digging deeper into ideas like the “log curve of income utility” is good, I’d never thought of the usefulness of “fodder” before :)

A slightly off-topic remark though, since the whole deep-learning thing is closer to my current area of expertise: `Ideas guide mathematics, not the other way around. Hebbian plasticity is a simple concept that guides mathematical models in both comp neuroscience and deep learning. “What fires together, wires together.”` — this isn’t really how things worked out in either modern mathematics or modern AI/ML/“deep learning” (unfortunately, maybe).

First, for many proofs and advances in modern math (though not all), the “ideas” had to be discovered from the math after it had been worked out. There are even computer-generated proofs now, and it’s people’s job to extract “the useful ideas”, if any, after the results have been proven.

Leaving that aside, the whole “what fires together, wires together” picture is about as far as it can get from how modern neural networks (“deep learning”) actually work. People started with bio-inspired models in the ’70s, but after seeing that this didn’t get anyone anywhere, they kept only a few distilled mathematical ideas (basically “yeah, networks, as in networks of mathematical operations, but let’s keep calling them ‘neural’ because it helps with getting funded”…) and piled math upon math: essentially, figure out efficient ways to approximately optimize some loss/cost function across a network using its derivatives, with no deeper “idea” than “compute w_1,…,w_n that minimize E”. Lots of it is even proof-less and empirical, and they kept iterating until they ended up with things that worked; some people are still working backwards from what was engineered to work, trying to discover why it works so well on real-world data.

And when it comes to biological neurons, “Hebbian plasticity” is still offered as the simplest explanation simply because nobody ever has the time to explain the ideas behind the current state-of-the-art understanding to the general public. Heck, neuroscientists have a hard time explaining their latest findings even to AI/ML researchers :|
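To make the contrast concrete, here’s a toy sketch (all names and data are made up for illustration) of the two update rules side by side: a literal Hebbian rule, which is purely local and has no notion of error, versus the “compute w_1,…,w_n that minimize E” recipe of gradient descent on a loss:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a single linear "neuron" with unknown true weights
x = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = x @ true_w

lr = 0.01

# Hebbian-style update: strengthen each weight in proportion to the
# correlation of pre- and post-synaptic activity ("fires together,
# wires together"). Purely local, no target, no error signal;
# unstable without extra normalization (e.g. Oja's rule).
w_hebb = 0.1 * rng.normal(size=3)
for xi in x:
    post = xi @ w_hebb
    w_hebb += lr * post * xi  # pre-activity * post-activity

# Gradient descent: pick w that minimizes E(w) = mean squared error,
# by repeatedly stepping along -dE/dw. This is the whole "idea".
w_gd = np.zeros(3)
for _ in range(1000):
    pred = x @ w_gd
    grad = 2 * x.T @ (pred - y) / len(x)  # dE/dw
    w_gd -= lr * grad

print("hebbian:", np.round(w_hebb, 2))
print("gradient descent:", np.round(w_gd, 2))  # approaches true_w
```

The Hebbian weights just drift along whatever direction the inputs happen to correlate in, while gradient descent recovers the true weights, because it is explicitly told what error to minimize; nothing in modern training pipelines resembles the first rule.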