Deep Learning is Splitting into Two Divergent Paths
Carlos E. Perez

Intelligence can only be defined with respect to goals. Some things may be stupid to do according to one goal, but smart according to another. Over the set of all possible goals, everything turns out to be equally intelligent (this has actually been formalized and proved for search strategies and statistical learning, in the “no free lunch” theorems).
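
To make that parenthetical concrete, here is a minimal sketch (a toy example of my own, not from the original discussion) that checks the no-free-lunch claim by brute force: averaged over every possible objective on a tiny search space, two fixed search strategies achieve exactly the same performance.

```python
from itertools import product

# Toy search space: 3 candidate points, each objective assigns a value in {0, 1}.
POINTS = [0, 1, 2]
VALUES = [0, 1]

def run(order, objective, budget=2):
    """Evaluate the objective at `budget` points in the given visiting order
    and return the best value observed (our 'performance' measure)."""
    return max(objective[x] for x in order[:budget])

# Two different (non-repeating) search strategies: visit points in opposite orders.
strategy_a = [0, 1, 2]
strategy_b = [2, 1, 0]

# Enumerate every possible objective function f: POINTS -> VALUES.
objectives = [dict(zip(POINTS, vals)) for vals in product(VALUES, repeat=len(POINTS))]

avg_a = sum(run(strategy_a, f) for f in objectives) / len(objectives)
avg_b = sum(run(strategy_b, f) for f in objectives) / len(objectives)

print(avg_a, avg_b)  # identical: averaged over all goals, neither strategy is smarter
```

Any performance measure computed from the observed values gives the same equality; one strategy only pulls ahead of another once the set of goals it will face is narrowed, which is exactly the sense in which intelligence is relative to goals.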

Our troubles with “AGI”, both defining it meaningfully and achieving it, come from our ambivalence about our own goals, our uncertainty about what our purpose is and what we really want.

Solving a more general problem can never be easier than solving a more specific problem; at best, it’s equally easy, because any solver for the general problem also solves the specific one. This should be easy to appreciate for anyone working in software or mathematics, and really, for everyone. Deep neural nets fit with stochastic learning algorithms are very general problem-solvers, and making them more useful is ultimately about making them more specialized. There may seem to be a divergence between those pursuing “AGI” and those pursuing specialized agents, but I don’t think there is one (or if there is, it’s that the “AGI” people don’t know what they want).
