Thanks, Phil. To your question about alternatives, I found Legg and Hutter’s paper, A Collection of Definitions of Intelligence, illuminating.
The complexity of intelligence makes identifying an essence a framing exercise: the purpose of the thing must first be specified. I think most would describe machine learning as predictive. And what colloquially may be described as “solving problems” is formalized as autonomous, goal-seeking rational agents.
As the endeavour moves from automating existing knowledge (automation) to creating new knowledge (in the scientific, conjectural, revolutionary sense of the term), the frame will shift. Here, the problem becomes epistemological: processes of prediction and inductive inference are insufficient.
As noted in a previous post, I find the biological roadmap confusing, due to the bloat of evolved functionality and goals. I don’t think anyone would challenge the wealth of insights that a study of the natural world can provide. But the success of the functional approach, discussed in this present post, may be attributed to the clarity of purpose and the precise definition of the problems to be solved. I’m not suggesting that those pursuing a biologically directed approach lack this discipline, only that the serious gaps in knowledge impose risks.
So taking the proponents on their own terms, what you’re describing as predictive would certainly be within the reach of inductive inference systems: more than merely reactive, in that they can predict new observations. This is awe-inspiring, which is perhaps why some are even entertaining the idea that it marks the end of theory and explanations altogether.