I suggest that the question is incorrectly constructed. If we ask whether AI will emerge to rival or exceed the HI (human intelligence) ability to handle certain specified tasks and make certain specified decisions, the answer is a resounding yes. However, as long as AI remains computer-based in a binary system, it will continue to fail to adequately model HI. You can load up a binary, two-value logic algorithm with as many “self-learning” (actually self-modifying) feedback loops as you want; the algorithm remains basically a dumb two-value sorting tree, whereas HI decision making and other functions exhibit multi-value logic not yet captured by work in AI. Which, again, doesn’t mean that you can’t, for example, build a robotic automobile that will be safer on the road than one with a human driver. Nice piece, Rohan. Cheers!
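
To make the two-value vs. multi-value distinction concrete, here is a toy sketch. It contrasts classical Boolean connectives with Łukasiewicz-style fuzzy connectives, where truth is a degree in [0.0, 1.0] rather than a binary choice. The choice of Łukasiewicz logic is my own illustration of one many-valued formalism, not something from the article:

```python
# Toy contrast between two-valued Boolean logic and a many-valued
# (Lukasiewicz-style) logic, where truth is a degree in [0.0, 1.0].
# Purely illustrative; function names are hypothetical.

def bool_and(a: bool, b: bool) -> bool:
    # Classical two-valued conjunction: the only outcomes are True/False.
    return a and b

def fuzzy_and(a: float, b: float) -> float:
    # Lukasiewicz conjunction: max(0, a + b - 1).
    return max(0.0, a + b - 1.0)

def fuzzy_or(a: float, b: float) -> float:
    # Lukasiewicz disjunction: min(1, a + b).
    return min(1.0, a + b)

def fuzzy_not(a: float) -> float:
    # Lukasiewicz negation: 1 - a.
    return 1.0 - a

# A two-valued gate must commit fully to one branch...
print(bool_and(True, False))   # False

# ...while a many-valued gate can express partial truth.
print(fuzzy_and(0.7, 0.6))     # approximately 0.3
print(fuzzy_or(0.7, 0.6))      # 1.0
print(fuzzy_not(0.7))          # approximately 0.3
```

In the Boolean case every intermediate decision collapses to one of two branches, which is the “two-value sorting tree” behavior described above; the fuzzy connectives carry graded truth values through the computation instead.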