Abstractly, I completely agree. We might generally say that people are, on average, not the special snowflakes they believe themselves to be.
But I have terrible, terrible anxiety about machine learning specialists wandering into areas of social science in which they are not at all trained or prepared, to do work that could (possibly insidiously) impact the way people think. [“Naive” NN models, for instance, could turn out to be racist simply because the training sets powering them are incomplete or encode racist mechanisms.] While ML researchers spend plenty of time thinking about computation and NN architecture, proportionately very little time seems to be spent on data representation or qualitatively driven feature engineering for the domains those models are applied to.
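To make the incomplete-training-set worry concrete, here is a toy sketch (the groups, outcomes, and counts are entirely made up for illustration): a model that just learns the majority outcome per group will faithfully turn a gap in data collection into a rule.

```python
from collections import Counter

def train_per_group(rows):
    """Learn the majority outcome separately for each group (a 'naive' model)."""
    by_group = {}
    for group, outcome in rows:
        by_group.setdefault(group, []).append(outcome)
    return {g: Counter(v).most_common(1)[0][0] for g, v in by_group.items()}

# Ground truth: both groups are approved at the same 60% rate.
truth = [("A", "approve")] * 60 + [("A", "deny")] * 40 \
      + [("B", "approve")] * 60 + [("B", "deny")] * 40

# The training set, however, is incomplete: most of group B's
# "approve" records were simply never collected.
biased_sample = [("A", "approve")] * 60 + [("A", "deny")] * 40 \
              + [("B", "approve")] * 10 + [("B", "deny")] * 40

model = train_per_group(biased_sample)
print(model)  # {'A': 'approve', 'B': 'deny'} — the skew becomes the rule
```

Nothing in the code is malicious; the bias lives entirely in what the dataset failed to represent.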
Further, human behavior has a randomness that machines will not approximate well. While AI can do basic prediction of current taste (e.g., Spotify Discover Weekly), it cannot predict future taste, which is actually the more interesting problem: what will you think is cool next month that, if presented to you now, you would not recognize as cool? What is the song you will like next month but don't like right now?
Until ML can answer that problem, it’s condemned to amplify the basic. [You like LuLuLemon? I know you’ll love Outdoor Voices! #basic]
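The "amplify the basic" point can be sketched in a few lines. This is my own toy example (the brands as items and the two-dimensional "taste vectors" are invented for illustration, not any real recommender): a similarity-based recommender will always surface the near-clone of your current taste over the thing you'll actually love next month.

```python
def similarity(a, b):
    """Cosine similarity between two taste vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (norm(a) * norm(b))

# Items as made-up (athleisure-ness, experimental-ness) vectors.
catalog = {
    "LuLuLemon":       (0.9, 0.1),
    "Outdoor Voices":  (0.8, 0.2),  # near-duplicate of current taste
    "next-month-you":  (0.1, 0.9),  # the taste you haven't grown into yet
}

def recommend(taste, catalog, owned):
    """Recommend the unowned item most similar to current taste."""
    candidates = [i for i in catalog if i not in owned]
    return max(candidates, key=lambda i: similarity(taste, catalog[i]))

current_taste = (0.9, 0.1)
print(recommend(current_taste, catalog, {"LuLuLemon"}))
# → Outdoor Voices: the near-clone always beats the future taste
```

By construction, an item orthogonal to your current taste can never win a similarity ranking, so the model is structurally incapable of answering the "what will you like next month?" question.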