Great article. Very informative. Happy to see that FB has some of the best and brightest working on this.
Let’s not ignore the control-loop problem here: in predictive systems, future outcomes are derived from past activity. Supervisory ML oversight is necessary precisely because the downstream consequences are unknowable.
For something as important as a two-billion-member social network to run on an ‘unsupervised’ machine-learning algorithm is frightful at best. At worst, it could be gamed by organizations like Cambridge Analytica in no time flat. Read this article for more on C.A.
Election echo chambers and fake-news tornadoes are nothing compared to the kind of manipulation that will be possible when even a weak A.I. fails to include morality. (See Nick Bostrom’s Superintelligence for background on that.)
The best that we as socially aware technologists can do is maintain a vigil over the unintended consequences of our personal-data explosion. There will be other conflagrations in society attributable to this attitude: “We’re the social network. If I’m going to make predictions about people’s feedback on a piece of content, [my system] needs to react immediately, right?”