Google & Facebook should stop training deep learning systems in real world situations
The Financial Times published a story today by Aliya Ram and Mark Vandevelde: "Advertisers quit YouTube over videos featuring children". The writers recount a number of strongly negative responses from Google/YouTube advertisers: Lidl said it was "shocked and disturbed" by revelations that explicit comments had not been removed from videos of young children on Google's video-sharing site. HP is reported to have gone so far as to say Google had "wrongly classified its content".
Keep in mind that most of the ad placement work is handled by algorithms meant "to identify inappropriate videos and abusive language in the comments section". Deep learning is the method of choice for both of these tasks: computer vision for the videos, natural language processing for the comments.
Also keep in mind some of the history of neural networks. They fell out of favor in the late 1960s, after Marvin Minsky and Seymour Papert published "Perceptrons" (1969), a book that challenged the notion that neural networks could do much of what they had been touted to be capable of doing. In a very short video available, of course, on YouTube, Prof Minsky sums up their work and, of much greater importance, the limits of neural networks.
Today's neural network advocates argue that the availability of enormous amounts of data provides their algorithms the "nutrition" they need to overcome the limitations Profs Minsky and Papert identified.
But the very recent revelations from the BBC and the Times about what one can only call the disaster of YouTube's automated ad placement system are the strongest possible rebuttal to that argument.
Google is not alone. Facebook is also a keen proponent of "pedal to the metal" deep learning (AKA neural network) development and deployment. As ProPublica found, not once but twice (first in late October 2016 and again in late November 2017), Facebook's ad targeting "showed a significant lapse in the company's monitoring of the rental market".
Given the obvious, glaring mistakes made by these neural networks, one has to ask why Google and Facebook continue to let them shoulder most of the work of ad placement and targeting. HIRE SOME HUMANS.