Of machines and men
Google, under pressure from brands threatening to withdraw their advertising from YouTube over scandals linked to inappropriate content in children’s videos, says it will monitor content far more intensively to “prevent bad actors” from “exploiting” its “openness”, and will hire up to ten thousand people during 2018 to manually inspect content.
In recent weeks, YouTube has suspended 270 accounts and deleted some 150,000 videos considered offensive or inappropriate for children. The company also plans to continue using machine learning to help supervisors eliminate almost five times more videos than they could by reviewing manually. According to YouTube, the content it has reviewed and identified algorithmically would have required 180,000 people working 40 hours a week. In addition to this supervision, the company will release regular reports on its progress and will adopt a new advertising approach based on stricter criteria and more manual processes.
YouTube’s efforts mirror Facebook’s: last May, the company announced it was hiring three thousand more people to join the 4,500 it already employs around the world on content monitoring, and in October, coinciding with the revelation of Russian investment in advertising campaigns aimed at skewing elections, it announced it was hiring a thousand more people to supervise ads.
How many people will end up monitoring content in the future? Could this be one of the “new professions” that will define the societies of the future, or is it simply another facet of the gig economy: dehumanizing, part-time, low-skilled employment? According to some studies, spending hours inspecting content that reflects the worst aspects of human nature can cause psychological problems of various kinds, including post-traumatic stress disorder, which makes the need to train algorithms to supervise content precisely and adequately, sparing people from unpleasant or mind-numbing material, ever more pressing.
The companies now hiring thousands of people to check content seem to assume that these jobs are temporary and will eventually be performed by algorithms. These people are being hired because machines are not yet accurate enough, but with training based on human monitoring of content, they will be. The paradox persists: however many people Amazon hires, becoming one of the biggest employment generators in the United States in the process, and however many of its workers are retrained for other functions as their work is taken over by robots, the reality is that as the army of robots grows, the total employment generated by the industry shrinks, and fewer and fewer jobs will be carried out by people.
Will this process be limited to jobs covered by the so-called 4Ds: Dull, Demeaning, Dirty and Dangerous, or will we see something more complex and nuanced? Will thousands of people be hired as content inspectors in the future, exposed to work-related illnesses, or will these just be temporary jobs until an algorithm is sophisticated enough to take over? Does it make sense for somebody to run around a warehouse receiving orders through an earpiece, scan boxes on an assembly line, drive or cycle through heavy traffic in all weathers, or watch dismal content for hours on end? And what about the people who currently make their living from these jobs?
With every day that passes, new low-skilled temporary jobs appear in logistics, driving, or repetitive tasks that in no way match human potential… Is this simply a period of transition, or yet another side of the unacceptable face of capitalism, sweeping away the hard-won achievements of the last 150 years? In short, what is the future of the division of tasks between machines and men?
(In Spanish, here)