Just what does anybody expect Facebook to do when somebody commits a murder and posts it online?

Enrique Dans

--

Last Sunday in Cleveland, a man used his smartphone to film himself murdering a randomly chosen person walking through a park. He simply stopped his car, picked out a 74-year-old man, and shot him, supposedly to attract a woman’s attention. He later uploaded the video to Facebook and then broadcast a second one live, in which he explained the killing and confessed his guilt. After a brief police chase on Tuesday, the man shot himself dead.

The incident has triggered a discussion about Facebook’s responsibility for allowing the horrific video to be posted and for not removing it faster, and even about whether Facebook Live should be shut down until the technology exists to prevent such videos from being posted.

The illustration shows the timeline of events, from the posting of the first video shortly before the murder to its removal and the closure of the murderer’s account: a little more than two hours in total. Facebook’s system is based on reports from users, and from the time sequence it follows that the company was reasonably quick to react. The video is clearly content that should never be made public and that obviously causes enormous pain to anyone connected to the victim, but what exactly is being demanded of Facebook? The company cannot, no matter how hard it tries, prevent one person from killing another, and it cannot prevent someone from uploading the video: the most we can ask is that it remove the material as quickly as possible and prevent its redistribution.

But some people are holding Facebook responsible for something that technology is simply not able to do, and will not be able to do for a long time. Facebook, along with companies like YouTube and Twitter, has certainly given anyone with a smartphone the ability to broadcast live to the world. But even before this technology existed, when live broadcasting required dedicated equipment, transmission links and licenses, there was already discussion about how to stop broadcasters from recording inappropriate material, or what to do if a journalist committed suicide live on air. What can Facebook do? Nothing, other than withdraw offensive material as quickly as possible. There is no point in holding technology companies responsible for the evils of the world.

No machine learning system can prevent someone from broadcasting or uploading a video of a murder to a platform like Facebook. It’s possible to recognize certain patterns, but the task is enormously complex: there are any number of scenes in which a murder would not appear to be taking place, and vice versa.
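To make the difficulty concrete, here is a toy sketch of the kind of per-frame scoring such a system would rely on. Everything in it is invented for illustration (the scores, the videos, the thresholds); the point is only that any fixed threshold trades false negatives against false positives:

```python
# Hypothetical sketch: why a "violence classifier" cannot be both strict
# and safe. All model scores and thresholds below are invented.

def flag_video(frame_scores, threshold):
    """Flag a video if any frame's 'violence score' reaches the threshold."""
    return any(score >= threshold for score in frame_scores)

# Synthetic per-frame scores from an imaginary model (0 = benign, 1 = violent).
videos = {
    "actual_murder_poor_lighting": [0.2, 0.4, 0.55, 0.3],  # real violence, weak signal
    "action_movie_clip":           [0.1, 0.7, 0.9, 0.6],   # staged violence, strong signal
    "birthday_party":              [0.05, 0.1, 0.08],      # clearly benign
}

for threshold in (0.5, 0.8):
    flagged = [name for name, scores in videos.items()
               if flag_video(scores, threshold)]
    print(f"threshold={threshold}: flagged {flagged}")

# At 0.5, both the real murder and the staged movie clip are flagged
# (a false positive); at 0.8, only the movie clip is flagged, and the
# dimly lit real murder slips through (a false negative).
```

The dimly lit murder that scores lower than the action movie is exactly the “murder that doesn’t look like one” problem the paragraph above describes, and no single threshold resolves it.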

Asking technology to do the impossible is typical of those who understand nothing about it, and an easy way to assign blame. Obviously, nobody wants to see innocent people killed, but as long as Facebook acts quickly to remove such content and prevent its redistribution, it is not guilty of anything. What we choose to do with technology is our responsibility.

Some might argue that broadcasting a murder could encourage copycats. That is much the same reasoning as not filming a streaker so as to deny them their five seconds of fame: television stations are only responsible up to a point, and all we can ask of them is that they try to avoid broadcasting such material.

Facebook, within the limitations it faces, and given that it cannot vet every video uploaded to its platform, does what it can to prevent inappropriate material from being shown. That means having staff available 24/7 to receive reports from users, make quick decisions and try to avoid false positives. If, by reasonable means, that response time can be cut from the two hours it took on Sunday night to just half an hour, great. But I’m afraid that, for the time being, this will only come about by increasing the number of human moderators available, rather than through machine learning algorithms.
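The report-driven workflow described above can be sketched as a simple priority queue: nothing is reviewed until users complain, and the more complaints a video receives, the sooner a human moderator sees it. This is a hypothetical illustration, not Facebook’s actual system; all names and numbers are invented:

```python
# Hypothetical sketch of a report-driven moderation queue. Videos are only
# reviewed after user complaints, and more complaints raise the priority.
import heapq

class ReportQueue:
    def __init__(self):
        self._heap = []     # entries of (-report_count, video_id)
        self._counts = {}   # video_id -> current number of user reports

    def report(self, video_id):
        """A user flags a video; each report bumps its review priority."""
        self._counts[video_id] = self._counts.get(video_id, 0) + 1
        heapq.heappush(self._heap, (-self._counts[video_id], video_id))

    def next_for_review(self):
        """Hand the most-reported video to a human moderator."""
        while self._heap:
            neg_count, video_id = heapq.heappop(self._heap)
            if self._counts.get(video_id) == -neg_count:  # skip stale entries
                self._counts.pop(video_id)                # leaves the queue
                return video_id
        return None

queue = ReportQueue()
queue.report("cat_video")
for _ in range(3):
    queue.report("violent_video")   # three separate user complaints
print(queue.next_for_review())      # prints "violent_video": most reports first
```

The design choice matters for the argument: in a queue like this, the delay between upload and removal is bounded by how quickly users report and how many human reviewers are on shift, which is why adding moderators, not algorithms, is what shortens those two hours.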

The day when a platform like Facebook can detect that a video shows a murder or other violent act and decide, autonomously, to interrupt the broadcast is still far off: we’re talking about technology, not magic. Like so many other possibilities turned into reality by technology, livestreaming is what it is. For every sinister or detestable use of it, a thousand positive uses can be found. Is there anything to be gained by shutting down Facebook Live just because one person used it to broadcast a murder?

Demonizing Facebook because a murderer used its platform to show his crime shortly after committing it is absurd, equivalent to blaming a knife manufacturer for the robberies or murders committed with its knives. The facts are terrible and painful, yes. It’s not good that people are able to see them. But for the moment, all Facebook can do is remove such content as quickly as possible with the means available to it and prevent its redistribution. No more, no less.

(In Spanish, here)

--


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)