New York Post

Facebook’s Suicide Algorithms are Invasive

We think of artificial intelligence as something that should better humanity, but user monitoring is an invasion of privacy. Facebook’s incessant experiments on us, whether with dating or blockchain, are going to take a toll.

But to be rated by how likely we are to self-harm? That’s state-style monitoring at its worst. It’s worse, I think, than Chinese parents wanting GPS smart clothing for their kids. There’s a place for AI to benefit people, but it’s not the place of a company like Facebook to warn us or our loved ones that we are suicidal.

Facebook automatically scores all of us.

That is, it scores all of our posts in the US and select other countries on a scale from 0 to 1 for risk of imminent harm.
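To make that concrete, here is a minimal sketch, in Python, of what a 0-to-1 text scorer looks like in principle. To be clear, this is my own illustration under assumed training data and an assumed escalation threshold, not Facebook’s actual model, which has never been published.

```python
# Hypothetical sketch of a 0-to-1 "risk" scorer over post text.
# Nothing here is Facebook's real system; the examples, features
# and threshold below are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples (1 = flagged by a human reviewer, 0 = not).
posts = [
    "had a great day at the park",
    "so excited for the weekend",
    "i can't take this anymore",
    "nobody would miss me if i was gone",
]
labels = [0, 0, 1, 1]

# TF-IDF features fed into logistic regression yield a probability in [0, 1].
scorer = make_pipeline(TfidfVectorizer(), LogisticRegression())
scorer.fit(posts, labels)

new_post = "i feel like giving up"
score = scorer.predict_proba([new_post])[0][1]  # P(class == 1), i.e. 0..1
print(f"risk score: {score:.2f}")

# An arbitrary, assumed threshold for escalating to human review.
if score > 0.8:
    print("escalate to human review")
```

Even a toy like this makes the point: the “score” is just a probability spat out by a pattern-matcher, and yet real-world consequences get attached to it.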

Facebook is also secretly ranking how “trustworthy” we are.

I think you get the idea: Facebook is using artificial intelligence on its users however it (or rather, Mark Zuckerberg) sees fit.

Users will not know that their behavior is being ranked, the company appeared to suggest. If we surveyed people on Instagram, would they know they are being ranked on self-harm and trustworthiness spectrums? How about people on WhatsApp just communicating with their families?

Facebook Positions Itself as a Socially Benevolent Company

Facebook is scanning nearly every post on the platform in an attempt to assess suicide risk. Sadly, Facebook has a long history of conducting “experiments” on its users. It’s hard to own a stock in a company that can’t be trusted with either democracy or our personal data.

Facebook acts a bit like a social surveillance program: it passes the information (a suicide score) along to law enforcement for wellness checks. That’s pretty much state surveillance; what’s the difference?

Privacy experts say Facebook’s failure to get affirmative consent from users for the program presents privacy risks that could lead to exposure or worse. Facebook has a history of sharing our personal data with other technology companies. So we are being profiled in the most intimate ways by third parties we didn’t even know had our data.

In March 2017, Facebook launched an ambitious project to prevent suicide with artificial intelligence, but what is the real reason it builds these constructs? It’s to monetize our data, not to “help humanity” or connect the world.

Live streaming on Facebook? It carried some of the most disturbing content of 2017 and 2018. Following a string of suicides that were live-streamed on the platform, the effort to use an algorithm to detect signs of potential self-harm sought to proactively address a serious problem. So Facebook uses its top AI talent to police its platform and rate citizens. Sound a bit Black Mirror to you? Well, it’s true.

Artificial Intelligence is Entering Dangerous Territory

So, as analysts are saying online, Facebook is creating new health information about users, but it isn’t held to the same privacy standard as healthcare providers. As I have said, tech companies getting into healthcare create more ethical problems with AI than they solve. And there isn’t regulation to handle such cases. As we saw with Senators, they don’t even understand the basics of how the internet works. What does Trump know about AI? The past and the future don’t necessarily converge in a 70-something-year-old brain.

The algorithm touches nearly every post on Facebook, rating each piece of content on a scale from zero to one, with one expressing the highest likelihood of “imminent harm”. Does online animosity towards Facebook itself raise my score? I’d like to know. I don’t want to live in a system where I’m rated without my consent.

Facebook is creating “sensitive mental health data” out of our own data, which it thinks it owns, for profit. The best minds in AI might as well be building a social-credit data-weaponization economy here. Facebook is leading the way, and it makes China’s rating system look benevolent and conscientious in comparison: the Chinese government wants conformity and says so, while Facebook wants invasive profit from our data while pretending to be benevolent.

Facebook is conducting data fraud on global citizens. There’s really no other conclusion to draw.

Data protection laws that govern health information in the US currently don’t apply to the data that is created by Facebook’s suicide prevention algorithm, according to Business Insider.

To make money, Facebook doesn’t just need users. It needs users that are active and engaged. It’s creating a system to use predictive analytics to know what you will do next.

It needs to know not just which link you’re likely to click, but also what makes you more or less likely to click it. It’s creating a captive-data ecosystem. It has the billions of innocents; now it needs to evaluate their mental health, behavior, community, vulnerabilities and so on.
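For illustration, engagement prediction of this sort is typically framed as yet another probability model, this time over behavioral signals rather than post text. A minimal sketch follows; the feature names are my own guesses at the kind of signals involved, not anything Facebook has disclosed.

```python
# Hypothetical sketch of engagement prediction: estimate the
# probability a user clicks a given link from behavioral features.
# The features and data are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [past_click_rate, seconds_on_platform_today, friend_engaged]
X = np.array([
    [0.02, 300, 0],
    [0.15, 2400, 1],
    [0.01, 120, 0],
    [0.20, 3600, 1],
])
y = np.array([0, 1, 0, 1])  # 1 = the user clicked

model = LogisticRegression(max_iter=1000).fit(X, y)

# Predict for a new session: the output is, again, a 0-to-1 score.
p_click = model.predict_proba([[0.10, 1800, 1]])[0][1]
print(f"predicted click probability: {p_click:.2f}")
```

Notice that, mechanically, this is the same machinery as the self-harm scorer above: one pipeline rates your despair, the other rates your attention, and both feed the same business.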

Companies such as Facebook that make inferences about a person’s health from non-medical data sources are not subject to the same privacy requirements. This is a dangerous and malignant use of AI if I ever saw one. Facebook has even invested in a Messenger Kids platform; it’s pretty sick.

Facebook studies things like massive-scale emotional contagion and sentiment manipulation online. Like the NSA and DARPA, Facebook is a mass surveillance channel, just as many Chinese companies now have active ties to the Chinese government; this, after all, has been going on for quite some time. But making inferences on our personal data with AI goes down a dangerous path. It’s where something like democracy and capitalism just folds upon itself in corruption.

Facebook should make a rating of how likely I am to leave Instagram, Messenger, WhatsApp and Facebook’s flagship app, because this kind of behavior is a violation not just of our privacy, but of the unspoken rules of artificial intelligence research: ethics, social justice and human rights.