1984 came late, but 2084 will come early: why we need A.I. safeguards.
1984 came late, but 2084 will come early. Look at what’s happening in China, where a social-score system uses big data to impose a socially inhibiting metric on people’s lives based on how the state deems their value. Look at what’s happening in America, where health scores and risk modeling use big data to predict how much people’s healthcare will cost, all while leaving people out of a process that could help them intervene on those costs. We’re seeing the chilling reality that Orwell feared, and that the likes of Hawking and Musk have warned about, begin to arrive. The world described in 1984 may not be entirely realistic in 2019, but given the implications of technology’s advancing hold on society around the world, some are beginning to worry.
Before we dive much deeper, let’s return to where this article started: China’s social score. Before we paint the futuristic nightmare a 1984 reference suggests, why might a government, or anyone, consider implementing something like a social score sourced from big data? China is under heavy scrutiny right now for its Muslim internment camps and its naval activity in the South China Sea, so it’s easy to assume these social scores are up to no good. But let’s look at these “social scores” with an open perspective and truly dissect them.
China calls it the Social Credit System, and its goal is to valuate a person’s social and economic impact. China’s State Council began implementing it in 2014 as a mass surveillance and big data project. Its main aim is to improve major social, economic, and behavioral aspects of society, divided into four sections: honesty in government affairs, commercial integrity, social integrity, and judicial credibility. So far the Social Credit System’s outcomes have included travel bans, exclusion from private schools, throttled internet speeds, loan approvals, and various uses in social status, including apps that can tell you whether you’re surrounded by people in debt.
America is no stranger to big data’s impact on people’s lives either. Although no known cases beyond surveillance are orchestrated by the government, the large world of health and life insurance uses big data to predict how much people will cost insurers. This form of risk modeling predicts your future health outcomes from data ranging from your medical records to the size of the clothes you browse online. While determining costs is a necessary task for any business, especially those in the business of insurance, going around people to obtain their data may not actually work well in risk modeling. There is a consistent correlation between health and finance: poorer people are more likely to be unhealthy. If raising costs is the main means of mitigating risk for health insurers, that takes away people’s resources to be healthy. A happy medium might be to make big-data risk modeling a transparent process that engages directly with people to mitigate and prevent health problems.
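To make the transparency argument concrete, here is a minimal, purely hypothetical sketch of how such a risk model works: a handful of weighted features combine into a risk score, and a transparent version would also show each person which features drive their score, so they could act on them. Every feature name and weight below is invented for illustration and does not come from any real insurer’s model.

```python
import math

# Hypothetical feature weights -- invented for illustration only.
WEIGHTS = {
    "age_over_50": 0.8,
    "smoker": 1.2,
    "chronic_condition": 1.5,
    "annual_checkup": -0.6,  # protective factors can lower the score
}
BIAS = -2.0

def risk_score(features):
    """Logistic risk score in [0, 1] from binary (0/1) features."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

def explain(features):
    """The transparent part: per-feature contributions, largest first.

    An opaque insurer would stop at risk_score(); a transparent one
    would share this breakdown so people can intervene on the drivers.
    """
    contributions = [
        (name, WEIGHTS[name] * value)
        for name, value in features.items()
        if value
    ]
    return sorted(contributions, key=lambda kv: abs(kv[1]), reverse=True)

person = {"age_over_50": 1, "smoker": 0, "chronic_condition": 1, "annual_checkup": 1}
print(round(risk_score(person), 3))  # a single opaque number
print(explain(person))               # the drivers behind that number
```

The design point is in `explain`: the same data that prices a policy can just as easily tell the person what to change, which is the kind of engagement the paragraph above argues for.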
Without trying to paint some majorly biased Orwellian nightmare, let’s look at the facts. Big data and artificial intelligence, via predictive modeling, are here and already impacting our lives. They can be used for good or for bad, depending on how the corporate or state entities behind them see fit. Should we simply hope those entities keep our best interests aligned, or should we have some involvement? Given that the data running those systems is sourced from us and generates revenue, we at Unity Health Score believe people should be involved. We think the only credible artificial intelligence safeguard is for people to be able to say, “I don’t like how that A.I. system impacts my life with my data; take my data off of it!”
On an ethical, legal, and human rights level, we need personal data autonomy. For artificial intelligence to be accurate, the quality of the data it uses must also be high. Sourcing that data directly from people, transparently and with their consent, is key to aligning incentives and using artificial intelligence efficiently. The current path big data and A.I. are on will only continue to lead to lucrative outcomes like astronomically high insurance premiums, or social scores that notify you of debt-ridden individuals in your immediate vicinity. If we don’t start taking personal data ownership seriously, we won’t have to wait until 2084 to see the Orwellian nightmare unfold.