Sesame Credit (image: Zheping Huang / Quartz)

Will algorithms rule our lives, or do they already?

Enrique Dans

--

A great piece by Bruce Schneier, “The risks — and benefits — of letting algorithms judge us”, reminds me of another good article published in Quartz in October about Sesame Credit, a rating system created by Alibaba for the Chinese market that scores people on social metrics: “All Chinese citizens now have a score based on how well we live, and mine sucks”.

Credit rating algorithms have been around for some time, and those of us who have tried to build a credit history in another country will have experienced their vagaries.

Sesame Credit doesn’t just monitor credit card use and repayment of debts; it uses a secret algorithm to measure what people are up to online: who they talk to on social networks (Facebook has already patented a rating system based on our contacts, which I have discussed in “Facebook and your mortgage”), what kinds of books or products they buy (subversive material could damage your score), and what they post on social media.

I noticed how the US system worked when I first lived in the country in the 1990s: to build up a credit history, I found myself having to take out all kinds of credit cards (which I didn’t need) with tiny limits, make purchases with them, and then repay them punctually each month.

It was obvious at the time that the system was based on algorithms, albeit not very bright ones, unable to recognize certain patterns that fell outside the norm. Applying this approach to issues beyond spending, to, say, our social behavior, our circles of friends, what we say online, or what we buy — beyond how much we spend — is obviously going to raise fears about social control: don’t be seen in public with this or that person, don’t say this or that, don’t buy this or that product in this or that place… if you don’t want to be ostracized, refused credit, and judged by anyone from a potential employer to a potential partner.

The question becomes more complicated as we apply algorithms to other areas of our lives, for example dating, as with Blinq, a dating app that uses artificial intelligence to assess the age and attractiveness of users from their photographs. The system has proved a draw, with more than two million people signing up in two days to see what rating they are given. The company says it intends to use the metrics, which means that a photograph will decide what opportunities you have to meet somebody else, which is pretty much the same as happens out in the real world, I guess, although in this case it’s a robot that calls the shots. Tinder uses a similar system, the Elo score, which establishes how desirable a user is, based not just on information gleaned from photographs and profiles, but also on how other users respond to them. It’s not hard to see how all this could end up producing a certain homogeneity in terms of our tastes. But again, the question is whether this is something created by artificial intelligence, or whether the algorithms are simply reflecting something that already existed. For example, is somebody with a high Tinder score somebody who would attract attention walking down the street? Is the algorithm simply a reflection of underlying human tendencies, or is it able to see things we can’t? Are we better off being assessed by humans, or would we prefer the certainty that comes with a score, even if it’s generated by a machine?
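For anyone curious about the mechanics, the classic Elo update is simple enough to sketch in a few lines of Python. This is only a generic illustration of an Elo-style rating (Tinder has never published the details of its own version), in which each match-up shifts both scores in proportion to how surprising the outcome was:

```python
# Generic Elo-style rating update: an illustration, not Tinder's actual algorithm.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A 'wins' the match-up (e.g. receives the right swipe)."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return the new (rating_a, rating_b) after one interaction."""
    e_a = expected_score(rating_a, rating_b)
    s_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (s_a - e_a)
    new_b = rating_b + k * ((1.0 - s_a) - (1.0 - e_a))
    return new_a, new_b

# An unexpected outcome moves the scores the most: a low-rated profile
# preferred over a high-rated one gains roughly 29 points here.
print(update(1200, 1600, a_won=True))  # approximately (1229.1, 1570.9)
```

The point is that such a score is entirely relative: it says nothing about a person except how others have reacted to them, which is one reason it can end up amplifying whatever preferences already exist.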

But dating is one thing, and our credit rating quite another; or is it? In the case of Sesame Credit, a high score could be taken into account by somebody looking for a partner. China seems set to become the social laboratory of the future: it has a relatively young society, is relatively flexible socially, is changing fast, and has a government with all-seeing eyes that would not be tolerated in countries with a tradition of democracy.

In fact the whole planet is moving toward the creation of societies in which more and more aspects of our lives will be assessed through algorithms. In the old days a request to borrow money might have been decided by a committee that could be subject to a certain influence, but now a machine decides. Once, our friends and family told us if we were attractive, but soon, our smartphones’ cameras will give us a score.

Is this a good thing? If an algorithm can be designed, it will be, and from what I know about people, it will make better decisions than they do. It’s going to happen. The question is what kinds of controls we decide to introduce into the process: transparency that makes it possible to monitor such programs, control over the variables that feed the algorithms, supervision? I’m not at all sure. There are no easy answers, and there are any number of aspects to this we need to start thinking about, now.

(In Spanish, here)

--

Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)