Trust Systems - Quantifying Relationships

Published in Digital Earth · May 7, 2019

By Valia Fetisov

The Social Credit System (SCS) is an effort to quantify and digitalise trust. Built on top of a financial credit system, it is expanding well beyond that foundation. But this digitalisation of trust is neither exclusive to China, nor did it appear suddenly. Rather, it seems to be a basic requirement of a globalised society and a replicable notion of digital reality: trust is central to the issues of fake news and cross-state influence on social media. The digitalisation of trust is also the main innovation of blockchain technology, which threatens to eliminate trust in institutions and replace it with cryptography. Many countries’ banking sectors already analyse credit risk based on the previous actions of a particular client, using a broad range of information. All available data is taken into consideration, thanks to deep learning algorithms that are able to extract high-level patterns automatically.

"…digital reputation is becoming public and a part of social relations, influencing not only one’s interactions with banks and governmental organisations, but also with neighbours, family and others."

If social credit systems are neither new nor particular to China, what makes SCS so visible? China’s SCS is in competitive development, with different provinces implementing their own solutions. Some make bad reputations public in an attempt to influence their citizens through social pressure. For example, a court in Hebei (河北) released a mini-program running inside the WeChat messenger that allows people to see “a map of deadbeat debtors (老赖地图)” and report on them; in Shanghai and Shenzhen, jaywalkers are automatically recognised and displayed on a big public screen, accompanied by their initials. Common to these examples is that one’s digital reputation is becoming public and a part of social relations, influencing not only one’s interactions with banks and governmental organisations, but also with neighbours, family and others. Just as social media has extended and replaced some parts of real-life interaction, eventually changing the meaning of friendship, so too will SCS, in all likelihood, do the same with trust.

Illustration by Valia Fetisov

The ubiquity of trust systems is made evident by their physical presence. Most readers are familiar with security lines in airports, but in China or Russia, for example, they are a part of daily life in the form of various types of scanning equipment installed at every subway entrance and in many shopping malls, museums and public institutions. In some of China’s provinces, they have been installed every few hundred metres. Security personnel there not only check one’s physical belongings, but also use digital tools to scan phones. Digital inspections are also common at border control points in the USA.

Security lines in airports are models of trust reinforcement under post-privacy conditions, given the absence of laws between borders and the need for “protection”. For example, while Europe has pioneered data privacy within its borders with the GDPR, it is at the same time testing an advanced AI-powered system to protect those borders. The Automatic Deception Detection System is a pilot program that uses an avatar to ask questions and determine if people are being truthful. Once someone is labelled a liar, human agents are summoned to investigate further.

Illustration by Valia Fetisov

Metal detectors have long been outmoded; with the obsolescence of metallic weapons in the age of 3D-printed guns, and with explosives that can be made out of regular duty-free products, new digital and psychological inspections are taking their place. These AI-powered systems no longer check one’s physical belongings, but people and their intentions.

Since there is no definitive way to check whether someone is dangerous, it is important to point out that security lines are designed to function not only technically, but also socially. The line and the inspection process are visible to others in order to normalise the experience of being searched, but also to display care and to dramatise the idea of “being safe”. Trust itself is not solely a psychological problem, but rather a sociopsychological one: it cannot be located in a single person alone, but is measured through their behaviour towards others in a given moment and in relation to the past. This is where the particular nature of digitalisation comes into play, with its never-forgetting memory and its predefined, automatically issued judgments.

AI-powered systems no longer check one’s physical belongings, but people and their intentions.

The question is: do we trust systems that want to quantify our relationships with each other and establish a new social order? Or will the design of these systems, their performativity and their social normalisation become so dominant that they will be able to take over our decisions?

Illustration by Valia Fetisov

About the author

Valia Fetisov is a visual artist from Russia currently working on his Master’s in Surveillance Architecture at the Academy of Media Arts Cologne, Germany. In his practice, he often alters standard algorithms and works with automatic systems in order to bring into view their ambiguous nature, making them take on a threatening rather than auxiliary form. Fetisov has taken part in exhibitions such as General Rehearsal at the Moscow Museum of Modern Art (2018), The Electric Comma at the Palazzo delle Zattere, Venice (2017) and Qidian at the Zendai Zhujiajiao Art Museum in Shanghai (2017).

You can find out more about his work here: https://valiafetisov.com/


Digital Earth

An online publication exploring the materiality and immateriality of digital reality.