The Corona app does not protect us from diseases; it makes society unhealthy

Roanne van Voorst (Ph.D)
7 min read · Apr 13, 2020


The Dutch government, following in the footsteps of other countries, intends to use tracking apps to curb the spread of the coronavirus. This is a potentially dangerous decision, which will not protect society from disease, but will actually make it unhealthy.

Sociologists have described in dozens of books the conditions a society needs in order to stay healthy. Recurring concepts are trust and social cohesion, the possibility for citizens to think independently and critically without being punished by the authorities, heterogeneity, solidarity and equality.

If the Corona app currently being developed on behalf of the Dutch government is put into use, it threatens to destroy most of those conditions in our still relatively healthy country. And once destroyed, a society cannot easily be healed.

How often would you check your phone if your health app promised to keep you safe?

The unhealthy app

Why will this app not keep our country healthy? First of all, it will work with Bluetooth technology, which is slightly less privacy-sensitive than GPS, but also relatively inaccurate. If you and a neighbour are standing on either side of your living room window, your phones register you as being in each other's vicinity, even though you obviously cannot infect one another. The app will still flag someone who has a fever or a cough, but no corona, as 'suspicious', while people with mild corona symptoms, although just as contagious, will not end up in the database at all. The app thus creates a sense of security, but no real, reliable information about hotbeds of infection.
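To make that inaccuracy concrete, here is a minimal sketch, in Python, of how Bluetooth-based apps commonly turn signal strength into a distance estimate, using a log-distance path-loss model. The function and the constants are my own illustrative assumptions, not the Dutch app's actual code; the point is only that the formula cannot see what sits between two phones.

```python
# Illustrative sketch (not any real app's code) of estimating distance
# from Bluetooth received signal strength (RSSI) with a log-distance
# path-loss model. The calibration values below are assumptions.

def estimate_distance_m(rssi_dbm: float,
                        rssi_at_1m_dbm: float = -59.0,   # assumed signal at 1 metre
                        path_loss_exponent: float = 2.0) -> float:
    """Rough distance estimate, in metres, from a single RSSI reading."""
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10 * path_loss_exponent))

# Glass, walls, pockets and bodies all change the signal in ways the
# formula cannot account for, so the same reading can describe very
# different real-world situations.
print(round(estimate_distance_m(-65), 1))  # ~2.0 m: logged as a 'contact',
                                           # even if a window separates you
print(round(estimate_distance_m(-75), 1))  # ~6.3 m: or someone right next to
                                           # you whose phone sits in a bag
```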

This lack of reliability is reinforced by the fact that not everyone will be able or willing to use the app; moreover, for the results to be even somewhat accurate, the participants would have to be a true reflection of society. If mainly city dwellers, young people or people who work from home take part, the result is a skewed statistical picture of the health situation in the Netherlands as a whole, and of groups who meet many other people, such as shop assistants. For that latter group, the app can also be rather nerve-racking: the more people with (mild) flu symptoms you encounter, the more often you will receive alerts, and the less safe you may feel. But more about that later.

In addition, the app only keeps track of which potentially infected citizens you have already been in contact with. That is interesting information for the government, because it can use it to (somewhat) analyse health trends and possible sources of infection, but for the citizen the harm is already done: you cannot reduce your chance of infection retroactively. And although politicians now suggest that every potentially infected citizen will be tested as soon as the app sends a notification, that requires tests that are massively available, reliable and immediately deployable, which is not yet the case.

What is better: social isolation, social distrust, or accepting that humanity is and always will be vulnerable to risk?

Fear culture

So what does the app do for our health? In the short term, it frightens us, potentially eroding solidarity between citizens: we come to distrust each other more and more, and as a result take worse care of each other. We no longer dare to visit the elderly neighbour who does not have a smartphone, because who says he is not already infected? The colleague who sneezed while walking by, but about whom our phone never sent a report, is eyed suspiciously: has he not installed the app, and if not, why not? Does he have something to hide? Does he want to hurt people? Such a culture of fear undermines our social cohesion, and teaches us to rely on algorithms rather than on our fellow human beings.

This makes us forget that most people in this country are decent people, who will follow reasonable advice from the government in an attempt not to infect others. Yes, there are occasional groups of citizens walking too close to each other on a beach, and indeed there have been a number of 'fuck corona' parties in recent weeks, but the vast majority of citizens work from home as instructed, teach their children at home and keep a metre and a half of distance where they can.

An app can measure, but it cannot think rationally. It attaches labels: sick or healthy; must be quarantined or allowed to move freely. These categories simplify reality and have little to do with the risk considerations inherent to being human: I have to visit my old grandmother to bring her groceries, I show no symptoms, but I may have been in contact with an infected person a week ago. What should I do? Those kinds of complicated, human questions, which are important for the functioning of a society, cannot be answered by an app.

Instead of real functionality, the app offers us a sense of security and control that is not justified and may even work against us. In reality, we cannot fully protect ourselves against the coronavirus, nor against later viruses or other calamities. At best, we can try to deal with the uncertainty and tragedy of life, while investing in the social relationships that can save our lives in times of crisis, or support us mentally.

A corona app teaches us to trust technology. But should we?

Surveillance society

In the long term, the app reinforces the position of data farmers and the government, threatening the vulnerable in our society, and ultimately our democracy. The language politicians use when they promote the app obscures what is actually happening. If they were to exchange the word 'metadata' for what it actually means, the registration of your daily behaviour and very personal, physical characteristics, it would all sound a lot less abstract (and a lot more annoying). Because that is exactly what this app does: each mobile device with the app constantly transmits a unique number via Bluetooth, which is captured and stored by nearby devices running the same app. If it later turns out that a user has corona, all devices that have recently been in his vicinity receive a message: beware, you were in contact with an infected person.
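For readers who want to picture that mechanism, here is a minimal, hypothetical sketch in Python of the exchange described above: broadcast a number, store the numbers you hear, and later compare them against the numbers of users who report an infection. The class, identifiers and matching rule are simplified assumptions for illustration, not the Dutch app's actual implementation.

```python
# Hypothetical sketch of a Bluetooth contact-notification exchange.
import secrets
from datetime import datetime, timedelta

class Phone:
    def __init__(self, owner: str):
        self.owner = owner
        self.own_ids: list[str] = []               # numbers this phone has broadcast
        self.heard_ids: dict[str, datetime] = {}   # numbers heard nearby, with timestamp

    def broadcast_id(self) -> str:
        """Generate and 'transmit' a fresh random identifier via Bluetooth."""
        rid = secrets.token_hex(8)
        self.own_ids.append(rid)
        return rid

    def record_nearby(self, rid: str) -> None:
        """Store an identifier received from a nearby phone."""
        self.heard_ids[rid] = datetime.now()

    def check_exposure(self, infected_ids: set[str], days: int = 14) -> bool:
        """Compare recently heard identifiers against published infected ones."""
        cutoff = datetime.now() - timedelta(days=days)
        return any(rid in infected_ids and seen >= cutoff
                   for rid, seen in self.heard_ids.items())

# Two phones pass each other on the street:
alice, bob = Phone("alice"), Phone("bob")
bob.record_nearby(alice.broadcast_id())

# Later, Alice tests positive and her identifiers are published:
infected = set(alice.own_ids)
print(bob.check_exposure(infected))  # True: "beware, you were in contact..."
```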

That is what the app does, even though at the moment we have no clarity about how all that data will be collected and used. For instance, it is unclear under what conditions the government will be allowed to use the newly developed technologies and powers. It is also unclear when this large-scale surveillance must stop and the data must be destroyed: when this pandemic has ended (and when will that be? The crisis probably has no unambiguous end point), or when there is a vaccine, or only when the next pandemic is over, or the one after that? And what if this cabinet promises that it will be a relatively short-lived emergency measure, but decides differently after next year's elections, because it believes that constant monitoring of our temperature and other health data might help prevent some as yet unknown virus?

This is not an unlikely scenario. Not only because the data will still be in place by the time the next elections take place, but also because we know from other crises in which ad hoc surveillance technology was used that many laws and powers supposedly introduced temporarily remain in place; governments tend to retain data longer than promised. Think, for example, of the attacks of 11 September 2001, after which a surveillance state was hastily set up and all kinds of far-reaching 'emergency measures and laws' were introduced. These were never abolished.

Does protecting oneself from others avert the inevitable: that we will all, eventually, die?

Abuse

The most disturbing aspect of this story is that the techniques could be used for other purposes. Suppose a murder has been committed: who guarantees that the police will not look into the corona app to see who was near the victim? And suppose a hacker succeeds in copying our health data and reselling it to, say, a health insurance company or another party that benefits from it: who will then protect the person the app marks as sick from adverse policies?

Also worrying is the fact that the politicians involved seem to have little idea of the dangers and drawbacks. In recent days, spokespersons have emphasized that the Dutch app would, in fact, respect our privacy. That promise seems to stem from a naive faith in surveillance technology: it ignores the fact that abuse can always occur, because neither the technology nor its safeguards can be flawless. Nor is it said aloud that most users will barely understand what happens to their data, let alone who controls it and who is actually responsible for protecting it.

Privacy rules should not be taken lightly in a healthy society: they are about the right to keep information private, and about the right to autonomy and critical thinking without being punished for it. In this sense, privacy rules exist to protect the health of society, and to protect its vulnerable members: sick people, for example, but also minorities, witnesses, whistleblowers, people who are discriminated against, or people who behave differently from what a government expects but who are not, for that reason, contagious or criminal.

The Corona app gives us a sense of control and a false sense of safety, but it does not guarantee personal health. On the other hand, it does guarantee a distrustful, lonely and therefore unhealthy society: one that might not recover, post-corona.

Roanne van Voorst (PhD) is a postdoctoral researcher, writer and anthropologist of the future. Find more info on www.roannevanvoorst.com or follow her on Instagram @roannevanvoorst
