Why ethics can no longer be ignored in technology

Introduction

As a teacher in ICT and Media Design at Fontys University of Applied Sciences I know all too well what many of my students want. They are part of a makers community. Most of them just want to build cool stuff with whatever (new) technology may be relevant. My students are by and large millennials or even post-millennials: a synthetic generation for whom the concept of digital ‘transformation’ is alien (van Doorn, Duivestein, & Pepping, 2019). Most of my students are digital natives: they use ICT as a tool to make a positive impact on society. But does that mean they use their (ICT) skills to build the right things? Or more specifically: do they automatically build things in the right manner? I don’t think so, and I will be happy to explain why later on in this essay. But let me give you an example from my daily practice first.

Student Case: Facebook Messenger’s hidden gems

Last year I was coaching a group of students, and one of them came up with a tool that could extract data from Facebook Messenger so that he could see which of his friends were online or offline at a specific moment. To do this, he used and combined some off-the-shelf software in a creative manner and then transferred his data into an Excel sheet, plotted against time.
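The mechanics of such a tool are simple enough to sketch. Here is a minimal, hypothetical version in Python: the sample data, names, and the `active_counts` helper are my own illustration, not the student's actual software, which polled Messenger's status indicator and exported to Excel.

```python
from collections import defaultdict

# Hypothetical sample of polled status observations:
# (ISO timestamp, friend name, seen-online flag).
# A real tool would append one row per friend per polling interval.
observations = [
    ("2019-03-01T09:00", "Alice", True),
    ("2019-03-01T09:05", "Alice", True),
    ("2019-03-01T09:00", "Bob", False),
    ("2019-03-01T09:05", "Bob", True),
]

def active_counts(obs):
    """Count how often each friend was observed online."""
    counts = defaultdict(int)
    for _timestamp, friend, active in obs:
        if active:
            counts[friend] += 1
    return dict(counts)

print(active_counts(observations))  # the most-active friends stand out
```

Plotted against time, a table like this is exactly what made the student's conclusions about his "hard-working" friends so easy to draw, and so uncomfortable.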

After doing so, he told me he could now easily pinpoint which of his friends was bragging the most about working so hard while in fact being on Facebook all the time. As the data was publicly visible to everybody all the time, he thought it was legitimate to make an extract of it, arguing this was similar to looking at the status information of his friends every once in a while. I explained to him that although it was indeed public information, his friends would probably not know about this and would probably not be happy if they saw his statistics of their status updates over time.

Figure 1. Screenshot of Facebook Messenger with the chat function turned off. Source: http://www.createregisteraccount.com/2017/05/facebook-messenger-online-status.html

It is not easy to exclude yourself from participating in such a cycle if you are not aware that you are part of it. First of all, as said, you have to know that you expose personal information, and to what degree. For example: even if you do not look at Facebook for a while, Facebook’s Messenger app will tell others how long you have been inactive. So even when you have only opened the app without using it, you might be sharing more about yourself than you know or want to.

There are plenty of tutorials online (Rahman, 2017) about how to turn off your online status, but many users will not have a clue that this is possible, so few will search for it. And if you do decide to switch off this function, you will also no longer be able to see the statuses of your friends, which might feel uncomfortable too.

At the end of the project I told my student that he might have come up with a Trojan horse: after a second look at his data he might conclude that his father had been online every night between 2 a.m. and 3 a.m. without his mother knowing about it (or vice versa). And then even he, as an innocent visitor of his own generated data, could not go back to not knowing what might be concluded from it. What a dilemma this could have brought to his family!

What I’ve learned from this is that even well-intended tools can become tech regrets in the future if the technology can be applied in other contexts, or if nothing is taken into account before launching new tools or techniques in human or social contexts.

Other examples of tech regrets

One of the most striking examples of tech regrets comes from Tim Berners-Lee, the man who created the World Wide Web. He has seen his creation debased by everything from fake news to mass surveillance. In 2009 he started the World Wide Web Foundation to protect human rights across the digital landscape. Berners-Lee has, for some time, been working on a new platform, Solid, to reclaim the Web from corporations and return it to its democratic roots (Brooker, 2018).

And this is not the only example available. Zuckerberg probably didn’t found Facebook with the intent to manipulate elections; Jack Dorsey and the other Twitter founders didn’t intend to give Donald Trump a digital bullhorn. And the founder of 8chan, one of the internet’s darkest corners, expressed regret after the Christchurch mosque shooting, acknowledging the controversial role the platform might have played in this case.

Figure 2: Tim Berners-Lee invented the World Wide Web while working at CERN in Switzerland

The impact of technology on humans from a philosophical viewpoint

As more and more engineers who leave Silicon Valley hit the news because of their tech regrets, the impact of technology on society can no longer be ignored. This influence can be positive (utopian) as well as negative (dystopian). But to really get a grip on this, we have to understand where the relationship between people and technology started and where we are today:

During the rise of industrialization, Karl Jaspers (1883–1969) was one of the first philosophers to deal with emerging technologies. To support the industrial system, people had to perform with the help of technology, which, according to Jaspers, alienated them from their natural human environment. Jaspers’ position at the time was that technology is neutral, but in current times this statement might no longer be tenable (Verbeek, 2000, p. 47).

German philosopher Martin Heidegger (1889–1976) opposes the image of technology as a neutral means, an image Heidegger calls “instrumental”. He agrees with the analysis of technology as instrumental, but says that we need to look deeper: he investigates technology as a social and cultural phenomenon. Heidegger’s theory can be characterized as determinism: he sees technology as an unstoppable force of nature.

Heidegger’s determinism has the pitfall of getting stuck in extreme utopian or dystopian scenarios from which passivity arises. Just as with Jaspers’ theory, this one can be helpful, but it might not be helpful enough when answering the more complex questions of our era.

Current philosophers of technology like Peter-Paul Verbeek (1970–present) and Don Ihde (1934–present) stress the importance of looking at the intertwined relationship between humans and things. As things shape the user and vice versa, we should focus more on these embodied relationships. Ihde states that both people and technology constantly change in interaction with each other. Technologies are multistable: the same artefact can have different meanings or identities in different contexts (Verbeek & de Jong, 2017).

Leaning on the work of Ihde, Verbeek describes multiple ways in which technology mediates between people and the world (Verbeek, 2000, pp. 141–144). He calls this the theory of technological mediation: its central idea is that technologies, when they are used, help to shape the relations between human beings and the world. Rather than approaching technologies as material objects opposed to human subjects, or as mere extensions of human beings, he sees them as mediators of human-world relations.

Figure 3: Explaining Technological Mediation, a short animation in which technological mediation is explained. See https://www.youtube.com/watch?v=FVhrLwBNbvU

A common misunderstanding: technology is only for techies?!

Judging by the name ‘philosophy of technology’ you might be fooled into thinking that reflecting on the influence of technology on our human world is something for techies or tech-nerds only. But as technology influences our lives in many ways (for instance: who does not have an internet-connected, data-collecting mobile phone nowadays?), more and more people are involved in these kinds of mediations, with or without knowing about it (see the student case above).

Engineers are involved because they are creators, but non-technical people are involved as well, because they can be users or decision makers for others. Politicians are involved because they make the rules within which technological companies operate, and journalists should be involved because they can function as smart thorns in our digital world by asking critical questions.

But there is a difference. “One of the main differences between science and engineering is that engineering is not just about better understanding the world but also about changing it. Many engineers believe that such change improves, or at least should improve, the world. In this sense engineering is an inherently morally motivated activity” (van de Poel & Royakkers, 2011).

Translating theory into action

Back to my students. What does all of the above mean in a practical sense? In my opinion it is crucial to make students (and others!) aware of the possible (un)intended social (side) effects their design choices might have. Therefore, a lot of questions have to be asked (about stakeholders, human values, impact on society, data, fairness, privacy, transparency, sustainability, and so on). In my opinion, the implementation of technology into our society, in whatever form, can no longer be done without taking responsibility and without questioning the choices we make.

Verbeek’s mediation theory can help us think critically about the entangled relationship we have with technology. His theory is important, because if we keep on discussing things from a utopian or dystopian perspective, it will not get us any further. His theory sketches a field in which people and technology both appear as actors, intimately connected, sometimes even without distinction, but each with their own dynamics. From this viewpoint technology can no longer be seen as neutral or passive: it does something to the world just as we do something to technology. There is continuous interaction, and it is not entirely predictable in which direction these interactions will take us.

Technology has made decision-making not only more difficult but also more complex. The difficulty of predicting interactions requires designers who take responsibility during all stages of the design process and who can arrange their designs in such a way that policymakers, citizens, media, politicians (especially at municipal level) and even other engineers or designers can easily understand the choices made and intervene when necessary.

There are so many processes that we have no grip on that we might have to accept that we will never be able to control everything. But we have to keep trying. And we must come to action.

But, based on what should we go on? What should be our guiding lights? Here are my suggestions:

1. Technology should be multidisciplinary

Thinking about the impact of technology is important for tech and non-tech students as well as for professionals. Multidisciplinary teams with different backgrounds should be encouraged, because the people you make your products for may be diverse and otherwise possibly not represented by your team. Discussions from different viewpoints should be encouraged, and if you disagree on things, that might just be an indication to dive deeper.

2. Ethics as a driving force for innovation
An important thing we should avoid is using ethics merely as a criterion for what not to make or what not to do. The point should be how to make or do things better. So the question we have to ask here is not “Are we pro or contra?” but “How are we going to design or redesign human-technology interactions in favor of our well-being?”

3. Non-normative
Peter-Paul Verbeek argues that we should not judge but guide technological development. I can add here that our role as a university is not to decide what our students should think or what the morally just answer to an ethical dilemma should be, but to let our students make up their own minds. The questions we ask should therefore be non-normative: there may be no such thing as an absolute right or wrong, there are just a lot of options and choices to make.

4. Part of the (design cycle) process

Consciousness and discussion are important factors: recognizing human values, wondering whether your values are central or at stake. And, as most design and development processes are cycles, asking questions should be part of each stage of the process. So we have to apply ethics “from the inside out”, not “from the outside in” or only at the end, as is often the case in ethical discussions (Lancee, Prüst, & Kamp, yet to be published).

5. Context is king

There are no ‘one size fits all’ solutions here, as every case has a different context. From the perspective of mediation theory, ethics should not be limited to the question of whether or not a technology is acceptable (a yes/no question), but should be concerned with how a new technology could get a place in society and how that affects our human values, considering the impact of this specific technology within its applied context.

So, to avoid possible future tech regrets like those of Berners-Lee, Zuckerberg, Dorsey and others, we should ask ourselves a lot of questions and start discussions in each stage of the design process (called ethics from within). By doing so, we can design, invent and use technology with a positive impact on society!

Literature:

- Atos Scientific Community (2018). Resolving Digital Dilemmas: Atos Journey 2022 Future Vision. Retrieved September 5, 2019, from https://atos.net/content/mini-sites/journey-2022/

- Becker, M. (2015). Ethiek van de digitale media. Amsterdam: Boom.

- Brooker, K. (2018, July 9). “I Was Devastated”: Tim Berners-Lee, the Man Who Created the World Wide Web, Has Some Regrets. Retrieved October 31, 2019, from https://www.vanityfair.com/news/2018/07/the-man-who-created-the-world-wide-web-has-some-regrets

- van Doorn, M., Duivestein, S., & Pepping, T. (2019). The Synthetic Generation: Growing Up in an Uncertain and Changing World (Digital Happiness 03/04). Retrieved from https://www.sogeti.com/explore/reports/digital-happiness-reports/the-synthetic-generation/

- Lancee, W., Prüst, H., & Kamp, J. M. (not yet published). Technofilosofie Framework: Naar een analyse-instrument voor de ethische aspecten van de relatie tussen mens en technologie. To be published on technofilosofie.com.

- Institute for the Future and Omidyar Network (2018). Ethical OS. Retrieved September 5, 2019, from https://ethicalos.org/

- van de Poel, I., & Royakkers, L. (2011). Ethics, Technology, and Engineering: An Introduction. West Sussex, UK: Wiley.

- Rahman, A. (2017, May 7). Facebook Messenger Online Status. Retrieved October 21, 2019, from http://www.createregisteraccount.com/2017/05/facebook-messenger-online-status.html

- Spiekermann, S. (2015). Ethical IT Innovation: A Value-Based System Design Approach. Palm Bay, FL: Taylor & Francis.

- Verbeek, P. (2000). De daadkracht der dingen: over techniek, filosofie en vormgeving. Amsterdam: Boom.

- Verbeek, P. (2014). De vleugels van Icarus: hoe techniek en moraal met elkaar meebewegen. Rotterdam: Lemniscaat.

- Verbeek, P., & de Jong, R. (2017, June 1). MOOC Philosophy of Technology and Design — University of Twente. Retrieved October 21, 2019, from https://www.futurelearn.com/courses/philosophy-of-technology
