Nihilism and Technology
Nolen Gertz talks Nietzsche and Chill with Rowman & Littlefield International Commissioning Editor Isobel Cowper-Coles
ICC: Hi Nolen, thanks for coming in today to talk to us about your new book, Nihilism & Technology. I wondered if you could introduce yourself and tell us about the book.
NG: Yes, sure. I'm Nolen Gertz, Assistant Professor of Applied Philosophy at the University of Twente in the Netherlands, though I am originally from the United States. The book is pretty much the result of my time in both countries. The United States and the Netherlands are both very technological, and the University of Twente is certainly very tech-focused — the motto is “high-tech, human touch”. I’ve certainly spent a lot of time thinking about just what that means.
I think more and more in debates around technology you see people asking — is technology good or bad? I wanted to explore a different dimension to that question, and explore not what technology is doing to us, but what we’re doing to technology. That opens a more philosophical perspective on what it means to be human and what it means to be human when using technology.
ICC: Why did you choose Nietzsche’s philosophy to analyse technology?
NG: That’s a good question. When students constantly demand the newest philosophy and get very aggravated when I assign readings that are ten years old, it certainly looks odd to go back to the 19th century to understand the 21st. But what’s good about Nietzsche is on the one hand he himself constantly joked — well, maybe he wasn’t joking — that his readers hadn’t been born yet, and that he was writing for the future. Even though he didn’t really describe technology that much in his writing, he was focused specifically on what he described as “cultural diseases” and saw himself as a “cultural physician”.
I’m trying to look at technology in a similar fashion, in how tech can be understood and how nihilism can be understood as more of a cultural phenomenon — which is why again it can’t just be “good” or “bad” — because it’s not a merely individual issue.
ICC: Speaking about frameworks, could you tell us a little bit more about the post-phenomenological framework that you use in the book?
NG: Right. So post-phenomenology starts with Don Ihde at Stony Brook, who is influenced by Heidegger. Ihde has Heidegger’s descriptions of technologies, but at the same time he has Heidegger’s attack on technologies — there’s this sort of contradictory way of thinking in Heidegger’s philosophy of technology. Ihde is trying to figure out how to basically rip the descriptions, the framework, the analogies out of Heidegger’s big-picture way of thinking about technology, with its question about “being”, and extend them more towards everyday life. Which is, again, what attracted me to Nietzsche (and Heidegger is following Nietzsche): his analyses of everyday life and his bringing philosophy to bear on everyday life.
ICC: In the book you examine a lot of different technological phenomena, such as Twitter, Fitbit and Netflix. In Chapter 6, you explore how racial discrimination through the selection of guests within AirBnB embodies the nihilist will for power. Do you think tech companies have a responsibility to manage their users’ choices and prevent this discrimination?
NG: This is something I looked at through the hashtag movement #AirBnBWhileBlack. There was an op-ed piece written by someone who had experienced this herself, expressing the idea that there were lots of things the company could be doing — such as kicking off anyone who was clearly using racial discrimination when picking potential guests.
Something I was interested in — and this is really where the issue with Uber, Lyft and Flywheel came in — is that all three offer similar services as ride-hailing apps, and they all present information about the pickup to the driver at different points in the process. The earlier the driver got a profile photo, and the earlier they got a name, the greater the likelihood of discrimination. So what I’m interested in specifically is not so much “how do we police racists using apps?” I’m interested in how the apps themselves shape discriminatory practices. Then it’s not so much “can the company police user behaviour?” but rather “can the company understand its own role in that behaviour?”
ICC: In reference to data-driven nihilism and embracing un-noble algorithms, you say in the book — “the question is not whether we can understand and regulate algorithms, but whether we can understand and regulate nihilism.” Could you talk a bit more about this and whether you think it’s achievable to regulate nihilism?
NG: Right. So the issue with algorithms seems to be — and you saw this recently in Zuckerberg’s congressional testimony — this idea that, “sure, Facebook has a lot of problems, but they can be solved by artificial intelligence.” The suggestion that the only way to stop a “bad” app is a “good” app, and artificial intelligence will solve everything. What was interesting in the research was that on the one hand, because it’s called artificial intelligence, and we often use terms like “smart technology”, people automatically think AI must be smart — and that’s not really true. It’s simply automated rule following. But on the other hand, we sort of have a faith in artificial intelligence.
It was that faith specifically that I was interested in, and that idea that — again, this is how Nietzsche thinks about faith — there is something about our willingness to believe someone has the answer. The greater the mystery, the greater the faith we put into it. So, it seemed kind of important that artificial intelligence is very mysterious. Even the engineers who write the code can’t always investigate the code and explain why a certain answer comes up. If you watch that famous Jeopardy! challenge, with Watson taking on Ken Jennings and the other Jeopardy! challenger, it was destroying them, you know, 30,000 to negative-whatever. But occasionally it would say something completely odd, and it’s these odd moments that give people pause — because that could then be the medical field, and the diagnosis you get is an “odd moment”, and no one knows why it says “Toronto” when it was clearly a question about U.S. cities. It’s the mystery that inspires the faith, again pointing back to the question of what it is about humans that leads us down that path in the first place. It becomes less about artificial intelligence itself.
ICC: One of your proposed solutions for the problems humans face with technology is bringing back together “freedom” and “responsibility”. How do you think this should be achieved?
NG: Yes, this is something that existential philosophers are perhaps best known for. As humans, we like talking a lot about freedom. We’re clearly obsessed with the idea of freedom. Brexit and Trump both seem to be about the need for a greater freedom, to “take back control”. But when people understand that the flipside of freedom is responsibility — then you start asking for freedom from freedom. You saw this in Brexit leave votes and pro-Trump votes — especially in Brexit. I think the number one Google search the day after the referendum was “what is Brexit?” This idea that you could vote for something just to see, “am I sending a message?” Again, some pro-Trump voters were sending a message: “Obviously Hillary is going to win, so I’m just going to send a message.” But if enough people do that, then there’s your answer. It’s this idea of how to confront, on the one hand, the desire for freedom without responsibility, and on the other hand, what it really means to be free — that is, to be responsible.
I think technology can help us to see how dangerous we can really get. In the fifties, this was something a lot of philosophers were interested in after the rise of fascism — studying why people were so quick to embrace fascism. Specifically, you have this kind of “Mussolini idea” — that people don’t question things as long as the trains run on time. You can see that technology was helpful in mediating people’s desire for fascism, because “the trains are running on time and I don’t really pay much attention to what’s going on around me.” Sartre writes an essay about this called Paris Under the Occupation, where he talks about how, living under Nazi rule in France, it was possible to kind of “get used to it” — because the trains are running on time. Technology can be very helpful, if you want to be a fascist. I’m not giving pointers! It can be very helpful in getting people “used to it”, and it played a role in normalising fascism.
At the same time, a lot of people see technology as a “tool for democracy”, and thus the best way to “fight”. That’s why it’s so fascinating to see something like Twitter — which is supposed to be a democratic, open space — being weaponized by someone like Trump. We really do need to take seriously the role that technologies play in both fascism and democracy, as well as freedom and responsibility.
ICC: I see what you mean. One thing that has arisen since you finished writing the book is the Cambridge Analytica scandal and its misuse of Facebook data. In the context of your work and this discussion, how would you analyse these events? Do you think the public reaction is a result of the breaking of an illusion? Or have we been forced to face our nihilistic reality?
NG: Yeah. I was very intrigued by Zuckerberg’s testimony, because you saw a lot of people beforehand and afterwards talking about “regulation, regulation, regulation”, and Zuckerberg was very quick to say “yes, please regulate us.” But what I was interested in was the way he framed what Facebook is — and nobody, as far as I saw, really pushed back against this. Zuckerberg says repeatedly in the testimony, “Facebook is a tool. We build tools.” The idea here — and this is very important for Heidegger — is the idea of viewing technology as neutral. This idea is very popular in the US within gun control debates — “technology is neutral, so it’s only bad if bad people use it, and it’s good if good people use it. So clearly we don’t need gun control, we just need mental health measures to keep bad people from using guns. The only way to stop a bad guy with a gun is a good guy with a gun.” You could see that Zuckerberg, in describing Facebook, was mimicking these forms of argument. Again, we come back to the idea that the only way to stop someone bad, like Cambridge Analytica, misusing Facebook is to have something good, like artificial intelligence, using Facebook. What I wanted specifically to investigate was “the neutral thesis” and what it means to view technology this way — to take it for granted that technology is a neutral medium, rather than, as I suggested earlier, something that is “shaping”. With Cambridge Analytica, it’s very easy to ask, “is Facebook violating our privacy, yes or no? How do we regulate privacy with things like the GDPR?” What I’m interested in, and what post-phenomenology is good for, is rather how Facebook redefines what privacy means. This is one of the reasons why I think philosophy of technology is really necessary for these sorts of political debates.
ICC: Given your fascinating analysis of technology, would you class yourself as a techno-optimist or a techno-pessimist?
NG: Right! Well, one of the inspirations for the book was clearly my son, who I describe in the preface as being a little too into technology, and how he perhaps helps me to see how I’m a little too into technology. In my lectures, I am a techno-pessimist. But then outside the lecture hall, I have my phone, I use technology to teach, I have various social media accounts. So it seems much more that perhaps “techno-hypocrite” is the better term. Which is, again, why I say very early on in the book that I am a nihilist. I do something that I know anyone who is philosophically trained is prepared to attack me on: I use the term “we” in the book a lot. But that was part of the thinking — I am not separating myself from this, so “they” was clearly inappropriate. But it’s also not just “me”, so “I” didn’t seem appropriate either. And something like “users” sounds far too limited. So I take pronouns very seriously, and I think if people want to opt out of my “we” then go right ahead. But I do think it’s important, whether you’re a techno-optimist or a techno-pessimist, to really think about the degree to which technology frames who and what you are, such that you think you have to use a prefix like “techno” in your self-description.
ICC: Finally, what do you see as the future, in terms of humanity’s relationship with technology?
NG: Well, it’s certainly nice to imagine us having a future. So, that’s comforting, to think that we’ll still be here. I can imagine something like transhumanism becoming more and more popular — as self-driving cars, the Internet of Things and smart technologies in general become more and more popular. The greatest concern will be the ubiquity of technology. That it’s just there, and it’s in everything. So your refrigerator has its own social media account, and you then have to be concerned about “is Facebook spying on your eggs?” I think that’s kind of where we’re going to go, and it’s where we’ve already been, so it’s likely this will just keep happening. Tech companies offer something great, we think only about the benefits, then discover the costs. Then we push back, and then we go forward again. Yet what’s fascinating — and I wrote an article about this a few years ago — is that if you trace Facebook’s evolution, none of this has ever affected its stock. So even while Zuckerberg was giving his testimony, he was making billions of dollars. People were tracking his stocks; he was making money by testifying. Which is probably why he’ll run for President. Maybe that’s really the future: that we’ll just have a full technocratic state.
ICC: Thanks very much for coming to speak to us today, Nolen.
About Nolen Gertz
Nolen Gertz is Assistant Professor of Applied Philosophy at the University of Twente, and a Senior Researcher at the 4TU.Centre for Ethics and Technology. He is the author of The Philosophy of War and Exile (Palgrave-Macmillan 2014) and Nihilism and Technology (Rowman & Littlefield International, 2018). His work has appeared in The Atlantic, The Washington Post, and ABC Australia.