DIGETHIX: DIGITAL ETHICS TODAY — EPISODE 12

CAPTCHAs, Bots, and Informed Consent

A talk with Bernd Dürrwächter

Center for Mind and Culture
DigEthix

--

If somebody thinks they need to influence me, they should be honest enough to say, “this is our intent.” Trying to undermine or change me without me knowing that I’m being changed. I consider that very unethical.

— Bernd Dürrwächter (31:56)

In this episode, Seth and Bernd discuss the role of CAPTCHAs and bots in our automated systems and the potential obstacles they pose to people making informed decisions.

This conversation explores these central questions:

  • How do we think about the tradeoffs between security and convenience?
  • How can companies properly inform their users so that they can make informed decisions about what they’re doing?
  • Why is it that we have to prove to our technical systems that we are human?



About the guest:

We’re back with a familiar friend, Bernd Dürrwächter, Principal at AnalyticsDimension.com, a consultancy for big data analytics and data science projects. He studied computer science at the Frankfurt University of Applied Sciences from 1988 to 1992. Bernd is a seasoned practitioner in software engineering, IT architectures, business intelligence, and data analytics solutions. In that capacity, he has done project work in education research, supply chain management, healthcare, internet services, and media.

Bernd Dürrwächter | LinkedIn

Any technical system that can enforce security can also be programmed to subvert it.

— Bernd Dürrwächter (15:50)

Transcript:

Seth Villegas
Welcome to the DigEthix podcast. My name is Seth Villegas. I’m a PhD candidate at Boston University, working on the philosophical ethics of emerging and experimental technologies. Here on the podcast, we talk to scholars and industry experts with an eye towards the future.

Today I will be having a discussion with one of my main collaborators, Bernd Dürrwächter. Bernd is Principal at AnalyticsDimension.com, a consultancy for big data analytics and data science projects. Bernd is also a seasoned practitioner in software engineering, IT architectures, business intelligence, and data analytics solutions.

So, instead of going into the normal kind of description of the conversation and everything, I wanted to be responsive to some of the feedback we’ve been getting. In order to do that, I want to explain the key issues that we’re going to be talking about today. While the episode itself is going to be about CAPTCHAs and the different kinds of things that are necessary in order to avoid, say, bots getting into your email account, or into other important parts of your computer, I want to talk about something called scaling and how scaling is a critical piece of automated systems in general.

So, when I’m talking about scaling, I’m specifically referring to how many people can use a given system at any time. Security is one of those things that needs to be scaled. If you build a large data system and you want users to have access to it, how do you go through a process of verification that is secure, but doesn’t overly inconvenience the people who are trying to use your system? And this brings us to one of our key questions which is:

How do we think about the tradeoffs between security and convenience?

If we’re looking at the ways in which CAPTCHAs are routinely used, it’s in order to prove you are a human. And one of the reasons why this is necessary is actually shown in the Imperva report, which talks about how 25% of all traffic is from so-called bad bots, whereas 15% of traffic is from good bots. The good bots are going to be things that make inquiries, you know, finding information that people need, whereas the bad bots are oftentimes trying to access people’s accounts, trying to spam people, or doing other things that, well, most of us would probably prefer that they not do.

Source: https://www.imperva.com/resources/reports/Imperva_BadBot_Report_V2.0.pdf

So in terms of scaling, what’s really important to note here is that people increasingly have to interact with automated systems, even to prove that they themselves are human. And so this leads us to a curious situation, as John Mulaney, one of my favorite comedians, points out: trying to prove to a robot that you yourself are not a robot. And I think that there are a number of things to keep in mind with this kind of situation that are very ethically charged.

Virginia Eubanks gives an example in her book, Automating Inequality, of Indiana’s Medicaid system and the way it processed information on individual cases. What’s telling about this particular example is that the system had been completely re-engineered so that it didn’t depend on so many people. They did this in part so that more people could actively use the system and hopefully benefit from it, but what ended up happening is that the automated system started flagging people for noncompliance at a far higher rate than before.

Noncompliance is when someone outright refuses to give the correct information in order to receive their Medicaid benefits. What ended up happening is that the engineers, in creating this particular system, redesigned it so that it would automatically flag people for noncompliance if they failed to get their forms back in time. However, there were a number of issues. Basically, the system would confirm when it had sent out a given notice, even though it could often take people, say, a week or two after that to actually receive it, meaning that they had far less time to respond than they did before. Not only that, but the consequences of noncompliance were much higher than they would have been otherwise.
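To make that failure mode concrete, here is a minimal sketch of the rule as described above. The window length, dates, and names are hypothetical illustrations, not the actual Indiana system:

    from datetime import date, timedelta

    RESPONSE_WINDOW = timedelta(days=10)  # hypothetical deadline length

    def flag_noncompliant(notice_sent: date, form_received, today: date) -> bool:
        # The automated rule starts the clock when the notice is *sent*,
        # not when the recipient actually gets it in the mail.
        return form_received is None and today > notice_sent + RESPONSE_WINDOW

    sent = date(2008, 3, 3)
    mail_delay = timedelta(days=9)  # the letter sits in the mail for over a week...
    arrived = sent + mail_delay     # ...so most of the window is gone on arrival
    print(flag_noncompliant(sent, None, arrived + timedelta(days=2)))  # True

The bug is not in any single line; the rule is internally consistent. The harm comes from measuring the deadline against the send date while the human on the other end can only act from the delivery date.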

Another effect of automating this particular system is that the number of skilled caseworkers was actually reduced in favor of people who weren’t necessarily caseworkers, but who were there to manage the automated system and to make sure it was working properly. In effect, this led to an increasing amount of rigidity in this particular bureaucracy, making it less responsive to the people it was actually supposed to serve. And this is an example of the sort of thing that I’m talking about here with scaling, something that we have to continue to keep in mind: because these automated systems are not managed by any particular person, if your particular problem falls outside of the expertise of that automated system, then it can be very, very hard for you to get help for your problem.

What can end up happening is that this system, instead of being something useful to a given person for navigating their particular problem, becomes a kind of obstacle to getting the help from a person that they actually need, or to receiving the service that they’re actually looking for from that system.

So with these things in mind, the key questions for this episode are:

How can companies properly inform their users so that they can make informed decisions about what they’re doing?

Why is it that we have to prove to our technical systems that we are human?

How should we think about the trade-offs between security and convenience?

This podcast would not have been possible without the help of the DigEthix team, Nicole Smith and Louise Salinas. The intro and outro track Dreams was composed by Benjamin Tissot through bensound.com. Our website is DigEthix.org and if you’d like to get in touch with us, you can reach us on Facebook and Twitter @DigEthix and on Instagram @DigEthixFuture. You can email us: DigEthix@mindandculture.org. Now I am pleased to present you with my conversation with Bernd Dürrwächter.

Seth Villegas

(6:15) What you had mentioned to me last week that I want to talk about today is about CAPTCHAs.

Bernd Dürrwächter

Right.

Seth Villegas

I think there’s a couple things we should probably touch on too. You know, first off, what are we talking about when we talk about CAPTCHAs? So CAPTCHAs are going to be those little… They’re like pictures that show up, and you have to kind of categorize them. It’s like an “Are you a human?” quiz. So we should talk a little bit about what CAPTCHAs are, and then about why it’s necessary to have CAPTCHAs, which I think speaks to how bots are trying to access accounts, like, all the time.

Bernd Dürrwächter

Yeah. So just to recap CAPTCHA — with a captive audience. CAPTCHA has evolved over time. It started out with recognizing numbers, and now, because there are countermeasures from the bots — they figured out how to solve CAPTCHAs with AI, right? — the latest incarnation is where they show you a mosaic of pictures, fragments of a photo, and then ask you, “can you see the bicycles, which pictures contain the truck or the traffic light,” things like this.

And at some point in time it came out — somebody reported it — that they use that to train image recognition algorithms, completely unrelated to the actual main or communicated use case of proving to a website — you know, like your online banking — that you’re actually a human, not a web bot. And from what I’ve heard, it’s like half of the clicks that you do — let’s say you do four clicks before it says, “now we think you’re human” — then another four clicks will actually train the AI. And my ethical concern was that I didn’t know about that. What if I don’t want to train AI? Right? What if I’m worried that the AI takes my job? I don’t have the choice, because I don’t want to train an AI and I’m not informed about it.

And then the second part is the captive audience: in order to get on my online banking — this vital thing that I need for my life and can’t do without — I’m forced to use this CAPTCHA, so I can’t opt out. So I have no decision autonomy.

A) it hasn’t informed me
B) I have no way to opt out.

And, you know, they can push out whatever; there’s no committee that decides whether that’s okay or not. The company that creates the CAPTCHA and my bank that was using it make that decision on behalf of me, and there’s no recourse or, what do you call it, reversal, you know. And that fits into the scheme of the ethical principles of autonomy — decision autonomy — and, you know, informed consent. Somebody else makes decisions on behalf of me, and they keep it from me, right? So there’s no “Why didn’t you tell me about this? Let me make the decision.” You probably assumed that I might not have consented, so you kept it from me. That’s the core concern.

Seth Villegas

The CAPTCHA technology’s outsourced, isn’t it? Like, don’t companies just kind of…

Bernd Dürrwächter

Right

Seth Villegas

…take it wholesale from some other developer, don’t they? Well, I mean, I see the same CAPTCHAs everywhere, so I feel like they’re probably connected.

Bernd Dürrwächter

reCAPTCHA was a company, and I think Google bought them. And then yes, for the bank, basically, it’s a web service; they don’t disclose that they link to another website — it’s embedded. And that’s where I have a trust issue with my bank, if inadvertently, because it’s often not obvious to most people that reCAPTCHA is actually another company who has my data, right? It’s like you said, the bank doesn’t actually do it. They link back to the company doing the service, and that third-party company is harvesting my data when I think I’m at my bank. That’s where the lack of informed consent comes in, right? I mean, my issue with my bank was like, I don’t appreciate that you’re not communicating this. I’m used to the European standards, where there’s a long webpage explaining exactly what happens with this and that, which is annoying to the Europeans. But when they said, “We have no control over that site” — well, somebody in your bank must have made the decision to use that service, right? And it’s always hard to reach an accountable party and even express your dismay, let alone have any control over the process.

Seth Villegas

These CAPTCHAs are in essence supposed to protect you from bots, but they’re being used to train bots, and the previous generation of CAPTCHAs was defeated by bots in such a way that they had to completely change the way the technology works. And so you have to go through this arcane process of strange picture identification, which, I mean, I have to admit, I don’t always get right, because the pictures can be very grainy, bad quality…

Bernd Dürrwächter

That’s a good point. Sometimes you fail as a human to prove that you are one, and that comes in as an ethical concern.

Seth Villegas

Right, exactly. So… and even what you’re saying now is that the way CAPTCHAs work now is also being used to train AIs — specifically for image recognition — which means we’re going to be on a treadmill, aren’t we? Where eventually there are going to be bots that are really good at this form of CAPTCHA, and then they’ll have to do some other weird thing to prove that you’re a human. It’s…

Bernd Dürrwächter

Yeah, it’s even more like Mechanical Turk — but there they hire people to do image recognition training, and those people know it; here they outsource that and we become, you know, unpaid labor without even knowing it. But this is not even the latest generation. The ones I like best are where there’s just a button: “Click here to prove you’re human.” You click on it and it continues. Like, “That was it? How did they know I’m a human?” But what they do is track your mouse movement, and they have statistics to figure out how humans move their mouse. And the latest generation actually has a footprint, like a cookie, where they have certain parameters from your system from which they infer that you’re a human, because the hackers or the bot makers have a different footprint. So that goes back to gathering a whole bunch of data about you that you don’t know about, and they already know you’re human. Like, “Wait, that’s getting to the creepy level — how do you know I’m a human? I want to prove that I’m human. But how do you already know that?” It’s convenient, but it goes back to the… I feel creeped out by the surveillance.
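For a flavor of what “statistics to figure out how humans move their mouse” could mean in practice, here is a toy heuristic in Python (an illustration, not reCAPTCHA’s actual algorithm): scripted cursors tend to glide in straight lines at constant speed, while human trajectories wobble.

    import math

    def looks_scripted(points):
        # points: (x, y) cursor samples taken at a fixed sampling rate
        if len(points) < 3:
            return True  # too little trajectory to judge; treat as suspicious
        speeds = [math.hypot(x1 - x0, y1 - y0)
                  for (x0, y0), (x1, y1) in zip(points, points[1:])]
        turns = [abs(math.atan2(y2 - y1, x2 - x1) - math.atan2(y1 - y0, x1 - x0))
                 for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:])]
        mean = sum(speeds) / len(speeds)
        variance = sum((s - mean) ** 2 for s in speeds) / len(speeds)
        # Near-constant speed and no change of direction looks robotic.
        return variance < 1e-3 and max(turns) < 1e-3

    scripted = [(i, i) for i in range(20)]                       # straight glide
    human = [(0, 0), (3, 1), (5, 4), (9, 5), (11, 9), (16, 10)]  # jittery path
    print(looks_scripted(scripted), looks_scripted(human))       # True False

Real systems reportedly combine many more signals (timing, browser fingerprint, account history), which is exactly the background data-gathering Bernd objects to.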

Seth Villegas

It definitely is a bit unsettling to think that the verification of mouse movements is tied to a virtual profile of how you use your mouse already. It’s actually something I haven’t really thought about before. I mean, it’s plausible, it makes sense. But…

Bernd Dürrwächter

You’ve seen them, right, where you just click a button — it’s the ultimate CAPTCHA. We say, “Oh, that’s convenient, which is awesome.” This is often how companies…

Seth Villegas

I just assumed it was a bad CAPTCHA. Like it was just the entry level package or whatever, and they couldn’t afford the photos. So here’s a checkbox.

Bernd Dürrwächter

I think it also has to do… one way it might be working — like I said, I’m not actually sure exactly; as a technologist I’m inferring — would be that the rendering of the “click here” button is actually an image. Traditional web pages have a lot of markup that would tell a bot what the structure is and where to click. There’s a way to render a visual on the screen without it being HTML code. So it might be that only a human could possibly see that button, right? If I say “click here,” the bot says “click where?” — only the human sees that visualization. That would be the more ethical version. But I have heard about Google using the footprint of your computer: they were told not to do cookies anymore, so what they did is figure out a better version of the cookie, more hidden and not obvious, so people can’t have an opinion about it. I mean, they own the browser, and we do most things through a web browser, so they have all kinds of proprietary measures in the browser. And if I have to provide a solution to go with the criticism, it’s: be transparent, right? Let people opt in and out. Tell them exactly what you’re doing, for what purpose, how it benefits them, and let them decide — “Yes, I decide what benefits me, not some paternalistic organization.” And transparency always saves your reputation, right? A company never got in trouble for, “This is what we’re planning to do, involve me.” It’s like, “Okay, I’ll choose not to do business with you, but thank you for involving me, and for letting my voice matter.”

Seth Villegas

(13:55) Again, a couple things that we’ve already touched on, but I think it’d be important to bring up here: I do think that Google would have a lot to gain from explaining the actual security problem. The amount of bots that are just trying to hack into things every moment of every day is… yeah, it’s a lot more than I think most people imagine. That’s why we need things like two-factor authentication.

Bernd Dürrwächter

Even that can be faked. Two-factor can be hacked, right.

Seth Villegas

Yeah, I mean, no system is going to be perfect, but the nice thing about two-factor authentication is, at least in my case, if I see that a password is compromised, and I know I didn’t log into something, I change it immediately, just because that’s very, very scary. I don’t want to lose access to that account.

Bernd Dürrwächter

And just a quick, you know — what’s two-factor authentication? That is when you try to log into a website, and then it calls your phone number, or sends you a text to a different device. That’s assuming only that person would have that phone number or that phone handy when they need it. That’s been subverted, and it goes back to — because I worked a lot in IT security — there’s really no… you can’t secure the whole chain just through technical means. You can always socially engineer. If you look at Kevin Mitnick, like, the most notorious hacker in the US — he wrote a book — he says, “80% of my hacks were social engineering, where I pretended to be somebody else and often somebody else gave me that password.” You can do the end-to-end encryption, and in the end an app can watch what you do on the screen, or somebody calls you like, “hey, I’m Bob from… and I need your new password,” and there’s no way to technically enforce against that. It needs to go back to fostering trust amongst humans and not rely entirely on technical systems, because any technical system that can enforce security can also be programmed to subvert it.

Any technical system that can enforce security can also be programmed to subvert it.
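A note on the mechanics: the episode describes SMS- or call-based second factors, but the same idea is often implemented as app-generated one-time codes (TOTP, RFC 6238), where a shared secret plus the current time yields a short-lived code, so nothing secret crosses the network at login. A minimal sketch, with a made-up example secret:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32)
        counter = struct.pack(">Q", int(time.time()) // period)  # 30-second window
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                                  # dynamic truncation
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
        return str(code).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # example secret; prints a 6-digit code

Note that this only hardens the technical link; a caller who talks you into reading out your current code defeats it, which is exactly Bernd’s social engineering point.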

Seth Villegas

Certainly. And it’s actually funny because — I thought it was from this comic strip called The Oatmeal, though it’s actually from xkcd — there’s basically a way of using words, like really small words, putting two or three of them together in order to make a longer password. Because, you know, the way that password cracking worked at that point was, you just kind of run through all the letters, all the numbers — you just run through all of them. So basically, the longer your password is, the less likely it is that it can be compromised. But, again, if you have a method like that — a simple algorithm, even a simple algorithm that supposedly humans can use — you can create a bot to do that.

Source: https://xkcd.com/936/

Bernd Dürrwächter

Right.

Seth Villegas

So it can make up small-word passwords like “cat horse dog… 22” or something.
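A back-of-the-envelope sketch of the arithmetic behind both points: longer passphrases do enlarge the search space, but a simple word-combination scheme is also trivial for a bot to enumerate. The word-list size, word count, and guess rate here are illustrative assumptions:

    import math

    WORDS = 2048         # hypothetical word list
    PICKS = 4            # words per passphrase
    GUESSES_PER_SEC = 1e9

    combos = WORDS ** PICKS
    print(f"{PICKS * math.log2(WORDS):.0f} bits of entropy")  # 44 bits
    hours = combos / 2 / GUESSES_PER_SEC / 3600               # average search time
    print(f"~{hours:.1f} hours to brute-force on average")    # ~2.4 hours

So “cat horse dog 22” beats a short memorized password, but once the attacker knows the scheme, a small word list falls quickly; the remedy is a bigger list, more words, or rate-limited logins.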

Bernd Dürrwächter

Well, so, this is an illusion — just because it’s complicated to humans doesn’t mean it’s complicated to an algorithm. And the other fallacy with the long password is, uh… it’s the weakest link, right? The hacker doesn’t need to guess your password, because you will eventually store it somewhere else, right? Or it’s saved as a cookie. Security is always about the weakest link. And to make a good analogy — Europe is underwater right now, right? The Netherlands, you know, they had lots of rain and flooding. And the dams in the Netherlands — because the country is below sea level — were designed to keep the sea out; they were never intended to protect from the other side. So they have hundreds of miles of dam, and it doesn’t matter where the hole is — the sea will come and flood the whole country. It doesn’t matter how much you reinforce one area of the chain; a weakness somewhere else will still flood the system. Same thing with computer security. The password is just one part of a bigger security chain, and if that’s bulletproof, then they’ll just find another attack. It’s called an attack vector, right?

So this is all designed for humans to be able to take care of, but it doesn’t really make you safer… a complicated number sequence is only complex to the human brain. Computers were designed to handle long number sequences. So a lot of these security measures are for social purposes, to make you feel more secure. Like with… are you familiar with the credit card, how at some point in time they started to put another three-digit number on the back, the CVV number? All that does is make the 16-digit number three digits longer. But if somebody has your card, they just copy the number on the back too — if you know how to take a copy of the card, it’s the same process, you can copy the number on the back. So what does that really solve?

Seth Villegas

It also makes me think of… you know how your password will get, like, rated when you create it? Like there’s a green, a yellow, and a red.

Bernd Dürrwächter

It has to have an exclamation mark or, like, a special character, uppercase and lowercase — I know, yeah. But this is the script of computer security. If we go back with an ethical lens, it’s really: how do you train people — you know, the developers — because not everybody’s a hacker, right? There are corporations who use that in a non-hacking way… that’s where this… Facebook actually is located at 1 Hacker Way. They have this hacker ethic, but not in the sense of ransomware, you know, criminal organized crime. How do you tell people it’s not okay, right? Just because you managed to hack something — like the guy — was it Clearview? — who hacked into a dating site’s accounts. You get the profiles, on the notion that anything that’s on the internet is a public good. It’s like, “yeah, but you hacked into accounts to get there.” It was clearly not public, right? He’s like, “well, they should have picked better passwords,” was the argument. Like, you don’t get the point. The spirit of the password is to mark what people view as private — not “if I can hack it, it’s mine.” That’s like saying if I managed to break into the bank, it’s legally my money, right?

Seth Villegas

Yeah, it’s really funny that you put it that way. Because, you know, one of the things we know from things like Cambridge Analytica is, if there are certain exploits in the system, you can suddenly get access not just to that person’s data, but also to all of their friends’ data, and maybe even friends of friends’. It’s actually staggering how much you might be able to get access to just by being able to crack into accounts that are low-hanging fruit, so to speak.

But even as you mentioned earlier, you can also hack, say, the cookie system — the thing that’s holding the password. I think that part of the reason why security is so important at the moment is because of what you mentioned earlier in terms of ransomware attacks — which, I mean, I hadn’t been thinking about as much until we really started talking about them a month or two ago. But now that I’m more aware of it, I see them in the news all the time: there’s been a ransomware attack against this company or against that company. Basically holding their information system hostage in exchange for money, and then usually using some form of cryptocurrency to pay out.

Bernd Dürrwächter

And just to be clear, these ransomware attacks are driven by organized crime or partially by state actors. Like, Russia harbors a lot of them, where they openly tolerate them because, you know, there’s political Russia versus the United States, the historical Cold War. Clearly they behave unethically, but that’s organized crime — crime by definition is unethical, and they will never change their behavior; they clearly don’t care about laws. The bigger issue is in the modern — I don’t know how to frame the demography, but it’s not necessarily just the younger generation of technologists. We had that in the 1990s, when a lawyer couple spammed Usenet — that was a volunteer network where people were discussing things, kind of similar to Reddit, right? And they commercially exploited it, even though the ethic of that forum, because it was run by volunteers, was that it not be exploited commercially. They kind of invented spam. They said, “we do it because it’s possible, because no law keeps us from it and it’s technically possible; therefore, we’re morally justified to do that,” which is basically diverging value systems. And that’s kind of the audience we’re trying to reach, to say: just because it’s possible… legal is just one test, but ethics is more like, is this the right thing? I’m not waiting until somebody slaps me on the hand; I proactively think about whether this is okay — what if somebody did this to me? And there’s this hacker ethic of “if it’s possible, then it’s alright, and I decide that as an individual,” versus, how do other people feel about that?

I don’t think we can have that argument with organized crime, because they don’t really concern themselves with that. You know, they operate in the shadows. They don’t really suffer as individuals from social repercussions, because they’re usually in the dark — they’re all individuals that usually nobody knows. So they have no reputation to lose.

Seth Villegas

(22:15) Right. And I think when we’re having discussions like this one, it is important to keep in mind that we’re not explicitly talking about people who are looking to operate outside of the rule of law — people who aren’t looking to act ethically — but rather people who, while staying inside it, are looking to exploit systems, exploit people, and other sorts of things. Whereas I think, if we were to talk about those other people, it’s more about how we can protect ourselves against those things in reality, rather than feeling like we’re protecting ourselves when we’re not — which I think, from everything we’ve talked about so far, has been a big point, right?

And it’s actually funny, because analog — writing stuff down — is in some ways more secure than it’s ever been, even though the old advice used to be, “don’t leave your password written down next to your computer.” And I still wouldn’t recommend doing that. But in a lot of ways, because everything’s so remote, it actually is more secure than just leaving things in your browser.

Bernd Dürrwächter

Yeah, they literally call that cold storage. There are Swiss companies — the Swiss army used to have hangars in the Swiss mountains in the Cold War, where they had all their military gear stored, with like three miles of granite on top of it; only a nuclear strike could touch it. And as they demilitarized, they rent out a lot of the space for cold storage, where it doesn’t have any connection to the internet, so nobody can reach it, as it were — which is super obscure.

That’s the other thing I just mentioned: security by obscurity. No matter how much data you put on your computer, however you name it, an algorithm can sift through it — that’s what computers were invented to do, sort through vast amounts of information. But look at my office: I have a million Post-its, paper flying all over, millions of books. If you were in my office and knew there would be a password — I actually do have some of my stuff written down — you wouldn’t know which of my scribblings are an actual password and which are just my ideas. And there’s no way to automate this process, right? There’s no algorithm that — I mean, maybe they could take pictures of it all, but it’s almost not worth it. That’s the benefit of an analog method. But then 90% of people write down long passwords and put them under their keyboard. I noticed that when I was a computer technician — literally sometimes it was, “Oh, dang it, they wanted me to fix their Windows, they already went home, how do I log in?” and the password under the keyboard was the answer. So that analog version doesn’t work when it’s a common habit that everybody knows…

Seth Villegas

Yeah, no — if you keep something that’s as easy to locate as that, it’s probably not quite as secure as it otherwise could be.

Bernd Dürrwächter

What I’m personally interested in — since we’re still on defense mechanisms — is that I would like to raise more awareness and come back to a society where there are social checks. And I watch the Chinese developments with curiosity. I’m not saying I’m endorsing them, but they have a lot of mechanisms — they do this total surveillance. But in the end, if you jaywalk across the street, the first thing they do is put your face on a big billboard for everybody else: “look! there’s a jaywalker!” That used to be the old model of accountability, when people didn’t have an online world and everything was face-to-face. It kept a lot of the non-hardcore criminals from doing stupid stuff, because other people saw you and you’d get socially shunned or yelled at. And that’s missing online.

Actors — anonymous, some data scientists or technologists — nobody really knows who created that algorithm or who caused that problem, because they’re part of this large supply chain. I believe a lot of ethical problems could be solved if it just became more transparent who’s doing what. As soon as people know — like that example of the people at Cambridge Analytica — it’s, “oh, it was your idea to do this. How dare you?” Right? Just creating that transparency, and you move away from imposing rules or micromanaging everything. Going back to a more social accountability, I believe, will clear up a lot of these minor ethical violations, because a lot of what happens happens because there are no consequences and nobody will find out.

Seth Villegas

A couple things on this point. I think, in some sense, what you’re saying is true in that there’s not as much potential for recognizing who the bad actor might be in a lot of cases, if there’s no trail, right — there’s some way to preserve anonymity. But I will say, a lot of people experience social shame much more acutely than they did in the past because of things like social media, over stupid things. I don’t really know how to put it, right? But the social enforcement mechanisms seem to be kind of hyper-focused on, like, performative things.

Bernd Dürrwächter

Well, it’s a slippery slope too, because if we use social shaming — you know, there’s also social shaming that’s not justified, that’s more like vigilante justice. That we, the mob, decide you did something wrong. It’s like, wait, that sounds like the lynch mobs they had in the Wild West. Just because 50 guys are really upset with me doesn’t give them the right to judge me.

Seth Villegas

I think one of the other things I will say is that having individual communities with their own social rules and enforcement, at least to me, seems fine. So, for instance, subreddits have their own moderators, they have their own rules, they enforce those rules. And it’s the relationship between the website, Reddit, and the subreddit and the subreddit users that becomes complicated. Where it’s like, “well, we have this thing, but maybe we don’t agree with the existence of this thing.” And depending on what it is, maybe that’s justified, maybe it’s not.

Bernd Dürrwächter

What’s interesting — I spend a great deal of time reading about political science, especially about the Cold War and how diplomacy used to work. And one of the methods of preventing war is to slow down communication. Between states, the slower you communicate, the more you deliberate; your rational brain kicks in. So one thing that diplomats do is stall — I mean, you know, make the thinking slower, a lot of it to keep people from rash decisions. And I keep thinking one of the ideals could be to have, like, a democratic process, kind of like ballot initiatives, but it has to be slower than social media, because I don’t think people deliberate enough. Where it’s like, “oh, I just liked this post, and look, 5 million people didn’t like this. Clearly, something’s wrong.” Yeah, but you didn’t deliberate. It was very impulsive. It’s hardly justice, as it were.

Like, how can we find a process that’s faster than Congress, which takes 10 years to pass a reasonable law, but slower than the split-second, you know, thumbs-down judgment — well, that doesn’t work. Somewhere in the middle ground: before you judge somebody, you need to spend a week thinking it through, right? And then if enough people are still upset about it after they’ve thought it through for a week and talked with other people… It goes from one extreme, of the law being way behind the technology developments — every week something new comes out, but it takes Congress 10 years to understand and act on it — to the mob mentality that you have now. I think that’s more of an infrastructure thing than a technical, like, encryption thing. It requires process change, technology change, and even mindset change.

Seth Villegas

One thing they started doing on Twitter — and I’ve had to be more active on Twitter, in part because of this podcast — is, if you go to retweet something without clicking on it first, you’ll get a little message telling you about that. Like, “oh, hey, do you still want to post this? It looks like you haven’t read whatever this is.”

Bernd Dürrwächter

Oh, like these “are you sure?” buttons?

Seth Villegas

Yeah. I mean, honestly, it’s been pretty effective in my mind — like, “oh, yeah, I haven’t read this; maybe I should look at it first before I tweet it.” Because, you know, I’ve seen this on Facebook, the “oh, did you read this?” and being like, “oh, no.” Right? So it’s funny, because at times people respond to something as if they’ve read it when they haven’t.

Bernd Dürrwächter

(29:56) It’s interesting you mention that, because I just spent a day with a German friend. We talked about the whole TL;DR thing and how we’re, like, old school. I get upset about it; at the same time, he said there’s a lot of content that’s written so badly that it creates this artificial tension and makes you read far more than is really necessary — where the same thing could have been said in 5 paragraphs instead of 5 pages. And I find myself there too, with information overload, where I constantly feel there’s more stuff on my reading list and I have to move on to the next important piece, so I take shortcuts, I speed-read. Like lawyers have always done, right — way back, when you have a lengthy document but only 5 minutes, you speed-read. The bigger theme — this is more philosophical than ethical — is, like, decelerate instead of accelerate, right? Decelerate would be to live a more conscious life, to go back to being more deliberate by using our rational brains instead of being emotionally reactive about everything — which we’ve kind of been engineered into by the whole marketing complex, right? Like, this was by design: act on this offer now, it’ll expire in five minutes, or other people are getting it, hurry, don’t miss out, right? That’s the whole conditioning to act fast so that we turn off rational thinking. I think it’s about decelerating the thinking, with whatever built-in social media mechanisms, like you said: “are you sure you want to do this?” Stop and think, don’t just act on reflex.

A lot of politicians do this… intuition these days is conditioned and engineered. Intuition is basically when you’ve been exposed enough to a pattern that it becomes like riding a bicycle — you don’t have to think. You’ve used it so much that it’s internalized subconsciously. But this subconscious can be engineered. You can condition what somebody believes is their intuition. And they do this politically, right — you’ve heard it enough, you start believing it must be true. And at some point in time, you don’t even think about it. So it’s dangerous when people think, “but my intuition told me.” You don’t realize that intuition was manipulated or engineered. And that goes back to informed consent. If somebody thinks they need to influence me, they should be honest enough to say, “this is our intent.” Trying to undermine or change me without me knowing that I’m being changed — I consider that very unethical.

Seth Villegas

So, to tie this into informed consent, as you’ve mentioned a number of times so far, I think there are a couple of different problems. The first one is information overload — people being bombarded with, perhaps, legal information, stuff that’s not very digestible, things that don’t make sense. So, for instance, lots of terms and conditions people don’t read. There’s actually this really famous case of someone sending a bank a revised terms and conditions with an extra clause in there, and the bank signed off on it. The bank later filed a lawsuit — but they had agreed to his terms as well. So it was kind of this interesting judo flip on them.

Bernd Dürrwächter

The guy who changed the terms on them, and they signed off on it?

Seth Villegas

That’s right.

Bernd Dürrwächter

That was funny.

Seth Villegas

And they tried to sue him. And their defense was, “well, we didn’t read it.”

Bernd Dürrwächter

It’s like — oh, the very thing you expect of us. And I’m not sure if we talked about this in the last conversation, or if we talked about this offline, but to use a positive example: watch Google over the years and how they really put effort in whenever they had a new version of the terms of service. It’s kind of like when software comes out with a new version: “here are the features that have changed.” And they also have this summary, like, “here’s the legal text, and here’s a sentence of what we mean by that in normal language.” And that was a good move in the right direction, because the thought behind it was to make it understandable for you. So I’m going to throw one more European perspective in there. In Germany, there’s this concept that a person can’t consent to something they don’t understand, right? If you throw a legal document at somebody in legalese language and they click okay on it, it’s not really consent if they didn’t understand it. A monkey can click on an okay button, but it can’t read English, or whatever the language is. You can hardly say the monkey consented to anything if it didn’t know the concepts.

And, on the other hand, in Germany you have this concept that just because you’re not aware of a law doesn’t protect you — if you violate it — from prosecution. It’s your duty to familiarize yourself with the law. But there’s also the tactic — as part of information warfare — of overloading people deliberately. It’s like a denial-of-service attack: you throw so much data at a person that you know they can’t absorb it, and you render them dysfunctional as a defender. So how do you address that, right? “This is the law, you need to understand it” versus being buried in so much that you just click okay, knowing that you didn’t really agree to anything because you didn’t understand it.

Seth Villegas

To tie this back to CAPTCHAs — one of the central issues that comes up in relation to bot attacks, and to being actually informed about a situation, is the increasing sophistication of bots impersonating people. This can happen in, you know, bad ways, like people getting calls from automated systems. For instance, almost every call I get these days that’s not from someone I know is from some bot telling me it’s a government organization. But it can be much harder to tell if something’s a bot over, say, email, in a message. You know, sometimes people just aren’t good at communicating. So… dating sites are filled with bots and catfishers…

Bernd Dürrwächter

You go through a Turing test. With every online social interaction, you basically have to do a Turing test.

And what’s cute — I know this sounds funny — but there are some chatbots, when you go on a website and something pops up, that are done so badly that you know it’s just a script. It pretends to be a person, but it’s really almost mechanistic; it has no clue what it’s talking about. And I appreciate that, because it’s almost like they put effort into making sure that I know it’s not a real person. There’s a picture of somebody — “Hi, I’m Sally and I’m your customer service representative” — but then the way it acts… like, thank you for making it easy for me to know you’re not a real person. Versus the sophisticated ones — “I need your social security number,” all these sophisticated things — and then I realize, oh crap, I just gave out my stuff. It’s the misleading aspect again. To me, it’s almost like a text version of deepfakes. And that is one of my observations of a troublesome trend: using AI not to solve a complex puzzle or problem, but — from what I observe — designing it to fool people into believing it’s a person. And that’s, to me, really fundamentally unethical: that you put effort into deception.

What is it — the marketing engineering program at one of the California universities? The guy who is basically responsible for dark patterns — there was literally a university course; I don’t know which university it was…

Seth Villegas

Oh, yes, yes.

Bernd Dürrwächter

… he was a marketing guy, not a technology guy. But he basically taught how you can use technology to deceive people — well, not deceive, but nudge them onto a certain decision path. Like the “yes, buy it” as a big, prominent 3D button and the little “no thanks.” Or it would say, “no, I’m a loser, I’m not interested” — I’ve seen something like, “I’m not cool enough to accept this offer” — where it’s light gray on a dark gray background. Dark patterns — you know what I’m talking about.

Seth Villegas

I had the unfortunate pleasure of downloading a game that I thought would be fun, you know, for a few minutes. And it was mostly ads, and trying to close those ads was very difficult — a kind of Herculean task in itself.

Bernd Dürrwächter

There’s this whole complex in the Apple ecosystem where, I would say, a large share of the apps are outright fraudulent, and Apple doesn’t catch up with it. You know, they have the intention to filter them out, but they can’t keep up. From an ethical standpoint, it’s like the old Trojan horse pattern, right? You lure somebody with something attractive, but it’s really subverting the system. And it’s also an education thing — how do people still fall for that?

Once I was aware of it, most apps felt malicious… And I’ve experienced that too, when you try to click on something and they swap the UI around, so you click on accept even though you didn’t mean to. Amazon did this a while back, when they started the one-click purchase. They moved the button around, or made it so that I clicked on something I didn’t intend to — oh, I didn’t want to purchase that. You could opt out, you could reverse the purchase, and they undid the design because a lot of people complained about it. But some of these are small enough players that if they rip off a couple thousand people and then close up shop, they’ve made their profit.

It’s also really sad that we’ve kind of implicitly generalized Russia as bad, because that actually damages a lot of legitimate and well-intended people. I do electronic music, and every once in a while I look for a new device for my music, and there’s a guy in Russia who’s a do-it-yourselfer, kind of like a Kickstarter guy — he builds this stuff. He’s very amateur; he does it himself, not a big corporation — he builds it in his own garage. It looks really cool, and I know a lot of people here who bought it, but the ordering process is a very amateurish website that asks for your credit card number, or PayPal. The whole thing smells fishy — except I know from the community that he’s legit. And that guy probably doesn’t get as much business as he should, because, oh, he’s Russian, and he’s online, and everything looks fishy, because everybody judges him by the stereotypes, even though I’m pretty convinced he’s a good guy. So that makes it hard too. It goes back to the trust model, because the bad actors poison it. I’d like to trust somebody, I’d like to give them the benefit of the doubt, and then they undermine that. And once you get to that level of cynicism, well, there’s nothing but betrayal out there.

Seth Villegas

Yeah, and another example — actually, well, not an example, but kind of the converse of this — is that things can seem really official but still be scams, which I think is also the case.

Bernd Dürrwächter

Definitely the case. And that has nothing to do with online. In the mail, I’ve always gotten things that look like they’re from the US Mint, or marked “urgent,” and it looks like it’s from the government, and you open it up and it’s just spam marketing. So that mindset was there before online and email.

Seth Villegas

Yes, certainly. Actually, in the cryptocurrency space, one of the biggest things that can happen is, you know, turning — I don’t know — your Bitcoin into something that’s actually worthless. Because they had a pretty good website, people think it’s a real project, that it has real backers behind it and everything else, when it doesn’t.

Bernd Dürrwächter

(40:23) I mean, this is one of the things where we need to be careful that we don’t make it sound like digital tools invented that ethic. We go all the way back to the Wild West and they had snake oil — “this will heal all ailments,” and so on. And that makes it hard to combat, too. I’m not surprised, with marketing as the business of social media and the modern web — you know, Google and Facebook, how do they make their money? With advertising; it’s their core business. And all the deceptive practices go back to: some marketing companies want to influence consumers into buying their stuff or spending their money with them. That has been around long before digital tools. It’s almost like part of the American way, to persuade somebody to buy your stuff, or even politically. And a lot of people like that — persuade me, create leadership that I can follow.

I personally find it hard to get traction on this amongst a lot of people — it’s like, why is this bad? I like to be informed, I like to be persuaded, right? And they don’t realize there’s a fine line between unknowingly being manipulated and somebody who tries to articulate a value proposition. If you look at, for example, German culture, there’s this level of distrust that exists from precedent — like, “yeah, prove to me what you say; I’m not going to believe it just because you say it or present it in a pleasant way.”

Seth Villegas

A lot of those things work just based off of numbers, right? It’s not that they have a high return at all. So, for instance, there’s this really elaborate scam related to stock prices, or crypto coin prices, where they’ll say, “okay, look, the price is going to go up,” right? But to 50% of people they say it’ll go up, and to the other 50% that it’ll go down, right? And to all the people they succeeded with, with their “tips,” they’ll keep sending emails, and they just do the same thing. By the end, they’ll have sent someone six or seven correct tips in a row, right? And then they’ll ask for money at that point.

Bernd Dürrwächter

Right. Yeah.

Seth Villegas

And so it seems credible. But it’s strictly on the basis of managing numbers and probabilities that they’re able to produce something that seems legitimate — because the forecasted information was accurate.
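The arithmetic of this scam is worth seeing in miniature. A toy simulation, with a made-up starting audience:

    recipients = 64_000  # hypothetical initial mailing list

    for tip in range(1, 7):
        # Half are told "it goes up", half "it goes down"; only the half
        # that happened to receive the correct "prediction" is kept.
        recipients //= 2
        print(f"after tip {tip}: {recipients:,} people have seen only correct tips")

    # After six tips, 1,000 people have watched the sender be right six
    # times in a row purely by chance, and only they get asked for money.

No forecasting skill is involved; the sender pays for the illusion with the 63,000 people who silently received a wrong tip and were dropped.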

Bernd Dürrwächter

And it goes back to the social engineering, right? It’s not just manipulating data; they’re pinging, and they figure out who’s gullible — or whose trust they can get. This is what modern technology, with social media, allowed: this feedback loop where, if you have bad intentions, or you just want to make money for yourself — you’re more Machiavellian; your goal is not to hurt people, but you focus only on your own interest, and in the process you hurt people. Being able to put something out there and see how people react — this is why one piece of guidance about spam is: don’t ever respond to it. If you say, “haha, do you think I’m that stupid?” — my brother does this, he literally argues with spam emails — you just send them the signal that your email address is live. And they profile you — everything you say, they profile, and they send that to somebody else.

And we talked about this a while back — how the elderly, like our parents’ generation, you know, in their 70s and 80s, were just… The older generation, they believe everything. They trust the figures on TV, right? It was a little bit more of a narrowly managed ecosystem. So they look at the screen — our dad did this — and it says, “your Windows system is infected. Call us so we can disinfect it,” right? They didn’t actually infect anything; they just suggested that something’s wrong. And then you pay $100 for an hour where they really didn’t do anything.

I know it sounds finicky and morally debatable, and I’m probably missing legitimate opportunities and offers. But during COVID, this thing really ramped up, when the less ethically concerned figured out that everybody’s doing stuff online — you can see it ramp up in the reports of security companies. And it’s gotten so bad that I took the default stance of “it’s fraud.” And it’s not made easier. For example, my phone is from T-Mobile, and they use a third party… And in the whole process, even Google told me, “don’t click on this!” and my malware software kicked in: “this is fraudulent!” Okay, ignore this. And then it turned out it was a late bill — they stopped my account because I didn’t pay the bill, because “we tried to reach you!” It’s like, “yeah, but you guys used spammer and phishing methods.” All the security mechanisms warned me, right? You’re a legitimate company; you should know better how to do your communication with your customers.

But I’ve literally had people I hadn’t heard from for years send me emails, and all my emails ended up in spam folders, so I had to manually determine what’s real. And one of the ways of the spammers or the… criminals: they can read your email — email, unless it’s encrypted, everybody can read it, right. So they do like, “hey, it’s your brother so-and-so,” using names, using details that, you know, I thought only my brother knew — but we talked about it in email. And it gives a sense of familiarity: who else would know this information? Well, anybody who can listen in on your conversations with a loved one. There’s only one way to describe it, and it’s very cynical — Zuboff describes very eloquently how that’s already eroding society.

Seth Villegas

To give a more positive example of the ways in which trust can work: something like eBay really only succeeded because people acted in good faith, basically. At that crucial point when they were trying to get the network effect, for people to really take it on, the overall run of interactions was so positive that people believed it was legitimate — even though there was a bit of distrust in the back of people’s minds in the early days of eBay, which I think is still fair, by the way. I’m just saying it’s an interesting thing: if those systems do work, they can work remarkably well, but that also incentivizes spamming them.

Bernd Dürrwächter

I haven’t followed eBay in a while, but I’m familiar with what you’re talking about. eBay kind of pioneered that whole use of feedback to generate a level of trust. I would say Amazon did the same thing, but probably, what, 10-15 years later. And the Amazon system is being exploited left and right, including by Amazon or its vendors. So I’d be curious: why did it work for eBay, and why does it not work for Amazon? But it’s also different times — there’s probably 20 years between one and the other. You know, eBay is not as popular or as conspicuous as Amazon is. Okay, here’s a compounding example — a negative example and a positive example: what did they do differently? And it’s not always that they did something different; it was just a different time. Why did MySpace fail and Facebook succeed, right? It looks like the same thing. Well, it was slightly different times. Facebook proliferated through mobile devices; when MySpace came out, there were no mobile devices, right? So it’s often factors that are not obvious why one worked over the other.

Seth Villegas

Yeah, Facebook and MySpace are actually funny, because you will see really young people wishing they could play a song on their profile, on their Twitter profile and whatnot. It’s like, oh, you’re asking for something that was already implemented. People hated it. So they did something else.

Bernd Dürrwächter

Yeah, I am always fascinated by the technology progression. I’m old enough to have seen at least three technology evolutions. The original Internet — the fact that you network with people online. Before the whole graphical, hypertext web, everything was text, even CompuServe. It was pretty much text — that’s where the semicolon-dash-parenthesis smiley ;-) came from; there were no graphics, right? Then you had websites where suddenly you had graphics. And then social media is the third generation that I’ve seen, where now you have not just text but the whole emojis, the network, the social newsfeed.

An example: back in ’91, ’92, on CompuServe, you could type and you could see everybody’s letters in real time. CompuServe was a closed system — they had a mainframe computer, and millions of people from all over the world connected directly, so when you dialed up, you dialed into the mainframe, so it was all real time. And to me, it felt like I was talking with a real person. They typed fast, they backspaced, you could see every single letter in real time. And you felt like you were talking the way you talk in person: somebody says something stupid — wait, I take that back — right? They can’t undo it; it’s out there. That’s how a conversation happens, spontaneously. And I felt that was the next big thing online. And at some point in the 90s, Yahoo Messenger said, “people don’t like that; they don’t like that other people see their typos.” So now you just sit there. And then at some point it’s like, “well, I don’t know if they’re typing — are they still there?” And then they did this — and you see this on the phone now — “somebody is typing.” Meanwhile, when it freezes up, it still says somebody is typing, and they’re long gone. You sit there and wait. It’s like, that’s a degeneration! We were there 20-30 years ago, where we created a sense of closeness by seeing all the little flaws, and that has been buried in the process. We actually made it more dysfunctional. And then there’s the mere fact that people use a phone mostly to text when we could do video chat. We have the means — why are people still texting? Because they don’t want to be seen; they might not have makeup on, they might not have the right clothes on. So texting creates this abstraction level.

Seth Villegas

The other thing that’s funny, to bring this back to what we were talking about earlier, is that if you do see someone typing in real time, you’re probably not as worried about it being a bot as you would be otherwise. I mean, maybe you could kind of mimic...

Bernd Dürrwächter

(49:58) That’s the irony: the bots already know that. They can randomize or speed up the typing. It’s really only human perception thinking, “oh, that signals a person.” I think people have the wrong notion of what a bot or a computer can do. Lay people underestimate how sophisticated computers can behave these days, based on what they’ve learned from human patterns. I mean, it’s not that somebody programs them; they literally observe how humans behave.
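
Bernd’s point is easy to demonstrate: randomized delays are trivial to script. A toy sketch, purely illustrative (the `send_char` callback is a hypothetical stand-in for whatever delivers a character to the channel), of a bot faking human typing cadence:

```python
import random
import time

def type_like_a_human(text: str, send_char) -> None:
    """Emit characters with jittered delays so the cadence looks human."""
    for ch in text:
        # Humans pause longer after spaces and punctuation; add noise everywhere.
        base = 0.25 if ch in " .,!?" else 0.08
        time.sleep(base + random.uniform(0.0, 0.15))
        send_char(ch)
        if random.random() < 0.03:  # occasional fake typo, then a backspace
            send_char("x")
            time.sleep(random.uniform(0.1, 0.3))
            send_char("\b")
```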

This is actually one of the arguments for privacy, right, that we have multiple personalities. You said that: we talk differently to parents than to peers and friends, versus business partners or research partners. We always have multiple personalities, depending on who we interact with, right? That’s why it’s special when you have a partner: you have a level of intimacy you don’t have with strangers. A lot of this surveillance state, or, what do you call it, surveillance capitalism, takes that away, saying, “We assume you’re only really one person. We’ve judged you by all the data we have and reduced you to one personality,” when that’s actually not who we are.

Seth Villegas

I think from a more abstract, philosophical perspective, that’s always the case, right? You’re not going to be able to completely capture who somebody is just by the way they behave online. Though, I mean, you can learn a lot about somebody by the things they choose to talk about and who they choose to talk about them with. But personally, I’m probably not comfortable with that. There are things that I would like to keep private, right? Especially things I haven’t thought about yet, things I’d like to think about more...

Bernd Dürrwächter

I meant more, not so much in the privacy sense, but more being judged, you know, in a very simplified way. We were just talking about how GDP, the Gross Domestic Product, is a single number that’s supposed to reflect how the US, Germany, and Japan are the three richest countries. And yet, if you look under the covers, they have a high degree of homelessness, and even in Germany there are 200,000 people who can’t read or write. Basically, with that number, you can have 100 billionaires while everybody else lives on the street, and the US will still have the highest GDP. GDP is a statistical central measure that doesn’t reflect the actual society. And it’s the same when Facebook reduces me to one score, or says, “oh, they’re African American, therefore they like this,” as if every African American behaves the same way. It’s like, “you reduce me to these five numbers and then wonder why you’re discriminating against me, or recommending me inappropriate apps.”
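
Bernd’s point about central measures is easy to show with toy numbers: two economies with the same total output, the same “GDP,” can describe completely different societies, and the aggregate hides it. A sketch with made-up incomes for ten people each:

```python
from statistics import mean, median

# Two toy economies with identical total output but very different
# distributions of income across ten people.
broad  = [50, 55, 60, 65, 70, 75, 80, 85, 90, 370]
skewed = [1, 1, 1, 1, 1, 1, 1, 1, 1, 991]

assert sum(broad) == sum(skewed) == 1000  # identical "GDP"

print(mean(broad), median(broad))    # mean 100, median 72.5
print(mean(skewed), median(skewed))  # mean 100, median 1.0
```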

And so the bigger risk here is them reducing us. If we’re treated this way long enough, we start believing it, right? Many teenagers have low self-esteem based on how they interface with Facebook, how many likes and how many friends they have. Social media has an effect on us as people, on who we are, when it reduces us. In ethics, dignity is one of the factors: respect my dignity as a human, don’t reduce us. Don’t say I’m vermin, like whenever a genocide happens, right, they always talk about people like they’re animals or lowly. Social media does something similar: you’re just being reduced to somebody who buys jeans, or who needs this product. It has an impact on us. And...

Seth Villegas

Maybe this will be a good point to end on: the different kinds of strategies people are taking. On the one hand, there’s the privacy strategy. People will use things like VPNs, AdBlock, uBlock, stuff that strangles the data, right? It makes it harder to capture your profile. But there’s actually this program I’ve been looking at, that I’m interested in, called AdNauseam, which spoofs clicks on ads so that your profile data is useless. What I mean by that is, because it clicks on everything, the tracker doesn’t know what you’re interested in, so it can’t construct a specific profile around you. And that’s a very different thing. I think this even speaks to, you know, people designing anti-surveillance clothes and whatnot: how can you subvert the ways data capture happens through these technologies, and make it so they can’t home in on you the way they could before?
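
The mechanism behind a tool like AdNauseam can be stated in one line: if you click on everything, your click distribution stops carrying information about you. A toy sketch of that intuition using Shannon entropy over a hypothetical click log (this is illustrative only, not AdNauseam’s actual code):

```python
import math
from collections import Counter

def entropy(clicks: list[str]) -> float:
    """Shannon entropy of a click log in bits: the higher it is, the less
    the distribution over ad categories reveals about the person."""
    counts = Counter(clicks)
    total = len(clicks)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

real  = ["shoes"] * 8 + ["camping"] * 2          # focused interests: easy to profile
noisy = real + ["cars", "insurance", "diapers",  # click everything, AdNauseam-style
                "golf", "crypto", "pets", "yarn", "flights"]

print(entropy(real))   # ~0.72 bits: strong signal
print(entropy(noisy))  # ~2.7 bits: signal drowned in noise
```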

Bernd Dürrwächter

I want to add a comment, and this is actually something Shoshana Zuboff speaks to deep in her book: these are practical measures, right? And they take us back to what some would call the cops-and-robbers game, or, what do you call it, the Cold War, an arms race. And there’s a huge risk; they’ve already started doing this, where you kind of screw yourself. You can pass a law that makes it a crime to try to escape surveillance, just like early encryption was considered a crime. Like Phil Zimmermann, the whole case in the 90s. He wrote open-source encryption and gave the code away, and that made him a criminal under US law. It was a crime to export encryption, it was considered military technology, and he broadcast it worldwide; therefore he was exporting US military technology, even though he invented it. So it would be very easy to outlaw all this guerrilla warfare. And some of the big internet companies are actually trying to frame it that way: “you must have something to hide, you must be a criminal if you try to obscure it.” Instead of “oh, you’re signaling that you want more privacy,” it’s “we take that as an arms-race challenge, we’ll outsmart you with technology.” So as much as I understand these activists, and I hope a lot of that was just to raise awareness, it can’t be a long-term solution that everybody defends their own house with a shotgun when we should be having an effective police force, right?

Seth Villegas

Right. And that, again, raises the larger question. It’s always going to be a combination of technology and society, right, things that people are doing, because the last thing I want to be involved in is some sort of technical arms race between, say, anti-surveillance technology and surveillance technology. That just seems like an endless cycle that’s going to take up a lot of time.

(58:07) Thank you for listening to this conversation with Bernd Dürrwächter. You can find more information about DigEthix on our website, DigEthix.org and more information about our sponsoring organization, the Center for Mind and Culture, at mindandculture.org. If you’d like to respond to this episode, you can email us at DigEthix@mindandculture.org or you can find us on Facebook and Twitter, @DigEthix and on Instagram @DigEthixFuture.

As we close out this episode, I think it’d be great to reflect on how increasingly dependent we are on sophisticated automated systems for many of the services we use today. One of the areas where I think we’re least informed is the way our interactions with these automated systems are fed into machine learning processes that train yet further bot systems. As we talked about in this episode, it seems a little bit nefarious that our use of CAPTCHA systems is used to train those same systems, which leads to more sophisticated CAPTCHA systems.
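
That training loop is worth making concrete. The exact pipeline behind systems like reCAPTCHA is not public, but a plausible sketch of the idea, human answers aggregated by consensus into labels for a vision model, might look like this (all data and names below are hypothetical):

```python
from collections import Counter

# Each unlabeled image is shown to many users alongside known challenges;
# their answers become training labels once enough of them agree.
answers: dict[str, list[str]] = {
    "img_041": ["traffic light", "traffic light", "traffic light", "tree"],
    "img_042": ["bus", "truck", "bus", "truck"],
}

def consensus_label(votes: list[str], threshold: float = 0.7) -> str | None:
    """Return the majority answer if it clears the agreement threshold."""
    label, count = Counter(votes).most_common(1)[0]
    return label if count / len(votes) >= threshold else None

training_set = {img: lbl for img, votes in answers.items()
                if (lbl := consensus_label(votes)) is not None}
print(training_set)  # {'img_041': 'traffic light'}; img_042 is too contested
```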

And I think this speaks to where we ended the episode, my conversation with Bernd: how there might be a kind of digital arms race between the security systems meant to keep bots out and the lengths we have to go to to prove that we are who we say we are. As these systems gain an increasing amount of reach, it seems like the verification process is going to be increasingly automated in ways that might not always be helpful. While I’m certainly optimistic about changes and developments in these systems for the better, I think we always have to keep in mind the way direct automation can actually make systems far more rigid than they otherwise would be, and prevent us from getting the help we need when we have a specific personal situation to take care of.

With that in mind, I’d really love to hear from you about:

  • What has your experience been like when you’ve been dealing with these kinds of automated systems?
  • If you go to call a helpline, and you get a bot instead of a regular person, how does that make you feel? What is your immediate reaction to that?
  • Have you been able to get the kinds of services that you need when you go through those processes?
  • Have you ever been unable to access a particular account because of a particularly stubborn CAPTCHA? I know that I’ve definitely had that happen before.
  • Is there anything else we should keep in mind as we become increasingly dependent on these kinds of automated systems?

As always, I’d love to hear from you before our next conversation.

This is Seth signing off.

--

Center for Mind and Culture
DigEthix

Research center innovating creative solutions for social problems in the mind-culture nexus. Powered by a global network of researchers & cutting-edge tools.