Oxford professor debates tech entrepreneur about surveillance and big data

Mimi Nguyen · Published in manasearch
Aug 16, 2019 · 29 min read

With our new podcast "Searching for Mana" being released in the very near future, we thought it would be fun to bring back some highlights from our old podcast, Host, to give you a taster of what's to come. In this episode, our hosts Lloyd and Sneha talk to tech entrepreneur Alec McCarter and Oxford professor Luciano Floridi about everything from big data to government surveillance. Check out the highlights below to see their differing opinions on whether the government should be watching everything we do.

Here is the transcript from the full episode, enjoy!

[00:00:14] Lloyd: Welcome to Host. In this series we'll be focusing on tech innovation in finance: fintech. This is the series where we meet the people shaping the future of finance. I'm Lloyd Wahed and I'm a headhunter. I'm privileged to spend my days with some of the most innovative and in-demand minds in tech and finance, from unicorn companies that ride in to fix our everyday problems to financial disruptors. In this show you'll hear from the founders themselves and hopefully pick up some awesome ideas for where you can invest and win big in the future. The future of tech belongs to the consumer. So listen up, hit subscribe and get your weekly dose of Host.

[00:00:53] Lloyd: I hope you're sitting comfortably, because today we have two incredible guests hashing out the finer points of information and cyber ethics, and of digital careers in the face of automation. We're joined by Professor Luciano Floridi, a world-renowned scholar and professor of philosophy and ethics of information at the University of Oxford, and Alec McCarter, entrepreneur and algorithm whisperer. Welcome, guys.

[00:01:25] Sneha: So let's get started on the questions. I'm going to begin with Luciano and ask him a few questions about his background as an academic. Luciano, your work is highly regarded by scholars and non-scholars alike. What exactly drove your interest towards the philosophy and ethics of information?

[00:01:46] Luciano: So I grew up as a very, very boring philosopher. The worst kind, the kind that works on things that nobody cares about, and at some point I really got bored with myself. You know, you remember when you were younger and enthusiastic, and you thought philosophy is what changes the world, what makes me understand the world, what gives meaning to life? And then all of a sudden you're working on something that nobody cares about at all, not even your mom. And I thought, when I woke up, what is really driving my passion? I had always been attracted to how things work inside; I grew up as a logician, basically, so halfway between philosophy and computer science. And then I thought, well, there is more to do for philosophy today than just writing papers for some specialized journals in areas that, as I said, are really of no interest to anyone. And I had quantitative data to show that no one cares, just in case. The point became what to do, and you start looking around. I had been struggling with this topic of a philosophy of knowledge where you're interested really in the stuff rather than in the people, and I came across an article by Karl Popper called "Epistemology Without a Knowing Subject", basically to do with knowledge without taking care of the agent, just the knowledge itself. And I realized I would call that, really, information. So that was the passion that I had, and I thought, well, I can join the two things, my philosophical training and my passion, into a single block. And I remember the aha moment: I had to give a talk in London and thought, maybe I can call this the philosophy of information; it doesn't seem to be too bad as a label.

[00:03:48] Sneha: So I'm going to move on to a few questions for Alec. Welcome to the podcast. I'd like to open up with your entrepreneurial background. You're a serial entrepreneur, your first company being Riva. Could you talk us through where that entrepreneurship took you, and what ignited that spark right out of school?

[00:04:11] Alec: For me, it started quite early in my life, looking around and asking: who are the people I look up to? Who are the people having an impact in the world, and how have they been able to put themselves in a position to have that platform? It seemed there were two things they often shared. One was technology, and the other was being able to build a big company off the back of it. That was part of the thinking in my head, from around age six, that led me to start programming, and that's what led to me trying to do a startup straight out of school. It didn't work out, but I learned a hell of a lot, and it gave me the confidence to do things a little bit differently the next time. In that one I did manage to bring on board a co-founder and convince him of my vision: we were trying to hyper-accurately locate people wherever they were on the planet, indoors or outdoors. I managed to convince not just him but clients to buy into that vision, and it felt good. That gave me the confidence to go a step further and aim for something bigger than that was ever going to be, something that could in theory be huge. It started as just me; within six months I managed to convince a co-founder to join me, six months after that we got a couple of people on board, and within a year or so we had enough traction and enough backing from academia, given that I didn't go to university and had no track record, that we were able to raise a decent amount of money and bring on some really, really talented people. That's been the journey: trying to take what I love, which is programming, and scale it beyond me.

[00:05:57] Lloyd: What's really interesting there is that you went to a really great school and you learnt programming skills early on. And Luciano, I think it's interesting for the listener to hear both perspectives here. At 18, let's say, in the UK education system, you have to decide: do I go the academic route and take a course at a great institution, or do I go and learn by setting up a startup, whether or not it's prosperous to start with? How do you decide what the right next move is for you? What was your thinking there? Because getting Cambridge or Oxford on your CV, talking to great professors, learning ethics is useful for many people, but transactional experience is also useful. What would your advice be to listeners, and how did you make that decision? First, Alec.

[00:06:52] Alec: It was always assumed that I would go to university. My parents thought I was reasonably smart, so it was assumed that I'd go to Oxbridge. What made me start to question this narrative was watching all of my friends go without even questioning it. It looked to me like they were falling into it, and a lot of them went to do courses they didn't seem to have much passion for. I figured my skills as a programmer were sufficiently in demand that, frankly, I didn't need to be too concerned about being able to make ends meet. So why the hell not? Why not try to maximize the rate at which I learn? Why not try and see if I can make some kind of impact?

[00:07:34] Lloyd: That makes sense. So, taking that point on, Luciano: if you were talking to an Alec at age 18 who was going to go for the very risky moonshot path, would there be any encouragement you'd give him to come to Oxbridge and study computer science or philosophy?

[00:07:56] Luciano: It would be difficult. I'll just mention some of the major ingredients that you need to succeed: passion, a vision, ambition. Sometimes those things are just there, and you don't need to make them grow any further. As for education, in our three years in Oxford, often what we are breeding is the golden medium: the pathway lawyer or the very perfect politician. Trust me. So do you really need a degree, in theology say, to be an entrepreneur? If you ask me, probably not. But let me highlight something else: university is also a place where you build your network, which is not to be underestimated, especially if you are not a bad politician.

[00:08:48] Lloyd: That leads quite nicely onto a big topic we're all talking about: the automation of talent. For me, what needs to be looked at is which subjects universities are acquiring talent in and then sending out into the field. What are the courses that, beyond being prestigious and building up a network, we should be looking at, where there is an abundance of talent that could be pivoted into engineering, computer science, philosophy, whatever it might be?

[00:09:21] Luciano: If I were recommending a particular kind of education to someone, my recommendation would be, first of all, to look very carefully inside yourself and check what your passion is. This sounds trivial. It is trivial, and it's so damn true. And trust me, people don't know. Most people have no idea what a real passion is. They think they have a passion for something because society tells them they should have a passion for it. Disentangling what society expects from who I am and what I really, really want, looking in the mirror and saying: that's me, good and bad, and that's what I'm not, my strengths, my weaknesses. That's the first step. The second step is to learn the languages spoken by information today. So instead of learning facts, which are so abundant today, learn the languages in which the facts speak, and then you are your own master.

[00:10:18] Alec: In terms of people choosing what they want to do, at least among my group of friends, there's a new challenge emerging. People have to be more forward-thinking than they may have had to be previously. They've got to look not just at the state of the job market right now, but at where they think it's going over the next 10 years, 20 years, and so on. There was a recent tweet by Andrew Ng of an email someone had sent him, which said: "Dear Andrew, I am four years into specialist radiology training. Should I quit and do something else? How close are radiologists to being replaced?" Andrew Ng, of course, is a famous AI researcher, and this is a real problem that a lot of my friends face. They don't know which fields they should be exploring, because they don't know which ones are going to be deprecated. At some point there's going to be a huge step change, when we start seeing, say, driverless cars, and this is going to become more and more all-consuming. People are scared because they're basically betting their life on something, but they don't have the understanding or the context to know what that landscape is ultimately going to be.

[00:11:26] Lloyd: Yeah, everyone's grappling with that right now. In America the number one profession is driving, isn't it? And in the next several years it's easy to see that changing: Tesla has just unveiled a prototype of the driverless long-haul lorries they're going to be bringing out, and with Uber it's not hard to see, in certain ecosystems, Silicon Valley to start with, then other Western countries, where that's going to become automated. And if you go into traditionally academic vocations like law, I think an AI did 10,000 hours' worth of legal work in one hour in the last couple of months. So the toughest thing is deciding what to invest your time in. I come back to the idea that it's for education to change the syllabus and teach people how to upskill, because I think the future is going to run more on a consulting, upskilling basis, and a degree that gives you 40 years of well-paid work is just not the ecosystem people are living in anymore. So there are companies, maybe Kodacity, and some good American examples as well, that people need to be educated to buy into.

[00:12:48] Sneha: What's really interesting here is that we're talking from a very Western-centric point of view. We're not necessarily thinking about what's going to happen to the developing nations of the world when automation really takes hold.

[00:13:04] Alec: I think it's hard to see how it benefits developing nations. Already, a large amount of employment in developing nations is essentially picking up the scraps of work that happen not to be particularly cheap to automate: weaving garments and clothing, et cetera. But as the cost of this technology falls and new technologies like 3D printing come along, there are fewer and fewer of these scraps of work going into the developing world. And we in the West have a bit of a monopoly on creative jobs; almost all creative jobs are in the West, and the more creative the job, the more likely it is to still exist in 40 years' time. We need to think long and hard about how we're going to avoid totally destroying the developing world when capitalism is no longer flowing money there. Luciano, what do you think?

[00:14:03] Luciano: Oh, I agree, especially about the remark on where creativity, not as a phenomenon but as a paid job, sits. Clearly it is a human feature, but where do you get paid to be creative? The northern hemisphere, developed countries, and so on. That is important to keep in mind. At the same time, and I think this is complementary to what we've said so far, there is a way in which the world should go and a way in which the world is going. The way the world is going is towards a polarization of jobs. Imagine a sandwich where the two slices of bread get further and further apart, because what you put in the middle is more and more automation. A massive amount of jobs will be automated. The cheap, not particularly smart, not particularly attractive, boring, repetitive jobs that are too expensive to have a robot do, or just not possible, will be allocated to people in developing countries, gig economy included. And on the other side, the managerial, strategic, creative, soft-touch jobs will be further polarized, also in terms of pay. I think that's the way we are developing at the moment; if you have to predict the next railway station at which this particular train stops, that seems to be it. It shouldn't be that way. It doesn't have to be that way. There is no necessity. History is not physics; there are no fixed laws. We make them; we make this happen. So it's entirely up to us, and it is our responsibility that this polarization is happening. How could we counterbalance it? How could we make sure that these two slices of bread don't move further and further apart? Well, there are social and political decisions. Let me give you an example. We could stop subsidizing agriculture at a level that is ridiculous in Europe, and I'm talking Europe, not the European Union only, because that kind of stuff is produced much more cheaply and much better elsewhere, in developing countries, which we then provide funding for while undercutting what they can export. The justification is that we need to preserve Europe's agricultural independence and autonomy should anything happen. Really? Isn't this a 19th-century sort of mentality? The same happens in the United States. There are huge pockets of value that we are under-exploiting. This polarization is unfortunate, and I'm not terribly optimistic about our ability to reverse the trend, but I'm absolutely certain that we could if we wanted.

[00:17:14] Alec: You mentioned that 19th-century mentality. I think the problem here is bigger than automation. What we're seeing is the Western world getting better and better at extracting value from the developing world and from the lower half of the socio-economic spectrum. A company like Google is an example of this. Google is getting better and better at gaming the minds of the less educated, the lower half of the economic spectrum, in order to optimize its advertising revenues. And what is happening here? I use an ad blocker; a lot of people, like you guys possibly, use ad blockers. My big concern is that companies like Google aren't taking from us; I'm providing almost no money to Google. It's coming from the less educated, whose behavior is being manipulated by companies like Google, and the money is ending up in the pockets of the very wealthy, the elite. I think this is probably a bigger problem than automation, even bigger than automation.

[00:18:22] Luciano: I think that might be a reasonable view; I don't disagree with it. At the same time, remember how all this works in terms of advertising. The profile of someone who earns one dollar a day is pretty useless; the profile of someone who earns a hundred dollars a day, well, that's what I want. If I'm a company and I come to you, Google, and I want to advertise something, you'd better provide me with the profiles of a hundred thousand people who have disposable income, like millions of dollars, so that my new whatever is better advertised and makes a profit, rather than half a million people who can hardly afford a sandwich or a glass of clean water. So I'm not disputing what you've said. I think we are seeing an overall "get from wherever you can get" sort of mentality, and some people are far more fragile and at risk than others. I completely agree on the point of education: exactly where the barriers are lower and easier to overcome is where digital business seems to be most aggressive. Inevitably, for good or bad, and I don't want to build any sort of conspiracy theory, it's just the logic of profit-making. If I start making a little profit in that corner, and by investing more I get more profit, well, guess what? I invest more and I get more profit, until I hit a wall and can't move. Maybe that's a topic for another day, unless we want to explore it now, but again it is a socio-political responsibility to make sure these mechanisms are properly regulated. Complaining about the logic of the mechanism is like thinking: oh my goodness, I cut myself with a knife, and it's the knife's fault because it's too sharp? Really? In what sense? You'll find that we as a society have left, for a couple of decades now, an empty space of policy that has been occupied by digital companies. That's a fault of the digital companies, surely, but also a fault of society for leaving that space completely empty. And from my experience, without mentioning too many names, working at the highest level in this country, in Europe, in Italy, Washington included, I can tell you there's an unwillingness to take responsibility for this public space. The next thing you know, it's occupied by companies, because someone has to take decisions. In that respect, I'm as critical of the socio-political side as I am of the business side. There's a kind of co-responsibility here.

[00:21:14] Lloyd: I just want to really champion this point. Look at the FCA or any of the regulatory bodies in finance: go back 20 years, and your peers from Oxbridge or great schools would have been jumping into trading or hedge funds. Obviously that needed to be regulated, because the system we're living in is "I'm going to take what I can get". You're right, that's the game everybody's playing, being told to play, and getting kudos for. That has changed over the last several years, and the greatest minds, the people who want to get to the top of the pyramid, are now going into technology. So there needs to be either a body championing this or there needs to be regulation. If the parties aren't prepared to put certain clauses around what, let's say, a Facebook or an Amazon or a Google is able to do, then of course these public limited companies are going to try and take all the profit they can from the corners where it's possible to do that. Have you seen any ideas, or are there any ideas, for how we could lobby or get governments to really pay attention to this and put something in place that actually changes the landscape?

[00:22:31] Luciano: We don't have many tools to operate with here. You can have competition, which is always a good idea, and which we don't have anymore. There's only one Amazon, only one Facebook, only one Google, only one Apple, and all of a sudden you find them chopping the world into areas: the mobile phone area, the search area, the social media area, the tweet area. There is no real competition here, no matter what they say. These are de facto monopolies, and they inevitably act as monopolies. If you have a monopoly position in a particular market, guess what, you make the price, and you may be very enlightened, very nice, very kind, but it's going to be very difficult to self-regulate. Which is the second option: to expect these companies to come up with their own self-regulation. That's a lot to ask; it's like asking a smoker to stop smoking. So there's the third thing, namely governments and sovereign national institutions putting rules in place to make sure certain constraints are respected. So: competition, self-regulation, rules of the game. In each case I've seen the beginning of something promising, but it's only the beginning. There's a long way to go.

[00:23:59] Sneha: If we move on to the second topic, which is AI and ethics, I'd like to start by asking what the dangers or red flags of AI are. We can move on to a more positive tone after, but there's always that age-old warning of what happens if it gets into the wrong hands and ends up being used for some nefarious purpose. So what do we think the red flags are, and how do we mitigate them?

[00:24:26] Alec: I'd say there are two problems. The first is the data being collected that feeds the AI. The second is what's being done with it, and I think this is highlighted by contrasting, say, the NSA with, say, Google. What someone like the NSA is doing is mining our information to try to stop, speaking very generally, terrorism. I have no particular qualms with that. The issue is when companies use this data to try and alter our behavior, to sell stuff to us by influencing our lives. That is my biggest concern, and I think we're seeing the very, very tip of the iceberg now. It is already an incredibly unfair playing field: we have potentially incredibly expensive algorithms running across thousands of machines, trying to change someone's behavior to make them buy something. That's already unfair, but as AI gets better and better, we're going to see this taken to a really quite terrifying, Black Mirror degree. I can see us going down this path, and it's scary. I've dealt with that tech; I know the lengths companies go to, and I know how shady some of that stuff is. It's very hard for regulatory bodies to understand this intent, to be able to discover it, and then to connect the dots and work out a general principle that lets us tell who's being malicious and who's genuinely trying to help customers. It's very difficult.

[00:26:00] Luciano: Yeah, I agree, and I think one of the major problems is the control of people's behavior. If you have a massive amount of data, as we do, and amazing automated ways of processing that data and taking decisions automatically, and those data are about people's lives, the temptation to nudge a segment of the population in one direction, or to undermine some kind of policy in another corner, is huge. The strategy we have at the moment amounts to keeping our fingers crossed. But I'm aware that this rests on nothing but the goodwill of the people in question. If tomorrow that goodwill goes elsewhere or changes, or the ownership changes hands, or there are different pressures, there's nothing to stop those companies, or anyone in charge, from causing some kind of real disaster. Real things are happening, and they are troublesome, really scary, as Alec was saying: massive data, control of human behavior. At the same time, we have a world that is increasingly dependent on this sophisticated technology. Remember when BA had that crisis at the airport? I was at the airport; I had to fly to Berlin and was grounded for hours. Why? Because someone had unplugged something from something else: a power cut. Imagine a whole world that depends on AI to make things work, and all of a sudden there's a bug or an attack. All of a sudden we really are in trouble. That's one of the things: dependency. The other is our own flexibility. We are the flexible agents; we are malleable. We could do it, it's possible, we should do it. I don't think we're doing it enough.

[00:27:53] Sneha: That brings us on to the third topic, which is all about surveillance and the ethics of data collection. One of the most important things making AI work is the vast amount of data behind it. So I'd like to open by asking where each of you stands on privacy and the right to be forgotten. Is the answer different for government surveillance? And how do you feel about third-party contractors like Booz Allen Hamilton, Northrop and Darktrace being used to mine and collect your data in order to help the government with security? I'd like to open with Lloyd on this one, actually.

[00:28:41] Lloyd: I don't think you've got a huge amount of choice. If you want to progress advantageously in the ecosystem we're in, you need to sign up to it, so whether I think that's ethically right or wrong is probably redundant. What is happening is that we've got PSD2 coming into force, which is making people aware, for the first time, that their data is going to be sold on by the financial industry. I find that encouraging, and I think it's something that should span different sectors. If we were aware, for instance, that we can give our data out and our bank is going to profit from it, but it's going to share that benefit with us, then at least we both get something out of it happening. If you choose that you don't want that, and you want to, in this day and age, live in a dark corner but be secure, that's your decision to make, and perhaps the wisest one of all. But I just think it's the system we're in. We've all been signing Ts&Cs for years that we probably aren't reading properly. This comes back to the point that, on all of these topics, people need to be made more aware. There should be a body in each sector, including tech, making people aware, but also letting people make decisions on policy, on what's right and what's wrong.

[00:30:17] Alec: The difficult thing for policy, as you touched on, is that big data is used for things that are very helpful to us. I use Google services, I search the web, I have email accounts, I use Siri, and so on. But that is increasingly intertwined with things that are of negative value to me, or that I deem to be of negative value: the harvesting of my data to better target me, but also to help train models so that they can better target other people. For me it's all about the use. I'm fine with them having my data; what I don't like is the intent, and intent is what policy has always really struggled to deal with.

[00:31:03] Luciano: Yeah. I think one of the problems we have, among many others, is that we built a society with every incentive to make personal data interesting to companies. That's called advertising. And at the same time there's a backlash: we don't like this, you should self-regulate. To use a different metaphor, it's like going into a sweet shop and saying everything is free and you can grab anything you want, as much as you want, but don't touch anything. Really? We make this immense added value called advertising available to companies, as long as they provide "free", quote unquote, services to the public and make enormous profits from the advertising, and then we tell them: you shouldn't really touch personal data. Then how am I going to advertise stuff? Please explain. Inevitably that model has a contradiction in it: giving something and then saying, don't use it. I have a rather unreasonable and certainly impractical suggestion: we could either ban advertising online, whoa, or limit the amount of money any company can spend on advertising. Remember that advertising is a kind of Cold War escalation. If I produce a fridge and you produce a fridge, you have to advertise, then I have to advertise; if I advertise more, then you have to advertise even more. And whoever gains is Alec here, who is in the middle, allegedly, with the platform where the advertising takes place. So the analog world is now really in the hands of the digital, because inevitably the analog has to advertise more and more. Take the car industry: the more they spend on advertising, the more they have to spend to make sure you buy that car rather than another one. All of a sudden you start thinking: where is all this money going? And it's not little money; last time I checked, it was the equivalent of the GDP of Sweden every year. That's many hundreds of billions of dollars that could go into something else. It's not exactly the most productive way to employ our intelligence, our technology and our resources.

[00:33:37] Alec: Interesting that you raise banning online advertising, because I suspect that's where we're heading. Targeted advertising is going to get so good as we advance AI, as this technology trickles down from the top 0.1 percent, that is DeepMind, through to everyone else. Advertising is going to get so good at manipulating our behavior and affecting people's lives, getting people to make life decisions they wouldn't otherwise have made. I think it's going to become more and more apparent, we're going to become more and more paranoid, and at some point I think the only line that can really be drawn concretely by policy is around targeted advertising: advertising specific to one person. It feels almost inevitable to me that this is going to happen at some point.

[00:34:38] Sneha: And how do you, Luciano, feel about government surveillance, and the surveillance architecture present in the Western world particularly? Where do you stand on the privacy issue?

[00:34:50] Luciano: Surveillance is part, in my view, of a hysterical reaction that has the counterproductive effect of undermining the very values we allegedly want to defend. We are putting up this defense of a democratic, liberal, decent society by doing what? By digging the grave for that society? By building a surveillance society? So I'm incredibly strongly, remarkably against any idea the current government, the Theresa May 2 government, is pursuing along that line, though the Theresa May 1 government wasn't much better, and previous governments were also quite problematic, with the introduction of ID cards and so on. All of that line is self-defeating. I'm sure they mean well; I'm sure they have the good of the people in mind. But I don't think they've played the next move, which is not just the reaction to terrorism of putting up a surveillance society, but what comes after: once you have that surveillance society, what is left of our own decent world that has not been undermined by terrorism? That is exactly what the terrorist wants: to transform this world into one in which I'm checked by the government, I'm worried about what I'm doing, and I see soldiers everywhere, even alongside the police. Is that the world in which we want to live? We've been here before. We move forward by being more pacifist, more tolerant, more inclusive than the people attacking us. That's the only way we win this battle.

[00:36:24] Alec: I find it hard to connect that with my own experience and how I feel this affects me as an individual. I've been operating under the assumption that I've been monitored for a while: I happen to have a criminal record for hacking quite a large multinational, not Google, when I was 14, so I've assumed I've been on at least some lists. A lot of my friends are very liberal, but I've realized I don't particularly care if my data is used to, I guess, dismiss me as a suspect of terrorism. I don't care if algorithms have my data. I do appreciate even a tiny reduction in my likelihood of death in the next year, and I see that gain outweighing the approximately zero cost of, say, GCHQ having my data. I never quite know what the negative of this surveillance state is for me, as long as the intent is good, and as long as there's a solid framework, a solid structure and process in place, such that the intent is kept good. From what I've seen, and I know people who previously worked at GCHQ, I employ at least one of them, I have a good sense of their intent, and I know they mean well. I struggle to reconcile the panopticon image that people fear, say, the UK turning into, with what I know my friends' intents to be, which is trying to reduce the likelihood of innocent people being blown up.

[00:38:11] Luciano: I agree.

[00:38:11] Alec: And they've been doing a pretty good job. There's a reason why the death toll is so low. If you look at the number of attacks that get through and manage to harm people versus the number that have actually been stopped, that's a solid track record, one that has brought that number down to a hundred from something that would actually have been a lot higher.

[00:38:28] Luciano: What I would agree on is purpose and time limits: what for, and for how long? Once you're not told what for, and you're not told how long for, but it's forever and for whatever purpose it's needed, then I start getting a little bit suspicious.

[00:38:43] Sneha: Do you believe that cyberwar, specifically, exists?

[00:38:48] Luciano: It's just a fact. There are websites, easily available, where you can monitor the number of attacks happening every minute of this conversation, literally. So the question is not whether it's happening; the question is what we are doing about it.

[00:39:01] Alec: I think the complexity of it makes that incredibly difficult. With war as you're describing it, troops on the ground, there are definitions you can use, from the Geneva Conventions and so on. With cyber warfare that's a hell of a lot harder, because, as you say, it can't be reliably attributed, you may not even notice it, and the effects can be very, very subtle. You can't construct sound guidelines and try to get people to sign up to them, because there's no way to check adherence to those guidelines and no way to know if we're being breached. Hacking is so difficult to actually pin down.

[00:39:37] Luciano: Now, in an age in which we basically live by information, there's so much opacity about where the attacks come from, in what forms, what for, and what they really mean to do, because sometimes an attack seems to be meant to do X when it's really about something else. So who is in charge of defending the fragile infrastructure we're building? All of a sudden it's no longer the army; it's the Microsofts of the world. That's why they were complaining recently, for example, saying: look, we don't want to be the first line of defense. Well, the bad news is that they are, and they're not going to get out of that role for a long while. And the sad news is that we're not actually having that conversation, despite the fact that some people, like myself, are pushing hard for it to happen.

[00:40:26] Sneha: Moving on from that, since you mentioned criminality: where do you think the darknet lies in this whole legal and illegal space in the virtual world, and how is it affecting security?

[00:40:42] Luciano: The darknet is here to stay. I don't think we're going to see it disappear. It's like asking: will there ever be towns with no dodgy corners? Well, if you have a town, there are dodgy corners. You can go there at your own risk, or maybe because you want to find the nasty guys.

[00:40:59] Alec: So right now, the only way to infiltrate it is basically by relying on their incompetence. What if they're not incompetent? What if they hadn't made a mistake and they couldn't be infiltrated? Do you think there's a place for policy to come in and find a general rule that, say, stops the spread of child pornography on the darknet, but doesn't hamper the freedoms you think we should aspire to?

[00:41:27] Luciano: On this I'm a bit more cynical than usual. I think you need to make sure that, as happens in organized crime, certain kinds of things don't happen because organized crime itself doesn't want them to happen; they attract too much attention.

[00:41:42] Alec: I think it's quite terrifying letting sections of society proliferate, basically live, there. Originally, what enamored me about the Internet was this liberal ideal that anyone could use it and you could be anyone; you could be a dog and you could still use the Internet. It's a fantastic ideal. But my concern is that it's going to be in the best interests of society to curtail that, and that's not a particularly popular opinion.

[00:42:10] Luciano: I agree that the future will see a much less free-for-all, unregulated Internet. We're going to see something a bit more constrained. It's really up to us to decide where the constraining happens, and in that sense I hope we will go for a slightly more liberal and tolerant environment.

[00:42:32] Alec: I think there will need to be some pushing in this direction for companies to actually start caring about it.

[00:42:37] Sneha: And I think on that note we'll wrap it up. I'd just like to say thank you to all three of our panelists, Alec, Luciano and Lloyd, and thank you for all of your contributions. You've made this a really interesting podcast.

[00:42:56] Luciano: Thank you. Thank you very much for organizing this. Thank you very much.

[00:43:05] Lloyd: Thanks for listening. I hope you enjoyed that wide-ranging conversation and have had your next big idea. Please share this show with like-minded disrupters, investors and trend hunters like yourself. It'd be great if you could take a moment to rate or review the show on iTunes, and you can find me, Lloyd Wahed, or Host on Facebook, Twitter, LinkedIn, or at hostjob.co [UPDATE: the link is now manasearch.co.uk]

Originally published at https://manasearch.co.uk on July 26, 2019.


Mimi Nguyen, manasearch
PhD fellow at Imperial College London & Royal College of Art | Design Engineering, Tech & Finance | Virtual Collaboration