Privacy Talk with Hussein Dia, Professor of Civil Engineering, Department of Civil and Construction Engineering, Swinburne University of Technology: How should we design future ethical models in the autonomous mobility future?

Kohei Kurihara
Published in Privacy Talk
Apr 22, 2024 · 7 min read

“This interview, recorded on 27 March 2024, covers future mobility and ethics.”

Kohei had a great time discussing future mobility and ethics.

  • What do we have to consider about privacy and security for ethical autonomous vehicles?
  • How should we design future ethical models in the autonomous mobility future?
  • Message to listeners

Hussein: Very good. That’s a very important topic as well, and it will eventually determine the success or failure of these systems in the long run. So there are probably three sets, or three levels, of concerns about AI self-driving from an ethics point of view.

As you know, these self-driving algorithms are trained on data, and they need a lot of data from real-world situations to understand, you know, this is a passenger, this is a vehicle, this is a bus, this is a kangaroo crossing the road, etc.

So the first level of concern is bias in the performance and behavior, which can result from using data that doesn’t reflect the broader population. An example we have seen in self-driving vehicles is when the software is not very well trained to recognize children or people of color.

If the data set does not include a diverse set of images or videos to train the model on, it might not recognize certain instances or certain categories of people, and it might end up in a crash or in hitting a pedestrian or a child.
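As a rough illustration of that point, here is a minimal sketch in Python (with hypothetical group labels and made-up results, not data from any real system) of the kind of per-group check a developer can run: if detection recall for one category of people is much lower than for another, the training data likely under-represents that group.

```python
from collections import defaultdict

# Hypothetical evaluation records: (group label, whether the pedestrian was detected).
# The groups and outcomes below are illustrative only.
records = [
    ("adult", True), ("adult", True), ("adult", True), ("adult", False),
    ("child", True), ("child", False), ("child", False), ("child", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [detected, total]
for group, detected in records:
    counts[group][0] += int(detected)
    counts[group][1] += 1

# Report detection recall separately for each group to expose gaps.
for group, (detected, total) in counts.items():
    print(f"{group}: detection recall = {detected / total:.2f} ({detected}/{total})")
```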

The second concern is around fairness. What we mean by that is that because the training data sets are based on historical data, they might not reflect best practices; they might reflect unfair practices from the past, or they could be manipulated or discriminate against certain groups.

The third level of concern is around unethical behavior of the AI self-driving model, when the data it is trained on is distorted with a certain set of morals or leads the model to behave in an unreasonable way.

An example of this is the use of fake data or fake information to train models. And that can result in bias in the performance, as well as in the security and, as you mentioned, the privacy.

So there is a big responsibility on AI developers to avoid such biases, and the best solution is diversity. What we mean by this is diversity in the backgrounds of the people, the researchers themselves, to avoid having every member of the team thinking in the same way.

There also needs to be diversity in the mindset and morality of the people who are developing these systems, and also diversity of data: not relying on only one source of data, but having multiple sources of data.

(Movie: The ethical dilemma of self-driving cars — Patrick Lin)

And finally, because AI is developing very quickly, there are always different solutions within AI and different algorithms. So the best way to overcome this bias is also to use a number of different algorithms: even with the same dataset, you train different algorithms and then choose the best one.
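As one illustration of that idea, here is a minimal sketch (assuming Python with scikit-learn, a synthetic stand-in dataset, and example model choices that are not taken from the interview) of training several different algorithms on the same data and keeping the one that scores best under cross-validation:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a labelled perception dataset (illustrative only).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Several different algorithms trained on the same data.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Score each candidate with 5-fold cross-validation and keep the best.
scores = {
    name: cross_val_score(model, X, y, cv=5).mean()
    for name, model in candidates.items()
}
best = max(scores, key=scores.get)

print(scores)
print("best algorithm on this dataset:", best)
```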

And I think the best example of AI and ethics in autonomous vehicles is the classical problem, the moral question of what the AI should do in traffic situations where things go wrong and there is a risk of unavoidable harm.

So for example, if an autonomous vehicle experiences a system failure and a crash is about to happen, should it continue driving on its course and maybe run into a wall and kill all the passengers, or should it, for example, swerve left or right?

But in that process, it will hit a pedestrian. This is a classical problem that has been discussed in the literature. And MIT did a study on this a few years back and found very interesting results: they identified that, around the world, there are different clusters of opinions on these issues.

(Movie: Moral Machines: How culture changes values)

And this actually has some implications, with certain cultures, for example, valuing the lives of older people at the expense of younger people.

So this makes it very difficult, and at the same time very interesting, for car manufacturers when they want to develop AI self-driving technology and ensure that it meets the expectations of the countries in which the vehicle is going to be driven.

Kohei: That’s a huge challenge, not only for the developers but also for the car makers and manufacturers.

Many kinds of stakeholders have to work together in this movement. You have also been broadcasting very important content on your YouTube channel, and you mentioned that diversification and diversified backgrounds are a key part of creating an ethical approach.

So from your perspective and experience, how should we design future ethical models for our autonomous mobility future, for example through the involvement of many different stakeholders or by discussing different perspectives? Is there anything essential from your point of view?

  • How should we design future ethical models in the autonomous mobility future?

Hussein: So the information and the evidence we have today suggest that these moral considerations have implications that we need to take into account when looking at the design and regulation of self-driving vehicles.

And according to the MIT study, these moral considerations could differ between countries. Now, it may be very challenging to go and, you know, design different AI for different countries; it might not even be realistic.

But nevertheless, I think the important takeaway and consideration here is that, regardless of how rare they are, there will be extreme situations when the vehicle loses control and needs to make a decision.

These ethics principles and decisions should not be dictated by commercial interest. They should not be dictated by the car manufacturer or the developer of the AI algorithms.

What we have been saying is that these decisions need to be agreed beforehand based on societal preferences, based on what people think, rather than what the commercial operators think.

Because there was an incident where a company said that, in the future, its self-driving vehicles are always going to protect the passengers. And you would expect them to do that, right?

Because nobody is going to buy their vehicle if they’re not going to protect the passengers. But these sorts of decisions need to be made by society, and then communicated to the regulators and to the car manufacturers.

So that car manufacturers put in place systems that correspond to the moral preferences of the society, you know, regardless of where they are, whether they are in the east, north, south or west of our planet.

Kohei: Thank you. We should look not only at commercial purposes but also at the societal challenges in this space. That calls for good everyday actions from the different stakeholders together.

So at the end, I would like to ask you for a message for the listeners. Your work is very important, so please share your experiences and your thoughts with them.

  • Message to listeners

Hussein: Very good, very good. I mean, I would leave your listeners with maybe a key message, which is to keep watching this space; it is going to be very exciting to see how AI in transport will develop over the coming years.

There are a lot of innovations and a lot of advances. So AI and other advanced technologies are expected to help us solve persistent problems in transport that so far we haven’t been able to solve.

So there is a range of AI solutions and applications that are available today, and others that are being developed.

They will be very helpful in solving the most challenging problems in transport, including road safety, as we mentioned, reducing the number of crashes and the number of fatalities, as well as, you know, congestion management, improving public transport, and making our infrastructure more reliable.

Like, you know, with predictive maintenance and so forth. And we all agree there will be challenges along the way, such as the challenges around the ethics of AI that we talked about today.

But generally, if AI applications are well planned and implemented, innovations in AI will transform all aspects of future transport, and they will help us achieve sustainable outcomes that will benefit all transport users as well as our planet.

Kohei: Thank you for your very important message. Again, I was very glad to have this conversation with Professor Hussein about his work and future mobility; it’s very important not only for privacy and security experts but also for the industry.

So let’s keep updating and sharing more insights from your work.

Hussein: Fantastic. Thank you so much for the opportunity to share these insights with you.

Kohei: Thank you.

Thank you for reading, and please contact me if you would like to join an interview together.

Privacy Talk is a global community of diverse experts. Contact me on LinkedIn below if we can work together!
