Responsibility of AI Systems

AI & Tech by Nidhika, PhD
3 min read · Sep 7, 2023


#AI #Artificial_Intelligence #Ethics

How do AI systems perform? Who measures their correctness? When something goes wrong, who is responsible for the cause, and who is to blame in the case of a mishap? What if an AI system does not perform as it should? These are the questions that govern the responsibility of AI systems and products, and many of them have no direct answer. Below are some explanations and examples where there is a genuine dilemma about what, and whom, to hold responsible for situations arising primarily from the use of AI.

Imagine that you are using an AI system that was heavily hyped for its outputs. You start to use it, and it raises concerns ranging from minor to fatal. These concerns can include one or more of the following (the list is not exhaustive):

  1. The system uses a device camera that is poorly set up, so in a gun-shooting scenario the target was misidentified. Who is responsible here?
  2. The AI system generates an automated response, perhaps a phone call, and its answer is biased. Who is responsible here?
  3. The product prints its recommendations and reproduces misogyny that it learned from old historical articles. This was not well received, and the reason behind it needs to be analyzed. Who is responsible here?
  4. The AI system is part of an automated weapons system (AWS). A human fed a particular man into the AWS as its target, and the AWS hit a lookalike of that man instead. Who is responsible here?
  5. An automated car company delivers a car that promises correctness and accuracy in driving, and precision in its processes. The autonomous mode takes over for stretches of about 10 seconds at a time (Bartneck [1]) and then hands control back to the driver, and this cycle continues, which makes drivers of these cars relaxed and comfortable while operating them. Still, on one rainy day the car missed a red light: the image recognition algorithm never received the right color of the lights and so failed to detect it. This led to a fatal accident. Now who is to blame: the company, the person in the driving seat, the software maker, or the implementation engineers? (A minimal sketch of the kind of fail safe this scenario calls for follows this list.)
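
To make item 5 concrete, here is a minimal Python sketch of a fail-safe policy: when the classifier cannot determine the light's color confidently (as on a rainy day), the car warns the driver and slows down instead of acting on a guess. The names and the threshold value are hypothetical, for illustration only; they are not from Bartneck [1].

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "red", "green", or "unknown"
    confidence: float  # classifier confidence in [0, 1]

# Assumed value, for illustration only.
CONFIDENCE_THRESHOLD = 0.9

def act_on_traffic_light(detection: Detection) -> str:
    """Fail safe: never act on a low-confidence color reading."""
    if detection.confidence < CONFIDENCE_THRESHOLD:
        # Rainy-day case: warn and slow down instead of guessing.
        return "alert_driver_and_slow_down"
    if detection.label == "red":
        return "stop"
    return "proceed"

# A blurred red light on a rainy day triggers the fail safe, not a guess.
print(act_on_traffic_light(Detection("red", 0.55)))
```

The design choice here is that a low-confidence reading is treated as a distinct, safe-by-default case rather than being rounded to the most likely color, which is exactly the case the car in item 5 missed.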

The decision of whom to blame does not rest in one pair of hands. As per Bartneck [1], there must be a black box that records which instruction was followed at what time. Such a black box can help determine who was responsible for the cause, the company or the human user, and it can also be used to improve the products and systems themselves.
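
As a rough illustration of the idea, here is a minimal sketch of such a black box as an append-only log of timestamped decisions. Bartneck [1] describes the concept, not this implementation; the class, file format, and field names are assumptions made for this example.

```python
import json
import time

class BlackBox:
    """Append-only log of which instruction was followed, by whom, and when."""

    def __init__(self, path: str = "blackbox.log"):
        self.path = path

    def record(self, actor: str, instruction: str, details: dict) -> None:
        entry = {
            "timestamp": time.time(),
            "actor": actor,            # e.g. "software" or "driver"
            "instruction": instruction,
            "details": details,
        }
        # Appending one JSON object per line keeps the log easy to replay later.
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

box = BlackBox()
box.record("software", "hand_back_control", {"reason": "low-confidence light detection"})
box.record("driver", "take_over", {"seconds_since_handover": 0})
```

Replaying such a log after an incident would show whether the software or the human held control at the critical moment, which is precisely what assigning responsibility requires.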

Further, these notions need to be understood at the grassroots level of product development: everyone from the engineers through the project managers to management should know the implications. The company should also bear responsibility for missing such cases while advertising that the system works well. After all, a system should give warnings, stop, or take some other safe action when it faces a situation it cannot handle, yet these systems missed exactly such a crucial case.

Note 1:

This is a review with our own comments and recommendations. The insights are based on Chapter 5 of the recent book on AI ethics by C. Bartneck et al. (2021), and the text has been written in our own words.

Note 2:

The original article is available on the author’s website, nidhikayadav.org.

References

[1] C. Bartneck et al., An Introduction to Ethics in Robotics and AI, SpringerBriefs in Ethics, Springer, 2021. https://doi.org/10.1007/978-3-030-51110-4_2
