The Ethical Dilemma in Intelligent IoT System Design

Snehal Bhatia
7 min read · Dec 28, 2023

By 2025, the world is predicted to have over 21 billion IoT devices. From smart homes and wearable fitness trackers to autonomous vehicles and healthcare monitoring systems, the IoT has become ubiquitous, promising unprecedented convenience and efficiency. This proliferation is not just limited to personal gadgets; it extends to industrial applications, agriculture, and smart cities, creating an intricate web of interconnected devices that communicate and share data.

What distinguishes today’s IoT landscape is the widespread integration of artificial intelligence (AI) technologies, propelling these devices beyond mere data collectors to intelligent entities capable of autonomous decision-making. The emergence of fields like Cognitive IoT (CIoT) and Social IoT (SIoT) underscores the shift toward intelligent systems that not only process data but also learn, predict, and engage with other connected entities autonomously.

Image generated using DALL·E

Needless to say, this pervasive connectivity comes with its share of security and privacy risks. According to Palo Alto Networks, more than 50% of network-connected devices are vulnerable to medium or severe security attacks, underscoring the pressing need for robust security measures.

However, achieving ethical behaviour in IoT systems goes beyond classical security and privacy concerns in technology, by also encompassing philosophy, law and governance. IoT systems interface between the digital and societal spheres, impacting our day-to-day lives like no other technology does. What’s more, the tools and skills required to build AI into applications have never been more accessible to developers — enabling the release of intelligent and autonomous IoT products for consumers with unprecedented speed. This emphasises the critical need for thoughtful and fair design principles in this era of intelligent IoT.

The increasing adoption of self-driven vehicles is a case where the above concerns are particularly evident. Autonomous vehicles have enormous potential to ease our lives and reduce road accidents; however, in the case of an impending collision, should the vehicle swerve to save the pedestrians about to be hit, or instead save its own passenger at the cost of others? Such moral dilemmas raise complex questions about user autonomy, accountability, and the societal impact of these interconnected technologies.

In this article, I will introduce how such ethical dilemmas are bound to arise in IoT systems across various industries and day-to-day scenarios, with examples. I will then highlight the underlying themes of these problems, which will help us understand the potential solutions. These solutions will be discussed in detail in Part 2 of the blog.

Ethical Quandaries Across Industries: Examining Real-World Examples

You can also watch my talk on this topic at MongoDB .live 2021 Conference on YouTube

IoT Systems Revolutionising Healthcare:

IoT medical devices are playing an increasingly critical role in health-related domains, ranging from personal fitness trackers that analyse our activity, heart rate, and more to offer personalised diet and exercise regimes, to remote surgeries (or ‘telesurgeries’), where doctors perform surgical operations without being in the same physical location as the patient. Some examples of ethical concerns in healthcare are:

  • Personal and Informational Privacy: Patients rightfully expect not to be continually observed by their healthcare monitoring devices, particularly in sensitive environments like elderly nursing homes. Striking a delicate balance between privacy and the necessity of continuous observations for patient well-being becomes a critical ethical consideration. Moreover, users should have agency over the sharing of their health data, as its unintended use could result in denied health insurance or treatment based on lifestyle choices.
  • The Risk of Non-professional Care: The advent of IoT-controlled robots in healthcare introduces the risk of non-professional care. While technologies like the Da Vinci Robotic Surgical System aim to democratise access to complex surgeries, the potential for patient injuries raises questions about accountability and liability.
  • Algorithmic Diagnoses: The use of AI algorithms for medical diagnosis, such as estimating the likelihood of a patient having a disease, is becoming increasingly common. However, these diagnoses are only as good as the dataset and the design of the program. Overly cautious algorithms may flag too many people as having a particular disease, causing unnecessary further tests, wasted medical resources, undue stress for patients, and more. On the other hand, consider software for detecting a rare disease that is trained on a dataset where 99% of the people are disease-free: it could well declare almost all of the patients it tests as disease-free too, which is a harmful prediction.
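
The rare-disease pitfall is easy to demonstrate with synthetic numbers: a classifier that simply labels everyone as disease-free scores 99% accuracy on a 99%-healthy cohort while missing every actual patient. A minimal sketch (the cohort sizes are invented for illustration):

```python
# Base-rate problem: a naive "always healthy" classifier looks excellent on
# accuracy alone when the disease is rare.

def evaluate(predictions, labels):
    """Return accuracy and recall (sensitivity) for binary labels (1 = diseased)."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    true_pos = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
    actual_pos = sum(labels)
    accuracy = correct / len(labels)
    recall = true_pos / actual_pos if actual_pos else 0.0
    return accuracy, recall

# Synthetic cohort: 990 healthy patients, 10 diseased.
labels = [0] * 990 + [1] * 10

# A classifier that learned the shortcut "everyone is healthy".
predictions = [0] * 1000

accuracy, recall = evaluate(predictions, labels)
print(f"accuracy = {accuracy:.1%}, recall = {recall:.1%}")
# accuracy = 99.0%, recall = 0.0% — it misses every diseased patient
```

This is why metrics like recall, and not headline accuracy, matter when evaluating diagnostic algorithms on imbalanced data.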

Ethical Crossroads of Autonomous Vehicles:

In the realm of autonomous vehicles, the integration of connected technologies opens the door to potential vulnerabilities, as exemplified by the infamous case of a hacked Jeep being remotely controlled on the highway, leaving the passenger helpless. While the promise of autonomous vehicles lies in their potential to reduce road accidents stemming from human error, a myriad of ethical concerns surfaces.

  • Automating Driving Ethics: For instance, imagine a scenario where the vehicle detects its passenger experiencing a heart attack. Should the vehicle prioritise rushing the passenger to the hospital, potentially violating traffic laws and risking accidents on a busy road?
  • Responsibility Allocation: Determining responsibility in such situations becomes a challenging question. Does it lie with the passenger, the vehicle manufacturer, or the regulatory authorities who failed to put appropriate frameworks in place to avoid such scenarios? And in situations where an accident is caused by a human driver’s error and the autonomous vehicle fails to respond adequately, the ethical complexity deepens.
  • Moral choices are not universal: Studies conducted by MIT show that the moral decisions guiding drivers vary by country. For example, in a situation where a collision could result in the unfortunate outcome of harm to either pedestrians or passengers, individuals from more affluent nations with robust institutions demonstrated a lower inclination to prioritise sparing a pedestrian who illegally entered traffic.

“People who think about machine ethics make it sound like you can come up with a perfect set of rules for robots, and what we show here with data is that there are no universal rules,”

— Iyad Rahwan (Computer Scientist at the Massachusetts Institute of Technology)

Ethical Threads in Social and Assistive IoT

Assistive devices for social aid, psychological care, and similar purposes are becoming increasingly common. They can engage in conversations with humans, exchange emotions, and establish natural connections with users. Ethical concerns are bound to arise in this domain, particularly when children or elderly users are involved.

  • Caregiving Challenges: In scenarios where these robots take on caregiving roles, ethical concerns surface when malfunctions lead to harm. Consider a device meant to remind a patient to take their medicine and verify that they have done so. A malfunction of such a device could cause serious harm and could even prove fatal. And if a patient refuses their medicine despite the device’s reminders, the question of whether a robot should override a user’s freedom of choice in executing its caregiving responsibilities adds another layer of ethical complexity to this evolving landscape.
  • Security Threats and Manipulation: If malicious actors gain control of devices providing medical or psychological assistance, they can misuse this control for manipulation of users, compromising their well-being and trust in technology.
  • Device Dependency: Users may form emotional bonds with smart devices that learn and adapt to individual preferences. The prospect of device damage or discontinued manufacturer support raises concerns about the impact on users, especially in scenarios where these devices become integral to daily life.

Unveiling Ethical Undercurrents in IoT Dilemmas

As we dissect the varied examples of ethical dilemmas within IoT applications, certain overarching themes emerge that point to the core problems shared across these diverse domains:

Algorithmic Bias: Navigating the Labyrinth of Unconscious Prejudices

  • Origins of Bias: Algorithmic bias within autonomous systems may stem from developers’ beliefs, training data, design oversights, or malicious intent.
  • Human-Facing Devices: Notably evident in devices interacting with humans, such as facial recognition doors or voice-activated systems, where bias can lead to discriminatory behaviour.
  • Positive Bias Dilemma: Algorithms may also be biased deliberately for positive purposes, such as encouraging sustainability; the ethical challenge lies in discerning the line between promoting the greater good and causing harm to a few.
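
One way to make such bias visible is to audit a deployed system’s decisions for demographic parity, i.e. whether acceptance rates differ markedly across user groups. A hypothetical sketch of such an audit for a face-recognition door (the decision log and group labels are synthetic, purely for illustration):

```python
# Audit a log of (group, accepted) decisions for demographic parity:
# roughly equal acceptance rates across groups.

from collections import defaultdict

def acceptance_rates(decisions):
    """Map each group to its acceptance rate, given (group, accepted) pairs."""
    totals = defaultdict(int)
    accepted = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            accepted[group] += 1
    return {g: accepted[g] / totals[g] for g in totals}

# Synthetic log of door-entry decisions.
log = ([("group_a", True)] * 90 + [("group_a", False)] * 10
       + [("group_b", True)] * 60 + [("group_b", False)] * 40)

rates = acceptance_rates(log)
gap = max(rates.values()) - min(rates.values())
print(rates)                      # group_a is admitted far more often
print(f"parity gap = {gap:.0%}")  # a large gap is a signal worth investigating
```

A parity gap alone does not prove discrimination, but it flags where a closer look at training data and design choices is warranted.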

Cooperative IoT: Navigating Trust in a Network of Connectivity

  • Scope of Cooperation: Taking autonomous vehicles as an example, the scope of cooperative IoT extends beyond peer-to-peer (vehicle-to-vehicle) communication to interactions with various entities like cyclists, pedestrians, and physical road signs. This further complicates the balance that needs to be achieved.
  • Malicious Threats: The interconnected nature of IoT introduces the risk of malicious activities, such as fake messages directing vehicles onto the same route to deliberately cause congestion, or manipulated parking-lot sensors falsely reporting that no spaces are available.
  • Ensuring Data Integrity: In a cooperative scenario, it’s crucial to establish methods ensuring the truthful exchange of data between IoT devices, making data immutable where applicable and preventing misuse beyond the intended purpose.
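
As a taste of what “truthful exchange of data” can mean in practice, one simple building block is to authenticate every message with an HMAC so that tampering is detectable. The sketch below uses a single shared key for brevity; real vehicle-to-vehicle systems rely on certificate-based signatures rather than shared secrets, and all key and field names here are illustrative:

```python
# Sign each inter-device message with an HMAC over a shared key, and reject
# anything whose tag fails verification.

import hashlib
import hmac
import json

SHARED_KEY = b"demo-key-not-for-production"

def sign(message: dict) -> dict:
    """Wrap a message with an authentication tag."""
    payload = json.dumps(message, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": message, "tag": tag}

def verify(envelope: dict) -> bool:
    """Recompute the tag and compare in constant time."""
    payload = json.dumps(envelope["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["tag"])

envelope = sign({"sender": "vehicle-42", "event": "congestion", "road": "A1"})
assert verify(envelope)                      # untampered message passes

envelope["payload"]["event"] = "road_clear"  # an attacker alters the claim...
assert not verify(envelope)                  # ...and verification fails
```

Integrity checks like this stop silent tampering, but they cannot stop a legitimately keyed device from lying, which is why cooperative IoT also needs reputation and governance mechanisms.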

User Choice and Freedom: Striking the Balance in Autonomy

  • User Data Privacy Laws Need to be Designed for the Use Case: Users should have control over the use of the personal data being collected, and it should be stipulated that data collection and usage align with the original purpose. However, ethical concerns arise in scenarios like child location-tracking devices, where the device may have to override a child’s desire not to be tracked so that parents can ensure their safety.
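
Purpose limitation of this kind can be enforced mechanically: every read of collected data declares a purpose, which is checked against the user’s recorded consent. A minimal sketch (the data types and purposes are invented for illustration); note that the hard ethical cases, like a guardian override for child tracking, are precisely the exceptions such a check cannot decide on its own:

```python
# Purpose-limitation check: a data read succeeds only if the user consented
# to that data type being used for that purpose.

CONSENT = {
    "location": {"navigation"},          # consented for navigation only
    "heart_rate": {"fitness_insights"},  # not for, say, insurance scoring
}

def access_allowed(data_type: str, purpose: str) -> bool:
    """Return True only if this use matches the recorded consent."""
    return purpose in CONSENT.get(data_type, set())

assert access_allowed("location", "navigation")               # original purpose
assert not access_allowed("heart_rate", "insurance_scoring")  # secondary use blocked
assert not access_allowed("microphone", "ads")                # never consented
```

The check is trivial; the ethics live in who writes the consent table and under what circumstances it may be overridden.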

Part 2: Towards an Equitable Solution

As we navigate this intricate tapestry, a holistic approach encompassing rule-based ethics, transparency, social contracts, and education is pivotal.

If you are interested in how to tackle these challenges by developing a multi-faceted ethical framework for IoT design, read Part 2 of this blog series, ‘The Ethical Dilemma in IoT Design — And How to Address It’.


Snehal Bhatia

A Solutions Architect passionate about everything data, ethical and equitable use of technology, and the implications of modern tech on our society and planet.