Ethical Artificial Intelligence Frameworks — Accountability & responsibility

Law and Ethics in Tech
Published in Brass For Brain · Sep 2, 2023

Accountability is a crucial aspect of ethics, emphasising responsibility and legal liability. In the artificial intelligence industry, it raises the question of who should be held accountable when an AI system (AIS) malfunctions and reparation, redress, restitution, or punishment is required. Currently, with the notable exception of the humanoid robot Sophia, which was granted citizenship by Saudi Arabia in 2017, AI lacks legal personality.

[Image: the humanoid robot Sophia]

According to the OECD, “accountability” refers to the ethical and moral expectation that individuals or organisations be able to explain their decisions and actions, and act to improve outcomes. “Liability” refers to the legal consequences of action or inaction, while “responsibility” encompasses both ethical expectations and the causal links between actors and outcomes. Given these definitions, “accountability” best captures the principle discussed here: the expectation that organisations and individuals ensure the proper functioning of AIS in accordance with their roles and applicable regulations, demonstrated through their actions, decision-making processes, and documentation.

Principle 9 of the Montreal Declaration for a Responsible Development of Artificial Intelligence asserts that only human beings should be held responsible for decisions stemming from recommendations made by AIS and for the actions that follow. Where a decision significantly affects a person’s life, well-being, or reputation, the final decision should be made by a human exercising free and informed judgement. The decision to cause harm or take a life should always be made by humans, and responsibility for such a decision cannot be transferred to an AIS. People who authorise an AIS to commit a crime, or who are negligent in allowing an AIS to engage in unlawful activities, are accountable for those actions. However, when an AIS that is reliable and used as intended nevertheless causes harm, it is unreasonable to blame the people involved in its development or use.

Currently, legal personality is granted only to natural persons and legal entities such as corporations. While some advocate granting AIS legal personality, doing so raises significant challenges. Governments, scholars, and researchers tend to prefer holding human beings accountable for AIS, particularly in the case of narrow AI. Since narrow AI is what predominantly surrounds us today, attributing harms and discrimination to the AI itself would not be logical.

However, as we anticipate the arrival of artificial general intelligence, with human-like capabilities in speech, thought, and action, the situation may change. The principle of “capability caution”, set out in Principle 19 of the Asilomar AI Principles, urges us to avoid strong assumptions about the upper limits of future AI capabilities, since no consensus on those limits exists. Our society should not dismiss the possibility of artificial intelligence one day ruling over humans.

Sources of responsibility/accountability risk

  • use of third-party components: when relying on third-party components, organisations often lose control over development, testing, and maintenance, which can undermine accountability. Accountability extends beyond legal liability and encompasses being answerable in various ways. Purchasing third-party products, including pre-trained models or datasets, introduces potential issues if the product does not meet the organisation’s standards or align with its risk-management processes. Closed-source proprietary solutions further complicate accountability, because assessing potential issues or assigning responsibility becomes challenging. Open-source software, although easier to inspect, still poses challenges: development is dispersed and contributors are only loosely affiliated, so determining who is accountable is hard. The discovery of vulnerabilities or exploits in widely used libraries complicates matters further.
  • automation bias: automation bias is the tendency of people to defer to decisions made by automated systems even when those decisions are incorrect or contradict human judgement. The bias can arise from psychological factors such as perceiving automation as consistent and associating it with intelligence. It can lead to complacency and a lack of critical evaluation of automated decisions, which may produce worse outcomes than human decision-making. Automation bias also poses accountability risks, because with numerous components and limited human involvement it becomes hard to say who is responsible for, and in control of, a faulty decision. Examples include spell checkers changing the meaning of words, pilots trusting autopilot systems without monitoring them, and drivers over-trusting autonomous vehicles. One practical guard is to force human sign-off on high-impact decisions, as sketched after this list.
  • out-of-court judgement: out-of-court judgement poses an accountability risk when decisions with significant consequences for personal liberty are made without legal authority. Systems implementing such decisions, directly or indirectly, can result in someone being banned from multiple establishments (e.g. a shared blacklist covering YouTube, Instagram, casinos, bars, and pubs) on the basis of a single infraction. This creates a parallel system of private law that limits personal liberty and lacks clear accountability. Poor transparency, bias, loss of privacy, and the absence of recourse further increase the risks of these systems. Ethical emerging technologists must be mindful of how their systems could be used to infringe individual liberties outside the boundaries of the law.
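
As a concrete illustration of guarding against automation bias, the sketch below forces a human sign-off whenever an automated recommendation is high-impact or low-confidence, echoing Principle 9 of the Montreal Declaration. It is a minimal, hypothetical Python example: the `Decision` fields, the 0.9 confidence threshold, and the `request_human_review` routine are illustrative assumptions, not prescriptions from any standard.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str         # who or what the decision affects
    recommendation: str  # the automated system's suggested action
    confidence: float    # the system's self-reported confidence (0.0 to 1.0)
    high_impact: bool    # does it affect life, well-being, or reputation?

def request_human_review(decision: Decision) -> str:
    # Hypothetical escalation hook: route to a named, accountable reviewer
    # and log the case so responsibility can later be traced.
    print(f"Escalating '{decision.recommendation}' for {decision.subject} to a human reviewer")
    return "pending human review"

def finalize(decision: Decision) -> str:
    """Return the outcome, forcing human sign-off on high-stakes cases."""
    # Never let the system act alone on high-impact decisions, no matter
    # how confident it claims to be: that is exactly where automation
    # bias does the most damage.
    if decision.high_impact or decision.confidence < 0.9:
        return request_human_review(decision)
    return decision.recommendation

# A 97%-confident model still cannot finalise a loan denial on its own.
loan = Decision("applicant #4521", "deny loan", confidence=0.97, high_impact=True)
print(finalize(loan))
```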

Mitigating measures (industry best practice)

  • governance structure: Governance in an organisation defines the rights and responsibilities of stakeholders and ensures accountability in carrying out tasks. While there isn’t a one-size-fits-all approach, common components of governance include structure, oversight responsibilities, talent and culture, and infrastructure. Proper governance sets forth areas of responsibility, business goals, and reporting obligations for stakeholders, enabling accountability. Building a robust governance structure that promotes ethical communication is crucial for consistently ethical organisations.
  • responsibility assignment matrix (e.g. RACI): To uphold accountability, it is important to clearly define and assign responsibilities to relevant stakeholders throughout the project life-cycle. A responsibility assignment matrix, such as a RACI matrix, maps tasks and milestones to specific roles and their corresponding responsibilities. The matrix identifies four primary responsibilities: responsible (carries out the task), accountable (owns and approves the task), consulted (provides input), and informed (kept up to date). Best practices for creating a RACI matrix include ensuring each task has a responsible role, avoiding assigning too many responsible roles, and obtaining buy-in from stakeholders; a worked example follows this list.
  • policies: Documentation is crucial for accountability in organisations, and policy documentation, such as ethics and conduct policies, cyber-security policies, and data protection policies, plays a vital role. Best practices for writing policies include using clear and concise language, providing examples, and ensuring easy reference. Policies should be distributed to relevant stakeholders and accessible to all personnel, including those with disabilities. Well-written and well-distributed policies mitigate accountability risks by informing stakeholders of their responsibilities, reducing unacceptable behaviour, promoting consistent behaviour, justifying accountability, and enabling fair assignment of blame or liability when incidents occur.
  • document design & auditing process: Documentation plays a crucial role in the design process of data-driven projects. It enables monitoring, identifies success or failure factors, and facilitates continual improvement. Design documentation acts as a source of record, allowing ethical issues to be traced back to the design phase and responsibility to be assigned. Guidelines such as documenting each stage, listing relevant functionality, and seeking feedback from stakeholders ensure comprehensive and clear documentation. Similarly, documenting the auditing process helps apply audits consistently and hold auditors accountable for their findings. Guidelines for audit documentation include documenting each stage, specifying the auditors’ credentials, and recording evidence and potential ethical issues; a minimal record structure is sketched after this list.
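
To make the RACI matrix mentioned above concrete, here is a minimal Python sketch that encodes a matrix for a hypothetical AI project and checks it against the best practices listed: exactly one accountable role per task, at least one responsible role, and not too many responsible roles. The tasks and role names are invented for illustration.

```python
# Hypothetical RACI matrix: each task maps roles to R, A, C, or I.
raci = {
    "Collect training data":   {"Data engineer": "R", "Project lead": "A",
                                "Legal counsel": "C", "Exec sponsor": "I"},
    "Validate model fairness": {"ML engineer": "R", "Ethics officer": "A",
                                "Data engineer": "C", "Project lead": "I"},
    "Deploy to production":    {"ML engineer": "R", "Project lead": "A",
                                "Ethics officer": "C", "Exec sponsor": "I"},
}

def check_raci(matrix):
    """Flag tasks that break common RACI best practices."""
    problems = []
    for task, roles in matrix.items():
        letters = list(roles.values())
        if letters.count("A") != 1:
            problems.append(f"'{task}': needs exactly one Accountable role")
        if letters.count("R") == 0:
            problems.append(f"'{task}': no Responsible role assigned")
        if letters.count("R") > 2:
            problems.append(f"'{task}': too many Responsible roles dilutes ownership")
    return problems

issues = check_raci(raci)
print(issues if issues else "RACI checks passed")
```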
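
Similarly, the audit documentation described in the last bullet can be captured as a structured record. The sketch below is one possible shape, assuming the fields the text calls for (life-cycle stage, auditor credentials, evidence, ethical issues); the field names and sample values are purely illustrative.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AuditRecord:
    stage: str         # which life-cycle stage was audited
    auditor: str       # a named individual, so findings are attributable
    credentials: str   # why this auditor is qualified
    evidence: list     # artefacts reviewed (design docs, test logs, ...)
    ethical_issues: list = field(default_factory=list)
    audited_on: date = field(default_factory=date.today)

record = AuditRecord(
    stage="model validation",
    auditor="J. Doe",
    credentials="certified internal auditor, 5 years auditing ML systems",
    evidence=["design_doc_v3.pdf", "fairness_test_log.csv"],
    ethical_issues=["postcode feature may act as a proxy for protected attributes"],
)
print(record)
```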

Assigning blame to AIS is often a convenient tactic employed by firms to evade accountability and legal liability. However, true accountability should be placed on human beings, and firms and governments should not deflect responsibility by pointing fingers at AIS. To ensure this accountability, it is crucial to establish effective governance structures that empower senior officers, board members, or dedicated committees with decision-making authority regarding artificial intelligence. Without such authoritative bodies, the governance framework would be inadequate in ensuring accountability and might be viewed as a superficial attempt to appear ethically responsible.

To the readers: Would you grant legal personality to an artificial general intelligence? Why or why not? Do you think manufacturers of self-driving automobiles and software makers should be accountable for accidents? If so, what do you think of individual car insurance policies? Share with me your mitigating measures and controls for better accountability and responsibility.


Brass For Brain: a private lab specialising in emerging tech (AI & Blockchain), ensuring ethical practices and promoting responsible innovation. Writer: Sun Gyoo Kang