“Artificial Intelligence (“AI”) system accountability measures and policies” Public Comment & Addendum to the National Telecommunications and Information Administration
In response to:
AI Accountability Policy Request for Comment by the National Telecommunications and Information Administration
Agency/Docket Number:
Docket №230407–0093
This is a response to “Artificial Intelligence (“AI”) system accountability measures and policies” from the National Telecommunications and Information Administration, Docket №230407–0093, submitted by The Guardian Assembly (guardianassembly.org)
RFC Questions Answered: 1, 4, 7, 8, 11, 31, 32
Executive Summary:
This public comment on “Artificial Intelligence (“AI”) system accountability measures and policies” highlights the need to prioritize artificial intelligence accountability, examines the case for shared responsibility across stakeholders, and suggests ways to implement constructive regulation that enhances innovative competitiveness and national security. It also surveys the current state and likely trajectory of AI through the lens of respected researchers, and argues for immediate consideration of supranational, global regulation and monitoring of current or emerging existential-risk AI systems, along with the potential barriers, risks, and ethical considerations involved in a project of that nature. Finally, it proposes a three-tier risk-profile framework that leaves considerable room for innovation in low-risk AI applications while scaling to address the global impacts of advanced AI systems.
AI Accountability Definitions & Objectives:
Objectives in AI Accountability
Why do we need AI systems accountability? Because of the power of artificial intelligence technology, there is an increasing risk of undue harm from systems that either do not operate as intended or that intentionally operate maliciously. AI accountability helps ensure ethical practice in AI development by defining which responsibilities belong to which stakeholders in the AI ecosystem, thereby facilitating positive, beneficial outcomes.
Systemic & Collective Risk Mitigation:
Can AI accountability mechanisms effectively deal with systemic and/or collective risks of harm? (RFC Question 4)
- Collective risk mitigation: By requiring developers to consider, and act against, potential widespread harms during system design and operation, accountability mechanisms can address collective risks. For instance, requiring an AI Impact Assessment or risk profile assessment before deployment can help identify potential collective harms.
- Systemic risk mitigation: Accountability mechanisms can mandate regular monitoring of deployed AI systems, enabling early detection and mitigation of systemic risks, especially for systems deemed to have the potential to become an existential threat.
Purpose of Accountability Mechanisms
What is the purpose of AI accountability mechanisms such as certifications, audits, and assessments? (RFC Question 1)
- Ensuring ethical use of AI: Accountability mechanisms like audits and certifications are used as proactive measures to assure that AI systems are developed and used ethically. This involves ensuring that AI models do not foster discrimination, violate privacy, or create security threats. They ensure compliance with principles like fairness, transparency, and justice that form the bedrock of ethical AI.
- Mitigating harmful impacts: These mechanisms play a significant role in identifying, preventing, and redressing potential harms AI might cause to individuals and society. For instance, impact assessments can predict the potential negative consequences of AI and facilitate the development of mitigation strategies before deployment.
- Promoting trust: By demonstrating that AI systems are reliable, fair, and safe, accountability mechanisms help build trust among the public and users. Certifications, in particular, can serve as public endorsements of a system’s trustworthiness.
- Facilitating oversight: Through these mechanisms, stakeholders, including regulators, can effectively evaluate and influence AI systems. Audits, for instance, provide insights into the operational effectiveness of AI systems and their adherence to regulations.
- Driving quality and innovation: Accountability mechanisms often set high standards for AI development, encouraging companies to strive for excellence and promote innovation.
Relevant Terms for AI Accountability
What are the best definitions of and relationships between AI accountability, assurance, assessments, audits, and other relevant terms? (RFC Question 8)
- AI accountability: This is an umbrella term that denotes the responsibility of AI developers and operators to answer for the impacts of their systems. It includes ethical design, responsible use, and rectification of any harm caused.
- Assurance: This is the act of providing evidence or guarantees that an AI system’s properties or behavior align with certain claims. It builds confidence in an AI system’s reliability, safety, and efficacy.
- Assessment: This involves the formal evaluation of an AI system’s properties or behavior to ascertain its performance, fairness, transparency, and other key factors.
- Audit: This is a systematic examination of an AI system’s design, development, and operation to verify that it meets specific standards or regulations.
- Relationship: These terms are interconnected aspects of AI accountability. Assurance, assessments, and audits are specific mechanisms that organizations can use to achieve and demonstrate accountability.
Focus of AI Accountability Regulation
The nature of AI is broad and its scope is increasing day by day. Well-prioritized regulation that focuses on the most impactful aspects of AI accountability is critical now, because we are defining the foundations of a new technological era that will have far-reaching and irrevocable consequences on a global scale.
Transparency and Explainability:
Transparency in AI is about ensuring that stakeholders have access to relevant information about an AI system. This can include information about the system’s purpose, the data it uses, how it makes decisions, and how it has been tested and validated. Transparency helps to facilitate accountability by enabling stakeholders to understand and assess an AI system’s behavior.
Explainability, on the other hand, refers to the ability to explain the workings and decisions of an AI system in a way that is understandable to humans. This is especially important for complex AI models like neural networks, which can often behave like “black boxes,” making decisions in ways that are not easily interpretable by humans. Explainability mechanisms help to ensure that AI decisions can be interrogated and understood, thereby facilitating accountability.
For the purposes of this comment, it is our stance that explainability should not be a primary focus for early regulation. Because of the evolutionary nature of technologies such as AI, there will be a point past which explainability will be, if not unhelpful, at least largely irrelevant. Especially at the stage at which AI far supersedes human intelligence (which could be within the next ten years, based on judgments from OpenAI CEO Sam Altman and a variety of other futurists and researchers), AI will be capable of explaining its objectives and processes to us (humans), but only insofar as we can comprehend them.
In the interim period, it is likely that AI itself may facilitate its own explainability.
We should not limit systems in the present for the sake of a concept that will be naturally achieved and subsequently surpassed; instead, we should focus on the underlying, foundational concepts that will underpin benevolent AI both now and in the future.
It should, however, be noted that a lack of explainability may indicate a higher risk profile for a system, especially in the context of other potential risk factors. We discuss a proposed tiered risk categorization and related assessment below, in the section Establishing & Testing Risk Profiles.
Shared Responsibility:
To ensure comprehensive accountability, it is essential to consider not just the developers and operators of AI systems, but all parties involved in the AI lifecycle, from data collection and preprocessing to modeling, deployment, and post-deployment monitoring. This concept is often referred to as “shared responsibility”: every stakeholder in the AI lifecycle shares a degree of responsibility for the system’s outcomes. It underscores that accountability in AI is a collective endeavor, necessitating cooperation, open communication, and shared standards among all parties involved.
The actual realization of accountability in AI depends on the broader societal, legal, and regulatory context. It requires robust legal frameworks, ethical guidelines, and regulatory mechanisms to be in place, along with an ongoing dialogue among all stakeholders about what accountability should look like in practice.
Socio-Technical Systems Focus:
While the technical characteristics of an AI model and its relevant data are crucial aspects of accountability, they do not encompass the full range of factors that can influence an AI system’s impacts. For a truly comprehensive approach to AI accountability, it’s necessary to also consider the socio-technical system in which the AI is embedded.
AI technologies do not exist in a vacuum. They are developed by people, deployed within organizations, influenced by market forces, and interact with users and other stakeholders within various societal, cultural, and economic contexts. Each of these elements can influence how an AI system behaves and the impacts it has, making them important considerations for accountability mechanisms.
Including the socio-technical system in AI accountability means considering a wide array of factors. These can range from the organizational processes that guide AI development, to the regulatory environment in which the system operates, to the societal norms and values that the system may challenge or uphold. This can also involve scrutinizing the power dynamics at play, such as who gets to decide how an AI system is developed and used, and who bears the risks and benefits of these decisions.
A focus on socio-technical systems can enhance the effectiveness of AI accountability mechanisms in several ways. Firstly, it can help identify potential indirect and systemic risks that might be overlooked when focusing solely on technical characteristics. For instance, an AI system might technically perform well, but if it’s used in an inappropriate context or without adequate safeguards, it can still result in harmful outcomes.
Secondly, this approach can provide a more holistic picture of how to mitigate risks and harms. It can guide the development of interventions that address not only the AI model itself, but also the practices, structures, and systems that surround it. For instance, it might involve not just tweaking an AI model, but also implementing changes in organizational practices or advocating for policy reforms.
Lastly, considering the socio-technical system can facilitate the shared responsibility for AI accountability. It underscores that accountability isn’t solely the purview of AI developers or operators, but also involves regulators, users, affected communities, and other stakeholders.
For these reasons, we believe that AI accountability mechanisms should indeed feature other aspects of the socio-technical system, including the system in which the AI is embedded. This will require an ongoing effort to better understand the intricate relationships between AI systems and the broader socio-technical context, and to develop accountability measures that effectively address these complexities.
Negative Impacts of Regulation in AI
There are many potential pitfalls of mishandled AI regulations that could amount to a significant and perhaps irrecoverable loss of power and innovative competitiveness for the US (or any country) — which is one reason global digital equity is of such ethical concern.
The Effects of Accountability Mechanisms on Competition, Safety, And Security
Are there ways in which accountability mechanisms are unlikely to further, and might even frustrate, the development of trustworthy AI? Are there accountability mechanisms that unduly impact AI innovation and the competitiveness of U.S. developers? (RFC Question 7)
The development of trustworthy AI and the maintenance of a competitive edge in AI innovation are interlinked, and it’s a delicate balance to strike. On the one hand, accountability mechanisms help ensure AI systems are safe, fair, transparent, and privacy-preserving, contributing to their trustworthiness. On the other hand, overly stringent or complex regulations can stifle innovation and hinder competitiveness, if not properly calibrated.
Here are some ways accountability mechanisms might frustrate the development of trustworthy AI and impact AI innovation and competitiveness:
- Overregulation: Too much regulation can stifle innovation, as developers and companies may fear the consequences of unintentional violations. This can result in a focus on compliance over innovation, limiting creative problem-solving and the exploration of novel AI technologies. The risk of overregulation is especially high when laws are reactionary, created in response to high-profile incidents rather than with a forward-thinking perspective.
- Regulatory Fragmentation: As we have seen in other technological domains like data protection and cryptocurrency, regulatory fragmentation can pose significant challenges. When rules differ between regions or countries, it creates a complex landscape that is difficult for developers to navigate, particularly for those who aim to deploy AI systems globally. This complexity can deter risk-averse innovators and favor those who are willing to circumvent or ignore regulations.
- Lack of Clarity: If regulations are ambiguous or uncertain, they can lead to hesitancy among AI developers. Companies may delay or limit their AI initiatives to avoid potential regulatory pitfalls, or they might expend significant resources trying to interpret and meet unclear regulatory expectations.
- Overemphasis on Punitive Measures: If accountability mechanisms primarily focus on punitive measures for non-compliance, they might incentivize a culture of fear and secrecy around AI development, rather than openness and transparency. This could hamper collaboration, knowledge sharing, and the overall progress of AI research.
- Inadequate Resources for Compliance: Smaller companies and startups may lack the resources to comply with extensive regulations, which could consolidate power in the hands of larger tech companies and hinder competition.
However, this does not mean that AI should be a regulatory-free zone. Instead, regulators need to develop a balanced, comprehensive, and flexible approach. Supranational agreements on standards could greatly ease the regulatory burden on AI developers, providing a common framework for developers to follow regardless of where they are operating.
Learning From Our Mistakes
We can see a clear example of these concepts at play in the finance sector surrounding cryptocurrency and digital assets.
What lessons can be learned from accountability processes and policies in cybersecurity, privacy, finance, or other areas? (RFC Question 11)
In cryptocurrency, many projects have been halted due to fears of regulatory backlash, while others pressed forward regardless of the risks. The lack of clear regulatory guidance can foster an uneven playing field that benefits those willing to gamble on regulatory outcomes.
Many other projects have taken their offerings (and their money) elsewhere entirely, barring US citizens from participation. Even after directly requesting clear rules from the regulating body in this space, the Securities and Exchange Commission (SEC), a major player in cryptocurrency (Coinbase) was still waiting approximately one year later for those clarifications. The result has been a lack of guidance in an otherwise burgeoning sector and a major loss of trust in the US financial and regulatory systems among many members of related businesses and communities.
What we can take away from this is that there must be clear, interpretable guidance surrounding AI regulation that does not simply reward those with a high-risk tolerance for gambling on regulatory outcomes.
A Rewarding and Balanced Approach To Regulation
Balanced Regulations
A few strategies for achieving a balanced approach might include:
- Principles-Based Regulation: Rather than dictating specific technical standards, regulations could outline broad principles that developers must adhere to. This allows for flexibility and innovation while ensuring AI systems uphold key values like fairness, privacy, and transparency.
- Co-Regulation and Self-Regulation: Regulatory authorities can work in tandem with AI developers, researchers, and civil society to craft rules. In some instances, industry self-regulation, guided by government standards, could offer a more adaptable approach.
- Regulatory Sandboxes: These allow developers to test AI systems under regulatory supervision without fear of punishment. They offer a safe space for innovation while ensuring regulators can understand and manage risks.
- Incentivize Transparency and Cooperation: Instead of focusing purely on punitive measures, regulators could provide incentives for companies that demonstrate transparency and cooperate proactively with regulatory bodies.
Balancing the need for accountability and the drive for innovation is a significant challenge. However, with careful, flexible, and forward-thinking approaches, it is possible to encourage the development of AI that is both trustworthy and cutting-edge.
Meaningful Incentives For AI Accountability
What kinds of incentives should government explore to promote the use of AI accountability measures? (RFC Question 32)
Encouraging the use of AI accountability measures through incentives rather than punitive measures is indeed a more proactive and constructive approach. Here are a few strategies that could be effective:
- Research and Development Tax Credits: Governments could offer tax credits for organizations that invest in research and development related to AI accountability. This could include investments in explainability, fairness, robustness, and privacy-preserving technologies, as well as auditing and third-party certification.
- Grants and Funding: Direct funding could be provided to projects that are working on advancing AI accountability, transparency, and ethics. This could be especially helpful for academia and non-profit organizations that may lack the resources of larger corporations.
- Public Recognition: Publicly recognizing organizations that are leading in implementing strong AI accountability measures can incentivize others to follow suit. This could take the form of awards, public endorsements, or inclusion in a publicly-available list of responsible AI organizations.
- Procurement Policies: Governments could include AI accountability measures as a requirement in their procurement policies for AI systems. This would not only promote the use of such measures but also demonstrate the government’s commitment to responsible AI use.
- Regulatory Sandboxes: Governments could provide a controlled environment where AI developers can test new technologies under a relaxed regulatory regime but with strong oversight. These ‘sandboxes’ could incentivize innovation in AI accountability measures by lowering the risk and cost of experimentation.
- Public-Private Partnerships: Governments can partner with industry and academia to advance AI accountability measures. These partnerships can share resources, knowledge, and best practices to accelerate progress.
- Standards and Certification: Governments could work with standards bodies to develop certifications for responsible AI. Businesses could then use these certifications as a mark of quality and responsibility, which could give them a competitive edge.
- Training and Education Incentives: Governments could provide incentives for organizations to train their staff in responsible AI practices. This could take the form of tax credits, subsidies, or grants.
All of these incentives should ideally be designed to be accessible to organizations of all sizes, including startups and smaller businesses, not just larger corporations. It is also essential that these incentives not merely reward box-ticking implementation of AI accountability measures, but encourage genuine commitment and continuous improvement in AI ethics and responsibility.
Government Funding To Advance a Strong AI Accountability Ecosystem
What specific activities should government fund to advance a strong AI accountability ecosystem? (RFC Question 31)
Government can fund several specific activities to advance a strong AI accountability ecosystem:
- Trusted Infrastructure: Investment in secure, robust, and interoperable data infrastructure that can facilitate information sharing among different stakeholders.
- Analysis Infrastructure: Developing sophisticated AI-enabled tools and platforms that can assist in auditing and monitoring AI systems.
- Research and Development: Funding research into AI safety, ethics, and accountability, including public-private partnerships.
- Education and Training: Supporting programs to train AI auditors and researchers, as well as initiatives to educate the public about AI accountability.
- Policy and Legislation: Developing comprehensive and flexible legislation that can adapt to fast-paced AI advancements.
These activities should be designed to be compatible with emerging technologies, such as 6G networks and quantum computing, which are likely to play a significant role in enabling more powerful AI systems.
Establishing & Testing Risk Profiles
The risks posed by AI systems are not all the same, and ensuring that overbearing regulation does not reach, and subsequently suffocate, low-risk projects is critical to the success of any regulation on AI accountability.
Baseline+ Method of Risk Profiling & Regulation
One size does not necessarily fit all when it comes to AI regulation. Different sectors have different risk profiles, use cases, and stakeholder needs, which may necessitate different accountability measures.
In sectors like healthcare, finance, or social media, where the impact of AI systems can be substantial and wide-ranging, more intensive accountability requirements may be appropriate. For instance, AI applications in healthcare and finance often involve highly sensitive personal data and have a direct bearing on individuals’ well-being, necessitating stringent privacy protections and robust risk assessment processes. Social media platforms, given their capacity to shape public opinion and discourse, may require enhanced transparency and fairness requirements.
To achieve this balance between uniformity and sector-specific regulation, a tiered or ‘baseline plus’ approach could be useful. This would involve a common set of baseline accountability requirements applicable to all AI systems, irrespective of sector. This could cover fundamental principles such as transparency, fairness, privacy, and security, which are universally relevant.
On top of this baseline, additional sector-specific regulations could be added for high-risk sectors or applications. These would take into account the unique characteristics and risk profiles of each sector, and could be developed in consultation with industry stakeholders, regulators, and civil society within each sector.
This kind of tiered approach could offer the best of both worlds: a universal baseline ensures a minimum standard of AI accountability across the board, while additional sector-specific requirements ensure that high-risk applications receive the regulatory scrutiny they need.
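To make the “baseline plus” idea concrete, the short sketch below shows one hypothetical way a self-serve compliance tool might represent a universal baseline with sector-specific add-ons. The requirement names and sector lists are illustrative assumptions only, not proposals for specific obligations.

```python
# Illustrative only: a "baseline plus" requirement set. Requirement names and
# sector lists are hypothetical placeholders.

BASELINE = {
    "transparency_notice",
    "privacy_protection",
    "security_review",
    "fairness_testing",
}

SECTOR_ADDONS = {
    "healthcare": {"clinical_validation", "impact_assessment", "third_party_audit"},
    "finance": {"impact_assessment", "model_risk_documentation"},
    "social_media": {"content_ranking_transparency", "researcher_data_access"},
}

def applicable_requirements(sector: str) -> set:
    """Every system inherits the baseline; higher-risk sectors add obligations."""
    return BASELINE | SECTOR_ADDONS.get(sector, set())

print(sorted(applicable_requirements("healthcare")))
```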
Implementing such a scheme would require broad collaboration, both domestically and internationally. Regulatory authorities would need to work closely with each other, as well as with industry, academia, and civil society, to develop and implement these standards. Existing international bodies and agreements could serve as a foundation, but the unique challenges of AI may necessitate new forms of cooperation and standard-setting.
In the end, the goal should be to create a regulatory environment that fosters innovation while ensuring that AI systems are used in a way that respects privacy, promotes fairness, ensures safety, and maintains public trust.
Three-Tier Risk System
A three-tiered categorization would provide an even more nuanced approach to AI regulation. Here is a possible way to organize AI systems into low-risk, high-risk, and existential-risk categories:
Low-Risk AI Systems: These AI systems pose relatively minor risks to individuals or society. Examples include recommendation systems for media content (such as those used by streaming services), chatbots for customer service, or AI tools for editing images. While these systems can have substantial implications if misused or biased, the harm that could result from malfunctions or misuse is generally minor and localized. For these systems, a light-touch regulatory approach could be sufficient. This might include general principles of transparency, accountability, and privacy, along with encouragement for self-regulation and best practices in AI ethics.
High-Risk AI Systems: These are AI systems that carry significant potential risks due to their areas of application. They might not pose an existential risk, but malfunctioning, misuse, or bias in these systems could significantly impact individual lives or societal structures. Examples include AI systems involved in healthcare diagnostics, autonomous vehicles, credit scoring, or criminal justice systems. For these systems, more robust regulations are appropriate. This might include mandatory impact assessments, transparency and explainability requirements, privacy protections, validation and robustness checks, and possibly third-party audits.
Existential-Risk AI Systems: These are AI systems that have the potential to pose a threat to human existence, either directly or by enabling other existential risks. Examples include superintelligent AI systems, autonomous weapon systems, AI systems capable of self-replication or uncontrolled evolution, and AI systems used in fundamental sciences such as chemical, protein, gene, or physics research that could lead to dangerous outcomes (akin to virus research in high-security biosafety labs). These systems require the most stringent oversight and robust regulations. These might include real-time surveillance, mandatory kill-switches, comprehensive risk mitigation strategies, strict limits on certain types of research, international cooperation and oversight, and potentially a global agreement akin to the Biological Weapons Convention or Nuclear Non-Proliferation Treaty.
This categorization recognizes the diverse risks associated with different types of AI systems and advocates for proportional regulation that is adapted to the level of risk. It also emphasizes that higher levels of risk, particularly existential risk, require international cooperation, robust safeguards, and potentially globally binding agreements to prevent catastrophic outcomes.
Easy Access To Clarity on Risk Profile
For developers and other stakeholders to determine their risk profile, some system must be in place. It is critical that this system be readily interpretable and self-serve, so that anyone’s regulatory concerns can be addressed simply and swiftly, with at least moderately high certainty, without requiring substantial funds or a lawyer.
An autonomous (AI) system to help stakeholders assess the risk level of a project may speed up this process and provide more assurance to players in the space. A simple chat interface that asks branching questions to get an overall risk profile of a project could be more than enough for most use cases, and anything beyond that could be passed off to a human reviewer with the guarantee of feedback within a set, expedient timeframe.
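As a purely illustrative sketch of such a branching, self-serve triage flow, the snippet below walks a tiny hypothetical question tree and either returns a provisional tier or escalates to a human reviewer. The question wording, branch structure, and tier labels are assumptions made for illustration only.

```python
# A minimal, illustrative sketch of a branching self-serve triage flow.
# Question text, branch structure, and tier names are hypothetical placeholders.

TREE = {
    "q_sensitive": {
        "prompt": "Does the system operate in healthcare, finance, criminal justice, or weapons?",
        "yes": "q_autonomy_high",
        "no": "q_public",
    },
    "q_public": {
        "prompt": "Is the system exposed directly to the general public?",
        "yes": "high_risk",
        "no": "low_risk",
    },
    "q_autonomy_high": {
        "prompt": "Can it act, self-modify, or replicate without a human approving consequential actions?",
        "yes": "human_review",  # potential existential-risk signals
        "no": "high_risk",
    },
}

def triage(answers: dict) -> str:
    """Walk the tree using pre-collected yes/no answers keyed by question id."""
    node = "q_sensitive"
    while node in TREE:
        node = TREE[node]["yes"] if answers.get(node) else TREE[node]["no"]
    return node  # one of: "low_risk", "high_risk", "human_review"

print(triage({"q_sensitive": True, "q_autonomy_high": False}))  # -> high_risk
```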
Preliminary Risk Assessment — AI Risk Assessment (AIRA) Framework
Below is a very rudimentary outline of the basic premise of a potential AI risk profile assessment which would require substantial feedback, iteration, and testing before implementation.
The AIRA Framework would evaluate an AI system based on several key characteristics:
- Scale of Deployment: How broadly will the AI system be used? Will it be confined to a particular organization or industry, or will it have widespread public impact?
- Potential for Harm: How much damage could the AI system cause if things go wrong? This includes both the severity and the likelihood of potential harm.
- Level of Autonomy: How independent is the AI system? Does it make decisions without human input, or are there humans in the loop who can override its decisions?
- Opacity: How transparent is the AI system? Is it a ‘black box’ with inscrutable inner workings, or can its decision-making processes be understood by humans?
- Type of Task: What kind of task is the AI system designed to perform? Some tasks inherently pose more risks than others, such as medical diagnosis or autonomous weapons control.
- Fallback Options: What alternatives exist if the AI system fails or causes harm? Are there backup systems in place, and how quickly and effectively can they be activated?
An AI system would be evaluated on these six dimensions, and each dimension would be given a score. The sum of these scores would then determine the system’s risk level, placing it in one of the three tiers (Low-Risk, High-Risk, or Existential-Risk).
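A minimal sketch of how the AIRA scoring described above might work in practice follows. The 1-5 scale, the tier thresholds, and the example scores are hypothetical placeholders that would require the substantial calibration, feedback, and testing noted above.

```python
# Illustrative sketch of the AIRA scoring idea. The 1-5 scale and tier
# thresholds are hypothetical and would need calibration before any use.

AIRA_DIMENSIONS = [
    "scale_of_deployment",
    "potential_for_harm",
    "level_of_autonomy",
    "opacity",
    "type_of_task",
    "fallback_options",  # higher score = weaker fallbacks, i.e. more risk
]

def aira_tier(scores: dict) -> str:
    """Sum per-dimension scores (1 = minimal risk, 5 = maximal) into a tier."""
    missing = [d for d in AIRA_DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"Missing dimensions: {missing}")
    total = sum(scores[d] for d in AIRA_DIMENSIONS)  # possible range: 6-30
    if total >= 24:
        return "Existential-Risk"
    if total >= 14:
        return "High-Risk"
    return "Low-Risk"

# Example: a streaming-media recommender scored low on most dimensions.
recommender = {
    "scale_of_deployment": 4, "potential_for_harm": 2, "level_of_autonomy": 2,
    "opacity": 3, "type_of_task": 1, "fallback_options": 1,
}
print(aira_tier(recommender))  # total 13 -> "Low-Risk"
```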
The Future: Why We Must Think Ahead Toward Advanced AI
In order to establish a foundation for the remainder of this document, it is imperative to frame the current state of technology, the likely future trajectory of that technology, and the inherent risks to that future.
While concepts such as superintelligence may seem like science fiction to some, there is a possibility we could see this technology within a decade.
Companies such as OpenAI are suggesting that we consider the governance of superintelligence immediately, under the notion that superintelligence would be difficult to stop — and that we may not want to stop it, either.
“[…]we believe it would be unintuitively risky and difficult to stop the creation of superintelligence. Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing, and it’s inherently part of the technological path we are on, stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work. So we have to get it right.” -OpenAI blog post titled “Governance of Superintelligence” posted May 22, 2023. (https://openai.com/blog/governance-of-superintelligence)
In the same post, they suggest a timeline for the prospect of an AI that supersedes human experts:
“It’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.”
And, a final quote from that article that highlights the need for proactive regulation:
“Given the possibility of existential risk, we can’t just be reactive.”
Other notable statements that suggest the need for an intensive focus on superintelligence monitoring and the existential risk associated with AI follow:
From a TIME article by Eliezer Yudkowsky (a decision theorist from the U.S. who leads research at the Machine Intelligence Research Institute; he has been working on aligning Artificial General Intelligence since 2001 and is widely regarded as a founder of the field). The article can be found here: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
“Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”
[…]
A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.
[…]
Solving safety of superhuman intelligence — not perfect safety, safety in the sense of “not killing literally everyone” — could very reasonably take at least half that long [half of 60 years, 30 years]. And the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone.
[…]
No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.”
[end quotes from the TIME article by Eliezer Yudkowsky]
******
In case there is any uncertainty about the intersection between artificial intelligence and living organisms, please consider the concept of xenobots.
Xenobots are living robots made from stem cells that, through AI-assisted design, are capable of reproduction in ways not previously seen in other biological entities. They move, work together in groups, and self-heal, with absolutely no gene modification, and in ways that surprised even the scientists studying them. This was nearly two years ago, and two years in the world of technology (biotechnology, AI) is a very long time.
The study can be found here: https://doi.org/10.1073/pnas.2112672118 “Kinematic self-replication in reconfigurable organisms”
*****
Mo Gawdat, ex-chief business officer at Google X, had the following to say, in reference to the risks imposed by AI:
“The risks are so bad, in fact, that when considering all the other threats to humanity, you should hold off from having kids if you are yet to become a parent.”
[…]
“The sophistication of digital intelligence is such that it has become autonomous and is something that needs to be appealed to, rather than controlled,”
****
The Center for AI Safety said the following:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
****
“Geoffrey Hinton, a respected researcher who recently stepped down from Google, said it’s time to confront the existential dangers of artificial intelligence.”
[…]
“We’re all in the same boat with respect to the existential threat. So we all ought to be able to cooperate on trying to stop it.” — Geoffrey Hinton
And from the same article:
“More than 27,000 people, including several tech executives and researchers, have signed an open letter calling for a pause on training the most powerful AI systems for at least six months because of “profound risks to society and humanity,” and several leaders from the Association for the Advancement of Artificial Intelligence signed a letter calling for collaboration to address the promise and risks of AI.”
A United Regulatory Framework
While much of this may seem speculative or distant, it is important to plan for these scenarios due to the high stakes involved. It’s also worth noting that despite the focus on superintelligence, most AI systems today are narrow AI, specializing in performing specific tasks. However, the considerations and accountability mechanisms we develop for superintelligence will likely also benefit the governance of these narrower AI systems. As AI technology continues to evolve, our ethical, societal, and regulatory approaches must adapt in tandem to ensure safety, fairness, and accountability.
The Basic Case For Supranational Agreements & Organizations
Supranational agreements and organizations are vital to the global regulation and management of AI. Here are some key reasons:
- Global Impact: AI technology is not restricted by borders and has global implications. The impacts of AI, both beneficial and harmful, can cross national boundaries. For instance, an AI developed in one country can affect people in other countries through the internet. Also, data from individuals in one country might be used to train an AI system in another country. As a result, supranational agreements and organizations can provide a framework for globally consistent AI accountability measures.
- Harmonized Standards: Different countries might have different standards and regulations for AI. Without a supranational agreement, this could lead to inconsistencies, making it challenging for companies that operate in multiple countries. A supranational organization can help harmonize these standards and make it easier for companies to comply.
- Avoiding AI Race Without Safety: In the race to develop powerful AI systems, there’s a risk that safety and ethical considerations might be overlooked. This could lead to “race dynamics,” where developers might cut corners in safety and ethics to be the first to develop advanced AI. Supranational agreements can prevent this by establishing minimum safety and ethical standards that all participating countries must adhere to.
- Shared Research and Insights: Supranational organizations can facilitate the sharing of research, knowledge, and best practices among member countries. They can also conduct joint research into critical areas like AI safety and ethics.
- Mitigation of Misuse: Supranational cooperation can help prevent the misuse of AI technology, such as the development of autonomous weapons or surveillance systems that violate human rights.
- Collective Response: If a global AI-related crisis occurs, a coordinated international response will likely be more effective than each country acting independently. A supranational agreement can provide the framework for such a collective response.
Forming such an agreement or organization would require extensive international cooperation and negotiation. Different countries have different priorities, values, and levels of AI development, which could make consensus challenging. Therefore, it’s crucial to foster a global dialogue on AI ethics, accountability, and governance, engaging diverse stakeholders, including governments, AI developers, civil society, academia, and the public.
Overcoming Challenges to Supranational Agreements Due To Shared Existential Risk
Addressing the global nature of AI development and its potential risks, especially those associated with superintelligence, is a crucial and complex issue. Superintelligent AI systems could have profound implications that transcend national borders, much like nuclear weapons, climate change, or pandemics. Therefore, managing these risks effectively requires international cooperation, which in turn raises several challenges:
- Participation and Compliance: Getting all countries to participate in and comply with international agreements is a common challenge in global governance, especially when the perceived benefits and risks of AI may vary between nations. Some countries may opt out, or attempt to covertly violate the agreements, similar to scenarios we’ve seen with countries secretively creating nuclear weapons. To address this, diplomatic efforts are needed to establish shared norms and incentives for participation and compliance. Furthermore, sanctions or other forms of diplomatic pressure may be applied to discourage violations.
- Verification: To ensure compliance with international agreements, we need methods to verify that nations are abiding by their commitments. This is where intelligence operations, such as SIGINT (Signals Intelligence), play a crucial role. In the context of AI, this could involve monitoring and analysis of data and communications related to AI development activities. The goal would be to detect signs of dangerous AI development or violations of agreed-upon safety protocols. These operations would require careful balancing between the need for oversight and the respect for national sovereignty and privacy.
In sum, any approach to the governance of superintelligent AI would need to address a range of interconnected challenges: from incentivizing international cooperation, to ensuring effective oversight and verification, to striking a balance between real and perceived security. A holistic and internationally coordinated approach is needed to ensure the safe and beneficial development of superintelligent AI.
Lessons From Current And Alleged SIGINT Alliances & Projects
Signals Intelligence (SIGINT) is an intelligence-gathering method that involves intercepting electronic signals and communications. The Five Eyes alliance, consisting of the United States, the United Kingdom, Canada, Australia, and New Zealand, is one of the most well-known SIGINT organizations, sharing intelligence information amongst its member nations.
Programs like MUSCULAR/WINDSTOP and RAMPART-A have allegedly been part of broader international SIGINT operations. These programs are relevant to advanced AI monitoring and oversight because they provide strategies and mechanisms for dealing with vast amounts of data, which is crucial in monitoring global AI developments. Here is why these intelligence operations might guide the implementation of AI safety standards and high-throughput data monitoring:
- RAMPART-A: The ability to intercept high-capacity cables and infrastructure is valuable when monitoring vast amounts of AI-related data. Drawing from the experiences of alleged operation RAMPART-A (or any other alleged RAMPART program), techniques could be developed for tapping into large-scale data flows pertinent to AI without causing disruptions or noticeable latency.
- WINDSTOP: Given its focus on intercepting data from fiber-optic cables, alleged SIGINT operation WINDSTOP could provide valuable lessons on handling high volumes of data. Continuous AI behavior monitoring will inevitably involve large-scale data processing, making such insights critical.
- INCENSER: If alleged SIGINT project INCENSER intercepts data flows within large tech companies, its methods could offer valuable lessons on how to monitor AI development in similar environments. This could provide insights into how to manage data collection while respecting privacy rules, an essential consideration in any large-scale surveillance operation.
- MUSCULAR: Techniques developed for this alleged project for monitoring global data centers could be particularly relevant for monitoring AI systems in such environments. Understanding how to access and interpret these data flows without disrupting operations is a critical aspect of effective AI monitoring.
Other relevant lessons from unnamed, alleged existing projects include:
- Techniques for correlating disparate entities and detecting patterns, which could also be highly relevant in the context of AI behavior.
- Insights into effective collaboration and data sharing among different entities.
- Methods for gathering intelligence from multiple data sources, which could be informative for compiling data from various AI platforms. This approach could help create a comprehensive view of global AI activities and highlight any potential aberrations or risks.
In the context of AI safety, a hypothetical supranational SIGINT operation could serve similar functions:
- Monitoring and Detection: By monitoring electronic communications and data, such an operation could detect signs of unsafe AI development activities or violations of agreed-upon AI safety protocols. This could involve, for example, monitoring the scientific literature, patent filings, AI model training data, and various other forms of communication and data related to AI development (a minimal illustrative sketch follows this list).
- International Cooperation: A supranational SIGINT operation, similar to the Five Eyes, could provide a platform for sharing intelligence information about AI safety risks among participating nations. This could help coordinate global efforts to manage these risks and respond to potential AI safety incidents.
- Enforcement: Information gathered through a supranational SIGINT operation could be used to enforce international agreements on AI safety. For example, if a country or entity is detected violating AI safety protocols, appropriate actions (sanctions, diplomatic pressure, etc.) could be taken.
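As a deliberately simplified illustration of the monitoring-and-detection function referenced in the first item above, the sketch below flags text sources (for example, papers or filings) containing weighted risk signals for human analyst review. The signal terms, weights, and threshold are hypothetical assumptions; a real system would require far more sophisticated analysis than keyword matching.

```python
# Hypothetical sketch: scan text sources for weighted risk signals and flag
# documents that cross a review threshold. Terms and weights are placeholders.

RISK_SIGNALS = {
    "self-replication": 3,
    "autonomous weapons": 3,
    "unmonitored training run": 2,
    "capability evaluation bypass": 2,
    "frontier-scale compute": 1,
}

def flag_document(doc_id: str, text: str, threshold: int = 3) -> dict:
    """Return a review flag when weighted signal matches cross the threshold."""
    lowered = text.lower()
    hits = {term: weight for term, weight in RISK_SIGNALS.items() if term in lowered}
    score = sum(hits.values())
    return {"doc_id": doc_id, "score": score, "hits": sorted(hits), "review": score >= threshold}

print(flag_document("filing-0001", "Describes self-replication of agents on frontier-scale compute."))
# -> review flag is True (score 4 >= threshold 3)
```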
While such an operation could provide valuable tools for managing AI safety risks, it also raises important questions about privacy, oversight, and potential misuse. Striking the right balance between these considerations is a significant challenge that would need to be addressed in the design and implementation of such a system.
Global Monitoring Systems
In the scenario of an AI-based global monitoring system that scans for existential threats in AI development, we could reasonably expect the following from such a system:
- Detection of Unanticipated Behavior: As AI systems are often capable of generating unpredictable and emergent behaviors, it’s important that a monitoring system be capable of interpreting AI data in a way that can identify and alert stakeholders to these occurrences. This includes recognizing when an AI system begins to behave outside of its specified parameters or when it starts to exploit loopholes in its objective function (a minimal illustrative sketch follows this list).
- Assessment of Compliance: AI systems must adhere to a variety of legal and ethical regulations. By interpreting the data these systems generate, a monitoring system can assess compliance with these regulations and detect any violations. This can cover a broad range, from misuse of personal data to discrimination in decision-making.
- Explanation and Transparency: As AI systems become increasingly complex, explaining their decisions and processes in a way that is understandable to humans becomes more challenging. A monitoring system capable of interpreting AI data can play a crucial role in AI transparency and interpretability, allowing humans to understand why an AI system made a particular decision or prediction.
- Continuous Learning and Improvement: An effective AI monitoring system can learn from the data it collects to improve its own monitoring capabilities over time. It can adapt to new patterns, learn to focus on high-risk areas, and better understand the factors that lead to AI system failures or non-compliance. This is particularly relevant in the face of accelerating change, as AI systems evolve and become more complex.
- Trust Building: A monitoring system capable of interpreting and understanding AI behavior can provide much-needed assurance to the public, regulators, and other stakeholders that AI systems are behaving as intended and any deviations are promptly identified and addressed. This trust is crucial for wider acceptance and integration of AI systems into society.
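The sketch below illustrates, under heavy simplification, the first item in this list: checking logged behavior metrics against a declared operating envelope and raising alerts when a system moves outside its specified parameters. The metric names and bounds are hypothetical assumptions for illustration only.

```python
# Minimal illustrative sketch of "detection of unanticipated behavior":
# compare logged metrics against declared bounds. Names/bounds are hypothetical.

DECLARED_ENVELOPE = {
    "requests_per_minute": (0, 500),
    "tool_calls_per_task": (0, 10),
    "outbound_domains_contacted": (0, 3),
}

def check_envelope(metrics: dict) -> list:
    """Return an alert for every metric observed outside its declared bounds."""
    alerts = []
    for name, (low, high) in DECLARED_ENVELOPE.items():
        value = metrics.get(name)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{name}={value} outside declared range [{low}, {high}]")
    return alerts

print(check_envelope({"requests_per_minute": 120, "outbound_domains_contacted": 40}))
# -> ['outbound_domains_contacted=40 outside declared range [0, 3]']
```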
Comment created by The Guardian Assembly (guardianassembly.org)
Addendum to RFC
Addendum to prior RFC comment with comment tracking number: lit-cto6-8gsq
Topic: “Artificial Intelligence (“AI”) system accountability measures and policies”
Agency/Docket Number: Docket №230407–0093
Addendum tracking number: lit-mkkz-dpgz
Responses for questions left out of our prior response follow:
IP Rights And Transparency At The Existential Risk Level
What is the role of intellectual property rights, terms of service, contractual obligations, or other legal entitlements in fostering or impeding a robust AI accountability ecosystem? For example, do nondisclosure agreements or trade secret protections impede the assessment or audit of AI systems and processes? If so, what legal or policy developments are needed to ensure an effective accountability framework? (RFC Question 27)
Intellectual property (IP) rights, contractual obligations, and other legal entitlements play an essential role in fostering innovation and protecting the rights of inventors and businesses. However, when it comes to AI and its potential impacts on society, these rights need to be balanced with the need for accountability, transparency, and ethical standards.
There’s a legitimate concern that strict IP protections, non-disclosure agreements, or trade secrets could potentially impede the audit or assessment of AI systems, making it difficult to ensure accountability. A robust AI accountability ecosystem requires a degree of transparency that might be challenging under the current IP regime. However, here are a few approaches that might reconcile the two:
- Blind Auditing Mechanisms: One potential solution could be the development and use of machine learning-based assessment and audit systems that can analyze AI models without needing access to proprietary details. These blind auditing systems could search for patterns that indicate ethical or operational issues without necessarily understanding the underlying logic of the AI system. This approach can respect IP rights while still maintaining an essential level of accountability.
- Differential Privacy and Federated Learning: Techniques such as differential privacy and federated learning could be used to analyze data or models without exposing individual data points or proprietary model features. These techniques add noise to the data or use decentralized learning to ensure privacy and IP protection (a minimal sketch follows this list).
- Trusted Third-Party Auditing: Trusted third-party auditors could be given access to proprietary systems under strict confidentiality agreements. These auditors would be tasked with ensuring AI systems adhere to ethical and operational guidelines. Their findings could be reported without exposing the specific proprietary information of the AI systems. For the purposes of this comment, this is the least desirable approach to accountability.
- Regulatory Sandbox Models: Regulatory sandboxes could allow AI developers to test new technologies under a relaxed but closely monitored regulatory regime. This encourages innovation while also providing an environment where regulators can monitor AI systems closely without violating IP rights. Regulatory sandboxes consider a variety of potential pitfalls of AI accountability regulation and are one of the most promising frameworks to foster innovation and safety simultaneously.
- Changes in Legal Frameworks: Legislators could consider creating exemptions in IP laws for the purpose of auditing AI systems. Just as nuclear weapons are subject to international oversight regardless of IP concerns, AI technologies that have potential societal impact could be subject to similar transparency obligations. This would need to be carefully balanced to avoid stifling innovation.
- Open-Source AI Models: Encouraging the use of open-source AI models can also promote accountability. While proprietary models are important for commercial competition, the open-source ecosystem allows for greater transparency and scrutiny.
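As a minimal sketch of the differential-privacy approach referenced above, the snippet below releases a noisy count so that an auditor can learn an aggregate property of an AI system’s decisions without seeing any individual record. The epsilon value, query, and data are illustrative assumptions; a production deployment should rely on a vetted differential-privacy library rather than hand-rolled noise.

```python
# Illustrative sketch of a differentially private counting query (Laplace
# mechanism, sensitivity 1). Epsilon, query, and data are placeholders.
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon: float = 1.0) -> float:
    """Counting query with sensitivity 1, released with Laplace(1/epsilon) noise."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: an auditor learns roughly how many decisions were adverse,
# without access to any individual record.
decisions = [{"adverse": i % 7 == 0} for i in range(1000)]
print(private_count(decisions, lambda r: r["adverse"], epsilon=0.5))
```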
The AI black box problem presents a unique challenge for accountability. Utilizing the black box to train a baseline model for assessment without long-term storage or sharing of information could offer a potential solution. This concept is very much in line with modern privacy-preserving machine learning techniques and could form a part of a more comprehensive framework for AI accountability.
The goal should be to strike a balance that respects IP rights and encourages innovation while ensuring necessary transparency and accountability in AI systems. The global risk associated with AI demands novel approaches, possibly requiring a shift in how we traditionally think about IP rights and accountability.
Barrier: Lack of Federal Data Law
Is the lack of a general federal data protection or privacy law a barrier to effective AI accountability? (RFC Question 25 and 26)
Yes, the lack of a federal data protection or privacy law can indeed be seen as a barrier to effective AI accountability, and it also introduces a considerable amount of uncertainty that can deter more conscientious researchers and innovators.
Without a comprehensive, standardized legal framework in place, there are a few main issues:
- Inconsistent Standards: In the absence of a federal law, states might establish their own privacy laws, which can vary widely in their stipulations. This can make it difficult for companies to ensure compliance, especially if they operate across multiple states.
- Uncertainty and Risk Aversion: The lack of clear regulation creates an environment of uncertainty. Businesses and researchers may fear potential future regulations and may limit their activities to avoid potential legal issues down the line. This can stifle innovation and research progress.
- Rewarding Risk-Takers: In a landscape of unclear regulations, those willing to take risks might push ahead with potentially harmful practices. They might view potential future fines or penalties as merely a cost of doing business. This can create a situation where cautious, ethical operators are disadvantaged while risk-takers forge ahead, potentially leading to harmful outcomes.
As previously mentioned in our initial public comment, the cryptocurrency sector provides a clear example of these dynamics. Many projects were halted due to fears of regulatory backlash, while others pressed forward regardless of the risks. The lack of clear regulatory guidance can foster an uneven playing field that benefits those willing to gamble on regulatory outcomes.
Implementing a general federal data protection or privacy law could help address these issues by providing clear guidelines and standards for all businesses to follow. It would also level the playing field for more risk-averse players and ensure that all operators are held to the same standards.
However, the law should be designed with care to avoid stifling innovation. Overly rigid regulations can deter researchers and startups, while overly lax regulations might fail to adequately protect privacy and security. It’s crucial to strike the right balance: a carefully calibrated law could provide both the certainty needed to foster innovation and the protections required to ensure AI accountability.
Comment created by The Guardian Assembly (guardianassembly.org)
About The Guardian Assembly — Shaping The Future of AI
The Guardian Assembly is more than a group of dedicated individuals; it’s a global movement shaping the future of humanity and AI. But, we can’t do it alone. We need your unique skills, your passion, and your time to make a difference.
In this pivotal moment in history, the trajectory of advanced AI technologies is being set. Whether AI becomes a tool for unprecedented progress or a source of unchecked risks depends on the decisions we make today. Your participation could be the difference between an AI that aligns with and enriches human values, versus one that doesn’t.
By donating your time and expertise to The Guardian Assembly, you are not merely observing the future — you are actively creating it. Regardless of your background or skillset, there is a place for you in this critical mission. From policy drafting to technological innovation, every contribution brings us one step closer to a future where AI and humanity coexist and thrive.
The future of AI and humanity is in our hands — and your hands. Let’s shape it together.