The Case for Self-Regulation: AI’s Evolution in the Midst of Systemic Government Failures

Nathan Pierce
4 min read · May 16, 2023


Credit: U.S. Senate Photographic Studio

In response to Sen. Hawley’s recent statement dismissing the idea of AI companies self-regulating as ‘laughable’, I would argue that the alternative — increased governmental oversight — has demonstrated, time and again, its own set of flaws and inefficiencies. Given the rapid pace of technological advancements and the government’s recent track record, it is worth considering that self-regulation could indeed be a more viable and effective solution.

Firstly, let’s address the elephant in the room: government regulations have often been counterproductive and inadequate in addressing the complexity and dynamism of the tech industry. Regulatory bodies have failed to keep pace with technological advancement, leaving antiquated laws that stifle innovation. Case in point: data privacy rules in the United States remain a patchwork of differing state-level laws, causing confusion for businesses and consumers alike, and a centralized, coherent approach is still lacking.

Government regulations have also shown a propensity to cater to large corporations, inadvertently creating barriers to entry for startups. The cost of compliance can be prohibitive for smaller companies, stifling competition and innovation, and ultimately harming the consumer. In the AI sector, this could mean fewer innovative solutions being brought to market and a reduction in the competitive drive to improve existing AI systems.

Secondly, the idea of government regulation assumes a level of competence and integrity that has been noticeably absent in recent years. The escalating number of corruption scandals involving government officials and departments has eroded public trust. If the government is struggling to maintain ethical standards within its ranks, can we realistically expect it to effectively regulate complex industries such as AI?

In recent years, a spate of government scandals has seriously called into question the integrity of our governing bodies, and with it their capacity to effectively regulate industries such as AI. These scandals have revealed not only misuse of power but also systemic issues within the organizations involved.

One such controversy involves the Durham probe, an investigation led by U.S. Attorney John Durham into the origins of the FBI’s 2016 Russia investigation. The probe raised alarming questions about potential political bias and misuse of power within the FBI and the broader intelligence community. Indeed, it produced a guilty plea from a former FBI lawyer who admitted to altering an email used to justify surveillance of a former Trump campaign advisor. Such unethical conduct within a key government agency demonstrates a clear abuse of power and an alarming lack of oversight.

Meanwhile, the Internal Revenue Service (IRS) has faced its own scandals, the most egregious being the targeting of conservative groups: the agency was found to have subjected these groups to additional scrutiny and delays when they applied for tax-exempt status. The incident revealed a disturbing politicization within the IRS, a body that should remain neutral and unbiased, and brought to light the potential for government agencies to be weaponized for political ends, a clear violation of public trust.

Furthermore, there have been concerns about the misuse of other government agencies. For instance, allegations have surfaced about the manipulation of intelligence by the Department of Justice and the misuse of surveillance powers by the National Security Agency. These instances reinforce the narrative of a government struggling to regulate itself, let alone industries as complex and rapidly evolving as AI.

Closer to home, there is the revelation of insider trading within Congress. Elected officials were reported to have made stock trades informed by privileged briefings, a clear conflict of interest and a violation of the public trust. This example, among others, raises serious questions about the government’s ability to police itself and enforce ethical standards, let alone regulate external industries.

These scandals have cast a long shadow over the credibility and reliability of our government institutions. They raise questions about whether these bodies can effectively oversee industries like AI, particularly given their apparent difficulty in maintaining ethical standards within their own ranks. If we are to entrust the regulation of AI to government bodies, we must first demand transparency, accountability, and reform within those institutions themselves.

Finally, and perhaps most importantly, the notion of government regulation fails to account for the expertise that exists within AI companies. Who better understands the intricacies and potential pitfalls of AI than the people who are developing and working with it every day? Government regulators often lack the technical expertise required to fully understand, let alone regulate, such a complex and rapidly evolving field. This can lead to poorly conceived regulations that do more harm than good.

Instead of dismissing the idea of AI companies self-regulating as ‘laughable’, we need to seriously consider the potential benefits. Self-regulation encourages industry-wide collaboration, incentivizes companies to prioritize ethical considerations, and is more likely to keep pace with technological advances.

In conclusion, while it is important to have checks and balances in place to ensure that AI companies are behaving ethically and responsibly, it is also essential to question the efficacy of government regulation given its current track record. AI self-regulation may not be a perfect solution, but it’s certainly not ‘laughable’ when compared to the alternative. It’s a conversation worth having.
