AI Discussions Leave Out Those Who Will Be Most Impacted — The Citizen

Tina Rose
𝐀𝐈 𝐦𝐨𝐧𝐤𝐬.𝐢𝐨
13 min read · May 16, 2023

Artificial intelligence (AI) CEOs and researchers have called for the development of AI regulation. Regulations are now necessary to encourage AI development and education in the workforce and manage associated risks.

While specific risks need mitigation, shouldn't US citizens have a seat at the negotiating table? Citizens may soon bear the most significant losses from AI innovation. Companies are working to be transparent and are investing in research toward a human-first AI policy for the future, but is it enough? This article discusses the who, what, and why behind possible AI issues and proposed solutions, and, more importantly, how the human element, through non-profit organizations or other representatives, can be brought to the forefront of these discussions.

· The US needs AI regulation to address various risks, including deepening inequality as power concentrates in the hands of a few.

· Which federal agencies are responsible for AI regulation?

· The US met with AI CEOs, with only a limited number of AI companies at the table.

· Citizens, and the non-profit agencies that would represent their best interests, were left outside the meeting even as decisions that would impact them were made.

· On May 4, 2023, the Biden/Harris administration announced $140 million in funding for seven AI research institutes.

· In August 2020, the Trump administration announced $1 billion in funding, with a specific allocation of $140 million for seven AI research institutes.

· Was the same investment made twice in three years?

· NIST released voluntary guidelines for AI companies.

· The average citizen is absent from the negotiating table.


US Needs AI Regulation — Today

The dangers of AI as a technology have yet to be fully identified and defined. Washington has been largely hands-off on AI rules, even as several lawmakers have pushed to tighten oversight. Many needed AI-related policies, such as privacy legislation, get kicked down the road. On May 2, 2023, according to reports by American Banker, the Justice Department's Civil Rights Division, the Consumer Financial Protection Bureau (CFPB), the Equal Employment Opportunity Commission (EEOC), and the Federal Trade Commission (FTC) released a joint statement on the issue.

The statement was reminiscent of statements given on other important policy issues. The joint statement reaffirmed a commitment to existing civil rights, consumer protection, privacy, and competition regulations — which may or may not currently be effective. In other words, these were affirmations of the problem; without new actions or policies attached, they cannot produce protections for citizens. No recommendations for procedures or best practices were made at the time, despite all agencies agreeing that AI has the potential to unlawfully discriminate in every industry, including the finance and housing markets.

Federal Laws Regulating AI — Who Needs to Act

It can take time to determine which agency is responsible for creating federal AI regulations, and who will hold agencies accountable. In 2023, this isn't a new topic for the FTC. In 2021, the FTC and the European Commission introduced guidelines, policies, and regulations to govern how corporations use AI algorithms to manage personal data. Americans were also demanding privacy protections in 2021, which never came to pass. These were only guidelines against bias, with no real methods of enforcement.

According to reports by TechTarget, when it comes to who needs to act, going further than the guidelines developed just two years ago may be far more challenging. The FTC has the authority to regulate AI as it applies to the use of personal data and its consumer impact, but it often fails to follow through. Any policy with real impact must go through congressional approval. What chance is there of fundamental protections being created, even with pressure from top AI firms?


Effective Regulations Need Humanity

The importance of effective government in AI, technology, and privacy regulation cannot be overstated. Governments can look to — and learn from — tech leaders regarding AI implementation and the rules needed. Negotiations can be conducted through non-profit organizations that promote privacy, data, and other electronic protections for citizens. However, the average citizen, whose life, work, and future will be unalterably changed by AI and these regulations, should be at the forefront of the discussion.

The human value of AI regulation lies in its significant role in shaping AI's future while keeping humanity at the forefront. This ideology will determine who emerges as the true leaders in the field. Today, the frontrunners in AI regulation and governance are, of course, the United States, China, and the EU.


Lack of Federal Privacy Regulations and Its Impact on AI Regulations

According to a recent Brookings report, Congress is now challenged to pass privacy legislation that protects individuals against any adverse effects of the use of personal information in AI. The EU long since passed the General Data Protection Regulation (GDPR), and countries around the globe have followed suit. Americans are, undoubtedly, frustrated; Californians created legislation for themselves. Considering where the US sits with privacy law, will it lead the way in AI regulation? Consider this list of jurisdictions that have enacted data-privacy legislation:

· European Union (EU): General Data Protection Regulation (GDPR), passed in 2016.

· Sweden: The Data Act, passed in 1973. (Now superseded by the GDPR.)

· Israel: Protection of Privacy Law, 5741–1981, enacted in 1981.

· Saudi Arabia: Adopted the Personal Data Protection Law (PDPL) in September 2021.

· California: Not a country but, in frustration, passed the California Consumer Privacy Act (CCPA) in 2018.

· The African Union (AU): Adopted the African Union Convention on Cyber Security and Personal Data Protection (CCSPDP) in June 2014.

· Canada: Enacted the Personal Information Protection and Electronic Documents Act (PIPEDA) in April 2000.

· Regardless of the label — data, privacy, AI — what is wanted and expected are regulations that protect citizens.

While this list isn’t comprehensive, it quickly becomes apparent that a global superpower is conspicuously absent. Privacy legislation matters because these laws govern how data is handled. AI is data-based, and it will not be possible to regulate AI without comprehensive privacy protections.

The FTC may have the autonomy to act now, but if this is to be done correctly, certain things are going to have to occur:

· Protections and legislation should not unduly restrict AI development.

· Comprehensive protection of data should be citizen-based, not political-party based. Protecting citizens’ data isn’t a left or right issue.

· Top CEOs, AI thought leaders, and companies must continue their rush to help government agencies better understand the technology.

· Create a board of non-tech thought leaders in psychology, human development, history, and others to protect citizens from manipulative laws created to benefit corporations and government while giving only the appearance of protection.

· Annually review policies and technologies to see what needs to be addressed as technologies advance.

· Insist limitations apply equally to government and corporate data use and harmful AI tech.


CEO Expectations on AI Regulations

Like the EU, the US must develop comprehensive AI regulations that address automation, AI-enabled robotics, personal data, moral safety guardrails, and risk, with timely oversight. CEOs would be bound to comply with EU AI rules for any services deployed in the G7 countries in the future. Effective AI regulations should center on the highest-risk applications and be outcome-focused. Laws need durability in the face of rapidly advancing technologies and changing societal expectations.

US President Joe Biden has met with the CEOs of top artificial intelligence companies, including Microsoft and Google, making clear that they must ensure their products are safe before deployment. Companies were asked to be more transparent with policymakers about their AI systems. The government leaders' message to the companies was that their role is to assist in reducing the risks, and that they can work together with the government to do so.

The White House plans to issue draft policy guidance on the federal government's use of artificial intelligence systems, officials announced ahead of Thursday's meeting between Vice President Kamala Harris and the heads of companies with significant AI investments. The forthcoming Office of Management and Budget (OMB) draft guidance will be released later this summer and will be open for public comment. The draft will set specific policies for developing, procuring, and using AI in government.


Who Will Protect the Citizen?

Vice President Kamala Harris recently met with the heads of Google, Microsoft, OpenAI, and Anthropic to discuss the risks of artificial intelligence. The Biden administration announced measures to address the challenges of AI, including policies that shape how federal agencies procure and use AI systems and $140 million in spending to promote AI research and development.

Citizens need to be involved in AI and privacy regulation discussions to ensure that regulations protect citizens and are not bent toward government and corporate misuse. What organizations outside AI companies could be present in legislative and policy meetings to defend human rights?

· Electronic Frontier Foundation — Defends civil freedoms and rights in the digital world.

· Center for Democracy & Technology — Promotes democratic values by shaping technology policy and architecture.

· Future of Privacy Forum — A catalyst for privacy leadership and scholarship.

These three non-profit organizations were contacted and asked whether they expected to be invited to this type of meeting and whether they plan to attend future meetings. No response was given at the time of writing.** There are other possible organizations, and several should be at the table to speak for a diverse citizenry.

** Karen Gullo, of the Electronic Frontier Foundation, responded to the main question of how groups like hers can protect citizens. She checked with her office and noted that the organization had not been invited. However, the organization had given it thought: “Going to those meetings doesn’t matter so much… we’ve all been in meetings where you’re not listened to or the key policies were made elsewhere.”

In this case, they are likely correct. We are all just dreaming if we think that, having voted in a government and paid its wages with our tax dollars, our “voted-in government temporary employees” will actually care what the people want and be held to doing their “job.” Suggestions were made on how to advocate for and support the EFF: donate, fund the organization, and activate “for good bills/proposals and against bad bills/proposals.” If workers want to be included, be loud, and help support the EFF by amplifying its voice and ideas in political and public discourse.

Donate to Electronic Frontier Foundation or join here: https://supporters.eff.org/donate/join-eff-4


Two Announcements, $140 Million, and Seven AI Research Facilities

Draft policies detailing how the government will use AI are open to the public. According to MSN, the White House announced that it would spend $140 million to launch seven new AI research institutes under the National Science Foundation (NSF). The announcement came after the May 4, 2023, meeting between Kamala Harris and top AI researchers.


Double Take — A Glitch in the Matrix

As noted in a report by Green Car Congress, the White House Office of Science and Technology Policy (OSTP), along with two other agencies, the Department of Energy (DOE) and the NSF, pledged to invest over $1 billion to help build 12 research institutes focused on AI and quantum science. As The Verge reported in August 2020, the agency partners dedicated $140 million of that funding over five years to seven AI research institutes that would further research in machine learning, precision agriculture, and other beneficial areas.

Questions were posed to the OSTP and the NSF to determine whether the $140 million investment was made twice in three years to the same seven AI research institutes, and which seven institutes would receive the funding. Neither agency would comment. Without a response, it is unclear whether Americans paid $140 million for seven institutes or $280 million for fourteen.


Will Regulation Happen Soon?

The US government has not enacted any primary federal data privacy policy. Instead, it has approached privacy and security by regulating only specific sectors and types of sensitive information (e.g., health and financial), creating overlapping and contradictory protections. However, the US Congress made bipartisan progress on comprehensive federal privacy legislation last year, advancing the proposed American Data Privacy and Protection Act to the cusp of a House floor vote. The prioritization was no fluke, and the 118th Congress is compelled to prove as much, with eyes toward finalizing a national standard again this year. As CEOs continue to push for data regulations around AI, that push may bring resolution to the US's need for major privacy reform.

The Artificial Intelligence Risk Management Framework 1.0 (the RMF) is voluntary. According to Goodwin Law, the National Institute of Standards and Technology (NIST) launched it in January 2023 as the first guideline for businesses that design, develop, deploy, or use AI systems. NIST is the federal AI standards coordinator and works to establish acceptable standards for AI globally.


In Conclusion

There is plenty to consider here. With AI, we face a massive change in our global society, for better or worse. An interesting topic of discussion, it should stay at the forefront for some time. There is plenty of hope as tech companies work together to help mitigate future problems that AI systems could create. OpenAI has a unique business strategy combining non-profit and for-profit entities that dually control the business. Tech companies are stepping up to make sure problems are being addressed. The biggest problem noted above remains the lack of representation for organizations that speak for the average Joe or Jill. The mass of the population, holding the 40% or more of positions expected to be lost to AI, should have some presence. Perhaps this will change before the citizen is entirely left out of the picture.

Further Reading:

Ajao, E. (2021, October 19). FTC pursues AI regulation, bans biased algorithms. Retrieved from Tech Target: https://www.techtarget.com/searchenterpriseai/feature/FTC-pursues-AI-regulation-bans-biased-algorithms

Berry, K. (2023, April 25). Four federal agencies cite dangers in AI systems’ ability to discriminate. Retrieved from American Banker: https://www.americanbanker.com/news/four-federal-agencies-cite-dangers-in-ai-systems-ability-to-discriminate

Burt, A. (2021, April 30). New AI Regulations are Coming: Is Your Organization Ready? Retrieved from Harvard Business Review: https://hbr.org/2021/04/new-ai-regulations-are-coming-is-your-organization-ready

Cohen, B. S., Denvil, J., Degroff, S., et al. (2021, July 8). FTC authority to regulate artificial intelligence. Retrieved from Reuters: https://www.reuters.com/legal/legalindustry/ftc-authority-regulate-artificial-intelligence-2021-07-08/

Felz, D. J., Austin, A., & Kieffer-Peretti, K. (2022, December 9). Privacy, Cyber & Data Strategy Advisory: AI Regulation in the U.S.: What’s Coming, and What Companies Need to Do in 2023. Retrieved from Alston & Bird: https://www.alston.com/en/insights/publications/2022/12/ai-regulation-in-the-us

Haddad, M. (2023, March 17). The Race for AI Governance: Navigating the International Regulatory Landscape of Artificial Intelligence. Retrieved from Jurist: https://www.jurist.org/commentary/2023/03/mais-haddad-international-regulations-artificial-intelligence/

Hawald, S. (2023, April 27). 2023–2025 Board and CEO Research Insights: Generative AI Playbook: Regulations, Productivity, and AGI. Retrieved from Forward-Looking Ideas for CEOs Newsletter: https://www.linkedin.com/pulse/2023-2025-board-ceo-research-insights-generative-ai-playbook-hawald/

Heaven, W. D. (2021, April 21). This has just become a big week for AI regulation. Retrieved from Technology Review: https://www.technologyreview.com/2021/04/21/1023254/ftc-eu-ai-regulation-bias-algorithms-civil-rights/

Jthijssen, J. (2020, August 27). US awards more than $1B to establish 12 new AI and quantum science research institutes. Retrieved from Green Car Congress: https://www.greencarcongress.com/2020/08/20200827-aiqs.html

Kerry, C. F. (2020, February 10). Protecting privacy in an AI-driven world. Retrieved from Brookings: https://www.brookings.edu/research/protecting-privacy-in-an-ai-driven-world/

Lewis P.C., J. (2022, June 21). Congress Releases Draft Federal Privacy Law with Potential Traction To Pass. Retrieved from The National Law Review: https://www.natlawreview.com/article/congress-releases-draft-federal-privacy-law-potential-traction-to-pass

O’Connor, N. (2019, January 18). Reforming the U.S. Approach to Data Protection and Privacy. Retrieved from Council on Foreign Relations: https://www.cfr.org/report/reforming-us-approach-data-protection

Quach, K. (2023, May 8). White House pledges $140 million for seven new AI research centers. Retrieved from MSN: https://www.msn.com/en-us/news/technology/white-house-pledges-140-million-for-seven-new-ai-research-centers/ar-AA1aTOMP

US President Joe Biden meets with Microsoft, Google CEOs, outlining expectations on safe artificial intelligence use. (2023, May 4). Retrieved from ABC News Australia: https://www.abc.net.au/news/2023-05-05/joe-biden-meets-with-microsoft-google-ceos-on-ai-dangers/102307256

Vincent, J. (2020, August 26). US announces $1 billion research push for AI and quantum computing. Retrieved from The Verge: https://www.theverge.com/2020/8/26/21402274/white-house-ai-quantum-computing-research-hubs-investment-1-billion


Tina Rose

Freelance technical writer for 20 years with a personal interest in AI research, quantum computing, philosophy, and rescuing animals, like our mascot, Enyo.