
The AI regulation discussion: who are we leaving out?

Jacky Zeng
Automated Decision-Making and Society
Sep 3, 2023


by Kimberlee Weatherall and Jacky Zeng

Australian governments, both state and federal, are in the midst of a conversation about the use and regulation of Artificial Intelligence (AI). The South Australian Government has announced a nation-first adoption of AI in public schools; the Draft National AI in Schools Framework has been released for consultation; both NSW and South Australia have inquiries underway; and at the Commonwealth level, the Department of Industry, Science and Resources has recently consulted on governance measures to ensure AI is developed and used safely and responsibly in Australia. ADM+S made a submission to that consultation, raising some questions about the way this discussion is headed.

Proposals for regulating AI, both globally and in Australia, have coalesced around ‘risk-based approaches’, like that proposed in the Commonwealth’s Safe and responsible AI in Australia Discussion Paper, which puts forward a “risk management approach”. This would mean more stringent requirements for systems assessed as posing higher risks. In other words, an organisation implementing a low-risk AI system would only need to complete a ‘basic self-assessment’, while an organisation deploying a high-risk system would need to complete a comprehensive assessment, reviewed by experts. The EU’s draft AI Act, Canada’s mandatory Directive on Automated Decision-Making and the US NIST’s AI Risk Management Framework are all examples of regulatory developments underpinned by a risk management approach.

But what counts as ‘high risk’? And importantly, who gets to decide? The business deploying it? Some government-appointed experts? Merely defining risk levels and allocating responsibilities to AI developers in a closed room of technocrats is not enough. As new AI capabilities threaten to shift the dynamic of how we engage with individuals and institutions, the questions we ask about what is and isn’t acceptable need to include the full array of stakeholders, including perspectives from vulnerable and/or impacted communities. We also need to ask how AI may exacerbate existing digital exclusion, and how communities without basic digital access are at risk of being left behind.

Inclusion and the digital gap

Any risk assessment must consider the challenge of digital exclusion in Australia, as well as broader questions of inclusion and diversity. ADM+S research indicates that 23.6% of Australians are digitally ‘excluded’ or ‘highly excluded’ — they have challenges accessing the internet or paying for it, and/or they have low digital literacy. This is even more of a problem among Australia’s First Nations communities. If the future is AI-driven tutoring or medical care, what does this mean for the significant parts of the population without access to that future? Do we need to require governments to provide alternatives, or investments to bridge the gap? Against this background, too, we have to ask serious questions about the Commonwealth government’s Draft National AI in Schools Framework, which envisages ‘schools engag[ing] students in learning about generative AI tools and how they work, including their potential limitations and biases, and deepening this learning as student usage increases’. Which schools will be able to do this?

More broadly, the risks and benefits of AI are unlikely to be evenly distributed or safely managed without due care for, and the active involvement of, diverse groups of Australians.

New capabilities need new public discussions

AI brings to the table new capabilities and new directions in the relationships between people on the one hand, and public and private institutions on the other. Used in combination with large-scale data collection and automation, AI systems are making possible analyses and uses that were not previously feasible. All these new capabilities have the potential to bring tremendous benefit while simultaneously posing risks at scale.

The capacity of AI to enable companies or governments to gather, link, and analyse large amounts of data, and to make inferences and predictions, raises questions:

  • should there be limits on what kinds of predictions can be made, or how they can be used? Do you want government making predictions about whether your kid is likely to finish school — and intervening on the basis of that prediction?
  • AI can enable significantly expanded automation, but do you want all your interactions with government to be through a portal? When is a human touch, or a human judgment, important? Do we want all our interactions with government to be digital, or do we need alternatives?
  • AI vendors will claim they can enable shops and stadiums to exclude people who are perceived as troublemakers — on the basis of past behaviour, perhaps, or maybe some (likely error-prone) assessment of emotional state. Are we OK with that?
  • Private sector actors can use AI to test how much people will pay for a good or service before switching — and charge different prices for the same thing. Should there be limits on that?

Striking the right balance between harms and benefits for these new capabilities is not a mere technical assessment or a question of ‘risk management’: it’s a political choice, and it will shape the world we all live in. It’s also a choice that is going to be made again and again, by people at every level of society, every time they choose to use an AI product in their life or their business. That is why the question of what is and isn’t acceptable practice needs to reflect our democratic values and involve many public conversations, right across society.

‘No decision about us without us’ — investing in public discussions.

Conversations about AI and its future directions have been dominated, to date, by a technocratic discussion among experts from government, industry, and academia, with limited effort made to involve the broader Australian public. Involving the communities most at risk of harm from the use of AI and automated decision-making (ADM) enacts the principle of ‘nothing about us without us’ — that people should have a say in the decisions which affect their lives.

This isn’t as simple as ‘running a consultation’. We need to think about what is needed for representative participation to work, what communication and information people need, and what resources will sustain ongoing processes that, like the technologies themselves, are never ‘done’. These questions are already being taken on in many contexts: in participatory processes in government; in the work of Australian organisations like the Sydney Policy Lab and the newDemocracy Foundation; and in government-funded projects such as Just Reinvest and the National Disability Data Asset. Nor need this be a solely public-sector process: the Ada Lovelace Institute recently studied the use of such processes in commercial AI labs.

AI is everywhere in the news, and it’s being discussed at every level of government. But treating it as merely a ‘risk management as usual’ question, to be discussed and determined in rooms of technocrats in large capital cities, would be a mistake. It’s time for a broader discussion about which uses are and aren’t acceptable.
