Where Do We Go From Here?: 2024 AI Forecast

Nidhi Sinha
Women in Technology
7 min read · Dec 27, 2023
Photo by Dustin Humes on Unsplash

2023 has been a whirlwind year for AI, to say the least. From the launch of GPT-4 early in the year to the EU reaching a provisional deal on the much-debated EU AI Act in December, there has been no shortage of developments in both the innovation and the governance of artificial intelligence. AI has never been a more exciting field, with great strides made in its capabilities this year. Concurrently, concerns about bias, surveillance, exploitation, misinformation, and more have risen. The call to unite ethics and innovation in order to properly mitigate these risks has grown significantly this year. As the year wraps up, I want to offer my (non-exhaustive) forecast of the hot AI topics of 2024. I predict these will be some of the most important discussions of the next year, following up on this year's biggest events.

Regulatory Landscape

2024 will be a huge year for regulation. Various risk-based laws and frameworks were published this year, but it will now be up to individual countries to determine how to integrate them. Much also remains unclear about how open-source groups and startups fit into the regulatory landscape, and about how to prevent policy discussions from succumbing entirely to Big Tech lobbying. On a positive note, many regulatory bodies emphasized information sharing and increased accountability, which will hopefully lead to greater consensus on best practices.

  • EU: With the provisional deal reached a few weeks ago, it is now up to Member States to begin implementing the EU AI Act, though the legislation will not take effect until 2025. The Digital Services Act did go into effect this year, restricting targeted advertising (including ads aimed at minors or based on sensitive personal data) and increasing transparency and accountability in the interest of a safer online environment. The next year will determine how effectively these acts can promote human-centric values and increase accountability in the EU. Other countries will be heavily influenced by the EU's leadership and will likely align their policies with it.
  • US: The White House Executive Order on Safe, Secure, and Trustworthy AI, released at the end of October, creates new standards to protect privacy and advance equity, building on the earlier voluntary commitments from 15 leading companies. In the next year, federal agencies will have to address high-risk issues such as algorithmic discrimination, labor rights, and surveillance. In the meantime, the FTC has had a busy year addressing consumer concerns around AI, most recently banning Rite Aid from using facial recognition technology for the next five years. It also has a pending investigation into OpenAI, opened in July, which will hopefully reach a resolution in the next year. Additionally, with the 2024 presidential election looming, cracking down on disinformation will be more necessary and urgent than ever.
  • Latin America: Brazil has shown itself to be at the forefront of AI regulation in Latin America, with its AI Strategy launched in 2021 and a proposed bill under discussion this past year to establish a rights-based set of principles for the development and application of AI. Colombia and Argentina also have frameworks for ethical AI. However, more needs to be done to bring Latin American countries into international discussions. Generalized proposals such as UNESCO's digital platform regulation guidelines risk incentivizing authoritarian regulation that fails to account for human rights impacts, a risk already evident in the censorship of free expression on social networks in countries such as El Salvador, Nicaragua, and Venezuela. In the coming year, the region needs more representation at the global level so that its individual countries can mature their own regulatory frameworks.
  • Africa: Mauritius and Egypt have published national AI strategies, while Kenya has established a roadmap for using AI to increase national competitiveness. Other countries, such as Rwanda and Ghana, are creating similar strategies and roadmaps for ethical use and socioeconomic development. As with Latin America, though, the specific needs of African countries must be prioritized in international discussions. African workers have faced exploitation by the AI industry, whether through traumatizing content-moderation gigs or privacy-violating data harvesting. The lack of infrastructure and of AI tools in native languages further widens the gap between the innovation ecosystems of Africa and Western countries; this gap must be bridged to make African countries more competitive at the global level.
  • G7/G20: Both the G7 and the G20 have set priorities for their Member States to begin integrating ethical principles into AI regulation. In September, the G20 New Delhi Leaders' Declaration announced a pursuit of pro-innovation regulatory and governance responses and promoted responsible AI in service of the Sustainable Development Goals (SDGs). In October, the G7 Leaders released the International Guiding Principles and Code of Conduct, both geared toward organizations developing advanced AI systems. In the coming year, the G20 Member States will have to begin translating these commitments into tangible actions, such as responsible disclosure of AI usage, within their own national policies. The G7 Member States have already begun holding public comment periods to determine how best to integrate the guiding principles into their national policies.

Industry Standards

Tech companies will have to stay ahead of rapidly changing regulation, incorporating ethical benchmarks at every stage of the design process. However, as new applications for AI continue to spring up, other sectors will also bear responsibility for ensuring ethical use.

  • Responsible AI: Responsible AI (RAI) has become one of the tech industry's biggest buzzwords this year. It encompasses the ethical and regulatory concerns that companies must contend with when developing and using AI. Regulatory compliance may prove challenging, since some regions are far more mature in their AI policy than others. Any company planning to develop or use AI will also have to craft its own corporate AI policy that aligns with regulations and human-centric principles. The NIST AI Risk Management Framework, launched in January, will serve as a key reference point on which companies can base their own risk mitigation processes. Many companies have already begun investing heavily in RAI, with more likely to follow suit. Other tech companies, however, came under fire this year for abruptly firing their entire AI ethics teams. Public pressure for ethical AI will only continue to grow, which means RAI will become an increasingly prominent sector.
  • Worker Rights: As AI grows more prevalent, worker rights will take a larger stage in the next year. Worker training on AI will be necessary to ensure that people are properly upskilled rather than displaced by automation. AI's use in workforce management has already caused undue harm: Amazon's constant monitoring has pressured workers to hit unreasonable productivity targets, leading to serious-injury rates nearly double those at other companies' warehouses. Copyright has been one of the biggest debates of 2023 thanks to generative AI, with a class-action lawsuit against Stability AI (maker of Stable Diffusion) and the Writers Guild of America (WGA) strike both calling for better protections for artists. The WGA's historic win set a major precedent for protecting creators' livelihoods from generative AI systems trained on their work. As companies continue trying to fit AI into their workflows, protecting workers from displacement, surveillance, and exploitation will remain an ongoing conversation.
  • Sustainability: The environmental impact of AI must become a bigger priority in 2024. Training a single large language model (LLM) can emit over 626,000 pounds of CO2. The resource consumption does not stop at training, either: fine-tuning, data storage, and ongoing maintenance of these models all consume vast amounts of energy and natural resources. Current generative AI models operate on billions of parameters, and each additional round of fine-tuning adds to their carbon footprint. The evaporative cooling systems that keep data-center equipment from overheating also consume enormous quantities of water; by one estimate, training GPT-3 may have consumed around 700,000 liters of fresh water. There is also limited publicly available data on the true energy usage of the companies developing these models, making it difficult to estimate the actual environmental impact. Companies will need to become more transparent about their energy usage to be held properly accountable, and the focus will have to shift toward smaller models and more efficient training techniques rather than the massive models in use today. (A rough sketch of how such emissions estimates are computed follows this list.)
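
To make the scale of these numbers concrete, here is a minimal back-of-envelope sketch (in Python) of how training-emissions figures like the ones above are typically estimated: GPU-hours multiplied by power draw, facility overhead, and the carbon intensity of the local grid. Every constant below is an illustrative assumption for a hypothetical training run, not a measured value for any real model.

# Rough estimate of training emissions: energy consumed by the
# accelerators, scaled by data-center overhead and grid carbon
# intensity. All constants are illustrative assumptions.

GPU_COUNT = 1_000          # assumed number of accelerators
TRAINING_DAYS = 30         # assumed wall-clock training time
GPU_POWER_KW = 0.4         # assumed average draw per GPU (400 W)
PUE = 1.2                  # assumed power usage effectiveness (facility overhead)
GRID_KG_CO2_PER_KWH = 0.4  # assumed grid carbon intensity (kg CO2e per kWh)

gpu_hours = GPU_COUNT * TRAINING_DAYS * 24
energy_kwh = gpu_hours * GPU_POWER_KW * PUE        # facility-level energy
emissions_kg = energy_kwh * GRID_KG_CO2_PER_KWH    # kg of CO2e
emissions_lbs = emissions_kg * 2.20462             # convert kg to pounds

print(f"{energy_kwh:,.0f} kWh -> {emissions_lbs:,.0f} lbs CO2e")

With these particular assumptions, the run comes out to roughly 346,000 kWh and about 305,000 lbs of CO2e. Changing any input (more GPUs, a dirtier grid, a longer run) moves the estimate accordingly, which is why published figures vary so widely and why transparency about the real inputs matters.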

Individual Responsibility

While much necessary change is happening at a broad, systemic level, individuals will still have to examine their own approach to engaging with technology. Tech literacy will play a huge role in determining how far the harms of AI can spread.

  • Identifying AI-generated images: With the risk of disinformation at an all-time high, individuals must learn to spot the telltale flaws that reveal an AI model's limitations, such as poorly rendered hands or distorted lettering in images. It is also crucial to check the sources from which these images originate. Developing literacy in this area will help curb the spread of false content at an individual level. It is important to note that as the technology improves, it will only become more difficult to distinguish AI-generated images from real ones.
  • Understanding AI limitations: While AI-powered chatbots are being touted as aids for web search or remedies for loneliness, individuals will have to stay aware of the technology's shortcomings. Chatbot answers should be verified against reliable sources to avoid spreading misinformation. Individuals should also use caution when turning to chatbots for emotional support, as AI cannot experience empathy and may produce toxic outputs that cause further harm. Managing expectations of AI will help individuals stay protected from the associated risks.

Looking forward

2023 has been a huge year for AI, and there are no signs of it slowing down in 2024. The topics I have listed share themes of information sharing, consensus building, and thoughtful use of AI. Personally, I feel hopeful that the next year will bring more maturity to our discussions of AI governance as we come to better understand the implications of untethered innovation. Centering human rights and democratic values will go a long way toward ensuring that AI helps, rather than harms, us as we move forward as a society. Happy New Year!

Nidhi Sinha

Working at the intersection of technology and ethics! Learn more about me @ https://worldofnidhi.com