State of the Art-ificial Intelligence: Outracing Regulation
AI’s rapid evolution and regulation’s slow-motion catch-up
There’s a race happening right now in the tech world. On one side, we’ve got all these tech wizards conjuring AI models, and on the other side, the good ol’ lawmakers, entangled in their agendas and bureaucracy. And in the middle, it’s us and the ever-debated questions: Can AI inherit human bias & discrimination? Are current regulations truly safeguarding our privacy and ensuring our safety? And, as we inch closer to AGI (Artificial General Intelligence), is there a real risk of a super-intelligent AI being a threat to humanity?
So, let’s get started, folks. Grab your cold brew (or maybe a kombucha) and let’s pull back the curtain.
Part I: state of the art-ificial intelligence
Quick catch-up, since we last chatted:
- LLaMA 2: By Meta (yeah, the one that used to be Facebook). It’s been trained on 40% more data than its predecessor, offers double the context length, and ships in several sizes (7B, 13B, and 70B parameters). It’s also been safety-tuned to reduce harmful or offensive content, and it’s open for fine-tuning, allowing customization for specific tasks. Leaving its pals GPT-3.5 and GPT-4 slightly jealous.
- Midjourney V5.2: Launched in June 2023, it is changing the AI game with its outstanding outpainting (the new zoom-out feature), variation modes, and prompt optimization.
- Falcon: There’s been a lot of buzz around this one. Developed by the UAE’s Technology Innovation Institute, it’s open-source with 40B parameters and gaining global interest thanks to its transparency (see the quick-start sketch after this list).
- Stability AI (SDXL): Stability AI’s text-to-image generation model is not only enhancing image quality from text prompts but also creating a buzz on Hugging Face thanks to its improved fine-tuning features and high-level customization.
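Since Falcon (and LLaMA 2, once Meta grants you access) lives on the Hugging Face hub, here’s a minimal quick-start sketch using the transformers library. It assumes you’ve installed transformers and accelerate and have serious GPU memory to spare; on consumer hardware, swapping in the smaller tiiuae/falcon-7b is more realistic:

```python
import torch
from transformers import AutoTokenizer, pipeline

# Falcon-40B needs roughly 90 GB of GPU memory in bfloat16;
# try "tiiuae/falcon-7b" on smaller setups.
model_id = "tiiuae/falcon-40b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # Falcon ships custom model code on the hub
    device_map="auto",       # let accelerate spread the weights across GPUs
)

output = generator(
    "The EU's new AI Act classifies AI systems by",
    max_new_tokens=40,
    do_sample=True,
)
print(output[0]["generated_text"])
```

The same pipeline pattern works for most hub-hosted LLMs, which is a big part of why open releases like these spread so fast.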
The tech industry is in high gear, pushing boundaries, engaging in fierce competition, and producing AI at an astonishing speed.
Part II: AI regulations
AI ACT
On June 14th, the European Parliament finally gave its nod to the Artificial Intelligence Act (AI Act or AIA). It was originally proposed in April 2021. Yeah, it took them more than two years!
This act aims to ensure AI’s safety, transparency, traceability, non-discrimination, and environmentally sustainable use within the European Union. Its approach is to classify AI systems based on the risk they present (a toy sketch of the tiering follows the list below):
Unacceptable-Risk Systems: AI systems that manipulate behavior or classify individuals based on personal traits are seen as posing unacceptable risks and will be prohibited outright.
High-Risk Systems: AI systems impacting safety or fundamental rights will be subject to EU product safety legislation and will need to be registered in an EU database. Generative AI systems, like ChatGPT, must meet transparency criteria.
Limited-Risk Systems: These AI systems are required to adhere to basic transparency standards.
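To make the tiered logic concrete, here’s a toy Python sketch. The tiers and obligations paraphrase the list above, but the example systems are my own illustrative guesses, not classifications taken from the Act itself:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "EU product-safety rules apply + registration in an EU database"
    LIMITED = "basic transparency standards"

# Hypothetical example systems mapped to tiers (illustrative guesses, not legal advice)
examples = {
    "social-scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
}

for system, tier in examples.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```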
AI’s rapid rise vs. EU’s slow-mo regulation
During the two years the EU spent deliberating these regulations, AI has rapidly advanced, leaving not just lawmakers but also enforcement mechanisms playing catch-up. So, the question is: how do we effectively & safely oversee this expanding AI landscape?
Part III: AI governance
Last Sunday, I had a personal conversation with a current member of the AI Governance team at Meta. For privacy reasons, I won’t disclose her name. But interestingly, we both belong to a tech community in Germany for people with immigration backgrounds like ours.
Discussing the AI Act, she shared some eye-openers:
Lawmakers are non-technical
Imagine having a chef draft the rules for a soccer game. Sounds wild, right? That’s sort of what’s happening in the AI scene. For instance, some laws require the use of representative data. But if that “representative” data already contains bias against minorities, any model trained on it will inherit that bias. As a result, these regulations aren’t quite hitting the mark.
For instance, when it comes to fairness, there’s no one-size-fits-all solution. It’s challenging to achieve unbiased results because there are numerous gray areas and complexities. What’s considered fair in the EU or USA might be viewed differently in China, and how gender or minority groups should be treated varies with social context. One common fairness check is sketched below.
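To make the bias point tangible, here’s a minimal sketch of one common fairness check, demographic parity. The numbers are made up purely for illustration; real fairness audits are far more involved:

```python
import pandas as pd

# Made-up loan decisions for two demographic groups (purely illustrative)
df = pd.DataFrame({
    "group":    ["A"] * 80 + ["B"] * 20,
    "approved": [1] * 60 + [0] * 20 + [1] * 5 + [0] * 15,
})

# Demographic parity: compare approval rates across groups
rates = df.groupby("group")["approved"].mean()
print(rates)                      # A: 0.75, B: 0.25
print(rates.max() - rates.min())  # 0.50 gap: the "representative" data encodes bias
```

Even this crude check shows the problem: a model trained to imitate these decisions would simply reproduce the 50-point gap, no matter how “representative” the sample is.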
Navigating a rapidly evolving tech landscape with minimal guidelines is risky. It’s comparable to piloting a plane while reading the manual on the fly (pun intended).
Consequently, companies often find themselves making assumptions about factors that should be explicitly defined.
Lawmakers should make laws based on the use case
Developing comprehensive and generic laws for AI is a complex endeavor due to the extensive and diverse scope of its implementations. Each application of AI, whether it’s for customer service, creating music, or offering medical advice, presents its own distinct set of challenges.
As we venture deeper into the world of Generative AI, we need to be clear about our intentions for using this technology. It’s difficult for regulators to come up with comprehensive guidelines because the use cases for Generative AI remain largely undefined. Society currently seems to be in a phase where it’s tempted to solve every problem with AI — akin to seeing every problem as a nail when you have a hammer. However, as time progresses, we’ll likely discern where this technology truly meets a meaningful user need.
Taking a closer look, consider the application of AI in aiding decision-making for areas like employment and housing. These domains are quite distinct from applications in targeted advertising or seemingly less impactful areas like social media. Yet, each carries its own set of substantial implications. Given these nuances, it’s evident that lawmakers face an immense task: crafting laws that address the unique characteristics and challenges of each AI, ML, and LLM application.
Distinguishing laws for apps at scale
When it comes to AI applications at scale, the game changes again. Implementing AI in vast, global operations isn’t the same as creating a playlist or recommending a restaurant. The potential implications and repercussions of decisions made by AI systems grow exponentially with scale.
Imagine an AI system tasked with determining loan approvals for millions of people or orchestrating a nation’s power grid. The ramifications of any bias, error, or security breach could be colossal and far-reaching. This is a whole different ball game and thus, requires a distinctive set of rules.
Part IV: The road ahead & closing thoughts
In this rapidly evolving landscape, certain truths are becoming evident. Global corporations with the budget to train these huge LLMs are dominating the scene, while current regulations, though well-intended, often struggle to keep pace with these tech giants.
Can AI inherit human bias & discrimination? The answer is, unfortunately, yes. Are current regulations truly safeguarding our privacy and ensuring our safety? Not quite yet. These companies are in search of more comprehensive guidelines to develop safe, unbiased systems that don’t compromise humanity’s integrity or privacy. However, at the moment, they have very little to refer to.
As we inch closer to AGI, is there a real risk of a super-intelligent AI being a threat to humanity? This question remains unanswered. It depends on whether lawmakers and tech giants recognize that their race isn’t against each other, but a shared journey. The clock is ticking; it’s not too late, but there’s no room for procrastination.
Here’s a wild thought: what if an AI could draft these laws? But then, would we need another AI to supervise it? Hmm… Sounds like a teaser for the next article. Stay curious, folks & thanks for reading.
Luciano Radicce is a seasoned entrepreneur, strategist, and founder of Lazy Consulting, specializing in AI strategy & implementation. He’s passionate about discussing the tech topics that shape our world and engaging in thought-provoking conversations that push the boundaries of ethical innovation. Support this work by sharing this article, liking it, or commenting on it.
Sources:
This article was co-authored with a little magic from GenAI
https://www.ghacks.net/2023/06/23/how-to-use-midjourney-5-2-new-features/
https://about.fb.com/news/2023/07/llama-2/
https://blog.teamwave.com/llama-2-how-it-can-be-a-game-changer-for-your-business/
https://ai.meta.com/blog/responsible-ai-progress-meta-2022/
https://huggingface.co/tiiuae/falcon-40b
https://open.spotify.com/episode/3U4cvCZC95Jp4Ln9QjZXEV?si=eb52797753d744f5