We will all have personal AI that is smarter than us — and sooner than we think
Personalization, multi-modality, and agents are some of OpenAI’s main development pillars for 2024. As we are headed towards Artificial General Intelligence (AGI), models that serve as our very own personal assistants are bound to pop up.
Great developments with room for improvement
New features and models are announced daily, and the pace might be getting a bit dizzying even for those of us working with AI. For the general public, ChatGPT is still pretty much synonymous with AI, even though OpenAI's freely available 3.5 model is far from being a market leader anymore. The company is evolving at an amazing pace, though. Its goals for this year, as presented by CEO Sam Altman a few weeks ago, hint that a personal AI for everyone is getting closer by the day.
Image Source: https://twitter.com/morqon/status/1777371993454579859
These updates are well on the way, but there is ample room left for improvement even after they go live. First, models will consume much less energy and data to train and run, resulting in much smaller models and more agile development. This will be a big step towards the democratization and spread of the technology. Second, AGI — the general-purpose kind of AI that can help with, well, anything — is really, really close. In some sense, it is already here.
Dario Amodei, co-founder and CEO of Anthropic, recently stated in a podcast that he believes our way to AGI “is a bit like going to a city, like Chicago. First, it’s on the horizon. Once you’re in Chicago, you don’t think in terms of Chicago, but more in terms of what neighborhood I am entering, what street I am on” — “I feel the same about AGI. We have very general systems now.”
“The curve is exponential but smooth. It looks spiky because of the reaction of society to certain milestones”.
(Source: https://www.nytimes.com/2024/04/12/opinion/ezra-klein-podcast-dario-amodei.html — from about 00:20:00 in the podcast)
I prefer Google DeepMind’s intersectional definition of AGI, with progress levels going from 0 to 5, which places our current state of progress between levels 1 and 2. I think that’s a fair assessment and can help people manage their expectations more realistically.
Heading towards accessibility
With the stakes being so high, model developers are racing each other (and legislators) at an incredible pace. Just a few weeks ago, Anthropic’s Claude 3 Opus snatched the crown away from OpenAI’s GPT-4 on the Chatbot Arena Leaderboard — the first time GPT-4 had been dethroned since its release. Then, just recently, OpenAI came out with a sneaky update and got back on the throne. Meanwhile, the performance gap between open-source and closed models is quickly closing. This competition, plus the coming gains in energy efficiency (which can’t come soon enough), will push down the pricing of paid models, making them even more accessible to all.
Besides making AI as general, accurate, and efficient as possible, reducing model size and hardware requirements might be the most important goal. Another key objective is to prepare models to learn and evolve with private data as well — with proper safety measures in place, of course. This means that, in time, everyone will have their own AI, trained on their own problems and environment, customized with a unique personality and properties.
The dark side
Unlike OpenAI’s previous releases (and despite the company’s namesake), the specifics of the new version’s architecture, size, and training data were not made public. According to one of the company’s co-founders, this openness was a mistake of the past.
This situation is far from black and white. There are strong arguments, regarding both transparency and security, for maintaining control over the technology. The exponentially increasing number of AI startups and generative models makes the scene less and less transparent, and open-source AI developments can easily be misused for disinformation and propaganda. Check out my thoughts on the topic if you are curious!
A call for legislation
As we head towards Artificial General Intelligence, most office work might be automated or, at the very least, completely transformed. Without proper preparation and regulation, this poses serious risks to society as a whole. However, if we look back, regulation has always been an afterthought, and it is difficult to imagine that it will be any different this time. Even the European Union, which has been at the forefront of this issue and is about to introduce the AI Act, has essentially recognized the potential risks of AI without yet offering concrete steps on how to combat them. In my opinion, the framework for a new way of working, a new economy, and a redefinition of value must be laid down now, before AGI is here, to avoid massive societal displacement in the short run after it arrives.
Well, here’s hoping. :)