4 Things Going on in AI Right Now
Overwhelmed by the AI news cycle? Me too. Here are 4 AI headlines worth your time.
In 2022, there was a trickle of news about the AI industry. In 2023, there was a news beat. Well, here we are in 2024 and the AI news beat is now an AI firehose. The speed and volume of AI updates are dizzying…and that’s coming from a girl who loves this stuff.
Who’s doing what with whom?… And why should I care? Here are 4 things worth understanding in 2024 AI news.
1. Google launched Gemini, an AI-driven voice assistant.
Read This: Google Releases Gemini, an A.I.-Driven Chatbot and Voice Assistant (NYT, Feb 8)
What’s Happening: First, we had voice assistants like Alexa and Siri. Then we got AI chatbots like ChatGPT and Bard. Google just took a huge step forward by launching its new smartphone app, a talking AI assistant called Gemini. Gemini is powered by Google’s huge new large language model (LLM), Gemini Ultra.
Gemini is like if Alexa and ChatGPT had a baby. You’ve got the accuracy of the best voice assistant plus a multimodal AI chatbot: multimodal because it can read, understand, and respond in text, image, audio, or code formats.
Why You Care: As humans, we don’t carefully separate our tasks by type or format or category. Life comes at us as a messy, multimodal cacophony. I have high hopes that this will be the world’s first ‘assistant’ app that truly acts like a human assistant. To be continued.
Late-breaking update: It appears most of us mortals won’t be able to access the Gemini app until the cool kids are bored with it.
2. There’s a new AI safety consortium called AISIC.
Read This: Leading AI Companies Join Safety Consortium to Address Risks (Reuters, Feb 7)
What’s Happening: The Biden administration is spearheading the first real AI safety consortium, a group of over 200 entities supporting the safe development and deployment of generative AI. AISIC (aka the U.S. AI Safety Institute Consortium) members cut across industries, from Google to JPMorgan Chase to Palantir.
Why You Care: Because we don’t really want election tampering, nuclear threats, or AI overlords. It’s early, but this is finally a step in the right direction.
3. AI-driven voice robocalls are now illegal.
Read This: AI Robocalls can Trick Voters. The FCC Just Made Them Illegal. (AP, Feb 8)
What’s Happening: This week, the FCC outlawed AI-voiced robocalls in a unanimous vote, an attempt to get ahead of the use of AI to generate ‘fake news’ or tamper with elections. The ruling is tied to the malicious AI robocalls sent in the voice of President Joe Biden to New Hampshire voters, dissuading people from voting in the nation’s first primary.
Why You Care:
- It’s a proactive move to counter the negative side of AI. The unanimous FCC vote sends a strong signal that AI deception will not be tolerated in the U.S…especially related to elections and politics.
- I mean… it’s robocalls. Not sure if it’s just me, but I already get at least 2 spam robocalls on my mobile phone each day. Can you imagine adding AI-voiced robocalls on top of that?
4. Are you E/acc or E/A? Time to pick sides.
Read This: Inside the Sectarian Split Between AI Builders (The Independent, Feb 7)
What’s Happening: Most of us were left scratching our heads about the whole Sam-Altman-vs-his-own-OpenAI-board thing. He’s out! He’s going to Microsoft! All of the employees are going to walk! Oh wait…he’s back. Anyhoo.
So what the heck happened there? To understand, you need to learn a little bit about the two big tech movements that are increasingly asking us to choose sides between starkly extreme philosophies:
- E/acc: Effective Accelerationism. You believe in innovation without limits...or at least without regulatory constraint. Unrestricted tech progress will solve nearly all universal human problems (war, poverty, disease, climate change), and that includes unrestricted development of AI.
- E/A: Effective Altruism. At its core, Effective Altruism is about using evidence and reason to find the most effective ways to benefit others. It favors slow, ethical development of AI and other technologies to solve the world’s biggest problems, from global poverty to disease eradication…to the existential threat of annihilation via AI development.
As you can see, the movements aren’t opposites by definition.
It’s more that the E/A community believes the rapid evolution of technology is often not in the best interest of humanity, the planet, or life itself, while the E/acc community believes that unfettered technological development is the path to solving the world’s problems.
There’s so much more to say here. Looking for E/acc versus E/A rabbit holes? Try this and this and maybe this.
Why You Care: I mean…if you don’t care about the future of humanity and the planet and/or the threat of world annihilation, then I’m not sure we can be friends anymore.
But in all seriousness…you’ve heard me say this before: Tech is having this debate about the future of humanity without you. I cannot express enough how much we need your voice, your expertise, your knowledge. School yourself, then get involved. It may seem esoteric, but I promise you, these movements are shaping up to guide our human future with AI. Kinda important.
Oh wait! I forgot to get into what the HECK this has to do with Sam Altman and OpenAI. More to come, but essentially, OpenAI’s leadership was split between factions: the board that fired Altman is largely E/A, and they saw him as an increasingly E/acc-style threat. To be continued.
This was a good excuse for me to read the headlines, too. Let’s do it again, ok Technormalists?