AI Top-of-Mind for 5.27.24 — xAI-AC

dave ginsburg
Published in AI.society
4 min read · May 27, 2024

Top-of-mind is xAI. I’ve covered Sam Altman’s $7 trillion ‘ask,’ Microsoft’s $100B Stargate announcement, and foundry investment in the US. Not to be left out are Musk and xAI, with ‘The Information’ reporting on his ‘Gigafactory for Compute’ plans. It would require at least 100K Nvidia H100 GPUs consuming a whopping 100 megawatts of power, and he is hoping for completion by the end of 2025. Another option would be to leverage Oracle. There is also speculation that any such data center could look much like Tesla’s Texas Gigafactory, pictured below.

Source: Tesla
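
A quick sanity check on those numbers (a rough sketch; the ~700 W per H100 board and the overhead factor are my assumptions, not from the report):

```python
# Back-of-envelope power estimate for a 100K-GPU cluster.
# Assumptions (mine, not from the article): ~700 W per H100 SXM board,
# plus a datacenter overhead factor (PUE) for cooling, networking, etc.

NUM_GPUS = 100_000
WATTS_PER_H100 = 700        # approximate H100 SXM board power
PUE = 1.4                   # assumed power usage effectiveness

gpu_power_mw = NUM_GPUS * WATTS_PER_H100 / 1e6
total_power_mw = gpu_power_mw * PUE

print(f"GPU power alone: {gpu_power_mw:.0f} MW")      # 70 MW
print(f"With overhead:   {total_power_mw:.0f} MW")    # ~98 MW
```

At ~70 MW for the GPUs alone, the reported 100 MW figure looks entirely plausible once cooling and networking are included.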

Note: Reference to ‘AC’ is from Asimov’s ‘The Last Question.’

Closely related is hardware availability. Given that Nvidia’s GPUs are still supply- (and cost-) constrained, what are the options? ‘EE Times’ looks into this, covering alternatives like FPGAs, AMD, TPUs via Google Cloud, GPU ‘marketplaces,’ and even CPUs. But how many GPUs do you need? Dr. Walid Soula offers answers to this question with the following formulas (a reconstruction is sketched below):

Source: Dr. Walid Soula
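
The original formulas render as an image, so here is a common back-of-envelope version in Python (my reconstruction; the byte counts and overhead factor are rules of thumb, not necessarily Soula’s exact numbers):

```python
# Rough GPU-count estimate for serving or training an LLM.
# Heuristics: 1B params * 1 byte = 1 GB; fp16 inference needs ~2 bytes
# per parameter, full training ~16 (weights + gradients + Adam states).

def gpus_needed(params_b, bytes_per_param, overhead, gpu_mem_gb=80):
    """params_b: model size in billions of parameters.
    overhead: multiplier for activations, KV cache, fragmentation."""
    mem_gb = params_b * bytes_per_param * overhead
    return mem_gb, int(-(-mem_gb // gpu_mem_gb))  # ceiling division

# Example: a 70B-parameter model on 80 GB H100s.
infer_gb, infer_gpus = gpus_needed(70, bytes_per_param=2, overhead=1.2)
train_gb, train_gpus = gpus_needed(70, bytes_per_param=16, overhead=1.2)
print(f"Inference: {infer_gb:.0f} GB -> {infer_gpus} GPUs")  # 168 GB -> 3
print(f"Training:  {train_gb:.0f} GB -> {train_gpus} GPUs")  # 1344 GB -> 17
```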

With over a week to kick the tires of Microsoft’s new ‘Recall’ feature, here are some additional reviews with differing viewpoints. The first, from Andrew Zuo, sees Recall as a potentially powerful tool. A counterpoint by Jim Clyde Monge in ‘Generative AI’ calls it a privacy nightmare. I personally don’t think it introduces any risks beyond what we already accept by storing data on our laptops.

And yet another perspective on GPT-4o, this time from Patricia Gestoso writing in ‘Code Like A Girl.’ She covers things to be worried about, such as bias, privacy, safety, and who ultimately takes responsibility. From her post:

· It’s not a coincidence. ChatGPT-4o’s voice is distinctly female — and flirtatious — in the demos. I could only find one video with a male voice.

· Unfortunately, not much has changed since chatbot ELIZA, 60 years ago…

To file away: some GPT-4o use cases that may save time at some point. Uzman Ali, writing in ‘Write A Catalyst,’ shows how to leverage the bot for web development, exam prep, video game creation, language instruction, and even parenting. Then Wei Mao in ‘Artificial Intelligence in Plain English’ shows how you can now directly open data files in ChatGPT. It’s very simple to sort, create graphs, and export; a rough equivalent of what happens behind the scenes is sketched below.

Source: Wei Mao
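
For context, ChatGPT’s data-analysis feature executes Python behind the scenes; a minimal pandas equivalent of “open, sort, chart, export” might look like this (the file name and column names are hypothetical placeholders):

```python
# What ChatGPT's data-analysis feature does behind the scenes, roughly:
# load a data file, sort it, chart it, and export the results.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("sales.csv")                       # hypothetical file
df = df.sort_values("revenue", ascending=False)     # sort

df.plot.bar(x="region", y="revenue", legend=False)  # create a graph
plt.tight_layout()
plt.savefig("revenue_by_region.png")                # exportable chart

df.to_excel("sales_sorted.xlsx", index=False)       # export sorted data
```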

And the last item on models today: can LLMs really learn? Salvatore Raieli in ‘Level Up Coding’ provides background on fine-tuning and references a recent study on what happens to LLMs undergoing tuning, and whether tuning on new knowledge results in a greater threat of hallucination. The authors conclude that it does.
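
For readers new to the mechanics, here is a minimal sketch of the kind of supervised fine-tuning the study examines, using Hugging Face transformers (the model name and data file are placeholders of my choosing; the study worked with larger models and curated new-knowledge examples):

```python
# Minimal supervised fine-tuning sketch with Hugging Face transformers.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # stand-in for any causal LM
tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# One text example per line; "new_facts.txt" is a hypothetical file.
ds = load_dataset("text", data_files={"train": "new_facts.txt"})
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=512),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=ds["train"],
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()  # then probe held-out questions for hallucinations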

Turning to China, the latest on the surveillance front. Nothing new, but the net is growing. The ‘NY Times’ article looks at the reestablishment of Mao-era techniques, with an example:

· The wall in the police station was covered in sheets of paper, one for every building in the sprawling Beijing apartment complex. Each sheet was further broken down by unit, with names, phone numbers and other information on the residents.

· Perhaps the most important detail, though, was how each unit was color-coded. Green meant trustworthy. Yellow, needing attention. Orange required “strict control.”

Add some AI into the mix, and what could go wrong?

From the Stanford AI Index Report, some charts from Part 2 on AI performance.

Lastly, I thought there wouldn’t be more to cover on the AI device front, but ‘Coffeezilla’ dives into Rabbit AI as a potential scam, beyond just a crappy overall experience. Great watching!

dave ginsburg

Lifelong technophile and author with background in networking, security, the cloud, IIoT, and AI. Father. Winemaker. Husband of @mariehattar.