OpenAI and Google stumble into the future

When OpenAI introduced ChatGPT a year and a half ago, the whole world was amazed at the possibilities of generative AI.

Michiel Frackers
Operations Research Bit
Jun 2, 2024


OpenAI and Google fight over dominance in AI, choosing speed over quality.
Image by Author created with Midjourney.

Every Internet user suddenly had a free tool to perform all kinds of tasks better and faster. Led by CEO Sam Altman, OpenAI is now behaving like a difficult teenager and Google is reacting like a boomer who has trouble keeping up with the times.

OpenAI piles scandal on success

OpenAI has found a unique formula for stringing together scandals and successes. The formula is simple and effective: after yet another scandal, such as the dubious staff contracts under which departing employees could apparently only keep their vested stock by signing mob-style vows of silence, positive news conveniently 'leaks' out.

The shares of an OpenAI employee who broke the omerta. Image created with Midjourney.

Last week, that positive news was a possible agreement between OpenAI and Apple, whereby new generations of iPhones and iOS software would be equipped with OpenAI’s ChatGPT software. The always secretive Apple will be greatly annoyed by the leaked news. Microsoft CEO Satya Nadella wasn’t cheering about it either, probably also because he had to learn the news through the media.

OpenAI CEO Sam Altman makes the same move at every scandal, best described as a Vatican pirouette: he says it's really, really, really bad what happened (if it happened at all, it fell like rain, through no one's fault), boo-hoo, but that he himself of course knew nothing about it and that OpenAI will handle these matters much better from now on. Promise, hand on heart. The problem is that you never see what the other hand is doing.

It is the same defense as a few weeks ago when the leads of the safety team at OpenAI resigned, disgusted with the lack of support by Altman and the leadership team for addressing safety concerns. The list of scandals at OpenAI is now so long that a publicly traded company would have long since parted ways with its CEO.

Media stories carry little weight unless they are sourced to multiple colleagues and former colleagues of the person being reported on. Former board member Helen Toner's account of why Altman was fired from OpenAI in November, a firing Toner herself helped bring about before he returned, seems at first glance like a revenge story. But her description of the lack of safety oversight is particularly disturbing:

“On multiple occasions he gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically impossible to know how well those safety processes were working or what might need to change.”

Altman thinks he can solve the problem of poor safety oversight at OpenAI by taking charge of the safety team himself. That's like making the fox manager of the chicken coop.

The more one looks into Altman’s past, both about his time at Y Combinator and his own startup Loopt, the more he seems, let’s keep it classy folks, smoother than an eel in a bucket of snot being held by an octopus smeared with Vaseline under a shower of olive oil. With all due respect.

The question is how long Satya Nadella, CEO of the world's most valuable company (that's Microsoft, until Nvidia announces its next quarterly results), will tolerate Altman's antics. Whereas Mark Zuckerberg at Meta and, earlier, Larry Page and Sergey Brin at Google ensured through a sophisticated share structure that they could appoint the majority of board members and thus always retain corporate control, Altman has to deal with a relatively independent board and, in Microsoft, one major shareholder with 49% of the shares.

OpenAI a B-Corp?

OpenAI has an unusual structure, with a for-profit company accountable to a nonprofit. The Information, excellent as ever, reports that Altman and his allies are trying to turn OpenAI into a social enterprise, known as a B-Corp.

B-Corps allow companies to have additional purposes beyond shareholder interest, protecting them from certain types of shareholder lawsuits if they act for reasons other than profit. A B-Corp could be a middle ground between OpenAI’s current structure and that of a fully profit-oriented company. The conversion of OpenAI to a B-Corp could also be a moment for Altman to try to adjust OpenAI’s governance structure in his favor.

New ChatGPT a lesson in AI hype

When OpenAI presented the latest version of its immensely popular ChatGPT chatbot, GPT-4o, in May, it featured a new voice with seemingly human emotions. The online demonstration also showed a bot tutoring a child. Earlier, I described these gimmicks as irrelevant, like decorative rims on a Leopard tank.

GPT-4o is now available to everyone, but pretty much without all the bells and whistles, much to the chagrin of the New York Times, which sees through OpenAI’s use of old-fashioned vaporware tactics to get the better of Google.

The problem is that the rivalry between OpenAI and Google has now taken on such forms that even OpenAI's claim that networks from Russia, China, Iran and Israel were trying to manipulate public opinion with AI-generated content is being met with doubt.

OpenAI stops the Russians?

OpenAI reported Thursday that it has shut down five covert influence operations that used its AI models for deceptive activities. These operations, which OpenAI allegedly shut down between 2023 and 2024, originated from Russia, China, Iran and Israel and sought to manipulate public opinion and influence political outcomes without revealing their true identities or intentions, according to OpenAI.

OpenAI's report comes amid concerns about the impact of generative AI on the many elections scheduled worldwide this year, including in the US. In its findings, OpenAI described how networks of humans ran influence operations and used generative AI to produce text and images on a much larger scale than before, and to fake engagement by generating AI-written responses to social media posts.

Voters in India inundated by millions of deepfakes

In a year when nearly half the world's population is going to the polls, these are exactly the safety issues that the critics who have since left OpenAI were concerned about.

In India, voters are now inundated with millions of deepfakes, much to the delight of the politicians who create them. They welcome these new tools, but many voters are unaware that they are looking at a computer-generated person. With the British and U.S. elections coming up, the way AI companies allow their technology to be used will have to be looked at extremely critically.

Leak at Google Search

Precisely in light of its great social responsibility, it was awkward, to say the least, that Google allowed a set of 2,500 internal documents to leak, raising questions about the company's previous statements.

The Google Search algorithm has not been leaked and SEO experts have not suddenly uncovered all the secrets about how Google works. But the information that did leak this week is still huge. It offers an unprecedented glimpse into the inner workings of Google that are normally closely guarded.

Perhaps the most remarkable revelation from the 2,500 documents is that Google representatives appear to have misled the public in the past when explaining how Google's search engine evaluates and ranks information. That inspires little confidence in how Google will handle critical questions about how the company's AI applications are deployed.

Google’s AI sometimes gives false, misleading and dangerous answers

The leak at Google last week was not even the search giant's biggest problem. That turned out to be the malfunctioning of the AI-generated answers Google now places above its search results.

From recipes with glue on pizza to recommendations for "blinker fluid," the quality of Google's AI is still far from good. It raises the question of why Google is unleashing this type of technology, which is clearly still in its early stages, on the general public.

Failures of Google's AI Overviews appeared to occur when the system did not realize that a quoted source was joking. An AI answer suggesting "1/8 cup of non-toxic glue" to keep cheese from sliding off pizza could be traced back to someone, somewhere online, who was trying to troll a discussion.

A comment recommending "blinker fluid" for a noiseless turn signal can similarly be traced to a troll on a dubious advice forum, which Google's AI Overviews apparently consider a reliable source.

As I experienced myself last week when trying to have average returns calculated, numbers remain a challenge for Google's AI. When asked about the relative value of dollars over time, Google was off by dozens of percentage points, according to the inflation calculator Google itself cites. In another example, Google's AI said there are 738,523 days between October 2024 and January 2025.
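That last claim is easy to check. A minimal Python sketch (assuming, since no exact days were reported, that the question means the span from October 1, 2024 to January 1, 2025) shows the real answer is on the order of ninety days, not hundreds of thousands:

```python
from datetime import date

# Assumed interpretation: "October 2024 to January 2025" means
# October 1, 2024 through January 1, 2025. The reported question
# gave no exact days, so these endpoints are illustrative.
start = date(2024, 10, 1)
end = date(2025, 1, 1)

print((end - start).days)  # 92 -- nowhere near the 738,523 days Google's AI claimed
```

Even with the most generous reading, the first day of October 2024 to the last day of January 2025 is only 122 days, so the AI answer is off by a factor of several thousand.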

Users were told to drink a lot of urine to flush out a kidney stone and that Barack Obama was a Muslim president. Another Google answer said John F. Kennedy graduated from the University of Wisconsin in six different years, three of them after his death. Which is clearly nonsense, since everyone knows JFK was touring with Elvis during those years.

According to Google, it has since made "more than a dozen technical improvements" to its AI systems in response to these instances of misinformation.

Vaporware by OpenAI leads to blunders at Google

The tech industry is in the midst of an AI revolution, with both start-ups and big tech giants trying to make money with AI. Many services are being announced or launched before they are good enough for the general public, while companies like OpenAI and Google are fighting to present themselves as leaders.

The apology message from Liz Reid, who is responsible for Google's search product, reads like a strange combination of public penance, uninhibited chest-beating and insulting the customer. Something like: 'Yeah, sorry, we made a mistake, but do you know how hard what we do is? So don't ask stupid questions!'

Ars Technica, as is often the case, comes up with a clear conclusion:

“Even if you allow for some errors in experimental software rolled out to millions of people, there’s a problem with implied authority in the erroneous AI Overview results. The fact remains that the technology does not inherently provide factual accuracy but reflects the inaccuracy of websites found in Google’s page ranking with an authority that can mislead people. You’d think tech companies would be striving to build customer trust, but now they are building AI tools and telling us not to trust the results because they may be wrong. Maybe that’s because we are not actually the customers, but the product.”



I write a newsletter every Sunday about technology that shapes our lives. Founder of http://bluecity.solutions.