AI Top-of-Mind for Dec 26

dave ginsburg
Published in AI.society
5 min read · Dec 26, 2023

Happy Boxing Day, and a bit of a long one today, so please bear with me….

Top-of-mind is Apple and its negotiations with major publishers for LLM training data. There has been a constant call for better copyright and content protections, and Apple’s move is a step in the right direction. As reported by ‘Reuters,’ the organizations contacted by Apple include Conde Nast, NBC, and IAC. What needs to be clear, though, is that no deal can be exclusive. The ‘New York Times’ also offers a perspective.

On the Christmas front, Coca-Cola’s AI-driven card generator has produced some interesting creations. As published in ‘Bootcamp,’ they are not exactly on-brand, but I’d expect additional publicity from this. And as they say, what doesn’t kill you makes you stronger.

Sources: Bootcamp (left), Author (right)

Jordan Gibbs recently published a good analysis of the most common phrases generated by ChatGPT, comparing them to the most common phrases in natural English. He then compared expected usage between the two, with some interesting results (a small sketch of this kind of comparison follows below):

Source: Jordan Gibbs
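For the curious, here is a minimal sketch of the kind of comparison such an analysis involves: count multi-word phrases (bigrams here) in a sample of model output and in a baseline English corpus, then surface the phrases that are disproportionately common in the model output. The file names and scoring are placeholders of my own; Gibbs’ actual data and method may differ.

```python
# Hypothetical sketch: compare bigram frequencies in ChatGPT output
# versus a baseline English corpus. File names are placeholders.
from collections import Counter
import re

def bigram_counts(text):
    """Count adjacent word pairs in lowercased text."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(zip(words, words[1:]))

model_counts = bigram_counts(open("chatgpt_outputs.txt").read())
baseline_counts = bigram_counts(open("english_baseline.txt").read())

# Phrases far more common in the model output than in the baseline.
overused = sorted(model_counts,
                  key=lambda bg: model_counts[bg] / (baseline_counts[bg] + 1),
                  reverse=True)[:10]
for bigram in overused:
    print(" ".join(bigram), model_counts[bigram], baseline_counts[bigram])
```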

Speaking of prompts, an article by Anish Singh Walia at ‘AI monks.io’ looks at prompts that feel illegal because they are just too helpful. He covers six areas, describing the prompts and how to use them:

· Repurpose Blog Post Prompt

· AI Content Remover-Rewriter Prompt

· Create a Course Prompt

· Sales Script Prompt

· Summarise Any Text Prompt

· High-Converting Landing Pages

On the Google front, their new AI Studio, as detailed in ‘Voicebot.ai.’

Google has unveiled AI Studio, a new web-based platform giving developers easy access to its recently announced Gemini AI large language models. The service lets builders quickly develop and deploy chatbots, apps, and other software powered by Gemini’s natural language generation capabilities, specifically Gemini Pro, the middle-size version of the three Gemini options.

The AI Studio interface is here, with additional background here.

Source: Google
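To make that concrete, here is a minimal sketch of calling Gemini Pro from Python with an API key issued through AI Studio, assuming the google-generativeai SDK; the prompt is illustrative and not from the article.

```python
# Minimal sketch: generate text with Gemini Pro via the google-generativeai SDK.
# The API key and prompt below are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key created in AI Studio

model = genai.GenerativeModel("gemini-pro")  # the middle-size Gemini tier
response = model.generate_content("Draft a short welcome message for a support chatbot.")
print(response.text)
```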

Along the same lines, Google’s Imagen 2.0 is accessible via Vertex AI and is said to produce the company’s most realistic and accurate images yet. From the ‘Generative AI’ article:

Source: Generative AI
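For a sense of how developers reach Imagen through Vertex AI, here is a hedged sketch using the Vertex AI Python SDK as I understand it; the project ID, region, model version string, and prompt are my own assumptions for illustration and are not taken from the article.

```python
# Hypothetical sketch: request an image from Imagen on Vertex AI.
# Project, region, model version, and prompt are illustrative placeholders.
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")

model = ImageGenerationModel.from_pretrained("imagegeneration@005")  # assumed Imagen 2 version
images = model.generate_images(
    prompt="A photorealistic winter street scene at dusk",
    number_of_images=1,
)
images[0].save(location="imagen_sample.png")
```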

As a follow-up to my posting on creative anatomy, this one is on a more serious front: what level of confidence should doctors place in AI-generated diagnoses? I’m sure we’ll learn more over the coming years about removing bias from diagnosis.

But as a counterpoint, Bill Gates, in a ‘CNET’ interview, offers a more upbeat view, covering AI-driven advances like fighting antibiotic resistance, help with high-risk pregnancies, HIV risk assessment, and quick access to medical records. I’ve included a snippet from the article below. Good reading!

A New York Times article in early December noted that Gates was “long skeptical” of what AI could do. That changed in August 2022, when he saw a demonstration of OpenAI’s GPT-4, the large language model underlying ChatGPT. That sold Gates on the concept, and he helped Microsoft “move aggressively to capitalize on generative AI.”

Also looking at bias, a great analysis in ‘The Information’ on building trust in AI.

Building the most robust AI industry isn’t just about powerful models and microchips — the real competitive advantage is trustworthiness. Simply put, people don’t use things they can’t trust to work well — this applies equally to car seats, toasters and AI. No matter how sophisticated they are under the hood, AI systems that lead to bad outcomes will not translate to products people want.

And some guidance:

· Create legal protections for trusted individuals and organizations outside companies engaged in ethical hacking or red teaming — breaking a system to expose a problem, with the intent of pressuring the company to fix it.

· Fund ambitious noncorporate actors to create technical approaches for the public good, instead of relying on AI companies to do this work. This might mean funding a public platform (perhaps government owned) to evaluate algorithms so outsiders can conduct their own analyses and share their findings.

· Professionalize the field of auditing algorithms by investing in training, certification, standards and paid programs such as bias bounties (modeled after the bug bounties of cybersecurity). This creates a path for talented individuals to have an impact and a livelihood doing this work outside big tech companies.

Looking at the creative side, this from ‘CNN’ is on the growing use of AI for art authentication. Recently, a work thought to be by Raphael was identified as having one of the figures, St. Joseph, painted by someone else. From the article:

· Hassan Ugail, director of the Centre for Visual Computing and Intelligent Systems at the University of Bradford, told CNN Thursday that he’d developed an algorithm to recognize genuine Raphael paintings with 98% accuracy.

· It analyzes 4,000 parameters such as brush strokes, color palette, and hue to determine whether a painting is a genuine Raphael, explained Ugail (a simplified sketch of this kind of pipeline follows below).

· The debate around the “de Brécy Tondo” feeds into wider discussions about the role of AI in art authentication, which Ugail sees as complementary to other forms of analysis, such as researching the provenance of a work.

Source: CNN
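To illustrate the general shape of such a system (and only the general shape; this is not Ugail’s algorithm), here is a simplified sketch: extract visual features, here just coarse color histograms standing in for the thousands of brush-stroke and palette parameters described, and train a classifier on labeled paintings. The file names and labels are hypothetical.

```python
# Hypothetical sketch of an attribution classifier: coarse colour-histogram
# features plus logistic regression. Not Ugail's method; file names are placeholders.
import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression

def colour_features(path, bins=16):
    """Per-channel colour histograms from a resized RGB image."""
    img = np.asarray(Image.open(path).convert("RGB").resize((256, 256)), dtype=np.float32)
    return np.concatenate([
        np.histogram(img[..., c], bins=bins, range=(0, 255), density=True)[0]
        for c in range(3)
    ])

# Hypothetical labelled corpus: 1 = attributed Raphael, 0 = other painters.
train_paths = ["raphael_01.jpg", "raphael_02.jpg", "other_01.jpg", "other_02.jpg"]
train_labels = [1, 1, 0, 0]

X = np.stack([colour_features(p) for p in train_paths])
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)

# Score an unseen work, e.g. a cropped figure from a disputed painting.
prob = clf.predict_proba(colour_features("disputed_crop.jpg").reshape(1, -1))[0, 1]
print(f"Estimated probability of the artist's hand: {prob:.2f}")
```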

Finally, a last look at the retail space, this time on how the technology applies to demand forecasting, delivery, and personalized recommendations. The ‘CNET’ article looks at Walmart, Target, and Nordstrom, covering both behind-the-scenes use and, in Walmart’s case, customer-facing tools:

Walmart in late August launched its own internal spin on ChatGPT called My Assistant, which more than 50,000 corporate employees can use to craft email pitches or construct slide decks, among other tasks.
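The article doesn’t describe how My Assistant is built, but the general pattern behind an internal assistant like this is a thin, task-specific wrapper around a chat-completion API. Here is a hedged sketch using the OpenAI Python client; the model name, system prompt, and helper function are illustrative assumptions, not Walmart’s implementation.

```python
# Hypothetical sketch of a task-specific internal assistant wrapper.
# Model, prompts, and function names are illustrative; not Walmart's code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_email_pitch(product: str, audience: str) -> str:
    """Ask the model for a short, structured email pitch."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You write concise corporate email pitches."},
            {"role": "user", "content": f"Draft a three-paragraph pitch for {product}, aimed at {audience}."},
        ],
    )
    return response.choices[0].message.content

print(draft_email_pitch("a seasonal grocery delivery bundle", "regional store managers"))
```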


dave ginsburg, AI.society
Lifelong technophile and author with a background in networking, security, the cloud, IIoT, and AI. Father. Winemaker. Husband of @mariehattar.