AI for Busy Product Leaders: A Recap

Kartik Sachdev
Product Leadership Journal
12 min read · Aug 8, 2023

Disclaimer: Views in this article are my own and based on publicly available information. While they may be unintentionally biased by my place of work, they do not represent those of my team or my employer.

Conversational AI in outer space, c. 2364

It’s only been half a year since ChatGPT set the record for the fastest-growing product ever, and as if that wasn’t enough, a lot has happened since then. And we’re talking announcements made daily, not weekly or monthly!

The initial hype and hysteria seem to have levelled out (though not died down), and this seems like a good time to recap what has happened so far and what the trends are pointing towards. If you’re a Product Leader, PM, PO or Product Designer who has been struggling to find your footing in this brave new world, I hope you will find this post somewhat useful. Let’s dive in.

What is it?

For many years we’ve encountered AI in the form of algorithms, such as those that rank search results by relevance or find songs similar to the ones we like. We’re also familiar with highly specialized AI, such as the computer vision used by self-driving cars. The latter is trained using Machine Learning (ML), which when applied across multiple layers is known as Deep Learning.

Large Language Models (or LLMs) were made possible by breakthroughs in self-supervised and semi-supervised ML, specifically the transformer architecture. Essentially, LLMs are trained on vast datasets, mostly scraped from the Internet, and are designed to predict the next word(s) given a sequence of words.
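To make “predict the next word” concrete, here is a deliberately simplified toy: a bigram model that picks the most frequent follower of the last word in its training text. This is not how a real LLM works (actual models use transformer networks with billions of learned parameters, not word counts), but it captures the basic idea of next-word prediction from data.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    followers = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        followers[prev][nxt] += 1
    return followers

def predict_next(followers, prompt):
    """Predict the most frequent follower of the prompt's last word."""
    last = prompt.lower().split()[-1]
    candidates = followers.get(last)
    if not candidates:
        return None  # never saw this word during "training"
    return candidates.most_common(1)[0][0]

corpus = ("the cat sat on the mat . the cat ate the fish . "
          "the dog sat on the rug .")
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" (most common word after "the")
print(predict_next(model, "sat"))  # "on"
```

An LLM does the same job at an utterly different scale: instead of a lookup table over one preceding word, it conditions on thousands of preceding tokens through a learned, high-dimensional representation.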

LLMs like OpenAI’s GPT (literally, Generative Pre-trained Transformer) revealed emergent properties resembling intelligence, due to the sheer size of data they were trained on — think of them as brains that can process data in tens of thousands of dimensions. The pre-trained models can be fine-tuned for better accuracy & relevance on data such as your company FAQ or your product documentation.

Generative AI tools leverage LLMs to process and produce multi-modal (i.e. text, images, music & video) content that resembles but does not repeat the data they were trained on. For example, Stable Diffusion is a popular text-to-image model, MusicGen can generate audio and OpenAI Codex powers GitHub Copilot for producing code.

Artificial General Intelligence (AGI) or Superintelligence is what all of this seems to be leading to. A seemingly unexpected acceleration in this field has been made possible by the convergence of 3 things in recent years: Generality (few tools solving many problems), Competence (solving them well) and Scalability (doing it at scale)[1].

Are LLMs intelligent? Yes. Can they reason? Maybe. Are they sentient? We don’t know yet. Are they smarter than humans? Depends on what we discover about their sentience ;)

As you’ve no doubt heard, this technological shift has been declared one of the biggest in our history — bigger than the invention of the Internet, the advent of cloud & mobile computing and perhaps even comparable to the invention of electricity. (BTW, The Current War is a great movie on innovation and the race to capitalize on long-reaching network effects).

At present, different companies are taking different approaches:

  • Google is taking a relatively measured and conservative approach, even though the original research on transformers came out of Google Brain in 2017 (Attention is All You Need, Ashish Vaswani et al).
  • Microsoft and OpenAI have been taking a rapid and optimistic approach, heavily relying on safeguards, feedback and disclaimers. See Microsoft Responsible AI.
  • Anthropic is taking a constitutional approach, making the AI give feedback to itself. See Constitutional AI.
  • Meta is going all-in on open-source, laying it bare to the community.
  • Apple has been largely downplaying its role but is poised to benefit hugely from the 1.65 billion active devices around the world.
  • Many companies and countries are building their own “frontier models”: LLMs that exceed the capabilities of most existing models and can perform a wide variety of tasks, e.g. UAE’s Falcon 40B model.

An excellent grounding reference is this Talk for Corporate Boards[2].

Why should I care?

  1. Task vs Intent This is potentially the first new UI paradigm in 60 years[3]. NLU (Natural Language Understanding) has been around for a while now and we are all familiar with commercial applications of it like Siri, Alexa and OK Google. However, LLMs make it possible for users to state what they want, rather than what they want the computer to do. It’s like programming (“prompting”) in English, and also has the potential to elevate just about any human-computer interaction to Star Trek levels. All of this with a high degree of reliability and reproducibility.
  2. Scale & Speed As mentioned in the previous section, the scale and speed of this shift is unprecedented. Unless governments step in or we run out of GPUs, there will be a direct or indirect impact on everyone who is a part of the global economy.
  3. Creators vs Consumers The line between creators and consumers of the technology is being rapidly blurred. If you’re a PM using ChatGPT to summarize a document for you, you’re a consumer now. If your customer is using Code Interpreter to analyze their data, they are creators now.
  4. Alignment The “alignment problem” refers to the divergence between human values, ethics & beliefs and the ways in which AI / AGI tends to behave. This is a tough one since, as recent world events have revealed only too well, even humans are not well-aligned with each other.
  5. Law & Regulation The time to influence the creation of appropriate laws & safeguards is now. Copyright is a big one: LLMs are trained on publicly visible data, which raises the question of whether companies providing Generative AI tools should share profits with the creators of the original content. Cases in point: GitHub lawsuit, Twitter & Reddit restricting content scraping and at the other end of the spectrum, new Japanese copyright regulation. There have also been instances of AI “hallucinations” being wrongfully submitted as court evidence. (PS: By far the most interesting legal story on AI copyright is… Google vs YouTube :) )

Ultimately, Generative AI is the largest platform ever and everything that is true for large platforms (like Social Media) is applicable to Generative AI too — such as the exponential value of content generated by users. We’re also seeing a burst of innovation thanks to the combination of a low entry barrier, widespread access and a remote/hybrid workforce that has extra time on their hands for experimentation and self-education.

Why should I not freak out?

OK, so that’s a lot to take in.

Is this good or bad? I don’t know… it’s progress. On one hand this is the vision utopian Sci-Fi has been painting for the last 70+ years, and we are probably the first generation to see hard Sci-Fi become reality in our time. On the other, it’s coming at a time when the world is facing multiple existential crises.

Will AI replace people? I’m going to go with the optimistic view below, which has been repeated so many times that it’s impossible to find the original author :)

Source: Linus Ekenstam on Twitter

How can I believe this is not going to end up in a Skynet destroys the world scenario? Well, because AI has already been making the world a better place for years, for example by enhancing security and content moderation on social media. Marc Andreessen makes the case for AI for good here: Why AI Will Save the World.

What should I care about?

As PMs we are trained to empathize with our users, and that skill is needed now more than ever. We must understand, educate and navigate this together with our users — even more so as the lines between users, creators and developers get blurred every hour.

  1. Misinformation I mentioned the alignment problem above, and OpenAI has gone a step further to pose the Superalignment problem. These are areas that need healthy debate and thoughtful shaping. Generative AI has an unprecedented potential for giving bad actors the ability to spread misinformation quickly and take advantage of the vulnerable, such as the elderly and children. It is our responsibility to design products that both detect and prevent this.
  2. Legislation Compensating the humans who produced content on which AI is trained is another interesting & unsolved problem. Should derivative AI works be bound by the same copyright as the training data? Should some form of joint ownership and/or profit-sharing be created? Can blockchain help in establishing a chain of ownership, perhaps even watermarking generated content? There is enough evidence from the world of self-driving cars, aviation and space to prove that there is no turning back. Legislation will adapt to advances in technology, not the other way round.
  3. Trust, safety & ethics are another important area that could benefit from product thinking applied well. We don’t want AI to cause harm or to create harmful content, and while huge strides have been made in this area, a) they come at a huge cost and b) the problem is far from solved. Also there is the question of areas where AI-generated (or AI-assisted) content should not be permissible, e.g. formal education.
  4. Accessibility Natural language interfaces are great, but they pose unique accessibility challenges (example video). Steerability is the ability to prompt an LLM to provide the desired result, and if the future of user interfaces is natural language, then we must understand and master prompting.
  5. Cost Training and deployment costs for LLMs are currently exorbitant, and have a potentially huge environmental impact. This remains the case despite rapid and constant improvements, such as QLoRA (a PEFT technique), which theoretically makes it possible to fine-tune a model on a laptop. As you include Generative AI in your products, consider the RoI — and make your stakeholders aware of it.
  6. Walled Gardens If this technology benefits all, it should be available to all; making that happen is our job, along with pushing for interoperability & standards. Remember that the rapid advances we’re seeing today have been made possible by years of development on programming paradigms, APIs and frameworks. These best practices must continue.
  7. Productivity Boost You as an individual, your team and your organization can benefit greatly from applying Generative AI in your day-to-day work. Don’t fight it; understand it and experiment with it to discover use cases where it makes sense. And in those cases, embrace it.
  8. Shaping the Future While “Copilots everywhere” seems to be the prevalent model for deploying AI / AGI at an industrial scale (we even have a Copilot Stack now), there is still a strong desire and push towards fully autonomous, self-assembling agents[4]. These future paradigms are being designed today. Also see ChatGPT Plugins.
  9. Model Collapse As LLMs run out of data sources to feed on, there is a tendency to use AI to create data to train AI. There are many who feel this approach is problematic; it can lead to something called Model Collapse which you should be aware of.
  10. The Planet Some food for thought: Consciousness arose in the body before the brain, and AI might be a dangerous delusion that ignores the consequences on the millions of other species we share our planet with (How to Be Animal: A New History of What It Means to Be Human by Melanie Challenger).
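The model-collapse dynamic in point 9 can be sketched with a toy simulation. Here a one-parameter Gaussian “model” is repeatedly re-trained on samples drawn from the previous generation’s model — a deliberately simplified stand-in for an LLM training on AI-generated data, not an actual LLM experiment. With small samples, the estimates compound their own errors and the fitted distribution drifts away from the original.

```python
import random
import statistics

def refit(mean, std, n=20, generations=30, seed=0):
    """Repeatedly 'train' a Gaussian model on data sampled from the
    previous generation's model; return the (mean, std) at each step."""
    rng = random.Random(seed)
    history = [(mean, std)]
    for _ in range(generations):
        samples = [rng.gauss(mean, std) for _ in range(n)]
        # Each generation is fit only to the previous generation's output.
        mean, std = statistics.mean(samples), statistics.stdev(samples)
        history.append((mean, std))
    return history

history = refit(0.0, 1.0)
# The later generations' parameters compound sampling error from every
# earlier generation -- a crude analogue of training AI on AI output.
print(history[0], "->", history[-1])
```

Real model collapse is far more subtle (it shows up as lost diversity in the tails of the learned distribution), but the feedback loop is the same: each generation only ever sees the previous generation’s approximation.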

We are seeing rapid iteration, massive swings, major shifts and unlikely alliances on a regular basis. As product people, it’s very risky for us to not be informed and involved.

How can I be a part of it?

  1. Learn Educate yourself, keep up with what’s happening in the industry and try to build a foundational understanding of what areas are and are not relevant to your work. Immersion is key. Personally I do two things:

a) I have set myself a goal to use at least one Generative AI tool every day, to build familiarity and context.

b) I learn in public. Every interesting new tool I come across goes into this Mindmap and I (hope to) make notes as I go along.

https://survivalcrziest.github.io/ai/index.html

2. Educate As you build your own understanding, share it with others. (That’s what I’m doing now :)). In fact, one of the earliest and most widespread uses of Generative AI tools is self-education, for example by Khan Academy.

3. Contribute Wherever you have an opportunity, put your hand up. Nobody understands this well enough, not even the people who worked on it. Trust me.

4. Separate the Problem from the Tech Generative AI can solve many problems, but not every problem needs to be solved by Generative AI. I find the Copilot framework the most pragmatic, wherein AI is available in 3 modes: Beside, Inside and Outside[5]. This provides a high degree of flexibility, autonomy and control to the user.

5. Prevent Stupidity (Ahem, I mean “Minimize Waste”) We watched silently and/or in shock as NFTs came and went; let’s make sure we don’t let that kind of ridiculousness happen this time, because the consequences could be a lot more serious than monetary loss. A common example of lack of product thinking is building a lightweight wrapper around OpenAI’s API and calling it a “product”, when the end user could simply perform the same job with their own OpenAI subscription and some copy-paste.

How can I stay up-to-date?

I recommend starting with these newsletters:

  1. Ben’s Bites: Daily digest, unparalleled in value + a hyperactive Discord community!
  2. Aspiring for Intelligence: Bi-weekly update with focus on key companies in the space
  3. The Algorithm (MIT Technology Review): Weekly newsletter focused on R&D

If podcasts are your thing:

  1. No Priors with Elad Gil & Sarah Guo
  2. Possible with Reid Hoffman & Aria Finger

Some articles and sites I found useful:

  1. [2] Every Company Needs an AI Strategy, re-purposable deck by Sarah Guo
  2. [3] AI: First New UI Paradigm in 60 Years, by Nielsen Norman Group
  3. [4] Dawn of the Agents, by Vivek Ramaswami & Sabrina Wu
  4. What we know about LLMs (Primer) by Will Thompson
  5. Vector Databases explained in under 10 Tweets by Peter Wang
  6. Prompt Engineering Guide project by Elvis Saravia
  7. AI Incident Database
  8. HuggingFace Open LLM Leaderboard

Finally, below are some high-value talks I recommend to people interested in diving deeper. Some of them are a bit technical, but you can still pick up some of the key messages without an advanced understanding of the topic:

0. John Oliver gave a talk, I mean, TV show about AI, which is a great way to introduce the average non-technical person to the topic — it can save you a lot of time in preparing your own material ;)

  1. Sparks of AGI from Microsoft Research: For inspiration on the possibilities
  2. [5] 3 AI Interaction Models for App Developers: To think about product design
  3. State of GPT talk by Andrej Karpathy: A bit technical but very educative on concepts, capabilities & recommendations
  4. AI for the Next Era Sam Altman interview with Reid Hoffman. An unfiltered view into what OpenAI was thinking about, before the hype
  5. Sam Altman on Lex Fridman podcast reveals how quickly things changed and what the new landscape might look like
  6. [1] Ilya Sutskever on Lex Fridman podcast is a bit technical but connects the dots between the past, present and future of AI / AGI. As well, lots of interesting philosophical discussion on the nature of intelligent life
  7. Greg Brockman on Lex Fridman podcast is relatively old but gives excellent insight into how OpenAI came into being and what the long game of AGI could look like

Thanks for reading, I hope you found this of some value. Please clap and share if you did, it helps others discover it too! I’d love to hear your thoughts, feedback or links to resources you’ve found useful. Please leave a comment here or hit me up on Twitter (I mean X) or LinkedIn. Keep calm and AI on!


Principal Product Manager, Conversational AI Platform @Microsoft | Accidental weekend DJ | Occasional Race Driver, SimRacer | Views are my own