What you should know about the Generative AI Boom

Code Economics
9 min read · Jun 4, 2024


Hey everyone, welcome back to another week of Code Economics. In case you missed my startup equity series, here is a link to the last article:

Now let’s shift gears to something that’s all over the press and rapidly transforming workspaces: Generative AI.

Brief Introduction to Generative AI

Generative AI is a class of artificial intelligence systems designed to create new content that closely resembles human-made data. While much of the earlier work in the field focused on generating images, we now have systems that can generate text, images, music, and other forms of media.

While traditional AI systems are typically used to recognize patterns or make decisions based on existing data, generative AI produces novel outputs based on the patterns learned during training; that is the key distinction between the two. This has led to incredible breakthroughs in unsupervised and self-supervised learning.

History of Generative AI

While generative AI has roots in the 1950s and 1960s, significant progress only started with advances in computational power (thanks to Nvidia) and the deep learning breakthroughs that began with AlexNet in 2012.

One of the key developments in the history of generative AI was the introduction of Generative Adversarial Networks (GANs) by Ian Goodfellow and colleagues in 2014. GANs consist of two neural networks — a generator and a discriminator — that compete against each other, producing highly realistic images, videos, and other content. This breakthrough was pivotal in demonstrating the potential of AI to generate new, high-quality data.

Another major milestone was the development of Transformer models, introduced in a paper titled “Attention is All You Need” by Vaswani et al. in 2017. Transformers get brought up a lot because they revolutionized natural language processing by enabling models to understand and generate human-like text.

These foundational advancements set the stage for the current era of generative AI, where models like GPT-3 and GPT-4 have demonstrated remarkable capabilities in generating coherent and contextually relevant text, images, and even music.

The combination of improved algorithms, larger datasets, and increased computational power has propelled generative AI from a theoretical concept to something companies across various industries are now starting to take advantage of.

How Generative AI Works

Most generative AI systems these days rely on a transformer-based architecture:

This is the architecture of a transformer. The key breakthrough here was the multi-head attention blocks, which can take a sequence of words and “learn” which words are important and related. This concept is what enabled text-generating systems to produce coherent sentences.
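To make the attention idea concrete, here is a minimal sketch of scaled dot-product attention (the operation inside each attention head) in plain NumPy. This is a simplified illustration with a single head and no learned projection matrices, not a full transformer implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Each query position attends over every key position;
    the weights say how 'related' each pair of positions is."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (seq, seq) similarity matrix
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights          # weighted mix of the values

# Toy example: 3 "words", embedding dimension 4.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, weights = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one output vector per input position
```

In a real transformer this runs in parallel across many heads, each with its own learned projections of Q, K, and V, and the results are concatenated; that parallelism is what “multi-head” refers to.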

To train these models, companies first collect training data by scraping huge swaths of the web (which has profound copyright and legal implications yet to be resolved). GANs are trained by having the generator create new data samples while the discriminator evaluates them against real ones; over time, the generator gets better at producing realistic data as it tries to fool the discriminator. Large language models, by contrast, are trained to predict the next token in a sequence across enormous text corpora. OpenAI and the other foundation-model companies then use a large amount of human feedback after this pretraining phase to fine-tune the models.
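The next-token-prediction idea can be illustrated with a deliberately tiny toy: a bigram model that counts which word most often follows each word in a corpus, then generates text greedily. This is a drastic simplification (real LLMs use neural networks over subword tokens at web scale), but the core loop of "learn the patterns, then sample the next token" is the same:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for every word, how often each word follows it.
    A stand-in for the pattern learning a real model does at scale."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def generate(counts, start, max_words=10):
    """Greedily emit the most likely next word at each step."""
    words = [start]
    for _ in range(max_words - 1):
        followers = counts.get(words[-1])
        if not followers:
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

corpus = [
    "the model learns patterns",
    "the model generates text",
    "the model learns patterns from data",
]
counts = train_bigram(corpus)
print(generate(counts, "the"))  # "the model learns patterns from data"
```

Real models replace the count table with billions of learned parameters and sample from a probability distribution instead of always taking the top word, which is why their output varies between runs.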

What is the Current Environment?

All over the news, we see AI companies raising larger and larger amounts of capital. Here is a chart from January of this year:

Source: Finro

We see the explosion of Generative AI funding during the 2021 post-COVID boom, likely attributable to low interest rates. 2022 saw a dip similar to the rest of the tech sector, but ChatGPT’s release in late 2022 substantially changed the entire market: by 2023, all the dry powder sitting in venture capital flooded back in.

We now see the difference in funding valuations across startups:

Source: Finro

Public and private market multiples have largely come down for SaaS companies. One notable exception is AI, with an average ARR multiple of 40.6x. This has led many companies to embrace AI: for example, 40 of the 152 companies in the Winter 2023 Y Combinator batch carried the AI tag.

So who are the highest-paying employers in the “foundational” AI space? So far, pay seems to correlate very strongly with capital raised and valuation:

Source: Statista

These companies leverage AI in some capacity and are spending tremendous capital on purchasing compute resources and fighting for AI talent. A brief look at levels.fyi confirms salaries much higher than at comparable software businesses (along with higher hiring bars).

How to Take Advantage of LLMs as an Employee?

Not everyone works as an AI engineer, so what do you need to do to stay ahead of this technology shift?

Employees who provide value in only one of the following ways will eventually be automated:

  • Data Analysis
  • Content Writing
  • Generic Graphic Design
  • Stock Photos (Even Getty Images now has a generative AI offering)
  • Language Translation (GPT-4o made huge improvements here)
  • Low-Level Customer Service
  • Data Entry

You have to think about how to make yourself a higher-leverage employee. If you are a data analyst, think about how to use your skills to build a reproducible AI system that analyzes data efficiently, and work on fine-tuning it. If you are a content writer, see if you can build a system that uses the context you have to write a first pass over all your work and increase your output.

Remember, AI automation doesn’t mean all people lose their jobs, but it does mean that employees will be able to do more with less, as shown by Klarna’s marketing team:

genAI will save us $10m in marketing this year. We’re spending less on photographers, image banks, and marketing agencies. The numbers are mind-blowing:

- $6m less on producing images.

- 1,000 in-house AI-produced images in 3 months. Includes the creative concept, quality check, and legal compliance.

- AI-image production reduced from 6 WEEKS TO 1 WEEK ONLY.

- Customer response to AI images on par with human produced images.

- Cutting external marketing agency costs by 25% (mainly translation, production, CRM, and social agencies).

Our in-house marketing team is HALF the size it was last year but is producing MORE!

How to Utilize LLMs with Better Prompts

LLMs available to the general public are a new phenomenon, so it’s not surprising that most people have no training in how to get the most out of these powerful tools. Generally, these are the six components of a good prompt:

Source: Jeff Su

Not all of these are required for every prompt; generally, everything except the Task is optional. Here is an explanation of how each of them works:

1. Task

The Task component is the heart of your prompt. It should clearly articulate the end goal and always start with an action verb. This helps the AI understand precisely what you want it to do.

Example:

  • Summarize the main points of the following article.
  • Generate a list of creative birthday party ideas for a 10-year-old.
  • Explain the concept of blockchain in simple terms.

Guidance:

  • Be specific: The more specific your task, the better the AI can meet your expectations.
  • Start with an action verb: This sets a clear intention and direction.

2. Context

Context is the background necessary to generate relevant and accurate responses. Use the following guiding questions to structure relevant and sufficient context:

  • Who: Who is involved? Who is the intended audience?
  • What: What is the subject or topic?
  • Why: Why is this information or task important?

Example:

  • You are a teacher creating a lesson plan for high school students about the American Revolution. Provide an overview of the key events leading up to the war.

Guidance:

  • Include relevant details: Help the AI understand the broader situation.
  • Be concise yet informative: Offer enough information without overwhelming the prompt.

3. Exemplars

Exemplars improve the output by giving specific examples for the AI to reference.

Example:

  • Here are two examples of well-written job descriptions for software developers. Please write a similar job description for a data scientist.

Guidance:

  • Provide clear examples: Show what good results look like.
  • When providing multiple examples, have some variety: This can help the AI understand different nuances and styles.

4. Persona

The Persona component defines who you want the AI to emulate in the given task situation. Think of it as choosing a role or character for the AI to adopt, which can significantly influence the tone and style of the response.

Example:

  • You are a friendly and knowledgeable customer service representative. Answer the following customer query about return policies.

Guidance:

  • Choose an appropriate persona: Match the persona to the task for better alignment.
  • Be explicit: Clearly state the desired persona to set the right tone.

5. Format

Visualizing your desired result will let you know what format to use in your prompt. The format can range from lists and bullet points to essays and dialogues.

Example:

  • List the top five benefits of a healthy diet in bullet points.
  • Write a dialogue between two friends discussing their weekend plans.

Guidance:

  • Specify the format: Direct the AI on how to structure the output.
  • If you use an API, you can often coerce the LLM into the exact output structure you want.
  • If you want a very specific format in the output, it’s best to provide an example or two.
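When requesting a structured format over an API, models sometimes wrap the payload in chatter or a code fence, so it pays to validate and extract defensively. Here is a small, hedged sketch using the standard library; `model_response` is a hypothetical stand-in for whatever raw text your API call returns:

```python
import json
import re

def parse_json_response(text):
    """Try to parse a model response as JSON; if the model wrapped the
    JSON in prose or a code fence, fall back to the first {...} block."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        match = re.search(r"\{.*\}", text, re.DOTALL)
        if match:
            return json.loads(match.group(0))
        raise

# Stand-in for a raw API response; real models often add extra chatter.
model_response = 'Sure! Here is the data:\n{"benefits": ["energy", "focus"], "count": 2}'
data = parse_json_response(model_response)
print(data["count"])  # 2
```

Many provider APIs also offer a dedicated JSON or structured-output mode; when available, using it plus a validation step like the one above is more reliable than format instructions in the prompt alone.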

6. Tone

The tone is the final piece that shapes the overall feel of the response. Whether you need a formal report, a casual conversation, or a humorous take, defining the tone ensures the output aligns with your expectations.

Example:

  • Explain the concept of quantum computing in a casual and friendly tone.
  • Write a formal letter to a business partner outlining the benefits of a new partnership.

Guidance:

  • Define the tone: Clearly state how you want the response to sound.
  • Align tone with the audience: Consider who will read the output and adjust accordingly.

Putting It All Together

To illustrate how these components come together, let’s craft a comprehensive prompt:

Task: Create a lesson plan.

Context: You are a high school history teacher preparing a lesson on the American Revolution. Your students have a basic understanding of the period but need to learn about the key events and figures.

Exemplars: Here is an outline of a lesson plan on the Civil War: …. Please use a similar structure.

Persona: You are an experienced and engaging history teacher.

Format: Outline format with bullet points for each section.

Tone: Informative and engaging.

Final Prompt: Create a lesson plan for a high school history class on the American Revolution. Your students have a basic understanding of the period but need to learn about the key events and figures. Use the outline format with bullet points for each section. You are an experienced and engaging history teacher. Here is an outline of a lesson plan on the Civil War… Please use a similar structure. Make sure the tone is informative and engaging.

By incorporating all the components here, you can create effective prompts that guide AI language models to deliver high-quality, relevant, and tailored outputs. Happy prompting!
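If you assemble prompts like this often, it can help to template the six components in code so nothing gets forgotten. A minimal sketch (the function name and ordering are my own choices, not a standard):

```python
def build_prompt(task, context=None, exemplars=None, persona=None,
                 fmt=None, tone=None):
    """Assemble the six prompt components into one string.
    Only `task` is required; the rest are appended when provided."""
    parts = []
    if persona:
        parts.append(persona)
    if context:
        parts.append(context)
    parts.append(task)
    if exemplars:
        parts.append(exemplars)
    if fmt:
        parts.append(f"Format: {fmt}")
    if tone:
        parts.append(f"Tone: {tone}")
    return " ".join(parts)

prompt = build_prompt(
    task="Create a lesson plan for a high school history class on the American Revolution.",
    context="Your students have a basic understanding of the period but need to learn about the key events and figures.",
    persona="You are an experienced and engaging history teacher.",
    fmt="Outline with bullet points for each section.",
    tone="Informative and engaging.",
)
print(prompt)
```

Ordering persona and context before the task tends to work well in practice, since the model reads the framing before the instruction, but feel free to rearrange for your use case.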

Conclusion

Generative AI is one of the most transformative technologies of our time and will enable incredible productivity gains. It is in your best interest to stay informed and to use AI to increase your throughput and productivity.

I will cover additional AI topics in future articles, so subscribe for more!
