How to Automate Youtube and Blog Content with LangChain & OpenAI: A Step-by-Step Guide

Wayne Lee
8 min read · Jun 21, 2023

Table of Contents

· Overview
· YouTube Tutorial & Source Code
· Designing the Marketing Assistant App
· Part 1: The Basics
Step 1: Setting up your environment
Step 2: Getting your OpenAI API key
Step 3: Setting up the app.py file & Importing libraries
· Part 2: Building Blocks of LangChain
Step 4: Understanding the Basic Components
Step 5: Building our First LLMChain
· Part 3: Mastering the Basics of Chains in LangChain
Step 6: Creating a SimpleSequentialChain
Step 7: Building a Complex SequentialChain
· Part 4: Streamlit — Building a Web App
Step 8: Integrating with Streamlit
· Follow me to stay updated

Overview

In the ever-evolving landscape of marketing, automation is the way forward. Content creation is paramount, but it can often be overwhelming and time-consuming. The key to addressing this issue lies at the intersection of artificial intelligence and language processing.

Today, we’ll dive into how you can create an AI-powered marketing assistant that can streamline your content creation process, whether you are a seasoned marketing professional, a small business owner, or a solopreneur!

YouTube Tutorial & Source Code

Check out our YouTube version of this tutorial

YouTube Channel

Source code here

AI Marketing Assistant built with LangChain, OpenAI, and Streamlit
AI Marketing Assistant Example

Designing the Marketing Assistant App

The AI marketing assistant uses LangChain and OpenAI to generate content. It works by creating a series of “chains” — a blog post chain, a YouTube script chain, and a YouTube visuals chain.

Marketing Assistant Design Diagram

We will first learn about the LLMChain as we introduce the model and prompt structure and build our first prompt template and chain.

Then we will introduce the SimpleSequentialChain, which links together single-input, single-output chains so we can start building more complex pipelines.

We will then put all the chains together with a SequentialChain, allowing us to build a complex, multi-input model like the marketing automation chain that will power our app.

Finally, we will use Streamlit to build the front end of our AI Marketing Assistant.

Part 1: The Basics

Step 1: Setting up your environment

Before we begin, we need to make sure we have all the necessary packages installed. Here are the packages we’ll need:

  • LangChain: Simplifies the process of using large language models.
  • OpenAI: The Python client for the OpenAI language models we will be using.
  • Streamlit: A framework for building web applications in Python.
  • Python-dotenv: For managing environment variables.

To install these packages, open your terminal and run:

pip install langchain openai streamlit python-dotenv
Installing Libraries Example

Step 2: Getting your OpenAI API key

Next, you need to get your OpenAI API key. This is a unique key that allows you to access OpenAI's models (GPT-3.5/ChatGPT, GPT-4). Sign up for an account at OpenAI, and you'll find your API key in the dashboard.

OpenAI Dashboard

Once you have the key, you need to securely store it in a .env file. In your project directory, create a .env file and add the following line:

OPENAI_API_KEY=your_openai_key

Replace your_openai_key with your actual OpenAI key. This file will be used to securely store our API key.

OpenAI Key Example

Step 3: Setting up the app.py file & Importing libraries

We will create our main Python file, app.py. This is where we'll build our marketing assistant.

Creating app.py file example

We’ll start by importing the necessary libraries and loading our environment variables:

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain, SequentialChain
import os
from dotenv import load_dotenv
load_dotenv()

In these lines of code, we’re importing the OpenAI class from the langchain.llms module, which allows us to interact with the GPT model. We’re also importing the PromptTemplate class and the LLMChain, SimpleSequentialChain, and SequentialChain classes, which will be used to create our prompts and chains.

The load_dotenv() function loads the OPENAI_API_KEY environment variable from our .env file.
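
If you want to confirm that the key was actually picked up before making any API calls, an optional sanity check like the following can help (this check is purely optional and not required for the app to work):

# Optional sanity check: os.getenv returns None if the key is missing
api_key = os.getenv('OPENAI_API_KEY')
if api_key is None:
    raise ValueError('OPENAI_API_KEY not found, check your .env file.')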

Part 2: Building Blocks of LangChain

Step 4: Understanding the Basic Components

Before we begin coding, let’s get a brief overview of the components we will be using.

  1. Large Language Model (LLM): An instance of an OpenAI GPT model that will generate our text.
  2. Prompt Template: A templated instruction we give to the LLM. It takes input variables and fills them into the template.
  3. Chain: This is a combination of the LLM and a Prompt Template. It’s the smallest unit of operation in LangChain.
LangChain Basics Explanation

We’ll be using these three components to create our blog post generator.
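
To make the prompt template idea concrete before we wire one into a chain, here is a minimal standalone sketch (the template text below is just an illustration and is not part of the app):

# Standalone PromptTemplate sketch (illustrative only)
example_prompt = PromptTemplate(
    input_variables=['topic'],
    template='Give me a catchy title about {topic}.'
)

# format() fills the input variable into the template string
print(example_prompt.format(topic='eco-friendly coffee'))
# -> 'Give me a catchy title about eco-friendly coffee.'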

Step 5: Building our First LLMChain

To build our first chain, we’ll need to initialize our LLM, create a prompt template, and then combine them to form a chain.

Set up your LLM and your first prompt template:

# LLMChain Example

# Model
llm = OpenAI(temperature=0.9)

# Prompt
blog_prompt_template = PromptTemplate(
    input_variables=['product_description'],
    template='Write a blog post on {product_description}'
)

# Chain
blog_chain = LLMChain(llm=llm, prompt=blog_prompt_template, verbose=True)

# Run
product_description = 'best eco-friendly coffee'
blog_chain.run(product_description)

Tip: The temperature parameter controls the randomness of the model’s output. It typically takes values between 0 and 1; we set it to 0.9 because we want varied, creative ideas, while a lower value like 0 makes the output more deterministic.

In the code above, we set up our LLM and create a prompt template that takes a product description and generates a blog post about it. We then combine the two into an LLMChain and run it with the product_description.
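
If you want to see the effect of temperature for yourself, a quick side-by-side experiment like the one below makes the difference clear (this is purely illustrative and not part of the app; the prompt text is made up):

# Temperature comparison sketch (illustrative only)
creative_llm = OpenAI(temperature=0.9)      # varied, more surprising wording each run
deterministic_llm = OpenAI(temperature=0)   # nearly identical wording each run

tagline_prompt = 'Suggest a tagline for an eco-friendly coffee brand.'
print(creative_llm(tagline_prompt))
print(deterministic_llm(tagline_prompt))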

Part 3: Mastering the Basics of Chains in LangChain

Step 6: Creating a SimpleSequentialChain

Simple Sequential Chain Example

We have created a single chain so far. Now, let’s create a sequence of chains that build on each other.

The second prompt will generate a YouTube script using the blog post as an input:

# SimpleSequentialChain Example

# Prompt 2
youtube_script_template = PromptTemplate(
    input_variables=['blog'],
    template='''Write an engaging Youtube Short video script
for a new product based on this blog content: {blog}.'''
)

# Chain 2
youtube_script_chain = LLMChain(llm=llm, prompt=youtube_script_template,
                                verbose=True, output_key='yt_script')

The LLMChain now includes an output_key set to yt_script. These keys are important as we use them to pass the output of one chain into another.

At this point you might be wondering: we never set an output_key in the blog_chain example, and you’re right! We need to update our blog_chain to include an output_key.

# Chain
blog_chain = LLMChain(llm=llm, prompt=blog_prompt_template,
                      verbose=True, output_key='blog')

We can now create our SimpleSequentialChain that takes in the blog_chain and youtube_script_chain:

# Sequential Chain
simple_chain = SimpleSequentialChain(chains=[blog_chain, youtube_script_chain],
                                     verbose=True)

# Run
product_description = 'best eco-friendly coffee'
simple_chain.run(product_description)

Tip: Set verbose to True to see the log output in the terminal, which is very helpful while developing the app.

Pass our product_description to the simple_chain and there you have it! The chains are executed in the order they are listed.
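
One thing worth knowing: run() on a SimpleSequentialChain returns only the output of the last chain in the sequence (the YouTube script here). A small usage sketch:

# run() returns the final chain's output as a string
yt_script = simple_chain.run(product_description)
print(yt_script)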

Step 7: Building a Complex SequentialChain

Sequential Chain Example

Having mastered simple chains, it’s time to take our application to the next level. A SequentialChain is designed to handle more complex tasks where chains take multiple inputs and produce multiple outputs.

First, we generate a prompt to create the YouTube visuals. For this, we require both the YouTube script and the blog as inputs. The resulting template guides the model to generate a detailed, scene-by-scene description for the YouTube video based on the script and blog content.

# Prompt 3
youtube_visuals_template = PromptTemplate(
    input_variables=['yt_script', 'blog'],
    template='''You're an amazing director. Generate a scene-by-scene
description for the Youtube video based on the following script: {yt_script}.
Here is additional blog content if more context is needed: {blog}.'''
)

Next, we define the chain for the YouTube visuals, similar to our previous chains.

# Chain 3
youtube_visuals_chain = LLMChain(llm=llm, prompt=youtube_visuals_template,
                                 verbose=True, output_key='yt_visuals')

Finally, we create the SequentialChain, dubbed the “marketing automation chain.” It includes all three chains in the order they’ll be executed, and we specify the input and output variables for the overall chain.

# Sequential Chain
marketing_automation_chain = SequentialChain(
    chains=[blog_chain, youtube_script_chain, youtube_visuals_chain],
    input_variables=['product_description'],
    output_variables=['blog', 'yt_script', 'yt_visuals'],
    verbose=True
)

To run our application, we call the marketing automation chain with the product description. Because this chain produces multiple outputs, we call the chain object directly rather than using run(), and it returns a dictionary containing every output.

# Run
product_description = 'best eco-friendly coffee'
marketing_automation_chain(product_description)

With just a few lines of code, we’re able to compose powerful language models to generate a blog post, a YouTube script, and detailed visual scenes for a video.
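
Because we declared three output_variables, calling the chain returns a dictionary keyed by those names (plus the original input). As a small optional sketch, you could save each generated asset to its own text file; the file names here are just an example:

# Optional sketch: save each generated asset (file names are illustrative)
results = marketing_automation_chain(product_description)
for key in ['blog', 'yt_script', 'yt_visuals']:
    with open(f'{key}.txt', 'w') as f:
        f.write(results[key])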

Part 4: Streamlit — Building a Web App

Step 8: Integrating with Streamlit

Finally, to make our AI-driven marketing assistant more accessible and user-friendly, we’ll build a web app using Streamlit.

Streamlit Example

We begin by creating a title for our app and providing a brief overview of its capabilities. We then create a text input field where users can enter the product description.

import streamlit as st

# Streamlit App Front End
st.title('✨🤖 Product Marketing Assistant')
st.text(
"""Features:
1) Blog post
2) Youtube script
3) Youtube visual description
Future: Instagram, Twitter, LinkedIn post generator""")

# User Input
user_input = st.text_input('Insert product description:',
                           placeholder='New recommended feature launch for photos app on phone.')

Next, we create a button that, when clicked, runs our marketing automation chain on the user-inputted product description.

if st.button('Generate') and user_input:
    app_data = marketing_automation_chain(user_input)

    st.divider()

    st.write(f"Generated content based on {app_data['product_description']}")

    st.write('## Blog Post')
    st.write(app_data['blog'])

    st.divider()

    st.write('## Youtube')
    st.write('### Script')
    st.write(app_data['yt_script'])
    st.write('### Visuals')
    st.write(app_data['yt_visuals'])

That’s it! To run our marketing assistant, open up the terminal and type streamlit run app.py.

AI Marketing Assistant Running Example

You’ve now built your very own AI-powered marketing assistant. This tool allows you to automate your content creation process, save time, and focus on what really matters: growing your business!

Leave a comment on what you built or if you have any questions or just want to say something nice. Happy coding, and until next time!

Follow me to stay updated

(also, I could always use some more motivation, as creating this content takes a while 🙂)

YouTube — www.youtube.com/@principlesofai

Instagram — www.instagram.com/principlesofai

LinkedIn — www.linkedin.com/in/wayn9/

