Use GPT-4o To Automate Posts To Medium Based on Live Web Data!

Emmett McFarlane
May 29, 2024


Disclaimer: I (a human) wrote this article.

An overview of the automated Medium posting script you’ll have built by the end of this guide.

Introduction

Language models are transforming how we create content, and the recent announcement of GPT-4o has me more excited than ever about this topic. In this guide, I'll walk you through creating a custom GPT-4o-powered script that extracts live web data and posts it to Medium. You'll need Python installed before you begin.

Please use this guide responsibly (I do not recommend constantly posting AI-generated spam).

Step 1: Set Up Your APIs

Before diving into the implementation, ensure you have the necessary API keys. You will need keys for the OpenAI, Medium, and The Pipe APIs.

Obtain API Keys

  1. OpenAI API Key: Visit OpenAI to get your API key. You can opt to do this without a key by using a (less intelligent) local language model such as Llama 3, which you can set up with this guide.
  2. Medium Integration Token: Log in to Medium, navigate to Settings, and create an integration token under the “Integration Tokens” section.
  3. The Pipe API Key (optional): Register for the API and obtain your API key. You can opt to do this without a key by following the local installation instructions (more details later in this article).
A screenshot showing how to find the Integration tokens tab on Medium to generate your key

Installations & Environment Variables

Set the API keys as environment variables. For Windows:

setx OPENAI_API_KEY "your_openai_api_key"
setx MEDIUM_API_KEY "your_medium_api_key"
setx THEPIPE_API_KEY "your_thepipe_api_key"

For macOS or Linux (note that export only applies to the current shell session; add these lines to your shell profile, e.g. ~/.zshrc, to make them permanent):

export OPENAI_API_KEY="your_openai_api_key"
export MEDIUM_API_KEY="your_medium_api_key"
export THEPIPE_API_KEY="your_thepipe_api_key"

Then install the necessary libraries using pip:

pip install openai requests thepipe_api

Restart your terminal to apply all of these changes.
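
To confirm the variables are actually visible to Python, you can run a quick sanity check (a minimal sketch using the variable names set above):

import os

# Quick sanity check that the keys set above are visible to Python
for var in ("OPENAI_API_KEY", "MEDIUM_API_KEY", "THEPIPE_API_KEY"):
    print(var, "is set" if os.environ.get(var) else "is MISSING")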

Step 2: Extract Content from a Webpage

Use The Pipe API to extract the latest text and image contents from a webpage.

from thepipe_api import thepipe
webpage_content = thepipe.extract("https://www.example.com/")

This handles extraction of dynamic content from web pages, including automatic scrolling, screenshotting, and text extraction into a multimodal prompt format for GPT-4o. With some extra setup, you can pass local=True to run this process for free on your own machine rather than through the hosted API.

The extraction results should include multiple screenshots and text captures, allowing GPT-4o to parse even the most visually complex information
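
If you'd rather skip the hosted API entirely, the local-mode call mentioned above looks like this (a sketch assuming you've completed The Pipe's local installation from Step 1):

from thepipe_api import thepipe

# Run the extraction locally instead of through the hosted API
# (requires The Pipe's local installation to be set up first)
webpage_content = thepipe.extract("https://www.example.com/", local=True)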

Step 3: Prepare Input for GPT-4o

Combine the extracted content with a user query to create an input prompt for GPT-4o.

query = {
    "role": "system",
    "content": """Please use the given extracted information to generate a Markdown-formatted Medium article.
The article should be brief, and should summarize the latest news.
Return the output in valid JSON format under the "title" and "content" keys.
Ensure the article has a clear structure with an engaging introduction, informative body, and a concise conclusion.
Include headings, subheadings, and any relevant images from the extracted content.
The article should be written in a professional tone suitable for a Medium publication.""",
}
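
Because The Pipe returns the extracted content as a list of OpenAI-style message dicts, combining it with our query is just list concatenation (Step 4 below does exactly this inline):

# The extracted content is already a list of message dicts,
# so the full model input is the content followed by our query
messages = webpage_content + [query]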

Step 4: Send Input to GPT-4o

Send the input to GPT-4o using the OpenAI API and process the response. We'll request a JSON object so the model returns exactly the fields we want, and we'll use a temperature of 0 to keep the output as consistent with the source document as possible.

import os
import json
from openai import OpenAI

# Initialize the OpenAI client (reads OPENAI_API_KEY from the environment)
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="gpt-4o",
    messages=webpage_content + [query],
    response_format={"type": "json_object"},
    temperature=0,
)

# Parse the structured JSON response
response_json = json.loads(response.choices[0].message.content)

# Print the result
print(response_json)
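
Since the next step indexes into this JSON by key, a quick defensive check (a small sketch; the key names come from the prompt in Step 3) avoids surprises:

# Guard against a malformed model response before posting
for key in ("title", "content"):
    if key not in response_json:
        raise ValueError(f"GPT-4o response is missing the '{key}' key")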

Step 5: Post to Medium

Configure authentication and use the Medium API to post the extracted content:

Configure Auth

import os
import requests

# Read the integration token set earlier
medium_api_key = os.environ["MEDIUM_API_KEY"]
headers = {
    "Authorization": f"Bearer {medium_api_key}",
    "Content-Type": "application/json",
    "Accept": "application/json",
}

# Get the authenticated user's ID
user_response = requests.get("https://api.medium.com/v1/me", headers=headers)
user_response.raise_for_status()
user_id = user_response.json()["data"]["id"]

Post the Markdown to Medium

post_data = {
    "title": response_json["title"],
    "contentFormat": "markdown",  # match the Markdown we asked GPT-4o to produce
    "content": response_json["content"],
    "tags": ["AI", "Automation", "Content Creation"],
    "publishStatus": "public",
}

# Post the article
post_response = requests.post(
    f"https://api.medium.com/v1/users/{user_id}/posts",
    headers=headers,
    json=post_data,
)

# Print the response
print(post_response.json())
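
The Medium API responds with HTTP 201 when a post is created, so a simple status check (a minimal sketch) will catch authentication or payload errors before you assume the article went live:

# Medium returns 201 Created on success; surface anything else
if post_response.status_code == 201:
    print("Published:", post_response.json()["data"]["url"])
else:
    print(f"Post failed ({post_response.status_code}): {post_response.text}")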

Conclusion

Congratulations! You’ve successfully created a custom GPT-4o script that automates the extraction of live web data, generation of a post, and publication to Medium. Again, please use this guide responsibly (I do not recommend constantly posting AI-generated nonsense), and happy posting! 🎉


Emmett McFarlane

ML engineering & astrophysics geek in Toronto. Nothing makes me prouder than building AI pipelines and seeing them work in production.