AI Dev Tips 2: Convert PostgreSQL Schema into Markdown, HTML, JSON, etc.

Chris St. John
Published in AI Dev Tips
11 min read · Aug 8, 2024

Walkthrough to convert a PostgreSQL database schema into various formats

In this tutorial, we’ll walk through the process of converting a PostgreSQL database schema into various formats: Markdown, HTML, JSON, and YAML. This will help you document and use your schema across different tools and platforms effectively.

Reasons This is Useful

  • Versatile Documentation: Easily convert schemas into formats suitable for documentation tools, web applications, and APIs.
  • Improved Accessibility: Use schema data in various contexts, including technical documentation and developer tools.
  • Enhanced Communication: Share schema details with team members in different formats for clarity and collaboration.
  • Streamlined Integration: Integrate schema data into applications and pipelines with minimal conversion effort.
  • Error Identification: Identify and correct issues in schema representations across different formats.

At heart, this is a tip showcasing the various ways we can control the output format. But it is genuinely useful in several scenarios, such as the following:

5 Use Cases

  • Technical Documentation: Convert schema to Markdown or HTML for inclusion in project documentation.
  • API Integration: Export schema as JSON for use in API responses or integrations.
  • DevOps Pipelines: Use YAML format for schema representation in CI/CD pipelines or configuration files.
  • Web Applications: Integrate schema into React components or other front-end frameworks using JSON.
  • Collaboration: Share schema in various formats with team members for feedback and review.

From the AI Dev Tips series

More articles on AI dev tips: https://medium.com/ai-dev-tips

Also make sure to check out the previous article related to this: 🤖 AI Dev Tips 1: Business Idea to PostgreSQL Database Schema in 5 Minutes

Step 1: Generate Schema Details

We created a schema in the previous AI Dev Tips article.

ChatGPT 4o Prompt:

“Please provide a detailed walkthrough of the PostgreSQL schema, including technical names, primary keys, foreign keys, data types, and explanations for each field. Here is my current schema: [insert your schema we made in the previous article]”

Output (sample excerpt)
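If you'd rather pull these details straight from a live database instead of pasting DDL into the prompt, a query against `information_schema` gives you the raw material. Here is a minimal sketch; the `format_columns` helper and the sample rows are my own illustrations (running the query itself would require a driver such as psycopg2, not shown):

```python
# Standard PostgreSQL query for column details; run it with any driver
# (e.g. psycopg2) and feed the rows into format_columns below.
SCHEMA_QUERY = """
SELECT table_name, column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_schema = 'public'
ORDER BY table_name, ordinal_position;
"""

def format_columns(rows):
    """Group (table, column, data_type, is_nullable) rows into readable text
    you can paste into the prompt."""
    lines, current = [], None
    for table, column, data_type, nullable in rows:
        if table != current:
            lines.append(f"Table: {table}")
            current = table
        null_note = "" if nullable == "YES" else " NOT NULL"
        lines.append(f"  - {column}: {data_type}{null_note}")
    return "\n".join(lines)

# Sample rows in the shape the query returns:
sample = [
    ("users", "user_id", "integer", "NO"),
    ("users", "email", "character varying", "NO"),
    ("books", "title", "character varying", "NO"),
]
print(format_columns(sample))
```

This gives the model (or you) a consistent, complete starting point, instead of relying on whatever DDL happens to be at hand.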

Step 2: Convert to Markdown

ChatGPT 4o Prompt:

“Convert the following PostgreSQL schema into Markdown format. Provide a separate table for each schema component. [insert your schema we made in the previous article]”

Here is the PostgreSQL schema converted into Markdown format, with a separate table for each schema component.

## Users Table

| Column | Data Type | Constraints |
|----------------|----------------|-------------------------------------|
| user_id | SERIAL | PRIMARY KEY |
| username | VARCHAR(50) | UNIQUE NOT NULL |
| email | VARCHAR(100) | UNIQUE NOT NULL |
| password_hash | VARCHAR(255) | NOT NULL |
| first_name | VARCHAR(50) | |
| last_name | VARCHAR(50) | |
| created_at | TIMESTAMP | DEFAULT CURRENT_TIMESTAMP |

## Books Table

| Column | Data Type | Constraints |
|----------------|----------------|-------------------------------------|
| book_id | SERIAL | PRIMARY KEY |
| title | VARCHAR(255) | NOT NULL |
| author | VARCHAR(255) | NOT NULL |
| genre | VARCHAR(100) | |
| price | NUMERIC(10, 2) | NOT NULL |
| published_date | DATE | |
| stock_quantity | INTEGER | NOT NULL |

## Orders Table

| Column | Data Type | Constraints |
|------------------|----------------|-------------------------------------|
| order_id | SERIAL | PRIMARY KEY |
| user_id | INTEGER | REFERENCES Users(user_id) |
| order_date | TIMESTAMP | DEFAULT CURRENT_TIMESTAMP |
| total_amount | NUMERIC(10, 2) | NOT NULL |
| status | VARCHAR(50) | DEFAULT 'Pending' |
| shipping_address | VARCHAR(255) | NOT NULL |
| billing_address | VARCHAR(255) | NOT NULL |

## OrderItems Table

| Column | Data Type | Constraints |
|---------------|----------------|----------------------------------------------|
| order_item_id | SERIAL | PRIMARY KEY |
| order_id | INTEGER | REFERENCES Orders(order_id) |
| book_id | INTEGER | REFERENCES Books(book_id) |
| quantity | INTEGER | NOT NULL |
| price_per_unit | NUMERIC(10, 2) | NOT NULL |
| total_price | NUMERIC(10, 2) | GENERATED ALWAYS AS (quantity * price_per_unit) STORED |

## Reviews Table

| Column | Data Type | Constraints |
|-------------|-------------|---------------------------------------|
| review_id | SERIAL | PRIMARY KEY |
| book_id | INTEGER | REFERENCES Books(book_id) |
| user_id | INTEGER | REFERENCES Users(user_id) |
| rating | INTEGER | CHECK (rating >= 1 AND rating <= 5) |
| review_text | TEXT | |
| review_date | TIMESTAMP | DEFAULT CURRENT_TIMESTAMP |

## Authors Table

| Column | Data Type | Constraints |
|---------------|----------------|-------------------------------------|
| author_id | SERIAL | PRIMARY KEY |
| name | VARCHAR(255) | NOT NULL |
| biography | TEXT | |
| date_of_birth | DATE | |
| nationality | VARCHAR(100) | |

## Genres Table

| Column | Data Type | Constraints |
|-------------|----------------|-------------------------------------|
| genre_id | SERIAL | PRIMARY KEY |
| genre_name | VARCHAR(100) | UNIQUE NOT NULL |
| description | TEXT | |
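You don't have to lean on the model for purely mechanical formatting, either. Once the columns exist as data, emitting a Markdown table takes a few lines; a sketch (the `to_markdown_table` helper and the tuple layout are my own, seeded from the Users table above):

```python
def to_markdown_table(columns):
    """Render (column, data_type, constraints) tuples as a Markdown table."""
    header = "| Column | Data Type | Constraints |"
    divider = "|--------|-----------|-------------|"
    rows = [f"| {name} | {dtype} | {constraints} |"
            for name, dtype, constraints in columns]
    return "\n".join([header, divider] + rows)

users = [
    ("user_id", "SERIAL", "PRIMARY KEY"),
    ("username", "VARCHAR(50)", "UNIQUE NOT NULL"),
    ("created_at", "TIMESTAMP", "DEFAULT CURRENT_TIMESTAMP"),
]
print(to_markdown_table(users))
```

This is handy for spot-checking the model's output: regenerate a table from the source data and diff the two.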

Step 3: Convert to HTML

HTML is useful for dashboards and documentation, and can be embedded in React apps.

ChatGPT 4o Prompt:
“Convert the following PostgreSQL schema into HTML format for use in an HTML document. [insert your schema]”

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Online Bookstore Schema</title>
<style>
table {
width: 100%;
border-collapse: collapse;
margin-bottom: 20px;
}
th, td {
border: 1px solid #dddddd;
text-align: left;
padding: 8px;
}
th {
background-color: #f2f2f2;
}
</style>
</head>
<body>
<h1>Online Bookstore Schema</h1>

<h2>Users Table</h2>
<table>
<tr>
<th>Column</th>
<th>Data Type</th>
<th>Constraints</th>
</tr>
<tr>
<td>user_id</td>
<td>SERIAL</td>
<td>PRIMARY KEY</td>
</tr>
<tr>
<td>username</td>
<td>VARCHAR(50)</td>
<td>UNIQUE NOT NULL</td>
</tr>
<tr>
<td>email</td>
<td>VARCHAR(100)</td>
<td>UNIQUE NOT NULL</td>
</tr>
<tr>
<td>password_hash</td>
<td>VARCHAR(255)</td>
<td>NOT NULL</td>
</tr>
<tr>
<td>first_name</td>
<td>VARCHAR(50)</td>
<td></td>
</tr>
<tr>
<td>last_name</td>
<td>VARCHAR(50)</td>
<td></td>
</tr>
<tr>
<td>created_at</td>
<td>TIMESTAMP</td>
<td>DEFAULT CURRENT_TIMESTAMP</td>
</tr>
</table>

<h2>Books Table</h2>
<table>
<tr>
<th>Column</th>
<th>Data Type</th>
<th>Constraints</th>
</tr>
<tr>
<td>book_id</td>
<td>SERIAL</td>
<td>PRIMARY KEY</td>
</tr>
<tr>
<td>title</td>
<td>VARCHAR(255)</td>
<td>NOT NULL</td>
</tr>
<tr>
<td>author</td>
<td>VARCHAR(255)</td>
<td>NOT NULL</td>
</tr>
<tr>
<td>genre</td>
<td>VARCHAR(100)</td>
<td></td>
</tr>
<tr>
<td>price</td>
<td>NUMERIC(10, 2)</td>
<td>NOT NULL</td>
</tr>
<tr>
<td>published_date</td>
<td>DATE</td>
<td></td>
</tr>
<tr>
<td>stock_quantity</td>
<td>INTEGER</td>
<td>NOT NULL</td>
</tr>
</table>

<h2>Orders Table</h2>
<table>
<tr>
<th>Column</th>
<th>Data Type</th>
<th>Constraints</th>
</tr>
<tr>
<td>order_id</td>
<td>SERIAL</td>
<td>PRIMARY KEY</td>
</tr>
<tr>
<td>user_id</td>
<td>INTEGER</td>
<td>REFERENCES Users(user_id)</td>
</tr>
<tr>
<td>order_date</td>
<td>TIMESTAMP</td>
<td>DEFAULT CURRENT_TIMESTAMP</td>
</tr>
<tr>
<td>total_amount</td>
<td>NUMERIC(10, 2)</td>
<td>NOT NULL</td>
</tr>
<tr>
<td>status</td>
<td>VARCHAR(50)</td>
<td>DEFAULT 'Pending'</td>
</tr>
<tr>
<td>shipping_address</td>
<td>VARCHAR(255)</td>
<td>NOT NULL</td>
</tr>
<tr>
<td>billing_address</td>
<td>VARCHAR(255)</td>
<td>NOT NULL</td>
</tr>
</table>

<h2>OrderItems Table</h2>
<table>
<tr>
<th>Column</th>
<th>Data Type</th>
<th>Constraints</th>
</tr>
<tr>
<td>order_item_id</td>
<td>SERIAL</td>
<td>PRIMARY KEY</td>
</tr>
<tr>
<td>order_id</td>
<td>INTEGER</td>
<td>REFERENCES Orders(order_id)</td>
</tr>
<tr>
<td>book_id</td>
<td>INTEGER</td>
<td>REFERENCES Books(book_id)</td>
</tr>
<tr>
<td>quantity</td>
<td>INTEGER</td>
<td>NOT NULL</td>
</tr>
<tr>
<td>price_per_unit</td>
<td>NUMERIC(10, 2)</td>
<td>NOT NULL</td>
</tr>
<tr>
<td>total_price</td>
<td>NUMERIC(10, 2)</td>
<td>GENERATED ALWAYS AS (quantity * price_per_unit) STORED</td>
</tr>
</table>

<h2>Reviews Table</h2>
<table>
<tr>
<th>Column</th>
<th>Data Type</th>
<th>Constraints</th>
</tr>
<tr>
<td>review_id</td>
<td>SERIAL</td>
<td>PRIMARY KEY</td>
</tr>
<tr>
<td>book_id</td>
<td>INTEGER</td>
<td>REFERENCES Books(book_id)</td>
</tr>
<tr>
<td>user_id</td>
<td>INTEGER</td>
<td>REFERENCES Users(user_id)</td>
</tr>
<tr>
<td>rating</td>
<td>INTEGER</td>
<td>CHECK (rating >= 1 AND rating <= 5)</td>
</tr>
<tr>
<td>review_text</td>
<td>TEXT</td>
<td></td>
</tr>
<tr>
<td>review_date</td>
<td>TIMESTAMP</td>
<td>DEFAULT CURRENT_TIMESTAMP</td>
</tr>
</table>

<h2>Authors Table</h2>
<table>
<tr>
<th>Column</th>
<th>Data Type</th>
<th>Constraints</th>
</tr>
<tr>
<td>author_id</td>
<td>SERIAL</td>
<td>PRIMARY KEY</td>
</tr>
<tr>
<td>name</td>
<td>VARCHAR(255)</td>
<td>NOT NULL</td>
</tr>
<tr>
<td>biography</td>
<td>TEXT</td>
<td></td>
</tr>
<tr>
<td>date_of_birth</td>
<td>DATE</td>
<td></td>
</tr>
<tr>
<td>nationality</td>
<td>VARCHAR(100)</td>
<td></td>
</tr>
</table>

<h2>Genres Table</h2>
<table>
<tr>
<th>Column</th>
<th>Data Type</th>
<th>Constraints</th>
</tr>
<tr>
<td>genre_id</td>
<td>SERIAL</td>
<td>PRIMARY KEY</td>
</tr>
<tr>
<td>genre_name</td>
<td>VARCHAR(100)</td>
<td>UNIQUE NOT NULL</td>
</tr>
<tr>
<td>description</td>
<td>TEXT</td>
<td></td>
</tr>
</table>
</body>
</html>
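The same column data can also be turned into HTML rows programmatically. One detail worth noting: constraint expressions like CHECK (rating <= 5) contain characters that must be escaped in HTML, which the stdlib's html.escape handles. A sketch (the `to_html_table` helper is my own):

```python
import html

def to_html_table(columns):
    """Render (column, data_type, constraints) tuples as an HTML table,
    escaping characters like < and > in constraint expressions."""
    rows = []
    for name, dtype, constraints in columns:
        cells = "".join(f"<td>{html.escape(v)}</td>"
                        for v in (name, dtype, constraints))
        rows.append(f"<tr>{cells}</tr>")
    header = "<tr><th>Column</th><th>Data Type</th><th>Constraints</th></tr>"
    return "<table>\n" + "\n".join([header] + rows) + "\n</table>"

reviews = [
    ("review_id", "SERIAL", "PRIMARY KEY"),
    ("rating", "INTEGER", "CHECK (rating >= 1 AND rating <= 5)"),
]
print(to_html_table(reviews))
```

The CHECK expression comes out as `rating &gt;= 1 AND rating &lt;= 5`, which renders correctly in the browser; pasting it in raw would be parsed as markup.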

Step 4: Convert to JSON

JSON is a cross-language format that can be used easily in JavaScript, Python, Java, and virtually every other language.

JSON is also natively supported by NoSQL databases such as MongoDB and CouchDB, and by cloud provider SDKs such as those for AWS.

ChatGPT 4o Prompt:
“Convert the following PostgreSQL schema into JSON format. Provide the schema with table names, columns, column names, and data types. [insert your schema]”

Output (JSON):

{
  "Users": {
    "columns": [
      { "name": "user_id", "data_type": "SERIAL", "constraints": "PRIMARY KEY" },
      { "name": "username", "data_type": "VARCHAR(50)", "constraints": "UNIQUE NOT NULL" },
      { "name": "email", "data_type": "VARCHAR(100)", "constraints": "UNIQUE NOT NULL" },
      { "name": "password_hash", "data_type": "VARCHAR(255)", "constraints": "NOT NULL" },
      { "name": "first_name", "data_type": "VARCHAR(50)", "constraints": null },
      { "name": "last_name", "data_type": "VARCHAR(50)", "constraints": null },
      { "name": "created_at", "data_type": "TIMESTAMP", "constraints": "DEFAULT CURRENT_TIMESTAMP" }
    ]
  },
  "Books": {
    "columns": [
      { "name": "book_id", "data_type": "SERIAL", "constraints": "PRIMARY KEY" },
      { "name": "title", "data_type": "VARCHAR(255)", "constraints": "NOT NULL" },
      { "name": "author", "data_type": "VARCHAR(255)", "constraints": "NOT NULL" },
      { "name": "genre", "data_type": "VARCHAR(100)", "constraints": null },
      { "name": "price", "data_type": "NUMERIC(10, 2)", "constraints": "NOT NULL" },
      { "name": "published_date", "data_type": "DATE", "constraints": null },
      { "name": "stock_quantity", "data_type": "INTEGER", "constraints": "NOT NULL" }
    ]
  },
  "Orders": {
    "columns": [
      { "name": "order_id", "data_type": "SERIAL", "constraints": "PRIMARY KEY" },
      { "name": "user_id", "data_type": "INTEGER", "constraints": "REFERENCES Users(user_id)" },
      { "name": "order_date", "data_type": "TIMESTAMP", "constraints": "DEFAULT CURRENT_TIMESTAMP" },
      { "name": "total_amount", "data_type": "NUMERIC(10, 2)", "constraints": "NOT NULL" },
      { "name": "status", "data_type": "VARCHAR(50)", "constraints": "DEFAULT 'Pending'" },
      { "name": "shipping_address", "data_type": "VARCHAR(255)", "constraints": "NOT NULL" },
      { "name": "billing_address", "data_type": "VARCHAR(255)", "constraints": "NOT NULL" }
    ]
  },
  "OrderItems": {
    "columns": [
      { "name": "order_item_id", "data_type": "SERIAL", "constraints": "PRIMARY KEY" },
      { "name": "order_id", "data_type": "INTEGER", "constraints": "REFERENCES Orders(order_id)" },
      { "name": "book_id", "data_type": "INTEGER", "constraints": "REFERENCES Books(book_id)" },
      { "name": "quantity", "data_type": "INTEGER", "constraints": "NOT NULL" },
      { "name": "price_per_unit", "data_type": "NUMERIC(10, 2)", "constraints": "NOT NULL" },
      { "name": "total_price", "data_type": "NUMERIC(10, 2)", "constraints": "GENERATED ALWAYS AS (quantity * price_per_unit) STORED" }
    ]
  },
  "Reviews": {
    "columns": [
      { "name": "review_id", "data_type": "SERIAL", "constraints": "PRIMARY KEY" },
      { "name": "book_id", "data_type": "INTEGER", "constraints": "REFERENCES Books(book_id)" },
      { "name": "user_id", "data_type": "INTEGER", "constraints": "REFERENCES Users(user_id)" },
      { "name": "rating", "data_type": "INTEGER", "constraints": "CHECK (rating >= 1 AND rating <= 5)" },
      { "name": "review_text", "data_type": "TEXT", "constraints": null },
      { "name": "review_date", "data_type": "TIMESTAMP", "constraints": "DEFAULT CURRENT_TIMESTAMP" }
    ]
  },
  "Authors": {
    "columns": [
      { "name": "author_id", "data_type": "SERIAL", "constraints": "PRIMARY KEY" },
      { "name": "name", "data_type": "VARCHAR(255)", "constraints": "NOT NULL" },
      { "name": "biography", "data_type": "TEXT", "constraints": null },
      { "name": "date_of_birth", "data_type": "DATE", "constraints": null },
      { "name": "nationality", "data_type": "VARCHAR(100)", "constraints": null }
    ]
  },
  "Genres": {
    "columns": [
      { "name": "genre_id", "data_type": "SERIAL", "constraints": "PRIMARY KEY" },
      { "name": "genre_name", "data_type": "VARCHAR(100)", "constraints": "UNIQUE NOT NULL" },
      { "name": "description", "data_type": "TEXT", "constraints": null }
    ]
  }
}
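One payoff of the JSON form is that you can sanity-check the schema in code, for example verifying that every REFERENCES constraint points at a table that actually exists in the document. A sketch (the `check_references` helper and its regex are my own; the excerpt deliberately omits Books to show a failure):

```python
import json
import re

def check_references(schema):
    """Return (table, column, target) triples whose REFERENCES constraint
    names a table missing from the schema dict."""
    missing = []
    for table, spec in schema.items():
        for col in spec["columns"]:
            constraint = col.get("constraints") or ""
            m = re.search(r"REFERENCES\s+(\w+)\(", constraint)
            if m and m.group(1) not in schema:
                missing.append((table, col["name"], m.group(1)))
    return missing

schema = json.loads("""{
  "Users":  {"columns": [{"name": "user_id", "data_type": "SERIAL",
                          "constraints": "PRIMARY KEY"}]},
  "Orders": {"columns": [{"name": "user_id", "data_type": "INTEGER",
                          "constraints": "REFERENCES Users(user_id)"},
                         {"name": "book_id", "data_type": "INTEGER",
                          "constraints": "REFERENCES Books(book_id)"}]}
}""")
print(check_references(schema))  # Books is absent from this excerpt
```

This kind of check catches a common LLM failure mode: a converted schema that silently drops or renames a table that other tables still reference.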

Step 5: Convert to YAML

YAML is used by a range of tools, and some dev teams may prefer to keep their schema in that format for other uses, such as defining database schemas and migrations.

Tools like SQLAlchemy (for Python) or TypeORM (for TypeScript) can use YAML files for generating ORM models, OpenAPI specifications are commonly written in YAML, and DevOps tools like Ansible rely on it heavily.

ChatGPT 4o Prompt:
“Convert the following PostgreSQL schema into YAML format. Provide a schema with table names, columns, column names, and data types. [insert your schema]”

Output (YAML):

Users:
  columns:
    - name: user_id
      data_type: SERIAL
      constraints: PRIMARY KEY
    - name: username
      data_type: VARCHAR(50)
      constraints: UNIQUE NOT NULL
    - name: email
      data_type: VARCHAR(100)
      constraints: UNIQUE NOT NULL
    - name: password_hash
      data_type: VARCHAR(255)
      constraints: NOT NULL
    - name: first_name
      data_type: VARCHAR(50)
      constraints:
    - name: last_name
      data_type: VARCHAR(50)
      constraints:
    - name: created_at
      data_type: TIMESTAMP
      constraints: DEFAULT CURRENT_TIMESTAMP

Books:
  columns:
    - name: book_id
      data_type: SERIAL
      constraints: PRIMARY KEY
    - name: title
      data_type: VARCHAR(255)
      constraints: NOT NULL
    - name: author
      data_type: VARCHAR(255)
      constraints: NOT NULL
    - name: genre
      data_type: VARCHAR(100)
      constraints:
    - name: price
      data_type: NUMERIC(10, 2)
      constraints: NOT NULL
    - name: published_date
      data_type: DATE
      constraints:
    - name: stock_quantity
      data_type: INTEGER
      constraints: NOT NULL

Orders:
  columns:
    - name: order_id
      data_type: SERIAL
      constraints: PRIMARY KEY
    - name: user_id
      data_type: INTEGER
      constraints: REFERENCES Users(user_id)
    - name: order_date
      data_type: TIMESTAMP
      constraints: DEFAULT CURRENT_TIMESTAMP
    - name: total_amount
      data_type: NUMERIC(10, 2)
      constraints: NOT NULL
    - name: status
      data_type: VARCHAR(50)
      constraints: DEFAULT 'Pending'
    - name: shipping_address
      data_type: VARCHAR(255)
      constraints: NOT NULL
    - name: billing_address
      data_type: VARCHAR(255)
      constraints: NOT NULL

OrderItems:
  columns:
    - name: order_item_id
      data_type: SERIAL
      constraints: PRIMARY KEY
    - name: order_id
      data_type: INTEGER
      constraints: REFERENCES Orders(order_id)
    - name: book_id
      data_type: INTEGER
      constraints: REFERENCES Books(book_id)
    - name: quantity
      data_type: INTEGER
      constraints: NOT NULL
    - name: price_per_unit
      data_type: NUMERIC(10, 2)
      constraints: NOT NULL
    - name: total_price
      data_type: NUMERIC(10, 2)
      constraints: GENERATED ALWAYS AS (quantity * price_per_unit) STORED

Reviews:
  columns:
    - name: review_id
      data_type: SERIAL
      constraints: PRIMARY KEY
    - name: book_id
      data_type: INTEGER
      constraints: REFERENCES Books(book_id)
    - name: user_id
      data_type: INTEGER
      constraints: REFERENCES Users(user_id)
    - name: rating
      data_type: INTEGER
      constraints: CHECK (rating >= 1 AND rating <= 5)
    - name: review_text
      data_type: TEXT
      constraints:
    - name: review_date
      data_type: TIMESTAMP
      constraints: DEFAULT CURRENT_TIMESTAMP

Authors:
  columns:
    - name: author_id
      data_type: SERIAL
      constraints: PRIMARY KEY
    - name: name
      data_type: VARCHAR(255)
      constraints: NOT NULL
    - name: biography
      data_type: TEXT
      constraints:
    - name: date_of_birth
      data_type: DATE
      constraints:
    - name: nationality
      data_type: VARCHAR(100)
      constraints:

Genres:
  columns:
    - name: genre_id
      data_type: SERIAL
      constraints: PRIMARY KEY
    - name: genre_name
      data_type: VARCHAR(100)
      constraints: UNIQUE NOT NULL
    - name: description
      data_type: TEXT
      constraints:
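If PyYAML is installed you can load this output with yaml.safe_load, but for emitting this particular shape a stdlib-only dumper is enough. A sketch (the `to_yaml` helper is my own and assumes the same dict layout used in the JSON step):

```python
def to_yaml(schema):
    """Emit a {table: {"columns": [...]}} dict in the YAML shape above,
    with no third-party dependency. An empty 'constraints:' maps to null."""
    lines = []
    for table, spec in schema.items():
        lines.append(f"{table}:")
        lines.append("  columns:")
        for col in spec["columns"]:
            lines.append(f"    - name: {col['name']}")
            lines.append(f"      data_type: {col['data_type']}")
            value = col.get("constraints")
            lines.append(f"      constraints: {value}" if value
                         else "      constraints:")
    return "\n".join(lines)

schema = {"Genres": {"columns": [
    {"name": "genre_id", "data_type": "SERIAL", "constraints": "PRIMARY KEY"},
    {"name": "description", "data_type": "TEXT", "constraints": None},
]}}
print(to_yaml(schema))
```

Because JSON and YAML here carry the identical structure, round-tripping between them (or diffing a regenerated version against the model's output) is a cheap correctness check.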

Tips for Improved Formatting

  • Chunk Your Prompts: Break down large schemas into smaller chunks to avoid truncation issues.
  • Explicit Instructions: Specify clearly in prompts if you need full details without truncation.
  • Use Correct Formats: Choose the format based on your use case — Markdown for documentation, HTML for web integration, JSON for APIs, and YAML for DevOps.

More articles on AI dev tips: https://medium.com/ai-dev-tips

Store: Buy the full ebook / more info: CloudMetricsGuide.com

Cloud Metrics Guide / 190+ cloud metrics / 800+ pages

Cloud metrics covered:

  • 190+ metrics in the categories of User, Network, Reliability, Compute, Compute Scaling, API, Database, Storage, Events and Queues, Security, and Cost.
  • 190+ cloud metrics covered in detail, with insights!
  • 800+ pages of valuable content, the ULTIMATE reference guide!
  • PDF, searchable, with a linked Table of Contents (TOC) for fast navigation.
  • Scenarios giving example calculations, formulas, and realistic situations.
  • A list of “gotchas” for each metric, for better understanding.
  • Best practices and implementation suggestions for each metric!


About me

I’m a cloud architect, senior developer and tech lead who enjoys solving high-value challenges with innovative solutions.

I’m always open to discussing projects. If you need help, have an opportunity, or simply want to chat, you can reach me on X/Twitter at @csjcode (same username on Gmail). You can also schedule a Zoom call here (lower on the page): https://systemsarchitect.io/docs/consulting

I’ve worked 20+ years in software development, in enterprise settings such as NIKE and the original MP3.com, as well as at startups like FreshPatents, SystemsArchitect.io, API.cc, and Instantiate.io.

My experience spans cloud ecommerce, API design and implementation, serverless, AI integration for development, content management, frontend UI/UX architecture, and login/authentication. I give tech talks and tutorials and share documentation on architecting software. I also previously held an AWS Solutions Architect certification.

Recently I’ve been working on Instantiate.io, a value-creation experimentation tool that helps with startup planning using AI. I’m also making progress on a reference manual about cloud metrics.

As a blockchain enthusiast, I’m also actively working on applications in the Solana ecosystem.

Thanks for reading and hope to see you around!
