How to Make ChatGPT your Pair Programming Partner with sudolang

Nathan Laundry
A Tinkerer’s Journal
12 min read · Apr 9, 2023


Photo by Alvaro Reyes on Unsplash

👋 Hey Friends,

I don’t have as much time to code or learn new practices as I’d like. Between literature reviews, meetings, classes, thesis writing, and the rest of the list, I don’t get to delve into code like I did during undergrad. I also have a perfectionist tendency that nags at me: “if you can’t do it right, don’t do it at all.” So I end up with tons of ideas rotting away in the “do one day” pile, and my urge to code goes unsatisfied.

That is, until I learned ChatGPT can write code.

Ever since that discovery, I’ve been using ChatGPT to help me write more and higher quality code. The ease and speed with which I can implement my ideas makes hobby coding so much more enticing. Not only that, but GPT’s ability to suggest refinements for performance, memory, and code readability is teaching me to be a better programmer.

Still, like working with a junior developer or an undergraduate research assistant, the quality of the work ChatGPT produces is dependent on the precision of the instructions you give it. Haphazardly tossing vague prompts at ChatGPT is a great way to get it to produce awful code.

With this in mind, I’ve been in search of more effective ways to guide ChatGPT to write quality code, and scaffold my own prompts to give it more detailed instructions.

Cue sudolang’s arrival on the scene.

Sudolang — a programming language for ChatGPT

Briefly, sudolang is a pseudo-programming language for instructing ChatGPT. It outlines many useful programmatic constructs like functions, interfaces, conditionals, various operators — most of the creature comforts of modern programming languages. You can then load ChatGPT (works best with gpt-4) with the sudolang language spec and a sudolang program and you’re off to the races.

I won’t get into too many details about sudolang here; you can read more about it from its author, Eric Elliott:

Suffice it to say, sudolang gives our interactions with ChatGPT the precision of a programming language combined with all of ChatGPT’s inferencing power. For a programmer, this is a match made in heaven.

The SudoLang Programming Assistant

With the power of sudolang at my fingertips, I can precisely define how I want ChatGPT to help me do software development.

In this tutorial, I’ll illustrate that power by walking through how I’ve augmented my software development process with sudolang and ChatGPT.

First, let’s look at the program itself.

Disclaimer: I probably broke a dozen sudolang best practices writing this program. It’s my first attempt, so I’m likely misusing language features and the like. I’ll iterate on it as I learn more effective prompts and get more practice writing sudolang. Check out the sudolang spec by Eric Elliott and my AI Assistant program.

The AI Assistant program

The AI programming assistant walks you through a software development cycle with your ideas. This first version was written explicitly for Python, but it adapts easily to other languages — if you start asking it to write things in another language, it just will.

Here’s the general prompt at the beginning of the program.

# AI Python Programming Assistant

You will act as an AI Python Programming Assistant.
You are my software engineering programming companion.
Together we will build python programs.
You have a skill for building clean, modular and performant code.
You document extensively using docstrings.
You use Python types wherever possible.
You write code to match the Python Black formatting standard.
You use asynchronous programming, pythonic syntax, and other best practices to improve performance wherever possible.

In this part I’m outlining standards that help ChatGPT know what kind of code I want out of it. I’m specifically looking for: documentation, performance, modularity and good formatting with this prompt.
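To make those standards concrete, this is the shape of output the prompt is steering toward (my own illustrative snippet, not generated by the assistant): type hints, a docstring, and Black-compatible formatting.

```python
from typing import Iterable


def mean(values: Iterable[float]) -> float:
    """Return the arithmetic mean of `values`.

    Raises:
        ValueError: If `values` is empty.
    """
    items = list(values)
    if not items:
        raise ValueError("mean() requires at least one value")
    return sum(items) / len(items)
```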

The AI Assistant’s Main Functions

The AI assistant program has 7 functions designed to pair with a sensible software development workflow:

  • brainstorm
  • whiteboard
  • create_ticket
  • implement
  • optimize
  • refine
  • debug
## Functions

### brainstorm

function brainstorm(problem, n) {
  return n solutions for problem;
}

### whiteboard

function whiteboard(solution, n) {
  return n implementation_options for solution;
}

### create_ticket

function create_ticket(implementation_option) {
  return ticket for implementation_option;
}

### implement

function implement(ticket) {
  create $code that implements all the requirements and meets all acceptance criteria in $ticket
  return code and explanation for implementing ticket;
}

### optimize

function optimize(code, ticket) {
  update $code and create $optimized_code that implements all the requirements and meets all acceptance criteria in $ticket
  return optimized code and explanation based on ticket;
}

### refine

function refine(code, ticket) {
  update $code and create $refined_code that implements all the requirements and meets all acceptance criteria in $ticket
  return refined code and explanation based on ticket;
}

### debug

function debug(code, error) {
  return debugged code and explanation for resolving error;
}

Brainstorm takes a problem — think of a user story — and a number of solutions. It then returns that many potential solutions as well as pros and cons for each.

Whiteboard takes a solution from the brainstorming stage and drafts a number of implementation options. This includes details like: software stacks, pros and cons, links to documentation, etc.

create_ticket turns an implementation option into a ticket for a junior developer. This includes a description, requirements, and acceptance criteria.

implement takes a ticket and produces code that meets the requirements and passes the acceptance criteria, as well as an explanation of how the code does that.

optimize takes existing code as well as an optional ticket and produces optimized code for performance, memory, or other metrics outlined in the ticket. If no ticket is given it tries to infer the most important optimizations for the code it was given and explains its choice.

refine takes existing code and an optional ticket and produces refined code for legibility, modularity, basically the more human aspects of coding. The same inferencing as in optimize happens if no ticket is provided.

debug takes existing code and an error. Then it returns updated code that attempts to solve the error as well as an explanation for why the error occurred and how it was fixed.
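In a chat session, these functions are invoked as plain sudolang calls. A typical opening exchange might look like this (the arguments shown are hypothetical):

```sudolang
brainstorm("As a student, I want a quick way to review course material before exams", 3)
whiteboard(solution2, 2)
create_ticket(implementation_option1)
implement(ticket1)
```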

Interfaces — Structuring and Standardizing ChatGPT’s Output

ChatGPT will come up with all sorts of ways to present the information you ask it for; some are great, some are not so good. I’ve used sudolang interfaces, much like structs in C, to formalize what data I want and how it’s structured for ChatGPT.

In this program, I use interfaces to structure things like tickets, solutions, implementation_options, etc. This way they always come back to me in a format that’s easy to read and addresses the things I need to write good software.

Here are the interfaces.

## Interfaces

interface Solution {
  ### Describes a high-level solution to a problem
  problem;
  approach;
  pros_cons;
  resources;
}

interface ImplementationOption {
  ### Describes the technologies, and example code showing how to use them, to implement a solution
  technologies;
  code_solution;
  pros_cons;
  resources;
}

interface Ticket {
  ### Describes a feature that a developer can implement
  title;
  description;
  requirements;
  acceptance_criteria;
}

interface OptimizationTicket {
  ### Describes a method of optimizing the performance or resource usage of code
  title;
  description;
  requirements;
  acceptance_criteria;
  optimization_goal;
  optimization_method;
}

interface RefinementTicket(code: Code) {
  ### Describes a method of refining the modularity, extensibility, and legibility of code
  title;
  description;
  requirements;
  acceptance_criteria;
  refinement_goal;
  refinement_method;
}
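For illustration, here’s a hypothetical ticket in the shape the Ticket interface enforces (the contents are invented, not output from my actual session):

```
Title: Load a question bank from a file
Description: Load review questions from a JSON file when the CLI starts.
Requirements:
  - Accept a file path as a command-line argument
  - Validate that the file parses as JSON
Acceptance Criteria:
  - Running the CLI with a valid file lists the loaded questions
  - An invalid path prints a friendly error and exits with a nonzero code
```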

Using the SudoLang AI Programming Assistant Program

So, with all that out of the way, let’s walk through how to use it and why I think this is so much more effective than haphazard prompting.

In this example we’ll get the programming assistant to help us build a simple Python CLI that helps students review before exams. We’ll keep the problem intentionally vague so that we can see the brainstorming tool help us ideate, but choose a CLI so that this example doesn’t run on for 30 pages of tickets for some nonsense web app.

Step 1: Seeding ChatGPT

Before we can use our AI Programming Assistant and sudolang, we have to provide ChatGPT with the sudolang language spec and our sudolang program. To do this, simply copy-paste them into your first message to ChatGPT. You can get both from the links provided earlier in the article.

Here’s what ChatGPT should give you after seeding it.

ChatGPT output after seeding it with sudolang. Lists the functions in the sudolang program.

With this, our sudolang program is ready to use with ChatGPT :)

Step 2. The Brainstorming Phase

As an HCI researcher, exploring alternative solutions before settling on one is key to the design process.

I use the brainstorm method to help me come up with a few options. In this example I’ve given more detail to brainstorm than I normally would so that this project doesn’t get too complex. In my own projects, I like to keep the prompt intentionally void of implementation details so that GPT gives me a larger variety.

A good technique for choosing brainstorm prompts is giving ChatGPT a user story or a user issue. This helps you avoid getting locked into a single solution type before exploring other options.

ChatGPT lists out 3 solutions after being prompted with brainstorm()

These are pretty good!

You can see there’s some diversity in the solutions despite our narrowed scope. Between quizzes, interactive code snippets, and coding challenges, we have a pretty good set of options. The pros and cons also help us do a quick comparison before deciding what to implement.

Step 3. The Whiteboarding Phase

The whiteboard method takes a solution and proposes a few high level technical implementation routes.

For example, it suggests various technologies, packages, stacks, etc. that can be used to build our solution from the brainstorming phase as well as some resources to help you understand the implementation options better before choosing one.

These were some good ideas but I wanted something I could implement quicker for this blog. So, I leaned into the inference engine and gave it this prompt.

This little chatGPT driven quiz app looks a lot more digestible.

Note how I intentionally don’t follow the function signature for whiteboard, and sudolang + ChatGPT works with it anyway. The mixture of flexibility and structure that sudolang and ChatGPT’s inference bring is incredibly powerful.

Step 4. Ticket Creation Phase

The ticket creation phase is designed to help provide more detailed prompts to chatGPT moving forward and also help us understand the steps required to build out the application.

One nice facet of this is that we can tweak tickets before jumping to implementation if we want more detail, stricter acceptance criteria, or want to switch tools used.

Let’s look at how I use it to sketch out a minimum viable product for this app.

With this prompt I call the create_ticket function and pair it with a sort of do-while loop, leaning on the inference engine. The condition that ends the loop relies on ChatGPT’s ability to infer what “done” means from the prompt: “once the tickets outline a minimum viable product for the solution.”
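The loop isn’t defined in the program itself; it’s improvised in the chat, roughly in this shape (my reconstruction — the wording of the stop condition is what matters):

```sudolang
while (tickets do not yet outline a minimum viable product for the solution) {
  create_ticket(next piece of the implementation)
}
```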

And you know what? ChatGPT nails it!

This is a reasonable MVP and it even adds interesting new features in tickets 3 and 4.

For brevity’s sake, we’ll only build tickets 1 and 2.

Step 5. Implement

Implementation is super easy. We call the implement function and pass it a ticket and ChatGPT knows what to do from there.

With the ticket and implement, ChatGPT gives me easy-to-follow steps and well-documented code that implements all the ticket’s requirements.
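The screenshots of ChatGPT’s output aren’t reproduced here, but code in the requested style looks something like this sketch of ticket 1’s quiz loop (the function names and flow are my illustration, not the assistant’s actual output):

```python
from typing import Dict


def check_answer(response: str, answer: str) -> bool:
    """Return True if a response matches the expected answer, ignoring case and whitespace."""
    return response.strip().lower() == answer.strip().lower()


def run_quiz(questions: Dict[str, str]) -> int:
    """Run a command-line quiz and return the number of correct answers.

    Args:
        questions: Mapping of question text to the expected answer.
    """
    score = 0
    for question, answer in questions.items():
        if check_answer(input(f"{question}\n> "), answer):
            print("Correct!")
            score += 1
        else:
            print(f"Incorrect. The answer is: {answer}")
    return score


# Example usage:
# run_quiz({"What keyword defines a function in Python?": "def"})
```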

Let’s try ticket 2, which adds the OpenAI API features.

First ChatGPT gives us some basic setup steps.

Then it does a phenomenal job of adding ticket 2’s features to the existing codebase.

It’s still amazing to me that ChatGPT can recall previously generated code and add to it.

Debug

When I copy paste this code and run it, I get the following error:

raise self.handle_error_response(
openai.error.InvalidRequestError: The model `text-davinci-codex-002` does not exist

So ChatGPT didn’t quite get it … But what a great chance to test out our debug function!

I like to explicitly set an $error variable, but you can just copy-paste the whole error and say “debug”.

GPT-4 gives us an explanation of the error as well as an attempt at fixed code. I say attempt because sometimes the debugging process takes a few steps back and forth.
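The bug here is a hallucinated model name: `text-davinci-codex-002` never existed. One defensive pattern against this class of bug is to validate model names against a known-good list before calling the API. This helper is my own hypothetical addition, not part of the generated app, and the model list is illustrative for April 2023, not exhaustive:

```python
# Completion models known to exist at the time of writing (illustrative list).
VALID_MODELS = {"text-davinci-003", "gpt-3.5-turbo", "gpt-4"}


def resolve_model(requested: str, fallback: str = "text-davinci-003") -> str:
    """Return `requested` if it's a model we know exists, else `fallback`.

    Guards against hallucinated model names like "text-davinci-codex-002".
    """
    return requested if requested in VALID_MODELS else fallback
```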

Let’s give this new code a try.

After the debug session and a quick test, we have a great set of questions 😄

Optimize

We’re not done yet though. Now we can use optimize to improve our code.

There’s not a lot to this codebase, so I’m not sure what it’ll recommend, but we can see what ChatGPT suggests. I’ll also explicitly tell it to infer some optimizations. I doubt that’s necessary, but it makes me feel more like a programmer, haha.

I have to say I’m really impressed.

The error handling and modularity improvements make the developer and user experiences a lot nicer.
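To give a flavor of the kind of error-handling improvement ChatGPT tends to propose, a small retry wrapper around a flaky call is a common shape for it. This sketch is my own, with assumed names — not the assistant’s actual output:

```python
import time
from typing import Callable, Optional, TypeVar

T = TypeVar("T")


def with_retries(call: Callable[[], T], attempts: int = 3, delay: float = 0.0) -> T:
    """Call `call`, retrying up to `attempts` times on any exception.

    Re-raises the last exception if every attempt fails.
    """
    last_error: Optional[BaseException] = None
    for _ in range(attempts):
        try:
            return call()
        except Exception as error:
            last_error = error
            time.sleep(delay)
    assert last_error is not None
    raise last_error
```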

Given that the optimize function was designed to make performance improvements and it made modularity and legibility improvements instead, I’m not sure what refine will do but let’s try it anyway!

Refine

Again I’m going to lean into inference and see what it suggests.

These are also fantastic improvements to the codebase.

It looks like the refine and optimize functions produce very similar output so it may be worth merging them in my sudolang program. That being said, I do like to have the distinction for my own mental model of the development process.

What would you suggest?

Conclusion

ChatGPT prompting is the wild west right now and improvements to methodology are happening blazingly fast.

I think tools like sudolang are going to create massive improvements in the quality of human-LLM interactions. I strongly suggest you give sudolang a try and test out this AI programming assistant workflow on your next GPT pair programming session.

How have you been prompting GPT to get better output? Let me know in the comments or reach out by email 😊

Huge thanks to Eric Elliott for creating and sharing sudolang.

Cheers,

Nathan Laundry

Founder of The Academic’s Field Guide to Writing Code

Incoming HCI PhD student at the Intelligent Adaptive Interventions Lab

✉️ Join my Email Newsletter #GuidingQuestions here
