Autocomplete, supercharged: AI-assisted coding in real life with GitHub Copilot

Stephen MacKellar
Published in MPB Tech · 6 min read · Apr 8, 2024

[Image: the GitHub Copilot logo on a dark blue background]

The popular image of programming with Artificial Intelligence has software designing even better software so quickly that it’s obsolete before it can even be run.

According to recent reports, Microsoft runs its GitHub Copilot service at a loss of $20 per user per month — which at least demonstrates just how firmly Redmond believes in an AI-powered tomorrow.

But whether your vision of the omniscient-tech future more closely resembles Star Trek or Skynet, the question of how to use AI in your day job right now might be less clear-cut.

In which case you probably aren’t alone (irrespective of headlines claiming that 92% of coders were already fully immersed in AI last June).

For most of us, the reality in 2024 looks more like supercharged autocomplete than oven-ready code on demand. And I’m willing to bet that out in the real world, there are plenty of us who haven’t quite found our AI-assisted happy place.

So I’m here to help by not telling you what you ought to be doing already. Instead I want to share my experience of using Copilot in my day job over the past six months, and suggest how it might help you.

How this doesn’t work

First things first, you’ll almost certainly have already played with ChatGPT or some other LLM-based chatbot.

And very early on, you’ll have asked it something like: “Code me a JavaScript image gallery.”

If there’s one thing AI bots know about, it’s how to code software. They just don’t always understand what software you actually want.

So you probably found yourself tweaking your prompt, adding requirements and parameters, until it resembled some kind of English-language pseudocode. That finally got you a skeleton of the thing you wanted, even if you did have to watch it frustratingly “type out” each line, then work through the result line by line until you’d understood and fixed everything.

Happily, that’s probably the last time you’ll need to do that.

Introducing GitHub Copilot

Microsoft bills GitHub Copilot as “the world’s most widely adopted AI developer tool”, proven to increase developer productivity.

It’s powered by OpenAI Codex (a descendant of GPT-3), supports a number of popular IDEs and learned to code Python from 159GB of material held in 54 million public repositories.

So to get started, open Visual Studio, VS Code, a JetBrains IDE or Neovim and install the Copilot plugin.

Next, start coding as you normally would, then hit Tab to autocomplete.

At this point you experience the penny-drop moment. The result will often be 90% of the way there, or will at least scaffold out a solution for you to start work on.

There are other AI-type autocompletes available for IDEs, but Copilot just goes the extra mile. Here’s what that looks like.

GitHub Copilot in action

1. Start writing a function

2. Copilot suggests some code

It can make mistakes, but if your function is well named I find its guesses to be generally very good.

1. Give your function a descriptive name

2. This is a pretty good guess, but we can add a comment to specify more detail, which will change the prediction
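To give a feel for what that looks like in code rather than screenshots, here’s a hypothetical sketch. The function name, the comment and the suggested body are all illustrative — Copilot’s actual suggestion will depend on your project — but this is the shape of the interaction: you write the name and the comment, and the body is the kind of completion it offers.

```python
# You type the descriptive name and the comment spelling out the pricing
# rules; the body is the sort of completion Copilot might then suggest.
# (All names and prices here are illustrative, not real Copilot output.)

def calculate_sandwich_cost(fillings: list[str]) -> float:
    # Base bread costs 1.50; each filling adds 0.75
    cost = 1.50
    cost += 0.75 * len(fillings)
    return round(cost, 2)
```

The more specific the comment, the closer the first suggestion tends to land.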

So Copilot doesn’t do everything for you but it can really speed you up. And that’s particularly helpful when you’re writing tests.

Of course, in an ideal world you’d write the test first, then write the code to pass it. That’s hard to do if you don’t know what the code is going to look like, and Copilot won’t be overly helpful there either.

But ask it to write a test retrospectively and it has all your previous code to help it. Suddenly the laborious task of writing test cases becomes much simpler.

1. I want a test for my sandwich cost calculator:

2. Or maybe I want another scenario:

I’m actually quite impressed with what it inferred from the description here!
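As a rough sketch of what that backfilling looks like, here are some pytest-style tests of the kind Copilot might generate once the calculator already exists. The calculator, the test names and the expected values are all illustrative assumptions, not captured Copilot output.

```python
# The function under test (restated here so the example is self-contained;
# prices are illustrative).
def calculate_sandwich_cost(fillings):
    cost = 1.50 + 0.75 * len(fillings)
    return round(cost, 2)


# Tests of the kind Copilot might suggest once the code above exists.
def test_plain_sandwich_costs_base_price():
    assert calculate_sandwich_cost([]) == 1.50


def test_each_filling_adds_to_cost():
    assert calculate_sandwich_cost(["ham", "cheese"]) == 3.00
```

Because the implementation is already on screen, the suggested assertions tend to match its actual behaviour — which is exactly why backfilling feels so much easier than test-first.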

Tips on working with GitHub Copilot

Like other AI bots, GitHub Copilot gives better results if it has plenty of clues about context.

Some ways to help it:

  • Use descriptive names for variables and functions.
  • Use comments for better suggestions. (This is useful when you have known steps you want to use in the implementation. You can always remove comments later if you really want to.)

As the first one is an indicator of well-constructed code, you win both ways.

I’ve found Copilot particularly useful for scaffolding code, writing obvious helper functions and writing tests (especially backfilling them).

Of course, like every helpful thing in the history of the world, Copilot has its drawbacks too.

It’s great when it learns from what you’ve done, but it can also learn from your mistakes. I’ve seen it try to paste them back in via autocomplete.

Small details are often wrong; sometimes its suggestions are altogether wrong. In that case, you’ll need to:

  1. Delete the bit you don’t like
  2. Add a comment describing what you want the function to do (step-by-step descriptions if necessary)
  3. Go back and start writing.
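Step 2 in practice can be as simple as a comment that spells out the steps. Here’s a hypothetical sketch — the function, its behaviour and the comment wording are all illustrative, but a step-by-step comment like this is the kind of outline Copilot will follow when it starts suggesting again:

```python
# Parse a "HH:MM" string: split on the colon, convert both parts to
# integers, validate the ranges, then return the total minutes.
def parse_time_to_minutes(value: str) -> int:
    hours_str, minutes_str = value.split(":")
    hours, minutes = int(hours_str), int(minutes_str)
    if not (0 <= hours < 24 and 0 <= minutes < 60):
        raise ValueError(f"invalid time: {value}")
    return hours * 60 + minutes
```

Once the suggestion matches what you wanted, you can delete the comment — or keep it, since it now documents the function.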

Closing thoughts

GitHub Copilot is a hugely helpful tool, but you still need to be able to code in the same way we’ve always understood it. Today’s AI tools won’t stop you from having to understand and check your code. What they will do is save you an awful lot of typing.

Quite how much clock-time this saves is hard to estimate and will probably vary from person to person. Copilot probably won’t double your development team’s output.

On the other hand, it will give them more thinking time to solve problems and work on structure — in other words, to do the things humans are best at.

Of course, using autocomplete to write regexes for you is a great timesaver, as long as you can tell whether the result is correct. My spelling has certainly gone downhill since the dawn of autocorrect. Given modern pilots’ fears about skills fade due to increasingly automated airliners, perhaps this isn’t the tool to use if you’re new to programming autopilot systems.

I hope this post has been helpful. To close, I’ll leave you with my colleague David Anderson’s general guidelines for all AI code generation:

  • Break down complex tasks
  • Clearly articulate the requirements
  • Provide specific details, such as expected inputs and outputs
  • Review generated code thoroughly
  • Test generated code thoroughly
  • Be careful about including sensitive data such as PII or secret keys
  • Specify any coding standards you require
  • Iterate if/as necessary.

Stephen MacKellar is a Senior Software Engineer at MPB, the largest global platform to buy, sell and trade used photo and video gear. https://mpb.com
