Coding with ChatGPT 1: Like father, like son?
Edit 13/8/24: Almost as soon as I posted this, I was catching up on some things I’d flagged to read later and found this blog post, which just goes to show that I’m still doing the same things as Ben, only backwards and in high heels (i.e. uncomfortably, with large amounts of falling down).
My father was a far better engineer than I will ever be. Unable to go to university, he spent many years at night school, on top of a full-time job, to earn a masters-equivalent qualification. He then went on to become a Fellow of both the Institution of Electrical Engineers and the Institution of Mechanical Engineers. All that said, computers baffled him. His whole working life was spent in the electricity-generating industry. Show him a 400-ton steam turbine spinning at 3000 RPM and he was completely in his element. Show him a PC and he didn’t know where to start. He always felt that, to understand a machine, you needed to be able to physically take it apart. The concept of a “black box”, where all you needed to do was understand the inputs and outputs and trust that the internals were working properly, was something he really struggled with. The computer revolution left him behind. Even so, he gave me a ZX81 in 1982 and, although I was only 10 years old, I vividly remember him saying, “Learn this, it’s where the future’s going.” He wasn’t wrong.
I also became an engineer — albeit a much lazier one. My focus was on computers, but it wouldn’t be an exaggeration to say that I was a bad software engineer. It was fun at times to be able to point to a high-street cashpoint or a flagship mobile phone and say, “I have code in there!” But I also think several of my colleagues were secretly pleased when I was suddenly moved into a change management role. There, my engineering background really helped with large-scale introductions of new software development systems and methodologies.
While I wasn’t strictly “technical” any more, I always swore I would be different to my dad. I wouldn’t be left behind by the constant march of technology. I’d stay up to date with all the new things coming into the field. I might not have been writing code for mobile apps, voice-driven systems, VR or robotics, but I understood how they worked and could explain their advantages and disadvantages to senior people in the public sector, where I increasingly found myself working.
AI has completely left me in the dust.
I’ve tried to vaguely keep up with the overall state of AI, but the sheer diversity and speed of developments have left me largely baffled about what’s inside the “black box”. I finally understand how my father felt.
Just because the workings of the machine are increasingly beyond my understanding, though, doesn’t mean I can’t use the machine to be more productive, solve problems and learn a lot of new things. Much as I dislike the phrase “prompt engineering”, I find this is what I’m spending an ever-increasing amount of my time doing, across a huge range of areas. I’ve recently been discussing a wildly disparate range of topics with ChatGPT, including John Rawls and his theory of justice, Japanese raccoon dogs, generating Kanji flashcards and helping with marketing for my choir.
So, given I was using ChatGPT to do a lot of things anyway, I decided to try some experiments. As I said above, I used to be a professional software engineer, albeit not a very good one. Also, for many years, I’ve had a long list of software projects that I’d like to do in my own time. Every six months or so, I’d pick one of them at random and have a go. After a few days, I’d grind to a halt in frustration (usually around the time I’d have to try and use CSS 😡). My question was whether I could get further if I used ChatGPT to guide what I was doing.
So, I’ve been using ChatGPT to work on several of my long-term tech projects and am continuing to do so. There will be more related blog posts at a future date. I’m aware of other AIs I could have used instead; for example, perhaps I should have used Google Gemini when working on Android app development, but I wanted to stick to one I was comfortable with.
Also, it’s worth mentioning that, while ChatGPT is broadly good at writing code and giving advice on software development, I’ve certainly run into significant issues while using it in other contexts, whether that’s just failing to work with some datasets or hallucinating wildly wrong answers in some situations.
What follows is the outcome of my first experiment. I decided to jump in at the deep end by developing an Android app — something I’d never done before. ChatGPT started by getting me to download the IDE for Android development.
That looks like this:
Like Excel, it’s extremely impressive but also really intimidating, with probably 80% of the functionality used by only a small number of developers. I have a background in software development so, after an hour or two, I was fairly comfortable with the most basic features.
While I was doing that, I was also giving ChatGPT a series of prompts to make sure it understood the application I wanted to build. It turned out that, to build the thing I wanted, only two code files needed to be created: one to define how the app looked and one to provide its functionality.
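(For anyone who has never looked inside an Android project, the split is roughly like the sketch below. To be clear, this isn’t the code ChatGPT generated for my app; it’s a minimal, purely illustrative Kotlin “functionality” file, assuming a classic XML layout file sits alongside it, with all the names invented.)

```kotlin
// MainActivity.kt - a purely illustrative "functionality" file, not my actual app.
// It assumes a companion layout file, res/layout/activity_main.xml, defining
// "how it looks": here, a Button with id "button" and a TextView with id "label".
package com.example.sketch

import android.os.Bundle
import android.widget.Button
import android.widget.TextView
import androidx.appcompat.app.AppCompatActivity

class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Inflate the layout defined in the separate "how it looks" file.
        setContentView(R.layout.activity_main)

        val label = findViewById<TextView>(R.id.label)
        val button = findViewById<Button>(R.id.button)

        // All of the app's behaviour hangs off listeners like this one.
        button.setOnClickListener {
            label.text = "You pressed the button"
        }
    }
}
```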
Now, the next decision I made was the critical part of the exercise. With my background, I could have gone through these files and put in the effort to work out what they did and how they worked. That would probably have required me to learn a few new concepts and quite a lot of specifics about Android development but, because of my previous experience, I could have done it. I didn’t. Instead, I just copied the code directly from ChatGPT into the IDE without reading it.
This highlights a very important point: I’m only really needed at the start and end of each iteration of the development process. William Gibson famously said, “The future is already here, it’s just not evenly distributed.” In this context, I’m sure that there will already be systems where I can talk (speech, not text) directly to an AI (see this ChatGPT demo) and the app or website being developed will be updated seamlessly, without me ever needing to see an IDE, cloud hosting or any other kind of technical environment. That’s going to radically change both who can develop new technology applications/websites and how that will be done.
But, back to my experiment. I would copy and paste the code suggested by ChatGPT into the IDE and test it on my phone (after going through the process to enable the IDE to run test versions of the new app on my phone over WiFi, which is just witchcraft). I would see how it wasn’t quite what I was looking for, describe the required changes to ChatGPT, and it would iterate the code until I got what I wanted. It really was that simple!
I’m lying.
The code really was just two files and the process of iterating it using ChatGPT prompts was very simple, albeit a rather tedious one of copying and pasting from ChatGPT to the IDE. If it had been that easy, the whole process would probably have taken 2–3 hours, tops. The issues I kept running into were related to the build process. As Bruce Sterling memorably said in his book Heavy Weather: “As you climbed higher and higher up the stacks of interface, away from the slippery bedrock of the hardware grinding the ones and zeros, it was like walking on stilts. And then, stilts for your stilts, and then stilts for your stilts for your stilts.” One of the main potential issues with any piece of modern software, even a relatively simple one, is that it’s built on top of a vast amount of code written by other people, millions and millions of lines of it. Each of those building blocks is called a software library, and organising them is a whole separate part of the field of computing called dependency management.
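To give a flavour of what dependency management looks like in practice for an Android project, each library the app leans on has to be declared, by name and version, in the build configuration. The snippet below is illustrative only (it isn’t my real build file and the libraries and versions are just plausible examples), but every one of those lines pulls in code that itself depends on yet more code, and a mismatch anywhere in that chain can break the build.

```kotlin
// build.gradle.kts (module level) - an illustrative sketch, not my actual build file.
// Each "implementation" line declares one library, pinned to a specific version.
dependencies {
    implementation("androidx.core:core-ktx:1.13.1")
    implementation("androidx.appcompat:appcompat:1.7.0")
    implementation("com.google.android.material:material:1.12.0")
}
```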
Suffice to say that, even with ChatGPT’s help going through all the errors, it still took me another couple of hours before my app was completed and installed on my phone. Still, with no experience at all of Android application development, I had created, from scratch in one day, one of the projects that had been sitting in the “icebox” part of my personal to-do list for at least a decade.
Oh, you want to know what it does? Well, if you’re old enough to remember the original series of Quantum Leap, this might briefly amuse you. I never said it wasn’t silly!
Of course, this is just a proof of concept. This app doesn’t access the internet, it doesn’t need a cloud-based server to do anything complex, it doesn’t use a database. At this point, it definitely doesn’t show that someone without a technical background can create an Android app without help. It does show that that day will arrive very soon.
One of the reasons I wanted to try these experiments is to assess the potential impact of this kind of technology on the standard development teams currently used to create government digital services. The GDS Service Manual currently has this list of potential roles in such a team. It’s worth pointing out that this particular experiment only shows that the role of developer might be affected. I’ve worked with teams of 6–8 developers, mostly commercially outsourced, building transactional websites. Those days are rapidly coming to an end. While roles like Service Designer and User Researcher will be as critical as ever, it feels like, even today, only one developer with AI assistance might be needed. In the near future, maybe even none at all — perhaps just a small number of experts spread across a large number of teams to fix things when they, inevitably, go wrong.
However! This is only the future if we learn to trust what’s inside the “black box” — as my dad saw it. In this case, that means not just the mysterious workings of the AI itself but also the “traditional” code it generates, if that too goes unchecked by any human. Even with extensive automated and human testing of apps and services, that trust will be hard to come by — and rightly so.
Still, things are inexorably moving in that direction and government technical teams need to be having this conversation right now. In addition, if I were working in an SME that currently develops technology for the government, I’d be focusing on retraining my developers to become individual-contributor consultants who can produce the whole technical side of an app or digital service on their own. This is because, as sure as I am of anything, that’s the level of offer the huge consultancy firms are going to be providing very soon — if they’re not doing so already.