AI & the Future of Software Engineering

Key Takeaways from a dinner discussion held in London on June 29, 2023, after Leading Eng.

Hywel Carver
Skiller Whale
4 min read · Jul 12, 2023

On June 29, Skiller Whale ran a tech leader dinner discussion on how AI is going to change the shape of engineering teams.

With participants from organisations such as GitHub, Pleo, Netlify and E&Y, we had a lot of different perspectives on this very timely question. Below is a summary of my key takeaways from each of the prompts we discussed:

Prompt 1: If the type of code written by juniors is fully replaced by AI, how will we get new developers into the industry?

There was a general consensus that a junior’s job is not to code, but to learn. With that in mind, the group did not anticipate a huge revolution. However, some folks *did* anticipate a steeper learning curve for juniors, who now need to learn how to prompt AI as well as how the code works under the hood.

There was disagreement about whether it matters that a junior in the future may not know how to code at all, but only how to prompt AI. Some felt that this was fine: it’s how all industries develop; you move further from the raw materials and become more architectural and directive. However, others felt this risked producing worse mid-level and senior engineers in the future, able to produce software without understanding how the things they manipulate actually work. Whilst most engineers today don’t need to be able to write machine code, they do need to understand the precise behaviour of the code they write; the same may be true at a higher level in the future.

There was general agreement that learning and understanding would need to be deeper sooner for juniors entering the industry today. As one person put it: “AI automates so coding becomes less important, but actually understanding the code becomes much more important”.

Prompt 2: Will any humans write code in the (near) future? What will we do instead?

This prompt resulted in a philosophical discussion about what code actually is (we landed on ‘unambiguous instructions’) and what things software engineers do that aren’t code: for example, translating an ambiguous client requirement into an unambiguous product feature, asking the right questions to remove the ambiguity, or ensuring that the product feature continues to meet the requirements as they inevitably change over time.

Once product requirements have been rigorously defined, the consensus was that humans may write very little of the code as we think of it today. Instead, architecting that code, communicating logic to LLMs and monitoring the behaviour of the resulting programs will be the main focus of our role in software engineering, and already is for many of us. There was general agreement that you don’t need to be involved in writing code at all to still be developing software.

Prompt 3: What new roles are going to be needed if more code is written by LLMs?

This discussion looked at shorter-term versus longer-term roles.

In the short term, we expect human feedback on generated code to be paramount in helping LLMs write code of consistently high quality. Once LLMs are mature enough, the need for this role is expected to diminish.

One person posited that a new hierarchy of prompt engineers will emerge (Chief Prompt Engineering Officer, anyone?), but that in the longer term we’d simply expand our definition of software engineer to include that capability. Several folks thought it was possible that a specialist role would emerge, focused on deeply understanding what the AI produces and being able to debug it.

There was also some discussion of increasingly favouring black box testing, and the need to cultivate that skill. Most current testing approaches assume knowledge of the code and how the system works, but that’s likely to be less and less true in the future, because LLMs are not transparent in how they make decisions.
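
As a rough illustration, here is a minimal sketch of black-box tests in Python. Everything in it is hypothetical rather than something discussed at the dinner: the `apply_discount` function stands in for possibly LLM-generated code, and the tests check only its agreed behaviour, treating the implementation as opaque.

```python
# Black-box testing sketch: every assertion below exercises only the public
# contract of apply_discount (inputs and outputs), with no knowledge of how
# the implementation works internally.
import unittest


def apply_discount(price: float, discount_pct: float) -> float:
    """Stand-in for code we didn't write ourselves; treat the body as opaque."""
    return round(price * (1 - discount_pct / 100), 2)


class ApplyDiscountBlackBoxTests(unittest.TestCase):
    def test_agreed_example(self):
        # Requirement expressed as an example: 20% off 50 is 40.
        self.assertEqual(apply_discount(50.0, 20.0), 40.0)

    def test_zero_discount_changes_nothing(self):
        # Behavioural property: a 0% discount never changes the price.
        for price in (0.0, 9.99, 150.0):
            self.assertEqual(apply_discount(price, 0.0), price)

    def test_never_exceeds_original_price(self):
        # Behavioural property: the discounted price never exceeds the input.
        for price in (1.0, 20.0, 999.99):
            for pct in (0.0, 10.0, 100.0):
                self.assertLessEqual(apply_discount(price, pct), price)


if __name__ == "__main__":
    unittest.main()
```

None of these assertions depend on how the function is written, only on what it promises to do, which is exactly the skill the group expected to matter more as implementations become less transparent.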

Prompt 4: What are the pros and cons of embracing AI now, vs waiting until AI products are more mature?

In this part of the discussion we talked about the hype curve, and how the right point to invest is likely to vary from company to company depending on their context. The discussion focused on using AI rather than building AI (some of the attendees already have their own AI products on the market).

One really interesting take on this was that right now LLMs are ‘a bit crap’, so it’s much easier to understand their potential shortcomings now than it will be once they’re more mature. Other pros of embracing AI now:

  • Investors are excited by it.
  • It could give you an edge over competitors if you can use it to make you faster or more productive (even with its shortcomings).

Cons included:

  • Convincing hallucination is very common, so AI might introduce a heap of bugs, slowing you down rather than giving any advantage.
  • The training data is publicly available code, which is likely to be heavily slanted towards student projects. There may indeed be an inverse correlation between the likelihood of code being publicly available and its quality.
  • Forcing the use of LLMs at an early stage when it’s not a good fit could do more harm than good, both to the product and to the morale of the engineering team.

Hywel Carver
Skiller Whale

Co-Founder & CEO of Skiller Whale; published curriculum author and keen funk saxophonist.