This year, AI has once again been one of the hottest topics in the news. Startups marketing themselves as “AI-driven” have been raking in investors’ money, while big tech has been bringing the technology to the masses. Alexa, Google Assistant, Siri, and the Google Pixel camera’s Night Sight have been wowing users worldwide while serving as a showcase for the state of the art in AI applications.
Thanks to this level of exposure, consumers have also started to become more aware of the moral and ethical implications surrounding AI. Much has been written about the use of personal data to train commercial systems, serve ads, and even run nanny state-like surveillance programs in technologically advanced, but ultimately dictatorial, nations.
And as with any other misunderstood technology before it, it was only a matter of time before people grew paranoid and started asking the same age-old question:
Will AI ever take over my job?
Today, I want to look at this question from the perspective of the very industry that brought AI to life, software engineering. I’m sure you appreciate the irony of this proposition: the Frankensteinian monster destroying its creator’s life. But how realistic is this concern? Will AI ever take over software engineering jobs?
In short: at least for now, not likely. But it’s certainly a question worth some further thought.
Software engineering beyond coding
Often, when talking about the impact of AI on software engineering, the argument jumps straight to coding and its dependence on creative problem solving, a skill that only the best human minds can offer. Fine, let’s stick with this assumption for now; but building software is much more than just coding.
There are many activities taking place in the software development lifecycle that still require an engineer’s expertise and have very little to do with typing out lines upon lines of their favorite programming language. Think about the time spent analyzing requirements, designing systems, dealing with scalability and performance concerns, doing code reviews, etc. All of these activities are important to the production of good software, but don’t require any coding.
Can AI help here then? Yes, and in fact, that’s one of the trends we’ve seen coming up in the past couple of years.
Engineers often use tools to perform non-coding activities more effectively: knowledge base systems are commonly employed to help estimate new projects based on past experience, code linting saves engineers time during code review, and the list goes on.
AI can make these tools smarter, and in turn more effective. Here are a couple of examples of this in action:
- Microsoft’s Visual Studio IntelliCode: An extension for Visual Studio / VS Code that surfaces the most relevant methods and classes for the current context, instead of making the developer hunt them down in an alphabetical list.
- DeepCode (deepcode.ai): A tool for automated code reviews with a focus on security vulnerabilities.
These tools aren’t going to replace an engineer any time soon; they’re just here to help. They’re no different from using an IDE instead of relying solely on a text editor in the first place. The end result is that whatever used to slow us down (e.g. writing boilerplate code) gets done much more quickly, freeing us to spend more time on the things that matter.
The real question is whether technology (AI or otherwise) can go beyond being a helper tool, and really replace engineers on their core tasks, those aforementioned things that matter.
To answer this question, we must take a closer look at the current state of the art for AI systems.
The state of the art
AI today, in common parlance, is pretty much a synonym for Deep Learning: a specific set of techniques that relies largely on brute force and luck to make sense of large amounts of data. The approach isn’t particularly new in itself. Convolutional neural networks (CNNs), for instance, which are the basis of AI-powered computer vision, have been studied since the early 80s. What is new is that today we have enough data and enough computational power to create, train and analyze complex models within reasonable timeframes.
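To make that “building block” idea concrete, here is a minimal sketch of the convolution operation at the heart of every CNN layer, written in plain Python with NumPy. This is a toy illustration (a naive loop over a tiny hand-made image and edge-detecting kernel), not how production frameworks implement it:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a kernel over an image, summing element-wise products.
    This is the basic operation repeated in every convolutional layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A tiny 5x5 "image": dark on the left, bright on the right.
image = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
], dtype=float)

# A classic vertical-edge-detecting kernel.
kernel = np.array([
    [1, 0, -1],
    [1, 0, -1],
    [1, 0, -1],
], dtype=float)

# The output responds strongly where the dark-to-bright edge sits.
print(conv2d(image, kernel))
```

A real CNN stacks many such kernels in layers and, crucially, learns their values from data instead of hand-picking them as we did here.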
Researchers today, both academic and commercial, spend most of their time building ever more complex networks out of a handful of common building blocks, achieving ever more interesting results as the complexity grows. Beneath it all, however, these building blocks share a common flaw: they require a lot of data to be trained to produce accurate results.
And this data isn’t sufficient on its own: it needs to be paired with the expected output, so that the network can evaluate its own accuracy and adjust itself as training progresses. The outcome is that modern AI is good at tasks where the expected end result is easy to define, for instance:
- Object recognition: Given a picture, can you recognize the objects contained within? End result: zero or more object labels
- Speech recognition: Given a sequence of sounds (think “Hey Siri”), can you recognize a specific trigger word? End result: recognized / not recognized
- Language translation: Given a passage in language A, can you produce the same passage in language B? End result: the translated passage
- Speech synthesis: Given some text, can you produce sounds akin to a person speaking it? End result: audio data corresponding to the text being spoken
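That pairing of inputs with expected outputs can be shown in a few lines of code. Below is a toy perceptron, a single artificial neuron, learning logical OR from labeled samples. It is a deliberately minimal sketch of the supervised-training loop, not a realistic network:

```python
# Each training sample is paired with the answer we expect.
# That pairing is what lets the model measure its own error and adjust.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # logical OR

w = [0.0, 0.0]  # weights, one per input
b = 0.0         # bias
lr = 0.1        # learning rate

for epoch in range(20):
    for (x1, x2), target in samples:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - pred       # compare against the expected output
        w[0] += lr * err * x1     # nudge the weights toward the answer
        w[1] += lr * err * x2
        b += lr * err

# After training, the neuron reproduces the labels it was shown.
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in samples])
```

Deep networks replace this single neuron with millions of parameters and the simple error rule with backpropagation, but the principle is the same: no expected outputs, no training signal.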
The last two examples are an interesting demonstration of another characteristic of current AI systems: they’re good at approximating. And for some problems, an approximation of the end result is good enough.
This is what allows modern AI to produce convincing art, or aged versions of our selfies (and deepfakes too, unfortunately). Our brain doesn’t need a pixel-perfect Van Gogh to get the idea that our picture has been turned into a portrait in his style. As long as the result is close enough to the original, it is acceptable.
What does this mean for software engineering?
Quite simply: deep learning techniques are currently incapable of producing advanced code by themselves. There are infinite ways to state a problem and just as many ways to solve it (though some are more efficient or sensible than others), so the expected end result is nowhere near as straightforward to define as, say, whether a picture shows a dog or a cat. Let’s also not forget that software engineering is a precise art: the kind of approximation that works for speech synthesis isn’t going to be good enough here. The code must work, be understandable, and be maintainable.
Realistically, the current style of AI can only produce boilerplate, such as translating UI mockups into their corresponding frontend code, which would then need to be “massaged” or otherwise completed by a software engineer before it can be considered production-worthy.
But deep learning isn’t all there is to AI. Evolutionary algorithms (EAs), for instance, are better suited to producing code than deep learning is. Perhaps what we’ll see in the future is a combination of different AI techniques that finally yields a system able to spit out good code on its own.
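To give a flavor of the evolutionary approach, here is a tiny (1+1) evolutionary loop that “evolves” a random string toward a snippet of code through mutation and selection. Real EA work on program synthesis (genetic programming) evolves syntax trees rather than raw characters, so treat this purely as an illustration of the mutate-select mechanism:

```python
import random
import string

random.seed(0)  # fixed seed so the run is reproducible

TARGET = "print('hello')"  # toy stand-in for the "code" we want evolved
CHARS = string.ascii_lowercase + "'()"

def fitness(candidate):
    # How many characters already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # Flip one random character: the only "genetic operator" in this sketch.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(CHARS) + candidate[i + 1:]

# (1+1) evolution strategy: keep the mutated child only if it is
# at least as fit as its parent, and repeat until the target is reached.
parent = "".join(random.choice(CHARS) for _ in TARGET)
while fitness(parent) < len(TARGET):
    child = mutate(parent)
    if fitness(child) >= fitness(parent):
        parent = child

print(parent)
```

The catch, of course, is the fitness function: here we cheat by comparing against a known answer, whereas evolving real programs means scoring candidates against test suites or specifications, which is far harder to define.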
Sadly, given how little ongoing research there is in AI techniques other than deep learning / neural networks, I don’t see this happening anytime soon.
Where does that leave us?
Ultimately the question of obsolescence of any highly skilled job is less about the technology that threatens it, and more about the adaptability of the workforce.
Computing has evolved exponentially since the microprocessor revolution of the early 1970s. In the intervening years, we’ve built a completely new industry based around software, an industry which is by now pretty mature. Specialists as diverse as UX designers and DevOps engineers are working together to build more and more complex applications every day and advance the industry further. Some of these jobs only appeared in the past 5 years, and with them, some of the older jobs disappeared. This is the natural evolution of business, only made more evident by the breakneck speed of innovation in our specific field.
Today, a good chunk of software engineering work is done through high-level languages and frameworks (e.g. Flutter, TensorFlow, React). Most of the underlying hardware is abstracted away by APIs, and is sometimes even completely virtual, as is the case for containerized cloud deployments.
In the years to come, it wouldn’t be too far-fetched to imagine AI raising software engineering to an even higher level of abstraction. Think about telling an AI what to build, perhaps by directly feeding it user stories from the product management team, and then checking the resulting codebase. Initially, the code produced by the AI would be of poor quality and require a lot of human intervention, but if the AI is designed to learn from these manual changes, perhaps by being contextually aware of the product and of the general rules governing the system’s architecture, it will get better and better over time.
Again, this isn’t something that would happen in the next few years, especially if research stays focused only on deep learning, but it’s an interesting proposition nevertheless. If it happens, a new category of engineering jobs will appear, while older ones become obsolete. The engineers who thrive will be the ones embracing this change and adapting to it.
In conclusion, the key to survival in the ever evolving world of software engineering is the same one I always talk about:
Never stop learning
Keep an open mind, and keep your eyes open to new technology and what it brings to your industry. Only by doing so can you ensure you won’t be made obsolete by an AI, or by any other technology that might come along. Fear and suppression of new technology are never the key to success; they’re only an obstacle that will hold you back as the world inevitably moves on.
In the end, I’d like to think we chose to be software engineers because we have curious minds and like to solve problems. Until there are no problems left to solve, our skills will always be in demand, regardless of the technology currently in the spotlight.