Sometimes, your copilot can take you down the wrong road

Enrique Dans
Dec 28, 2022

IMAGE: A hand sketched in blue lines emerging from a laptop screen and shaking a real hand (credit: Kiquebg, Pixabay)

Since the launch of GitHub Copilot in July 2021, any number of developers have experimented with it, and why not? The idea of an assistant that takes on routine tasks and offers interesting suggestions drawn from a huge database is an attractive one, although it also raises many questions, from the licensing of the code it draws on to the quality of the code it produces.

Now, a Stanford study shows that developers who program using the robotic assistant generate, on average, less secure code with more vulnerabilities than those who write code on their own. At the same time, the robotically assisted developers were under the impression that their code was more secure.
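To make the kind of gap concrete, here is a minimal, hypothetical sketch in Python; it is not taken from the study, and the function names and query are my own illustration. It contrasts a plausible-looking suggestion that splices user input straight into a SQL string, which permits SQL injection, with the parameterized version a careful developer would write.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # The kind of plausible-looking code an assistant might suggest:
    # the user's input is interpolated directly into the SQL string,
    # so input like "' OR '1'='1" changes the meaning of the query.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The parameterized version: the driver binds the value separately,
    # so the input can never alter the structure of the query.
    query = "SELECT id, name FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```

Both functions return the same rows for ordinary input, which is exactly why the unsafe one is easy to accept without a second look.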

Why this mistaken impression? Quite simply, greater confidence in what a robotic assistant produces; a subjective feeling of trust in the machine similar to what I’ve observed in many people since ChatGPT appeared: the idea that, because it comes from a machine, it must be right. That feeling is, of course, completely wrong, because the result depends entirely on the data fed to the algorithm, which in many cases leads to erroneous conclusions. What can we expect from an application fed with thousands of sources found on the internet? What should we expect from an algorithm that learns to generate code from thousands of open source projects? That sometimes it…

Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)