The history of GitHub Copilot and its ONLY 60% valid code ❌

Juan España · Published in ByteHide · 4 min read · Sep 1, 2021

GitHub Copilot, the artificial intelligence that programs, is now a reality. But is that good or bad for developers? Will robots be able to do the work of a human? And if so, how come there have been studies saying that nearly half of the code written by GitHub’s Copilot is poorly written or vulnerable? 🤔

Many developers at first thought they were going to lose their jobs, but no, they won’t.

If you wonder what GitHub Copilot is, I’ll tell you 👇

What is GitHub Copilot?

It is basically an assistant powered by artificial intelligence. Its main goal is to help developers generate better code, and more of it, in less time.

It works from the input the programmer writes: Copilot uses that context to generate code, greatly reducing the time spent programming, and it also suggests new code fragments so that the programmer can lean on the artificial intelligence even more.

This really is a great advance, since the programmer can develop faster and more effective solutions, and no longer has to do certain repetitive tasks that Copilot takes over.

On top of that, the programmer can describe the logic they want to implement in the solution they are developing, and the artificial intelligence is able to generate the code needed to implement that function.
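To make that concrete, here is a minimal sketch, in Python, of the kind of interaction described above: the developer writes a function name and a comment stating the intent, and the assistant proposes the body. The function and the data shape are invented for illustration; this is not actual Copilot output.

```python
# What the developer types: a name and a comment describing the intent.
def average_order_value(orders):
    # Average total of non-refunded orders; return 0.0 when there are none.

    # What an assistant like Copilot might suggest as the body:
    totals = [o["total"] for o in orders if not o.get("refunded", False)]
    return sum(totals) / len(totals) if totals else 0.0


# Example usage:
orders = [
    {"total": 30.0, "refunded": False},
    {"total": 20.0, "refunded": True},
    {"total": 10.0, "refunded": False},
]
print(average_order_value(orders))  # 20.0
```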

How GitHub Copilot works

Everything is based on OpenAI Codex, an artificial intelligence system created by OpenAI on which GitHub Copilot has been trained to carry out the tasks involved in helping the programmer.

Is GitHub Copilot for developers? Or could anyone use it? 🤔

Many application developers have already tried it, and in their opinion it is not “magic”. Does it help you program? It helps, but it is not possible to create applications without programming knowledge. That is why most programmers classify it as an aid that improves their performance while programming.

The conclusion is that in the future it could make a very noticeable difference when programming, but for now developers simply consider it an aid.

In addition, and on this point they agree with the creators of GitHub Copilot, the tool is not totally reliable: its suggestions are valid somewhere below 60% of the time. In other words, the code it proposes should not simply be accepted and considered valid. Like any other code, it must be reviewed and tested, because it may contain errors, and very often the code suggested by Copilot has to be optimized by the developer.
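As a minimal, hypothetical sketch of what that review-and-test step can look like (the helper below is invented for illustration, not a real Copilot suggestion): a suggestion can pass the obvious case and still leave edge cases for the developer to decide.

```python
# A plausible suggested helper: split a list into chunks of size n.
def chunk(items, n):
    return [items[i:i + n] for i in range(0, len(items), n)]


# Quick checks before trusting the suggestion: the happy path works...
assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]

# ...but the edge cases still need a decision from the developer:
# chunk([1, 2], 0) raises ValueError (range() rejects a step of 0),
# so the reviewer must decide whether to validate n or document it.
try:
    chunk([1, 2], 0)
except ValueError:
    print("n=0 is rejected by range(); handle or document this case")
```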

GitHub is clearly betting on artificial intelligence for the near future, so we have no choice but to follow its progress closely.

Does GitHub Copilot generate 40% vulnerable code? 🤔

But to this day it is still far from 100% perfect. In fact, a study published by researchers at New York University describes the tests carried out recently to see how effective and safe GitHub Copilot is when it generates code.

But how long will it take to get there? Will it ever become 100% efficient? 🤔

That is not yet known. It remains to be seen, and we will have to wait to find out.

Public code often contains bugs, and given the huge amount of code that Copilot has processed, the model will certainly have been trained on exploitable, buggy code as well. This raises questions about the security of the code suggestions Copilot makes.
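To picture what that means, here is a hypothetical example, not taken from the study, of a classic insecure pattern (SQL built by string concatenation) that is common in public code and that an assistant trained on it could reproduce, next to the parameterized version a reviewer should insist on:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")


def find_user_unsafe(name):
    # The kind of completion that mirrors insecure public code:
    # user input is concatenated straight into the SQL string.
    query = "SELECT * FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()


def find_user_safe(name):
    # What a reviewer should change it to: a parameterized query.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()


# "' OR '1'='1" slips past the unsafe version and returns every row.
print(find_user_unsafe("' OR '1'='1"))  # leaks all users
print(find_user_safe("' OR '1'='1"))    # returns nothing
```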

To get meaningful statistics, the study used more than 80 scenarios to test GitHub Copilot. This translated into over 1,500 different programs, and of course across different programming languages.

The researchers were surprised to find that almost half of the code (roughly 40%) was either vulnerable or produced bugs or errors in the programs.

The future of GitHub Copilot 🤖

The study concluded that the code generated by GitHub Copilot is often vulnerable, and the researchers recommended always verifying what it generates.

And that was not the only conclusion, because the problem is not just the vulnerabilities or bugs in the generated code: Copilot also does not take into account how what developers consider “good practices” has changed over time in the code it learned from.

It is clear that Copilot can generate huge amounts of code at very high speed, and that tools like this will increase developers’ productivity, but developers always have to stay vigilant and analyze the generated code, because if that step is skipped it can cause a lot of problems.


Juan España · CEO at ByteHide 🔐, passionate about highly scalable technology businesses and .NET content creator 👨‍💻