When Your Paper Is Criticised … Have Faith In Your Work

Meet one of the most cited computer science papers ever!

--

As academics, we receive feedback that can be quite brutal. In fact, there’s often a “third reviewer syndrome”, where two of the reviewers love your paper or grant proposal, but the third one trashes it, either because it’s not the paper they wanted or because they just don’t get it. But, overall, the best form of revenge is success.

For us, we have seen three of our spin-outs advance, even though some people thought we would never have success with them. All of them have been successful, and two have even been acquired by large companies (Zonefox and Symphonic).

Alex Krizhevsky

And, so, if you face rejection of your research work, just think of Alex Krizhevsky:

Basically, he created “AlexNet” [1] and showed that neural networks could be applied to many application areas, and that GPUs were at the core of this. To date, the related paper has been cited 129,088 times and is one of the most cited papers ever in Computer Science.

At the time it was published, neural networks were seen as a failing research area. Marvin Minsky, the famous AI researcher, was one of the most vocal detractors: with Seymour Papert, he had published a book on perceptrons [2] which showed that a single-layer perceptron could not cope with the exclusive-or (XOR) function.
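To see why exclusive-or was such a sticking point, here is a minimal NumPy sketch (purely illustrative, and not code from any of the papers mentioned here). A single-layer perceptron can never get all four XOR cases right, as no single straight line separates them, but a small network with one hidden layer, trained with backpropagation, learns them easily:

```python
# Illustrative sketch: a single-layer perceptron cannot learn XOR,
# while a two-layer network trained with backpropagation can.
import numpy as np

rng = np.random.default_rng(42)

# XOR truth table
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# --- Single-layer perceptron (no hidden layer) ---
w, b = rng.normal(size=(2, 1)), np.zeros(1)
for _ in range(1000):
    pred = (X @ w + b) > 0            # hard threshold output
    err = y - pred                    # perceptron learning rule
    w += 0.1 * X.T @ err
    b += 0.1 * err.sum()
print("Perceptron on XOR:", ((X @ w + b) > 0).astype(int).ravel())  # at least one case always wrong

# --- One hidden layer, trained with backpropagation ---
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # hidden layer of 4 units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # output layer

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                      # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)           # backward pass (squared-error gradient)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print("Two-layer net on XOR:", (out > 0.5).astype(int).ravel())      # should give [0 1 1 0]
```

The same basic ingredients (hidden layers trained by backpropagation) are what AlexNet later scaled up on GPUs.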

And, so, many criticised the AlexNet approach, claiming that it would not scale and that neural networks were a poor method. But the critics were wrong, and the paper has become one of the greatest research papers ever published. At the time, many researchers had to hide the fact that their papers used neural networks, as it was such a disregarded area. But it was the start of the drive towards Deep Learning.

Alex was born in Ukraine but grew up in Canada. He could have gone to many great US universities, but decided that the University of Toronto was the perfect place to study deep learning. While studying, he used backpropagation methods to recognise patterns in image data sets. And, so, Alex submitted his method to the ImageNet competition in 2012 [1] and beat the rest by a significant margin (over 10% better than anything else). The paper included the mighty Ilya Sutskever, who is one of the key people behind ChatGPT and DALL-E. Alex’s method built on the work of one of his co-authors, Geoffrey Hinton, and used GPUs to significantly speed up the learning process. He then built layered neural networks, in which each layer builds on the outputs of the one before.
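To give a feel for what those layered networks look like, here is a rough PyTorch sketch of an AlexNet-style model (simplified and illustrative only; the exact architecture is described in the paper [1]). A stack of convolutional layers extracts features, a few fully connected layers classify them, and the whole model is moved onto a GPU when one is available:

```python
# A simplified, AlexNet-style layered network (illustrative only; the real
# architecture and training details are in the paper [1]).
import torch
import torch.nn as nn

class TinyAlexNet(nn.Module):
    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.features = nn.Sequential(            # stacked convolutional layers
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(           # fully connected classifier
            nn.Flatten(),
            nn.Dropout(0.5),
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# The GPU idea still applies today: train on a GPU if one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyAlexNet().to(device)
batch = torch.randn(8, 3, 224, 224, device=device)   # a fake batch of 224x224 RGB images
print(model(batch).shape)                             # torch.Size([8, 1000])
```

Each convolutional layer works on the outputs of the layer below it, which is how the network builds up from simple edges to complex object parts, and the GPU is what makes training a model of this size practical.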

In fact, it was such a success that many Silicon Valley companies came after the three authors, but it was Google that was able to integrate the method into Google Photos and a wide range of other applications. They have all since left Google to focus on other things. Alex worked for Google from March 2013 to September 2017, and left to join Dessa (an AI-enabled business development company). Geoffrey became worried about the effects of AI and has since become a proponent of AI governance. Ilya is now the Chief Scientist at OpenAI.

AlexNet basically provided a foundation for others to build on, and since then we have seen it applied to many application areas, including face recognition, medical imaging and autonomous vehicles.

Overall, Alex has created many great advancements [here].

Conclusions

The moral of the story: If you have a vision and something that is ground-breaking, don’t be put off by those who doubt you. Have faith.

References

[1] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25.

[2] Minsky, M., & Papert, S. (1969). Perceptrons: An Introduction to Computational Geometry. MIT Press, Cambridge, MA.

--


Prof Bill Buchanan OBE FRSE
ASecuritySite: When Bob Met Alice

Professor of Cryptography. Serial innovator. Believer in fairness, justice & freedom. Based in Edinburgh. Old World Breaker. New World Creator. Building trust.