24 Really F***ing Interesting Deep Learning Papers
With links and brief summaries
Jul 16, 2022
I’ve read a lot of deep learning papers, but some stand out for how unique and interesting their ideas are. They may not be the most cited, nor necessarily the most practical, but they definitely get the gears turning. Here they are, in no particular order.
- “DeepInsight: A methodology to transform a non-image data to an image for convolution neural network architecture”. Uses t-SNE to arrange tabular features into a 2D image so that convolutional computer-vision methods can be applied to tabular data.
- “ResMLP: Feedforward networks for image classification with data-efficient training”. Builds a functional image-classification architecture without convolutions, attention, or capsules: just feedforward layers.
- “Single Headed Attention RNN: Stop Thinking With Your Head”. The funniest deep learning paper I have had the pleasure of coming across. Author Stephen Merity offers a humorous and insightful critique of modern NLP research’s Transformer-centric rush to abandon recurrence, and argues for reconsidering the usefulness of recurrent layers.
- “GPT-D: Inducing Dementia-related Linguistic Anomalies by Deliberate Degradation of Artificial Neural Language Models”. The title says it all.
- “Predictability and Surprise in AI…