Greatness cannot be planned

Samuel Gbafa
Samuel Synthesized · 3 min read · Nov 10, 2020

Over the last two weeks, I’ve comfortably settled into the world of research. As I continue to explore deep learning, I keep finding more topics and areas to explore. The simple specializations I’m used to seeing in software engineering (front-end, back-end, DevOps, etc.) don’t really apply here; the areas of research and directions to move in are far more diverse. I’ve found that exploring the foundational topics takes a lot of focused time, and that in research, what you discover along the way is part of the journey, not just the destination. So what in the last two weeks led to this perspective?

First things first, I found a good work cadence as well as a way to track and categorize the topics I’m exploring. There is so much to learn, and I was encountering so many new topics, that I felt I wasn’t making progress; my awareness of my ignorance grew faster than my knowledge. I ended up creating a list of all the topics I encountered and started leaving notes on them. I could watch the list of topics, both known and unknown, grow. In doing this, I’ve been able to see distinct new fields with new subtopics emerge independently of the topics I was actively studying. Finding out about these new topics felt like pushing the edge of my knowledge as I expanded my known unknowns.

So how have I been encountering new topics?

I’ve met some pretty amazing people at OpenAI in the last few weeks and have had great conversations with them. I met with Gabriel Goh, and we discussed many topics, from his work introspecting the inner workings of convolutional neural networks to different ideas in meta-learning. Gabriel exposed me to some interesting ideas that I’ve been contemplating as I consider my project. After speaking with Gabriel, I dove into learning about different approaches to meta-learning.

I also ended up speaking with Kenneth Stanley, with whom I had a mind-blowing conversation. I don’t think I’ve ever stopped and taken so many notes in a conversation with someone I’m working with. Kenneth focuses on problems related to open-endedness, investigating algorithms that continually generate interesting things. We talked about open-ended algorithms in reality, convergent evolution, challenges in this space, artificial life, and even simpler things like what it’s like to work at OpenAI versus other organizations. Ken sent me his talk, based on his book “Why Greatness Cannot Be Planned,” which had great insights into how optimizing toward a specific objective may not be the best way to achieve it.

Aside from the great conversations I had and the subsequent investigations into the new topics I encountered, I spent most of the past two weeks learning about natural language processing and sequence models, particularly recurrent neural networks and their variations, including bidirectional RNNs, LSTMs, GRUs, etc. I also began to learn about transformers, diving deeper than I ever have before! The more I learn, the more confident I feel about exploring this space. No longer is it just a black box, but a new set of tools! I feel empowered, and I think I now know enough to be dangerous!

The journey continues!

