15 Breakthroughs and Ideas to Work on in 2019

Harsh Sikka
Published in Technomancy
5 min read · Dec 21, 2018

So, 2018 is coming to a close. It was an eventful year. Traditionally around this time, people solidify resolutions, reflect, and contemplate how to better themselves. I don’t really have any resolutions, except to build some damn cool things.

So, I want to outline the topics and projects I’m particularly interested in working on during the coming year. Quick disclaimer: the topics aren’t very organized, I don’t know when or even if I’ll get to each one, and I certainly don’t know how long they’ll take. I’ll only really know once I start working on them ;) So, without further ado, here they are:

1.) Biologically Inspired Neural Networks

My thesis at Harvard can be summarized, at a very high level, as studying computational paradigms in the brain and formalizing ML architectures from them. Some of the work coming out of DeepMind and Vicarious is related.

I’m also working day and night on Modular Neural Networks with adaptive topology, where I’m using biologically constrained networks to build more robust and general systems. You can read more here!
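To make the modular idea concrete, here’s a minimal PyTorch sketch (not my thesis architecture, just a toy illustration) where a learned gate softly routes each input across a few small expert modules:

```python
import torch
import torch.nn as nn

class ModularNet(nn.Module):
    """Toy modular network: a gating layer softly routes each input
    across a few small expert modules (illustrative only)."""
    def __init__(self, in_dim=16, hidden=32, out_dim=4, n_modules=3):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, out_dim))
            for _ in range(n_modules)
        )
        self.gate = nn.Linear(in_dim, n_modules)

    def forward(self, x):
        weights = torch.softmax(self.gate(x), dim=-1)                # (batch, n_modules)
        outs = torch.stack([m(x) for m in self.experts], dim=1)      # (batch, n_modules, out_dim)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)             # weighted combination

net = ModularNet()
y = net(torch.randn(8, 16))  # -> shape (8, 4)
```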

Another really interesting area is looking to other cellular networks and bioelectric activity for novel and robust architectures. My mind was blown by Dr. Michael Levin’s recent talk at NeurIPS. Check it out, you won’t regret it. Evolutionary algorithms are also very interesting, though I know little about them.

2.) A Biological Compiler and Organism Design Framework

In biology, vast strides are being made. The line between hardware and software is blurrier there, since both the lowest and highest levels of the system are mutable and dynamic. The folks at Asimov developed Cello, which functions as a sort of CAD tool for designing simple genetic circuits. In the NeurIPS talk I linked earlier, Dr. Levin also noted that the endgame of his work would be a high-level compiler whose functional outputs influence organism design.

I really really want to build a rudimentary implementation of this. Time to hit the biology textbooks for a refresher.
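For flavor, here’s a toy sketch of the lowest rung of such a compiler. Biological logic is often built from NOR and NOT gates implemented by repressor proteins; a real tool like Cello maps gates onto a characterized part library, while the sketch below just decomposes AND into NORs:

```python
# Toy sketch: genetic logic circuits are commonly composed from NOR/NOT
# gates implemented by repressors. This only shows the boolean decomposition;
# a real compiler would assign each gate to a characterized biological part.

def NOR(a: int, b: int) -> int:
    return int(not (a or b))

# AND built purely from NORs, the way a genetic compiler might decompose it:
# NOT x = NOR(x, x), and AND(a, b) = NOR(NOT a, NOT b) -> three NOR gates.
def AND(a: int, b: int) -> int:
    return NOR(NOR(a, a), NOR(b, b))

for a in (0, 1):
    for b in (0, 1):
        print(f"AND({a},{b}) = {AND(a, b)}")
```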

3.) Better, Open-Source Neural Architecture Search Pipeline

A few years ago, a Google Brain resident debuted NAS, an ML approach that learns to pick an optimal network topology for a problem space. Breakthroughs have followed, including DARTS. For a good treatise on the state of automated neural architecture search, check out this awesome post by Rachel Thomas at fast.ai.

The main products in the space are Google’s own AutoML and the open-source AutoKeras. These are useful, but computationally expensive. I feel AutoML could be huge for general software engineers, but the pipeline and the technology need a lot of work.
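To give a flavor of what a search pipeline actually does, here’s a hedged sketch of the simplest possible strategy: random architecture search over tiny MLPs in Keras (assuming TF 2.x; real NAS and DARTS use far smarter search, and far more compute):

```python
import random
import numpy as np
from tensorflow import keras

def sample_architecture():
    """Randomly sample a tiny MLP topology (a toy search space)."""
    return {"layers": random.randint(1, 3),
            "units": random.choice([32, 64, 128])}

def build_model(arch, in_dim=20, n_classes=3):
    model = keras.Sequential()
    model.add(keras.layers.Dense(arch["units"], activation="relu",
                                 input_shape=(in_dim,)))
    for _ in range(arch["layers"] - 1):
        model.add(keras.layers.Dense(arch["units"], activation="relu"))
    model.add(keras.layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Synthetic data for illustration; swap in a real dataset.
X = np.random.randn(512, 20).astype("float32")
y = np.random.randint(0, 3, size=512)

best_arch, best_acc = None, 0.0
for _ in range(5):  # real NAS evaluates thousands of candidates
    arch = sample_architecture()
    history = build_model(arch).fit(X, y, validation_split=0.2,
                                    epochs=3, verbose=0)
    acc = history.history["val_accuracy"][-1]
    if acc > best_acc:
        best_arch, best_acc = arch, acc
print("best architecture:", best_arch, "val accuracy:", best_acc)
```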

4.) ML models that can generate code snippets from descriptions

Desc2Code is an interesting Request for Research out of OpenAI. It’s understandably difficult, but it also seems like it would be category-changing if completed. I know of one person actively pursuing it.

5.) Wetware Computing

Read this and have your mind blown. Koniku is already operating in this space. Wetware computing combines synthetic biology, electrical and computer engineering, neuroscience, and artificial intelligence; it may just be the most underrated topic of the 21st century.

6.) Preventative Personal Health Dashboard

Since I started tracking every aspect of my health, storing and operating on the data, I’ve had some remarkable insights. I’m shocked there isn’t an app that acts as a hub or dashboard and gives me preventative suggestions about my health day to day. There is some novel ML work to be done here, but I think it’s important and I’d definitely love to give it a shot. Shake the healthcare system around a bit ;)
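As one concrete slice of that ML work (with made-up column names and synthetic data), here’s a sketch that flags days where a tracked metric like resting heart rate drifts beyond a personal rolling baseline:

```python
import pandas as pd
import numpy as np

# Hypothetical daily log; in practice this would come from a wearable export.
df = pd.DataFrame({
    "date": pd.date_range("2019-01-01", periods=60),
    "resting_hr": np.random.normal(58, 2, 60),
})
df.loc[45, "resting_hr"] = 72  # inject one unusual day

# Rolling personal baseline and a z-score against it
base = df["resting_hr"].rolling(14, min_periods=7)
df["z"] = (df["resting_hr"] - base.mean()) / base.std()

flagged = df[df["z"].abs() > 2.5]  # days worth a preventative nudge
print(flagged[["date", "resting_hr", "z"]])
```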

7.) Financial Models

I’d love to spend a weekend building some cool trading algorithms and competing in Kaggle finance competitions or on Quantopian. I took ML for Trading during my master’s at Georgia Tech; it’s definitely worth checking out!
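For flavor, here’s a sketch of the classic starter strategy: a moving-average crossover backtest on a synthetic price series (illustrative only, not a strategy recommendation, and it ignores transaction costs):

```python
import numpy as np
import pandas as pd

# Synthetic price series; swap in real data from your source of choice.
prices = pd.Series(100 * np.exp(np.cumsum(np.random.normal(0, 0.01, 500))))

fast = prices.rolling(10).mean()
slow = prices.rolling(50).mean()
# Long when the fast average is above the slow one; shift(1) avoids lookahead.
position = (fast > slow).astype(int).shift(1).fillna(0)

daily_ret = prices.pct_change().fillna(0)
strategy_ret = position * daily_ret
print("buy & hold:", (1 + daily_ret).prod() - 1)
print("crossover: ", (1 + strategy_ret).prod() - 1)
```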

8.) Neural Turing Machines

Neural Turing Machines and Differentiable Neural Computers are incredible concepts from Alex Graves and the folks over at DeepMind. Implementations are still fresh, and I think they could be applied to a whole new range of problems.
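The core trick is the differentiable read. Here’s a minimal numpy sketch of content-based addressing, one ingredient of the full NTM (which also has write heads and location-based shifts):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def content_read(memory, key, beta=5.0):
    """Differentiable read: cosine-match a key against memory rows,
    softmax the similarities, and return a weighted sum of the rows."""
    sims = memory @ key / (np.linalg.norm(memory, axis=1)
                           * np.linalg.norm(key) + 1e-8)
    w = softmax(beta * sims)  # larger beta -> sharper attention
    return w @ memory, w

M = np.random.randn(8, 4)               # 8 memory slots, width 4
key = M[3] + 0.05 * np.random.randn(4)  # noisy query for slot 3
read_vec, weights = content_read(M, key)
print(weights.round(2))                 # mass should concentrate on slot 3
```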

9.) Low-Power Neuromorphic Computing and Sensors

Here I mean brain-mimicking hardware, not just non-von Neumann architectures. I think synthetic, neuron-imitating sensors and networks could be huge, and may be related to, or a predecessor of, the Wetware Computing topic I mentioned earlier. Organic transistors seem particularly useful here.
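For intuition about the neuron-imitating part, here’s a minimal leaky integrate-and-fire simulation, a common abstraction in neuromorphic hardware (the parameters are arbitrary):

```python
import numpy as np

# Leaky integrate-and-fire neuron: membrane voltage leaks toward rest,
# integrates input current, and emits a spike on crossing the threshold.
dt, tau = 1.0, 20.0
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
T = 200
current = 0.06 * np.ones(T)  # constant input drive (arbitrary units)

v, spikes = v_rest, []
for t in range(T):
    v += dt / tau * (v_rest - v) + current[t]  # leak + integrate
    if v >= v_thresh:
        spikes.append(t)                       # fire
        v = v_reset                            # and reset
print(f"{len(spikes)} spikes, first few at t = {spikes[:5]}")
```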

10.) Diagnostic ML Models

I love seeing new diagnostic models for cancers, diabetic retinopathy, and the like. How many useful models could we build that sit on a mobile phone and help a physician? I’d be interested in finding out.
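On the sit-on-a-mobile-phone piece, the deployment path already exists; for example, a trained Keras model can be converted to TensorFlow Lite (a sketch with a stand-in model, assuming TF 2.x):

```python
import tensorflow as tf

# Stand-in for a trained diagnostic model (e.g. a retinopathy classifier).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu",
                           input_shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Convert to a compact on-device format for a mobile app.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # weight quantization
with open("diagnostic_model.tflite", "wb") as f:
    f.write(converter.convert())
```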

11.) Automating and Extending Learning

In my education technology research a year ago, I realized that learning hasn’t really been scaled well, despite the potential of Q&A models and search technology to do so. One can imagine a system that indexes your notes, asks you questions, and answers your queries, learning with you and becoming a sort of online extension of your knowledge. It may also have implications in the workplace!
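A first cut of the index-your-notes-and-answer-queries piece could be plain retrieval. Here’s a hedged sketch using scikit-learn’s TF-IDF, with stand-in notes:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in notes; in practice these would be loaded from your note files.
notes = [
    "Backprop computes gradients of the loss w.r.t. each weight via the chain rule.",
    "The Krebs cycle oxidizes acetyl-CoA to produce NADH and FADH2.",
    "Federated learning trains models across devices without centralizing data.",
]

vec = TfidfVectorizer()
note_vecs = vec.fit_transform(notes)

def ask(query, k=1):
    """Return the k notes most similar to the query."""
    sims = cosine_similarity(vec.transform([query]), note_vecs)[0]
    return [notes[i] for i in sims.argsort()[::-1][:k]]

print(ask("how do neural nets compute gradients?"))
```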

12.) Automated Data Science tooling

Recently, Francois Chollet, the creator of Keras and author of some awesome DL books, asked what the most difficult and costly part of industry ML is, and the resounding answer was data prep and cleaning. I agree from experience, and I think tools that could take care of some or all of this would be huge. It’s unclear where one would start, but OpenRefine has been a very useful tool.
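As a taste of what such tooling would automate, here’s the kind of manual pandas pass it would replace (the file and column names are hypothetical):

```python
import pandas as pd

df = pd.read_csv("raw_data.csv")  # hypothetical messy export

# The usual drudgery a data-prep tool could automate:
df = df.drop_duplicates()
df.columns = df.columns.str.strip().str.lower().str.replace(" ", "_")
df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
df["age"] = pd.to_numeric(df["age"], errors="coerce")
df["age"] = df["age"].fillna(df["age"].median())
df = df.dropna(subset=["user_id"])  # rows missing the key are unusable
```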

13.) Distributed and Federated ML

Federated ML was a big deal when Google dropped their paper, but it hasn’t really been implemented commercially. A lot of cool work has also been done by Andrew Trask of OpenMined fame. I think projects around this and distributed ML could yield interesting products and general open-source tooling. It’s also a very important topic given the times.
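The central idea from the Google paper, federated averaging, fits in a few lines. Here’s a numpy sketch where three clients train a linear model on private data and only their weights are averaged:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Each client runs a few steps of linear-regression gradient descent locally."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Simulated private datasets on three clients (they never leave the "device").
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

global_w = np.zeros(2)
for _ in range(20):
    # Clients train locally; only model weights are sent back and averaged.
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)
print(global_w)  # should approach true_w without pooling the raw data
```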

14.) An Operating System for Machine Learning

I’ve been toying with the idea of building an OS dedicated to ML performance. My instincts could be off, but I think it would be a fun project!

15.) Deep Reinforcement Learning and Simulation

I’m no expert in Deep RL, but I’m curious how some of the model types I mentioned earlier, including biologically constrained ones, would fare here, and I’d love to test them out. I also think there is a lot of work to be done around RL environments and simulation tooling. Applied Intuition is working in this space for autonomous vehicles.
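For anyone wanting to poke at the environment side, the OpenAI Gym loop is the standard starting point. A sketch with a random policy (using the classic pre-0.26 Gym API):

```python
import gym

env = gym.make("CartPole-v1")
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()  # random policy; swap in a learned one
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```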

Wrapping Up

There are a lot of cool things to be working on and above are just those that capture my personal interest. I’d love to hear what you’re all up to, and will keep you updated as I hustle along my merry little way in 2019! Happy Holidays :)

If you enjoyed it, please let me know by clapping, sharing or commenting! I have a few other personal projects that center around learning various subjects, like general computer science, physics, economics, and other related topics, but I didn’t go into them in depth here. If there’s interest, I can share my CS and Physics curricula!

I’ll be updating this publication as I go along.
