2017 Q1 Plans

The boom of Machine Learning and AI continues, and for my personal research projects, 2016 was quite productive. But 2017 has already started. What should I focus on in Q1?


Ok, a bit of methodology first. I want to work close to artificial intelligence in general and machine learning in particular. Which layer of the intelligence stack should I pick?

Lower levels (hardware, languages) don’t interest me much. Extending ML frameworks like TensorFlow could be interesting, but I’ll skip it for now. The neural network architecture layer is very interesting; it’s where all the fun is going on, but it fundamentally requires a lot of research of the kind typically done in academia or in labs at Google, Microsoft, etc. I’m not sure I can push that frontier without the time and resources for fundamental research.

What’s left is the cognitive architecture and application layers: finding existing research papers, figuring out how different kinds of models work and which problems they are good for, and designing implementations for particular problems.

The machine learning field is huge, so I should narrow it down. It should be something related to text data and conversational interfaces, so that I can apply what I learn to my work projects right away.


In Q1 2017 I will focus on applying state-of-the-art machine learning algorithms to natural language understanding (NLU) problems. These problems are primarily related to chatbots. I don’t define a concrete list of projects or areas of research at this point. Some ideas:

  1. Intent classification.
  2. Named entity recognition.
  3. Spelling correction.
  4. Topic extraction.
  5. Chat response selection with learning from feedback.
  6. Efficient integration of machine learning algorithms with Good Old-Fashioned AI (rule-based systems).
  7. Text summarization.
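To give a flavor of the first idea, intent classification can be prototyped without any neural network at all. The sketch below is a toy keyword-counting baseline I made up for illustration (the utterances and intent labels are hypothetical), not any particular published method; in practice you’d start from something like TF-IDF features with a linear classifier before trying fancier models.

```python
from collections import Counter

# Hypothetical toy training data: user utterances labeled with intents.
TRAINING = [
    ("what's the weather like today", "weather"),
    ("will it rain tomorrow", "weather"),
    ("set an alarm for 7 am", "alarm"),
    ("wake me up at six", "alarm"),
    ("play some jazz music", "music"),
    ("put on my workout playlist", "music"),
]

def build_vocab(training):
    """Count how often each word appears under each intent."""
    vocab = {}
    for text, intent in training:
        for word in text.lower().split():
            vocab.setdefault(word, Counter())[intent] += 1
    return vocab

def classify(text, vocab):
    """Score each intent by summing word-intent counts; pick the top one."""
    scores = Counter()
    for word in text.lower().split():
        for intent, count in vocab.get(word, Counter()).items():
            scores[intent] += count
    return scores.most_common(1)[0][0] if scores else None

vocab = build_vocab(TRAINING)
print(classify("will it rain today", vocab))  # -> weather
```

A baseline like this is also useful later as a sanity check: if a neural model can’t beat simple word counting on your data, something is wrong with the setup.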

Other areas where I want to have a more structured understanding: ML project management, engineering practices for ML systems.


A good strategy should include a list of things which should not be done. After all, if you focus on everything, you don’t focus on anything. In Q1 2017 I won’t spend time on areas of machine learning that are not applicable to natural language processing or my work projects. For example, though the CS231n computer vision class at Stanford looks interesting, I won’t really need it, so it’s better to put it on hold.


I like the idea of one-month personal projects with tangible outcomes. I will try that in Q1.


What are your research plans for Q1? Please share in the comments!


The article was originally published on http://pavel.surmenok.com/2017/01/02/2017-q1-plans/