A.I. is not magic

A dash of data science and a bucket of blood, sweat, and tears, and there you go: your model is overfitting… Crap. Yeah, it really is kind of hard to get going in this field. Artificial intelligence, and machine learning in particular, is basically five or six unrelated fields duct-taped together. Let's look at some cool demos from three different topics in the area to get a sense of what's possible and how these things link up:

  1. Understanding words, sentences, paragraphs, and documents. The TensorFlow Projector is a nice place to see how words cluster according to their relationships to each other. I like word2vec and doc2vec for this task, and t-SNE for clustering, but GloVe and others are just fine for this too (a minimal word2vec sketch follows this list).
  2. t-SNE is a tool for dimensionality reduction and works well with visualization tools. It is built into the TensorFlow Projector from #1, along with PCA. The demos are super fun: there is a classic one classifying digits from the MNIST dataset, and a nice interactive visualization of how t-SNE parameters affect the evolution of the dimensionality reduction (a second sketch after this list shows t-SNE on digit images).
  3. Understanding images is a super fun topic. My favorite examples from this field are TensorFlow for Poets, style transfer, and DeepDream (the third sketch below shows the core idea behind TensorFlow for Poets: bolt a new classifier head onto a pretrained network).
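
To make #1 concrete, here is a minimal word2vec sketch using gensim (assuming gensim 4.x; the toy corpus is obviously a placeholder, since real embeddings need far more text):

```python
# Minimal word2vec sketch (assumes gensim >= 4.0; toy corpus is a placeholder).
from gensim.models import Word2Vec

# Each "document" is a list of tokens. Real embeddings need far more text.
sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["the", "dog", "chases", "the", "ball"],
    ["the", "cat", "chases", "the", "mouse"],
]

# Train a tiny skip-gram model; vector_size is the embedding dimensionality.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=200)

# Every word is now a dense vector; words used in similar contexts end up
# with nearby vectors, which is exactly what the projector visualizes.
print(model.wv["king"][:5])                    # first 5 dimensions
print(model.wv.most_similar("king", topn=3))   # nearest neighbors in the space
```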
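And for #2, a minimal t-SNE sketch with scikit-learn. I'm using sklearn's small built-in 8x8 digits dataset as a stand-in for full MNIST so it runs in seconds:

```python
# t-SNE sketch: squash 64-dimensional digit images down to 2-D for plotting.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

digits = load_digits()                       # 1797 samples, 64 features each

# Perplexity is the knob that most changes the picture; try a few values
# and watch the clusters reshape themselves.
tsne = TSNE(n_components=2, perplexity=30, init="pca", random_state=0)
embedded = tsne.fit_transform(digits.data)   # shape: (1797, 2)

plt.scatter(embedded[:, 0], embedded[:, 1], c=digits.target, cmap="tab10", s=5)
plt.colorbar(label="digit class")
plt.title("t-SNE of the sklearn digits dataset")
plt.show()
```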
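And the core trick behind TensorFlow for Poets in #3 is transfer learning: freeze a pretrained image network and train only a new classifier head on your own photos. A rough Keras sketch (the 5-class output is a placeholder for whatever categories you are retraining on):

```python
# Transfer-learning sketch: reuse ImageNet features, train only a new head.
import tensorflow as tf

NUM_CLASSES = 5  # placeholder: however many categories you're retraining on

# Pretrained MobileNetV2 as a frozen feature extractor (no ImageNet head).
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, pooling="avg",
    weights="imagenet",
)
base.trainable = False  # keep the pretrained filters fixed

# Only this final dense layer gets trained on your own images.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```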
AI is all about infrastructure and getting it done. There is no secret sauce.

I had planned to give more examples of RNNs/LSTMs, GANs, ILP, random forests, and more, but work is pulling me back to reality. To tie it all together: these mathematical models (i.e., capabilities) all live inside the same big machine learning frameworks, but in the deeper details they are not related at all. Clearly RNNs, DNNs, CNNs, and the rest are similar in construction, but they have no connection at all to stuff like word embedding models. Right? Or so I thought. Read up on CNNs for text analysis and your brain kind of feels like this:

The effect of drinking a Pan Galactic Gargle Blaster is like having your brains smashed out by a slice of lemon wrapped around a large gold brick. (Douglas Adams, The Hitchhiker's Guide to the Galaxy)

So yes. There is this zen-type feeling when all these unrelated mathematical objects suddenly fit together perfectly, as though they were designed to work together. I can go on and on about the unreasonable effectiveness of mathematics (Wigner's famous essay, and the pile of follow-ups it inspired). Oh man, I could go on.
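
Here is a minimal sketch of that CNN-for-text idea, in the spirit of the classic sentence-classification CNNs (vocabulary size, sequence length, and the binary label here are placeholders): the exact same convolution machinery slides over a sequence of word embeddings instead of over pixels.

```python
# CNN-for-text sketch: convolutions over word embeddings instead of pixels.
import tensorflow as tf

VOCAB_SIZE, SEQ_LEN, EMBED_DIM = 10_000, 100, 64  # placeholder sizes

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN,)),
    # Each word index becomes a dense vector: the word-embedding layer.
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    # A 1-D convolution slides a window over 5 consecutive word vectors,
    # learning n-gram detectors the same way image CNNs learn edge detectors.
    tf.keras.layers.Conv1D(filters=128, kernel_size=5, activation="relu"),
    # Keep only each detector's strongest response anywhere in the sentence.
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. sentiment yes/no
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```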

OK. Back to the salt mine. I had two unrelated, extremely exciting ML breakthroughs today. The first was achieved by truncating some outliers before normalizing the dataset. The other was the result of no sleep and grinding away at simulations until the models did what I wanted them to. Really, I need to get back to it now.
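
Before I go, the clipping trick looks roughly like this sketch (the percentiles and the data are made-up placeholders, not my actual pipeline):

```python
# Truncate per-feature outliers before normalizing, so a few wild values
# don't inflate the variance and squash everything else toward zero.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
X[0] = [500.0, -500.0, 500.0]            # inject some wild outliers

# Clip each feature to its 1st-99th percentile range (placeholder thresholds).
lo, hi = np.percentile(X, [1, 99], axis=0)
X_clipped = np.clip(X, lo, hi)

# Now standardize; the outlier row no longer dominates the statistics.
X_norm = (X_clipped - X_clipped.mean(axis=0)) / X_clipped.std(axis=0)
print(X_norm.mean(axis=0))               # ~0 per feature
print(X_norm.std(axis=0))                # ~1 per feature
```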

Happy Coding!

-Daniel
daniel@lsci.io
LemaySolutions.com