Randomization has proven to be a fascinating and useful resource in algorithm design. It has been used to create fast algorithms for problems like sorting and finding the median of a list, as well as more interesting ones like matrix multiplication verification, global min-cuts, computing 2-dimensional Delaunay triangulations of a point set, and tons more. The list of applications goes on, making it clear how useful randomization can be in algorithm design (and, for me, how much fun it makes it).
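One of the examples above, matrix multiplication verification, has a classic randomized solution: Freivalds' algorithm. The sketch below is a minimal illustration (the function name and plain-list matrix format are my own choices, not from any particular library): instead of recomputing the full product, it multiplies by a random 0/1 vector, so each check costs only a few matrix-vector products.

```python
import random

def freivalds_verify(A, B, C, trials=10):
    """Probabilistically check whether A @ B == C (n x n matrices as lists of lists).

    Each trial draws a random 0/1 vector r and compares A(Br) with Cr,
    which is O(n^2) work instead of the O(n^3) of recomputing A @ B.
    If C is wrong, a single trial catches it with probability >= 1/2,
    so `trials` independent rounds miss with probability <= 2**-trials.
    """
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        # Three matrix-vector products: B r, then A (B r), then C r.
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # definitely not equal
    return True  # equal with high probability
```

A "False" answer is always correct; "True" is only correct with high probability, which is exactly the Monte Carlo trade-off that makes these algorithms fast.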

So it is February 2020, and if there are a few things I smell in the air these days, they are a mixture of *rain*, *snow*… and a bit of *Deep Learning* (DL). I’ve been passively observing some of the academic and industry trends in AI and Machine Learning (ML) for a bit, and if there is one thing that gets people of all kinds worked up with excitement, it is **Deep Learning**. Now I am not really a practitioner of DL these days, but I do find the mathematics and theory behind it to be quite fun. Fun, you might…

Optimization is a fascinating area with many applications, especially these days with Machine Learning (ML). As those involved with ML know, gradient descent variants are some of the most common optimization techniques employed for training models of all kinds. For very large data sets, stochastic gradient descent has been especially useful, but at the cost of more iterations to reach convergence. …
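The trade-off mentioned above can be seen in a tiny sketch of stochastic gradient descent on a least-squares fit (the function name, learning rate, and toy objective here are illustrative assumptions, not anything from a specific post): each update touches a single random sample, so per-step cost is constant in the dataset size, at the price of noisier steps and more of them.

```python
import random

def sgd_linear(xs, ys, lr=0.02, epochs=500):
    """Fit y ~ w*x + b by stochastic gradient descent.

    Each step uses the gradient of the squared error at one
    randomly chosen sample -- O(1) work per step regardless of
    dataset size, unlike full-batch gradient descent, but the
    noisy updates typically need more iterations to converge.
    """
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        for _ in range(n):
            i = random.randrange(n)          # pick a single random sample
            err = (w * xs[i] + b) - ys[i]    # residual at that sample
            w -= lr * err * xs[i]            # gradient step for the slope
            b -= lr * err                    # gradient step for the intercept
    return w, b
```

With a full-batch method, each of those updates would instead average the gradient over all n samples, giving smoother progress per step but n times the work.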

PhD student @ UIUC who enjoys the mystical arts of mathematics. Works in Theoretical Computer Science. www.christianjhoward.me