One of the most popular requests I get from readers who enjoy the “building algorithms from scratch” series is coverage of deep learning or neural networks. Neural network models can be quite complex, but at their core, most architectures share a common base from which newer designs have emerged. This core architecture (which I refer to as the vanilla neural network) will be the main focus of this post. To that end, this post aims to build a vanilla neural network WITHOUT any ML package, instead implementing the underlying mathematical concepts in Julia (a language I feel is the perfect…
You may have heard the quote “Data Is The New Oil” one too many times. Regardless of one’s opinion on this assertion, it is widely recognised that data is crucial to the innovative processes of every industry. The financial industry, in particular, is one where access to quality, up-to-date data is essential given its highly dynamic nature. …
Amazing libraries for machine learning tasks (such as scikit-learn) have made it ridiculously easy to generate machine learning models these days. Gone are the days when one had to write every model from scratch. It’s a good thing that we have moved past those times, since most problems ultimately reduce to an optimization problem. On the other hand, this “fit and predict” culture could have serious long-term effects on the learning culture within the field as a whole. Some scavengers are feeding off the “AI hype” with those half-baked “learn data science in 30 days” schemes, which often lead…
Currently, Python and R are undoubtedly the most widely used programming languages in the machine learning world. Unfortunately, the high-level and elegant abstractions of these programming languages usually come at a cost, especially when working with large systems. Before we even begin, this post will not be some endless bashing of either Python or R, so any reader with such expectations should look elsewhere. Rather, this post seeks to enlighten and inform readers about the potential addition of Julia to their toolbox. At the end of the day, all these programming languages are mere tools with different use cases.
Julia is…
Generating deep learning models is highly experimental by nature and design. This experimental nature is also one of the biggest pain points of building deep learning models: it can be time-consuming as well as computationally expensive, as one fiddles with countless tweaks in a bid to find a model’s optimal parameters. However, this could (and should) all change very soon with the major release of Keras-Tuner 1.0 by the same team that gave the data science community “deep learning for humans”, Keras!
Hyperparameter tuning is a fancy term for the set of processes adopted…
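As a rough sketch of what a hyperparameter search with Keras Tuner can look like in practice (this is not code from the post: the dataset, layer sizes, and search ranges are placeholders, and the package has been imported as kerastuner around the 1.0 release and as keras_tuner in later versions):

```python
# Illustrative random search with Keras Tuner; values below are placeholders.
import keras_tuner as kt              # named `kerastuner` around the 1.0 release
from tensorflow import keras

def build_model(hp):
    """Build a small classifier whose width and learning rate are tunable."""
    model = keras.Sequential([
        keras.layers.Input(shape=(28, 28)),
        keras.layers.Flatten(),
        keras.layers.Dense(hp.Int("units", min_value=32, max_value=256, step=32),
                           activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(
            hp.Choice("learning_rate", values=[1e-2, 1e-3, 1e-4])),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

# Try a handful of random hyperparameter combinations and keep the best model.
tuner = kt.RandomSearch(
    build_model,
    objective="val_accuracy",
    max_trials=10,
    directory="tuning_logs",
    project_name="mnist_demo",
)

(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = x_train / 255.0
tuner.search(x_train, y_train, epochs=3, validation_split=0.2)
best_model = tuner.get_best_models(num_models=1)[0]
```

The appeal of this workflow is that the tuner, not the practitioner, does the fidgeting: each trial builds, trains, and scores a candidate model, and only the search space has to be declared up front.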
Machine learning models are meant to be deployed! For some reason, coverage of the deployment of machine learning models is very thin in both the literature and the blogging space. It seems the glamour of designing the shiniest, latest cutting-edge model with state-of-the-art results appeals more to the machine learning community than what happens after such models are crafted. While that is understandable, some effort also needs to be directed towards the integration of these models into production systems. After all, what good is a model if it is not being used after it has…
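The excerpt does not show how the post itself approaches deployment, but one minimal, common pattern is to serve a persisted scikit-learn model behind a small Flask endpoint. In the sketch below, the model file name ("model.joblib") and the feature payload are assumptions made purely for illustration:

```python
# Illustrative pattern for serving a trained model over HTTP.
# The model file and the expected feature vector are assumptions for this sketch.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")   # previously persisted, e.g. joblib.dump(clf, "model.joblib")

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()        # e.g. {"features": [5.1, 3.5, 1.4, 0.2]}
    features = [payload["features"]]    # scikit-learn expects a 2-D array of samples
    prediction = model.predict(features)
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```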
In their 2016 paper, “Model Selection Management Systems: The Next Frontier of Advanced Analytics”, Kumar et al. proposed a three-stage process for machine learning model selection, which they named the “Model Selection Triple”. The selection and implementation of the optimal machine learning algorithm is a tedious and iterative process due to the uniqueness of each problem as well as the enormous list of available algorithms. …
The long wait is over! As promised in an earlier post, I have resumed blogging after taking some time off to deal with some academic and/or professional duties. Luckily, this extensive break resulted in the development of a Python package, Coinsta, which will be detailed in this post!
I spent some months on a graduate dissertation which required the use of both historical and current data on cryptocurrencies. After browsing the Python Packaging Index (PyPI), I was frustrated by the lack of a Python package that catered for such needs. …
This will be the shortest post on this blog. It has been a while! I had to take care of some academic duties. I apologise for the long absence and the lack of information about the future of this blog.
The good news is that I now have time to blog again! I have also had time to review and reflect on the direction of future series of posts. In due time, I will share some of the topics that I have been working on during my absence.
Until then, happy coding!!
Created in the early 80s and named after its developer (pictured above), Bollinger Bands represent a key technical trading tool for financial traders. Bollinger Bands are plotted two standard deviations (a measure of volatility) above and below a moving average of price. They allow traders to monitor and take advantage of shifts in price volatility. Let’s examine the main components of this tool.
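Based on that definition (a moving average with bands two standard deviations above and below it), a minimal pandas sketch might look like the following; the 20-period window is only the conventional default, and the "close" column name is an assumption:

```python
# Bollinger Bands as described above: a moving average of price with bands
# two standard deviations above and below it.
import pandas as pd

def bollinger_bands(prices: pd.Series, window: int = 20, num_std: float = 2.0) -> pd.DataFrame:
    """Return the middle, upper, and lower Bollinger Bands for a price series."""
    middle = prices.rolling(window).mean()   # simple moving average
    std = prices.rolling(window).std()       # rolling measure of volatility
    return pd.DataFrame({
        "middle": middle,
        "upper": middle + num_std * std,
        "lower": middle - num_std * std,
    })

# Example usage with a DataFrame of daily prices (column name assumed):
# bands = bollinger_bands(df["close"])
# df.join(bands).plot(y=["close", "upper", "middle", "lower"])
```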
I have a love/hate relationship with numbers