First, making humans a multi-planetary species. On a long enough time scale, extinction on Earth is certain. We have to go to other planets, and the only viable option for now seems to be Mars. A self-sustaining colony has to be our goal.
Second, defeating death. Humans don’t have to die; no physical law makes this unavoidable. In the short term, that means curing cancer and heart disease. In the long term, it means developing a way to stop or reverse aging.
Third, solving intelligence. The first two problems are not solvable today only because we lack the required knowledge. Humans are the only entity that is…
Writing programs in a high-level programming language is fun and easy, but do you really know how a computer works? If I sent you back in time 300 years, could you come up with the main principles? If not, I want to give you some resources and concepts so you have a place to start.
The problem I want to solve is the following: there is a camera in a vehicle, and I want to know how fast the vehicle is going. You obviously cannot look at the speedometer, only at the video footage itself. Sounds like something a little deep learning magic should help us with.
I have 2 different videos: one for training and one for testing. The training video has 20,399 frames and the testing video has 10,797 frames. You can download the videos from here. Here are some examples:
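As a minimal sketch of the underlying idea (my own toy illustration, not the model from the post): the amount of pixel change between consecutive frames is a crude proxy for apparent motion, which a learned model would then map to speed. The synthetic gradient "scene" and the shift amounts below are made up; the real pipeline would decode the actual video frames.

```python
import numpy as np

def motion_score(prev_frame, frame):
    """Mean absolute pixel difference between consecutive frames.

    A crude proxy for apparent motion; a learned model would map
    such features (or the raw frames themselves) to actual speed.
    """
    return float(np.abs(frame.astype(np.float64)
                        - prev_frame.astype(np.float64)).mean())

# Synthetic stand-in for two consecutive frames: a smooth gradient
# image, shifted sideways to fake camera/vehicle motion.
scene = np.tile(np.arange(160.0), (120, 1))

slow = motion_score(scene, np.roll(scene, 1, axis=1))   # small shift per frame
fast = motion_score(scene, np.roll(scene, 10, axis=1))  # large shift per frame
print(slow < fast)  # prints True: more motion between frames, larger score
```

A real solution would of course feed the frames (or optical flow) into a network rather than hand-crafting a single scalar, but the signal it has to pick up on is the same.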
In my last blog post I argued that running deep learning models is way too hard. Well, complaining about something doesn’t change anything, so I decided to build easy-model-zoo (emz). Imagine papers-with-code, but you can actually run the models.
The goal of emz is to abstract away all the unnecessary stuff that you don’t need if you just want to run a model. You are not interested in whether the model is implemented in PyTorch, TensorFlow, or Caffe. Who cares? You just want to try out the model. You don’t even need to download the model yourself. …
You discover a new paper that just came out. You look at it, and it’s brilliant. Some new kind of architecture made it possible to improve the SOTA on the specific dataset you are interested in by 4%. That is amazing, you think, and you start scrolling down in search of the magical GitHub link so you can try it out yourself. You find it, click it, and hopefully there is a pre-trained model as well.
You don’t want to train the model from scratch; that would take an eternity on your budget GPU from 2013. Even…
Every solution exists in the universe of all possible programs; we just have to search for it…
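To make that searchable-universe idea concrete, here is a toy sketch (entirely my own illustration, not from the post): brute-force enumeration over a tiny expression language until one program matches all of the given input/output examples.

```python
# Toy program search: enumerate small arithmetic expressions over x
# until one reproduces every input/output example.
ATOMS = ["x", "1", "2", "3"]
OPS = ["+", "-", "*"]

def candidates(depth):
    """All expressions in the toy language up to a given nesting depth."""
    if depth == 0:
        return ATOMS
    smaller = candidates(depth - 1)
    exprs = list(smaller)
    for a in smaller:
        for op in OPS:
            for b in smaller:
                exprs.append(f"({a} {op} {b})")
    return exprs

examples = [(0, 1), (1, 3), (2, 5)]  # the hidden target is 2*x + 1
found = next(e for e in candidates(2)
             if all(eval(e, {"x": x}) == y for x, y in examples))
print(found)  # some expression equivalent to 2*x + 1 on the examples
```

Exhaustive search like this blows up exponentially with depth, which is exactly why the interesting work is in searching the space cleverly instead of enumerating it.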
When you look at the world today, you can easily see the difference between objects created by humans and those created by nature. But what exactly is this difference, and how would we define it? I think nature works by applying very simple rules, in a fractal sort of way, over and over. Take a tree, for example: you could say that every subtree has the same structure as the tree itself. …
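A one-function sketch of that self-similarity (my own toy illustration): a tree where every subtree is produced by the exact same rule as the whole tree, just one level shallower.

```python
def tree(depth, branches=2):
    """Build a self-similar tree: each subtree follows the same
    rule as the tree itself, only one level shallower."""
    if depth == 0:
        return "leaf"
    return [tree(depth - 1, branches) for _ in range(branches)]

t = tree(3)
# Every immediate subtree is indistinguishable from a whole tree
# of the next-smaller depth:
print(t[0] == tree(2))  # prints True
```

That "same rule at every scale" property is what makes L-systems and other fractal generators such good models of plants.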
A few years ago I started programming in Python and I was hooked. The syntax was beautiful, installing it was easy, and the third-party packages were amazing. The only problem? Python is not easily parallelizable, and it is slow. Really slow. If you compare it to C, it is often 1000x slower. Here is a graph for you:
You can’t train your Multiple-Billion-Parameter Megatron model when you are this slow.
This has consequences. Every Numeric Computing (NC) library you use in Python is a very thin wrapper around C, Fortran, or C++. For example:
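A minimal, self-contained illustration of why those wrappers matter, with NumPy standing in for any such library (exact timings will vary from machine to machine):

```python
import time
import numpy as np

n = 1_000_000
data = list(range(n))
arr = np.arange(n)

t0 = time.perf_counter()
py_sum = sum(data)        # the loop runs in the Python interpreter
t1 = time.perf_counter()
np_sum = int(arr.sum())   # the loop runs in NumPy's compiled C code
t2 = time.perf_counter()

print(py_sum == np_sum)   # prints True: identical result
print(f"pure Python: {t1 - t0:.4f}s  NumPy: {t2 - t1:.4f}s")
```

Same answer, wildly different cost: the Python version pays interpreter overhead on every single element, while the NumPy call crosses into compiled code once and stays there for the whole loop.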
The company Tom works at wants to get rich, so his boss asks him to use his newly acquired knowledge of Machine Learning (ML) — a 3-day workshop — to predict the stock price of Apple over the next 2 years. (This is not possible, don’t try it! Give me a call if you succeed, though…)
He starts his ambitious project and does the following:
I once talked to a friend of mine who, like me, was studying computer science. We always had different views on things. I would always make sure something works before looking at its theoretical foundations. My friend was the exact opposite: if there was no theory behind something, he would not look into it (which makes no sense, as I will explain later).
I had just started with Machine Learning, and we got to talking about it. I was using a very simple Neural Network to do simple digit recognition on the MNIST dataset, something that probably most people…
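For readers who have never seen one, here is a minimal sketch of such a "very simple Neural Network" — a numpy-only toy trained on XOR rather than MNIST itself (which would need a download); the layer sizes, learning rate, and iteration count are arbitrary choices of mine, not from the post.

```python
import numpy as np

# One hidden layer, sigmoid activations, squared-error loss,
# trained by plain gradient descent on the XOR problem.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(((out - y) ** 2).mean()))
    # backward pass: gradients of the squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Swap the 4 XOR rows for 60,000 flattened 28x28 images and a 10-way output, and you have the MNIST setup; the mechanics are identical.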
There is no doubt that Machine Learning often feels magical. As a Machine Learning Engineer myself, I am still fascinated when my model solves some very hard, high-dimensional problem that would have been unsolvable otherwise. I am convinced that data-driven solutions will solve the most challenging problems of the future, like self-driving cars, and that Software 2.0 will play a very important role.
The performance of those algorithms depends heavily on your data, though. …