When it comes to classic ML, feature engineering is one of, if not the, most important factors in improving your scores and speeding up your model, often before you even bother to tune it or get fancy with the model itself.
There are not many resources or books out there that cover feature engineering in depth, so I wanted to compile a list of code snippets covering most of the techniques I found online and have used over time, techniques that were critical to most of the projects I have worked on. They mostly apply to decision tree and regression-type models (not deep learning).
My goal here is to give the coding examples rather than cover the ins and outs of how each technique works or how it impacts various model metrics; I am assuming you have already heard of most of these techniques and that you will experiment to discover what works best for each project you're working on. …
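To give a flavor of the kind of snippet this compiles, here is a minimal sketch of two common categorical encodings for tree and regression models. The dataset and column names (`city`, `price`) are invented for illustration; they are not from any particular project.

```python
import pandas as pd

# Toy data: "city" is a categorical column (hypothetical example).
df = pd.DataFrame({
    "city": ["NY", "SF", "NY", "LA", "SF", "NY"],
    "price": [10, 20, 12, 8, 22, 11],
})

# Frequency encoding: replace each category with its relative
# frequency. Tree models can split on this numeric column directly.
freq = df["city"].value_counts(normalize=True)
df["city_freq"] = df["city"].map(freq)

# One-hot encoding: one binary indicator column per category,
# the usual choice for regression-type models.
df = pd.get_dummies(df, columns=["city"], prefix="city")
```

Frequency encoding keeps a single column regardless of cardinality, while one-hot encoding grows one column per category, which is why the former is often preferred for high-cardinality features in tree models.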
Over the last few years, designers have started trapping themselves inside boxes. We’ve gotten into a bad habit of comparing every new idea we come up with to something that has come before. This problem is most readily described by the sentence we’ve all heard now dozens, perhaps hundreds, of times:
“It’s like Uber for… ”
That sentence has become so familiar that there is even a website, itslikeuberfor.com. If you visit the site, you’ll find that you can click anywhere on the screen, and a random word generator will provide you with your next great idea. …
More than 1.6 billion people visited IKEA’s website in 2015. That’s a fifth of the entire world’s population and more than half of all people with internet access.
But despite that incredibly impressive reach, IKEA had a conversion problem. The store’s online and mobile platforms did a poor job of converting younger consumers, and the reason wasn’t a lack of brand appeal or pricing — it was a clunky, out-of-date user experience.