SQL, R, and Python: Why Data Wrangling in ONLY Code is Inefficient
Everyone knows the oft-repeated statistic that data scientists spend 50 to 80% of their time cleaning and preparing data before they even start looking for insights in it. People have been talking about this since 2014 really, and yet it hasn’t changed.
Data Wrangling is Inherent to Big Data
Since 2014, of course, there have been LOTS of articles written about this, so we pretty much know why that is: data preparation is inherent to Big Data. The more data you bring in to train your model, the better your model gets, but also the dirtier that data is.
When you’re bringing in not just pre-formatted weblogs but data from documents, sensors, CRM tools, and social connectors, it all comes in different formats. It needs to be cleaned and unified before you can use it even for basic business insights. And building and maintaining perfectly normalized data pipelines that spit out just the right data in just the right format just isn’t realistic.
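To make that concrete, here is a minimal sketch (in pandas, with made-up source names and formats) of what "unifying" even a single field looks like when the same date arrives three different ways from three different sources:

```python
import pandas as pd

# Hypothetical example: the same "signup" date arrives in three different
# formats from three different sources (weblogs, a CRM export, a sensor feed).
weblogs = pd.DataFrame({"user": ["a"], "signup": ["2014-03-01T12:00:00"]})
crm = pd.DataFrame({"user": ["b"], "signup": ["03/01/2014"]})
sensors = pd.DataFrame({"user": ["c"], "signup": [1393675200]})  # unix epoch

# Each source needs its own parsing logic before the data can be combined.
weblogs["signup"] = pd.to_datetime(weblogs["signup"])
crm["signup"] = pd.to_datetime(crm["signup"], format="%m/%d/%Y")
sensors["signup"] = pd.to_datetime(sensors["signup"], unit="s")

# Only now can the three sources be stacked into one consistent table.
unified = pd.concat([weblogs, crm, sensors], ignore_index=True)
```

Multiply this by every column and every source, and it becomes clear why a hand-maintained pipeline of such one-off conversions is so fragile.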
Of course, there are many tools that try to make this easier, but many data scientists tend to come back to the tools they know best: coding in Python, SQL, and even R. Why? Because these are the most flexible, and can be manipulated easily and naturally by data scientists. Also because data scientists need to think about their future, and don’t want to become specialists in a tool or skill that will be restrictive when they look for their next job.
Data wrangling in code is costly
In any case, data scientists end up spending much of their time digging into the data: finding where the problems are, replacing incorrect values or formats, correcting anomalies, finding and checking keys to join on, and performing code-intensive operations such as geocoding, date parsing, time series manipulation, or multiple column splits to generate new features, all while going back and forth to test whether those new features actually improve algorithmic performance. And each time they start on a new dataset, they have to start over from scratch and recode everything.
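A quick sketch of a few of the operations listed above (incorrect-value replacement, date parsing, and column splits for feature generation), using a small made-up dataset in pandas; the column names and values are illustrative, not from any real project:

```python
import pandas as pd

# Hypothetical raw data with the usual problems: inconsistent values,
# dates stored as strings, and a composite field hiding useful features.
df = pd.DataFrame({
    "order_id": [1, 2, 3],
    "country": ["US", "usa", "U.S."],
    "order_date": ["2016-01-05", "2016-02-10", "2016-03-15"],
    "sku": ["SHOES-42-RED", "SHIRT-M-BLUE", "SHOES-43-BLUE"],
})

# Replace incorrect or inconsistent values.
df["country"] = df["country"].replace({"usa": "US", "U.S.": "US"})

# Date parsing, then derive a time-based feature.
df["order_date"] = pd.to_datetime(df["order_date"])
df["order_month"] = df["order_date"].dt.month

# Split a composite column into several new feature columns.
df[["product", "size", "color"]] = df["sku"].str.split("-", expand=True)
```

Each of these steps is trivial in isolation; the cost comes from rewriting variations of them for every new dataset, then re-running the whole chain every time a feature is tweaked.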
This approach proves even more complicated once more than one data scientist is involved. Trying to re-read someone else’s code and understand it well enough to find out where the schema got messed up, all while your whole pipeline is down, is no small feat.
How to pimp your data wrangling code
There isn’t a miracle solution that ends all data preparation hassle, of course; every job has its annoying tasks. And, like it or not, data munging is useful to data science. But we’ve found that there is a way to make it easier: visual preparation!
Ok, now just hear me out! I know it’s not a new idea, and plenty of tools out there offer visual interfaces for cleaning data that data scientists have already rejected because they can’t fiddle with what’s happening under the hood. That’s not what I’m referring to.
A visual interface can never be as flexible as a human writing code, that’s a fact. But what if you could go back and forth seamlessly, in one interface, between visual accelerators AND coding in your favorite languages?
So instead of rewriting your code for annoying little standard operations like date parsing, geocoding, text processing, or various folds and unfolds, you can just use a visual editor, see the effect each step has on your data, and go back to tweak your features at any time. And when you want to move on to more advanced cleaning and enriching, or if some operations are simply easier for you in code, you can switch to writing SQL, R, Python, anything in shell (and all the equivalents for distributed databases), all in the same interface.
That’s what we call visual interactive data preparation, and it comes with over 80 pre-coded processors designed by our own data scientists to make their work more efficient.
Want to see what that looks like? Try Dataiku DSS out now, there’s a great free version!