Big Data Science

Finding wisdom in data

Albert Bifet · Oct 11, 2013
What is Big Data?

The term “Big Data” arose because we are creating a huge amount of data every day. The first book to mention “Big Data” is a data mining book by Weiss and Indurkhya that appeared in 1998. Usama Fayyad, in his invited talk at the KDD BigMine 2012 Workshop, presented striking numbers about internet usage, among them the following: Google receives more than 1 billion queries per day, Twitter more than 250 million tweets per day, Facebook more than 800 million updates per day, and YouTube more than 4 billion views per day. The data produced nowadays is estimated to be on the order of zettabytes, and it is growing by around 40% every year.

Mobile devices are about to become another large source of data, and big companies such as Google, Apple, Facebook, Yahoo and Twitter are starting to look carefully at this data to find useful patterns that improve the user experience. Alex ‘Sandy’ Pentland, in his Human Dynamics Laboratory at MIT, is doing research on finding patterns in mobile data about what users do, not what they say they do.

Big Data’s V’s

We need new algorithms and new tools to deal with all of this data. Doug Laney was the first to mention the three V’s of Big Data management:

  • Volume: there is more data than ever before, and its size keeps increasing, but not the share of it that our tools can process
  • Variety: there are many different types of data, such as text, sensor data, audio, video, graphs, and more
  • Velocity: data is arriving continuously as streams, and we are interested in obtaining useful information from it in real time

Nowadays, there are two more V’s:

  • Variability: the structure of the data, and how users want to interpret it, changes over time
  • Value: the business value that gives organizations a competitive advantage, due to the ability to make decisions based on answering questions that were previously considered beyond reach

Gartner summarized this in its 2012 definition of Big Data: high-volume, high-velocity and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making.

Big Data Applications

There are many applications of Big Data, for example the following:

  • Business: customer personalization, churn detection
  • Technology: reducing process time from hours to seconds
  • Health: mining the DNA of each person, to discover, monitor and improve the health of everyone
  • Smart cities: cities focused on sustainable economic development and high quality of life, with wise management of natural resources

These applications will allow people to enjoy better services and better customer experiences, and also to be healthier, as personal data will make it possible to prevent and detect illness much earlier than before. Big Data scientist is becoming one of the most challenging careers of our time.

Big Data Science Challenges

There are many important future challenges in Big Data management and analytics that arise from the nature of the data: large, diverse, and evolving. These are some of the challenges that researchers and practitioners will have to deal with in the years to come:

  • Analytics Architecture. It is not yet clear how an optimal architecture for an analytics system should be constructed to deal with historic data and with real-time data at the same time. An interesting proposal is the Lambda Architecture of Nathan Marz. It solves the problem of computing arbitrary functions on arbitrary data in real time by decomposing the problem into three layers: the batch layer, the serving layer, and the speed layer. It combines in the same system Hadoop for the batch layer and Storm for the speed layer. The properties of the system are: robust and fault tolerant, scalable, general, extensible, allows ad hoc queries, minimal maintenance, and debuggable (a minimal sketch of the three layers appears after this list).
  • Evaluation. It is important to achieve significant statistical results, and not to be fooled by randomness. As Efron explains in his book on large-scale inference, it is easy to go wrong with huge data sets and thousands of questions to answer at once (a multiple-testing sketch appears after this list). It will also be important to avoid the trap of focusing only on error or speed, as Kiri Wagstaff discusses in her paper “Machine Learning that Matters”.
  • Distributed mining. Many data mining techniques are not trivial to parallelize. Obtaining distributed versions of some methods will require substantial research, with both practical and theoretical analysis, to provide new methods (a parallel map-reduce example appears after this list).
  • Time evolving data. Data may be evolving over time, so it is important that Big Data mining techniques are able to adapt and, in some cases, to detect change first. The data stream mining field, for example, has very powerful techniques for this task (a toy drift-detection sketch appears after this list).
  • Compression. When dealing with Big Data, the space needed to store it is highly relevant. There are two main approaches: compression, where we do not lose anything, and sampling, where we keep only the data judged most representative (a reservoir-sampling sketch appears after this list). With compression we spend more time to use less space, so we can see it as a transformation from time to space. With sampling we lose information, but the gains in space may be of orders of magnitude. For example, Feldman et al. use coresets to reduce the complexity of Big Data problems; coresets are small sets that provably approximate the original data for a given problem, and using merge-reduce the small sets can then be used for solving hard machine learning problems in parallel.
  • Visualization. A main task of Big Data analysis is how to visualize the results. Because the data is so big, it is very difficult to design user-friendly visualizations. New techniques and frameworks to tell and show stories will be needed, such as the photographs, infographics and essays in the beautiful book “The Human Face of Big Data”.
  • Hidden Big Data. Large quantities of useful data are getting lost, since much new data is untagged, file-based and unstructured. The 2012 IDC study on Big Data explains that in 2012, 23% (643 exabytes) of the digital universe would have been useful for Big Data if tagged and analyzed. However, currently only 3% of the potentially useful data is tagged, and even less is analyzed.
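
To make the Lambda Architecture concrete, here is a minimal sketch of its three layers in Python. The names (batch_layer, SpeedLayer, query) and the click-counting task are hypothetical illustrations, not Marz’s implementation; a real deployment would use Hadoop for the batch layer and Storm for the speed layer.

```python
# Toy sketch of the Lambda Architecture's three layers.
# All names here are hypothetical, for illustration only.
from collections import defaultdict

# Batch layer: recompute a view over the full, immutable master dataset.
def batch_layer(master_dataset):
    view = defaultdict(int)
    for user, clicks in master_dataset:
        view[user] += clicks
    return view

# Speed layer: incrementally update a real-time view as events arrive.
class SpeedLayer:
    def __init__(self):
        self.view = defaultdict(int)
    def on_event(self, user, clicks):
        self.view[user] += clicks

# Serving layer: answer queries by merging the batch and real-time views.
def query(user, batch_view, speed_layer):
    return batch_view[user] + speed_layer.view[user]

master = [("alice", 3), ("bob", 5), ("alice", 2)]
bv = batch_layer(master)        # historical data
sl = SpeedLayer()
sl.on_event("alice", 1)         # recent event not yet in the batch view
print(query("alice", bv, sl))   # 6 = 5 (batch) + 1 (speed)
```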
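
On the evaluation point, a standard safeguard against being fooled by randomness when asking thousands of questions at once is a multiple-testing correction. The sketch below implements the Benjamini-Hochberg procedure with made-up p-values; it is a generic textbook method, not a procedure taken from Efron’s book.

```python
# Benjamini-Hochberg procedure: controls the false discovery rate
# when testing many hypotheses at once. The p-values are invented.
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)
    # Find the largest k with p_(k) <= (k/m) * alpha, then
    # reject the k hypotheses with the smallest p-values.
    thresholds = (np.arange(1, m + 1) / m) * alpha
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

p_values = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.74]
print(benjamini_hochberg(p_values))  # only the first two survive
```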
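
For distributed mining, the key property is that partial results computed on separate partitions can be merged. The hypothetical sketch below uses Python’s multiprocessing to mine four partitions in parallel, with simple frequency counting standing in for a real mining algorithm.

```python
# Map-reduce style parallel counting: each worker mines one partition,
# and the partial results are merged in a final reduce step.
from collections import Counter
from multiprocessing import Pool

def mine_partition(records):
    # Frequency counting stands in for any algorithm whose
    # partial results can be merged.
    return Counter(records)

if __name__ == "__main__":
    data = ["spam", "ham", "spam", "eggs", "ham", "spam"] * 1000
    partitions = [data[i::4] for i in range(4)]   # split across 4 workers
    with Pool(4) as pool:
        partials = pool.map(mine_partition, partitions)   # map step
    total = sum(partials, Counter())                      # reduce step
    print(total.most_common(3))
```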
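
For time evolving data, a minimal illustration of change detection is to compare a recent window of the stream against a reference window. The detector below is a toy with an arbitrary fixed threshold; real stream mining techniques such as ADWIN adapt the window size and come with statistical guarantees.

```python
# Toy concept-drift check: flag drift when the mean of a recent window
# moves away from the mean of a reference window. Illustrative only.
import random
from collections import deque

class SimpleDriftDetector:
    def __init__(self, window=100, threshold=0.15):
        self.reference = deque(maxlen=window)
        self.recent = deque(maxlen=window)
        self.threshold = threshold

    def add(self, x):
        if len(self.reference) < self.reference.maxlen:
            self.reference.append(x)   # still filling the reference
            return False
        self.recent.append(x)
        if len(self.recent) < self.recent.maxlen:
            return False
        ref_mean = sum(self.reference) / len(self.reference)
        rec_mean = sum(self.recent) / len(self.recent)
        return abs(rec_mean - ref_mean) > self.threshold

random.seed(1)
stream = [random.gauss(0.0, 0.1) for _ in range(200)] \
       + [random.gauss(0.5, 0.1) for _ in range(200)]  # mean shifts here
detector = SimpleDriftDetector()
for i, x in enumerate(stream):
    if detector.add(x):
        print("drift detected at item", i)
        break
```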
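
Finally, for the sampling side of compression, reservoir sampling is a classic way to keep a uniform random sample of fixed size from a stream of unknown length in constant memory. It is a simple stand-in for the idea; coresets are more sophisticated, adding weights and per-problem approximation guarantees.

```python
# Reservoir sampling (Algorithm R): keep a uniform random sample of
# k items from a stream, using O(k) memory and one pass.
import random

def reservoir_sample(stream, k, seed=42):
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)     # fill the reservoir first
        else:
            j = rng.randint(0, i)      # each item survives with prob k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

print(reservoir_sample(range(1_000_000), k=5))
```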
