Chandu Kavar
6 min read · Sep 20, 2017

From Application Developer to Big Data Engineer

I started my career as an application developer, mostly working with OOP languages, databases, REST APIs, a little client-side programming, etc. It was around that time that I first heard about “Big Data” and “Hadoop”, but I didn’t know much in detail. My journey into Big Data started when the project I was working on began migrating to Hadoop to support the large volume of data for our client. In my first month, I did a few things that most people do when they start learning Big Data —

  • read blogs and books about Hadoop and MapReduce
  • manually set up a small standalone cluster with the help of Google (mostly copy/pasting all the commands and config)
  • copy/pasted the word count problem from the internet, fixed all the errors, ran it successfully on the cluster, and was happy
  • tried to understand the syntax and working of MapReduce
  • tried to find other problems similar to word count, but there are not many. I have tried to explain “How to find the top most-viewed products from large datasets”, “Transpose of a matrix for large data” and “Cumulative sum of one billion numbers”.
  • played with different configs in core-site.xml and a few other XMLs. If the solution threw an exception after a config change, I searched the internet and fixed it.

The above things helped me understand Big Data to some extent, but I didn’t get the whole picture. Here are a few reasons —

  • I continued using the earlier programming approaches on big data problems, forgetting that they mostly work for silo computing (a single machine, data in MBs).
  • It was hard to understand how Hadoop could solve the same problem that we used to solve in an ordinary language once the data size increased. (I wondered how Hadoop could solve most problems just by taking key/value pairs as input.)
  • I was not able to come up with a solution to any problem when the data was partitioned and stored on different machines.
  • I ignored the downstream impact of each API and how exactly it would affect performance and memory usage.
  • I was not able to decide which tool or framework would solve my problem efficiently, because there are many tools and frameworks available in the Hadoop ecosystem, and the number is increasing day by day.

While working on big data projects, I observed a couple of things that we need to focus on —

  • Mindset change

While solving a problem for silo computing, we mostly focus on performant APIs, immutability, heap memory and a few other things. But for distributed computing, you also have to consider in-memory vs. disk I/O, CPU usage and network bottlenecks. At every stage, from analysis to production deployment, we need to test and verify all of the above parameters for each and every small change.

  • Problem solving approach

Let’s say we want to average 1,000 numbers; we can implement it in any language and run it on a single machine. But what if there are 100 billion numbers? We cannot efficiently solve this using the earlier approach. There are mainly two parameters we need to consider — the volume of data and parallel execution. We need to split the data, store it on multiple machines, and run each split in parallel to get the answer in minimum time.

The average problem using MapReduce — find the average of the numbers 1 to 10, partitioned across three machines.

As you may know, MapReduce is a two-phase process: a map phase and a reduce phase. In the map phase, the mapper takes any Hadoop directory as input, one line at a time as a key-value pair. The goal of the mapper is to take a key-value pair as input, process it and emit another key-value pair. Between the map and reduce phases, all the values emitted by the mappers are grouped together by key. The reduce phase takes one key with all its associated values at a time, processes them and emits another key-value pair.
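To make the two phases concrete, here is a minimal, self-contained Python sketch of the classic word count flow. Plain functions stand in for Hadoop’s mapper and reducer, and the shuffle between the phases is simulated by sorting and grouping on the key — no Hadoop API is involved:

```python
from itertools import groupby
from operator import itemgetter

def mapper(line):
    # Map phase: emit a (word, 1) pair for every word in one input line.
    for word in line.split():
        yield (word, 1)

def reducer(word, counts):
    # Reduce phase: sum all the counts grouped under one key.
    yield (word, sum(counts))

lines = ["big data is big", "data is everywhere"]

# Run the mapper on every line.
mapped = [pair for line in lines for pair in mapper(line)]

# Shuffle/sort: group all emitted values by key, as the framework
# does automatically between the map and reduce phases.
mapped.sort(key=itemgetter(0))
grouped = {k: [v for _, v in g] for k, g in groupby(mapped, key=itemgetter(0))}

# Run the reducer once per key, with all of that key's values.
result = dict(pair for k, vs in grouped.items() for pair in reducer(k, vs))
print(result)  # {'big': 2, 'data': 2, 'everywhere': 1, 'is': 2}
```

In a real cluster the mappers run on different machines and the shuffle moves data over the network, but the shape of the computation is exactly this.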

Data flow diagram to calculate average of 1 to 10 numbers

In the above diagram, all mappers run in parallel and generate the key as the number of elements and the value as the sum of those elements. Then, all the grouped values are consumed by the reduce task, which emits zero as the key and a tuple of the number of elements and their total as the value. The emitted pairs pass to one more mapper, which gives the average of the numbers one to ten.
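The same flow can be sketched in plain Python — the three lists below stand in for the three machines, and the functions are stand-ins for the map and reduce tasks rather than real Hadoop code:

```python
# Numbers 1 to 10, partitioned across three "machines".
partitions = [[1, 2, 3], [4, 5, 6], [7, 8, 9, 10]]

def mapper(split):
    # Each mapper runs on its own split and emits
    # (number of elements, sum of the elements).
    return (len(split), sum(split))

def reducer(pairs):
    # The reduce task combines the per-split partial results
    # into one (total count, total sum) pair.
    total_count = sum(c for c, _ in pairs)
    total_sum = sum(s for _, s in pairs)
    return (total_count, total_sum)

mapped = [mapper(p) for p in partitions]   # [(3, 6), (3, 15), (4, 34)]
count, total = reducer(mapped)             # (10, 55)

# A final step divides the total by the count to get the average.
average = total / count
print(average)  # 5.5
```

Note that each mapper only needs its own split: nothing has to see all ten numbers at once, which is what makes the approach scale.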

Some points to remember:

  1. Unlearn the app-dev programming approach.
  2. While implementing a solution to any problem for a large volume of data, you have to make it run in parallel.
  3. Don’t think about logic that will directly give you the final result. Break the problem into parts and finalize the intermediate outputs (and make sure all the splits run in parallel). In the average problem, in step 1 we sum the numbers of all the splits in parallel, and the results go for further processing in steps 2 and 3.
  4. Minimize network I/O.

Try thinking about how one can solve problems like cumulative sum, the transpose of a matrix, or the top N most frequently used words in a distributed fashion.
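As one illustration, here is a hedged Python sketch of the top-N words problem solved the distributed way: each partition counts its own words locally, and only the small per-partition summaries — not the raw data — are merged at the end. The function names and sample data are mine, for illustration only:

```python
from collections import Counter
import heapq

def local_count(partition):
    # Step 1 (runs on each machine independently): count the words
    # that appear within this one split only.
    return Counter(word for line in partition for word in line.split())

def top_n(partials, n):
    # Step 2: merge the small per-partition counters, then keep
    # only the n words with the largest totals.
    merged = Counter()
    for c in partials:
        merged.update(c)
    return heapq.nlargest(n, merged.items(), key=lambda kv: kv[1])

splits = [
    ["spark spark spark hadoop"],
    ["hadoop spark data"],
    ["data hadoop"],
]
partials = [local_count(s) for s in splits]  # done in parallel in practice
print(top_n(partials, 2))  # [('spark', 4), ('hadoop', 3)]
```

Only the partial counters cross the “network” here, which is exactly the minimum-network-I/O point from the list above.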

  • Do the impact analysis of the API that you are going to use

Frameworks like Spark provide higher-level abstractions such as the RDD (Resilient Distributed Dataset). When you do any operation on an RDD, behind the scenes it executes on each of the splits available on different machines. That’s why it is important to understand the internal working of each API you are going to use for any transformation. For example, word count can be computed using either reduceByKey or groupByKey. Both give the correct answer, but reduceByKey performs much better on a large volume of data.
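The difference comes down to map-side combining. Here is a plain-Python simulation (not the Spark API) of why reduceByKey shuffles fewer records: it pre-aggregates within each partition before anything crosses the network, while a groupByKey-style approach ships every raw record to the reducers first:

```python
from collections import Counter

# Two partitions of raw (word) records on two "machines".
partitions = [
    ["a", "b", "a", "a"],
    ["b", "a", "b", "b"],
]

# groupByKey-style: every raw record is shuffled across the network
# before any summing happens.
group_by_key_shuffled = sum(len(p) for p in partitions)

# reduceByKey-style: each partition pre-aggregates locally (map-side
# combine), so only one record per distinct key per partition is shuffled.
partials = [Counter(p) for p in partitions]
reduce_by_key_shuffled = sum(len(c) for c in partials)

# Both strategies produce the same final counts after the shuffle.
merged = Counter()
for c in partials:
    merged.update(c)

print(dict(merged))  # {'a': 4, 'b': 4}
print(group_by_key_shuffled, reduce_by_key_shuffled)  # 8 4
```

With two keys and millions of records per partition, that 8-vs-4 gap becomes millions of shuffled records versus a handful, which is why reduceByKey wins at scale.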

  • Try new tools or frameworks on an environment that is close to a copy of the production

When I started exploring tools in the Big Data space, I was confused because you will find many tools or frameworks that fulfill a single requirement. Each of them has its own set of pros and cons. Here is a snapshot of what the big data landscape looks like —

Some frameworks give more flexibility but don’t perform well on very large volumes of data. Some tools are good for iterative processing and some are not. Some frameworks are good at writing data and some are good at querying it. Understand your use cases, the volume of data you want to process with the framework, and the performance you need. Before finalizing any tool or framework, I would suggest spiking it out in a test environment and measuring its performance in a production-like environment to get a better understanding of the framework on a large data set. If everything goes right, use it in production!

Thanks for reading. If you found this blog helpful, please recommend it and share it. Follow me for more articles on Big Data.

Thanks to Surabhi Seth, Shakti Garg, Himanshu Khantwal and Noah Pereira who reviewed this blog. I truly appreciate it.