What is Data Engineering?


While data engineering is not a new field, it seems to have stepped out from the background recently and started to take center stage. In this overview, you’ll learn what data engineers do, what the difference is between data engineering and data science, and also find out about some of the tools used in data engineering.

What data engineers do

Data engineering is part of the big data ecosystem and is closely linked to data science. Data engineers work in the background, and while they do not get the same level of attention as data scientists, they are critical to the process of data science. The roles and responsibilities of a data engineer vary depending on an organization’s level of data maturity and staffing levels. However, there are some tasks, such as data extraction, loading, and transforming, that are foundational to the role of a data engineer.

At the lowest level, data engineering involves the movement of data from one system or format to another system or format. Using more common terms, data engineers query data from a source (extract), perform some modifications to the data (transform), and then put that data in a location where users can access it and know that it is production quality (load). The terms extract, transform, and load are often abbreviated to ETL. This definition of data engineering is broad and simplistic, however. With the help of an example, let’s dig deeper into what data engineers do.

An online retailer has a website where you can purchase widgets in a variety of colors. The website is backed by a relational database. Every transaction is stored in the database. How many blue widgets did the retailer sell in the last quarter?

To answer this question, you could run a SQL query on the database. This doesn’t rise to the level of needing a data engineer. But as the site grows, running queries on the production database is no longer practical. Furthermore, there may be more than one database that records transactions. There may be databases at different geographical locations — for example, the retailers in North America may have a different database than the retailers in Asia, Africa, and Europe.
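
To make the simple case concrete, here is a minimal sketch of that query run from Python, using sqlite3 as a stand-in for the retailer's production database; the table and column names (transactions, color, sold_at) are hypothetical.

```python
# A minimal sketch of the "just run a SQL query" case. sqlite3 stands in for
# the production relational database; the schema is assumed for illustration.
import sqlite3

conn = sqlite3.connect("retailer.db")  # hypothetical database file
query = """
    SELECT COUNT(*)
    FROM transactions
    WHERE color = 'blue'
      AND sold_at >= date('now', 'start of month', '-3 months')
"""
blue_widgets_sold = conn.execute(query).fetchone()[0]
print(f"Blue widgets sold in roughly the last quarter: {blue_widgets_sold}")
conn.close()
```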

Now you have entered the realm of data engineering. To answer the question about sales of blue widgets, a data engineer would create connections to all of the transactional databases for each region, extract the data, and load it into a data warehouse. From there, you could now count the number of all the blue widgets sold.

Rather than finding the number of blue widgets sold, companies would prefer to find answers to the following questions:

  • Which locations sell the most widgets?
  • What are the peak times for selling widgets?
  • How many users put widgets in their carts and remove them later?
  • What combinations of widgets are sold together?

Answering these questions requires more than just extracting the data and loading it into a single system. A transformation is required between the extract and the load. There is also the difference in time zones across regions: the contiguous United States alone spans four time zones. Because of this, you would need to transform time fields to a standard. You will also need a way to distinguish sales in each region, which could be accomplished by adding a location field to the data. Should this field be spatial (coordinates or well-known text), or will it just be text that can be transformed later in a data engineering pipeline?

Here, the data engineer would need to extract the data from each database, then transform the data by adding an additional field for the location. To standardize times across time zones, the data engineer would also need to be familiar with data standards. For time, the International Organization for Standardization (ISO) has one: ISO 8601.
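
As a small illustration, the standard library's datetime and zoneinfo modules (Python 3.9+) can convert a local timestamp into an ISO 8601 string in UTC; the time zone below is an arbitrary example, not something specified above.

```python
# A small sketch of standardizing a local timestamp to ISO 8601 in UTC.
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

local_sale_time = datetime(2021, 3, 22, 14, 30, tzinfo=ZoneInfo("America/Denver"))
utc_sale_time = local_sale_time.astimezone(ZoneInfo("UTC"))
print(utc_sale_time.isoformat())  # 2021-03-22T20:30:00+00:00
```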

So to answer the questions in the preceding list, we need to:

  • Extract the data from each database
  • Add a field to tag the location for each transaction in the data
  • Transform the date from local time to ISO 8601
  • Load the data into the data warehouse (see the sketch below)
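
A minimal, hypothetical sketch of these four steps might look like the following, using SQLite files to stand in for the regional databases and the warehouse. The database paths, table layout, and region-to-time-zone mapping are all assumptions made for illustration.

```python
# A hypothetical extract-transform-load sketch for the four steps above.
import sqlite3
from datetime import datetime
from zoneinfo import ZoneInfo

REGION_DBS = {"north_america": "na_sales.db", "europe": "eu_sales.db"}        # assumed paths
REGION_TZ = {"north_america": "America/New_York", "europe": "Europe/Berlin"}  # assumed zones

def extract(db_path):
    # Pull every transaction from one regional database.
    with sqlite3.connect(db_path) as conn:
        return conn.execute("SELECT widget_id, color, sold_at FROM transactions").fetchall()

def transform(rows, region):
    # Tag each row with its region and convert the local timestamp to ISO 8601 in UTC.
    tz = ZoneInfo(REGION_TZ[region])
    transformed = []
    for widget_id, color, sold_at in rows:
        local = datetime.fromisoformat(sold_at).replace(tzinfo=tz)  # assumes naive ISO strings
        transformed.append((widget_id, color, region, local.astimezone(ZoneInfo("UTC")).isoformat()))
    return transformed

def load(rows, warehouse_path="warehouse.db"):
    # Append the cleaned rows to the warehouse table.
    with sqlite3.connect(warehouse_path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS sales (widget_id, color, region, sold_at_utc)")
        conn.executemany("INSERT INTO sales VALUES (?, ?, ?, ?)", rows)

for region, db_path in REGION_DBS.items():
    load(transform(extract(db_path), region))
```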

The combination of extracting, transforming, and loading data is accomplished by building a data pipeline. Data comes into the pipeline raw, or dirty, in the sense that it may contain missing values or typos, and it is cleaned as it flows through the pipe. After that, it comes out the other side into a data warehouse, where it can be queried. The following diagram shows the pipeline required to accomplish the task:

Figure 1 — A pipeline that adds a location and modifies the date

Knowing a little more about what data engineering is, and what data engineers do, you should start to get a sense of the responsibilities and skills that data engineers need to acquire. Let’s elaborate on those skills.

Skills and knowledge required to be a data engineer

In the preceding example, it should be clear that data engineers need to be familiar with many different technologies — and we haven’t even mentioned the business processes or needs.

At the start of a data pipeline, data engineers need to know how to extract data from files in different formats and from different types of databases. This means data engineers need to know several languages, such as SQL and Python, to perform many different tasks. During the transformation phase of the data pipeline, data engineers need to be familiar with data modeling and data structures. They will also need to understand the business and what knowledge and insight it hopes to extract from the data, because this will impact the design of the data models.
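
To give a feel for the extraction side of those skills, here is a hedged sketch that reads the same kind of sales data from a CSV file and from a relational database using pandas; the file, database, and table names are hypothetical, and the two sources are assumed to share a schema.

```python
# Extracting from a file-based source and a database source with pandas.
import sqlite3
import pandas as pd

csv_sales = pd.read_csv("legacy_sales.csv")               # hypothetical CSV export
with sqlite3.connect("retailer.db") as conn:              # hypothetical database
    db_sales = pd.read_sql("SELECT * FROM transactions", conn)

# Combine both sources into a single DataFrame for downstream transformation.
all_sales = pd.concat([csv_sales, db_sales], ignore_index=True)
```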

The loading of data into the data warehouse means there needs to be a data warehouse with a schema to hold the data. This is also usually the responsibility of the data engineer. Data engineers will need to know the basics of data warehouse design, as well as the types of databases used in their construction.

Lastly, the entire infrastructure that the data pipeline runs on could also be the responsibility of the data engineer. They need to know how to manage Linux servers, as well as how to install and configure software such as Apache Airflow or NiFi. As organizations move to the cloud, the data engineer now needs to be familiar with spinning up infrastructure on the cloud platform used by the organization, be it Amazon, Google Cloud Platform, or Azure.

Having walked through an example of what data engineers do, we can now develop a broader definition of data engineering.

Information

Data engineering is the development, operation, and maintenance of data infrastructure, either on-premises or in the cloud (or hybrid or multi-cloud), comprising databases and pipelines to extract, transform, and load data.

Data engineering versus data science

Data engineering is what makes data science possible. Depending on the maturity of an organization, data scientists may be expected to clean and move the data required for analysis themselves, but this is not the best use of their time. Data scientists and data engineers use similar tools (Python, for instance), but they specialize in different areas. Data engineers need to understand data formats, models, and structures so they can transport data efficiently, whereas data scientists use them to build statistical models and perform mathematical computations.

Data scientists will connect to the data warehouses built by data engineers. From there, they can extract the data required for machine learning models and analysis. Data scientists may have their models incorporated into a data engineering pipeline. A close relationship should exist between data engineers and data scientists. Understanding what data scientists need in the data will only serve to help the data engineers deliver a better product.

In the next section, you will learn more about the most common tools used by data engineers.

Data engineering tools

To build data pipelines, data engineers need to choose the right tools for the job. Data engineering is part of the overall big data ecosystem and has to take account of the three V’s of big data:

  • Volume: The volume of data has grown substantially. Moving a thousand records from a database requires different tools and techniques than moving millions of rows or handling millions of transactions a minute.
  • Variety: Data engineers need tools that handle a variety of data formats in different locations (databases, APIs, files).
  • Velocity: The velocity of data is always increasing. Tracking the activity of millions of users on a social network, or the purchases of users all over the world, often requires data engineers to operate in near real time.

Programming languages

The lingua franca of data engineering is SQL (Structured Query Language). Whether you use low-code tools or a specific programming language, there is almost no way to get around knowing SQL. A strong foundation in SQL allows the data engineer to optimize queries for speed and can assist in data transformations. SQL is so prevalent in data engineering that data lakes and non-SQL databases have tools to allow the data engineer to query them in SQL.

A large number of open-source data engineering tools, many of them Apache projects, are written in Java and Scala. Java is a popular, mainstream, object-oriented programming language, though it appears to be slowly giving ground to other languages that run on the Java Virtual Machine (JVM), such as Scala, Clojure, and Groovy. Apache NiFi, for example, allows you to develop custom processors in Java, Clojure, Groovy, and Jython. While Java is an object-oriented language, there has been a movement toward functional programming languages, of which Clojure and Scala are members.

Another language used extensively in data engineering is Python, and it is the focus here. Python is well-documented, has a large user base, and is cross-platform; it has become the default language for data science and data engineering. Python also offers an extensive collection of standard and third-party libraries, and its data science ecosystem is unmatched in other languages. Libraries such as pandas, Matplotlib, NumPy, SciPy, scikit-learn, TensorFlow, PyTorch, and NLTK make up an extremely powerful data engineering and data science environment.
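
As a small, hypothetical taste of that ecosystem, the following sketch uses pandas and NumPy to fill in missing values and aggregate sales by color; the data is made up for illustration.

```python
# Filling missing prices and aggregating sales by color with pandas and NumPy.
import numpy as np
import pandas as pd

sales = pd.DataFrame({
    "color": ["blue", "red", "blue", "green"],
    "price": [9.99, np.nan, 9.99, 12.49],
})
sales["price"] = sales["price"].fillna(sales["price"].median())  # simple cleaning rule
summary = sales.groupby("color")["price"].agg(["count", "sum"])
print(summary)
```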

Databases

In most production systems, data will be stored in relational databases. Most proprietary solutions will use either Oracle or Microsoft SQL Server, while open-source solutions tend to use MySQL or PostgreSQL. These databases store data in rows and are well-suited to recording transactions. There are also relationships between tables, utilizing primary keys to join data from one table to another — thus making them relational. The following table diagram shows a simple data model and the relationships between the tables:

Figure 2 — Relational tables joined on Region = RegionID
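
To make the relationship concrete, here is a hedged sketch of such a join using an in-memory SQLite database; the table and column names simply echo the figure and are not taken from any real schema.

```python
# Joining a widgets table to a regions lookup table on its RegionID primary key.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE regions (RegionID INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE widgets (WidgetID INTEGER PRIMARY KEY, Color TEXT, Region INTEGER);
    INSERT INTO regions VALUES (1, 'North America'), (2, 'Europe');
    INSERT INTO widgets VALUES (100, 'blue', 1), (101, 'red', 2);
""")
rows = conn.execute("""
    SELECT w.WidgetID, w.Color, r.Name
    FROM widgets w
    JOIN regions r ON w.Region = r.RegionID
""").fetchall()
print(rows)  # [(100, 'blue', 'North America'), (101, 'red', 'Europe')]
conn.close()
```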

Databases commonly used in data warehousing include Amazon Redshift, Google BigQuery, and Apache Cassandra, as well as NoSQL databases such as Elasticsearch. Amazon Redshift, Google BigQuery, and Cassandra deviate from the traditional rows of relational databases and store data in a columnar format, as shown:

Figure 3 — Rows stored in a columnar format

Columnar databases are better suited to fast analytical queries, which makes them well-suited for data warehouses. All three of these columnar databases can be queried using SQL, although Cassandra uses the Cassandra Query Language (CQL), which is similar.

Contrasting with columnar databases are document, or NoSQL, databases, such as Elasticsearch. Elasticsearch is actually a search engine based on Apache Lucene. It is similar to Apache Solr but is more user-friendly. Elasticsearch is open-source, but it does have proprietary components — most notably, the X-Pack plugins for machine learning, graphs, security, and alerting/monitoring. Elasticsearch uses the Elastic Query DSL (Domain-Specific Language). It is not SQL, but rather a JSON query language. Elasticsearch stores data as documents, and while it has parent-child documents, it is a non-relational database (like the columnar databases).
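
As a hedged illustration of that JSON-style query language, the following sketch builds a Query DSL body as a Python dictionary and sends it with the official elasticsearch client. The index and field names are hypothetical, and the exact search call varies between client versions (newer clients prefer keyword arguments over body=).

```python
# Querying a hypothetical widget-sales index with the Elasticsearch Query DSL.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster
query = {
    "query": {
        "bool": {
            "must": [{"match": {"color": "blue"}}],
            "filter": [{"range": {"sold_at": {"gte": "now-90d"}}}],
        }
    }
}
response = es.search(index="widget-sales", body=query)
print(response["hits"]["total"])
```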

Once a data engineer extracts data from a database, they will need to transform or process it. With big data, it helps to use a data processing engine.

Data processing engines

Data processing engines allow data engineers to transform data whether it is in batches or streams. These engines allow the parallel execution of transformation tasks. The most popular engine is Apache Spark. Apache Spark allows data engineers to write transformations in Python, Java, and Scala.

Apache Spark exposes DataFrames through its Python API, making it an ideal tool for Python programmers. Spark also has Resilient Distributed Datasets (RDDs): immutable, distributed collections of objects that you create mainly by loading an external data source. RDDs allow fast and distributed processing, with the work split into tasks that run on different nodes within the cluster. Unlike DataFrames, RDDs do not try to infer a schema for your data.
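
The following is a minimal PySpark sketch, assuming a local Spark installation and a hypothetical sales.csv file, showing a DataFrame aggregation alongside a small RDD to highlight the difference. The DataFrame operations are lazy; nothing runs on the cluster until show() or collect() is called.

```python
# A DataFrame aggregation and a small RDD in PySpark.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("widget_counts").getOrCreate()

# DataFrame API: Spark infers a schema from the CSV header and sampled rows.
sales = spark.read.csv("sales.csv", header=True, inferSchema=True)
sales.groupBy("color").count().show()

# RDD API: an immutable, distributed collection with no inferred schema.
rdd = spark.sparkContext.parallelize([("blue", 1), ("red", 1), ("blue", 1)])
print(rdd.reduceByKey(lambda a, b: a + b).collect())

spark.stop()
```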

Other popular processing engines include Apache Storm, which utilizes spouts to read data and bolts to perform transformations; by connecting them, you build a processing pipeline. Apache Flink and Apache Samza are more modern stream and batch processing frameworks that allow you to process unbounded streams. An unbounded stream is data that arrives with no known end; a temperature sensor, for example, is an unbounded stream because it constantly reports readings. Flink and Samza are excellent choices if you are using Apache Kafka to stream data from a system.

Data pipelines

Combining a transactional database, a programming language, a processing engine, and a data warehouse results in a pipeline. For example, if you select all the records of widget sales from the database, run them through Spark to reduce the data to widgets and their counts, and then dump the result into the data warehouse, you have a pipeline. But this pipeline is not very useful if you have to execute it manually every time you want it to run. Data pipelines need a scheduler so they can run at specified intervals. The simplest way to accomplish this is with crontab: schedule a cron job for your Python file, then sit back and watch it run every X number of hours.
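
A hypothetical entry-point script for such a cron job might look like the following; the interpreter and file paths in the commented crontab line are assumptions.

```python
# pipeline.py -- an entry point that cron can invoke directly.
# A crontab entry to run it every three hours might look like (paths assumed):
#   0 */3 * * * /usr/bin/python3 /home/etl/pipeline.py >> /var/log/pipeline.log 2>&1

def run_pipeline():
    # The extract, transform, and load steps would go here.
    print("pipeline run complete")

if __name__ == "__main__":
    run_pipeline()
```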

Managing all the pipelines in crontab becomes difficult fast. How do you keep track of pipelines' successes and failures? How do you know what ran and what didn't? How do you handle backpressure? If one task runs faster than the next, how do you hold data back so that it doesn't overwhelm the downstream task? As your pipelines become more advanced, you will quickly outgrow crontab and need a better framework.

Apache Airflow

The most popular framework for building data engineering pipelines in Python is Apache Airflow. Airflow is a workflow management platform built by Airbnb. Airflow is made up of a web server, a scheduler, a metastore, a queueing system, and executors. You can run Airflow as a single instance, or you can break it up into a cluster with many executor nodes — this is most likely how you would run it in production. Airflow uses Directed Acyclic Graphs (DAGs).

A DAG is Python code that specifies the tasks in a pipeline. A graph is a series of nodes connected by relationships or dependencies. In Airflow, the graph is directed because the tasks flow in one direction, each coming after its dependencies, and acyclic because the flow never loops back on itself. Using the preceding example pipeline, the first node would execute a SQL statement grabbing all the widget sales. That node would connect downstream to another node, which would aggregate the widgets and counts. Lastly, that node would connect to the final node, which loads the data into the warehouse. The pipeline DAG would look as in the following diagram:

Figure 4 — A DAG showing the flow of data between nodes. The task follows the arrows (is directed) from left to right
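
For illustration, a hedged sketch of that three-task DAG in Airflow 2.x style might look like the following; the callables, schedule, and start date are placeholders rather than anything taken from the figure. The >> operator encodes the directed edges: Airflow starts a task only after everything upstream of it has succeeded.

```python
# A three-task DAG: extract widget sales, aggregate counts, load the warehouse.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_sales(): ...      # run the SQL statement grabbing widget sales
def aggregate_counts(): ...   # reduce the data to widgets and counts
def load_warehouse(): ...     # load the result into the warehouse

with DAG(
    dag_id="widget_sales_pipeline",
    start_date=datetime(2021, 3, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_sales", python_callable=extract_sales)
    transform = PythonOperator(task_id="aggregate_counts", python_callable=aggregate_counts)
    load = PythonOperator(task_id="load_warehouse", python_callable=load_warehouse)

    extract >> transform >> load  # directed: each task runs after its dependency
```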

The following is a screenshot of a DAG in Airflow:

Figure 5 — The Airflow GUI showing the details of a DAG

The Airflow GUI is not as polished as NiFi's, which we will discuss next.

Apache NiFi

Apache NiFi is another framework for building data engineering pipelines, and it too utilizes DAGs. Built by the National Security Agency and used at several federal agencies, Apache NiFi is easier to set up and is useful for new data engineers. The GUI is excellent, and while you can use Jython, Clojure, Scala, or Groovy to write processors, you can accomplish a lot with a simple configuration of existing processors. The following screenshot shows the NiFi GUI and a sample DAG:

Figure 6 — A sample NiFi flow extracting data from a database and sending it to Elasticsearch

Apache NiFi also allows clustering and the remote execution of pipelines. It has a built-in scheduler and provides backpressure handling and monitoring of pipelines. Furthermore, Apache NiFi offers version control through the NiFi Registry and can collect data at the edge using MiNiFi.

Another Python-based tool for data engineering pipelines is Luigi, developed by Spotify. Luigi also uses a graph structure and allows you to connect tasks. It has a GUI much like Airflow, and is an excellent option for data engineering with Python.
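
As a hedged sketch of how Luigi connects tasks, the following defines one task that depends on another through file targets; the task names and output paths are assumptions made for illustration.

```python
# Two Luigi tasks connected through file targets.
import luigi

class ExtractSales(luigi.Task):
    def output(self):
        return luigi.LocalTarget("sales_raw.csv")    # hypothetical path

    def run(self):
        with self.output().open("w") as f:
            f.write("color,count\nblue,3\nred,1\n")  # placeholder extract

class CountWidgets(luigi.Task):
    def requires(self):
        return ExtractSales()                        # the dependency edge in the graph

    def output(self):
        return luigi.LocalTarget("widget_counts.csv")

    def run(self):
        with self.input().open() as src, self.output().open("w") as dst:
            dst.write(src.read())                    # real logic would aggregate here

if __name__ == "__main__":
    luigi.build([CountWidgets()], local_scheduler=True)
```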

Summary

Hopefully you now have a much better sense of what data engineering is. Data engineering roles and responsibilities vary depending on the maturity of an organization’s data infrastructure. But data engineering, at its simplest, is the creation of pipelines to move data from one source or format to another. This may or may not involve data transformations, processing engines, and the maintenance of infrastructure.

Data engineers use a variety of programming languages, most commonly Python, Java, or Scala, as well as proprietary and open-source transactional databases and data warehouses, both on-premises and in the cloud, or a mixture of the two. Data engineers need to be knowledgeable in many areas: programming, operations, data modeling, databases, and operating systems. That breadth is part of what makes data engineering such a fun, exciting, and challenging field, and for those willing to accept the challenge, it makes for a rewarding career.

To continue reading Paul Crickard’s book Data Engineering with Python, subscribe to the Packt library and unlock access to a wide range of eBooks and videos.
