Spark ETL Chapter 8 with Lakehouse | Apache HUDI

Kalpan Shah
Plumbers Of Data Science
7 min read · Mar 24, 2023

Previous blog/Context:

In an earlier blog, we discussed Spark ETL with a lakehouse built on Delta Lake. Please see the blog post below for more details.

Introduction:

In this blog, we will discuss Spark ETL with Apache HUDI. We will first understand what Apache HUDI is and why it is used for building lakehouses. We will then source data from one of the source systems we have covered so far, load that data in Apache HUDI format, and build an on-premises lakehouse with it.

What is Apache HUDI?

Apache Hudi is an open-source data management framework for Apache Hadoop-based data lakes. Hudi stands for “Hadoop Upserts Deletes and Incrementals.” It provides a way to manage data in a big data environment with features like data ingestion, data processing, and data serving. Hudi was originally developed by Uber and was later contributed to the Apache Software Foundation as an open-source project.

Hudi provides several key features that make it useful for managing big data, including:

  1. Upserts, deletes, and incrementals: Hudi supports efficient updates and deletes of existing data in a Hadoop-based data lake, allowing for incremental data processing.
  2. Transactional writes: Hudi supports ACID transactions, ensuring that data is consistent and reliable.
  3. Delta storage: Hudi stores data as delta files, which allows for fast querying and processing of data changes.
  4. Schema evolution: Hudi supports schema evolution, enabling changes to the schema without requiring a full reload of the data.
  5. Data indexing: Hudi provides indexing capabilities that make it easy to query data in a Hadoop-based data lake.

Overall, Hudi provides a flexible and efficient way to manage big data in a Hadoop-based data lake. It enables efficient data processing and querying while ensuring data consistency and reliability through ACID transactions. Hudi is used by a variety of companies and organizations, including Uber, Alibaba, and Verizon Media.

Spark ETL with different Data Sources (Image by Author)

Today, we will perform the ETL operations below, and along the way we will also learn about Apache HUDI and how to build a lakehouse.

  1. Read data from MySQL server into Spark
  2. Create a HIVE temp view from a data frame
  3. Load filtered data into HUDI format (create initial table)
  4. Load filtered data again into HUDI format in the same table
  5. Read HUDI tables using Spark data frame
  6. Create a HIVE temp view of the HUDI table
  7. Explore data

First, clone the GitHub repo below, which contains all the required sample files and the solution.

If you don’t have a Spark instance set up, follow the earlier blog on setting up Data Engineering tools on your system. (The Data Engineering suite sets up Spark, MySQL, PostgreSQL, and MongoDB.) That Spark instance already has packages installed for Azure Blob Storage and Azure Data Lake Storage.

Start the Spark application with all required packages

First, we will start the Spark session with all the required packages and configurations for Apache HUDI. Our Spark instance does not ship with the Apache HUDI packages (jar files), so we need to specify them externally when starting the Spark session. We will also be using MySQL, so we will specify the MySQL connector package as well.

With Apache HUDI, we also need to pass the configurations below.

Starting Spark application with required Apache Hudi Packages (Image by Author)
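The screenshot above can be approximated with the sketch below. This is a minimal example, not the exact code from the repo: the Hudi bundle and MySQL connector versions are assumptions and should be matched to your own Spark and Scala versions.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("spark-etl-hudi")
    # Hudi Spark bundle + MySQL JDBC driver pulled from Maven
    # (versions are examples; match them to your Spark/Scala versions)
    .config(
        "spark.jars.packages",
        "org.apache.hudi:hudi-spark3.3-bundle_2.12:0.13.0,"
        "mysql:mysql-connector-java:8.0.32",
    )
    # Configurations Hudi expects on the Spark session
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .config("spark.sql.extensions",
            "org.apache.spark.sql.hudi.HoodieSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.hudi.catalog.HoodieCatalog")
    .getOrCreate()
)
```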

Now we have our Spark session available with all the required packages and configuration, so we can start the ETL process.

Read data from MySQL server into Spark

(If you have already completed Chapter 7, you can skip reading data from MySQL and creating the HIVE view, and go directly to the section on creating the HUDI table.)

For this ETL, we are using the same MySQL database as the source system and loading the same table. We will not go into detail on how to load data from MySQL and create a HIVE view, as we already covered this in Chapter 7.

If you haven’t already uploaded the data into MySQL, please follow the earlier blog to do so.

MySQL Table which we will load into Spark (Image by Author)

We will read this data into Spark, create a Spark data frame, and build a HIVE temp view on top of it.

Creating connection with MySQL from Spark (Image by Author)
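A minimal sketch of the JDBC read is shown below; the host, database, table name, and credentials are placeholders for your own setup.

```python
# Read the source table from MySQL over JDBC (connection details are placeholders)
food_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://localhost:3306/demo_db")
    .option("driver", "com.mysql.cj.jdbc.Driver")
    .option("dbtable", "food")        # source table name (assumed)
    .option("user", "root")
    .option("password", "password")
    .load()
)
food_df.printSchema()
```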

Create a HIVE temp view from a data frame

We will create a HIVE temp view from a data frame.

Create a HIVE table and explore Spark SQL (Image by Author)

We will explore the data, check which food group has the highest count, and filter down to one group.
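A sketch of this step might look like the following; the column name `group`, the view name, and the filter value are assumptions about the dataset.

```python
# Register the MySQL data frame as a temp view so we can use Spark SQL
food_df.createOrReplaceTempView("food")

# Check which food group has the most rows
spark.sql("""
    SELECT `group`, COUNT(*) AS cnt
    FROM food
    GROUP BY `group`
    ORDER BY cnt DESC
""").show()

# Keep a single food group for the initial Hudi table
newdf = spark.sql("SELECT * FROM food WHERE `group` = 'Vegetables'")
```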

Load filtered data into HUDI format (create initial table)

We have the data frame “newdf” available, which contains only one food group. We will use it to create the first Hudi table.

Create first Apache Hudi table (Image by Author)

With the HUDI format, we need to pass a few options. Some of them are mandatory; if we don’t pass them, the table will not be created. The mandatory options are passed in our example.

We also need to pass the base path.

Once we have prepared the options and the base path, we can create a Hudi table using the format “hudi”.
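A minimal sketch of this write, assuming a unique record-key column named `id` and a local base path (both are assumptions about the dataset and environment):

```python
# Mandatory Hudi write options (record key and precombine field are assumed
# to be a unique column named "id" in this dataset)
hudi_options = {
    "hoodie.table.name": "hudi_food",
    "hoodie.datasource.write.table.name": "hudi_food",
    "hoodie.datasource.write.recordkey.field": "id",
    "hoodie.datasource.write.precombine.field": "id",
    "hoodie.datasource.write.operation": "insert",
}

# Base path where the Hudi table is created on the local drive (assumed location)
base_path = "file:///tmp/hudi_food"

# Create the initial Hudi table from the filtered data frame
(
    newdf.write.format("hudi")
    .options(**hudi_options)
    .mode("overwrite")
    .save(base_path)
)
```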

It will create a folder named “hudi_food” and write a parquet file in which it stores the data and metadata.

Hudi first table on local drive (Image by Author)

Inside “hudi_food”, we have metadata and a parquet file containing the data. The folder structure is as below.

Hudi Data and Metadata format (Image by Author)

This looks like the following in the folder:

Image by Author

Inside the “.hoodie” folder:

Hudi metadata folder (Image by Author)

The properties file has all the properties of the HUDI table.

Properties in metadata file (Image by Author)

The parquet file contains the actual data.

Data file (Image by Author)

If we also specify the partition property, it will create a folder structure as below.

Creating a Hudi table with more properties (Image by Author)
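A sketch of the partitioned write, reusing the options from the earlier sketch and assuming `group` as the partition column:

```python
# Same write, with a partition column added ("group" is an assumed column name)
hudi_options_partitioned = {
    **hudi_options,
    "hoodie.datasource.write.partitionpath.field": "group",
}

(
    newdf.write.format("hudi")
    .options(**hudi_options_partitioned)
    .mode("overwrite")
    .save(base_path)
)
```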

It will create a folder as below

Hudi table with more properties (Image by Author)

Inside “hudi_food”

Data and Metadata at the file server level (Image by Author)

Load filtered data again into HUDI format in the same table

We will create one more data frame by filtering on another food group and then append the data into the same Hudi table.
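A sketch of the append, reusing the options and base path from the earlier sketches; the filter value is an assumption:

```python
# Filter another food group (value assumed) and append it to the same table
newdf2 = spark.sql("SELECT * FROM food WHERE `group` = 'Fruits'")

(
    newdf2.write.format("hudi")
    .options(**hudi_options)
    .mode("append")           # append instead of overwrite this time
    .save(base_path)
)
```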

It will create one more parquet file in the same folder.

Data files at file server level (Image by Author)

Read HUDI tables using Spark data frame

Now, we will read the HUDI table into the Spark data frame.

Read a Hudi table from Apache Spark (Image by Author)
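Reading the table back is a one-liner; a minimal sketch, assuming the same base path as before:

```python
# Load the Hudi table from its base path into a Spark data frame
hudi_df = spark.read.format("hudi").load(base_path)
hudi_df.printSchema()
```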

Here, we see that extra columns have been added. The first 5 columns (_hoodie_commit_time, _hoodie_commit_seqno, _hoodie_record_key, _hoodie_partition_path, and _hoodie_file_name) are Hudi metadata columns. As we discussed, HUDI does not only keep metadata in separate files; it also stores metadata columns inside the data files themselves.

Now, if we print the data:

Printing data from Hudi table (Image by Author)

If we pass “truncate=False”, we can see the full commit time and sequence number.

Hudi properties (Image by Author)
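For example, a small sketch of inspecting the metadata columns without truncation:

```python
# Show the Hudi metadata columns without truncating the values
hudi_df.select("_hoodie_commit_time", "_hoodie_commit_seqno").show(truncate=False)
```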

Commit time is: 20230322103927766

Which is 2023/03/22 10:39:27.766 (yyyy/MM/dd HH:mm:ss.SSS)

and commit sequence number: 20230322103927766_0_0

Create a HIVE temp view of the HUDI table

We have the data available in the data frame. Now we will create a HIVE temp view so that we can write Spark SQL.

Create a HIVE table from a Hudi data frame (Image by Author)
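A sketch of this step, using a hypothetical view name:

```python
# Expose the Hudi data frame as a temp view for Spark SQL (view name is arbitrary)
hudi_df.createOrReplaceTempView("hudi_food_view")
```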

Explore data

We can run Spark SQL queries and explore the data.

Spark SQL on Hive table based on Hudi data frame (Image by Author)
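For example, a sketch of a query against that view (the column name `group` is an assumption about the dataset):

```python
# Example query against the temp view built on the Hudi data frame
spark.sql("""
    SELECT `group`, COUNT(*) AS total_items
    FROM hudi_food_view
    GROUP BY `group`
""").show()
```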

Conclusion:

Here, we have learned the concepts below.

  • Understanding of Apache HUDI
  • How to install HUDI packages from the Maven repo
  • How to configure Spark parameters for HUDI
  • How to create a HUDI table and load data
  • How data is stored in HUDI format
  • How to read data from HUDI tables
  • How to write Spark SQL queries on HUDI

Video Explanation:
