Introducing Hydrator Data Pipelines

cdapio · Apr 23, 2019 · 4 min read

Originally published May 18, 2016

Albert Shau is a software engineer at Cask, where he is working to simplify data application development. Prior to Cask, he developed recommendation systems at Yahoo! and search systems at Box.

Cask Hydrator lets you easily create ETL pipelines through a simple drag-and-drop user interface. We’ve found that our users like the simplicity of Hydrator, but often want to create pipelines that go beyond simple transformations. For example, you may want to remove duplicate data, count how many records satisfy some criteria, or even run a machine learning algorithm. To support use cases like these, we made one fundamental change to Hydrator in CDAP 3.4: instead of operating only on individual records, Hydrator can now operate either at the record level or at the feed level (i.e., on a whole collection of records at once). This paradigm shift allowed us to add three new plugin types (Aggregate, Model, and Compute) that let users create complex data pipelines through the same simple drag-and-drop user interface.

Aggregate

The first new plugin type is the Aggregate type, which consists of two phases. In the first phase, each input record is assigned to zero or more groups. In the second phase, each group is aggregated into zero or more output records. For example, Cask Hydrator includes a Deduplicate plugin that groups records by a subset of their fields, then chooses one record in each group as the canonical record. Suppose the records input to the Deduplicate plugin have time_window, ticker, and price fields. You could configure the plugin to group by ticker and time_window, then pick the record with the highest price in each group. You can do this easily using the Hydrator UI:

The plugin will group, then filter as configured:
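The two-phase semantics of Deduplicate can be sketched in plain Java, without any Hydrator or Spark APIs. The Quote record and the sample values below are illustrative, not taken from the plugin itself:

```java
import java.util.*;
import java.util.stream.*;

public class DeduplicateSketch {
  // A simplified record with the fields from the example above.
  public record Quote(String timeWindow, String ticker, double price) {}

  // Phase 1: group records by (ticker, time_window).
  // Phase 2: collapse each group to the record with the highest price.
  public static List<Quote> deduplicate(List<Quote> input) {
    Map<List<String>, Optional<Quote>> groups = input.stream()
        .collect(Collectors.groupingBy(
            q -> List.of(q.ticker(), q.timeWindow()),
            Collectors.maxBy(Comparator.comparingDouble(Quote::price))));
    return groups.values().stream()
        .flatMap(Optional::stream)
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<Quote> out = deduplicate(List.of(
        new Quote("09:00", "AAPL", 101.0),
        new Quote("09:00", "AAPL", 103.5),
        new Quote("09:00", "GOOG", 720.0)));
    System.out.println(out.size()); // two groups, so two canonical records
  }
}
```

The real plugin runs the same group-then-collapse logic distributed across a MapReduce or Spark job rather than on an in-memory list.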

Cask Hydrator also includes a GroupByAggregate plugin that can compute SQL-like aggregates. For example, with similar input data, you could group by ticker, then compute a count, sum, min, and max for each group:

The plugin will group, then compute aggregates as configured:
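The same SQL-like aggregates can be sketched in plain Java to show what GroupByAggregate computes per group. The Quote and Stats types here are illustrative stand-ins for the plugin's input and output schemas:

```java
import java.util.*;
import java.util.stream.*;

public class GroupByAggregateSketch {
  public record Quote(String timeWindow, String ticker, double price) {}
  public record Stats(long count, double sum, double min, double max) {}

  // Group by ticker, then compute COUNT, SUM, MIN, and MAX over price.
  public static Map<String, Stats> aggregate(List<Quote> input) {
    return input.stream().collect(Collectors.groupingBy(
        Quote::ticker,
        Collectors.collectingAndThen(
            Collectors.summarizingDouble(Quote::price),
            s -> new Stats(s.getCount(), s.getSum(), s.getMin(), s.getMax()))));
  }

  public static void main(String[] args) {
    Map<String, Stats> stats = aggregate(List.of(
        new Quote("09:00", "AAPL", 100.0),
        new Quote("09:05", "AAPL", 104.0),
        new Quote("09:00", "GOOG", 720.0)));
    System.out.println(stats.get("AAPL"));
  }
}
```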

You are also free to implement your own aggregator plugin. For example, it is straightforward to write a TopK aggregator that groups by a field and outputs the top k records in each group.
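The core logic of such a TopK aggregator fits in a few lines; a minimal sketch in plain Java (again using an illustrative Quote record, with price as the ranking field) might look like:

```java
import java.util.*;
import java.util.stream.*;

public class TopKSketch {
  public record Quote(String timeWindow, String ticker, double price) {}

  // Group by ticker, then keep only the k highest-priced records per group.
  public static List<Quote> topK(List<Quote> input, int k) {
    return input.stream()
        .collect(Collectors.groupingBy(Quote::ticker))
        .values().stream()
        .flatMap(group -> group.stream()
            .sorted(Comparator.comparingDouble(Quote::price).reversed())
            .limit(k))
        .collect(Collectors.toList());
  }
}
```

In a real aggregator plugin, the grouping phase would emit the ticker as the group key and the per-group sort-and-limit would run in the aggregation phase.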

Model

The second new plugin type is the Model type, which allows you to train and store machine learning models in Spark. This type of plugin is a sink, which means it consumes records but does not pass any output to downstream stages. The plugin receives all of its input records as an RDD and can run any logic you could normally run in Spark. This makes the Spark sink a natural place to train and store the many different machine learning algorithms available in Spark.

Compute

The last new plugin type is the Compute type. A Compute plugin is similar to a Model plugin in that it runs using the Spark execution framework. The only difference is that it must output an RDD, which allows it to be connected to other plugins in a Hydrator pipeline. This lets you leverage the power of Spark to create a wide variety of useful plugins. For example, you could load a classification model trained by another plugin to tag each input record with an additional category field, run a feature selection algorithm to filter out irrelevant records, or simply sample your data. As a concrete example, here is a plugin that normalizes a field to a value between 0 and 1:

@Override
public JavaRDD<StructuredRecord> transform(SparkExecutionPluginContext context,
                                           JavaRDD<StructuredRecord> input) throws Exception {
  // First pass: extract the configured field as a Double from every record.
  JavaRDD<Double> values = input.map(new Function<StructuredRecord, Double>() {
    @Override
    public Double call(StructuredRecord record) throws Exception {
      return record.get(config.field);
    }
  });
  final Double min = values.min(Comparator.<Double>naturalOrder());
  final Double max = values.max(Comparator.<Double>naturalOrder());
  // Second pass: rewrite the field as (value - min) / (max - min).
  // cloneRecord is a helper (not shown) that copies a record into a builder.
  return input.map(new Function<StructuredRecord, StructuredRecord>() {
    @Override
    public StructuredRecord call(StructuredRecord record) throws Exception {
      Double value = record.get(config.field);
      return cloneRecord(record)
        .set(config.field, (value - min) / (max - min))
        .build();
    }
  });
}

This plugin computes the minimum and maximum values for a specific field, then uses those values to scale that field to a value between zero and one.

Since Cask Hydrator is built on top of the Cask Data Application Platform (CDAP), your data pipelines automatically get all the benefits that CDAP provides. You can configure different actions to run after a pipeline run has finished. For example, you can send an email if the run failed, or run a database query if it succeeded. Metrics, logging, scheduling, and lineage all come out of the box. There is also no limit on how many of these new plugin types can be used in the same pipeline, and no restrictions on how they can be connected. Hydrator handles the magic of transforming your data pipeline into a workflow of Spark and MapReduce jobs. You can find more information on creating custom ETL plugins in the CDAP documentation. In an upcoming blog post, we will take a peek under the hood and examine how Hydrator transforms your data pipeline into a workflow.

Try Cask Hydrator by downloading the latest version 3.4 of CDAP and give data pipelines a spin. As always, contributions and feedback are appreciated. Let us know what you think!
