Create Your ETL Pipeline In 90 Min. (A Best Case Scenario)

Creating a data engineering pipeline using Python, SQL and Google Cloud in less than 2 hours.

Zach Quinn
Pipeline: Your Data Engineering Resource

--

Create a job-worthy data portfolio. Learn how with my free project guide.

The first ETL pipeline I turned in at work was a buzzer beater. Not only was my release date a Friday, it was the Friday before my wedding. With the blessing of my then-senior engineer, we broke a cardinal rule of data engineering and merged on a Friday.

Joining my current team in its infancy (I was the second full-time hire for the new, official data engineering team) meant that, inevitably, I would build a lot of pipelines, mostly connections to lower-priority vendor APIs that the senior engineers didn’t have the bandwidth to work on. But as I gained experience, I was able to take on higher-priority projects, and soon I had the very unofficial and very specific distinction of building the most ETL pipelines in a fiscal year.

With volume and “reps” inevitably comes speed, or at least fewer roadblocks.

Which is how I was able to build out an end-to-end connection to my target API, ConvertKit, in under two hours.

To be clear, what follows is the best-case scenario for any data engineering development process. I set my own requirements, had incredibly clear API documentation and, most significantly, didn’t have to wait for any reviews or QA.
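
To give you a sense of the extract step in a connection like this, here is a minimal Python sketch, assuming ConvertKit’s v3 REST endpoint and secret-key authentication. The environment variable name and the page-at-a-time fetch are illustrative choices for this post, not a reproduction of the production pipeline.

```python
import os

import requests

# A minimal "extract" sketch, assuming ConvertKit's v3 REST API.
# CONVERTKIT_API_SECRET is an illustrative env var name, not the
# author's production config.
API_SECRET = os.environ["CONVERTKIT_API_SECRET"]
BASE_URL = "https://api.convertkit.com/v3/subscribers"


def fetch_subscribers(page: int = 1) -> dict:
    """Return one page of subscriber records as parsed JSON."""
    response = requests.get(
        BASE_URL,
        params={"api_secret": API_SECRET, "page": page},
        timeout=30,
    )
    # Fail loudly on 4xx/5xx so a scheduler can catch and retry the run.
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    first_page = fetch_subscribers()
    print(f"Subscribers returned: {len(first_page.get('subscribers', []))}")
```

From there, the transform and load steps reshape the JSON and write it to a warehouse table, which is where the SQL and Google Cloud pieces of the stack come in.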

--