The #1 Reason Snowflake is Different

Evolution of data warehouses and technical analysis of why Snowflake has a major technical advantage over others

Doug Foo
May 3 · 7 min read

It intrigues me how a small startup like Snowflake could come out of stealth mode and take roughly $1 billion in sales away from Redshift (and also Azure, GCP, and Oracle) while running on the very same AWS ecosystem.

In this article, we take an in-depth look at how Snowflake differs and why it is doing so well. We'll start with a little history of data warehouse platforms so we can really understand the evolution. (TL;DR: Snowflake is cloud-native, not a retrofit on top of an existing RDBMS engine.)


Evolution of Data Platforms

Inmon and Kimball were the fathers of the classic data warehouse, its star-schema models, and the concept of data marts. However, their work was mostly design philosophy: how to structure, build, and load tables, normalized vs. de-normalized schemas, and data marts vs. a central schema.

RDBMS Extensions for Warehouses

It was natural for the standard databases of the era (Oracle, DB2, Informix, etc.) to adapt from transactional (OLTP) workloads to warehouse (OLAP) workloads, with optimizations like large block sizes, bitmapped indexes, and star-query optimization. Customers were just starting to build large data warehouses, and a 1TB DWH was a huge deal (on 4–8GB drives it took a van-sized array of disks). I was lucky to receive this funny, classic t-shirt courtesy of Oracle 8 marketing way back in 1997, boasting about terabyte sizes.

Photo © Doug Foo

Some of the challenges with traditional RDBMS systems and DWH workloads:

  1. Row-oriented storage meant very expensive I/O for analytic scans
  2. Limited memory to cache enough of the data to reduce I/O
  3. Limited ability to scale up or out, given their legacy SMP architectures
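The first problem is easy to see with a toy example. This sketch is purely illustrative (it models no vendor's actual engine): to sum a single column, a row store effectively touches every field of every row, while a column store touches only the one column being aggregated.

```python
# Toy illustration of why row-oriented storage makes analytic scans
# I/O-heavy: count how many fields each layout must touch to answer
# "sum the amount column" over a small fact table.

rows = [
    {"id": i, "customer": f"c{i}", "region": "EU", "amount": i * 1.0}
    for i in range(1_000)
]

# Row-oriented: every field of every row is scanned to reach "amount".
fields_scanned_row_store = sum(len(r) for r in rows)

# Column-oriented: the same table stored as one list per column.
columns = {k: [r[k] for r in rows] for k in rows[0]}
total = sum(columns["amount"])              # the query result
fields_scanned_col_store = len(columns["amount"])

print(fields_scanned_row_store)  # 4000 fields touched
print(fields_scanned_col_store)  # 1000 fields touched
print(total)                     # 499500.0
```

With four columns, the row store reads 4x the data for this query; real warehouse tables often have dozens or hundreds of columns, which is why the gap mattered so much.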

Oracle solved some of these issues by vertically integrating hardware and software into the Exadata platform, combined with RAC (formerly Parallel Server) for scale-out, along with column-oriented compression. One might argue, however, that these are improvements bolted onto a core engine built for general-purpose workloads, not specifically for data warehousing.

The Rise of Specialty DWH Systems

Driven by the need for scale and performance, vendors developed specialized column-oriented databases such as KDB and Sybase IQ (both formally launched in the mid-90s, but far more popular post-Y2K). These greatly mitigated the first two problems: I/O cost and cache efficiency.

The third problem was solved by products like Teradata (spawned from Caltech and banking customers in the 80s), which pioneered Massively Parallel Processing (MPP, essentially distributed computing) by developing cluster-management software (BYNET) and query coordinators (PEs) to scale across physical machines (AMPs), similar to what technologies like Kubernetes plus Spark or Presto do today at a higher level of abstraction.

© Teradata public docs
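The pattern is simple to sketch. The following is a minimal illustration (the PE/AMP names are borrowed from the Teradata docs above; the logic is entirely my own toy version): a coordinator hash-distributes rows across workers, each worker aggregates only its local shard, and the coordinator merges the partial results.

```python
# Minimal sketch of the MPP scatter/gather pattern: a parsing engine
# (PE) hashes rows across worker nodes (AMPs), each AMP computes a
# partial aggregate, and the PE merges the partials.

N_AMPS = 4

def route(row_key: int) -> int:
    """PE step 1: hash-distribute each row to an AMP."""
    return hash(row_key) % N_AMPS

def amp_partial_sum(shard):
    """AMP step: aggregate only the locally stored rows."""
    return sum(amount for _, amount in shard)

# Distribute a toy fact table of (order_id, amount) rows.
table = [(i, float(i)) for i in range(100)]
shards = {amp: [] for amp in range(N_AMPS)}
for key, amount in table:
    shards[route(key)].append((key, amount))

# Fan out the work, then PE step 2: merge the partial results.
partials = [amp_partial_sum(shards[amp]) for amp in range(N_AMPS)]
total = sum(partials)
print(total)  # 4950.0, same answer as a single-node scan of 0..99
```

Note that in this classic design each shard lives on its AMP's own disks; that coupling of data to worker becomes important later in the Snowflake comparison.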

Vendors tend to copy each other, so over the following decades Oracle, Sybase, DB2, and Teradata converged on similar DWH features, all boasting MPP support and horizontal scaling.

Cloud Re-Invents the Warehouse

Two key things happened recently that enabled the Cloud Warehouse movement.

First, Jeff Dean and colleagues at Google created MapReduce, and running MPP-style computation at that scale became the next big thing. Hadoop evolved from it at Yahoo, and suddenly petabytes could be processed in parallel without a central “database,” creating a new ecosystem for data that became the modern-day data lake.
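The MapReduce model itself fits in a few lines. Here is the canonical word-count example, collapsed into one process purely to show the data flow (real Hadoop distributes each phase across machines): map emits (key, value) pairs, a shuffle groups pairs by key, and reduce folds each group.

```python
# Word count in the MapReduce style: map -> shuffle -> reduce.
from collections import defaultdict

def map_phase(doc: str):
    # Map: emit a (word, 1) pair for every word in the document.
    for word in doc.lower().split():
        yield (word, 1)

def shuffle(pairs):
    # Shuffle: group all values by key, as the framework would
    # between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: fold each key's values into a final count.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the data lake", "the data warehouse"]
pairs = [p for doc in docs for p in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'the': 2, 'data': 2, 'lake': 1, 'warehouse': 1}
```

The key architectural point for this article: map and reduce workers read from and write to a shared storage layer (GFS/HDFS), so compute and data are not permanently welded together.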

Second, Andy Jassy at Amazon created AWS and made compute and storage easily available. Now any company or vendor had the infrastructure to build its own MPP (BYO-MPP): combine the ideas from Google with hardware from AWS, and you could write massively scaled applications (or a database). This approach evolved into today's standard way of building large-scale systems spread across servers (a.k.a. microservices).

As a next step, Amazon took a solid open-source database (PostgreSQL, successor to Ingres, both courtesy of Michael Stonebraker at UC Berkeley) and MPP'd it to make Redshift (yes, I'm sure there is a lot more to it).

The architecture of Redshift is, to be honest, not much different from Teradata's, nor from any other distributed system. Azure SQL Data Warehouse is built in roughly the same way: they took SQL Server and MPP'd it.

© Amazon AWS Redshift Public Documentation

Along Came a Snowflake

Now, finally, to the Snowflake story. Founded by two ex-Oracle architects and a veteran of Dutch MPP database research, it ran in stealth mode from 2012–2014, launched on AWS, and then expanded to Azure and GCP in 2018 and 2019. The company raised over $1.3B in VC investment before IPO'ing in 2020 at a $33B valuation (currently around $66B), with a 2021 annual revenue target of $1B.

One of the biggest things Snowflake did differently was to completely separate compute and storage, breaking with the classic MPP architecture. It is therefore elastic in both dimensions: you can add compute resources and storage independently, and every compute node can reach every storage device via S3, EBS, or the equivalents on Azure and GCP. You can spin compute up and down while the data stays centrally in one place. This is fundamentally different from MPP databases like Teradata; Snowflake is closer in spirit to MapReduce and Hadoop/Spark.

The biggest selling point you hear for Snowflake over Redshift was the ability to scale compute without storage, which Redshift and Azure Warehouse quickly “fixed.” But did they really fix it? I took a deeper look and figured out the real issue. While Redshift and Azure enabled elastic storage, data is still coupled to each compute node. The core engine in each compute node is still essentially a PostgreSQL or SQL Server instance that runs like a traditional database: it has its own set of data files, cache, and locking. So to scale elastically up or down, you wind up needing to re-shard and migrate data onto different compute nodes. I'm sure Redshift is engineering around this (current docs say online resizes take only 4–8 minutes), but these are fundamental design issues when your core engine is a legacy database.
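The re-sharding cost is easy to demonstrate with a back-of-the-envelope sketch (no vendor code here, just the placement math): when node-local data is placed by `hash(key) % nodes`, changing the node count changes almost every key's home, so the data must physically move. With storage decoupled into one shared object store, only the assignment of work changes, not the location of the data.

```python
# Why coupled compute+storage forces a re-shard: compare each key's
# home node before and after scaling a 4-node cluster to 5 nodes.

def placement(keys, n_nodes):
    # Modulo hash placement, the simplest node-local sharding scheme.
    return {k: hash(k) % n_nodes for k in keys}

keys = list(range(10_000))
before = placement(keys, 4)   # 4-node cluster
after = placement(keys, 5)    # scale out to 5 nodes

moved = sum(1 for k in keys if before[k] != after[k])
print(f"{moved / len(keys):.0%} of keys change nodes")  # 80% must migrate
```

Schemes like consistent hashing shrink that 80% considerably, but some data always has to move as long as the data lives on the nodes; in a shared-storage design the number is zero, because only query routing changes.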

An analogy: imagine you want to build a mega-car with 32 engines and 128 wheels. You take 32 Toyotas, each with one engine and four wheels, and add a series of interconnects and controllers to make them work as one mega-car. With a new shell, a paint job, and a consolidated driver's seat, it looks and operates like the mega-car of your dreams. Internally, though, it is still just 32 Toyotas, so you can't really add or remove a single wheel or engine; the core unit is a Toyota (though you can hack around it, say by fitting larger tires for capacity). Contrast that with a fresh design built from the ground up to allow adding and removing engines and wheels on the fly. That fresh design is Snowflake.

Conclusion

If you can see why Oracle has trouble using its general-purpose database as a specialized data warehouse, you can imagine why Redshift, with its legacy PostgreSQL core, has trouble adapting to the elastic demands of the cloud. We all know from our experiences modifying legacy codebases that retrofitting 30+ years of Oracle's core code in significant ways is next to impossible, and likewise it is next to impossible to retrofit Redshift and its PostgreSQL engine to be fully elastic and decoupled from storage. Hence I believe Snowflake has a huge technical advantage, not to mention that being multi-cloud friendly helps it win more and more business.

In summary: Snowflake is special. I won't recommend buying the stock, but could it be the next Oracle?



Doug Foo
Tech Manager by Day, ML Hacker by Night — founder: foostack.ai

Geek Culture: a new tech publication by Start it up (https://medium.com/swlh).
