Loading all Hacker News comments into Snowflake (in less than 1 minute)
This is a quick demo showing how to export the Hacker News archive from BigQuery into Snowflake. It's quick, easy, and a great way to show off Snowflake's semi-structured data support and its recursive SQL capabilities.
From BigQuery to Snowflake in less than 1 minute
Hacker News in BigQuery
If you are a BigQuery user, you can find the Hacker News archive table on console.cloud.google.com/bigquery?p=bigquery-public-data&d=hacker_news&t=full&page=table.
This table contains an updated copy of Hacker News, as seen in my previous posts (from when I worked with the BigQuery team). Right now it has more than 11 GB of data, with almost 30 million rows:
To load a copy of this table into Snowflake, we can export it to GCS in different formats, for example Parquet:
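The export can also be done in SQL with BigQuery's EXPORT DATA statement; this is a sketch, and the GCS bucket name is a placeholder for your own:

```sql
-- A sketch of the export step (the bucket URI is a placeholder).
-- The BigQuery console's export dialog achieves the same thing.
export data options (
  uri = 'gs://my-bucket/hacker_news/*.parquet',
  format = 'PARQUET'
) as
select *
from `bigquery-public-data.hacker_news.full`;
```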
16 seconds later the operation has finished:
Import to Snowflake (even cross cloud)
On my Snowflake account I have already configured a secure integration with Google’s Cloud Storage, so it’s straightforward to check that the new files are there with a SQL query:
Note that even when a Snowflake account lives in AWS, the integration with Google Cloud Storage is seamless.
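The one-time setup looks roughly like this; the integration name, stage name, and bucket URL are placeholders (Snowflake's documentation covers the extra step of authorizing the integration's service account on the GCS bucket):

```sql
-- A sketch of the one-time setup; names and URLs are placeholders.
create storage integration gcs_int
  type = external_stage
  storage_provider = 'GCS'
  enabled = true
  storage_allowed_locations = ('gcs://my-bucket/hacker_news/');

create stage my_gcs_stage
  url = 'gcs://my-bucket/hacker_news/'
  storage_integration = gcs_int;

-- Check that the exported Parquet files are there:
list @my_gcs_stage;
```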
We can create a schema-less table to load these Parquet files into a Snowflake native table:
create or replace table hacker_news(v variant);

copy into hacker_news
from @my_gcs_stage  -- the stage configured for the GCS bucket
file_format = (type='PARQUET');
It took 43 seconds to create this semi-structured table with an S (small) warehouse, while also moving the data from GCP to AWS:
With an L (large) warehouse the time goes down to 28 seconds (again while moving the data from GCP to AWS):
Our data is now ready to query.
Key differences once the data is in Snowflake
It's worth noting that Snowflake charges for compressed storage. Data moved from BigQuery to Snowflake typically compresses 4 to 5 times. In this case, since most of the content is text that doesn't compress heavily in a columnar format, the same table uses about half the storage in Snowflake:
select bytes, row_count
from information_schema.tables
where table_name = 'HACKER_NEWS';
Querying semi-structured data
We loaded the Parquet files into a Snowflake semi-structured table with one column of the VARIANT type. This is a special Snowflake type that parses semi-structured data on load and prepares it for querying. It's especially useful when loading JSON objects with different schemas, and it helps us here too.
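For instance, colon paths and `::` casts let us pull typed columns straight out of the VARIANT; a small illustrative query:

```sql
-- Navigate the VARIANT with colon paths, and cast with ::
-- (v:time holds epoch seconds, which cast cleanly to a timestamp)
select v:by::string      author
     , v:time::timestamp posted_at
     , v:type::string    type
from hacker_news
limit 5;
```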
For example, to search one specific story by id:
select v
from hacker_news
where v:id = '25978000';
The result looks like a JSON object, but it's more than that: note that we could navigate it with the colon notation (v:id) in the WHERE clause.
To search for all comments in that story, we can use the parent field:
select v
from hacker_news
where v:parent = '25978000';
This finds 15 top-level comments:
A hard problem in SQL is finding all the comments that have the above comments as parents (and so on), to collect every comment left on that specific story. But that's not a hard problem in Snowflake, thanks to its recursive SQL support with the CONNECT BY clause:
with data as (
    select v:by author, v:time t, v:parent parent, v:id id
         , v:type type, v:title title, substr(v:text, 0, 20)||'' text
    from hacker_news
)
select author, t::timestamp, text
     , sys_connect_by_path(id, '->') path
     --, connect_by_root title
from data
start with id = 25978000
connect by parent = prior id
;
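The same traversal can also be written with a standard recursive CTE, which Snowflake supports as well; this is a sketch of an equivalent query (untested here):

```sql
-- A sketch of the same story traversal using WITH RECURSIVE
-- instead of CONNECT BY; the column list names the CTE's output.
with recursive thread (id, parent, author, text) as (
    select v:id, v:parent, v:by, substr(v:text, 0, 20)
    from hacker_news
    where v:id = 25978000
    union all
    select c.v:id, c.v:parent, c.v:by, substr(c.v:text, 0, 20)
    from hacker_news c
    join thread t
      on c.v:parent = t.id
)
select author, text
from thread;
```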
And that’s how we can find all comments left on this story, regardless of their depth:
The query profile for these recursive results looks pretty interesting too:
For more fun analysis on these comments, stay tuned.
- We were able to load 11 GB of data from BigQuery in GCP to Snowflake in AWS in less than 1 minute.
- Snowflake charges storage for compressed data (instead of uncompressed).
- Snowflake works seamlessly with semi-structured data in schema-less tables (thanks to the VARIANT type).
- Snowflake can navigate recursive table structures with recursive SQL support (including the CONNECT BY clause).
- Thanks to Thomas Dziedzic, for sharing his method to analyze Hacker News with Snowflake back in March 2021.
- Try this out with a Snowflake free trial account — you only need an email address to get started.
I’m Felipe Hoffa, Data Cloud Advocate for Snowflake. Thanks for joining me on this adventure. You can follow me on Twitter and LinkedIn. Check reddit.com/r/snowflake for the most interesting Snowflake news.