Streaming Data to BigQuery with Dataflow and Updating the Schema in Real-Time

Alex Fragotsis
Published in Inside League
Dec 26, 2021 · 3 min read

Robert Delaunay, “Relief-disques,” 1936.

In our previous story, we saw how to stream data to BigQuery and add new columns when needed. That solution, though, isn't really real-time; I think we can do better.

Another approach I've seen discussed online, but without any code samples, is this: we enable streaming inserts to BigQuery using Dataflow; if the new data contain new fields, the insert fails; we then collect all the failed rows, detect the new schema, update the schema in BigQuery, and re-insert.

A really simple pipeline that streams data to BigQuery looks like this:

def run(argv):
    with beam.Pipeline(options=pipeline_options) as pipeline:
        realtime_data = (
            pipeline
            | "Read PubSub Messages" >> beam.io.ReadFromPubSub(...)
            | f"Write to {options.bq_table}" >> beam.io.WriteToBigQuery(...)
        )

Now, if the PubSub message contains some new fields that are missing from BigQuery, the insert is going to fail, and WriteToBigQuery, according to the documentation, is going to emit the failed rows to the BigQueryWriteFn.FAILED_ROWS output.

So all we have to do is read them, group them in a small window (I use 1 minute) just to catch any other messages that happen to arrive at the same time, and re-insert them into BigQuery:
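To make the 1-minute grouping concrete: a fixed window simply buckets each element by its timestamp, which is what Beam's FixedWindows does under the hood. A rough pure-Python sketch of that bucketing (the function name and sample timestamps are made up for illustration):

```python
from collections import defaultdict

def assign_to_fixed_windows(rows, window_size=60):
    # Each (timestamp, row) pair lands in the window starting at
    # timestamp - (timestamp % window_size), like Beam's FixedWindows.
    windows = defaultdict(list)
    for timestamp, row in rows:
        windows[timestamp - (timestamp % window_size)].append(row)
    return dict(windows)

# Failures at t=5s, t=59s and t=70s fall into two 1-minute windows.
batches = assign_to_fixed_windows([(5, "a"), (59, "b"), (70, "c")])
```

So rows that fail within the same minute get batched together and trigger a single schema update instead of one per row.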

failed_rows = (
    realtime_data[BigQueryWriteFn.FAILED_ROWS]
    | "Window" >> GroupWindowsIntoBatches(window_size=1)
    | f"Failed Rows for {options.bq_table}" >>
        beam.ParDo(ModifyBadRows(options.bq_dataset, options.bq_table))
)


Before we start testing, there are a few gotchas.
1. We need to change the retry policy to never retry, otherwise Dataflow keeps retrying the failed inserts instead of emitting them to FAILED_ROWS (in WriteToBigQuery, set insert_retry_strategy=RetryStrategy.RETRY_NEVER).

2. The default GroupWindowsIntoBatches we find in Google's documentation doesn't work here. Messages coming from BigQueryWriteFn.FAILED_ROWS are not timestamped, so we need to timestamp them ourselves:

import time

class GroupWindowsIntoBatches(beam.PTransform):
    def __init__(self, window_size):
        # Convert minutes into seconds.
        self.window_size = int(window_size * 60)

    def expand(self, pcoll):
        return (
            pcoll
            # FAILED_ROWS elements carry no timestamp, so attach one.
            | "Add Timestamps" >> beam.Map(
                lambda x: beam.window.TimestampedValue(x, time.time()))
            | "Window into Fixed Intervals" >> beam.WindowInto(
                beam.window.FixedWindows(self.window_size))
            # GroupByKey needs a key, so add a dummy one.
            | "Add Dummy Key" >> beam.Map(lambda elem: (None, elem))
            | "Groupby" >> beam.GroupByKey()
            | "Abandon Dummy Key" >> beam.MapTuple(lambda _, val: val)
        )

Finally, to detect the new schema we use the BigQuery Schema Generator, as we did last time.
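Before the full DoFn, here is the core idea in isolation: compare the keys of the failed rows against the columns BigQuery already knows about. This sketch only illustrates the idea, not the bigquery-schema-generator API; the column names and rows are invented:

```python
known_columns = {"user_id", "event"}  # hypothetical existing BigQuery columns
failed_rows = [
    {"user_id": 1, "event": "click", "device": "ios"},
    {"user_id": 2, "event": "view", "country": "CA"},
]

# Any key not already in the table's schema is a new column to add.
new_columns = set()
for row in failed_rows:
    new_columns |= row.keys() - known_columns
```

The Schema Generator does this for us and, on top of that, infers a BigQuery type for each new column.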

class ModifyBadRows(beam.DoFn):

    def __init__(self, bq_dataset, bq_table):
        self.bq_dataset = bq_dataset
        self.bq_table = bq_table

    def start_bundle(self):
        self.client = bigquery.Client()

    def process(self, batch):
        logging.info(f"Got {len(batch)} bad rows")
        table_id = f"{self.bq_dataset}.{self.bq_table}"

        generator = SchemaGenerator(input_format='dict',
                                    quoted_values_are_strings=True)

        # Get the original schema to assist the deduce_schema function.
        # If the table doesn't exist,
        # proceed with an empty original_schema_map.
        try:
            table_file_name = f"original_schema_{self.bq_table}.json"
            table = self.client.get_table(table_id)
            self.client.schema_to_json(table.schema, table_file_name)
            original_schema_map = read_existing_schema_from_file(table_file_name)
        except Exception:
            logging.info(f"{table_id} table not exists. Proceed without getting schema")
            original_schema_map = {}

        # Generate the new schema from the failed rows,
        # seeded with the original one.
        schema_map, error_logs = generator.deduce_schema(
            input_data=batch, schema_map=original_schema_map)
        schema = generator.flatten_schema(schema_map)

        job_config = bigquery.LoadJobConfig(
            source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
            schema_update_options=[
                bigquery.SchemaUpdateOption.ALLOW_FIELD_ADDITION],
            write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
            schema=schema)

        try:
            load_job = self.client.load_table_from_json(
                batch,
                table_id,
                job_config=job_config,
            )  # Make an API request.

            load_job.result()  # Waits for the job to complete.
            if load_job.errors:
                logging.info(f"error_result = {load_job.error_result}")
                logging.info(f"errors = {load_job.errors}")
            else:
                logging.info(f'Loaded {len(batch)} rows.')

        except Exception as error:
            logging.info(f'Error: {error} with loading dataframe')

And that’s it! Now our pipeline will stream the data to BigQuery in real-time, and if we get a message containing a field for which we don’t have a column in BigQuery:

  • that insertion will fail,
  • we’ll gather all failed rows and group them in a 1-minute window,
  • our pipeline will automatically detect the new schema,
  • update BigQuery, and
  • re-insert the failed rows,

and all that without stopping the pipeline at all; messages that arrive after the failed ones will keep getting inserted into BigQuery.

You can find the full code here: