Sestina Bio — Harnessing the Power of Digital Biology for Synthetic Biology

Sestina Bio
4 min read · Oct 14, 2020


Sestina is writing cellular poetry

Synthetic biology has the potential to transform vast sectors of our economy. We can now genetically engineer cell lines to manufacture foods, flavors, aromatic chemicals, fuels, fabrics, and even therapeutics. We've achieved this by modifying existing organisms and by creating new ones with novel functions. Unfortunately, engineering biological function is not as simple as refactoring computer code. Sequence space is enormous; there are many more possible genetic combinations in a yeast cell, for example, than there are atoms in the universe. In addition, biology is intrinsically messy and non-modular, with many components interacting in complex ways for almost every function.

The result is that rational approaches to genetic engineering are often little more than educated guesses, and success still requires a significant amount of trial and error. The classic synthetic biology learning cycle has four phases — design, build, test, and learn (DBTL). You want as many cells with unique sets of mutations as possible to go through the entire DBTL cycle to maximize the likelihood of success. Since the goal is to link these genetic variants to function, it is also desirable to collect many data types (metabolome, transcriptome, etc.) that describe the desired function. In general, the more high-quality data you can generate, the better. Current lab processes are designed around macroscopic wells, each of which contains an individual colony of cells. Scaling this to make thousands of measurements in parallel has historically required a very large infrastructure investment in laboratory automation equipment and is usually only practical within a centralized or core laboratory.
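To make the DBTL loop concrete, here is a toy sketch of one in code. Everything in it is hypothetical for illustration: the genome is a short bitstring, and the "test" phase is a made-up fitness function standing in for real assays (titer, omics measurements, etc.); this is not Sestina's actual platform.

```python
import random

random.seed(0)

GENOME_LEN = 20  # toy genome: 20 binary loci


def fitness(genome):
    # Hypothetical stand-in for the "test" phase: just the count of
    # "beneficial" loci. A real test phase would run assays and collect
    # multi-omic data for each strain.
    return sum(genome)


def dbtl_cycle(population, n_designs=100, mutation_rate=0.1):
    # Design + build: propose mutant variants of the current strains.
    designs = []
    per_parent = n_designs // len(population)
    for parent in population:
        for _ in range(per_parent):
            child = [b ^ (random.random() < mutation_rate) for b in parent]
            designs.append(child)
    # Test: score every design (in a real system, in parallel).
    scored = sorted(designs, key=fitness, reverse=True)
    # Learn: carry the top strains into the next cycle.
    return scored[:len(population)]


population = [[0] * GENOME_LEN for _ in range(5)]
for cycle in range(10):
    population = dbtl_cycle(population)
best = max(fitness(g) for g in population)
```

The point of the sketch is the scaling argument in the paragraph above: the more unique designs you can push through each cycle (`n_designs`) and the richer the data you collect in the test phase, the faster the loop converges.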

We created Sestina Bio to develop an alternative to the existing rational paradigm, one that we believe will exponentially increase our ability to explore biology, specifically the number of unique designs moving through DBTL cycles, at a fraction of the cost. The basis of this approach arose from three powerful "digital biology" principles: digitizing, multiplexing, and indexing. Digitizing samples, in whatever format, allows you to increase the quality and speed of a process by separating the input sample into thousands or even millions of individual reaction volumes. Multiplexing provides the capability to perform many parallel chemical processes in the same volume, and indexing allows you to deconvolute the signal from these processes in a traceable and inexpensive manner.
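The indexing principle can be illustrated with a toy deconvolution sketch. The barcodes and signal values below are invented for illustration: each measurement from a pooled (multiplexed) run carries an index identifying the partition it came from, which lets you trace pooled signals back to individual samples.

```python
from collections import defaultdict

# Toy multiplexed readout: (barcode, signal) pairs. The barcodes tag
# which partition each measurement came from; values are invented.
reads = [
    ("ACGT", 1.2), ("TTAG", 0.4), ("ACGT", 1.1),
    ("GGCA", 2.5), ("TTAG", 0.6), ("GGCA", 2.4),
]


def deconvolute(reads):
    # Indexing: group signals by barcode so pooled measurements can be
    # traced back to their source partition.
    by_sample = defaultdict(list)
    for barcode, signal in reads:
        by_sample[barcode].append(signal)
    # Report the mean signal per partition.
    return {bc: sum(vals) / len(vals) for bc, vals in by_sample.items()}


signals = deconvolute(reads)
```

Because the barcode travels with each measurement, thousands of partitions can share one reaction volume and one readout, which is what makes the approach inexpensive to scale.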

These principles formed the basis of nearly every measurement technology I have been personally involved in or exposed to over my 30-year career. The first example, over 25 years ago, was the development at Lawrence Livermore National Laboratory (LLNL) of a system that separated samples into microwells and performed multiplexed, real-time polymerase chain reaction (PCR) measurements (the inventor went on to found Cepheid). We also developed multiple measurement systems for biodefense applications, including a collaboration with a then very young start-up (Luminex Corporation) that helped the Centers for Disease Control and Prevention develop their first immunoassay-based select agent panels. After leaving LLNL to found my first company, QuantaLife, my team and I developed a research tool to separate and process individual DNA molecules in thousands of individual partitions (now BioRad's Droplet Digital PCR™ system). From there, my co-founders and senior leadership at QuantaLife have created new companies based on these same principles, building massively parallel single-cell measurement systems (10X Genomics) and powerful new digital genome engineering systems (Inscripta). Over the past decade, many other tools companies have developed or are developing new research tools that follow this same paradigm.

For genetic engineering, one important element was still missing, however. Given nearly infinite sequence space, how do you decide which genetic changes to make first? And once you have the ability to run thousands of cells through the full DBTL cycle and make multi-omic measurements on all of them, how do you process this data to decide what the next design cycle should be? To address this challenge, I partnered with Foresite Labs to start Sestina Bio.

Says Vik Bajaj, CEO of Foresite Labs: “Sestina Bio will transform synthetic biology from a bespoke enterprise that uses room-scale automation to one that operates with great finesse, at the level of single cells. The unprecedented dimensionality implied by the ability to deeply modify, individually follow, and functionally characterize single cells will make biology look more like an engineering discipline. The resulting experiments are so complex that they cannot be designed and executed by human minds; they require new approaches, including machine learning, to design experiments, extract features from primary data, and to efficiently find optimal solutions within an enormous parameter space.”

We named our new start-up Sestina Bio after a complex form of 12th-century poetry. We were inspired by the Nobel laureate Professor Frances Arnold, who compared the process of directed evolution to a Beethoven symphony. At Sestina Bio, we hope to develop a platform that will allow us to create countless cellular masterpieces in the future.

-Bill Colston PhD, Founder and CEO