Cells Not Wells

Sestina Bio · May 3, 2021

By Ted Tarasow

Starting a new company in the middle of a pandemic is challenging. Hiring an A-player team without in-person interviews, finding and equipping laboratory space, and initiating operations are all more complicated. But we felt these factors were more than offset by the opportunity to take advantage of truly game-changing, emerging technologies that, when combined, will vastly improve our ability to engineer cells and produce globally useful and valuable products (Figure 1). The alignment of new advances in genome-wide editing, microfluidic systems, and measurement technologies will allow us to decrease our cycle time from weeks to days as we transition from plate-based formats to massively parallel single-cell manipulation. The ability to deterministically edit and measure cells across an enormous diversity space will create equally enormous data sets linking genotype to phenotype. These data sets are exactly what is needed to realize the true potential of machine learning and advanced algorithms for understanding and ultimately designing biological systems. All of this is happening against a backdrop in which global demand for products made with more sustainable technologies grows daily.

Figure 1. The evolution of technology and its impact on synthetic biology capabilities. Technology improvements over time have enabled greater exploration of biological diversity, leading to higher productivity in a shorter time.

I have always loved science and math and, in particular, chemistry. Since graduate school, I have been especially drawn to the interface of chemistry and biology: using chemistry to manipulate and measure biological components and systems, and using biology and biological concepts to create and manipulate chemistry. The Sestina goal of creating the most efficient and productive synthetic biology platform in the world fits right in with everything I love about the chemistry/biology interface.

Biology is extraordinarily complex, and despite some suggestions that we can treat it like a parts list or an engineering discipline, there is much we do not understand and cannot predict. To truly harness biology and produce future products, we must have a much more complete understanding of how changes in a cell’s DNA code (genetic changes, or collectively a genotype) manifest themselves in different properties of the cell (phenotypes). For example, by inserting a piece of DNA into the cell’s genetic code, we provide the blueprint for the cell to produce a product of interest; by changing the existing DNA code, we can cause the cell to produce more or less of that product, depending on the edit. We can create myriad edits to the cell’s DNA code and observe how they perturb the biology and ultimately the chemistry of the cell, collectively the phenotype. The challenge is to explore and measure edit diversity at a scale that is commensurate with biological complexity. Data at that scale and complexity require advanced computational tools for analysis and can power machine learning algorithms to help us better understand how to design biological systems. Our goal is to significantly improve our efficiency at developing a cell with all of the traits desired for a given product application.
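To make the scale challenge concrete, here is a minimal back-of-the-envelope sketch. The gene count and edit types are illustrative assumptions, not Sestina data: a genome with a few thousand genes and a handful of edit types per gene already yields tens of millions of pairwise edit combinations.

```python
# Illustrative only: how quickly edit diversity outgrows plate-scale workflows.
# Gene count and edit types are assumed, round numbers, not measured values.
from math import comb

genes = 4000        # roughly the gene count of a typical bacterial genome (assumed)
edit_types = 3      # e.g., knockout, promoter swap, ribosome-binding-site variant

single_edits = genes * edit_types
pairwise_edits = comb(single_edits, 2)

print(f"Single-edit library size:    {single_edits:,}")    # 12,000
print(f"Pairwise edit combinations:  {pairwise_edits:,}")  # 71,994,000
```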

There have been many impactful advances over the years that have improved our ability to engineer cells fit for a purpose: DNA synthesis, next-generation sequencing (NGS), nuclease engineering, robotic liquid handling, and high-throughput mass spectrometry (MS) systems, to name a few. A principal limitation, however, remains: workflows and throughput are still largely constrained to plates. Plates have been the dominant format for containerization, indexing, handling, and storage over the years. However, they take time to fill, move, and manipulate; they require significant volumes of reagents; and they consume huge amounts of laboratory and storage space when used at scale. Despite large investments in trying to industrialize the processing of plates, the plate as a unit of throughput and processing is too costly and limits our ability to effectively address the challenge. Fortunately, there are new technologies, some on the horizon and some already in our lab, capable of a scale that is commensurate with the biological complexity we are trying to understand and control.
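To put the reagent economics in perspective, here is a rough, hypothetical comparison of per-sample volume in a plate well versus a microfluidic droplet. Both volumes are assumptions chosen only to illustrate the order of magnitude, not measurements from our platform.

```python
# Illustrative only: assumed, round-number volumes, not measured values.
well_volume_ul = 50.0      # assumed working volume of a 384-well plate well, in µL
droplet_volume_nl = 1.0    # assumed droplet volume, in nL (many systems run pL-nL)

well_volume_nl = well_volume_ul * 1000        # convert µL to nL
ratio = well_volume_nl / droplet_volume_nl

print(f"Reagent per well vs. per droplet: ~{ratio:,.0f}x")   # ~50,000x
```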

Digital Biology-Enabled Strain Engineering — The cell as the central unit of processing and measurement

The cellular engineering process is inherently cyclic, passing through design, build, test, and learn (DBTL) phases multiple times to reach a final strain with the desired properties. In seeking to increase DBTL scale and decrease cycle time to result, we are technology-agnostic and approach the problem looking to combine the best solutions, whether purchased, partnered, or developed in-house. The DBTL quest starts with generating genetic diversity at the single-cell level, deterministically. The larger and more diverse the libraries, the more comprehensively we can explore the biological space. There now exist off-the-shelf technologies that allow the creation and tracking of genome-wide libraries on the order of tens of thousands of edits in a matter of days. The ease with which these libraries can be designed and made greatly lowers the barrier to more complete exploration of genetic diversity space, akin to going from looking for your keys in the dark under a streetlight to illuminating the whole city. A critical concept in this technology is barcoding the edit at the single-cell level, facilitating rapid edit identification and accelerating the learning phase. This is in contrast to whole-cell mutagenesis, where strains typically accumulate 10–20 random edits per cell, which require significant effort to deconvolute.

This advance creates an immediate problem — plate-based processing and measurement of the edit libraries are impedance-mismatched with the scale of diversity. Fortunately, micro-, nano-, and picofluidic technologies have emerged that can process tens to hundreds of thousands of samples per day. These technologies can be directly coupled to phenotype measurement systems capturing physical, metabolic, proteomic, and other strain-specific traits, while NGS technologies are well suited to provide the genetic data necessary to track genotypes in these libraries. Coupling these capacity- and throughput-matched technologies allows tens to hundreds of thousands of individual genetic variants to be made, tracked, and measured on a scale of days to weeks, representing orders-of-magnitude improvements in the throughput and efficiency of biological diversity generation and exploration. We are just beginning to realize the potential of what these technologies enable and will continue to push the boundaries. The end goal is to build robust data sets at scale that couple comprehensive phenotype data to the genetic diversity explored in the edit libraries.
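As a sketch of the bookkeeping this coupling implies (hypothetical barcodes, edits, and measurements, not our production pipeline), the learning phase essentially joins a barcode-to-edit design table with per-cell barcode reads and phenotype measurements:

```python
import pandas as pd

# Hypothetical library design: each barcode maps to one deterministic edit.
design = pd.DataFrame({
    "barcode": ["ACGT", "TTAG", "GGCA"],
    "edit":    ["geneA_knockout", "geneB_promoter_swap", "geneC_rbs_variant"],
})

# Hypothetical single-cell results: an NGS-derived barcode plus a measured phenotype.
cells = pd.DataFrame({
    "cell_id":  [1, 2, 3, 4],
    "barcode":  ["ACGT", "ACGT", "GGCA", "TTAG"],
    "titer_gl": [1.2, 1.4, 0.3, 2.1],   # e.g., product titer in g/L
})

# Couple genotype to phenotype: one row per cell, annotated with its edit.
genotype_phenotype = cells.merge(design, on="barcode", how="left")

# Summarize the phenotype for each edit across the library.
print(genotype_phenotype.groupby("edit")["titer_gl"].agg(["mean", "count"]))
```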

Why are comprehensive genotype-phenotype data sets corresponding to large genome-wide diversity libraries so important? Like the technology breakthroughs that make these data sets possible, there have been significant advances in data science. Understanding complex systems is only possible if we have more complete genotype-phenotype data sets to power machine learning and other advanced computational tools. Ultimately, the combination of genotype-phenotype data at scale with advanced computational tools will create biologically aware algorithms capable of vastly improving our efficiency at creating built-for-purpose biological systems.
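As a minimal illustration of what such a model can look like (synthetic data and a standard off-the-shelf learner, not a description of Sestina's algorithms), one could encode each strain's edits as binary features and fit a regression model that ranks which edits drive a phenotype:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic genotype matrix: 500 strains x 200 candidate edits (1 = edit present).
n_strains, n_edits = 500, 200
X = rng.integers(0, 2, size=(n_strains, n_edits))

# Synthetic phenotype: only a handful of edits matter, plus measurement noise.
true_effects = np.zeros(n_edits)
true_effects[:5] = [0.8, -0.5, 0.3, 0.6, -0.4]
y = X @ true_effects + rng.normal(scale=0.1, size=n_strains)

# Fit a genotype-to-phenotype model and rank edits by learned importance.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
top_edits = np.argsort(model.feature_importances_)[::-1][:5]
print("Most informative edits:", top_edits)
```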

We are also looking beyond the core strain engineering platform. Early consideration of manufacturing factors and processes means we can fit the strain to scalable processes instead of trying to engineer manufacturing processes around a strain. De-risking the use of novel, low-cost feedstocks by developing compatible strains early in product development provides a transferable cost-of-goods advantage. Fermentation technologies have changed little over many decades, so we incorporate new equipment designs, some borrowed from other industries, and technical approaches that enable more efficient, continuous processing into the earliest phases of development. Our goal is end-to-end innovation to create a technology platform that can consistently deliver products much faster and more efficiently than other approaches.

The timing is right for Sestina. There are huge opportunities to build and align new technologies that can deliver on the ultimate promise of synthetic biology. How we use these tools to explore, manipulate, measure, model, and predict biology will differentiate the Sestina R&D platform. Coupling that platform with manufacturing innovation and a scalable business ecosystem will drive the company's success in efficiently delivering high-value products from sustainable processes that improve the lives of people around the world.
