Laying the Foundations for the Bioeconomy

What is standardization and why is it important in building a bioeconomy with synthetic biology?

Ross Kent
Jun 3, 2020


Data is an essential commodity in the modern world. Whilst the true value of data can be hard to define, the impact data has on people across the globe is difficult to deny. The data revolution has changed the way we live our lives, from ordering a take-away to going on a date.

The adoption of digital tech in so many parts of our lives rests on simple foundations: integration and communication. Whether it's the software in your phone or the Fitbit on your arm, integration is simple because the devices share a common language: the same agreed structure of 0s and 1s. In other words, they are standardized; they speak the same language.

Data is important in scientific innovation across all fields, from astrophysics to medicine. With the growing excitement surrounding synthetic biology, we ask: how is the digital age affecting this emerging technology?

Synthetic biology, a relatively young field of research, seeks to apply the principles of engineering to biology. SynBio looks to design and use genetic building blocks, or "parts", to construct new-to-nature systems. This exciting concept aims to tackle some of the grand challenges we face today using biological solutions. Biology is set to drastically change our lives, impacting everything from our diets and health to the materials we build our houses out of.

Scientists use a wide array of tools and equipment to understand the complex behavior of cells

These tools capture data about how engineered biological systems work. Using modern technology, we can understand how cells divide, sense, and respond to their environments. However, it's often surprising how manual the process of scientific R&D can be. It's easy to imagine vast robotic labs where machines run experiments, analyze our data, build DNA devices, and culture cells, much like you might see on a car assembly line.

Yet, more often than not this work is diligently carried out by human hands. Cells are grown manually and imaged or tested by hand. Data is collected and analyzed using complex and error-prone spreadsheets or specialized software. Research is an iterative process where, in each cycle, we seek to learn from the last, and use the lessons we learn to inform what we do next. The manual steps involved throughout these learning cycles complicate an already complicated process.

Biotechnology R&D is infamously risky and slow

This is driven, in part, by the labor-intensive and costly manual processes associated with lab research. However, in recent years the landscape of biological research has begun to change. As synthetic biology has matured out of academic circles and into the industrial realm, the worlds of "bio" and "tech" have drawn closer together. Synthetic biology was built on several founding principles: abstraction, decoupling, and standardization. The aim of SynBio is to make biology an easier medium to work with, bringing together experts from many areas, including molecular biology, machine learning, and bioprocessing.

It’s important to establish a common language for biology, like those found in software development, as communication across disciplines can be challenging. In a highly diverse field like SynBio, standards will prove essential to the growth of the technology.

Photo by Sven Mieke on Unsplash

Take engineering and the standardization of simple things like nuts and bolts, or computer science and ASCII (a standard for interpreting binary 0s and 1s). Both of these fields have revolutionized every aspect of our day-to-day lives, and both are built on established standards, which act as blueprints for others to follow. The development of these standards was vital to the growth of these fields and will be essential in building a bioeconomy. Standardization created a universal way of communicating and storing information and allowed knowledge to be shared with confidence; confidence that the technology can be easily understood and used by others.
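The ASCII example makes the point concrete: because every system agrees on the same character-to-number mapping, data written by one program can be read by any other. A minimal illustration in Python:

```python
# ASCII assigns each character a fixed numeric code, so any two
# systems that follow the standard interpret the same values identically.
message = "SynBio"

# Encode text to its standardized code values...
codes = [ord(ch) for ch in message]
print(codes)  # [83, 121, 110, 66, 105, 111]

# ...and any other ASCII-compliant system can decode them back.
decoded = "".join(chr(c) for c in codes)
assert decoded == message
```

No negotiation is needed between sender and receiver; the standard itself carries the agreement.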

A recent proposal by SynBioBeta set out to establish the #BuiltWithBiology initiative: a roadmap for how the state of California might build a hub for biomanufacturing following the COVID-19 crisis. The roadmap highlights the importance of a "digital backbone" for the bioeconomy. This backbone will use computer-aided biology to enable sharing and learning from biological data. Investing in these resources will lay the foundations for building a robust bioeconomy. Never has the need for timely development and universal data communication been clearer: as we race to develop vaccine candidates for COVID-19, rapid and robust data sharing becomes crucial.

Photo by Markus Spiske on Unsplash

Developments in computer-aided hardware and software are enabling the execution of complex experiments. Modern tools are making advanced experimentation and analysis more accessible and widespread. Analysis of complex data streams can now be carried out in flexible pipelines. Alongside these tools, adopting standards brings clear advantages. As we begin to close the loop of designing, building, and testing biological systems, this toolkit will only grow in importance. So how do we decide what these standards should look like?

Standardization in SynBio is currently championed by a number of groups

One such group is the BioRoboost initiative, which brings together experts from 27 research bodies across the globe. The group was recently set up with the aim of establishing standards across various aspects of biology, including chassis development, genetic device design, and data standardization. One key example is the Synthetic Biology Open Language (SBOL), a data standard for describing genetic designs. BioRoboost represents "an effort to codify the biological discovery process". In a recent article, the group highlighted that: "There is ample room for networks of practitioners involving industrial players, who can provide information on how biological properties and processes could improve product development, manufacturability and consumer confidence."
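To see why a shared data standard helps, consider what a standardized description of a genetic part might look like. The sketch below is not real SBOL (which is a far richer, RDF-based format); it is a hypothetical, simplified record illustrating the core idea that every tool agrees on the same fields, so a part exported by one lab can be read by another without translation:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class GeneticPart:
    """A hypothetical, simplified part record (illustrative only;
    the real SBOL standard defines much richer structure)."""
    identity: str   # unique identifier, e.g. a URI (placeholder here)
    role: str       # e.g. "promoter", "CDS", "terminator"
    sequence: str   # DNA sequence (placeholder, not a real part)

part = GeneticPart(identity="example.org/parts/demo-promoter",
                   role="promoter",
                   sequence="ACGTACGT")

# Because reader and writer agree on the fields, serialization
# round-trips cleanly between tools.
serialized = json.dumps(asdict(part))
restored = GeneticPart(**json.loads(serialized))
assert restored == part
```

The value is not in any one record but in the agreement: downstream design, build, and analysis tools can all consume the same description without bespoke converters.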

Despite the benefits of standardization, it has proven challenging to put into practice. There are many reasons for this, but ultimately it comes down to finding a standard that maintains flexibility, allowing it to be incorporated into the diverse range of applications within SynBio. Establishing standards is particularly challenging in biotechnology research: unlike in computer science or engineering, where standards grew up alongside the technology, here they must be retrofitted into a mature ecosystem. Even so, the transition of biology into the digital age gives us the chance to establish long-lasting standards.

Another barrier to the adoption of any standard is the cost associated with its implementation

The standard must be easy to integrate, require minimal set-up time, and merge easily with existing systems. Currently, this doesn't seem to be the case. The stick (technology barrier) is bigger than the carrot (technology benefit). However, computer-aided biology tools such as automation and machine learning are set to reshape the way we conduct biological research. With this change the proverbial carrot will grow, until the cost of NOT having established data standards outweighs the cost of changing the system.

As biological technologies continue to progress out of research environments and towards industrial-scale production, the need to track, trace, and record as much information as possible grows exponentially. This makes a common language for mapping data and its relationship to a given biological system very important. To tackle these challenges, people are increasingly turning to computer-aided methods, including automation and so-called “closed-loop experimentation”.

As these approaches evolve and we further digitize biological discovery, the need to integrate data collection and analysis from multiple pieces of equipment grows. Standardization forms a foundation on which these computer-aided biology pipelines can be built.

Ross Kent is a Postdoctoral Research Associate working with Manchester University and Cognate BioServices. His PhD research focused on developing regulatory mechanisms for controlling gene expression. Throughout this work, he has sought to bring modern statistical, data-led reasoning to his research. He is passionate about this objective way of thinking about problems and believes it can greatly advance how the research industry tackles scientific challenges.


