Computers process information; cells process stuff: March 28, 2018 Snippets

Published in Snippets | Social Capital · 8 min read · Mar 19, 2018

As always, thanks for reading. Want Snippets delivered to your inbox, a whole day earlier? Subscribe here.

This week’s theme: biology as a platform for value creation, and three ways that living biological systems are similar to computers.

In last week’s Snippets, we introduced a new idea and a new series to go along with it: how synthetic biology in the 21st century may have an impact on the world comparable to the one computers, software and the Internet have had so far. Over the next few weeks, we’ll explore this topic in greater detail: what biology as a platform for value creation will look like five, ten, twenty and fifty years from now; how its evolution may resemble the rise of computers in some surprising ways; and the ways in which it’ll be very different.

This week, we’ll go through several reasons why biology and living organisms are in some ways like computers, and then starting next week we’ll dive into how they’re fundamentally different beasts. Obviously their differences are vast: we’re going to use living systems to do very different jobs than we ask of computers. But the similarities are worth going through first, because they help illustrate how powerful and flexible both kinds of systems can be.

Let’s start out with the extreme basics. We can think about computers, at their simplest conceptual model, as information processors. Information comes in; something happens to it; different information comes out. On its own, that is not necessarily useful: you might know the saying, “garbage in, garbage out”. But if the information coming out is useful to us, then our machine is adding value; and if the cost of the work done is less than the value created, then our machine can be put to practical use. Early computers were essentially that: information processors. We used them for jobs like calculating ballistics trajectories in wartime: put in coordinates, wind, and altitude, and with the right instructions, you get out the direction and elevation angles at which to point your artillery. They were expensive to develop and build, but ample R&D money was provided by the government to support the war effort, and the early computing industry was largely born out of that initial wartime subsidy.
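To make that “information in, information out” picture concrete, here’s a minimal sketch of the ballistics job in Python, assuming idealized projectile physics with no wind or drag (which real fire-control computers very much did account for); the function name and numbers are illustrative only:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def launch_angle(target_range_m: float, muzzle_velocity_ms: float) -> float:
    """Information in (range, muzzle velocity); information out (elevation angle).

    Uses the idealized range equation R = v^2 * sin(2*theta) / g,
    ignoring wind, drag and altitude -- a toy model only.
    """
    x = G * target_range_m / muzzle_velocity_ms ** 2
    if x > 1:
        raise ValueError("target out of range at this muzzle velocity")
    return math.degrees(0.5 * math.asin(x))

# A 3 km target with a 250 m/s muzzle velocity:
print(f"Elevation: {launch_angle(3000, 250):.1f} degrees")  # ~14.1 degrees
```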

When we think about synthetic biology, we can use the same simple conceptual model: cells as matter processors. Organic “stuff” (carbon, nitrogen, oxygen and other contents, arranged into molecules and structures) comes in; something happens to it; different stuff comes out. Again, that on its own is not necessarily useful: the stuff coming out (carbon dioxide, water and waste products) might be worth less than what went in (sugar and oxygen, perhaps). But if we can give the cell instructions to make something more useful, like insulin, then our little living system in a dish will do useful work for us. Early commercial synthetic biology applications were exactly that: organic matter processors. We used them for one main purpose: making pharmaceutical drugs. You put cheap nutrients in, and with the right instructions, you get valuable molecules and compounds out. This technology was very expensive to develop and build, but just as with computers, ample R&D money was provided by the government through research and funding institutions like the NIH; as you’d expect, the early biopharmaceutical industry was for the most part born out of those years of government-funded research.

Of course, today we can easily see that computers are much more than information processing machines: software has worked its way into pretty much everything, and the TAM for computing turned out to be orders of magnitude greater than anyone anticipated. What happened? In the early days of computing, our mindset was focused on applications where processed information was the final product, and computers were what made that product. But over time, we realized that information is also an intermediate good in nearly everything. When we listen to music, hail a rideshare, or buy a book, “information” isn’t really the finished product we’re consuming, but it is an intermediate component at many steps along the way.

Today, we’re going through the same moment of reckoning with organic matter. Carbon-, nitrogen- and oxygen-based matter, which isn’t everything in the world but does make up most of its interesting substance, acts as an intermediate good for a whole lot of the world’s physical contents. So, just as we went through a period of broadening our perspective on computers from ballistics trajectories to nearly everything, we’re now beginning a period of broadening our perspective on synthetic biology and artificial life from pharmaceutical drugs to, well, nearly everything.

How so? Biological systems can be thought of as extendable platforms in much the same way we think about computers and the software stack. The central dogma of molecular biology (“DNA → RNA → protein”) is frighteningly complex in real life, but we’ve successfully abstracted it all away into “write DNA → execute protein” with remarkable facility. Today, we leverage those abstractions in remarkable ways: we may not understand everything that’s going on inside cells, but we’ve gotten very good at adding, modifying or deleting genes in model organisms and influencing the stuff that comes out the other side. At a high level, it’s not so different from a programmer writing commands in C++ or Java without necessarily knowing which bits are being flipped underneath. And just as with computers, we’ve started to modularize and function-ize this capability: before too long, our functional building-block approach to biology will look an awful lot like a real software build.
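To make the “write DNA → execute protein” abstraction concrete, here’s a minimal sketch of the two steps the central dogma names, using a tiny subset of the real (and universal) codon table; actual gene expression involves vastly more machinery than this:

```python
# A tiny subset of the standard genetic code (mRNA codon -> amino acid).
CODON_TABLE = {
    "AUG": "Met",  # methionine; also the start codon
    "GGC": "Gly",  # glycine
    "AAA": "Lys",  # lysine
    "UAA": None,   # stop codon
}

def transcribe(dna: str) -> str:
    """DNA (coding strand) -> mRNA: thymine (T) becomes uracil (U)."""
    return dna.replace("T", "U")

def translate(mrna: str) -> list[str]:
    """mRNA -> protein: read three-letter codons until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid is None:  # stop codon: release the finished chain
            break
        protein.append(amino_acid)
    return protein

print(translate(transcribe("ATGGGCAAATAA")))  # ['Met', 'Gly', 'Lys']
```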

These extendable living platforms have another essential similarity to software and computers: both are scalable in two important ways. First, they’re scalable at the level of executing a set of instructions: once you’ve successfully made the first insulin batch, the second is a whole lot easier, and the nth will trend towards free. In this respect, cells have an important disadvantage which we’ll cover in a future issue: biological matter degrades, and most cells can only make stuff for a limited period before they break down and get recycled. But the second way in which they’re scalable more than makes up for the first: cells and living systems are inherently self-replicating. Imagine if your iPhone contained not only the hardware and software to run all your apps, but could also easily create brand new copies of itself. So yes, iPhones are scalable too, in the sense that manufacturing the nth one is easier than the first. But biology puts even Apple’s legendary supply chain to shame. In fact, that’s one of the aspects of living systems that makes people pretty nervous about synthetic biology: are we at risk of creating something so successful it runs away from us? We worry about endlessly complex computing creations like malevolent AI systems replicating out of our control, which is fine, but personally I think we’re far more likely to get in trouble with biological systems over the long run.
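To see why that second kind of scalability is so dramatic, here’s a quick back-of-the-envelope sketch, assuming an idealized culture in which every cell divides every 20 minutes (roughly E. coli’s best case) and nothing ever runs short, which of course real cultures eventually do:

```python
import math

DOUBLING_TIME_MIN = 20  # roughly E. coli under ideal lab conditions

def hours_to_reach(target_cells: int, starting_cells: int = 1) -> float:
    """Hours for an always-doubling population to grow from start to target."""
    doublings = math.log2(target_cells / starting_cells)
    return doublings * DOUBLING_TIME_MIN / 60

# One cell -> a billion cells (2^30 is about 1.07e9) in roughly ten hours.
print(f"{hours_to_reach(10 ** 9):.1f} hours")  # ~10.0 hours
```

No factory scales like that; each unit of output is also a new factory.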

With those three similarities in mind (information vs. matter processing, extendable modular platforms, and scalability), we’ll turn over the next two weeks to two essential differences between computers and living systems. And that’s where things get really interesting.

In memoriam:

Obituary: Stephen Hawking (1942–2018) | BBC

Stephen Hawking, famed physicist, dies at 76 | Andrea Stone, National Geographic

“Remember to look up at the stars”: the best Stephen Hawking quotes | The Guardian

A mind like no other: the best of Stephen Hawking, in his own words | Scroll.in

Stephen Hawking’s five best and nerdiest cameo appearances in pop culture | Constance Grady, Vox

Internet communities: sometimes they do work

Why Wikipedia works | Brian Feldman, Select/All

Reddit and the struggle to detoxify the internet | Andrew Marantz, The New Yorker

Kottke.org is 20 years old | Jason Kottke

Learning to adapt:

Chimpanzees deliberately switch to inferior nut-cracking methods if they have to blend in with others | Luncz et al., Animal Behaviour

Intel fights for its future | Jean-Louis Gassée

Berkshire Hathaway has evolved into an acquisition engine; the returns look pedestrian | The Economist

An important scaling milestone:

Announcing our first Lightning mainnet release, lnd 0.4 beta! | Lightning Labs

Lightning’s first implementation now in beta; developers raise $2.5M | Aaron van Wirdum, Bitcoin Magazine

Other reading from around the Internet:

America’s ‘retail apocalypse’ is really just beginning, and maturing buyout debt is a huge reason why | Matt Townsend, Jenny Surane, Emma Orr & Christopher Cannon, Bloomberg

Phase separation (like in lava lamps and vinaigrette) can teach us a lot about cell biology | Elie Dolgin, Nature

Qualcomm, national security and patents | Ben Thompson, Stratechery

Distributed “versus” HQ org structures | Steve Sinofsky

Amazon’s internal numbers on Prime Video | Jeffrey Dastin, Reuters

How UMBC shocked Virginia to become the first 16 seed to ever beat a 1 seed | Ricky O’Donnell, SB Nation

The reckoning over social media has transformed SXSW | Casey Newton, The Verge

In this week’s news and notes from the Social Capital family, it’s time for a lightning round around the portfolio:

First, some lessons learned by experience from customers, vendors, competitors, and more:

Kevin Dang & Steven Ingram write about a hard experience at Wave, where what appeared to be a fruitful partnership with early customer adoption immediately went south after a crucial vendor feature was removed without a good contingency plan in place. The good thing about being a startup is that you can react quickly and adapt flexibly and tactically, but these kinds of experiences still sting quite a bit, and their write-up contains valuable lessons for everybody:

Hard lessons from self-service business intelligence | Kevin Dang & Steven Ingram, Wave Accounting

Des Traynor from Intercom walks us through a key kind of competition which many of us forget: indirect competition. Unlike primary direct competitors, who compete head to head (Burger King versus McDonald’s), or secondary competitors, who compete over the same outcome with sharply different approaches (like business class seats versus video conferencing), indirect competition arises when the customer has two distinct jobs they want to do which compete with or preclude each other: “I want to allow payments in my product, but I want to minimize the amount of third-party integrations we rely on” or “I want to add this analytics tool, but also optimize response times.” For these and many other situations, Des walks us through the art of having your cake and eating it too:

Understanding direct and indirect competition | Des Traynor, Intercom

And Angelica Weaver from Hustle shares some lessons learned from a successful engagement with a political campaign in Virginia, and how coordinated large-scale personal text messaging is poised to transform how campaigns of all kinds get done. The art of “getting people to care” in large numbers has always had a bit of science to it, and Hustle is full of lessons on how we can get a little more data-driven, a little sharper, and a little more deliberate with our actions every day:

Virginia grassroots coalition: lessons learned from the 2017 Virginia election | Angelica Weaver, Hustle

In other milestone news, AirMap has a new CEO:

Welcome David Hose, AirMap’s new CEO | Ben Marcus, AirMap chairman & cofounder

Elevating the drone industry | David Hose, AirMap CEO

And congratulations to Penny on being acquired by Credit Karma:

Credit Karma has acquired an instant message bot, Penny, that helps people track their spending | Theodore Schleifer, Recode

Have a great week,

Alex & the team from Social Capital
