Emerging: How New Technologies Move from Obscurity to Ubiquity
Frontier tech, deeptech, hard tech, emerging tech. All of these terms attempt to describe technologies that are enabled by novel technical or scientific breakthroughs. Sometimes, these technologies are so consequential that they gain general purpose technology status — a category that includes the steam engine, the printing press, electricity, the automobile, the computer, the Internet, artificial intelligence, nanotechnology, and synthetic biology. Others are less consequential, or play an enabling role, but still have the power to shape society. Whatever terminology you prefer, the development of these technologies is increasingly vital as we face problems that require solutions well beyond our current capabilities. The adoption of these technologies has also created some of the world’s most valuable companies.
“Frontier tech is no longer at the frontier.” Deena Shakir
I’ve spent the last several years analyzing emerging technologies as a graduate student at UC Berkeley, as an investor with Samsung Next and Playground Global, and as a research fellow at the World Economic Forum, the United Nations’ ITU, and Stanford’s Future of Digital Currency Lab. What I’ve learned about emerging technologies and their path from obscurity to ubiquity is outlined below, organized according to Oxford Languages’ definitions of the word “emerging.”
*Views are my own and do not represent those of the above organizations.*
1. To emerge: “to move out of or away from something and come into view.”
The breakthrough: True breakthroughs are hard to come by. Our natural tendency is to extrapolate the present into the future, carrying forward all the limitations of our current understanding of the world, complete with the downfalls of existing structures and systems. Unsurprisingly, the first applications built on new technologies tend to port over legacy structures instead of considering how a new technology could enable a radically new structure. Investors elevate contrarian thinking, but oftentimes contrarians simply take the status quo and add a minus sign, a thought process that is still grounded in our current understanding of the world, just in the inverse. Instead, counterintuitive thinking is what drives real breakthroughs.
There are many theories about what leads to breakthrough ideas, and given their highly creative and unscientific nature, it is hard to identify any one theory as authoritative. It might be that original thinkers consume different information than the mainstream or have been somehow sheltered from mainstream views. Mike Maples, partner at Floodgate, suggests that breakthroughs are derived from living in an environment that is somehow futuristic and then “backcasting” to the present.¹ National labs (Netscape)¹, government funded challenges (self-driving cars)², and university classrooms (deep learning startups)² are examples of such futuristic environments. Paul Graham, co-founder of YC, proposes that breakthroughs are often hidden behind mistaken assumptions that a select few dare to question.¹⁴ Adam Grant, author of Originals, proposes that original thinkers are simply more prolific than others and therefore have a higher probability of landing on something novel. Chris Dixon, partner at a16z, has found that visionary entrepreneurs spend many years deeply immersed in an underlying technology before achieving a breakthrough insight.
The business: While a breakthrough insight is a necessary first step, technology in and of itself is not transformative. Superior technology doesn’t always determine market winners and losers. Non-technical factors including design, ease of integration, key partnerships, stakeholder alignment, ecosystem development, timing, liquidity, and other network effects can all be deciding factors as to which technologies are widely adopted.
University settings, lab demos, and benchmarks do not necessarily translate into commercial success as they fail to account for the exponential complexity and incentive structures of practical operations.
The integration: Technology doesn’t exist in a vacuum. Adoption requires absorption into existing structures, systems, and incentive schemes. Too many emerging technologies fall into the category of being a novel innovation in search of a practical application. In many of these cases, technologists are unfamiliar with the traditional, and oftentimes bureaucratic, industries in which they seek to find product-market fit and fail to account for their complexities (e.g., healthcare). When and where possible, new technologies should integrate with existing production or manufacturing lines and drop into established processes. Any technology that demands a rip-and-replace of legacy technology, requires existing systems to be run in parallel during adoption, or requires material disruption to existing processes (rewriting applications, for example) creates barriers to broad adoption.
Disrupting established network effects, offsetting switching costs, and overriding the inertia of alternatives that are “good enough” are all significant obstacles that are consistently underestimated. National labs are still running programs written in Fortran.
The funding: Emerging technologies tend to have longer R&D timelines and may require several years of development before revenue generation opportunities materialize. As a result, funding risk must be considered.
- Public sector funding: Public investments are critical to the development of new technologies, in part because the State can operate on a 15–20 year horizon. Government investment has played a central role in developing microprocessors, RAM, hard disk drives, lithium batteries, the Internet, cellular networks, GPS, multi-touch screens, the Apollo program, and the freeway system. The public sector is now playing a critical role in supporting climate technology (ARPA-E), quantum technologies, and AI. Emerging technology startups should understand how to secure government funding.
- In the U.S., DARPA, NASA, the SBIR program, the NSF, and the NIH all play critical roles in novel tech development. Compaq, Intel, Apple, and Tesla were all recipients of government funding. In fact, 35 percent of all patents filed by U.S. startups were funded by the government.⁵ While not all technology supported by the government is used in military, national security, or defense applications, much of it is. Investors in emerging technology will need to assess their comfort level with these applications.
- Private sector funding: Emerging technologies are frequently written off as risky and capital intensive. These characterizations are not entirely fair. The risk of these investments must be considered in relation to the venture funding model. Venture returns are driven by a handful of investments that generate outsized returns and outweigh the majority that fail. The potential for outsized returns is higher with transformational technologies, further aided by the ability to receive non-dilutive funding. Furthermore, while capital (both financial and human) has saturated well-understood categories, venture funding for companies building something less easily understood, and which may require years in the lab, significantly dissipates.⁶ This leads to entry valuations that are more attractive, especially at the point at which the technology has been de-risked, but prior to commercialization.⁷ There is an advantage to believing before others understand.⁸
- The timeline to meaningful revenue generation can be longer relative to traditional software startups. However, the risks being underwritten in the meantime are fundamentally different. Given the massive size of the markets emerging technologies can create, investors are primarily underwriting technical risk. That means the majority of funding will go towards engineering and hard assets, rather than scaling and marketing, as with traditional software startups.⁷ That’s money spent developing a technical moat with real and significant barriers to entry, which should reduce capital intensity over time. In contrast, traditional software businesses often suffer from diseconomies of scale as CAC grows over time. While the exit horizon may be longer for emerging technologies, the success of these endeavors is also measured over longer horizons, better sheltering companies from cyclicality and short-term externalities like COVID-19.
2. To emerge: “to recover from or survive a difficult or demanding situation.”
The cycles: Emerging technologies are often mistaken for science projects or toys and are initially considered strange, unserious, expensive, dangerous³, and even heretical.¹⁴ They are dismissed at first and then subject to successive waves of enthusiasm and disillusionment. So-called “winters,” periods in which funding dries up and progress stalls, are caused by long R&D timelines that fail to reach commercialization fast enough. This can be due to a feature set that needs material technological development in order to become commercially viable.² For example, self-driving cars must be nearly perfect before they are valuable at all.² Blockchain networks that solve coordination problems may require all participants to join a network before the real value can be derived. Winters can also occur if complementary technologies lag behind. An “AI winter” all but halted the field’s development in the 1970s, as widespread applicability was constrained by the pace of development of complementary technologies.
The compounding and overlapping effects of some technologies are only fully expressed when they converge, and future use cases may require capabilities that have not yet been developed.
The timing: As Howard Marks, co-founder of Oaktree Capital Management, famously said, “being too far ahead of your time is indistinguishable from being wrong.”⁴ Technologies that are too early to market often fail to gain adoption. In some cases, they fail to gain broad adoption but end up paving the way for later iterations that do: General Magic paved the way for Apple’s iPhone, AltaVista laid the groundwork for Google, and the Virtual Boy paved the way for Oculus. Others never really catch on: a third of the 300 technologies featured in Gartner’s Hype Cycle over the past 25 years appeared only once.⁹
The shortages: The last thing a startup needs is for adoption to be constrained at the exact moment momentum takes off (e.g., CryptoKitties). However, new technologies face unique obstacles to scaling. Talent is always a key constraint. Processes will have to be adapted (e.g., agile doesn’t work for cryptonetworks and CI/CD is infeasible for production ML), and tooling won’t exist yet, so it will have to be built from scratch. UX friction will be high, and user and executive education will require time and resources.
The shifts: New technologies oftentimes require a behavior change or a mindset shift. This dynamic can cause adoption to occur “generationally,” constrained by the pace at which those in decision making positions are ready to embrace a new way of thinking or until the next generation holds decision making power. Many of these technologies disrupt legacy technologies, deeply entrenched incumbents, or tightly held power structures. Sometimes it requires a decision maker to put their reputation on the line or stake their status. Sometimes the shift shatters closely held world views. Expect resistance.
“It doesn’t matter whether the technology is actually superior or mature…It really comes down to the world’s collective assessment of whether… the value the technology offers outweighs the costs, real or imagined, of adopting it.” MaryAnne M. Gobble
3. To emerge: “to become apparent, important, or prominent.”
The movement: Emerging technologies inherently challenge the status quo. Some of them challenge the very underpinnings of society. Broad adoption, therefore, will require something akin to a social movement.
“Your startup has an opponent, but it’s not your competitors; it’s some version of the status quo.” David Sacks
Storytelling, narratives, and memes are critical to the adoption of new technologies. Too often technologists focus on technical superiority rather than highlighting practical applications. For people to join a new movement, they must believe in a larger cause, coming together to support a common mission. Bitcoin has been compared to a religion. However, Adam Grant posits that originals must become “tempered radicals” if they wish to succeed, arguing the most successful social leaders have learned to “tone down their radicalism by presenting their beliefs and ideas in ways that are less shocking and more appealing to mainstream audiences.”¹⁰
Movements are driven by grassroots communities, and the early evangelists of a new technology play an instrumental role. New technologies either need to be 10x better than alternatives or support otherwise impossible functionality. With the latter, the core group of users willing to tolerate a worse UX on every dimension other than novelty is crucial to driving early adoption. Building a strong developer community can also determine success (e.g., Ethereum and GitHub).
The “killer app”: Once a community of supporters has been created, a “killer application” must emerge to expand interest beyond a core constituency. Killer apps become so popular that they drive people to engage with a new technology, playing a critical role in mainstream adoption. Lotus 1-2-3 was the killer app for the IBM PC, VisiCalc was the killer app for the Apple II, and the Mosaic browser was the killer app for the Internet.
VisiCalc “propelled the Apple II to the success it achieved more than any other single event.” Steve Jobs
The creators of the original technology may not be the creators of the killer app, and oftentimes early adopters develop unforeseen or unimagined use cases. The phonograph was never intended to play music¹⁵ and GPUs were originally designed for video games.
The ecosystem: New technologies cannot launch into a market void. For emerging technologies, creating a minimum viable product is not enough; they require a minimum viable ecosystem. The Hardware Lottery (when a research idea is pursued over superior alternatives because it is best suited to the available software and hardware) serves as an excellent reminder that tech doesn’t exist in isolation, but instead depends on the ecosystem that surrounds it.
In recent years, the approach has been to support this development via “ecosystem funds” and multi-stakeholder consortia. Ecosystem funds support the development of critical tooling and enabling and complementary technologies, and promote innovation at every layer of the stack (the importance of which is underscored by Nvidia’s CUDA software). However, they suffer from many of the shortcomings of corporate venture capital funds and usually survive by expanding their mandate well beyond the scope of the core ecosystem they exist to support. Consortia coordinate a wide range of stakeholders. In practice, however, consortia move slowly, are prone to IP disputes and freeriding, and often fail to make meaningful practical progress. Despite these drawbacks, the fear of investing in the wrong technology or architecture (remember Sprint and WiMAX?) continues to motivate enterprises to join. Given the time and resources required to develop an entire ecosystem, the “first mover advantage” may not exist for emerging technologies. It does no good to have a 10x better camera with bad film.
“Successful breakthroughs are often distinguished from failures by benefiting from multiple criteria aligning serendipitously.” Sara Hooker
The committees: A lack of standards can slow adoption, dampen confidence in the industry, and increase the risks of stranded investments in incompatible solutions. However, official standards-setting bodies often suffer from “design by committee,” wherein an attempt to satisfy all contributors results in a final output that is suboptimal for all. Global standard setting can move even more slowly, since it is difficult to neutralize the inherent policy and politics of even seemingly technical standards across stakeholders (views of privacy vary widely across regions, for example).
The private sector often moves much faster than official standards-setting bodies, and the first-to-market private solution may very well become the de facto standard.
Oftentimes the canonical stack (the dominant hardware or software stack adopted by a majority of the industry) is self-selected by developers. A canonical stack is a critical milestone as it enables developers to move “up the stack” and build more intricate applications, enabling combinatorial innovation.
4. To emerge : “(of facts or circumstances) become known.”
The tipping point: Andy Grove, co-founder of Intel, defines inflection points as “an event that changes the way we think and act.”¹¹ Since most technologies benefit from network effects, as more users capitulate, the new technology overtakes legacy alternatives, gradually and then seemingly all at once. Mike Maples identifies three categories of inflection points:¹
- Exponential improvements in the price/performance of technologies, leading to feature/cost parity with alternatives (e.g., any technology that has benefitted from Moore’s Law).
- Nonlinear, exogenous events that lead to step-function changes in the adoption rate of a technology. These changes can be brought about by chaotic periods in time (e.g., the financial crisis → Bitcoin, the Cold War → the space program, COVID-19 → novel drug development).
- Regulatory changes that create massive new opportunities (e.g., Open Banking and telemedicine).
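The first category lends itself to simple arithmetic: a cost curve that halves on a fixed cadence crosses parity with a static incumbent on a predictable schedule. The sketch below is my own illustration, not from the article; the two-year halving cadence and the cost figures are hypothetical assumptions.

```python
import math

def years_to_parity(new_cost, incumbent_cost, halving_years=2.0):
    """Years until an exponentially improving technology's cost per unit of
    performance falls to a static incumbent's, given a Moore's-Law-style
    halving cadence. Inputs here are illustrative, not real data."""
    if new_cost <= incumbent_cost:
        return 0.0  # already at or below parity
    # cost(t) = new_cost * 0.5 ** (t / halving_years); solve cost(t) = incumbent_cost
    return halving_years * math.log2(new_cost / incumbent_cost)

# A technology starting at 16x the incumbent's cost, halving every two years,
# reaches feature/cost parity in 8 years.
print(years_to_parity(16.0, 1.0))
```

The point of the toy model is the shape of the curve: most of the waiting happens while the new technology still looks hopelessly expensive, and parity, once near, arrives quickly.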
The side effects: Widespread adoption catapults a new technology into the spotlight. Being at the cutting edge is, by definition, uncharted territory. This landscape is rife with ethical dilemmas. The ethical ramifications of new technologies should be at the forefront of discussions, yet I’ve found ethics to be almost entirely absent from the conversation.
Emerging technologies often operate in realms without established norms, procedures, or protections. The prevailing default is to consider ethics as an afterthought.
Emerging technologies (e.g., AI, bioengineering, facial recognition, and quantum computing) introduce complex ethical dilemmas that need to be addressed proactively, thoughtfully, and practically. Jessica Dai, former TA of Brown’s Socially Responsible Computing course, argues that ethics discussions should not be limited to technical fixes.¹³ She adds that these discussions must include theoretical and historical issues, address the role of power structures, and consider whether or not entire segments of the population will be excluded from participation as students, engineers, or users.¹³ These factors need to be considered upfront by employees, investors, and board members. The recent firing of Timnit Gebru illustrates the importance of raising ethical issues, as well as the potentially adverse reception.
“Established markets have legal norms, and companies learn to operate within them or push against them. That’s very different from some of the most innovative and nascent fields, where few guidelines exist.” Frieda Klotz
Given our current technical capabilities, adoption is now more often constrained by socio-economic and political limitations than technological ones, by priorities rather than probabilities, and by ethics more than engineering. As a result, regulators will get involved. Unfortunately, effective regulation is hard to come by, and all too often technologists are forced to comply with regulations that are unclear, impractical, or un-engineerable. How Complex Systems Fail is a reminder of why regulation so often fails: regulators must understand complex systems before writing rules to manage them, acknowledging that systemic safety isn’t static and that catastrophe is caused by a shifting combination of failures.
Katie Haun,¹² partner at a16z, has outlined the regulatory rulemaking process, but policy is too often drafted without enough input from technologists, despite their willingness to engage in the process. Entrepreneurs working on emerging technologies should engage with regulators and policymakers early and often. All technologies (even decentralized ones) face the risk of being regulated into obsolescence.
Ultimately, these technologies cease to be noteworthy as they finally achieve ubiquity. So-called “deeptech” investors strive to be the first in a variety of new categories, jumping to shiny new categories every few years. However, the full value is more likely to be derived from exploiting the development of a new category over a decade or more of maturation.²
“It’s the fundamental irony of successful technologies that they eventually are taken for granted and disappear in the background.” Matt Turck
There is a fine line between emerging and receding, between improbable and impossible, and between ubiquity and obsolescence. In my opinion, the most interesting and meaningful work occurs on this line. If you’re walking this line, I’d love to hear from you.