Building Human-Level AI Will Require BILLIONS of People
The Great AI hunger appears poised to quickly replace, and then exceed, the income flows it has been eliminating. If we follow the money, we can confidently expect millions, then billions, of machine-learning support roles to emerge in the very near term, greatly limiting, if not reversing, widespread technological unemployment.
Human-directed machine learning has emerged as the dominant process for creating Weak AI: language translation, computer vision, search, drug discovery, and logistics management. Increasingly, it appears that Strong AI (aka AGI, or "human-level" AI) will be achieved by bootstrapping machine learning at scale, a process that will require billions of humans in the loop.
How does human-in-the-loop machine learning work? Consider training a neural net to do something useful, say, confidently determining whether a photo was taken indoors or outdoors. The process requires feeding it input content (in this case, thousands of different photographs), letting it generate its own model of the photographs, then correcting, regenerating, and improving that model until the program reaches high enough confidence to perform the sorting automatically. The resulting model can then be applied to other content, requiring less and less human correction over time. Work has thus been done and added to the broader body of machine-learning knowledge.
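The cycle above — predict, route low-confidence cases to a human, fold the corrections back in, retrain — can be sketched in miniature. This is a toy illustration, not any production system: each "photo" is reduced to a single made-up brightness score, the human annotator is simulated by ground truth, and the "retraining" is just moving a threshold.

```python
import random

random.seed(0)

# Toy stand-in for the indoor/outdoor photo task described above:
# each "photo" is a single brightness value in [0, 1], and we pretend
# outdoor photos are simply the brighter ones (a hypothetical proxy).
photos = [random.random() for _ in range(200)]
truth = ["outdoor" if b > 0.5 else "indoor" for b in photos]

threshold = 0.3   # the model's initial (deliberately bad) decision boundary
MARGIN = 0.3      # predictions this close to the boundary are low-confidence

labels = {}       # index -> label supplied by a human annotator
human_corrections = 0

for cycle in range(5):
    # 1. The model sorts photos; low-confidence cases go to a human.
    for i, b in enumerate(photos):
        if i not in labels and abs(b - threshold) < MARGIN:
            labels[i] = truth[i]      # human supplies the correct label
            human_corrections += 1
    # 2. "Retrain": move the boundary midway between the brightest
    #    human-labeled indoor photo and the dimmest outdoor one.
    indoor = [photos[i] for i, lab in labels.items() if lab == "indoor"]
    outdoor = [photos[i] for i, lab in labels.items() if lab == "outdoor"]
    if indoor and outdoor:
        threshold = (max(indoor) + min(outdoor)) / 2

predicted = ["outdoor" if b > threshold else "indoor" for b in photos]
accuracy = sum(p == t for p, t in zip(predicted, truth)) / len(truth)
print(f"boundary={threshold:.3f}  corrections={human_corrections}  accuracy={accuracy:.0%}")
```

After a few cycles the boundary settles near the true one, and the model stops needing corrections for all but the ambiguous cases — the same dynamic that lets a trained model "require less correction by humans in the future."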
One can then imagine that, over time, as these models are encoded, fewer and fewer humans will be needed to train up useful AI… Wrong!
Rather, as the companies now trailblazing AI (Google, Amazon, Apple, Microsoft, Facebook, Tesla, Uber, etc.) have generated more value through machine learning, they've realized three things: 1) machine learning can be applied to vastly more domains and problems; 2) more complex, creative problems require more human-in-the-loop intervention; and 3) more value can be created by integrating the machine learning they've already done. This last, cumulative effect is exemplified by Google's recent breakthrough in translation, which ultimately required billions, if not trillions, of human-in-the-loop machine-learning cycles (including yours, if you've ever used Google Translate) to break through to a new level of automatic functionality.
To recap, machine learning requires: 1) some well-educated machine-learning professionals; 2) many more less-educated machine-learning guides; 3) access to large swaths of structured content; and 4) access to previously encoded machine learning. And the market-driven desire to apply it to new problem sets is growing very, very quickly.
With technological unemployment growing as a U.S. and global problem, and economic stratification rapidly increasing, many have been wondering how the general human population will earn a living in the transformed economy. A few years ago I argued that users of social networks could soon start getting paid by the parent companies. Now that the basic business model surrounding human-directed machine learning, AI, and digitized content is emerging, that scenario can be advanced.
As the Great AI Race heats up and more companies, countries and other actors come to realize the narrow and broader potential of human-in-the-loop machine learning, the demand for machine learning pros, machine learning guides and content workers will grow proportionately, driving up their share of the pie as they help to build more intelligent superstructures brick by brick.
The growing competition is also driving up the value of content (aka The Rocket-Fuel of Accelerating Change) itself — especially large bodies of structured content. Over time, content producers (including users of search engines and social networks who add value simply through their interactions with those systems) can expect to receive more value for their work or property.
As AI-generated revenues continue to grow, additional billions, even trillions of dollars will flow to super-lucrative machine learning processes and, ultimately, into the digital pockets of the masses essential to building the different aspects of AI.
The amount of value shared with users will depend on the size of the pie. With Kurzweil’s Law of Accelerating Returns in full effect, that pie is likely to grow MASSIVELY. The limits to growth appear to be our finite ability to capture, sort and export information about our lives and the universe around us. In theory, the total pie is limited only by the total information contained in our universe.
From one perspective, this process can be viewed as a market-driven acceleration of science. From another, it’s an evolution of the economy from Industrial Age to Knowledge Age. Looking at the big picture, it sure looks like mass-scale Human/AI symbiosis that ultimately drives up machine, human and planetary intelligence by digitizing the vast universe of information surrounding us.
Seems pretty natural to me. So natural, in fact, that billions of people are already at work within machine-learning systems. They're just not getting paid for it… yet.
Cross-posted from SocialNode.com
Original artwork by satirist Jimbob Peltaire.