Why is modern career advice so terrible?
If you want to mess with an optimistic futurist, ask them a simple question: “what do I need to do in order to be successful in the future?” It’s a question of absolutely critical importance. So why are the answers usually so bad?
The standard response tends to be “tech, something, something, mumble”. Futurist Kevin Kelly argues, ‘you’ll be paid in the future based on how well you work with robots’. OK. Does he mean I should know how to use email, or that I should surrender my body to be used as their power source like in The Matrix? I think people mostly just mean you should learn how to code. Wired even claims coding is the ‘next blue-collar job’. Which maybe makes sense until you realize the same job can be done in Shenzhen or Mumbai, probably better, by someone with roughly a hundredth of your living costs. The first wave of globalization decimated developed-world blue-collar jobs by shifting manual labour to cheaper countries. But as trade flows move from physical to digital, and automation increases, higher-paid white-collar information jobs are going to deflate even faster. Not only is technological labour now globally fungible, but a decent portion of it may eventually be done by the machines themselves anyway. Even coding itself is slowly being replaced by machine learning.
AI professor Pedro Domingos points out that ‘we used to think 30 years ago that the easiest jobs to automate were going to be the blue collar ones. We thought that the white collar jobs, which require education, were going to be hard to automate. That has it exactly backwards. The hardest jobs to automate are things like construction work because they require dexterity, moving around, not stumbling, seeing things that we take completely for granted. These are tasks that took hundreds of millions of years to evolve. On the other hand, doctors, lawyers, analysts, engineers, scientists, these are hard because we didn’t evolve to do them. We have to go to college to learn how to do them. But that also means that the computers can learn that much more than we can because we are evolving them for that purpose. So there are already a lot of white collar jobs that have shrunk or disappeared and more of them will.’
Put simply, the internet is driving the cost of information down to zero. That’s a problem when an estimated 60% of the US workforce’s main job function is the aggregation and application of information. The broader question here is “what will I be paid to know?” Most information-based jobs are about acquiring and deploying specialized knowledge, and you used to be able to make a living purely by knowing things other people didn’t. But Tyler Cowen and Alex Tabarrok have described The End of Asymmetric Information: a buyer walking onto the lot can now know as much about a car as the person trying to sell it to them. Is the value of that car salesperson higher or lower as a result? Tech critic Jaron Lanier agrees: ‘the rise of networking has coincided with the loss of the middle class, instead of an expansion in general wealth, which is what should happen. But if you say we’re creating the information economy, except that we’re making information free, then what we’re saying is we’re destroying the economy.’ Lanier argues that we’ve created only half of the new economic system, while moving from a bell curve to a “winner takes all” distribution. ‘You can’t have an economy in which there’s just a tiny, hyper-fortunate formal part, and then a vast, not necessarily impoverished, but insecure, informal part… It’s not economically stable… there aren’t going to be any customers to buy your stuff, eventually’. Even Lanier’s proposed solution is a bit underwhelming: to share the value of a user’s data with them. The uneven economics of networks make this virtually pointless. For example, let’s say you’re worth roughly $15 a year to Facebook. With user numbers in the billions that’s amazing for Facebook, but hardly enough to make any difference to you as an individual.
If the question of the future of work is so pervasive, why are the most commonly-proposed answers so vague and insubstantial? Maybe the world is too complex for simple solutions. As H.L. Mencken said, ‘for every complex problem there is an answer that is clear, simple, and wrong.’ But there’s still plenty of demand for comfortingly simple narratives. As Nassim Taleb tweeted a couple of weeks ago: ‘Society is increasingly run by those who are better at explaining than understanding.’ Dan Drezner’s new book on ‘The Ideas Industry’ (quick summary here) points out that the new billionaire benefactors are concentrated in big tech, and thus the new breed of thought leaders produce narratives that primarily cater to them. ‘The intellectuals that will thrive in this milieu are those that stress disruption, self-empowerment, and entrepreneurial ability: the values that are a core part of the identity of philanthrocapitalists… Thought leaders will push ideas or policies that promote disruptive innovation. These concepts appeal to those who have managed to stay on top in the global economy.’ This has spawned an epidemic of nebulous advice (mostly from rich white males) with extremely limited real-world applicability. But that doesn’t matter as much if the primary goal is to justify the billions that have already been bestowed upon their benefactors. Hence the ‘Big Idea’ TED crowd repeatedly spouts platitudinous narratives of self-determination: “We could all have founded Amazon; you’re poor because you didn’t.” This implicitly excuses the spectacular narrowness of the gains of the digital economy, while placing the burden of responsibility back on the shoulders of those it is failing.
Take Thomas Friedman’s “advice” in The World Is Flat: ‘to thrive in the global economy, one needs to be ‘special,’ a unique brand like Michael Jordan.’ So… the next mass-market job is ‘being unique’? Cool. Helpful. This brand of generic, startlingly vague advice is ubiquitous, and usually reads something like “be a life-long learner”, “make yourself a specialist-generalist” or “build your skill-stack”. It’s so vague because nobody seems to have any bloody idea which specific knowledge or skills are going to be valuable in 5–10 years. In fact, if Lanier is right, ‘knowing things’ in general is going to become inexorably less and less valuable.
Other bog-standard answers will include something about startups and entrepreneurship. Ignoring the now-inevitable vagueness for a second, the outside data on startups is that somewhere between 60–90% fail. Although entrepreneurship is definitely a necessary ingredient of a well-functioning economy, a career path with that kind of insecurity and failure rate can’t be considered a mainstream profession for the majority. It simply confirms Lanier’s assertion that the digital economy is becoming structured more like a casino. In fact, some of the more practical advice I’ve read came from the futurist book Bold by Diamandis and Kotler. Essentially the authors argue that good start-up ideas are a commodity, so robust execution is now the scarce asset. As coding, web design and technological execution become increasingly globally fungible skills, outsource them to people who can live more cheaply than you can. Why buy only one lottery ticket when you can buy 5?
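The lottery-ticket logic is just the probability of at least one success across independent attempts. A minimal sketch, using illustrative success rates taken from the 60–90% failure range cited above (the function name and specific probabilities are mine, not Diamandis and Kotler’s):

```python
# Probability that at least one of n independent ventures succeeds,
# given each has the same per-venture success probability p.
# Illustrative p values only: a 60-90% failure rate implies p of ~0.1-0.4.
def p_at_least_one(p: float, n: int) -> float:
    """Chance of at least one hit across n independent attempts."""
    return 1 - (1 - p) ** n

for p in (0.1, 0.25, 0.4):
    print(f"p={p}: one ticket -> {p:.0%}, five tickets -> {p_at_least_one(p, 5):.0%}")
```

Even at a grim 10% per-venture success rate, five cheap attempts push the odds of at least one hit above 40%, which is the whole case for commoditizing execution.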
Sam Altman, the President of startup incubator Y Combinator, described the equation for a startup’s chance of success as ‘something like Idea x Product x Execution x Team x Luck, where Luck is a random number between zero and ten thousand.’ Yet the absolutely massive influence of luck doesn’t stop founders from volunteering their hindsight gospel on the reasons behind their own success. As Michael Lewis put it in his great Princeton address: ‘people really don’t like to hear success explained away as luck, especially successful people. As they age, and succeed, people feel their success was somehow inevitable.’ This means the advice they give will likely always prove much less replicable than it appears. As Anna Wiener writes in the superb Uncanny Valley: ‘Venture capitalists have spearheaded massive innovation in the past few decades, not least of which is their incubation of this generation’s very worst prose style. The internet is choked with blindly ambitious and professionally inexperienced men giving each other anecdote-based instruction and bullet-point advice.’
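You can take Altman’s half-joking formula literally and simulate it. A toy Monte Carlo sketch (the 1–10 skill scores and founder labels are invented for illustration):

```python
import random

# Altman's formula, read literally: outcome = Idea x Product x Execution
# x Team x Luck, with Luck drawn uniformly from [0, 10,000].
def startup_outcome(idea, product, execution, team, rng):
    luck = rng.uniform(0, 10_000)
    return idea * product * execution * team * luck

rng = random.Random(42)
# Hypothetical founders: one scores twice as high on every skill term.
strong = [startup_outcome(8, 8, 8, 8, rng) for _ in range(100_000)]
weak = [startup_outcome(4, 4, 4, 4, rng) for _ in range(100_000)]

# On average skill wins: the strong founder's mean outcome is ~16x the
# weak one's (8^4 / 4^4). But in a single head-to-head draw the weaker
# founder wins whenever their luck draw is more than 16x larger, which
# for two independent uniform draws happens about 1 in 32 times.
upsets = sum(w > s for w, s in zip(weak, strong)) / 100_000
```

With millions of people starting companies, a 1-in-32 upset rate mints plenty of successful founders whose decisive edge was the dice, not the skill terms; their hindsight gospel then gets marketed as method.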
If the global playing field is becoming exponentially more competitive, it’s predictable that many people advise you to compete locally instead. Y Combinator’s startup playbook argues that ‘It’s much better to first make a product a small number of users love than a product that a large number of users like.’ This is echoed in Kevin Kelly’s essay 1,000 True Fans: ‘To make a living as a craftsperson, photographer, musician, designer, author, animator, app maker, entrepreneur, or inventor you need only thousands of true fans.’ Obviously this advice is being dispensed by men who made their millions in technology rather than busking in subway stations. The bigger problem is that it may no longer be true. Network monoliths like Spotify and Netflix are deflating the profitability of art for all but the very top producers. And I’m not sure anyone could sensibly argue that the talent distribution in society makes this a mass-market solution. You also quickly stumble upon the discovery problem in a world of overabundant information. In his book Hit Makers, about what defines online popularity, Derek Thompson largely refutes the notion of virality. ‘Popularity on the Internet is “driven by the size of the largest broadcast.” Digital blockbusters are not about a million one-to-one moments as much as they are about a few one-to-one-million moments.’ For emerging artists, success overwhelmingly requires someone with a large preexisting platform distributing your work. But the odds of getting on their radar are tiny, as are the odds of going viral on your own. You’re back in Lanier’s casino.
If global work is insanely competitive and local work isn’t that lucrative, a neat solution seems to be using the scale advantage of global platforms in a local setting; enter Uber, Airbnb etc. Thus the gig economy is repeatedly heralded as the new paradigm of modern work. But for many, a career trajectory in the gig economy already resembles a game of snakes and ladders with virtually no ladders. If you’re a senior management consultant whose job gets outsourced to Mumbai, you’re not starting your next job for TaskRabbit on an even vaguely equivalent salary. Why would anyone ever get paid more money to start a completely different career from scratch? Thus it seems many people are going to keep getting reset back downwards over and over again. Even if you find flexible career progression and stability thanks to exactly the right combination of marketable skills at exactly the right time, is that existence necessarily better than the kind of progression and stability our parents could expect within large corporations? As Brett Scott writes in Reversing the Lies of the Sharing Economy: ‘Of course, if you want to put a positive spin on this kind of work, you can call it flexible, decentralized micro-entrepreneurship. But pan out, and it looks more like feudalism, with thousands of small subsistence farmers paying tribute to a baron that grants them access to land they don’t own.’ [As an aside, it’s a polarizing topic better left to smarter people than me, but there are certainly many who believe that the blockchain is the best hope for decentralizing the fiefdoms of the information age. You can read more on that thesis in Aeon here.]
As wages at TaskRabbit, Postmates and Uber have largely proven, gig-economy jobs don’t tend to be very lucrative; the bulk of the gains go to the owner of the platform. Meanwhile traditional, local, ‘quintessentially human’ jobs are indeed seeing the most growth in the US. The NYT recently cited BLS forecasts that 9 of the 12 fastest-growing job fields in the US are basically different kinds of nurse. But these jobs also don’t typically command robust salaries, progression or benefits. And thus millennials can increasingly look forward to nursing their ailing boomer parents until they inherit the houses that they themselves can no longer afford to buy.
The standard techno-utopian response is encapsulated by the messiah of the geek rapture himself, Ray Kurzweil. ‘If I were a prescient futurist giving a speech in 1900, I would say that a third of you now work on farms and another third in factories, but in a hundred years — that is, by the year 2000 — that will go down to 3% and 3%. That is indeed what happened; today it is 2% and 2%. Everyone in 1900 would exclaim, “My god, we’ll all be out of work!” If I then said not to worry, you’ll get jobs as website designers, database editors, or chip engineers, no one would know what I was talking about. In the U.S. today, 65% of workers are knowledge workers of some kind and almost none of these jobs existed fifty years ago.’ So the jobs just haven’t been invented yet. OK. Is it rude to ask when we can expect them? If technological progress is accelerating ever faster, we will also need to be inventing the replacement jobs at a faster-and-faster rate. Where is the evidence that this is happening? Moreover, these industrial revolution comparisons are hardly comforting. Tyler Cowen notes that ‘it took 60 to 70 years of transition, after the onset of industrialization, for English workers to see sustained real wage gains at all.’ That timeline is obviously problematic; we’re already seeing economic insecurity spill over into populism. After the US election, 538.com noted ‘Economic anxiety is about the future, not just the present. Trump beat Clinton in counties where more jobs are at risk because of technology or globalization. Specifically, counties with the most “routine” jobs — those in manufacturing, sales, clerical work and related occupations that are easier to automate or send offshore — were far more likely to vote for Trump.’
Perhaps the most troubling question of all is: “Why do the new jobs have to be better than the old jobs?” There’s no guarantee these amazing-but-as-yet-nonexistent replacement jobs will ever appear. The postwar era may prove to have been a unique time in history when most people were guaranteed reasonably stable and satisfying jobs for the majority of their lives. The economic models of the 19th and 20th centuries required large numbers of healthy soldiers and factory workers; the 21st century’s largely doesn’t. Put very simply, globalization is massively expanding the supply of global labour at the same time as automation is reducing the demand.
This supply shock is the result of the unambiguously positive rise of the emerging world out of relative poverty. The scale of the emergence of China and India is hard to overstate. As McKinsey puts it, ‘the two leading emerging economies are experiencing roughly ten times the economic acceleration of the Industrial Revolution, on 100 times the scale — resulting in an economic force that is over 1,000 times as big’. Adding this many new workers to the global economy was always going to be spectacularly deflationary. As Paul Mason argues in the book Postcapitalism: ‘This is the real austerity project: to drive down wages and living standards in the West for decades, until they meet those of the middle class in China and India on the way up.’ The graphical expression of this phenomenon is the now-infamous Globalization Elephant Chart, which shows huge gains at the lower end of the global income distribution at the expense of developed-world middle classes. This process is still rapidly unfolding. McKinsey estimates nearly 50% of global GDP growth between 2010–2025 will come from 440 small and medium-sized emerging-market cities. In his new book Scale, physicist Geoffrey West talks about the equivalent of a New York metropolitan area of 15 million people being added to the planet every couple of months. China alone is aiming to build up to 300 new cities, each in excess of a million people: ‘at the present rate it will be moving the equivalent of the entire US population (more than 300m people) to cities in the next 20–25 years.’ If you’re adding another USA to the global workforce, will middle- and lower-class wages be rising in that scenario?
Our parents’ generation had the luxury of emerging from college with a relatively homogenous choice of socially-acceptable careers inside big corporations. But, while start-ups have an incredibly high failure rate, technology and globalization are also making life in traditional corporations much more unstable. The average lifespan of an S&P 500 company in 1935 was 90 years. Today it is 18 years. The first victims of that instability are frequently the employees. Nor are the new “superstar” tech companies mass-employers. As The Economist recently noted: “In 1990 the top three carmakers in Detroit between them had nominal revenues of $250 billion, a market capitalization of $36 billion and 1.2m employees. In 2014 the top three companies in Silicon Valley had revenues of $247 billion and a market capitalization of over $1 trillion but just 137,000 employees.” Even the global capital of techno-utopianism, San Francisco, has less than 10% of Bangkok’s population but six times as many homeless people. The biggest employer in the US is currently retail, but not for much longer. Appropriately, Amazon is making 47,000 square feet of its new HQ into a homeless shelter.
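The Economist’s comparison reduces to a single revenue-per-employee figure, computed here directly from the numbers quoted above:

```python
# Revenue per employee, from The Economist's figures quoted above.
detroit_1990 = 250e9 / 1.2e6    # top three Detroit carmakers, 1990
valley_2014 = 247e9 / 137_000   # top three Silicon Valley companies, 2014

print(f"Detroit 1990: ~${detroit_1990:,.0f} per employee")
print(f"Valley 2014:  ~${valley_2014:,.0f} per employee")
```

Roughly $208,000 per head versus roughly $1.8 million: nearly nine times the revenue generated per employee, which is precisely why superstar firms can dominate an economy without employing much of it.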
This is typically the point in the conversation where Universal Basic Income comes up. UBI certainly succeeds in solving for vagueness: “we actually don’t know what skills you need, so here, have $15,000 a year just to survive while we work it out”. But it also tacitly accepts the absence of any practical mass-market solution. Assuming people work to find a sense of meaning as well as mere subsistence, this is only a stopgap. Moreover, there’s evidence to suggest that a more likely capitalist response is what David Graeber colourfully calls more Bullsh*t Jobs. ‘Rather than allowing a massive reduction of working hours to free the world’s population to pursue their own projects, pleasures, visions, and ideas, we have seen the ballooning not even so much of the “service” sector as of the administrative sector, up to and including the creation of whole new industries like financial services or telemarketing, or the unprecedented expansion of sectors like corporate law, academic and health administration, human resources, and public relations.’ The expansion of these sectors has previously been a signal of impending trouble; Peter Turchin calls it ‘elite overproduction’. He argues that periods when society was producing this many overeducated, frustrated workers have typically ended in conflict. War helps thin the ranks of disaffected, revolutionary males…
But instead of antipathy, many are already choosing apathy. Increasing numbers of working-age American men are using drugs or videogames as a more palatable alternative to the futility and confusion of the job market. For every unemployed American man between 25 and 55 years of age, there are now another three who are neither working nor looking for work. Erik Hurst has found that US high-school drop-outs are progressively opting out of work, because technology is increasing the value of their free time relative to work. We can fill our increasingly idle hours with Candy Crush, Overwatch and the Golden Age of Television. Moreover, there’s further evidence that this is about willingness to work, not ability to work: immigrants are increasingly replacing native drop-outs in the labour force. You could easily spin that into a negative value judgement about relative work ethics. Or you could be more charitable and see it as yet another symptom of globalization: local entry-level wages are still far too low to be attractive unless you’re repatriating part of your paycheck to family in a much poorer country. In contrast to entry-level jobs, videogames have been meticulously optimized for a balance between challenge and progression that the modern workplace is obviously failing to provide. As Ryan Avent recently argued in The Economist: ‘A society that dislikes the idea of young men gaming their days away should perhaps invest in more dynamic difficulty adjustment in real life. And a society which regards such adjustments as fundamentally unfair should be more tolerant of those who choose to spend their time in an alternate reality, enjoying the distractions and the succour it provides to those who feel that the outside world is more rigged than the game.’ Maybe the world of work has something to learn from the videogames industry.
Convention dictates that I find a way to end on an upbeat note. The best I can do is note that a problem this massive will present a wildly profitable opportunity to anyone who can solve it. As Twitter founder Ev Williams memorably said: ‘Here’s the formula if you want to build a billion-dollar internet company… Take a human desire, preferably one that has been around for a really long time… Identify that desire and use modern technology to take out steps.’ The first iteration of the internet has excelled at taking out steps to seamlessly match buyers and sellers. Recent advances in machine learning have allowed for even more sophisticated matching tools. For example, “doppelganger searches” can microscopically tailor book recommendations to your preferences based on both your prior behavior and that of the people most similar to you. If that same technology can be used to match our desire for work to meaningful jobs, then maybe the future will start to look a little more evenly distributed.
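At its core, a doppelganger search is just nearest-neighbour matching on taste vectors. A minimal sketch of the idea, with entirely made-up users, book titles and ratings (real systems use far richer signals and models):

```python
from math import sqrt

# Hypothetical user -> {item: rating} data, invented for illustration.
ratings = {
    "alice": {"sapiens": 5, "snow_crash": 4, "dune": 5},
    "bob":   {"sapiens": 5, "snow_crash": 5, "neuromancer": 4},
    "carol": {"eat_pray_love": 5, "wild": 4},
}

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse rating vectors."""
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user: str) -> list:
    """Suggest items rated by the user's most similar peer but unseen by them."""
    others = [u for u in ratings if u != user]
    doppelganger = max(others, key=lambda u: cosine(ratings[user], ratings[u]))
    return sorted(set(ratings[doppelganger]) - set(ratings[user]))

print(recommend("alice"))  # bob is alice's nearest neighbour in taste
```

Swap books for job listings and ratings for career histories, and the same mechanics could match people to work they are likely to find meaningful; whether anyone builds that matching engine well is the open question.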