We march backwards into the future.

We look at the present through a rear-view mirror. | Marshall McLuhan

Stowe Boyd
Work Futures
14 min read · Aug 4, 2018


Beacon NY — 2018–08–04 — It’s been a week where I’ve been rereading material from years ago: research a decade old, writing of mine from five years ago or older, and David Ronfeldt’s Tribes, Institutions, Markets, Networks from 1996 (see the quote of the day at the bottom).

I think limiting ourselves to only the breaking news and this week’s posts on Medium cramps our thinking, makes us parochial, and throws us out of balance. By reaching back to older writing we can enlarge our perspectives, just as we can by interacting with a more diverse group of people. And just as with contemporaneous diversity, it can be hard work to wrap our minds around Ronfeldt’s thinking about social evolution from 1996, or even my pitch for ‘connectives’ from 2013.

But I think it’s worth the effort.

If you subscribe to Work Futures Daily today, you’ll get a free month!

If you want to share a story with me at Work Futures Daily, please email. Thanks!

On Evidence-Based Management

from More Science, Please

In a stark exposé of the anti-science attitudes in today’s management, Eryn Brown investigates the sad state of affairs in ‘evidence-based management’, where findings from scientific research could be applied to critical management initiatives and decisions, but generally are not:

Getting companies to pay attention to science and engage in so-called “evidence-based management” is a challenge that has been driving industrial-organizational psychologists nuts for the better part of 20 years. Whether it’s hiring staff or determining salaries or investing in technology, managers making high-stakes decisions have a vast scholarly literature at their disposal: studies conducted over more than a century, in labs and in the field, vetted through peer review, that show whether pay incentives drive internal motivation (often not); whether diversity training works (only under the right conditions); whether companies should get rid of performance ratings (yes, Colquitt would say); how to train effective teams; and more.

Executives love hard numbers, and they desperately want to know how to keep their best employees, how to make more widgets, how to be more creative. So you’d think they’d lap up the research. “It’s hard to find students in graduate school who don’t hear the idea of evidence-based management and say, ‘Yes! Of course!’” says Neil Walshe, an organizational psychologist who teaches the approach at the University of San Francisco School of Management.

Except most companies don’t. Occasionally, a firm will make a splash — the poster child these days is Google, which gets kudos for its data-centric, research-based “People Operations” (a.k.a. human resources) department. But most executives would rather just copy another company’s proven ideas than do the hard work of assessing evidence relevant to their own circumstances. Managers falter, victims of inertia (“but we’ve always done things this way!”), confusion (“industrial-organizational what?”), even downright hostility to expertise.

The first advocate for evidence-based management was Denise Rousseau of Carnegie Mellon University, who proposed the idea in 2005. She had thought companies were paying attention to industrial psychology research findings:

Slowly it began to dawn on her that that wasn’t the case. It was an epiphany that “blew my mind,” she says today.

Even after a decade of pushing, advocates admit that evidence-based management hasn’t made much of a dent:

“We’d love to see a commitment from a leader that says, ‘I expect our decisions about people and work and the organization to have evidence behind them,’” says John Boudreau, research director at the Center for Effective Organizations, housed in USC’s Marshall School of Business. “I don’t know that I have seen examples of that. Especially at the high level, the CEO level.”

“I’m a little baffled that it’s not more widespread,” says Jennifer Kurkoski, director of Google’s People Innovation Lab (PiLab), the internal research and development team behind the company’s People Operations department. “Companies spend billions on R&D, almost none of which is devoted to making people work better. It’s not something we understand yet. And we should.”

Brown enumerates the reasons why managers have been slow to get on the bandwagon (but they are all excuses):

  1. It’s a lot of work.
  2. People fear change and risk.
  3. Managers put more faith in intuition than they put in science.
  4. Parsing the scientific literature can be hard.

So, they are basically lazy and stupid, and unwilling to change.

Meanwhile, evidence mounts that dumb management fads are harming companies’ productivity and employees’ well-being and engagement:

Studies that find open offices don’t, in fact, encourage conversation and collaboration. Studies that find employees resent the corporate fad of hot-desking — jumping from desk to desk instead of having a dedicated workspace, based on a notion that this will spark synergies and blue-sky thinking.

In one recent paper calling on industrial-organizational psychologists to put “an end to bad talent management,” [Alan] Colquitt [the author of Next Generation Performance Management: The Triumph of Science Over Myth and Superstition] … called out companies who fall for consultants promising to help them understand “the brain science of millennials” and other trendy topics, with little or no evidence for any of it.

Sigh.

On Connectives

from More Science, Please

Yesterday, I included a mention of a piece by Kate Dalby, Tour de Workforce — here comes the collective economy, but I omitted the link. So I am remedying that goof.

I also had an email interchange with her, and because I suggested that she use the term ‘connective economy’ rather than ‘collective economy’, I shared a link to something I wrote back in 2013, Community is Plural:

As companies transition away from slow-and-tight organizations, based on collective long-term strategy and identity, the unitary community within a business shakes out into a multiplicity of overlapping communities. Some will still feel and act like the older, slow-and-tight organization, but many will become fast-and-loose, adopting the cooperative logic of ‘connectives’, shaped by the self-organizing dynamics of social networks rather than the imposed order of business process and ordained strategy.

These various communities within a single business pose a new challenge for leadership. In the past, creating a corporate culture meant indoctrinating people into a single collective, with explicit shared goals: especially a long-term and exclusive commitment to the company’s vision of the future and the company’s place in it. Today, in a time of radical change and ‘innovation vertigo’, wise leaders do not promulgate a single, official future, and in fact will encourage a variety of diverse ideas of what the future may bring. If only for that reason, we are confronted with the need to reject a single monolithic culture in any reasonably large business, and even in small ones that want to grow to become large.

The emergent properties of social networks — like knowledge creation, innovation, and sense making — may be the greatest leverage a company has, so allowing more communities within a single company will lead to higher levels of innovation and adaptation. Rather than a monolithic organization trained to operate as a single unit based on a single fixed set of rules, we are now confronted with an economic context where it’s more rational to have a spectrum of communities operating independently, inventing and rewriting their own rulebooks along the way.

And the self-awareness that this is going on in the business is the psychographic that these communities will share, so that this apparent disorder is understood as a source of strength, resiliency, and competitive advantage.

Five years have passed, and this insight is increasingly relevant.

On Robot Hands and Robot Minds

from Theory of Mind, Theory of Hand

A look at the state of the art in robot hands by a team of writers at The NY Times reveals that new advances are allowing robots to learn how to manipulate things on their own:

Researchers at the University of Washington are training robotic hands that have all the same digits and joints that our hands do.

That is far more difficult than training a gripper or suction cup. An anthropomorphic hand moves in so many different ways.

So, the Washington researchers train their hand in simulation — a digital recreation of the real world. That streamlines the training process.

At OpenAI, researchers are training their Dactyl hand in much the same way. The system can learn to spin the alphabet block through what would have been 100 years of trial and error. The digital simulation, running across thousands of computer chips, crunches all that learning down to two days.

It learns these tasks by repeated trial and error. Once it learns what works in the simulation, it can apply this knowledge to the real world.

Many researchers have questioned whether this kind of simulated training will transfer to the physical realm. But like researchers at Berkeley and other labs, the OpenAI team has shown that it can.

They introduce a certain amount of randomness to the simulated training. They change the friction between the hand and the block. They even change the simulated gravity. After learning to deal with this randomness in a simulated world, the hand can deal with the uncertainties of the real one.
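
To make the sim-to-real idea concrete, here is a minimal Python sketch of the domain-randomization loop described above. The toy simulator, its parameter ranges, and the dummy policy are all invented for illustration; this is not OpenAI’s actual training code.

    import random

    class ToySim:
        """A trivial stand-in for a physics simulator."""
        def __init__(self):
            self.friction = 1.0  # hand/block friction coefficient
            self.gravity = 9.8   # m/s^2

        def randomize(self):
            # Perturb the physics each episode so a policy trained here
            # cannot overfit to any single version of the simulated world.
            self.friction = random.uniform(0.5, 1.5)
            self.gravity = random.uniform(8.0, 11.6)

        def rollout(self, policy, steps=100):
            # The policy never sees the hidden friction; it is rewarded
            # for actions that happen to compensate for it.
            return -sum(abs(policy() - self.friction) for _ in range(steps))

    sim = ToySim()
    for episode in range(1000):
        sim.randomize()                   # new physics every episode
        score = sim.rollout(lambda: 1.0)  # a fixed dummy policy
        # A learning algorithm would update the policy here, using score.

Training against thousands of randomized variants of the world is what lets the learned behavior survive the gap between simulation and reality.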

Today, all Dactyl can do is spin a block. But researchers are exploring how these same techniques can be applied to more complex tasks. Think manufacturing. And flying drones. And maybe even driverless cars.

Recall that the famous automation analysis by Oxford’s Carl Benedikt Frey and Michael Osborne, The Future of Employment: How Susceptible Are Jobs to Computerisation?, devoted a great deal of discussion to robots’ capacity to manipulate objects, noting:

Expanding technological capabilities and declining costs will make entirely new uses for robots possible. Robots will likely continue to take on an increasing set of manual tasks in manufacturing, packing, construction, maintenance, and agriculture. In addition, robots are already performing many simple service tasks such as vacuuming, mopping, lawn mowing, and gutter cleaning — the market for personal and household service robots is growing by about 20 percent annually (MGI, 2013). Meanwhile, commercial service robots are now able to perform more complex tasks in food preparation, health care, commercial cleaning, and elderly care (Robotics-VO, 2013). As robot costs decline and technological capabilities expand, robots can thus be expected to gradually substitute for labour in a wide range of low-wage service occupations, where most US job growth has occurred over the past decades (Autor and Dorn, 2013). This means that many low-wage manual jobs that have been previously protected from computerisation could diminish over time.

Frey and Osborne concluded that 47% of US employment was at risk of being automated in the next 20 years. Now that robots can learn how to manipulate objects on their own, how many more jobs will be automated, and has the time horizon shrunk, as well?

‘Theory of mind’ is the term used to characterize people’s ability to imagine the mental states of others, to put themselves in other people’s heads, and to create a representation of what others are thinking. We do this as a matter of course. Now researchers have demonstrated a means to endow AI with a theory of mind, so that one AI could in principle infer what another is ‘thinking’:

The new project began as an attempt to get humans to understand computers. Many algorithms used by AI aren’t fully written by programmers, but instead rely on the machine “learning” as it sequentially tackles problems. The resulting computer-generated solutions are often black boxes, with algorithms too complex for human insight to penetrate. So Neil Rabinowitz, a research scientist at DeepMind in London, and colleagues created a theory of mind AI called “ToMnet” and had it observe other AIs to see what it could learn about how they work.

ToMnet comprises three neural networks, each made of small computing elements and connections that learn from experience, loosely resembling the human brain. The first network learns the tendencies of other AIs based on their past actions. The second forms an understanding of their current “beliefs.” And the third takes the output from the other two networks and, depending on the situation, predicts the AI’s next moves.
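
As a structural illustration of that three-network design, here is a sketch in Python using PyTorch. The layer types, sizes, and names are my own guesses for the sake of illustration, not DeepMind’s published ToMnet architecture:

    import torch
    import torch.nn as nn

    class ToMNetSketch(nn.Module):
        def __init__(self, obs_dim=16, embed_dim=8, n_actions=5):
            super().__init__()
            # 1. Character net: summarizes an agent's past episodes into
            #    an embedding of its general tendencies.
            self.character_net = nn.GRU(obs_dim, embed_dim, batch_first=True)
            # 2. Mental-state net: summarizes the current episode so far
            #    into an embedding of the agent's current "beliefs".
            self.mental_net = nn.GRU(obs_dim, embed_dim, batch_first=True)
            # 3. Prediction net: combines both embeddings with the current
            #    state to predict the observed agent's next move.
            self.prediction_net = nn.Sequential(
                nn.Linear(embed_dim * 2 + obs_dim, 32),
                nn.ReLU(),
                nn.Linear(32, n_actions),
            )

        def forward(self, past_episodes, current_episode, current_state):
            _, character = self.character_net(past_episodes)  # tendencies
            _, mental = self.mental_net(current_episode)      # "beliefs"
            features = torch.cat(
                [character[-1], mental[-1], current_state], dim=-1)
            return self.prediction_net(features)  # logits over next moves

Note what even this simplification preserves: the observer never looks inside the other agent; it predicts behavior purely from that agent’s observable history.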

[…]

[Alison] Gopnik [of the University of California, Berkeley] says this study — and another at the conference that suggested AIs can predict other AIs’ behavior based on what they know about themselves — are examples of neural networks’ “striking” ability to learn skills on their own. But that still doesn’t put them on the same level as human children, she says, who would likely pass this false-belief task with near-perfect accuracy, even if they had never encountered it before.

[…]

Gopnik notes that the kind of social competence computers are developing will improve not only cooperation with humans, but also, perhaps, deception. If a computer understands false beliefs, it may know how to induce them in people. Expect future pokerbots to master the art of bluffing.

Uh, there is already a poker-playing AI, called Claudico, that learned to bluff and did pretty well at a recent Brains vs AI Poker Championship:

One of the most important strategies in poker is the art of bluffing, in which a player makes or raises a bet without having the best hand, in order to fool an opponent into folding. “People often think about bluffing as being a psychological phenomenon,” [Tuomas] Sandholm [of Carnegie Mellon University] said. But beyond psychology, “bluffing still emerges as a strategic phenomenon,” he said.

Sandholm and his colleagues didn’t pre-program Claudico’s poker strategy. They wrote algorithms that automatically compute a strategy by trying to find the Nash equilibrium. This concept from game theory was developed by American mathematician John Nash, who was portrayed in the film “A Beautiful Mind.” In a noncooperative game, players are said to be in Nash equilibrium if they are making the best decision possible, taking into account the decisions of the other players.
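
To get a feel for how a strategy can be “computed” toward a Nash equilibrium, here is a toy Python sketch using regret matching on matching pennies, a two-action zero-sum game. This is only illustrative: Claudico’s actual algorithms (reportedly variants of counterfactual regret minimization) are vastly more elaborate and handle imperfect information.

    import random

    # Row player's payoffs in matching pennies: each player picks heads (0)
    # or tails (1); row wins on a match, column wins on a mismatch.
    PAYOFF = [[1, -1],
              [-1, 1]]

    def strategy_from(regrets):
        positive = [max(r, 0.0) for r in regrets]
        total = sum(positive)
        # With no positive regret yet, fall back to uniform play.
        return [p / total for p in positive] if total > 0 else [0.5, 0.5]

    def sample(strategy):
        return 0 if random.random() < strategy[0] else 1

    row_regrets, col_regrets = [0.0, 0.0], [0.0, 0.0]
    row_avg = [0.0, 0.0]  # the *average* strategy converges to equilibrium

    for _ in range(100_000):
        row_strat = strategy_from(row_regrets)
        col_strat = strategy_from(col_regrets)
        row_avg = [a + s for a, s in zip(row_avg, row_strat)]
        r, c = sample(row_strat), sample(col_strat)
        for alt in range(2):
            # Regret: how much better the alternative would have done.
            row_regrets[alt] += PAYOFF[alt][c] - PAYOFF[r][c]
            col_regrets[alt] += PAYOFF[r][c] - PAYOFF[r][alt]

    total = sum(row_avg)
    print([round(s / total, 3) for s in row_avg])  # approaches [0.5, 0.5]

Each player keeps shifting toward the actions it regrets not having played; in a zero-sum game like this, the long-run average of those adjustments approximates the Nash equilibrium Sandholm describes.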

And taking into account the decisions of the other players could be a good proxy for theory of mind, it seems.

At any rate, we can expect that in the near term AIs will be able to explain what other AIs (or people) are ‘thinking’, and even whether they are bluffing (or lying).

On Bosslessness

from Bunk and Debunk

André Spicer debunks the utopian ideal of bosslessness, arguing that companies like Valve that have theoretically dispensed with hierarchy and the notion of ‘working for’ other people are actually operating on undefined and hidden power structures. His piece seems largely based on the comments of disgruntled former employees:

In 2012 Valve’s new employee handbook was leaked. Fawning articles about this unique and amazing company appeared everywhere from the BBC to Harvard Business Review. Valve’s economist in residence — Yanis Varoufakis, the former Greek finance minister — appeared on a podcast describing the company’s unique system of rewarding employees.

Since then, the glittering aura of Valve’s “no boss” culture has started to fade. In 2013, an ex-employee described how the company had “a pseudo-flat structure”. “There is actually a hidden layer of powerful management structure in the company,” she said, which made it feel “a lot like high school”.

Now, five years later, another ex-employee has taken to Twitter to share his thoughts about a nameless company that closely resembles Valve. Rich Geldreich described how the firm would hire employees, make them grand promises, then fire them once they were no longer useful. He described the firm as being run by “barons” — and advises new employees to cosy up to a baron in order to “rapidly up your purge immunity level before the next firing cycle”.

Geldreich’s description squares with some reviews of Valve on Glassdoor, a site where staff leave anonymous verdicts on their employers (although it has to be said that many employees like Valve’s culture). One describes the no-boss culture as “only a facade”: “To succeed at Valve you need to belong to the group that has more decisional power and, even when you succeed temporarily, be certain that you have an expiration date. No matter how hard you work, no matter how original and productive you are, if your bosses and the people who count don’t like you, you will be fired soon or you will be managed out.”

Geldreich describes a neo-feudal workplace culture of powerful barons who ruthlessly exercise their whims over temporary favourites, then turn on them during the next “head count reduction” exercise.

I believe you can’t move to a new way of work simply by saying ‘there are no bosses’ and expecting some magical transformation to happen. In fact, changing from a conventional top-down, command-and-control organization to something else, based on new principles, requires both a fairly good idea of what that something else is to be and a plan for taking the organization from A to B.

I am in the process of researching the profound changes underway at Haier, the world’s largest appliance maker, which will serve as a strong case study supporting my view. In fact, I recently returned from a trip to Qingdao, where I met with many people across Haier, including Zhang Ruimin, the CEO and Chairman. More about that to follow.

On Reflection

from Who Wins in The Gig Economy?

In The Rewards of CEO Reflection, the authors have a predilection for addressing their observations to CEOs and no one else. I think such an approach is limiting, and it treats CEOs as some sort of alien species; so if you read the piece, imagine generalizing their language in a more inclusive way wherever possible.

Nonetheless, I found a few points worth relating [emphasis mine]:

Reflective thinking is thinking turned in on itself. In reflective thought, a person examines underlying assumptions, core beliefs, and knowledge. Unlike critical thought, which is aimed at solving a problem and achieving a specific outcome, reflective thought enhances the framing of problems, the search for meaning, and pattern recognition. Mary Helen Immordino-Yang, an associate professor of education, psychology, and neuroscience at the University of Southern California, has written about the role of “constructive internal reflection” in “making meaning of new information and for distilling creative, emotionally relevant connections between complex ideas.”

Reflective thinking engages the medial prefrontal cortex, the part of the brain involved in self-referential mental activities. At rest, this region exhibits the highest metabolic activity, and during goal-oriented thinking, lower levels of activity. In other words, reflective thinking and critical thinking exist at opposite ends of a digital switch. When one is “on,” the other is “off.”

So, when we are thinking critically — like when we are evaluating alternative courses of action — we are blocking our capacity to make sense of new information that may be relevant to that very decision. It is therefore best to segregate the two activities: first, dig into the new information and reflect; later, once the relevant connections between ideas have been made, switch to critical thinking about the alternatives.

Don’t try to do both at once.

I am not slavishly opposed to multitasking, but this is an example of when it absolutely should not be attempted.

Quote of the Day

While institutions (large ones in particular) are traditionally built around hierarchies and prefer to act alone, the new multiorganizational networks consist of (often small) organizations or parts of institutions that link together to act jointly. Building and sustaining such networks requires dense, reliable information flows. As mentioned earlier, today’s information technology revolution enables this by making it possible for dispersed actors to consult, coordinate, and act jointly across greater distances and on the basis of more and better information than ever before.

The rise of the network form is at an early stage, still gaining impetus. It may be decades before this trend reaches maturity. But it is already affecting all realms of society.

| David Ronfeldt, Tribes, Institutions, Markets, Networks: A Framework About Societal Evolution (1996)

Note that this was written when the web was in its infancy: the examples of technology Ronfeldt mentions are email and fax!

Originally published at workfutures.substack.com.
