Random finds (2016, week 41) — On the edge of inside, the paradox of automation, and Maggie’s Farm
Every Friday, I run through my tweets to select a few observations and insights that have kept me thinking.
On the edge of inside
Sometimes a book, like the excellent The Neo-Generalist by Richard Martin and Kenneth Mikkelsen, puts into words so many of one’s own experiences and thoughts that “it almost reads like a biography.” David Brooks’ At the Edge of Inside on The New York Times’ opinion pages has a similar effect.
“In any organization there are some people who serve at the core. These insiders are in the rooms when the decisions are made,” Brooks writes. “Then there are outsiders. They throw missiles from beyond the walls. They are untouched by internal loyalties and try to take over from without.”
It’s easy to determine the position of, say, Hillary Clinton and Donald Trump on this insider-outsider spectrum. But according to Brooks, there’s also a third position in any organization: the edge of inside. Those are people within the organization who are not subsumed by groupthink. The people who work at the boundaries, bridges and entranceways.
People on the edge of the inside are involved in a process of perpetual transformation, not a belonging system. They are more interested in being searchers than settlers.
“The person on the edge of inside is involved in constant change. The true insiders are so deep inside they often get confused by trivia and locked into the status quo. The outsider is throwing bombs and dreaming of far-off transformational revolution. But the person at the doorway is seeing constant comings and goings.”
“Insiders and outsiders are threatened by those on the other side of the barrier. But a person on the edge of inside neither idolizes the Us nor demonizes the Them. Such a person sees different groups as partners in a reality that is paradoxical, complementary and unfolding.”
There are downsides too, says Brooks. “You never lose yourself in a full commitment. You may be respected and befriended, but you are not loved as completely as the people at the core, the band of brothers. You enjoy neither the purity of the outsider nor that of the true believer.”
“But the person on the edge of inside can see reality clearly. The insiders and the outsiders tend to think in dualistic ways: us versus them; this or that. But […] the beginning of wisdom is to fight the natural tendency to be dualistic; it is to fight the natural ego of the group. The person on the edge of inside is more likely to see the wholeness of any situation. To see how us and them, which seem superficially opposed, are actually in complementary relationship within some larger process.”
My biography in a single sentence: a neo-generalist who is standing on the edge of inside …
On the paradox of automation
According to psychologist James Reason, author of Human Error, “manual control is a highly skilled activity, and skills need to be practised continuously in order to maintain them. Yet an automatic control system that fails only rarely denies operators the opportunity for practising these basic control skills […] when manual takeover is necessary something has usually gone wrong; this means that operators need to be more rather than less skilled in order to cope with these atypical conditions.”
We increasingly let computers fly planes and carry out security checks. Driverless cars are next. But is our reliance on automation dangerously diminishing our skills?
“This problem has a name: the paradox of automation,” Tim Harford writes in a long read in The Guardian, Crash: how computers are setting us up for disaster. “It applies in a wide variety of contexts, from the operators of nuclear power stations to the crew of cruise ships, from the simple fact that we can no longer remember phone numbers because we have them all stored in our mobile phones, to the way we now struggle with mental arithmetic because we are surrounded by electronic calculators. The better the automatic systems, the more out-of-practice human operators will be, and the more extreme the situations they will have to face.”
Gary Klein, a psychologist who specialises in the study of expert and intuitive decision-making, summarises the problem: “When the algorithms are making the decisions, people often stop working to get better. The algorithms can make it hard to diagnose reasons for failures. As people become more dependent on algorithms, their judgment may erode, making them depend even more on the algorithms. That process sets up a vicious cycle. People get passive and less vigilant when algorithms make the decisions.”
“Decision experts such as Klein,” Harford continues, “complain that many software engineers make the problem worse by deliberately designing systems to supplant human expertise by default; if we wish instead to use them to support human expertise, we need to wrestle with the system. GPS devices, for example, could provide all sorts of decision support, allowing a human driver to explore options, view maps and alter a route. But these functions tend to be buried deeper in the app. They take effort, whereas it is very easy to hit ‘Start navigation’ and trust the computer to do the rest.”
A possible solution to the paradox of automation is to reverse the role of computer and human. “Rather than letting the computer fly the plane with the human poised to take over when the computer cannot cope, perhaps it would be better to have the human fly the plane with the computer monitoring the situation, ready to intervene. Computers, after all, are tireless, patient and do not need practice. Why, then, do we ask people to monitor machines and not the other way round?”
A bit more …
“I try my best to be just like I am, but everybody wants you to be just like them. They sing while you slave and I just get bored.” — Bob Dylan in Maggie’s Farm
Shortly after it was announced that Bob Dylan had won the Nobel Prize in Literature, an interviewer put to the Nobel Permanent Secretary Sara Danius the notion that because Dylan isn’t known for novels or traditional poetry, the committee has “widened the horizon” of the literature prize. Danius pushed back:
“Well, it may look that way. But really, we haven’t, in a way. If you look back, far back, 2,500 years or so ago, you discover Homer and Sappho. And they wrote poetic texts that were meant to be listened to, they were meant to be performed, often together with instruments. It’s the same way with Bob Dylan. But we still read Homer and Sappho and we enjoy it. And same thing with Bob Dylan. He can be read and should be read, and is a great poet in the grand English poetic tradition.”
Source: Bob Dylan’s Nobel Prize Isn’t About Music, by Spencer Kornhaber, The Atlantic.
In Bob Dylan as Richard Wagner, Alex Ross writes:
“The debate over Dylan’s Nobel will go on for a while, as it should: lockstep praise is never healthy. Chances are, however, that in fifty years the Swedish Academy, which has made any number of dubious and perplexing choices over the years, will have no reason to feel ashamed of the selection of Bob Dylan — and will have no need to explain who he was. Love him or hate him, he is a towering figure in modern culture: his appeal has transcended generations and inspired small libraries of commentary. He deserves the Nobel Prize in Word and Tone, but this will have to do.”
“Part of figuring it out is being able to work across barriers and differences. There’s a certain faith in rationality, tempered by some humility. Which is true of the best art and true of the best science. The sense that we possess these incredible minds that we should use, and we’re still just scratching the surface, but we shouldn’t get too cocky. We should remind ourselves that there’s a lot of stuff we don’t know.” — President Obama
“It’s hard to think of a single technology that will shape our world more in the next 50 years than artificial intelligence. As machine learning enables our computers to teach themselves, a wealth of breakthroughs emerge, ranging from medical diagnostics to cars that drive themselves. A whole lot of worry emerges as well. Who controls this technology? Will it take over our jobs? Is it dangerous? President Obama was eager to address these concerns. The person he wanted to talk to most about them? Entrepreneur and MIT Media Lab director Joi Ito. So I sat down with them in the White House to sort through the hope, the hype, and the fear around AI. That and maybe just one quick question about Star Trek,” says Scott Dadich in WIRED.
Joi Ito: “It’s actually nonintuitive which jobs get displaced, because I would bet if you had a computer that understood the medical system, was very good at diagnostics and such, the nurse or the pharmacist is less likely than the doctor to be replaced — they are less expensive. There are actually very high-level jobs, things like lawyers or auditors, that might disappear. Whereas a lot of the service businesses, the arts, and occupations that computers aren’t well suited for won’t be replaced. I don’t know what you think about universal basic income, but as we start to see people getting displaced there’s also this idea that we can look at other models — like academia or the arts, where people have a purpose that isn’t tied directly to money. I think one of the problems is that there’s this general notion of, how can you be smart if you don’t have any money? In academia, I see a lot of smart people without money.”
President Obama: “You’re exactly right, and that’s what I mean by redesigning the social compact. Now, whether a universal income is the right model — is it gonna be accepted by a broad base of people? — that’s a debate that we’ll be having over the next 10 or 20 years. You’re also right that the jobs that are going to be displaced by AI are not just low-skill service jobs; they might be high-skill jobs but ones that are repeatable and that computers can do. What is indisputable, though, is that as AI gets further incorporated, and the society potentially gets wealthier, the link between production and distribution, how much you work and how much you make, gets further and further attenuated — the computers are doing a lot of the work. As a consequence, we have to make some tougher decisions. We underpay teachers, despite the fact that it’s a really hard job and a really hard thing for a computer to do well. So for us to reexamine what we value, what we are collectively willing to pay for — whether it’s teachers, nurses, caregivers, moms or dads who stay at home, artists, all the things that are incredibly valuable to us right now but don’t rank high on the pay totem pole — that’s a conversation we need to begin to have.”
Where is your mind? Where does your thinking occur? Where are your beliefs? These are the kind of questions the English philosopher and writer Keith Frankish explores.
In The mind isn’t locked in the brain but extends far beyond it, he writes that “René Descartes thought that the mind was an immaterial soul, housed in the pineal gland near the centre of the brain. Nowadays, by contrast, we tend to identify the mind with the brain. We know that mental processes depend on brain processes, and that different brain regions are responsible for different functions. However, we still agree with Descartes on one thing: we still think of the mind as (in a phrase coined by the philosopher of mind Andy Clark) brainbound, locked away in the head, communicating with the body and wider world but separate from them. And this might be quite wrong. I’m not suggesting that the mind is non-physical or doubting that the brain is central to it; but it could be that (as Clark and others argue) the mind extends beyond the brain.”
“Of course, we think of ourselves as being situated in our heads. But that is because of how our perceptual systems model the world and our location in it (reflecting the location of our eyes and ears), not because our brains happen to be in there. Imagine (if it isn’t too gruesome) having your living brain temporarily removed from your skull, nerve connections intact, so that you could hold it and look at it. You would still seem to be in your head, even though your brain was in your hand.”
“Throughout history, people have always worried about new technologies. The fear that the human brain cannot cope with the onslaught of information made possible by the latest development was first voiced in response to the printing press, back in the sixteenth century. Swap ‘printing press’ for ‘internet’ and you have the exact same concerns today, regularly voiced in the mainstream media, and usually focused on children,” writes Dean Burnett, the author of The Idiot Brain about the weird and confusing properties of the brain, in Is the internet killing our brains?. “But is there any legitimacy to these claims? Or are they just needless scaremongering?”
One of the concerns is that the constant access to information stored online is atrophying or disrupting our memories. Why bother to remember anything when you can just Google it, right? But according to Burnett, our “memory doesn’t quite work that way. The things we experience that end up as memories do so via unconscious processes. Things that have emotional resonance or significance in other ways tend to be more easily remembered than abstract information or intangible facts. These things have always required more effort to remember in the long term, needing to be rehearsed repeatedly in order to be encoded as memories. Undeniably, the internet often renders this process unnecessary. But whether this is harmful for the development of the brain is another question.”
The NYC Dance Project is set to launch a new book, The Art of Movement, featuring breathtaking portraits of more than 70 of the world’s most talented dancers from American Ballet Theatre, New York City Ballet, Martha Graham Dance Company, Royal Danish Ballet, the Royal Ballet and many others. The Art of Movement will be in bookstores from 25 October. Discover more at www.nycdanceproject.com.
Frank Gehry has made a name for himself by being unafraid to pursue his vision, even when other people think he’s nuts. In Barbara Isenberg’s Conversations with Frank Gehry, the architect remembers how his early educational experiences led him to understand what he was really supposed to be doing with his life:
“Two of my USC professors, the landscape architect Garrett Eckbo and Simon Eisner, who taught city planning, knew my liberal political do-gooder leanings, because they were like that. They also knew I wasn’t interested in doing rich guys’ houses and that I would be more emotionally inclined toward low-cost housing and planning. They urged me to apply to Harvard Graduate School of Design and recommended that I take graduate work in city planning.
I didn’t know what it entailed to study city planning. The city planning class, the so-called design class, was run by Charlie Eliot, who was the grandson of the president of Harvard [1869–1909]. He was an administrative planner, and design was not his thing. He had us do research on how the cities and towns around Harvard were run, and then he’d give us problems to solve on structure and other things that were scintillatingly boring. So when our final project was to come up with a master plan for Worcester, Massachusetts, I thought I could finally do what I wanted to do. I approached it like an urban design project, figuring out the ring roads and the parking and making an urban center, like I studied at Gruen’s. When it was my time to present, Charlie Eliot stops me and says, ‘Mr. Gehry, you have completely ignored the problem I’ve given you. This has nothing to do with this class. Would you please stop?’”
Shortly thereafter, frustrated and angry with Eliot, Gehry transferred into design studies.
Source: Frank Gehry, Liz Diller, Rem Koolhaas and Others Share Crucial Moments in Their Education, by Julia Ingalis, Archinect.
More Gehry in My days as a young rebel, a 1990 TED Talk in which he takes a whistle-stop tour of his early work, from his house in Venice Beach to the American Center in Paris, and Random thoughts — The Old Man and The Fish.
“Anything of a serious nature isn’t ‘instant.’ You can’t ‘do’ the Sistine Chapel in one hour.” — Leonard Bernstein