Random finds (2017, week 23) — On the crisis of expertise, the Internet of Things (who is it good for?), and serendipitous design
“I have gathered a posy of other men’s flowers, and nothing but the thread that binds them is mine own.” — Michel de Montaigne
Random finds is a weekly curation of my tweets and a reflection of my curiosity.
The crisis of expertise
“Experts are either derided or held up as all-seeing gurus. Time to reboot the relationship between expertise and democracy,” writes Tom Nichols in The crisis of expertise. Nichols is professor of national security affairs at the US Naval War College and adjunct professor at the Harvard Extension School. His latest book is The Death of Expertise: The Campaign Against Established Knowledge and Why It Matters (2017).
Even though experts get things wrong all the time, laypeople have no choice but to trust experts. “We live our lives embedded in a web of social and governmental institutions meant to ensure that professionals are in fact who they say they are, and can in fact do what they say they do. Universities, accreditation organisations, licensing boards, certification authorities, state inspectors and other institutions exist to maintain those standards,” Nichols writes. “This daily trust in professionals is a prosaic matter of necessity. It is in much the same way that we trust everyone else in our daily lives, including the bus driver we assume isn’t drunk or the restaurant worker we assume has washed her hands.”
One of the most common errors that experts make is to assume that, because they are smarter than most people about certain things, they are smarter than everyone about everything.
But when it comes to matters of public policy, things are different. “To say that we trust a college professor to teach our sons and daughters the history of the Second World War is not the same as saying that we therefore trust all academic historians to advise the president of the US on matters of war and peace. For these larger decisions, there are no licences or certificates. There are no fines or suspensions if things go wrong. Indeed, there is very little direct accountability at all, which is why laypeople understandably fear the influence of experts.”
In his book Expert Political Judgment: How Good Is It? How Can We Know? (2005), Philip Tetlock gathered data on expert predictions in social science. He found that “certain kinds of experts seemed better at applying knowledge to hypotheticals than their colleagues. Tetlock used the British thinker Isaiah Berlin’s distinction between ‘hedgehogs’ and ‘foxes’ to distinguish experts whose knowledge was wide and inclusive (‘the fox knows many things’) from those whose expertise was narrow and deep (‘the hedgehog knows one big thing’). While experts ran into trouble when trying to move from explanation to prediction, the ‘foxes’ generally outperformed the ‘hedgehogs’, for many reasons,” Nichols writes.
“Hedgehogs, for example, tended to be overly focused on generalising their specific knowledge to situations that were outside of their competence, while foxes were better able to integrate more information and to change their minds when presented with new or better data. ‘The foxes’ self-critical, point-counterpoint style of thinking,’ Tetlock found, ‘prevented them from building up the sorts of excessive enthusiasm for their predictions that hedgehogs, especially well-informed ones, displayed for theirs.’”
People with a very well-defined area of knowledge don’t have many tools beyond their specialisation, so their instinct is to take what they know and generalise it outward, no matter how poor the fit between their own area and the subject at hand.
According to Nichols, “There are some lessons in all this, not just for experts, but for laypeople who judge — and even challenge — expert predictions.”
First and foremost, failed expert predictions don’t mean very much in terms of judging expertise itself. “The goal of expert advice and prediction is not to win a coin toss, it is to help guide decisions about possible futures.” Besides, experts “usually cover their predictions […] with caveats, because the world is full of unforeseeable accidents that can have major ripple effects down the line. History can be changed by contingent events as simple as a heart attack or a hurricane.” Yet, despite their importance, most laypeople tend to ignore these caveats.
And whereas professionals must own their mistakes, laypeople must exercise more caution in asking experts to prognosticate. If they refuse to take their duties as citizens seriously, and if they don’t educate themselves about issues important to them, the rule of experts, so feared by laypeople, will grow by default, and “democracy will mutate into technocracy,” Nichols argues.
Who is the Internet of Things good for?
Interconnected technology is now an inescapable reality — ordering our groceries, monitoring our cities and sucking up vast amounts of data along the way. The promise is that it will benefit us all. But in Rise of the machines: who is the ‘internet of things’ good for?, Adam Greenfield, who teaches urban design at the Bartlett School of Architecture at University College London, wonders how it can.
The technologist Mike Kuniavsky characterises the internet of things, or IoT, as a state of being in which “computation and data communication are embedded in, and distributed through, our entire environment.” Greenfield, however, prefers to see it for what it is: “the colonisation of everyday life by information processing.”
“The internet of things isn’t a single technology. About all that connects the various devices, services, vendors and efforts involved is the end goal they serve: capturing data that can then be used to measure and control the world around us,” Greenfield writes.
“Whenever a project has such imperial designs on our everyday lives, it is vital that we ask just what ideas underpin it and whose interests it serves. Although the internet of things retains a certain sprawling and formless quality, we can get a far more concrete sense of what it involves by looking at how it appears at each of three scales: that of our bodies (where the effort is referred to as ‘the quantified self’), our homes (‘the smart home’) and our public spaces (‘the smart city’). Each of these examples illuminates a different aspect of the challenge presented to us by the internet of things, and each has something distinct to teach us.”
With regard to the smart city, Greenfield writes:
“The picture we are left with is that of our surroundings furiously vacuuming up information, every square metre of seemingly banal pavement yielding so much data about its uses and its users that nobody yet knows what to do with it all. And it is at this scale of activity that the guiding ideology of the internet of things comes into clearest focus.
The strongest and most explicit articulation of this ideology in the definition of a smart city has been offered by the house journal of the engineering company Siemens: ‘Several decades from now, cities will have countless autonomous, intelligently functioning IT systems that will have perfect knowledge of users’ habits and energy consumption, and provide optimum service … The goal of such a city is to optimally regulate and control resources by means of autonomous IT systems.’
There is a clear philosophical position, even a worldview, behind all of this: that the world is in principle perfectly knowable, its contents enumerable and their relations capable of being meaningfully encoded in a technical system, without bias or distortion. As applied to the affairs of cities, this is effectively an argument that there is one and only one correct solution to each identified need; that this solution can be arrived at algorithmically, via the operations of a technical system furnished with the proper inputs; and that this solution is something that can be encoded in public policy, without distortion. (Left unstated, but strongly implicit, is the presumption that whatever policies are arrived at in this way will be applied transparently, dispassionately and in a manner free from politics.)”
According to Greenfield, every aspect of this argument is questionable, but the claim that anything at all is perfectly knowable is downright “perverse.” “However thoroughly sensors might be deployed in a city,” he argues, “they will only ever capture what is amenable to being captured. In other words, they will not be able to pick up every single piece of information necessary to the formulation of sound civic policy.”
The notion that there is one and only one solution to urban problems is deeply puzzling. Cities are made up of individuals and communities who often have competing preferences, and it is impossible to fully satisfy all of them at the same time.
“There is also the question of interpretation. Advocates of smart cities often seem to proceed as if it is self-evident that each of our acts has a single, salient meaning, which can be recognised, made sense of and acted upon remotely by an automated system, without any possibility of error. The most prominent advocates of this approach appear to believe that no particular act of interpretation is involved in making use of any data retrieved from the world in this way. But data is never ‘just’ data, and to assert otherwise is to lend inherently political and interested decisions an unmerited gloss of scientific objectivity.”
Greenfield is also deeply puzzled by the notion that there is a single solution to any urban problem. Cities are made up of individuals and communities, often with competing preferences, and it is impossible to fully satisfy all of them at the same time. And even if such a solution existed, the wholesale surrender of municipal management to an algorithmic toolset would place an undue amount of trust in the party responsible for authoring that specific algorithm.
“Quite simply, we need to understand that creating an algorithm intended to guide the distribution of civic resources is itself a political act. And, at least for now, nowhere in the current smart-city literature is there any suggestion that either algorithms or their designers would be subject to the ordinary processes of democratic accountability.”
“One sceptical observer of many presentations at the Future Cities Summit suggests that a smarter way to build cities might be for architects and urban planners to have psychologists and ethnographers on the team. That would certainly be one way to acquire a better understanding of what technologists call the ‘end user’ — in this case, the citizen. After all, as one of the tribunes asks the crowd in Shakespeare’s Coriolanus: ‘What is the city but the people?’” — From: The truth about smart cities, by Steven Poole (The Guardian, 2014)
“As matters now stand, the claim of perfect competence that is implicit in most smart-city rhetoric is incommensurate with everything we know about the way technical systems work. It also flies in the face of everything we know about how cities work. The architects of the smart city have failed to reckon with the reality of power, and the ability of elites to suppress policy directions that don’t serve their interests. At best, the technocratic notion that the analysis of sensor-derived data would ever be permitted to drive municipal policy is naive. At worst, though, it ignores the lessons of history.”
Greenfield’s long read in The Guardian is an adapted extract from his latest book, Radical Technologies: The Design of Everyday Life (June 2017).
And this …
In Do Creative Minds Matter?, Kyna Leski, author of the excellent The Storm of Creativity, tells a beautiful anecdote about the serendipitous design of the Metropolitan Opera House Chandeliers, and the role her father, the architect Tadeusz ‘Tad’ Leski, played in their genesis.
After proposing forty-three designs and innumerable perspective sketches to John D. Rockefeller III and the board of the Lincoln Center, ‘Tad’ Leski was “finally seeing some resolution. The long-standing war between modern and traditional architecture was approaching a truce. But then, just before his next meeting with Rockefeller, he splattered a white blob of paint across one of the final sketches. While some perfectionists might have caved, Leski’s right brain caused him to realize a cosmic connection, to dab the mess, draw some white lines through it and transform it into a chandelier of white light on the scale of the Big Bang, for which physical evidence just happened to be appearing. ‘Let’s go with that,’ Rockefeller decided at the meeting shortly thereafter.”
In The Genesis of the Metropolitan Opera House Chandeliers, Kyna Leski adds, “Rockefeller thought the sketches were great. And he particularly liked the idea of the exploded geometry of the splotch as the form of the chandeliers. An accident was the genesis of the Chandeliers of the Metropolitan Opera House. A drop of paint followed the laws of gravity, surface tension and impact instead of the intentions of the artist. This moment of genesis is suspended like a drop over a page just beyond where my father had intended and before an idea of exploded geometry came to light.”
“Origins are critical in establishing authorship,” Leski writes. “But like any beginning, the origin of a work of art or invention is not crystal clear. Constellations, the dots of light in the sky that we connect and name, are imaginary. They inspire myths of princesses, heroes, winged horses and sea monsters. We mentally connect the dots of light as mnemonic devices. Narrative connections serve our imagination and memory. The actual physical locations of these points of light are stars light years away from us, spread out in three and four dimensions. From another point of view, away from the Earth, the constellations would not be recognizable and could not be connected the same way. Different points of view inspire different stories that inform memory and shape what we know.”
The ancient Stoic philosophers regularly conducted an exercise known to us as inversion, says James Clear in Inversion: The Crucial Thinking Skill Nobody Ever Taught You.
“The Stoics believed that by imagining the worst case scenario ahead of time, they could overcome their fears of negative experiences and make better plans to prevent them. […] When I first learned of it, I didn’t realize how powerful it could be. As I have studied it more, I have begun to realize that inversion is a rare and crucial skill that nearly all great thinkers use to their advantage,” Clear writes.
“You can learn just as much from identifying what doesn’t work as you can from spotting what does. What are the mistakes, errors, and flubs that you want to avoid? Inversion is not about finding good advice, but rather about finding anti-advice. It teaches you what to avoid.”
According to Clear, “Inversion can be particularly useful for challenging your own beliefs. It […] prevents you from making up your mind after your first conclusion. It is a way to counteract the gravitational pull of confirmation bias.” Clear believes it’s an essential skill for leading a logical and rational life, and allows you to step outside your normal patterns of thought and see situations from a different angle. “Whatever problem you are facing, always consider the opposite side of things.”
Massimo Pigliucci also mentions inversion in his latest book, How To Be a Stoic. “The basic idea,” Pigliucci writes, “is to regularly focus on potentially bad scenarios, repeating to yourself that they are not, in fact, as bad as they may seem, because you have the inner resources to deal with them. The negative visualization exercise, what the ancient Romans called premeditatio malorum (literally, foreseeing bad stuff), may focus on something as mundane as the irritation you feel when someone cuts you off in traffic or on events as critical as the death of a loved one, or even your own.”
Daily Stoic recently interviewed philosopher, bestselling author, and founder of The School of Life Alain de Botton. He concluded one of his answers with the following piece of wisdom:
“Serenity therefore begins with pessimism. We must learn to disappoint ourselves at leisure before the world ever has a chance to slap us by surprise at a time of its own choosing. The angry must learn to check their fury via a systematic, patient surrender of their more fervent hopes. They need to be carefully inducted to the darkest realities of life, to the stupidities of others, to the ineluctable failings of technology, to the necessary flaws of infrastructure. They should start each day with a short yet thorough premeditation on the many humiliations and insults to which the coming hours risk subsequently subjecting them.
One of the goals of civilisation is to instruct us in how to be sad rather than angry. Sadness may not sound very appealing. But it carries — in this context — a huge advantage. It is what allows us to detach our emotional energies from fruitless fury around things that (however bad) we cannot change and that are the fault of no-one in particular and — after a period of mourning — to refocus our efforts in places where our few remaining legitimate hopes and expectations have a realistic chance of success.”
Humanity is more technologically powerful than ever before, and yet we feel ourselves to be increasingly fragile. Why?
“What contemporary post-apocalyptic culture fears isn’t the end of ‘the world’ so much as the end of ‘a world’ — the rich, white, leisured, affluent one. Western lifestyles are reliant on what the French philosopher Bruno Latour has referred to as a ‘slowly built set of irreversibilities,’ requiring the rest of the world to live in conditions that ‘humanity’ regards as unliveable. And nothing could be more precarious than a species that contracts itself to a small portion of the Earth, draws its resources from elsewhere, transfers its waste and violence, and then declares that its mode of existence is humanity as such,” argues Claire Colebrook, a professor of English, philosophy, and women’s, gender and sexuality studies at Pennsylvania State University, in End-times for humanity. She is also the author of, among others, Death of the PostHuman, a series of essays on extinction (2014).
“To define humanity as such by this specific form of humanity is to see the end of that humanity as the end of the world. If everything that defines ‘us’ relies upon such a complex, exploitative and appropriative mode of existence, then of course any diminution of this hyper-humanity is deemed to be an apocalyptic event. ‘We’ have lost our world of security, we seem to be telling ourselves, and will soon be living like all those peoples on whom we have relied to bear the true cost of what it means for ‘us’ to be ‘human.’”
The lesson Colebrook takes from this analysis is that the ethical direction of fragility must be reversed. “The more invulnerable and resilient humanity insists on trying to become, the more vulnerable it must necessarily be. But rather than looking at the apocalypse as an inhuman horror show that might befall ‘us,’ we should recognise that what presents itself as ‘humanity’ has always outsourced its fragility to others. ‘We’ have experienced an epoch of universal ‘human’ benevolence, a globe of justice and security as an aspiration for all, only by intensifying and generating utterly fragile modes of life for other humans. So the supposedly galvanising catastrophes that should prompt ‘us’ to secure our stability are not only things that many humans have already lived through, but perhaps shouldn’t be excluded from how we imagine our own future.”
“Why, if information regarding our polluting and parasitic existence is so extensive, are we so incapable of thinking intensively, of imagining a different inclination beyond that of the adaptation and survival of man?” — Claire Colebrook in Post-human Humanities (part of Death of the PostHuman)
“This is why contemporary disaster scenarios still depict a world and humans, but this world is not ‘the world’, and the humans who are left are not ‘humanity’. The ‘we’ of humanity, the ‘we’ that imagines itself to be blessed with favourable conditions that ought to extend to all, is actually the most fragile of historical events. If today ‘humanity’ has started to express a sense of unprecedented fragility, this is not because a life of precarious, exposed and vulnerable existence has suddenly and accidentally interrupted a history of stability. Rather, it reveals that the thing calling itself ‘humanity’ is better seen as a hiatus and an intensification of an essential and transcendental fragility.”
Almost six months after receiving the Nobel Prize in Literature, Bob Dylan has fulfilled the award’s criteria by delivering a lecture. Recorded June 4th in Los Angeles, the lecture finds the rock legend describing “great literature at length both to explain his songs and to show why they’re beyond explanation,” says Spencer Kornhaber in Bob Dylan’s Nobel Lecture Says the Unsayable.
“The speech itself is typically Dylan in a few ways,” Kornhaber writes. “It seems perched between sincerity and trolling, draws from Western culture’s most elemental influences, and works according to its own logic. Reaction has been mixed; some people have pointed out that Dylan’s writing has the sophistication of a high-school book report (e.g.: “Moby Dick is a seafaring tale. One of the men, the narrator, says, ‘Call me Ishmael’”). But part of the point surely is in the colloquial style of his retelling: He’s turning tomes into folktales. He’s also arguably doing something more subtle. Through summary, he’s showing how literature and song defy summary.”
“But the rest of the world cannot let a rogue US destroy the planet. Nor can it let a rogue US take advantage of it with unenlightened — indeed anti-Enlightenment — ‘America first’ policies. If Trump wants to withdraw the US from the Paris climate agreement, the rest of the world should impose a carbon-adjustment tax on US exports that do not comply with global standards.” — Joseph Stiglitz in Trump’s reneging on Paris climate deal turns the US into a rogue state