A lost memory from my childhood Christmas TV schedule, also lost in 2015

Revisiting six memos


Back in January I proposed Six memos for 2015, in which I suggested six ideas that I believed would be popular, or at least interesting, in 2015. They were: Resilience, Ambient persuaders, Ambiguity, Mavens, Emotional sensing and Personal data rights. The thing about these sorts of clickbait ‘predictions’ is that people don’t tend to revisit them, they just push out a new set every year — as if the list itself is its very own medium. So to break with that tradition, I’m not publishing a new list; instead I’m going to look at how much traction those ideas got last year and explore some of the thinking that surrounds them.

First up, Resilience.

As a concept for business it didn’t really take off in 2015. Overall it increased in use only slightly (according to Google Trends). Mostly the notion of resilience is still focussed squarely on the environment and climate change, or on our personal psychological state and ability to deal with stressful situations. Of course both of those things do impact the business world, whether at a strategic level or in personal work/life activities. I think one of the issues with the idea of resilience is that it’s about ‘weathering the storm’ — it’s about coping with the bad things, the stressful things, the negative events. It doesn’t fit the upbeat, peppy business talk, where we are agile, responsive, adaptive, lean, growth hacking super bros (especially in the US). I still like it as an idea for thinking about business and design, and there‘s plenty of writing on it in relation to investment risk and volatility (e.g. FM Global’s Resilience Report, which works at a macroeconomic level). I’ve recently been reading Antifragile by Nassim Nicholas Taleb and I think his notion of “antifragile”, as distinct from resilience, is rather brilliant. It helps get past “weathering the storm” to the slightly more positive position of “what doesn’t kill you makes you stronger”. For him, an antifragile system is one that gains or benefits from the very things that diminish or harm fragile systems, things he describes as “The Extended Disorder Family”.

The Extended Disorder Family from Antifragile

Resilience is important in dealing with the effects of the things mentioned above, but I think I missed a subtlety in the meaning of ‘resilience’ before: it’s important for us not simply to rebound from adversity but to learn from it and come back stronger. I still think we are going to hear a lot more about resilience in the coming years, and most likely without the nuanced understanding that ‘antifragile’ brings to the table. I believe this because our exposure to the ‘disorder family’ will grow, thanks to climate, economic, social, technological and cultural change. Ultimately, understanding and creating ‘antifragile’ systems will be the key to dealing with that change, but this will start from a standpoint of being resilient.

Ambient persuaders

This one was also a bit of a miss. The focus has continued to be on ‘nudging’, thankfully with a lot more now being written about the ethics of nudges and how people react to being nudged. Although behavioural economics continued to be a hot topic this year, not much was said about the tactical side: the manifestation of the nudge or indicator, the ambient signifier (with the exception of sites like ambient-accountability.org and Dan Lockton’s work). When we did hear about these techniques, it was mostly around climate action, governmental policy change and wearable devices, especially regarding health.

I chose the phrase ‘ambient persuader’ because for me a ‘nudge’ is an action, and I wanted to point to the manifestation, the thing that leads to the action: a way to talk about those designed elements whose purpose is to nudge. In the ’50s Vance Packard wrote about The Hidden Persuaders in advertising and PR, and for me much of nudging comes from those ideas: the creation of devices that trigger psychological states which nudge people towards certain actions. The ‘ambient’ idea is really about separating them from the dark patterns of the Mad Men. These aren’t subliminal messages telling me I’m worthless because I don’t have the latest gadget, but background signals that indicate I’m making a good decision, or that I need to change what I’m doing to improve my life or the lives of others.

Ambient persuaders can also be hints towards mindfulness and connectedness, e.g. that my partner, although thousands of miles away, is there thinking about me, alive, and in tune. This year we saw the Apple Watch, with its ability to simply tap a friend on the wrist from afar, and even this Kickstarter for a connected pillow that lets you hear your partner’s heartbeat. Both are about remaining intimately connected, ensuring the other person is ‘in mind’. They aren’t strictly ambient or persuasive, but they are less intrusive than the tyranny of notifications that overt nudges bring, where a contemporary manifestation of MS Office’s Clippy could permeate our world. Here’s a brilliant, if a little dystopian, glimpse from Superflux at what that could be like.

That’s why I believe there’s still a lot more discussion to be had around ambient persuaders. We need to investigate how they can be used to hint, guide and show the way, rather than overtly nag us. We need to talk openly about the techniques employed, the ways in which they manifest and the impact they may have on our already very noisy lives. So let’s see what happens in 2016. One thing to look forward to already is Dan Lockton’s book Design with Intent, coming from O’Reilly later this year.

Ambiguity

This year we saw a lot about deep learning and attempts to teach AIs to handle nuance. Ultimately, how an intelligent machine deals with ambiguity will be the key to its usefulness, and thus to how deeply we allow it into our lives. Researchers are beginning to understand more about the types of ambiguity that arise with machines and how they can be very different to those of humans. Image processing and understanding is one area where we saw a lot written about this in 2015. The horrendous classification mistake by Google’s Photos app wasn’t a decision that would even have registered as ambiguous to humans; we would have classified the image correctly (unless deliberately making a racist slur). But the machine made a terrible mistake, one that rightly caused a lot of hard questions to be asked of the engineers. And that’s where I think the big problems lie for AI: what it may be ‘certain’ about could be an area of ambiguity for humans, and things we are certain about may be very ambiguous for the machine, in ways that don’t match what we understand about ourselves. How we ‘see’ and ‘understand’ an image is very specific, and a machine taught to ‘see’ and ‘understand’ images won’t necessarily do it in exactly the same way we do — we don’t know enough about ourselves to truly model like for like. Ultimately the machine thinks differently and what it finds ambiguous can also be different — here’s a great piece in Nautilus on exactly that.
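To make that machine ‘certainty’ concrete: a classifier’s confidence is typically just a softmax over raw scores, and it can be near-total even when the answer would be absurd to a human (the well-known adversarial example where a panda photo gets labelled ‘gibbon’ with 99% confidence is the canonical case). Here’s a toy sketch; the labels and scores are invented purely for illustration:

```python
import numpy as np

def softmax(logits):
    """Turn raw classifier scores into a probability distribution."""
    z = np.exp(logits - np.max(logits))  # shift for numerical stability
    return z / z.sum()

# Invented scores for a single photo -- the true subject is a panda,
# but the model's raw score for 'gibbon' dominates.
labels = ["panda", "gibbon", "langur", "capuchin"]
logits = np.array([2.1, 6.3, 1.8, 0.5])

for label, p in zip(labels, softmax(logits)):
    print(f"{label:10s} {p:.1%}")

# 'gibbon' comes out at ~97% 'confidence'. That number measures the
# model's internal consistency, not understanding -- a human would
# never have found this image ambiguous.
```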

The other side of ambiguity is the need to embrace it, as it makes us the interesting, nuanced creatures we are. That also came up a lot this year, mostly in reaction to Big Data. Mushon Zer-Aviv, in a presentation for the HKW 100 Years Project, talked about the desire for ‘disambiguation’. He speaks of how we look to big data as a perfect representation of the real world, and how we employ reductive approaches to ‘disambiguate’ it and create a common point of understanding; but in doing so we lose all that is real, human and valuable. He rejects this and calls for a ‘reambiguation’ of things. This reminds me of the Swiss historian Jacob Burckhardt and his idea that “the essence of tyranny is the denial of complexity”. That’s from the late 1800s, when he feared the “terribles simplificateurs [terrible simplifiers]”: the employment of generalisation and abstraction to divide and categorise, and ultimately remove individual agency. It reflects the fear of being analytically excluded through blunt categorisation and normalisation of populations (without the nuances that come with ambiguities).

So in one form or another ambiguity was a big topic this year, and I believe next year we will see this debate focus further, driven by questions about how machines can accurately and safely make decisions that impact our lives.

Mavens

I don’t think there’s any escape from these lone wolves, isolated experts and crazed egos, especially as even more is being said about the trials of working collaboratively and Collaborative Overload. However, there were a few glimmers of hope. Mostly these take the form of opening up debate and questioning those who speak as authorities. We started to see the beginnings of this with people questioning the position of the storyteller (especially in data presentation), revealing the mechanics, the tricks and the role of the unreliable narrator. One of the keys to challenging these mavens is opening up dialogue and enabling collaborative discourse: not closed ‘truths’ (to be accepted) but data and facts open to secondary investigation, open to all of Carl Sagan’s BS detection.

In a great piece, Catherine D’Ignazio asks What Would Feminist Data Visualisation Look Like? One point that particularly resonated with me, especially in regard to ‘mavens’, was her call to “make dissent possible”.

… one way to re-situate data visualization is to actually destabilize it by making dissent possible. How can we devise ways to talk back to the data? To question the facts? To present alternative views and realities? To contest and undermine even the basic tenets of the data’s existence and collection? A visualization is often delivered from on high. An expert designer or team with specialized knowledge finds some data, does some wizardry and presents their artifact to the world with some highly prescribed ways to view it. Can we imagine an alternate way to include more voices in the conversation? Could we effect visualization collectively, inclusively, with dissent and contestation, at scale?

So let’s keep tugging at the curtain and revealing the reality of these wizards, let’s ensure that their ‘facts’ are not blindly accepted but rather points for discussion and where necessary, dissent.

Emotional sensing

The use of emotional sensing for UX and experience research has continued to rise, but it’s not as prevalent or talked about as I expected. Much is still focussed on simply measuring the ‘effect’ of a design and other forms of evaluative research. In fact, we are still battling over the use of empathy in business, and battling even harder to convince many organisations to treat people with respect. There are many who still hold the belief that empathy has no place in business, economics, data or science, and that it must always be a case of pure, objective scientific detachment, or left to the market to sort out. Personally, I believe those people are deluded if they truly believe they can dehumanise themselves and the systems they create (I know, not a very empathetic thing to say). But emotional sensing isn’t really about a generalised idea of empathy; it’s probably more closely aligned to cognitive empathy. Cognitive empathy is about recognising and understanding another’s emotional state, sometimes also known as ‘perspective taking’ (check out Indi Young for a practical guide to its use in design and business research).

It’s this kind of emotional sensing and cognitive empathy that’s been on my radar this year. There have been a lot of articles about machines being able to recognise emotions. First up, there’s a burgeoning category of applications and systems that analyse behaviour to assess emotional state, such as using your smartphone behaviour to infer whether you may be suffering from depression. This is indirect emotional sensing. It’s machine learning, big data, pattern analysis after the fact: a form of diagnostic recognition, not really about understanding the person, more about profile matching in the model.

However, there’s also been a fair amount written about machines recognising human emotions within human/machine exchanges. This opens up the interesting part: machines applying cognitive empathy, understanding the emotions in the exchange and modifying their behaviour based on the emotional responses of the human. That is key to ‘authentic’-feeling human/machine exchanges and to our acceptance of robot helpers, assistants and nurses. But it’s an area full of ambiguity and difficulty, as humans often aren’t particularly good at it either. The field is called ‘affective computing’. On one side you have companies like Affectiva, who are building deep datasets for real-time recognition of “emotional responses to digital media”, which feels a little horrifying given how closely it’s linked to the advertising world. On the other, we have Microsoft’s Azure recognising emotions in pictures. Many of the underlying techniques are being incorporated into services and technology right now, so I expect to hear a lot more about emotionally aware and emotionally responsive services, interfaces and systems this year, mostly from the robotics domain as the costs for consumers come down. Fancy one of these? — https://www.autonomous.ai/personal-robot.
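To give a feel for how accessible this has become, calling one of these emotion-recognition services is little more than an HTTP POST with an image attached. The sketch below follows Microsoft’s Project Oxford Emotion API as it was documented at the time; treat the endpoint, header and response shape as assumptions and check the current docs before relying on them.

```python
# Minimal sketch: send a photo to an emotion-recognition API and print
# the per-emotion confidence scores it returns. Endpoint URL, header
# name and response shape follow Microsoft's Project Oxford Emotion
# API as documented at the time -- treat them as assumptions.
import requests

API_URL = "https://api.projectoxford.ai/emotion/v1.0/recognize"
API_KEY = "your-subscription-key"  # placeholder

with open("face.jpg", "rb") as f:
    image_bytes = f.read()

response = requests.post(
    API_URL,
    headers={
        "Ocp-Apim-Subscription-Key": API_KEY,
        "Content-Type": "application/octet-stream",
    },
    data=image_bytes,
)
response.raise_for_status()

# One result per detected face; 'scores' maps emotion -> confidence.
for face in response.json():
    scores = face["scores"]
    best = max(scores, key=scores.get)
    print(f"detected face: {best} ({scores[best]:.0%})")
```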

But of course all this talk of emotions came in the year we had the brilliant Inside Out from Pixar, a film that squarely focussed our attention on an animated child’s emotional development.

Personal data rights

So that brings me to the last of my memos from 2015: Personal data rights. When I wrote about personal data rights I was thinking about the need for a shift in ownership: a model where the individual holds (by default) the right of use and access to any data about them and, more importantly, any data created from or by them. There’s a great piece on data ownership here via the Quantified Self blog, but not much else out there.

However, back in June I visited QS15 (the Quantified Self Conference in San Francisco) and it opened my eyes to the sort of data people are collecting and sharing. It was incredible, and much of it was essentially people trying to hack their health and understand their minds or bodies better. Many of us share our steps, running and fitness data with the likes of Apple, Fitbit and RunKeeper, but what about those other data sets from the body? At QS15 there were some amazing individuals willing to share their journeys collecting and analysing very personal data (a great set of videos of the talks is available here). People are tracking all manner of things: from the standard fare of location and activity, through sleep, heart rate, blood pressure and heart rate variability (there’s a great resource for HRV analysis on Paul LaFontaine’s blog), on to the microbiome (and not just the gut), brain activity, detailed aspects of the menstrual cycle, blood markers and glucose, and even building custom hardware for controlling diabetes or monitoring the electromagnetic fields in their apartment. A lot of this was very individual and personal, often spreadsheets and notebooks, done to help with an existing condition or to try to improve performance or quality of life. But it has a very active community feel, with people sharing, helping and supporting each others’ efforts to arrive at techniques and best practices.

In some cases services and companies have stepped in to support this, most notably uBiome, which offers a simple and fairly cheap way to get your microbiome sequenced. A lot more are coming: some linked directly to sensors and devices or focused on specific issues or data; others looking more broadly at the use of self-collected data to inform at scale, as more and more people start to see the benefits of experimentation, small-group analysis and sharing multiple types of data to understand very specific issues. One company making a play for this collaborative personal data space is We Are Curious (not public as of writing). They promise a way to pool your data and use it to ask questions about your wellbeing and health, and its CEO is Linda Avey (co-founder of 23andMe, the consumer DNA profiling company). These all sit outside the mainstream health industry and still (mostly) have a hacker or DIY ethos driving them.

One of the conversations I sat in on at QS15 concerned the use of personal data by corporations and organisations to exclude people. The main worry was that a deep but partial view of an individual means they may be categorised as ‘abnormal’, essentially outside the standard deviation for this or that measure (it could be blood pressure), and thus flagged as higher risk. The fear many had was that this information might be used or shared without consent, and end up feeding other machine-driven scoring systems, such as health insurance pricing or even access to resources. The problem is the ‘population’ you are measured against; those who already feel like outsiders fear further marginalisation and see this as an acute issue. And it’s not quite as paranoid as it sounds, if we think about how heavily IBM has focussed Watson on health care of late (here’s XKCD’s take on that), or the fact that health and life insurance is a $644 billion industry in the US. In fact, insurance companies are leading the charge here. NPR ran an article in April on how John Hancock insurers want you to trade data for discounts, and in the auto insurance market, data on how you drive is quickly becoming the new model for how your premiums are calculated.
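The screening logic behind that fear is mundane enough to sketch in a few lines: flag anyone who sits more than a couple of standard deviations from the population mean. The readings and the two-sigma threshold below are invented for illustration; real actuarial models are far more elaborate, but the ‘abnormal = outlier’ framing is the same.

```python
# Toy sketch of the screening logic people were worried about:
# flag anyone whose measure sits 'too far' from the population mean.
# Readings and the 2-sigma threshold are invented for illustration.
from statistics import mean, stdev

# Hypothetical resting systolic blood pressure readings.
population = [118, 122, 115, 130, 125, 119, 121, 128, 117, 124]
mu, sigma = mean(population), stdev(population)

def flag_abnormal(reading, threshold=2.0):
    """Return True if the reading is more than `threshold` standard
    deviations from the population mean. Note that 'abnormal' here
    says nothing about the individual, only about the sample they
    happen to be measured against."""
    return abs(reading - mu) > threshold * sigma

print(flag_abnormal(155))  # True: flagged as 'higher risk'
print(flag_abnormal(123))  # False: 'plays by the rules'
```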

Ultimately that’s the key question: what do you get in return for sharing your data? Unfortunately that question can itself be a little short-sighted, as it focusses on the immediate one-to-one exchange; nothing is said of how the data will be used later. Often we have no transparency regarding how it will be mined for patterns, aggregated, deep-learned and modelled, providing as-yet-unknown insights and decisions to shape the business or organisation that wields it. The belief is that by understanding this big data they will be able to hold a mirror up to reality and judge your part in it. Those who are good and ‘play by the rules’ will be rewarded, and those who don’t will be punished by higher costs or exclusion. All driven by the algorithm: automated, and no longer subject to human error or ambiguity. But as I mentioned before, ambiguity means that machines don’t always get things right, and we must be careful not simply to yield to the idea of the perfect model. As the statistician George E. P. Box put it:

“The most that can be expected from any model is that it can supply a useful approximation to reality: All models are wrong; some models are useful”.

We are generating richer, more detailed, more specific and more personal data than ever before. Sharing it can be very beneficial, but what will be the cost of blindly feeding the model? For me there’s still a lot to be discussed about this secondary use of the data we share (whether or not it’s anonymised). I expect to see more this year on cases where secondary use of data is seen as invasive or unethical, and debate about who really owns it.

So that’s the review of last year’s six memos: Resilience, Ambient persuaders, Ambiguity, Mavens, Emotional sensing and Personal data rights. Did I capture the zeitgeist back in January 2015? Perhaps not, but I believe these ideas still have plenty of time to play out, and I expect most of them to crop up again in 2016.
