“Means Well” Technology and the Internet of Good Intentions

Natalie Kane · Phase Change · Jan 5, 2016

[Image: Flickr / Pelle Sten]

So, let’s talk about the color-changing condom. Proposed by a group of three schoolboys to quite literally highlight whether your partner or particular hook-up has an STI, this helpful prophylactic turns green for chlamydia, purple for genital warts, blue for syphilis, and yellow for herpes.

Seems like a good idea, right? Well, not really. I know it is an innovation that means well, but what hasn’t been anticipated is the conversation that happens before, during, and after the use of this new technology. What this could signal for the future of the quantified self and personal diagnostic tools, and the way we mediate important emotional exchanges through them, is vital in understanding where those socio-technical gaps lie, where the breaks and fractures happen, and to whom.

“Means well” technology seems to exist in isolation from how we normalize and understand objects; we never quite understand or use them the way the designer wants us to, because we are humans with doubts and fears and cultural ‘stuff’ that often rubs up against the technology that is supposedly meant to help us.

In the example of the color-changing condom: talking about STIs and contraception, and even sex, is hard for young people (and older ones), so why do we think adding more technology like this into the mix will make it easier? Can you imagine the conversation that happens when your partner recognizes the brand that uses this newly developed antibody, the one you’ve been secretly using to settle your fears? Or when the color changes, how do you talk about it without all of the embarrassment, shock, and shame? The short answer is: you can’t.

These designs don’t come with a guide on how to talk to your partner about these things, and there’s an argument that they shouldn’t have to. But as Scott Smith highlighted in his Emotion and the IoT talk at Thingscon in Amsterdam last year, good intentions can be problematic. Take the case of Samaritans Radar, which alerted you to signs of depression in your friends’ social media feeds, neglecting the opportunities for abuse of the most vulnerable that could arise. If your friends can be told you’re depressed, so can those not after your best interests.

After winning the TeenTech Award this year, one of the designers, Chirag Shah, said of the product: “It prevents [people] from getting embarrassed going to clinics, and [lets them] find out in the privacy of their own home.” However, this attitude to self-diagnostic tools is equally questionable, because again, what do you do with that information? If you discover that you have an STI, or another health concern signaled by a device, you still have to see a person to deal with the problem, or, as is unfortunately the case with many late diagnoses, not see anyone at all.

There’s another huge strand of design that comes under this umbrella of means-well technology and ultimately carries the same weight, overpopulated with gadgets and chemicals meant to protect: rape prevention technology. The minute this popped into design practice it caused controversy, and rightly so. Apart from placing responsibility on the victim, this thinking ultimately encourages designers to keep pursuing immediate, short-term detection as a desirable design characteristic rather than seeing where this solution causes more problems within an already complex system.

But as with the color-changing condom, what do you do when your nails do turn that particular shade of pink? How can you escape the situation without causing alarm, which may put you in the same level of danger and vulnerability? So often these myriad emotional, social, and cultural interactions aren’t considered because, well, it’s difficult, and because on an individualistic, preservation-oriented level, we use these devices to look out for ourselves.

We do not exist in a vacuum with the technology we choose; as in the case of Google Glass, other people’s technology happens to us. We are subject to someone else’s solution, which causes us problems that we are only starting to fully understand. So, if we develop further personal technologies of detection and protection, what does it mean to have this often one-way knowledge, to see without knowing what to do with what is shown?

Ultimately, the technology shows you a problem, but not how to deal with it afterwards, and it doesn’t prepare you for it. It attempts to nudge you into behaviors without knowing what behaviors it is nudging you out of. As designers, we know how we’d like our designed object to work: that you, or your user, are saved in the nick of time, or that you can have a good laugh afterwards about the STI that has been revealed. But we know that just isn’t the case.

Stigma, and our understanding of shame and normalcy, is at the crux of a lot of means-well technology. As Georgina Voss outlines in her talk “Sensitive Media” at Thingscon 2015, shame and stigma are socially constructed, and vary ‘massively by space to space’, with the definitions set culturally, not by the technology as an isolated thing. That difference between an object-as-designed and an object-in-the-world is like oxidization, with the systems that come into contact with it developing as rust.

Like some proposed date rape prevention measures, these technologies mean well, and in a typical use-case scenario (we need to stop having “ideal” use-case scenarios, or assuming that we are best placed to determine what a problem is) they work as intended. But what happens when they’re out in the world as it is for us now? Who steps in when the product “works” as intended?

Behavior matters, because if we start purely solving problems with technology, and not analyzing the social and cultural systems this technology exists within, we’re in trouble. Rather than designing purely for the solution, we should be looking at the narratives that build up around these things, and not just in focus groups and optimum use-case scenarios: looking at how people break things now, and make them fit their social norms, cultural mythologies, and group behaviors. Can you remember how many horror stories you heard as a teenager about sex and relationships, how many things you had to silently dispel as a young adult, fearful of being found out for your ignorance?

[Image: Paro, the robotic seal]

We’re already seeing this means-well technology in the areas where we need to be listening most: in situations of care. As colleague and friend Tobias Revell writes, and often mentions whenever I talk to him about Paro, our favorite* animatronic seal pup gradually being used as a companion for older people in Japan, it is an “algorithm in a seal-suit…sensors programmed to respond in a certain way — a disparity between reality and reality-as-experienced in a cute furry object — a constructed fantasy.

“It means well, but ultimately it normalizes the offloading of emotional labour into something less messy, and less complicated than a human.”

It will listen to your stories where your overworked, time-poor grandchildren, or partner, or carer won’t. It signals a future that we think is inevitable and unavoidable, a trajectory we will forever continue along, one in which these objects are a requirement. The same can be said of the excellent design fiction film Uninvited Guests by Superflux, which highlights the precise problem of pushing technology on those you think will need it, the elderly, when it is actually you that needs it, to validate the fact that you’re not a terrible child to your aging parents. It assumes that this is the only way to cope with an aging population, rather than looking at design as a means to enable, not placate.

A smart walking cane that forces 10,000 steps on you is not an enabler of health; it is a prison of homogenous expectation. Sara Watson, in her piece for The Atlantic, talked of the need for fitness technology to be dynamic, because the universally accepted metrics embedded in our understandings of fitness rub against something as acutely individual as a recovery process.

Evan Selinger and Brett Frischmann urge us, rightfully, to step away from the “programmable world”, where we are made predictable by the models of data collection enacted upon us, from social media to the internet of things, for the purpose of ease, betterment, or a richer understanding of humanity. If this means-well technology becomes a standard element of that world (which it arguably already has), with its catch-all application of kindness, helpfulness, or safety that doesn’t acknowledge the instability and variety of context and behaviors, then we are essentially designing for a version of the world in which the fire is always put out.

I’m not arguing that we should stop being altruistic in our technologies, but rather asking who we’re being altruistic for, and how. It’s difficult to know every circumstance in which your newly developed technology can be abused, or broken by clashes with socio-cultural behaviors and constructs, but knowing that it can be broken or abused is a good starting point. Acknowledging that your innovation will be inserted into a system, rather than being a disrupter or savior of it, might just help us navigate away from the homogenous, simplistic and “programmable” future that we’re mistaking for a preferable one.

*not our favorite.

Originally published on Thingclash.
