Part I introduced care ethics and care politics, Part II looked at care work and care issues in design and engineering. Part III discussed what Star Trek and Cheers tell us about care and the workplace. This is Part IV, where I conclude and talk about next steps.
Subscribe to my Tinyletter to get occasional updates on this project and other things I’m working on.
Thank you for bearing with me while I discussed television. Hopefully, you all found that exercise helpful in thinking about how different forms of care work are set up, constructed, and utilized in a broader scheme. There are different infrastructures of care in play in each of these examples, different ways in which care has been accounted for, neglected, outsourced or resourced, bought or sold.
As I conclude this talk, I want to bring up a few points raised earlier. First, the limits of the current system of valuing care work in technology, and what we call “technical” work. Second, recognizing the value of care work. And third, how we might integrate care ethics into our current open source activism.
I believe that we can take steps toward recognizing the value of care, and of the care work, both paid and unpaid, that contributes to making technology. And in doing so, we can begin to put different leadership mechanisms in place, with the ROI of building better technology and better communities. Moreover, I think Open Source communities, facing what Audrey Eschright describes as the problems and challenges of the third wave, can innovate and lead in addressing care work.
Before I conclude, I want to bring attention to one of the most harmful implicit assumptions we make about care work: the characterization that it is “not technical” in nature.
Tech culture tends to call things “technical” that it prizes, and to characterize all other necessary labor as “not technical”. Kate Losse, an early support worker at Facebook, describes a common pattern in start-ups: free labor from female friends “doing everything from recruiting engineers from their social networks, mediating founder relationships and disputes, providing product feedback, designing social events” often leads to organizational cultures that devalue such work, “considering it optional or “fun,” perhaps a matter of social obligation, rather than serious and valuable.”
In this scenario, work defined as “technical” is limited in scope, highly valued, and considered the sole responsibility of engineers, while “non-technical” jobs are: “often expected to be freely performed by people who may have other jobs at the company (such as administrators) or who may not have any job at the company at all (such as girlfriends and dates).”
Yet, this is a purposefully opaque, gendered, and inequitable interpretation of what it means to do technical work, conflating “work that can be called technical” with “work people called engineers do”.
Care work requires specialized knowledge and practical problem solving, which is, by definition, highly technical. Being a competent care worker, and I think we can all think of examples, requires exhaustive knowledge of the condition at hand, of the environment that surrounds it, and of how to manage the people involved.
The sociological definition of care I shared earlier describes three main components to the work: instrumental, emotional, and informational.
Instrumental care work is the kind of work one does just to keep things running, and in tech settings it runs the gamut from preparing and serving food to troubleshooting user issues to addressing service outages. Instrumental care work requires quickly gaining a depth and breadth of knowledge, often situated in the workplace. For example, in order to answer support questions, one has to know not only the functions of a product, but also its inner workings. If you’ve ever worked in a restaurant kitchen or on a maintenance crew, you’ll know that the technical details of order, process, and task mastery are key to doing those jobs.
Emotional care work includes the work of listening, communicating, and being present. It is similarly widespread: it’s key to any form of management or delegation, any sort of engagement with humans (and, for that matter, machines). It’s also most of what you do when you’re caring for a sick person. Hearing concerns, addressing them, rephrasing and reiterating. All of this also requires a technical command of language, of process, of what is possible and what is not, of the limitations of both technology and organizations.
Informational care work is perhaps the most reliant on technical bodies of knowledge, as this is the sort of work one does when one puts into action information or guidelines in order to provide care for someone or something. Babyproofing your house is a form of informational care work, as is preparing meals for your friend who just had heart bypass surgery and can’t have any salt or dairy or red meat. As is writing user stories for a feature. Each relies on having learned something and then applying it.
Recognizing care work means recognizing how difficult it is, recognizing the skills it requires, and recognizing the knowledge embedded in it. It also means recognizing what care makes possible.
Care work’s impact
Care work has an economic impact. As the economist Heather Boushey writes:
American businesses used to have a silent partner. This partner never showed up at a board meeting or made a demand, but was integral to profitability. That partner was the American Wife… She took care of all the big and small daily emergencies that might distract the American Worker from focusing 100 percent on his job while he was at work… This meant that for decades, the American Wife gave American businesses a big, fat bonus. Her time at home made possible the American Worker’s time at work…This unspoken yet well-understood business contract is now broken.
Care work benefits us all, it’s required of most of us, and as Boushey and others have pointed out, the capacity for care is a matter of economics. At a societal level, we have fewer resources for care than ever before. Yet why does our thinking about it seem so grounded in an alternate reality, one that relies on unpaid, underpaid, and undervalued work?
What if, instead of trying to solve or hack care problems, we accepted them? What if we took an approach that assumed care work was part of all of our lives? What if we did this for our work in building technology, but also took it into account when we built it for others?
This would mean making serious commitments to planning projects in a way that took into account people’s lives, being extremely judicious about work travel and after-hours expectations, investing resources into care, and holding folks accountable for doing so.
But in order to accomplish this, we would need a business case. We would need data, and documentation, and stories around it. Is this a project for the open source community?
This would also require more articulation than we’ve done so far, and perhaps some standardization. Which leads me to my final point, on thinking about codifying some ideas about care ethics and technology.
Codes of Ethics
Technology workers have an understandably ambivalent relationship with professional identities. I get it. But if we are going to talk about the work we do and our own ethics, professional codes of ethics are something we should acknowledge.
Last year, I interviewed an RN as part of a user study. As I asked her questions about her work and life, one really poignant thing she shared was how HIPAA regulations impacted patients’ charts. If, for example, a patient had not authorized their family members to access their medical information, she could not share it, even if they asked.
I was struck by this, especially because I’d never had a conversation with a security engineer this straightforward, this practical, this principled, about users, their data, their privacy and security. Nurses and medical professionals have both professional codes of ethics and legal regulation around the administration of care and technology. Why don’t we?
What would it mean to have a serious conversation among technology workers about our responsibility for our users’ privacy and security? Could we build out codes of ethics? Could we articulate design principles?
Open Source and Radical Caring
Is it possible to reimagine technology, and open source technology in particular, in terms of care ethics? What is our responsibility to each other, to our users, to the world outside of our communities?
Open source projects function as ad-hocracies, with those who have the most free time to contribute having the loudest voices and the most say in a project’s priorities. I’m certainly not the first to say that this is a system with several inherent flaws that often reinforce unfortunate care work dynamics.
And there is care evident in open source work: in cultivating community, in reporting and documenting bugs, in ensuring performance and stability. In learning and providing safe spaces to learn.
Could we see care as a venue for activism and resistance, as part of our push for open source’s future?
The performance artist Harry Giles recently theorized that care holds radical possibility:
in a political situation in which care is both exceptionally necessary and exceptionally underprovided, acts of care begin to look politically radical. To care is to act against the grain of social and economic orthodoxy: to advocate care is, in the present moment, to advocate a kind of political rupture. But by its nature, care must be a rupture which involves taking account of, centring, and, most importantly, taking responsibility for those for whom you are caring. Is providing care thus a valuable avenue of artistic exploration? Is the art of care a form of radical political art? Is care, in a society which devalues care, itself shocking?
Maybe that is a fitting theory for a conclusion: that by thinking about care, doing care, valuing care, practicing and designing for care, we can shock these systems that surround us, find their vulnerabilities, improve on them, and make life, and technology, better for us all.