Lunch №4: Designing Systems, Software, and Values.

In this series, I am documenting a string of lunches hosted by bUm, Berlin’s new hub for civic engagement. We started the project as a way to gather people interested in the intersection of digitization and what it means to live a good life. As it evolves, we hope to inspire conversations about how we use technology, while offering some insights from people who work in this field every day.

Lunch at bUm, Joana Breidenbach introduces the project.

Appetizers

Our fourth conversation was the first event to officially take place at bUm! bUm is an old energy conversion site that has been renovated to become a hub for social and political engagement. Originally, it was rented by Google with the intention of building a Google campus in Berlin. However, after local protests over the gentrification and corporatization of Kreuzberg, the space was given to two local social organizations, betterplace lab and KARUNA. The space is now used to host groups, events, and projects that promote civic engagement and creative solutions for the future.

The Digital Table, a project which seeks to understand how digitization affects our lives and societies, was a perfect event to kick-start the space!

The focus of this lunch was on values and design. How are digital spaces designed to either promote or deter value-based interactions? What do we mean when we talk about values and digital spaces? Are we talking about our own personal values, the values of tech companies, societal values, or something altogether different? To better understand this topic, we invited a group of people who have thought deeply about these questions and come to some interesting ideas about how we might want to engage online.

Tell me something personal…

Joe Edelman, a social scientist and designer, started off the discussion with an exercise. He asked us to go around and speak about one personal value that we had lived up to in the past few days, and one that we hadn’t. As everyone spoke I felt a sense of connection, especially when people shared the values they hadn’t lived up to. It helped me see where people wanted to go in their lives — and what kind of people they wanted to become. The values we talked about varied from how we wanted to be in relation to others, our work, or our ability to be self-disciplined.

For example, the value I had lived up to was allowing someone in my life to freely make their own plans without interfering or micromanaging in my own interest. The value I didn’t live up to was expressing my frustration when small but uncomfortable encounters accumulate. After sharing these thoughts, the group was able to show up to the conversation in a more nuanced way. Something from our personal lives had entered the space and created an environment where ideas could be shared more freely. We had all managed to impress and disappoint ourselves in some area that was personally meaningful — and hearing these declarations is a point of surprisingly strong connection.

Human Systems

Collages by Lola Dupre

How then, does this kind of connection happen in digital spaces? What are the mechanisms and systems that either enable or deter it? And why is it important that people connect through personal values in the first place?

Digital well-being and ‘tech-ethics’ are topics that have gained traction in the past few years, and they show up in many different ways: the buzz about digital and information diets, for example, or a kind of ethics washing at large companies that makes them appear good and boosts employee morale while changing little in their actual processes. Another kind of tech-ethics is the attitude of preventing worse offenses. This leads to employee walk-outs at large tech companies like Amazon or Microsoft, and while these acts of protest are really important, preventing something bad doesn’t get us to the future we actually want.

It was this sort of culture that led Joe Edelman to leave the tech world of Silicon Valley and begin imagining a different kind of tech-ethics in collaboration with organizations like the Center for Humane Technology. Now, Joe works as a philosopher, social scientist, and designer who believes the current political moment demands a restructuring of human systems. To do this, we have to understand how humans behave and what is important to them.

This is where the discussion around values comes in. In order to sort out what is important to us, we need some structure, and values provide a language to negotiate and understand our needs, desires, dreams, and ideals.

Rather than creating a veneer of ethical behavior in the workplace, Joe is interested in designing technology to become what he calls ‘livable technology’. This means technology that moves and evolves with us as our values develop and mature. For this to happen, we need infrastructure and software that support a positive vision for tech.

Values over Preferences

Photo by Dewang Gupta

At the moment, most social software (like Facebook, Instagram, and Twitter) and productivity tools (like Slack or email) are concerned with the preferences of users rather than with their values. This happens for many reasons, but mainly because the statistics collected about users are based on their behavior, not on their values.

Think back to the sharing I described at the beginning of this post, where each person described a value they had or hadn’t lived up to. If what you want from life is measured solely by your behavior, all of those moments in which you don’t live up to your values are read as indicators of your desired way of being. Of course, this doesn’t correspond to how we actually want to behave in the world.

Behavior can express a lot about user preferences in any given moment, but values are much more difficult to measure. When you like a comment, swipe left or right, or share a post, these are all behaviors that can be clearly measured. What doesn’t get measured is whether those behaviors are in line with your values, with how you would like to be in the world. In order to understand user values, you have to ask.

This is why the systems through which we communicate and live online (our online cities if you will) have to change in order for people to feel more seen and empowered in digital spaces. We need digital infrastructure which promotes interactions based on values rather than on preferences. To create this, tech companies have to change the way in which they collect user data. They have to ask their users about their values.

Values vs. Ideological Commitments

Photo by Jonathan Poncelet

Now, you might be thinking “should we really be supporting ALL values? Don’t some lead to hate speech, misogyny, homophobia, religious persecution, etc.?” When we talk about values, the question of how to tolerate intolerance inevitably arises. We mean a lot of things by the word value, but when Joe talks about value-based technology, he is talking about personal values, not social or collective values. To clarify this, he calls the collective values ideological commitments. Ideological commitments include things like misogyny, racism, feminism, and being a liberal or conservative.

Personal values are more like visions for how you want to be in the world; they are much more subtle and embedded in our psychology. They can be seen in statements like: I want to be kinder, more honest, tougher; I want to tell people I love them more, have more self-discipline, be more flexible. They don’t suggest any kind of ideology or prescription for how the rest of society should live. Personal values are something much closer to plans and intentions for your own life. However, Joe did make a point to say that many ideological commitments might boil down to personal values such as autonomy or power, and he believes that these underlying values are the right thing to support.

What to do with Conflict?

What happens when there are conflicting values? Laura Haaber Ihle, a visiting research fellow at Harvard, brought up this question. For her, ethics comes in when you have a conflict in values, either in yourself or between people.

When we talk about software or algorithms, we want there to be a systematic way to address problems. Large numbers of people can’t be served if solutions must be determined case by case. Can there be a systematic or structural way to deal with value conflicts online?

Tina Egolf, an entrepreneur and expert in the future of work, talked about the power of choice when it comes to resolving conflicts. If my values conflict with a platform I am using, say Facebook, I should have the choice of whether or not to join that space. If I don’t have a choice, if there is a monopoly for example, then this is where trouble arises. Similarly, Joe talked about how having conflicting values within yourself isn’t really a problem. There might be some difficulty or internal struggle, but that is what making decisions, and having the freedom to make decisions, is all about. However, if your values are being suppressed, if you aren’t able to act on them due to external pressure like an authoritarian government, software limitations, or social pressure, then the problem is no longer about conflict but about suppression. When we have value conflicts we might experience frustration or confusion, but when we face suppression we are immersed in emptiness and meaninglessness. This, I think, is of far greater concern.

Systemic Conflict

Value conflicts become slightly different when they are no longer the values of the individual user, but the values of companies or organizations. Small organizations, like betterplace, have certain values, such as transparency, that are part of the initial impulse to create the organization. If you agree with these values, you can choose to join as a user or not.

However, the financial systems in place also have many predetermined values that make it difficult for start-ups or small organizations to develop independently. Rewards and benefits are so firmly tied to the values of the financial market that if you stray seriously from that world, you lose investment, users, and recognition. Therefore, social start-ups like betterplace have to fight for their values to be recognized, and are often forced to adapt to what investors and stakeholders require.

When it comes to larger tech corporations (Facebook, Amazon, Google, Apple, etc.), I think the distinction between user values and company values plays out quite differently. Companies like Facebook and Google are so large and ubiquitous that they have become much closer to public utilities. Because of this widespread influence, their freedom to determine their own values is murkier. They should perhaps be more neutral platforms on which personal user values can be expressed. Again, these are not the same as ideological commitments.

User Experience: into the tech

Photo by Brett Jordan

What do these ideas look like in the actual technology? Max Senges, an ethicist at Google, wanted to understand how this value-based vision would feel for the user. What are the tangible changes? Would there be different settings in apps, or preferences you enable on your device? So much of this already exists, he said, what would be the real difference?

Joe described a series of design strategies that developers and designers could put into place. These are not acts that the user would take, but changes in how tech companies design their products and gather user data. It would include, for example, many more text-based surveys, which are slower and sometimes more expensive. You would have to ask users what they want this space to be, and what is important to them. There would also have to be changes in corporate structures. For example, raises, promotions, and recognition are often predicated on making design decisions in line with the old preference-based metrics. If you want to change how metrics are gathered so that user values are prioritized over preferences, you have to change the structure of rewards and benefits that keeps designers making those decisions.

On a larger scale, however, there are systemic structures that might need to shift in order for this value-based vision to blossom. One of those structures, Joe mentioned, is free-market capitalism. It is unclear to me whether the system Joe described is really possible in our current financial system. Further, saying that these structures might need to shift feels rather unsatisfying and vague. Of course, this is a very important point that I feel needs its own lunchtime discussion.

Shared Values/ Society

Simone de Beauvoir — I chose this photo because I think de Beauvoir is someone who understood the depth of this topic quite brilliantly. Credits: By Moshe Milner — Crop of File:Flickr — Government Press Office (GPO) — Jean Paul Sartre and Simone De Beauvoir welcomed by Avraham Shlonsky and Leah Goldberg.jpg, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=39952804

The last idea I would like to discuss came up briefly in the closing of our discussion. It didn’t get much space, but it is a question that I am personally interested in:

How much are these values that we call ‘personal’ really our own, and how much do they even need to be? I think many of us feel that our personal values are not totally separate from social influence, but we also like to take credit for developing them ourselves. We tend to notice this even more acutely when we analyze other people’s values. We say, ‘Oh, it makes sense he values politeness so much: he’s from Texas.’ (This is something that I have actually said about a friend.)

In that moment, I gave the culture around my friend much more credit for his values than I gave his own free will. We like to imagine that we are in control of our values, that we have worked hard to define and live up to them. However, the line between our own ability to create values and our surrounding culture is so thin that it is difficult to decipher which belongs to which. I don’t necessarily think we play zero role in defining our personal values, just that the attempt to strip societal influence from our core personalities, behaviors, and values is a subtle and complicated game. I am not even sure what is left when these influences are stripped away, or if stripping them away is possible at all. Do we get down to something that is authentic and truly individual? Or is this something more like a western male dream that idealizes the untethered, free human living solely by (his) own free will?

From this last sentence you might be able to guess how I currently feel about this topic. However, like I said above, I find this question to be fascinating and endlessly unclear. To close, we might ask how necessary it is to understand the difference between personal vs. social values in order to develop tech and digital infrastructures around it. I don’t know the answer to this question, and feel it is a good place to leave our first series of lunchtime debates.

Thank you to all of our wonderful participants!

  • Joana Breidenbach: Joana is a trained anthropologist and social entrepreneur. She co-founded betterplace, betterplace lab, and Das Dach, and is the co/author of many books including New Work Needs Inner Work.
  • Ben Mason: Ben is the project lead for “Digital Routes to Integration” at the betterplace lab.
  • Siena Powers: Siena is a writer, researcher and podcast producer at Das Dach.
  • Max Senges: Max is a philosopher and Lead for R&D Partnerships and Internet Governance at Google.
  • Laura Haaber Ihle: Laura is a research fellow at the Harvard Department of Philosophy, where she focuses on the questions that arise in the intersection between AI, ethics and knowledge production and dissemination.
  • Joe Edelman: Joe is a philosopher, social scientist, and design researcher at Will & Intent. He has a background as an entrepreneur, game designer, and engineering manager at CouchSurfing.
  • Tina Egolf: Tina is an entrepreneur and an expert in the future of work and product strategy. She founded workmaker.labs, and is currently working on THE GUILD.
  • Franziska Stiegler: Franziska is a psychologist working with mental health in the workplace.
  • Hannes Kloepper: Hannes is first and foremost an entrepreneur. He co-founded iversity and is now the CEO of GET.ON institute.

As far as necessary, all rights to the images used here have been clarified with the artists or producers. For some images I have paid a small amount of money. For others I made a donation to non-profit projects in agreement with the artists. However, most of the creators agreed that their works are used here free of charge. I’d like to express my gratitude to all of them.


Siena Powers
The Digital Table: An exploration of wellbeing in the digital age, over lunch.

Siena is a freelance writer and producer working with Das Dach and the betterplace lab. She lives in Berlin, DE.