Provocation #3: There is no plurality without ambiguity

PROVOCATIONS
Jun 14, 2023


Data-intensive systems such as machine learning and artificial intelligence are often portrayed as having univocal architectures and binary (namely yes/no) reasoning. For example, the metaphorical discourses of code as law and algorithms as recipes depict digital technologies as incapable of encouraging nuance and dialogue.

However, the work we have been doing at CGHR suggests that communication technology design can — and must — embrace ambiguity in the encoding and decoding of data. Just like law and recipes, code affords ambiguity, even if dominant discourses about all three intimate the opposite.

In our experience, ambiguity can be a generative force when approaching data.

Ambiguity makes space for us to think up and create technologies that, rather than transforming public voices into mere inputs for pre-programmed or hermetically sealed tasks, align with interpretive openness and plurality.

In this case, plurality means that no machine has the last word and that different and diverging meanings and interpretations coexist — namely, ambiguity. This is the world in which we actually live.

Though there is always power in technology, its flow is less fixed the more ambiguity is afforded.

Balancing categorisation and interpretation

Designing for ambiguity involves achieving a healthy tension between categorisation and interpretation in technology-building.

Data-intensive technologies require a certain degree of categorical abstraction, of removing some layers of complexity to collect and aggregate data points, identify patterns and feed algorithms to support analysts in turning data into knowledge. Such processes often follow the logic of classification (or coding), which means developing variable categories that inevitably simplify humans and their experiences, render homogeneous what is different and privilege some worldviews over others.

However, this is not an all-or-nothing dynamic.

It is also possible to purposely imbue technologies with tolerance for ambiguity. This is about coding (in the computing sense), but it is also about allowing for nuance in the encoding and decoding of data (in the communications sense). Encoding is about shaping the meaning of the message, while decoding is about interpreting that meaning.

The more freedom the encoder is given to make their message on their own terms, including by embedding interpretation cues, the better the decoder can understand the encoder — but also the messier the data and the more heteronomous the work of decoding becomes.

To facilitate this, technologists must be comfortable with ambiguity. Data creation should start with fundamental principles of interpretivist social science, especially that of listening.

In addition, technologies can render the processes of representation and interpretation visible as different actors (or programmed actions) make sense of data — from inception to processing to translation into outputs. When operating in this way, the idea of a closed canonical structure is replaced with an open process that makes room for contestation and re-interpretation. This can include creating channels for those represented by data to challenge this very data.

For data-intensive technology, designing for ambiguity means letting go of hard and fast categorical rules and centring communication and interpretation.

Ambiguity in practice

As we learnt through Africa’s Voices, a good balance of classification and ambiguity allows for engaging with publics and analysts in a way that respects their agency and enables recognition.

CODA, the open-source qualitative coding tool developed for this project by Cambridge computer scientists associated with CGHR, is testament to how technology can be designed with ambiguity at the fore. CODA provides a shared interface in which team members expert in local languages label text message data, supported by machine assistance. It prioritises their rare interpretive skills, makes the provenance of their interpretive acts legible to others and allows for iterative reinterpretation as ambiguities are renegotiated.

Along with enabling ambiguity in decoding, CODA’s interface and machine assistance allow for more ambiguity in the public’s encoding of messages, because the system can support a heterogeneous variety of messages in local language and expression.
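To make this concrete, the description above can be sketched in code. The following is a minimal illustrative sketch, not CODA’s actual data model: the class and field names are our own assumptions. What it shows is the core idea the text describes — a message can accumulate multiple, even conflicting, labels, each carrying the provenance of the interpretive act, and relabelling appends rather than overwrites, so earlier readings remain visible for renegotiation.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Label:
    """One interpretive act: who coded the message, as what, and when."""
    code: str           # e.g. "water_access" (hypothetical category)
    coder: str          # analyst or model responsible — provenance
    timestamp: datetime
    note: str = ""      # room for the coder's own interpretation cues

@dataclass
class Message:
    """A text message that tolerates multiple, even conflicting, labels."""
    text: str
    labels: list[Label] = field(default_factory=list)

    def relabel(self, label: Label) -> None:
        # Append rather than overwrite: earlier interpretations stay
        # visible, so reinterpretation never erases provenance.
        self.labels.append(label)

    def readings(self) -> set[str]:
        # The message's current, possibly plural, set of interpretations.
        return {label.code for label in self.labels}

msg = Message("Hapa maji ni shida")  # a message in a local language
msg.relabel(Label("water_access", "analyst_a", datetime(2023, 6, 1)))
msg.relabel(Label("infrastructure", "analyst_b", datetime(2023, 6, 2),
                  note="reads the message as a complaint about supply"))
print(msg.readings())  # both codes coexist; neither has the last word
```

The design choice worth noting is the append-only label list: categorisation still happens, but no single coding act closes the message down, which is exactly the balance of classification and interpretation the section argues for.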

Another project of ours that prioritises ambiguity is The Social Life of Data, a web-based experience coded by CGHR intern Jamie Hancock for The Whistle. The Social Life of Data operates on the meta level, in that it is about ambiguity itself. Here, the user is invited on a choose-your-own-adventure journey of one bit of data as it travels between humans, machines and contexts, revealing the interpretative nuances underpinning its decoding in different settings.

Rather than strategically ignoring the always ambiguous processes of encoding and decoding data, these two projects intentionally share the decisions and nuances involved in making knowledge into data and data back into knowledge.

The virtues of ambiguity

Certainly, designing for ambiguity, and openly so, will not suffice for overcoming the broad range of inequalities brought about by digital technologies. However, this principle can advance pluralism as an accepted norm — as well as pluralism in practice.

First, ambiguity undermines problematic universalist paradigms that associate technology and data with neutrality and objectivity. As mentioned earlier, plurality happens when multiple and diverging voices are allowed to coexist in digital environments. In such a case, no single voice is the centre, nor does any have privilege over the rest.

Second, embracing ambiguity makes it possible to avoid the dynamic in which a rush to settle knowledge controversies ends up privileging the most powerful voices and their interpretations. This is particularly relevant in the so-called post-truth era, when demands for clear-cut certainties have become the orthodox order of the day. Prioritising ambiguity allows us to sit in the knowledge controversy for a bit longer, and in so doing, to relish the critical spaces that the controversy opens up for us to interrogate power and knowledge.

Third, we see ambiguity as linked to the principle of ambivalence that, following feminist thinking, constitutes a condition for reflexivity and inclusivity. Ambivalence is difficult to define, but it could be said that rather than seeking to ‘solve’ contradictions, it calls for embracing discomfort, staying with the trouble and being open to our own vulnerability. Like ambivalence, ambiguity makes things slower (as slow tech design also does!), and in doing so it increases the opportunities for critical awareness and reflection.

Ambiguity for voice

Designing for ambiguity challenges many of the dictums accompanying developments such as big data and the current wave of artificial intelligence. Indeed, it is the opposite of machine-intelligence-based chatbots that converse with confidence yet are riddled with the inaccuracies dubbed hallucinations.

Whereas existing technologies and epistemologies privilege opacity and speed, ambiguity embraces openness and taking the time. Like slow tech design, ambiguity goes against the tide of real-time, automated data processing occurring in obscure algorithmic black boxes. Designing for ambiguity opposes the rush to settle and categorise that’s characteristic of binary and positivist epistemologies.

In sum, ambiguity makes space for us to speak and be heard on our own terms, as well as to hear other voices on their terms.

It provokes questions about whose voices and views shape our knowledge, how such voices and views come to matter and how else we might understand our world.


Rethinking tech with rights practitioners and civic activists. By the Centre of Governance and Human Rights at the University of Cambridge.