Our Critical Moment

A letter from Data & Society’s executive director

Janet Haven
Data & Society: Points
6 min read · Sep 21, 2022

Today we’re launching Data & Society’s three-year strategy and sharing the organizational priorities that will take us through 2024 — the tenth anniversary of Data & Society’s public launch. I’m writing to share those priorities with you, and to make a case for why they are so urgent right now.

In considering that urgency, I reflect on some words of caution my graduate school thesis advisor had for my American Studies cohort. In learning from history, he told us, we should recognize that people of nearly every era believe they are living through the “critical moment” — that their particular experiences and contexts will shape the future decisively, in ways other moments in history have not. That perception, he reflected, was usually inflated.

I think about that often as I lead Data & Society through a period that feels packed with critical moments, ones with global implications: the pandemic and its long-term impacts on mental and physical health and social connections; our growing societal recognition of the approaching climate catastrophe set against the paralysis of governments and the dominance of corporate interests; the worldwide reckonings in response to the murder of Black people at the hands of American law enforcement; the audacity of the rollback of reproductive and other rights by the highest court in the United States. I take my thesis advisor’s warning seriously and try to keep these times in perspective. Yet I can’t shake the conviction that our moment is a critical one.

Society’s relationship to technology

It is, undeniably, a critical time for our field, which intersects and collides with these urgent issues, along with so many others. In all of these areas, data and technology are held up as sources of truth and clarity, as a pathway to objective “solutions,” and as shortcuts to building societal trust. Yet we know that data sets are incomplete, and that they bear the biases of the humans who construct them. We know that seemingly impartial decision-making, based on the guidance of algorithmic systems, in fact replicates and expands inequality, denies human autonomy, forecloses access to justice, and impacts lives and communities in ways that are only beginning to be understood.

Corporations and tech platforms have tremendous power, but we’re facing problems that these companies, and global capitalism, cannot solve — and indeed are likely to exacerbate. How societies choose to design and govern data-centric, predictive technologies — whether and how we rise to meet and find solutions to these critical moments — will determine our collective future. It is clear that we need nuanced, creative responses to the complexity we will continue to encounter in a data-centric world. And it is clear that those responses will in turn require norms and laws that defend and expand equity, justice, and human dignity. In the face of immense and overlapping challenges, creating real change will also require new kinds of social contracts and alliances, and new centers of power. Simply put, we cannot afford for the future to be determined by what powerful platforms are, and are not, willing to do.

That is why our strategy focuses on groups and institutions other than big tech — namely governments, communities, and social movements, which each hold and wield their own power in designing and governing data-centric technology. Building on a long history of both research and policy advocacy, our work on the Datafied State examines the role governments around the world play in shaping society’s relationship to technology. As governments adopt and use automated decision-making systems, what values will they prioritize? Will they acquire and use data in ways that build trust in institutions, or in ways that inhibit it? Through empirical research and policy engagement, we seek to broaden these questions to consider how the adoption and use of data-centric systems and automated technologies shape societal trust — and open or foreclose access to rights and opportunities. The rigorous work of many scholars, both within and beyond our own network, has deepened our understanding of the impacts and harms that unfold at the intersection of automated systems and government services and functions, including how these systems impact individuals at precarious moments in their lives. This work, collectively, has underscored the urgency of designing systems of accountability for AI and automation in the public interest.

To take one example, Data & Society’s AI on the Ground team is putting this idea into practice through their work to create public interest methodologies for assessing algorithmic impact. This is a vital element in the broader ecosystem of accountability tools and experiments, aspects of which many in our field are working to design and test. Yet we still lack a broader frame for understanding the role that data-centric technologies play in state functions, and in different political environments. Our just-released publication, a “Primer on AI in/from the Majority World,” led by principal researcher Sareeta Amrute, researcher Ranjit Singh, and senior producer Rigoberto Lara Guzmán, outlines a set of themes and offers a syllabus to explore the emerging conditions of living with data systems and their connections with state and private power outside of the global North. Over the coming year, we plan to publish an anthology of key concepts on the Datafied State, edited by Director of Research Jenna Burrell.

Participation and representation

Our strategy foregrounds the power of communities, historically disempowered groups, and social movements in the drive to build equitable and just systems of algorithmic accountability. The stakes could not be higher: Even as governments begin the long process of enacting laws and regulations to govern algorithmic systems (such as the European Union’s expansive AI Act), the essential questions of who can participate, how meaningful participation happens, and the effects and trade-offs of participatory governance remain woefully unaddressed. Our work on this is rooted in critical scholarship and community organizing led by scholars and activists of color. We are particularly driven by the rigor and range of scholarship coming from the Critical Race and Data Studies community at NYU, founded by Data & Society board member, scholar, professor, and dean Charlton McIlwain; from examples of participatory research and organizing projects such as Our Data Bodies; and from the leadership of peer organizations including Data for Black Lives and the Algorithmic Justice League.

Our work on participation, agency, and accountability seeks to contribute new scholarship, methodologies, and practice to these vitally important questions. Data & Society’s Trustworthy Infrastructures program, led by Dr. Sareeta Amrute, shifts the frame of debates on how to “solve for” disinformation to a set of participatory explorations of existing community practice in creating secure online infrastructures and experiences.

Through dedicated media and policy engagement, our goal is to extend the reach of this rigorous work by connecting it with the ongoing public discourse. In that discourse, the narrative of techno-solutionism is pervasive and damaging: it directs vast amounts of money to inadequate solutions for our most fundamental problems and forecloses meaningful debate on the complex, values-driven trade-offs we desperately need. We insist that technology is neither the source of, nor the solution to, the host of societal challenges we face; it is neither neutral nor inevitable. Society impacts technology, and technology impacts us and our communities in turn. Our research offers evidence to challenge the prevailing techno-solutionist narrative, and our communications and engagement work identifies ways to apply and amplify that evidence.

A critical moment of opportunity

None of this is to say that technology can’t play a role in addressing complex societal challenges — we know it can. But we must be attentive to the nuances of this relationship, the implicit and explicit trade-offs, and the assumptions embedded in them. As Jenna Burrell and Marion Fourcade argue in “The Society of Algorithms,” the rise of a “coding elite,” and related ideas about the objectivity supposedly embodied by algorithmic tools, conceals the specific values of those who write the code and the unprecedented power they exert. By recognizing those values, and bringing them to the foreground, we can understand their influence — and debate and reconsider them.

Attention to the social implications of data-centric systems is too often treated as an add-on to advancing technical innovation. It should not be. In research, in academia, in policy, in media, we need a fundamental shift in priorities that recognizes the societal impact of technology as central to just futures. At Data & Society, we believe these systems can and must be grounded in equity and human dignity, and that our role, as part of a vibrant and growing field, is to illuminate the stakes and help shape solutions.

Whether history will see this as a critical moment or not, we are treating it as such. The decisions we make today about data-centric and automated technologies both impact real people and their opportunities, right now — and hold sway over the next critical moment, and the one after that. That recognition presents our field with tremendous power and real opportunity. I am hopeful that we can seize it, together.

Visit our website to explore our full strategy.
