Ethics and the Future of MIT’s New College of Computing

MIT students protest the inauguration events for the Schwarzman College of Computing, on Feb. 28, 2019

The Inauguration of MIT’s Schwarzman College of Computing

This is a potent image. Visceral, urgent, reverberating with echoes of the 1960s and the student protests of the Vietnam War. Fifty years later, students are still protesting Henry Kissinger and, more broadly, the inauguration of MIT’s new College of Computing. The College is dedicated in honor of Stephen A. Schwarzman: advisor to Donald Trump, CEO of the Blackstone Group (the largest private equity firm in the US), friend of the Crown Prince of Saudi Arabia, Mohammed bin Salman, and beneficiary of the oil-fueled Saudi sovereign wealth fund. Schwarzman gifted $350 million of his $12 billion net worth to make the College possible.

These facts are particularly bitter because “ethics” is (ostensibly) central to the College’s ambition. “The College’s attention to ethics matters enormously to me, because we will never realize the full potential of these advancements unless they are guided by a shared understanding of their moral implications for society,” Mr. Schwarzman stated in an MIT News piece. The word “ethics” floated throughout the three-day inaugural celebrations in much the same way — with a few exceptions, it was nothing more than an aesthetic gloss. It was a gesture toward vague debates that must be had (sometime tomorrow) and, even worse, a brittle way of legitimizing all manner of statements like “as we careen toward automation, the ‘future of work’ is a matter of ‘ethical AI.’” The rhetorical focus on ethics only threw the protests into sharper contrast. The flat treatment of deeper issues by the event speakers and the MIT administration vindicated the critics.

To say the least, I was conflicted as I took this photo. I was crossing Massachusetts Ave., walking against the stream of protesters, on my way to participate in the College’s inauguration event. This photo captures a potent ideological choreography.

And I have been struggling with that symbolic crossing in the weeks since the event. The stakes are impossibly high. The future of computing. MIT’s history of defense contracts, Lincoln Labs, DARPA. Oil money. The direction of research and technology. Oppressive regimes that systematically violate human rights. Spaces for debate. Spaces for collaboration.

Can we hold out our hands to accept this kind of money while opening our mouths to utter the word ethics? Can we work toward building a better system, from our position inside of an existing one that we know is broken? Should we walk to the right or the left? Yes, the stakes are impossibly high.

An Ethical Bind

Before I write anything more, I want to make my position on these initial questions clear. This post is nothing if not an emphatic indictment of:
 – the financial industry and the global political system as they exist, insofar as they create the systemic possibility of accruing obscene personal wealth and normalize such a pursuit,
 – Kissinger’s actions during the Vietnam War,
 – the Saudi Sovereign Wealth Fund, its source of money, and its investment portfolio,
 – the human rights violations perpetrated by the Saudi administration,
 – the human rights violations perpetrated by the United States administration.

But I crossed the crosswalk. The nostalgia of this photo is proof that the tension I am feeling (and, I hope, all of my colleagues at MIT are feeling too) is the contemporary manifestation of a very old, very familiar paradox: is it best to work within an objectionable system, in the hope of changing it, or to take to the streets in protest? Here at MIT, controversy flared only a few months ago around research and financial ties to the Saudi government, as it has flared in the past.

That this is an unsolved, and perhaps unsolvable, problem is important. Both perspectives are valid. Both can be argued. And, as I describe below, I hope that both will be treated as legitimate, and that both will contribute to the agenda of the Schwarzman College of Computing as it emerges. But I doubt that MIT, as a community, could arrive at a consensus between the two poles; they are too polarizing. Indeed, the (ongoing) protests, the open letter, the MIT administration’s un-nuanced response, and the power dynamics at play all make a clear consensus difficult to imagine.

So perhaps this symbolic crosswalk isn’t the most important axis. Perhaps the paradox is unsolved / unsolvable because neither side directly affects the root of the issue: the objectionable political and economic system that generated Schwarzman’s money. Walk right, walk left, the system keeps doing its damage.

Drawing a New Axis

Instead of debating whether to accept the money or to reject it, we should be asking if — and imagining how — the morality of global political / financial infrastructures can be distinct from the morality of the technologies they enable. In other words, we need nuanced and transparent ways of thinking about the objectionable system, and what MIT does with the money it receives.

In short, we need a new axis.

It is not enough to gesture toward ethics, or state that “the event will allow for many voices.” The important question becomes: can the MIT administration confidently state “another, a better, world is possible!” and can it create the conditions for this College to meaningfully work toward an alternative system?

The answer isn’t immediately clear. And the step sideways cannot be taken lightly. As we, the MIT community, contemplate a new axis, we must understand exactly what it implies and demands. I suggest that we ask these questions, and weigh the answers seriously (*):

1. What are we working for; what are we working to change?
Can we maintain a vision of what, specifically, we are working toward — in this case, ethical AI and how it can meaningfully advance a better political / financial system — and stay away from purely aesthetic uses of the term? Can we maintain grounded and (again) specific principles that circumscribe what we reject (in this case, obscene inequality and human rights violations) and the systems we hope to change (the oil and gas industry and oppressive governments)?

2. Did we decide those priorities openly and collaboratively?
Do the above two positions arise from open discussion, and accommodate radically democratic deliberation? Do they celebrate plurality (in the process) and strive for justice (in the outcomes)? Can we work with stakeholders in the existing systems, to acknowledge their expertise and build consensus?

3. Can we prototype ideas based on those priorities?
Can the (long-term) vision we are working toward be meaningfully revealed in the (shorter-term) work we do? Can we make the visioning process and the emerging vision accessible, legible, and welcoming — and, through that, build legitimacy? This is a design challenge, and it should iterate alongside the democratic deliberation of question 2.

4. Are our prototypes coherent and compatible? Could they build toward an alternative system? 
Is there a plausible future in which these design-glimpses accrete and hybridize and develop their own structural integrity, to the point that they can stand without the systemic scaffolding we object to? How can we measure that? By what criteria? In this case, does the Schwarzman College of Computing have the freedom to develop mutually compatible fragments of (actually) ethical technology that combine in a future without obscene inequality and human rights violations?
(**)

In his comments at the event, Joi Ito, director of the Media Lab, described AI as “jetpacks and blindfolds that will send us careening in whatever direction we’re already headed. It’s going to make us more powerful but not necessarily more wise.” I agree, but I also believe that, at this moment in the historical arc of technology and AI, we have a unique and slim opportunity to direct the jetpacks. And we need to ask these questions if we are to chart a strategic and programmatic course for the College. We need to ask these questions if we are going to be wise about the future of computing.

Moving Forward, Critically and Creatively

So I put these questions to the broader MIT community, and, more specifically, to the leadership and faculty of the new Schwarzman College of Computing. I don’t have answers. But I know these questions are urgent, and I want to raise them. I am certainly not the first; with this piece, I am adding my voice to a growing chorus (which extends beyond MIT, too).

If we can answer these questions with integrity, then the second axis is a viable one. And if we move along it, we have the responsibility to creatively and critically explore how computing could assist in replacing morally objectionable political / financial infrastructures. Returning to the image of the crosswalk, we should not be fighting about whether to stand on one side or the other, but working together to walk against the onrushing traffic of the political / financial status quo, knowing that we walk in the ethical direction.

At this point, the money has been accepted and the College has been inaugurated. It appears to be a fait accompli. Before anything else, the MIT administration should be clear about its position on the College’s funding and, specifically, on the relationship between political / financial systems and the goals of MIT research. One can inform the other. MIT is great at tackling difficult problems; if we draw a new axis, we can ensure that we tackle problems that have ethical dimensions, and that we tackle them in an ethical way. A clear statement, transparently and collaboratively constructed, is the starting point to ask: what are we doing here, really?

In other words, as this College is inaugurated, I urge the leadership to be both bolder and more precise in its ambition. To allow open deliberation in the process, and open collaboration in the action. Yes, we must debate the ethics of AI. Obviously. But how can we reveal the terms and the stakes of that debate so that they are broadly intuitive, and invite any and all to participate in it? (The official “Idea Bank” has only 14 submissions so far.) How can we bring a broad spectrum of voices to the table, to be critical and creative?

This could begin with an open, exploratory scenario mapping: collectively exploring possible futures and deliberating which are desirable (questions of creating: connection A). Montreal has done inspiring work on this, pioneered by the University of Montreal and resulting in the Montreal Declaration for Responsible AI. Once we have articulated an ethical position and explored futures, we can meaningfully build toward them. And we must constantly check our process (e.g., who is included or excluded?) and constantly check the outcomes along the way (e.g., do our prototypes fit together, toward an alternative, better system?) (questions of checking: connection B). The process draws on the arts and community organizing as much as on the inter-lab collaboration MIT is known for. Learning from design practice, these steps are iterative — we will never “arrive” at the future. We are constantly working toward it, and these questions can help us steer (or pilot the jetpacks).

Public Imagination

The most important thing is the better system we constantly, iteratively, design and define. We should talk about the future we want! I challenge us to articulate ludicrously ambitious futures! To actually speak them! Futures beyond “less bias in news recommendation algorithms” or “a financially solvent MBTA” (which are important to solve now / yesterday). If it is anything, this College should be a space to seriously propose futures like: a regenerative (carbon-negative) global and local transportation infrastructure; an economy based on commons ownership models — that is, collective value-capture from public resources (urban space, ecological resources, aggregate data); use-based rather than private-asset-based real estate valuation; and so on. Having collaboratively defined such radically ambitious futures, we are then in a position to ask: what role can a new, trans-disciplinary College of Computing have in making these futures a reality? We can work backward from those to set interim goals like overcoming algorithmic bias and increasing the efficiency of transportation systems.

And we would not be alone in speaking such bold futures, nor in spurring innovation to realize them. In addition to the Montreal Declaration for Responsible AI, there has been inspiring work in Canada at the federal level — a collaboratively developed strategy for Social Innovation and Social Finance, now being implemented. Around the world, policy theorists, notably Mariana Mazzucato and the Institute for Innovation and Public Purpose at UCL (read: an institute for public purpose! Now that would be a bold move, MIT!), are advocating a mission-oriented approach to innovation policy. Shoshana Zuboff has exposed the internal mechanics of “surveillance capitalism” and articulated its dangers, while Nesta has outlined “platform cooperatives” as one of many possible alternatives. The UNDP recently brought on 120 explorers, experimenters, and solvers to launch a new global lab network to advance ambitious social goals at the planetary scale.

None of these are perfect… but they are bold. These, and many others, are very seriously talking about ludicrously ambitious futures, and in so doing, enabling us to collectively realize them. Congresswoman Alexandria Ocasio-Cortez recently tweeted that “something we desperately need right now [is] public imagination. When we focus on imagining and debating new possibilities of what we want to accomplish, instead of relentlessly fixating on limitations, we build the will to do more.”

Why not MIT? Despite its reputation for innovation and its unparalleled technical capacity, MIT is remarkably timid in advancing visions that have social rights at their core — at least, in any kind of cohesive or specific way, across the Institute. The new College of Computing is an opportunity to change that, especially because the administration has — rhetorically, at least — described such boldness as the goal of the College. Structurally, the College will be free of the bureaucratic encumbrances, turf wars, and histories of existing departments. It embraces (again, rhetorically, for now) the role of the arts (question 3) and of collaboration (question 2).

In this College, we have the opportunity to answer Dan Hill’s call for a strategic design that stitches together alternative, ethics-centered ideas with the actual doing of actual things (the latter of which MIT has no trouble with). The missing piece is evaluation criteria — exactly the kind of criteria we will, hopefully, be collaboratively designing through this process (the criteria that link our ethical position to our agenda to the better system; the ones that let us know we’re moving along the axis; questions of checking, B). The initiatives mentioned above, and strategic design and social innovation more broadly, sorely need such criteria. I believe one of the great challenges for computation and AI is to meaningfully advance our capacity to understand outcomes that are, today, qualitative and elusive. Things like joy, or delight, or trust, or knowing that penguins still exist, or accounting for future generations as we make decisions today. I would suggest this as a central pillar of the College, and I would join my colleagues — you, anyone — in imagining how such a measurement tool could transform social innovation. We could collaboratively prototype what it might look like. If we get a chance to.

A Challenge for the College

So yes, I am conflicted about being part of the Schwarzman College inaugural events. Part of me wishes I had joined the protesters in that crosswalk. I hold their concerns with the utmost respect, and I, too, add my voice to their condemnations of obscene inequality and human rights violations. But I also believe that we shouldn’t be asking whether it’s best to protest or to work with objectionable funders. We should focus on a different, perpendicular axis: can we separate systemic infrastructures and their objectionable effects from the technologies they enable? If we can, and if we work with integrity, we will find ourselves in a position to design technologies that amount to a better system, an ethically grounded infrastructure. Our critique and creativity can actively build toward an alternative, something protest alone cannot do.

In short, I believe that it is possible to — and that we must — articulate our objections and channel them into a clear strategic plan for developing ethical AI. We can step toward a future that includes technology toward, by, and for an equitable society. Perhaps the College’s birth in protest and legitimate critique will provide a much-needed ethical rudder for its future work.

We are at an inflection point in the history of computing, AI, and technology as a whole. MIT is looked to, around the world, as a leader — we have a weighty relationship to humanity’s technological future. So we have a responsibility to explore and debate that future, to fight as hard as we can for its openness, ethics, and equity. We have an opportunity to foster public imagination. This is urgent. I challenge the MIT administration, and the leadership of the Schwarzman College of Computing, to engage this future ethically, transparently, collaboratively, and with a dose of ludicrous optimism.

(*) This list can be applied in other cases of the same paradox. I have been thinking in similar terms about the Amazon HQ2 process and the Alphabet-connected Waterfront Toronto development, to name two.

(**) 5. Will the Nobel Prize committee actually accept Kissinger’s retraction? (Kidding. Sort of.)