The Real Problem With Technology Ethics Is Our Leaders — And How We Enable Them

Idolizing tech CEOs is only setting the rest of us up for harm.

Nidhi Sinha
6 min read · Dec 4, 2023

My experience with Sam Altman

For the past two years, I have been working as a research fellow at the Center for AI and Digital Policy, a civil society organization advocating for national and international AI policy rooted in human rights. One of my most notable projects was being one of six signatories of an FTC complaint against OpenAI filed in March 2023, which requests a moratorium on future GPT releases until stronger guardrails are in place. The FTC is currently investigating OpenAI over consumer harms and security practices.

I joined a Stanford extension class this fall on technology ethics, geared towards practitioners. An online class with over 200 participants, it sought to bring together people working in the tech industry to discuss incorporating ethics into their practices. Each week, a guest speaker came to talk about their work and take questions. Sam Altman came on November 7, a week before his dramatic firing and re-hiring at OpenAI. Up until then, I had somewhat naively believed there was a common consensus we could all reach on the best way to protect our present and our future from AI. It was not until Sam Altman's appearance as a guest speaker that one thing became apparent: men like him are only interested in using their power to push their views onto others, whether in personal interactions or national legislation.

My classmates had voted for my question to be posed to Sam Altman first among those submitted for consideration. I noted my involvement in the OpenAI FTC complaint before asking him the following:

“As big tech is moving faster than regulators can keep up, how can we better incentivize tech companies to be more ethical while we’re still waiting for this legislation to take effect?

“There have been these closed-door Senate sessions with big tech leaders that you yourself have been a part of. How can we be assured that these sessions are taking in all the different stakeholder approaches, all of the different perspectives on the harms and risks of AI, and how would we get more diverse stakeholders involved?”

Altman skirted the questions, boasting, quite ironically given the recent criticisms, about the structure of his company, and saying regulators should simply move faster. While I felt these answers were unsatisfactory PR verbiage, I was ready to accept them and move on. Before he finished responding, though, he quickly added:

“you can go ahead and write all the complaints you want for, like, a moratorium on GPTs. I don’t think you actually mean that in a serious way, but that’s all right.”

His second dig came 30 minutes later, while he was answering another classmate’s question. He was in the middle of explaining the impact AI will have on the workforce when he suddenly interjected:

“But beyond that, I think human creativity and desire to do new things and to contribute back […] not everyone wants to go work really hard and create new stuff, […] the world does need some people to, like, write silly letters calling for moratoriums, or whatever, but I think most people really do want to create and push things forward.”

These sudden and heated comments were clearly personal attacks. I felt scared. I am a young woman of color who has just started her career. He is one of the most powerful men in the world. In the moment, I didn’t know how far the behavior could escalate: whether he would begin outright yelling at me over Zoom or somehow blacklist me from future tech jobs. He had only ever been a distant figure to me, but now that we were face to face, the power he held over me, as a white man, as an established CEO, as a speaker in my classroom, all felt too tangible. He clearly felt that power too, and was more than comfortable capitalizing on it by making hostile comments well after my face and voice had disappeared from the screen.

The Sam Altman we saw in class isn’t the Sam Altman we are being sold by the media. He is described as having more clarity of thought than the rest of us, operating at a level of efficiency most of us could only dream of. That supposedly composed demeanor cracked almost immediately when confronted with a mere mention of opposition. He showed up to a class about technology ethics and could not handle the questions being turned back on him. We can either brush this off as an odd interaction, or sit with the reality that this is not a man who truly values opposing input. This is not someone interested in finding consensus, but in bending the world to his will.

As it turns out, I am not the first person to speak out about Altman’s true character. My story completely pales in comparison to the allegations from his sister, Annie Altman. Stories like mine are mere warning bells to not glorify these people; hers is a damning spotlight.

Stop laughing, start listening

Too often, the behavior of tech leaders in personal interactions is brushed aside as a quirk of the trade. We wait too long, letting things escalate to the point of severe harm before acting. Elon Musk won a defamation case brought after he called a man a pedophile for dismissing his outlandish proposal to use a mini-submarine to rescue a soccer team trapped in a cave in Thailand. Now that he has publicly endorsed an antisemitic conspiracy theory on the platform formerly known as Twitter, advertisers are backing out in droves. Marc Andreessen once “sarcastically” tweeted that anti-colonialism has been economically catastrophic for Indians. He just released an entire manifesto that, among other things, calls the UN’s Sustainable Development Goals “demoralization”: goals that include gender equality and peaceful societies. These smaller-scale interactions are brushed aside as jokes or misinterpretations. We need to stop laughing and start listening. If any of these people are comfortable capitalizing on their power over specific individuals or groups, why on earth would they not do the same to the general public?

Any criticism of these men is immediately met with shock that one would dare try to stifle innovation, a concept that has attained a sort of nirvana status in tech. But who does it all come at a cost to? ChatGPT may optimize your code, but at the cost of workers in Kenya paid $2 a day to label violent content with no access to trauma resources. It can summarize long papers, but at the cost of wrecking our environment by draining our water and emitting vast amounts of CO2. Such achievements feel hollow in light of the harm they cause.

My experience humanized Altman to me. He has always been some elevated figure in the headlines. Watching him completely crumble upon the slightest hint of opposition filled him out as a human being in my eyes. Maybe the story of the lone male genius is exactly that, a story. It’s the same story we’ve been told for over two decades at this point; when will we stop believing the myth? Altman claims his technology will revolutionize the world, that he can safely harness the power of AGI. He also can’t handle criticism in an academic discussion on technology ethics. He is as prone to hubris and fallibility as the rest of us, except his failings run the risk of hurting us all.

Imagining a new future

We cannot and should not rely on leaders like Sam Altman to direct our futures. The future they want to create is one that places profit and power at the top and treats mitigating harm to others as an afterthought. The future we collectively have the power to move towards is one that centers care, equity, and peace. There are some truly incredible people who have been putting in the work for years already to turn such a vision into reality. Now even more are joining the movement to advance AI that will benefit us all. Altman only knows me for my civil society work, but my background is in computer science. I get excited about new technologies too, but I maintain a critical lens on the harmful implications that can lurk behind the sparkle. I don’t want to just imagine a world where technology can be a radical tool for social change — I want to help create it. Enough of us are coming together that it can happen.

Nidhi Sinha is a research fellow at the Center for AI and Digital Policy, providing AI policy recommendations rooted in human rights, rule of law, and democratic values. She believes strongly in the power of collective action to uplift us all towards a better future. She also believes technology can get us there, if we’re smart about it.
