Will Griffin: Chief Ethics Officer, Hypergiant Industries
In this “10 Question” interview series, ethicists working within tech companies discuss their inspirations, contributions, and advice for those wanting to get involved in this space.
Will Griffin is the Chief Ethics Officer of Hypergiant Industries, where he has developed and implemented a uniquely effective company-wide AI ethics framework. This framework, Top of Mind Ethics (TOME), has won the Communitas Award for Ethics. His past entrepreneurial work has also earned him the prestigious IAB/Brandweek Silver Medal for Innovation and the culturally significant NAACP Image Award. His career has taken him from investment banking to consulting to media to tech, and he holds a JD from Harvard Law School.
Use Case: A specific situation in which a product or service could potentially be used.
Red Team: A small group of interdisciplinary thinkers assigned to research and raise ethical objections or negative outcomes around a use case proposal.
1. What’s the role of ethics within the company?
Hypergiant is primarily an AI consultancy but also a SaaS business. We have just over 150 people, with an ethics team of three. Our approach is to involve everyone in ethics. That is partly a mandate from our CEO, but it's also because ethics is baked into our workflows.
When clients first approach us with a business or technical problem they want to solve, our developers, designers, and data scientists collaboratively generate a list of potential tech and business solutions. Then, we show the use case team how to vet those solutions using our ethical framework. Only solutions that meet the ethical criteria get presented back to clients. Everyone who touches a use case must learn and understand how to use our framework.
Our view is that it does no good to leave ethics to compliance down the line. We believe ethics needs to be in the hands of the frontline developers and designers.
It’s our Top of Mind Ethics (TOME) approach. The framework has three parts:
- Goodwill: Is there positive intent for the use case?
- Categorical Imperative: If everyone in our company, every company in our industry, and every industry in the world used AI in this way, what would the world look like?
- Law of Humanity: Are people being used as a means to an end, or is the purpose to benefit people?
Individuals, from design to data science, have to use that framework to vet their piece of the use case. If issues are raised, they have to resolve them in the way they design the solution.
We don't expect hires to be trained in ethics or moral philosophy during the recruiting process; we focus on talent. Once you're in, ethics isn't optional; it's part of the workflow. It's important to our clients, many of whom are in the defense and intelligence space. Now, we're getting into higher education: places where there's a premium on values, and mistakes could be catastrophic.
2. What’s your background? How did you get involved with this work and end up where you are today?
Originally from Austin, I attended a science-oriented high school across from my home in a Black neighborhood. The school district wanted to create a magnet program in the newest school, which, due to segregation, happened to be mine. I went to Dartmouth for debate, since it had the best team at the time. Afterwards, I worked at Goldman Sachs in mortgage-backed and asset-backed securities before entering Harvard Law School.
I'm among the third of graduates who never practice law and instead go into business. I went to McKinsey & Company before working in media consulting and strategy. Working at News Corporation, the parent company of Fox News and the Wall Street Journal, shaped how I see the need for ethics at the forefront, because the impacts on society can be great if they are not top of mind. Since then, I've joined small companies in roles I'm interested in.
In the 90s, ethics roles didn’t exist. If you tried to pursue that, you probably became a professor because they were the ones who spoke about these issues at the time.
People in industry didn’t think about it; it’s why the tech companies that shaped the industry struggle today. They never had to deal with a regulated environment. Now, their societal impact has reached a level where citizens are demanding accountability.
Serial entrepreneur and founder of Hypergiant, Ben Lamm, shared his grand vision for what he wants the company to accomplish, which ultimately drew me in. Upon entering the role, I realized how ethical reasoning for tech is a new field that will last decades.
These are key issues that will define the way society is organized; this isn't just an industry niche.
Having a background in law has been useful in an ethics role. You understand the laws once they’re formalized and the reasons behind why they exist. The problem solving and logic taught in law school has always come in handy throughout my career.
3. How does your team operate within the company and what is your day to day work like?
Our CEO says embedding ethics into workflows is required. Period. The mandate has to come from the top decision-maker.
Otherwise, you’re relying on everyone’s goodwill. If ethics is only considered in some cases and not others, then it’s not a process. It’s a crapshoot; most companies operate that way today when it comes to ethics.
It's why, every week, we're seeing public backlash against new technologies and features that weren't ethically vetted. To me, big tech companies have some of the top thinkers on ethics in AI. However, these individuals are often not in charge of ethical workflows and don't have the leverage to stop major decisions.
When these companies market “ethics and AI” products to other companies or create ethics review boards, these top thinkers are not always consulted. They have a stake in the industry but are not pulled into these initiatives, which tells me how serious the company is about it.
At Hypergiant, clients are most attracted to working with our red team. They say, “Hey, I want to work with them.” The use case owner is the highest level manager who’s responsible for delivering the project back to the client. The owner’s team is everybody building the project; it’s a combination of R&D data scientists, engineers, designers. The group has to create an affirmative case that passes the framework: Goodwill, Categorical Imperative, and Law of Humanity. Then, the case moves onto the red team. The red team typically has three to seven members from different groups. The size depends on the use case, scale, priority, and who’s available. Some members have creative imaginations that help them easily identify what could go wrong, while others are great at analyzing research.
For example, if the use case is robotic process automation, the red team researchers provide a list of ten examples where applications of this technology went wrong. Then, they meet with the use case owners. They discuss what could go wrong if the client implemented the solution and everyone used it in harmful ways. If objections are raised, the file gets sent back to the use case owner to resolve them or modify the project. Otherwise, it reaches the ethics review board. Sometimes that's just one representative, such as the CEO, CTO, a board member, or outside counsel. They will either accept, reject, or ask for modifications.
Finally, the file gets placed in an archive so that, now, if I ever work on a predictive maintenance use case, I can examine how we made decisions on similar projects in the past. All this previous research helps build a better case for the future.
4. What’s a concrete example of a positive change you or your team influenced?
We have a large Japanese conglomerate company that we do business with, one of the largest suppliers of train cars in the world. Japan is a leader in train technology, and this company is one of the pioneers in that area. They have a large amount of business in North America and run the majority of trains, from the New York Metro to the Chicago L train and Caltrain. They also supply most of the automated people movers you see in airports.
They wanted to implement a predictive maintenance program, utilizing sensor data and machine intelligence, that could predict with high accuracy when their trains might break down and which parts might have issues. The initial thinking was that the company would be able to save a lot of money and reduce the need for maintenance crews and teams.
I had concerns about people being laid off as a result of this technology. When we did our ethical vetting, we realized that most of the jurisdictions that you’re doing business with — the government entities, like the city of Chicago and the state of California — use their transportation infrastructure programs primarily as job programs. In a lot of cases, they use minority and diverse suppliers to help build this infrastructure. That’s how you get voters to pass these big multi-billion dollar job programs.
So when we did the ethical vetting and our red team came through, we thought: we understand the desire is to replace teams of people with robots, because the sensors can do the thinking. However, by doing that, we're undercutting the point of the job programs in those jurisdictions.
It makes it less likely that the company will actually win approval for these train contracts, because they no longer have an argument for how they're going to increase jobs.
Once we helped them consider this perspective and broaden their thinking, we implemented predictive maintenance. In our pitch, we leaned into the argument that the technology actually creates safer conditions for both workers and passengers, because they're no longer interacting with trains that might break down. It also allows people to be more efficient with their time.
I think that’s one of the use cases I’m most proud of because we delivered in a major way for a client. We were able to produce the work on the predictive maintenance and AI side, in addition to also delivering on the sales side.
I think the client understood, “This group really does understand our business. They didn’t just give us a tech solution, which is what a lot of AI consulting groups will do.”
Ethics allowed us to think more broadly and say back to the client, “Okay, how do you plan on delivering value back to your end customer, the political jurisdictions and governmental entities?” So many use cases have made me realize: When you think of society as a stakeholder, you discover more ways to deliver value. That’s what puts you far ahead of another company that’s thinking narrowly about a tech solution.
5. What’s surprised you the most about doing this work or being in this role?
How much fun it is. Unlike law or compliance, the rules aren’t really set, so there’s still an opportunity for you to make some important first arguments and first ideas.
You can be innovative with the way you communicate how ethics can actually be baked into products and services. The newness of the field is exciting, honestly.
The fact that these issues are going to be with us for a long time means that when I invest time today, its value won't be obsolete in two years. Imagine if you had built your business on Lycos, an old search engine that used to be popular. Yahoo killed Lycos. Google killed Yahoo. Unlike that, we know this field is going to be around for a long time.
6. What have been your biggest challenges and takeaways?
Because I’ve been an entrepreneur in the past, I’m used to pitching work and getting straight to the money. That’s an exciting feeling. I like closing deals and that’s a big part of what I’ve done in my past. At Hypergiant, there’s a process around educating clients and customers so they understand why this work is important. As a result, it takes a little longer to get to the cash, though we’re trying to increase the velocity of deals. For me, educating clients and customers feels like the biggest challenge because I’m not a professor.
7. How do you deal with situations where your ethics framework isn’t taken seriously by clients?
Clients give us their specs or their request for proposal, and we vet it on our end before we give them any solutions back. That solves a bunch of problems up front. Otherwise, if we rely on the clients, our ethics become situational. The client can't dictate our ethics for us. They can decide whether or not they want to do business with us, but they can't dictate what is acceptable or unacceptable to us from an ethical point of view.
Also, because we work on enterprise projects, we typically have long-term relationships with clients. And because we're developing a reputation for ethical vetting, some clients are now starting to choose us for that very reason. In cases where we hand off projects to the clients' internal IT teams and CTOs, they're trained on our ethical vetting process so they can do it themselves. The client doesn't have to be "pro-ethics" with their use cases; they just can't be anti-ethics. It's rare that a client gives us a use case and we say, "Oh, no, we don't want to be in that business," though it has happened.
For example, Palantir has worked with Homeland Security on separating children from their families at the border. We don't want to be part of that. It's not as if we're getting a lot of work that we want to turn down.
It’s more so that we believe there’s always a way to do work that’s consistent with humanity.
As long as you’re not anti-ethics as a client, you’ll get your business solution solved in a way that’s more robust. What we try to stress to the client is this scenario: You can get back the fastest and most innovative solution that you think you want. But then in a year, if the city of Sacramento bans the use of that technology or that use case, you will have wasted your money.
Our goal is to produce the concept in an ethical way so that it survives regulation and legislation in the future.
IBM, for example, invested so much money in facial recognition, trying to develop the law enforcement market. Now, it turns out, the technology is being used in ways that civil libertarians are completely opposed to, which means IBM has to get out of that business. When you create products and technology that hurt society, innovation gets stifled once that gets figured out.
8. What’s an area in this space that you wish you could make more of an impact in or want to see improve?
We currently work in defense and want to do more in that space. In 1948, Harry Truman signed the order to integrate the military. As a result, he enacted the requirement that all military defense contractors integrate their companies: you couldn't do business with the military if your company wasn't integrated. With that single order, the process of integrating corporate America began. It was very powerful because the Department of Defense spends 100 billion dollars a year on defense contractors. If you had an AI code of ethics that everyone who did business with the U.S. military had to conform to, it would change the face of the industry overnight, because Google is just a supplier, like Amazon or Microsoft. Most major companies in the tech space have contracts with the Department of Defense.
If we could say, “We can’t do business with you if you don’t conform to [the code of ethics]”, the face of the industry would change overnight.
This would be an area where we could have so much impact. The second area where we're trying to do more work is in higher education. Why? Because that's where the majority of our computer scientists and UX designers come from. If we could get higher education to make ethics a required part of computer science and engineering curricula, the same way law schools and business schools have an ethics requirement, we could change the future pipeline of the industry.
Those are the two industries and areas that I’m most excited about doing more use cases within. I feel like those are the two areas that have the most impact on the future. Now, a lot of people might ask, won’t these clients be more excited about making contracts with Google, Amazon, or Facebook? Yes, that’s where the best work is being done without question. However, they will not lead on ethics: they have no incentive and that’s not the nature of their leadership. But the Department of Defense and higher education…I’m very bullish on their prospects and the ability to change the face of the industry.
9. How do you like to engage with tech ethics research or discussions outside of work?
We host a series of webinars where we chat with some really intelligent people and thought leaders. Christie Post, our Head of Content Partnerships and Marketing, finds our guests. Recently, we spoke to Kay Firth-Butterfield (Head of AI and ML, World Economic Forum) and Sangeeta Mundal (Vice President, Crown Castle).
Our two-part series, “Rethinking AI Ethics, Regulation & Policy”, concludes this week with a panel featuring Jasmine McNealy (Ph.D., Associate Professor, University of Florida) and Dr. Dorothea Baur (Principal & Owner, Baur Consulting AG), who is knowledgeable about the EU’s regulatory world for AI and Ethics. I’m always open to meeting with people like David Ryan Polgar (Founder, All Tech is Human) and new talent who want to chat and have a conversation. We always want to learn who else is out there doing this work.
10. What advice do you have for students, new grads, or tech workers who want to get involved but don’t know how to start?
Ask yourself, “What story am I trying to craft?” From there, build up experiences that give you the credibility to enter the next stage of your journey. I saw an interview where Chadwick Boseman shared that he’d often tell his agents, “I want to work with that actor, but I don’t want to play that role. I want to meet him when I’m doing something better than that.”
Chadwick said that because he said no at certain times, it made him available for the things that got him to today.
He said, “For me, it’s always been: first, who am I? I have to know who I am first to know how to navigate the world, because if I become something that I’m not supposed to become, then I’m in the wrong place, whether I’ve made it in other people’s eyes or not.”
That's been supremely important to the exits I took in my career, like leaving Goldman Sachs and McKinsey. Did I want to make money? Yeah. But I didn't want to wind up there. Once I got there, developed more of a worldview, and became more in tune with what I wanted to do personally, I left and kept going with my journey. Consider that most people who apply to competitive computer science programs don't get in, and many who do don't make it to graduation.
Always be grateful for the experiences that you do have. It’s all a blessing. Some people say it’s a privilege, but it’s even greater than that.
Thanks for checking out the interview and supporting the series!
This project was started by Tiffany Jiang and Shelly Bensal out of curiosity. Even before we graduated and started working in tech, we asked ourselves: Who are the ethicists working within tech companies today? Which companies offer such roles or teams? How much of the work is self-initiated? Lastly, what does “responsible innovation” or “ethics” work entail exactly?
We hope this series serves as a helpful resource for students, new grads, or anybody who wants to work in this space but doesn't know how to get involved. If you have any thoughts or comments you want to share with us, we'd love to hear them. Let us know if there's someone you'd like us to interview next!
Twitter: (@EthicsModels) | Email: email@example.com.