Bringing Steward-Ownership to Life While Playing on the Field of Capitalism — Blog Post 2 on the “OpenAI Saga”

Purpose
39 min read · Dec 20, 2023


Photo by Gertruda Valaseviciute (Unsplash)

This article is part two of our two-part series on OpenAI, its ownership structure and the events around Sam Altman’s ousting in November 2023.

You can read our introductory thoughts, and learn more about the framework we use to study this case in this overview.

The first article focuses on the history and ownership structure of OpenAI and analyzes it in the context of steward-ownership.

The following blog post looks at the events around Sam Altman’s ousting in November 2023 and how the structure and stewardship of the people in charge came to life.

1. OpenAI in November 2023

OpenAI is, as Sam Altman has phrased it, “just not a regular company” in two ways: 1) It is the frontrunner company working on AI — a technology that could fundamentally change the path of humanity. 2) It has an unconventional purpose-oriented ownership structure, particularly for a tech startup founded by some of the biggest names in Silicon Valley.

In our first publication on OpenAI, we discussed the legal structure of OpenAI in the context of steward-ownership. From our analysis and understanding of the case, OpenAI is, on paper, a steward-owned company. OpenAI, the operating entity, is controlled by the OpenAI Nonprofit, whose board of directors acts as the stewards of OpenAI. This legal structure has not changed over the course of the events following the ousting of Sam Altman.[1] We will assume some general knowledge about the structure in this article; a more detailed analysis can be found here.

With that in mind, we will look at the events around Sam Altman’s ousting in November 2023 and what triggered them. What exactly happened? Where is there untapped potential in the legal structure of OpenAI and the way it was embraced in the company? What are the limitations of steward-ownership? We draw lessons from the case related to steward-ownership, governance and financing structures.

1.1. The “OpenAI Saga”: What happened in November 2023

For anyone interested in a detailed recap of the events at OpenAI in November 2023, there are far more thorough accounts elsewhere.

Still, we want to give a short wrap-up and timeline here:

On Friday, November 17th 2023, the board of directors of OpenAI Nonprofit fired Sam Altman as CEO of OpenAI and removed him from the board of directors together with his colleague and board chairman Greg Brockman (who quit his position as President of OpenAI after the decision). Both Sam Altman and Greg Brockman are well known and respected in the AI industry, act as spokespeople for OpenAI and are among the main people holding relationships with stakeholders, employees and investors. Both were blindsided by the decision.[2]

The board of directors of OpenAI Nonprofit controls the voting power over OpenAI, giving it the right to make decisions on the management of OpenAI. It is self-regulating, meaning that a simple majority of the board can elect and remove directors, including the chairman.

At the point of Altman’s and Brockman’s removal from the board, only six of nine board seats were filled, with:

  • Greg Brockman, Chairman of the board, co-founder and President of OpenAI
  • Sam Altman, CEO and co-founder of OpenAI
  • Ilya Sutskever, Chief scientist and co-founder of OpenAI
  • Helen Toner, Director of strategy at Georgetown University’s Center for Security and Emerging Technology
  • Tasha McCauley, senior management scientist at the RAND Corporation
  • Adam d’Angelo, co-founder of Quora (tech company)

As shown in Graphic 1, three of the directors were internal (working in the company) and the other three were external (not working in the company).

Graphic 1: Board of directors in November 2023

In this “power vacuum”[3], with three board seats unfilled, Ilya Sutskever and the three external directors formed the simple majority that voted to remove Greg Brockman and Sam Altman. The board’s official reason was that “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities”.[4] The board did not publicly go beyond this explanation, not even in conversations with employees and investors.
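To make these voting mechanics concrete, here is a minimal sketch in Python. It is a toy model built on our own assumptions (a plain simple-majority rule over the currently filled seats), not a rendering of OpenAI Nonprofit’s actual bylaws.

```python
# Toy model of a self-regulating board (our simplification, not OpenAI's
# actual bylaws): a simple majority of the sitting directors can remove
# any member, and the shrunken board retains full control afterwards.

class SelfRegulatingBoard:
    def __init__(self, directors):
        self.directors = set(directors)

    def remove(self, target, votes_for):
        """Remove `target` if a simple majority of sitting directors votes for it."""
        if target not in self.directors:
            raise ValueError(f"{target} is not on the board")
        supporters = set(votes_for) & self.directors  # only sitting directors count
        if len(supporters) > len(self.directors) / 2:
            self.directors.remove(target)
            return True
        return False

# November 2023: six of nine seats filled, so four votes form a simple majority.
board = SelfRegulatingBoard(
    ["Brockman", "Altman", "Sutskever", "Toner", "McCauley", "D'Angelo"]
)
majority = ["Sutskever", "Toner", "McCauley", "D'Angelo"]
print(board.remove("Brockman", majority))  # True: 4 of 6 votes
print(board.remove("Altman", majority))    # True: 4 of the remaining 5
print(sorted(board.directors))             # the four who voted remain in control
```

In this toy model, as in November 2023, the group that forms the majority also keeps full control over the board afterwards, with no outside check on the process.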

Speculation about the events links them to motives of ego, to specific safety concerns around a new technological breakthrough[5] and to interpersonal struggles[6]. A main driver of the conflict was the tension between the people at OpenAI wanting to develop AI safely and securely and those wanting to develop AI products more quickly and commercially, a tension around which OpenAI’s structure was built from the very beginning.[7]

After initial attempts at reconciliation failed, Sam Altman and Greg Brockman accepted job offers from Microsoft (the main investor in and partner of OpenAI) to build up a new AI team there.

At the same time, interim CEOs at OpenAI as well as investors and employees tried to move the board to reinstate Sam Altman, culminating in a letter signed by over 700 of OpenAI’s ~760 employees threatening that unless Sam Altman was reinstated as CEO and the board of the nonprofit changed, they would follow Sam and Greg to Microsoft.

The unprecedented move by the workforce created a pivotal moment. Faced with the risk of a mass exodus, the threat of the organization being effectively absorbed by Microsoft without an actual takeover, and increasing stakeholder uproar, the board reinstated Altman as CEO and presented a completely reformed board of directors of the OpenAI Nonprofit within days.

The ongoing negotiations between the board of directors and Sam Altman also involved Microsoft CEO Satya Nadella.[8] The reformed board as of December 1st 2023 is made up of:

  • Adam d’Angelo, co-founder of Quora (the only remaining member of the old board)
  • Bret Taylor, former co-CEO of the software firm Salesforce and chairman of Twitter during its sale to Elon Musk
  • Lawrence H. Summers, former US Treasury Secretary

Additionally, it was announced that Microsoft would hold a non-voting observer seat on the board, which does not allow it to participate in votes.[9]

Graphic 2: Board of directors after the November 2023 reshuffle

The previous board of directors included two women and consisted only of people without financial stakes in OpenAI; the recomposed board comprises only men, some of whom have financial stakes in OpenAI. However, this is supposed to change as the board grows, with broader representation of different voices as the stated goal.[10] As of December 2023, Sam Altman and Greg Brockman have not reclaimed their seats on the board of directors. Many expect Sam Altman, at least, to take on a board seat again in the future.[11]

People around the world were incredulous: five days of a rollercoaster, and now everything is back to normal? Not really. The happenings at OpenAI were a defining event, both for the AI sector and for the tech sector as a whole. OpenAI is widely seen as one of the most important businesses in the world at the moment, and many employees, stakeholders and investors were very unhappy with the decision to fire Sam Altman in the first place, and particularly with how the process was executed and communicated.

1.2. The scenario the structure was originally built for?

Verbal shots were fired after the perceived chaos at OpenAI — so what went wrong? Better yet: did anything go wrong, or was this exactly how the founders intended the structure to work in such a scenario?

In principle, the structure acted as it should have: the board of directors was — justifiably or not — worried about Sam Altman and Greg Brockman and took the position that, in order to attain the nonprofit mission, it would be beneficial to remove them from the board of directors and fire Sam Altman. This is in line with the original setup by the founders (including Altman and Brockman), which ensures that it is not only the right but the duty of OpenAI Nonprofit “to protect AI from forces of capitalism”[12], to “[protect] the company from the short-term interests of shareholders” and to ensure that OpenAI works to serve the “best interests of humanity.”[13]

Before he moved to support the return of Sam Altman, co-founder and former director Ilya Sutskever told employees: “This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity.”[14] After the events, former director Toner stated that “our decision was about the board’s ability to effectively supervise the company”, an ability in which the Nonprofit board seems to have felt constrained by Sam Altman’s actions.[15]

The situation then changed when Sam Altman, Greg Brockman and the majority of the employees threatened to leave and set up a “rival firm (…) within profit-maximizing Microsoft”.[16] As journalist Sigal Samuel puts it:

“If you ask the question: Did the board make the right decision when they fired or when they hired him, paradoxically I think you could say yes to both. If they saw something that made them concerned that Sam was taking the company in a direction that was not super safety conscious — and remember, their mandate is to protect the best interests of humanity and not of OpenAI — it was right to get rid of him. But at the same time, when the condition changed and it looked like there might be mass exits of top talent to Microsoft, a company that seems significantly less safety conscious, it might have been the wisest move to rehire Sam, Greg and keep in house all of that top talent and keep them in a company that is at least nominally committed to AI safety and has some direction on them that will keep them caring about safety.”[17]

The structure was built for exactly this circumstance. “You can argue [about] how they executed it, but it was meant to give the power to shut this all down if they determined that what was happening at OpenAI was unsafe or was not going to lead to beneficial AGI.”[18] This is not to say, however, that everything went as it should have and was intended to, particularly in terms of the decision-making process and communication.

1.3. Explosion of unresolved tensions

OpenAI’s structure was consciously built to create an adequate framework for dealing with the explosive tension upon which the organization was founded: the tension between relying on market forces and shareholder-value interests to manage the potential and risks of AI for humanity, or shielding the development and deployment of artificial intelligence from the forces of hyper-capitalism.

On a basic level, this tension exists in every single company — profits vs. prosperity, shareholder value vs. purpose, impact, sustainability. But AI pushes this tension and the implications of a negative outcome to the extremes. The founders of OpenAI understood that and tried to build a structure that allowed for both poles to exist and be integrated in decision-making.

So two poles exist within OpenAI: one perspective is to go slowly and deliberately to limit the harm of AI; the other is to go fast, scale and commercialize AI products in the hyper-competitive market that AI has become, so that humanity can adapt to it. The Financial Times describes how “tension had been building inside the company for some time, as Altman’s ambition to turn OpenAI into Silicon Valley’s next tech powerhouse rubbed up against the company’s founding mission to put safety first.”[19] Both Altman and the majority of OpenAI’s employees are regarded as falling into the ‘go fast and commercial’ category.[20]

1.4. What are the outcomes of the incident?

As The Economist puts it, the events in November were “head spinning”, and now “heads have (…) spun back more or less to where they started. Or have they?”.[21] What the outcome of the events will be for OpenAI as well as for the whole AI industry is still open and only time will tell.[22]

It is already clear that power dynamics have changed within OpenAI, with the new board members seemingly closer to the faster, more commercial approach to developing AI and, given their backgrounds,[23] “likely to be more attuned to investor’s interests”.[24] The more commercially driven side of the internal division seems to have won, at least for now.[25] It is clear, both from Sam Altman and from investors, that the plan is to rework aspects of OpenAI’s governance; we will see what it develops into.[26]

A potential shift towards less safety and more speed might well prove contagious for the whole AI sector.[27] Meanwhile, customers and investors have been shaken in their confidence in OpenAI and will surely diversify their risk in the future.[28]

2. What can we learn from this case?

We would now like to dive into some of the key learnings from our understanding of what went on at OpenAI. What was missing, where do we see room for potential, what might have been some of the problems? To do so, this part will untangle and discuss some of the most important aspects needed to fill OpenAI’s particular ownership structure — which technically can be considered a steward-ownership structure — with life and unlock its (true) potential. Many of these points are interconnected, and there are surely many more learnings to be drawn. Our goal is to pick out the most relevant ones connected to the application of steward-ownership.

2.1. OpenAI’s structure did not live up to its potential

A somewhat healthy and normal discourse about the right approach for a tech company developing one of the most consequential technologies of our time seems to have split the board of directors and the organization, culminating in chaos played out in the open and affecting stakeholders and the whole sector.[29]

This happened even though the founders presumably had very good intentions and put real thought into building OpenAI and its structure. As stated above, they recognized the relevance of AI, both its opportunities and its dangers, especially when it meets the “flaws of capitalism”, as Altman puts it.[30] So they wanted to ensure that AGI was developed not with shareholder value or profit maximization as the primary motivator, but safely and in a way that would benefit the whole of humanity. To make this work, the founders sought different answers to the allocation of money and power in the company — ultimately finding a different answer to what ownership means in the organization. As they gradually built the structure accordingly, they seem to have done so in a steward-ownership-aligned way — probably without knowing the concept.

In this sense, the development of OpenAI’s for-profit entity was never an end in itself to generate financial value; rather, it was meant to generate profits with well-developed AI products as a means to fulfill the bigger mission. It was a solution for playing on “the field of capitalism” and acquiring the investments needed to do so whilst ensuring that the mission always comes first.

We believe that this ultimately reflects their sense of responsibility for OpenAI’s mission and impact — and was a really good decision for what they wanted to achieve. In fact, steward-ownership has great potential as an ownership structure for companies like OpenAI — companies that want to remain independent and keep the purpose in focus whilst operating within competitive markets. In theory, the steward-ownership structure fosters the development of stewardship and processes for holding tensions, such as those illustrated above, and integrating them in decision-making processes.

But let us be clear: the events at OpenAI in November 2023 have shown that some aspects of the structure did not fully come to life. So this is our first take-away:

→ Simply aligning ownership does not make the cut; the structure also needs to be filled with life. This requires a clear vision and understanding of what kind of ownership is wanted, without it being limited to the polarity of for-profit vs. nonprofit.

2.2. Division between nonprofit and for-profit structure — but what about profit-for-purpose?

What struck us when looking into OpenAI was the alleged division between OpenAI as a nonprofit and OpenAI as a for-profit. This holds both for what was written and said about them and for statements coming directly from employees at OpenAI. As one article reports, “divisions formed between veteran employees who remembered OpenAI’s early days as a research organization and the new guard who hailed from money-making Silicon Valley startups”.[31] This ‘either-or’ approach led to two opposing cultures working against one another, with the nonprofit “protecting” the mission from the for-profit.

The divisive treatment as either a nonprofit or a for-profit is also striking in press and media coverage of the incidents at OpenAI in November 2023, where the situation was described as a clear failure to “resolve the misalignment between research and commercialization”[32] and where one commentator stated that “the recent dynamism of OpenAI has mostly been attributed to a classic Silicon Valley-style startup attached to an AI Safety think tank. In hindsight, it seems obvious that the two organizations would come to blows.”[33]

It also left everyone unhappy with the development of OpenAI: people seeing OpenAI as a nonprofit did not find the company nonprofit-like enough and judged the for-profit add-on and investment money as compromising the founding idea.[34] On the other hand, people seeing the company mostly as a for-profit were unhappy with the structure and with the fact that OpenAI Nonprofit holds the power.[35]

What is missing from this picture is the potential of the structure to move beyond the polarity of nonprofit or for-profit to a synthesis of the two: not black or white, nonprofit or for-profit, but making profits for purpose, accepting the necessity to become profitable at some point, not in order to create shareholder value, but to serve a purpose.

This is especially striking considering that the creation of the for-profit was a direct result of the mission of the Nonprofit. It was clear that for the mission to be attained, a for-profit part of OpenAI was essential; thus the dual entities were not designed to work against each other but to be aligned. Profits were never meant to be an end in themselves with the sole mission of generating shareholder value. Instead, the for-profit entity was designed to set up OpenAI as a profit-for-purpose organization working towards OpenAI’s overall mission: developing AI products safely and for the benefit of humanity.[36]

The ownership structure of OpenAI holds the potential for incorporating this combination. Companies like Bosch, Patagonia, Novo Nordisk, Organically Grown Company, and many more have established a clear synthesis of profit-for-purpose.[37] They do not see themselves as nonprofit on the one side or for-profit on the other, but are finding ways to solve problems for people, planet and society by making profits. As Patagonia puts it: “Will Patagonia just max out sales so it can give away more money each year? No. This is not an excuse to ignore the real tension we’ll continue to face between growth and the environmental impact of our operations. But the new ownership structure provides a way to put the value that comes with responsible growth to work fighting the climate crisis.”[38]

This synthesis has not worked out quite the same way for OpenAI.[39] One very real reason for this may lie in this very division, tension and culture clash between OpenAI as a nonprofit and OpenAI as a for-profit, a clash shaped particularly by the employees but also by investors.

→ Establishing a nonprofit and a for-profit arm, each with its own culture, does not automatically lead to a synthesis of the two; without a clear synthesis, it can amplify internal tension.

2.3. Separation of power and money creates room for more clarity

For steward-ownership to fully come to life, the structure needs to be set up in a way that ensures clarity of roles and responsibilities. To do so, steward-ownership allows for a differentiated answer to the allocation of money and power and showcases a way to separate the ownership rights.

This separation is a main difference between the ownership structure of OpenAI and many renowned steward-owned companies, from Patagonia to Bosch. At OpenAI, voting rights and economic rights are not decoupled: both are fully held by the OpenAI Nonprofit. In comparison, as illustrated in Graphic 3, Patagonia, Bosch and many others use a double-entity structure in which power and money are clearly separated. If excess dividends are paid out, they flow into the nonprofit entity, while control remains with the stewarding entity, which is responsible for the purpose of the operating entity without holding any profit rights or financial incentives. Other companies, like Zeiss, use a single entity with two separate boards that mirror this split.

Graphic 3: Double-Entity vs. Single-Entity Model
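To illustrate the difference, here is a stylized sketch in Python. The entity names and the boolean “rights” are our own simplification of the two models, not any company’s actual legal setup.

```python
# Stylized comparison of the two models (our simplification, not legal advice):
# in the double-entity model no single entity holds both voting and economic
# rights; in the single-foundation model one nonprofit holds both.

from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    voting_rights: bool = False    # steers the operating company (power)
    economic_rights: bool = False  # receives excess dividends (money)

def power_and_money_decoupled(entities):
    """True if no entity combines control with economic benefit."""
    return not any(e.voting_rights and e.economic_rights for e in entities)

double_entity = [
    Entity("steward entity", voting_rights=True),
    Entity("charitable entity", economic_rights=True),
]
single_foundation = [
    Entity("nonprofit foundation", voting_rights=True, economic_rights=True),
]

print(power_and_money_decoupled(double_entity))      # True: separated
print(power_and_money_decoupled(single_foundation))  # False: combined, as at OpenAI
```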

The single-foundation structure can work really well when the nonprofit mission is the main focus of the construct: the for-profit creates profits for the nonprofit, and there is a clear priority. However, if such a structure is meant to serve a profit-for-purpose organization that makes profits to further its purpose in the long run, a clear emphasis on the purpose of the operating entity is needed.

For the latter case, the single-foundation model used by OpenAI can result in two challenges.

Firstly, having the power and money of an operating entity combined in one nonprofit entity, with the same board controlling both, can result in the purpose of the nonprofit being treated preferentially to the purpose of the operating entity. If the nonprofit entity becomes too decoupled from the operating entity, the nonprofit may push the operating entity to maximize profits, or even sell it, to generate funds for the nonprofit purpose. At OpenAI, the Nonprofit entity currently has the same purpose as the operating entity, so this danger might not be acute at the moment. This could change, however, if the Nonprofit and the for-profit become further decoupled in terms of mission and of the people working in them. The Nonprofit could then become an absentee owner of the operating entity.

Secondly, in a single-foundation structure like OpenAI’s, there can be a lack of clear differentiation and clarification regarding the role of the nonprofit. To many, inside and outside, it may seem as if the nonprofit is pursuing its nonprofit goals but has nothing to do with the operating entity. So when it exercises its legal control, as was the case at OpenAI in November 2023, this is seen as the Nonprofit blocking or slowing down the purpose of the operating entity. Relevant stakeholders — and maybe even the people on the board of directors themselves — might not quite grasp that the Nonprofit board is responsible not only for the Nonprofit mission but also for the operating entity. They are the stewards: the people responsible for the mission of the operating entity.

If there had been a clear separation of voting rights and economic rights, the role of the board in the stewarding entity would have been clearer, both internally and externally. It would also have entailed a clearer decision-making process for determining the best stewards, asking which connections to stakeholders and which competencies would be necessary for stewarding OpenAI as a profit-for-purpose entity. A nonprofit board usually requires different skills to steward a nonprofit mission than are required for steering a profit-for-purpose organization. This was clearly missing for some of the stakeholders, as was reported: “One of the sources says some investors had previously feared OpenAI’s remaining independent directors — with little background in corporate governance — could end up failing in their oversight duties.”[40]

Looking at the events in November, with clearly separated rights and distinct roles, it would not have been a distant nonprofit board wanting “to slow down OpenAI’s work”, as the board was accused of doing.[41] Rather, it would have been viewed as a process initiated by the stewards of OpenAI, the people carrying responsibility for the mission of the operating entity in full understanding of what is going on there.

→ A separation of money and power in entities or boards and a clear description of their respective roles can create room for bringing the potential of steward-ownership to life.

2.4. Conscious design of stewardship is needed so it can come to life: Who are the best stewards?

Our perception is that in the case of OpenAI, stewardship — the responsibility to steer and safeguard the purpose — has not fully come to life for OpenAI as a whole. In particular, it seems as if the board of directors, at least in part, acts, and is seen, more like a common nonprofit board. This does not reflect its role as stewards — a role that shareholders would conventionally hold.

OpenAI’s original design talks about “capped-profit” and about protecting the company’s mission from short-term interests and “resisting outside pressure”.[42] It is very much about what OpenAI should not become. Stewardship, in contrast, is about what it should be — and who should determine what it should be, now and in the future. For the structure not to be an empty shell but actually filled with life, a conscious stewardship process needs to be established that helps identify the most suitable people to take on the steward role and stewardship of the organization in the future.

This is particularly crucial in steward-ownership. Steward-ownership challenges two of the most common and powerful mechanisms for distributing power over companies: traditionally, voting rights go either to those born into families of company owners or to those with the financial means to purchase shares. Replacing these often automatic mechanisms requires a very conscious approach to stewardship.

Steward-ownership disrupts the pattern of blood or money by replacing automatism with the critical question:

Who is most suitable, willing and value-aligned to assume the role of a steward and make the most pivotal decisions for the company?

For an organization to approach the question of stewardship and fill it with life, we can learn from the insights and knowledge of generations before us: from the concept of stewardship known from ancestral wisdom as guardianship of land, resources and communities, for example; from the long history of family-owned businesses that have found ways to navigate stewardship across large families; and, of course, from examples of steward-owned businesses themselves.

Drawing from these examples, the question “who is the right steward for the individual organization?” can best be grasped through a deep dive into questions around the who, such as:

  • Who should definitely be represented in the group of stewards? Which individuals, which experiences, which groups of people?
  • Should the stewards be internal (close to operations of the company) or external (rather supervisory) or a mix of both? Why?
  • Should there be a more ability-based, meritocratic approach or a more democratic approach involving all members of a group of people (e.g. all employees)?

For any company, but particularly for one as influential as OpenAI, it is crucial that stewards have clarity about their role and responsibilities and that the right group of people takes on these responsibilities. This also includes establishing the right balance of internal and external perspectives in the group of stewards, with a shared focus on the mission of the organization. Given the events in November, we are not sure whether this was the case at OpenAI.

It can be argued that particularly in the case of developing artificial intelligence, a more diverse board set-up (particularly for a board that serves to benefit “all of humanity”) would be needed. At the same time, a board more connected to the operations and stakeholders of the company could have potentially ensured involvement and/or insights into the decision-making processes.

If the quality of stewardship is measured, among other things, by how and whether the stewards provide sufficient space and navigation for and through tension(s) — in the case of OpenAI, tension around fast and slow, safe and commercial — then an obvious conclusion would be that there is still a lot of potential in the (conscious) development of stewardship in OpenAI.

→ There needs to be a clear and conscious design of stewardship in a steward-owned business.

2.5. Nuanced and fitting governance structures: How is the steward group governed, how are decisions made and who should be included?

Besides the question of who takes stewardship over an organization’s purpose, the questions of how the group of stewards is governed and how decision-making processes in the organization are structured are crucial. Good decision-making processes in steward-owned businesses require — as in any other company — good governance mechanisms. This applies to the governance of the group of stewards itself as well as to mechanisms that ensure the involvement of other stakeholders.

2.5.1. Governance of the group of stewards

The first part of structuring governance deals with the governing of the group of stewards: succession and decision-making in the group that holds the majority of voting power over the organization.

There is no one-size-fits-all solution, neither for the ownership design nor for the stewardship design. Each organization needs to find nuanced answers that work for its particular corporate culture, values, set-up and mission. But we have learned which questions to ask and which considerations to take into account, such as:

Succession, removal and election of stewards

  • How should the succession process for stewards work? Should the group be self-regulated (choosing itself), elected or approved by someone?
  • Should it be possible to remove stewards? Under what circumstances?
  • Should stewards have to be reappointed regularly or hold the position as long as they want?

Checks and balances

  • Should there be limits to the powers of stewards?
  • Should anybody have to be consulted or have a say in specific matters?
  • What incentives should and shouldn’t stewards have?

Decision-making processes

  • What are good decision-making processes? What majorities are needed for what types of decisions?

So let us look at OpenAI’s approach to governance of the group of stewards. As established, the members of the board of directors of OpenAI are its stewards, legally controlling OpenAI. The board removes and appoints its own members (self-regulation) — one way of trying to ensure that future directors are always aligned with the board’s values and mission. The majority of board members do not have any financial stakes, so that decision-making is not primarily driven by personal monetary incentives. So far, so good.[43]

So while there are quite clear processes for changing the composition of the board, from how we understand the case, there is also some potential for improvement:

  • A simple majority of board members was able to remove fellow directors in an opaque process, opaque even to OpenAI’s employees. Such a mechanism can be established to protect the organization, but it is quite an extreme one.
  • A pivotal decision like removing two founding members (Sam Altman and Greg Brockman) from the board of directors could be made without prior consultation of the full board (including the members concerned) or other relevant actors. Advice and consultation processes with other critical stakeholders would have been needed, maybe even veto rights for a consulting body.
  • The required number of board seats was not filled, suggesting a non-functioning succession process. Only six of nine board seats were filled, even though nine had previously been deemed the number necessary for good decision-making.[44] In 2023, “one-third of the board left within two quarters”, leaving a “power vacuum that didn’t get filled”.[45]
  • Board membership changed very quickly, with some members staying only a year or less. But stewardship is not just taking a board seat for a while; it is far more stable and long-term oriented, with conscious succession processes. We are not aware of any concrete succession processes at OpenAI; from the outside, board changes seem to have happened quickly and without significant transition phases. To us, good structuring of succession processes, particularly for stewards, is one of the most crucial tasks when building a functioning stewardship design in steward-ownership (and beyond). This is an important and lengthy process that should ideally go beyond mere board changes.
  • In this particular case, with a mission affecting all of humanity, a more inclusive appointment process going beyond the board appointing itself could also have been valuable.

To conclude: for an organization and a leading board built around the tensions mentioned above, the design of the board seems to have rested on the thought that “(…) we’re aligned and we all want the same thing. And it won’t become a problem because we’re going to stay aligned”, as Sarah Kreps, director of Cornell University’s Tech Policy Institute, puts it.[46] Processes for conflicts and their potential consequences seem to have been sparse.

The board and its governance structure will continue to develop. Altman himself said that “clearly, our governance structure had a problem. And the best way to fix that problem is gonna take a while. (…) designing a really good governance structure, especially for such an impactful technology is (…) gonna take a real amount of time for people to think through this, to debate, to get outside perspectives, for pressure testing”.[47]

→ Steward-ownership cannot solve all the problems. It is a framework on top of which good governance can and needs to be built. Clear roles, succession and decision-making processes as well as the right people are crucial to ensure that this is not an empty shell but is filled with stewardship.

2.5.2. Governance mechanisms beyond the group of stewards

Another potential learning from the OpenAI structure is that it might have been a good step for the organization to establish a multi-stakeholder approach to governance. By this, we don’t necessarily mean including different stakeholders in the group of stewards (see above). We mean, in particular, that there are further governance mechanisms worth considering that go beyond directly holding voting power.

There are many ways to integrate stakeholder groups into decisions: information rights (the right to be kept informed about major decisions and regularly updated), consultation rights (the right to be heard in specific scenarios), specific consent requirements, and veto rights or minority shareholding rights for specific decisions and situations.
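The gradient of these mechanisms can be sketched in a few lines of Python. The decision types and stakeholder assignments below are purely illustrative assumptions on our part, not OpenAI’s actual rules.

```python
# Illustrative gradient of stakeholder rights, from weakest (information)
# to strongest (consent/veto). The assignments are hypothetical examples.

from enum import Enum

class Right(Enum):
    INFORMATION = "inform once decided"     # kept informed / regularly updated
    CONSULTATION = "hear before deciding"   # must be heard first
    CONSENT = "obtain consent (can block)"  # veto-like consent requirement

# Hypothetical assignment: which stakeholder holds which right per decision type.
RIGHTS = {
    ("investors", "ceo_change"): Right.CONSULTATION,
    ("employees", "mission_change"): Right.CONSENT,
    ("public", "major_release"): Right.INFORMATION,
}

def obligations(decision, stakeholders):
    """List what the stewards owe each stakeholder for a given decision."""
    return [
        f"{stakeholder}: {right.value}"
        for stakeholder in stakeholders
        if (right := RIGHTS.get((stakeholder, decision))) is not None
    ]

for line in obligations("ceo_change", ["investors", "employees", "public"]):
    print(line)  # -> "investors: hear before deciding"
```

Under this toy assignment, a change of CEO triggers a consultation duty towards investors, the kind of investor provision right we return to in section 2.5.3.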

More stakeholder integration at some of these levels can not only foster better decision-making, but can also lead to better-aligned communication and implementation of decisions.

This is obviously something that not only we but also OpenAI has already learned: after the events in November, Microsoft obtained an observer seat on the board (information rights, potentially even extending to consultation rights).[48] Something similar might make sense for employees, researchers, customers or potentially even government representatives, depending on the needs of OpenAI and its stakeholders.

→ Building multi-stakeholder governance beyond the allocation of voting power can hold potential for the organization.

2.5.3. Healthy investor-relationship design with provision rights for investors

When talking about multi-stakeholder governance, one aspect that we have touched upon a few times already is the relationship between investors and OpenAI. We do understand that major investors like Khosla Ventures or Microsoft were quite upset with the way the board’s decision was formed and communicated. To them, Sam Altman was the primary contact point, the person whom they had trusted with billions. Microsoft’s CEO himself said he had “no relationship with the board” and that “we will never again get in a situation where we get surprised like this ever again”.[49]

Microsoft’s decision not to get to know the board of directors seems peculiar. On the one hand, this again shows the misconceptions around the role of the board of directors of the Nonprofit. It seems that Microsoft and others underestimated or misunderstood this role and didn’t see the board for what it is: the group of stewards with legal control over OpenAI. On the other hand, it also shows flaws in OpenAI’s investor-relationship-building and governance.

Investors in steward-owned businesses do not take over the steering wheel or the majority of the voting power, but are considered enablers and important partners. They have high stakes in the organization and often bring valuable experience and resources to the table. So including them in decision-making can be considered a huge asset and a value in itself.

Which mechanisms are used, when, and to which decisions they apply depends on the relationship between the investor and the organization. A change of CEO, for example, could be specified as an event in which investors either hold a veto right or must be consulted by the stewards before the decision is made. Both would make sure that investors have influence and that their voice is heard. As mentioned above, Microsoft will now hold an observer seat on the OpenAI Nonprofit board of directors.[50]

At the same time, investor provision rights should of course not undermine the principle of self-governance. This is a balancing act: stewards remain in control while investors can still add their perspective, block certain decisions, exert influence and be valuable partners.

→ Shutting out investors completely cannot be the answer. A healthy investor-company relationship in steward-owned businesses can also be based on specific provision rights for investors.

While we can draw some lessons, we are too far removed from the inner workings of OpenAI to speak more concretely about the best next steps for OpenAI in terms of bringing stewardship to life. We don’t know what the best group of stewards would be, how much of a democratic touch it needs, or what types of external and internal involvement make sense for this particular organization. Similarly, the best design of succession and decision-making processes for the organization remains open to us. But we do sense that there is huge potential in answering those questions specifically for OpenAI’s stewardship and governance.

3. Boundaries of steward-ownership

Besides the learnings we can draw, the case of OpenAI also illustrates once again that designing an innovative and aligned ownership structure is an important step, but it does not solve everything. There are limits and boundaries; it can only be one building block of a healthy business and a healthy economy. And there are (power) dynamics that go beyond legal structures and power allocation, some of which we can clearly see in the OpenAI case.

3.1. “Soft” power of money

In most ownership structures, money equals power. Shares with voting rights are mainly allocated depending on who is willing to give the most money, and voting rights are often clustered together with economic rights. In steward-ownership, this is not the case. But money still has a power component to it that plays an important role.

As we have established above, OpenAI needs money if it wants to proceed with its mission. ChatGPT is far from being profitable. Further, developing and running AI requires a lot of computing power and a lot of skilled tech experts — and here, OpenAI is competing with the likes of Google, Stripe, Apple and, as we have witnessed, Microsoft. OpenAI’s monetary needs were the main reason for establishing the for-profit organization, and the investment money got the company to where it is today. But this also creates dependency, both on current investors (Microsoft, for example, made most of its investment not in cash but in computing credits) and on future investments.[51] And OpenAI is expected to raise much more capital if it wants to continue its work.[52]

This dependency on investment money — and the question of how to make sure that the “soft” power of money doesn’t overtake the hard-coded legal power — is a tension that all steward-owned businesses face. But at OpenAI, both the quantity of capital and the dependency on it have reached new levels.

So at what point does the “soft” power of money become too large? What quantities and what dependencies can a legal construct sustain? We are not sure. But the happenings at OpenAI suggest that the soft power of money is at least already operating in the background. The board of directors did make the ultimate, non-investor-driven decision to fire Sam Altman and remove Greg Brockman, exercising its legal power. However, it did end up bringing Altman and Brockman back and changing the composition of the board. How much of this was driven by AI safety and how much steered by monetary considerations is hard to judge, but the board is reported to have been “under pressure from Microsoft Corporation, other investors and employees (who also have capped economic shares)”.[53] As the New York Times writes: “Satya Nadella, Microsoft’s chief executive, didn’t have the power to stop OpenAI’s board from firing Altman, but he did have the power to render the action hollow.”[54]

So did money tip the scale and undermine the structure here? A lot of articles judge: yes. The New York Times says “Team Capitalism” won,[55] AI expert Stuart Russell states that OpenAI has “succumbed to the commercial impulse” that it had previously recognised would “lead to disaster”,[56] and the NZZ regards the chaos as pointing to money outweighing idealistic founding motives.[57] “A technology potentially capable of ushering in a fourth industrial revolution unlikely to be governed over the long term by those who wanted to slow it down — not when so much money was at stake”.[58] Altman himself argued that the structure would hold up against shareholder-value interests and that making money would never be the primary focus.[59] Looking at the process around firing and rehiring Altman, we are not sure whether the scale has already tipped too far. Particularly with the governance set to change, only time will tell how it develops.

3.2. “Soft” power of key individuals

Another “soft” power at work is that of key individuals and their relationships and standing. We are talking about Sam Altman here, and to a lesser extent Greg Brockman. If an individual like Altman is seen as the main spokesperson for the organization, holds the most relevant relationships with partners and is an inspiration for many employees, there is a power dynamic that is not represented in the allocation of legal control. Initially, Sam Altman had both: legal power, as a director and as CEO, and soft power, through his person and his relationships. Stripped of his legal roles, it quickly became visible that he still had enough leverage to get back into OpenAI. The Washington Post writes: “As much as boards have technical legal power, so do the organizations they rule. It’s all a construct, and the people of OpenAI will get their way.”[60]

So what to take away from this? People matter and personal relationships matter, independent of the legal structure they operate in. This makes alignment and functioning governance mechanisms crucial. It also makes it important to include key individuals in decision-making, to design processes with them rather than against them, and to think about the consequences of losing them in terms of knowledge, vision and stakeholder relationships.

3.3. People have to take on roles and responsibilities

Another boundary of steward-ownership that is somewhat visible in the OpenAI case: you can design even the most perfect of structures, from ownership to finance to governance, and still not have the people to fill it with life and take on roles and responsibilities.

OpenAI does have rudimentary board governance, including, for example, the provision that nine people should sit on the board of directors. But when people left, these seats were not filled again. The remaining directors did not take sufficient care that roles were filled and governance structures kept stringent — and the departing ones did not take succession seriously enough.

3.4. Regulation of artificial intelligence

Another clear boundary of steward-ownership lies in the larger-scale implications of a topic such as AI. For a field with profound implications for the future of humanity, the question remains how much should be left to the free decisions of the few organizations working on it. Currently, many of the organizations working on AI, and the industry itself, are very aware of the risks ahead and are trying to build corporate structures that internalize responsibility for the outcomes of their work. From OpenAI to Anthropic, alternative ownership and financing structures are quite common in this sector — and steward-ownership is currently the best model that we know of for these types of businesses.

But is this going to be enough? The Economist writes: “The chief lesson is the folly of solely relying on corporate structures to police technology.”[61] The New York Times elaborates: “Ensuring that A.I. serves humanity was always a job too important to be left to corporations, no matter their internal structures. That’s the job of governments”.[62] We are not the ones to answer what kind of additional regulations and regulatory institutions AI needs. Steward-ownership, or for that matter any ownership structure, does not solve the question of necessary regulation and state- or industry-led guardrails for the development, creation and application of AI, but it creates a very good framework for internalizing responsibility and decoupling it from profit maximization. And one thing seems to be common opinion: for AI to be beneficial for humanity, adequate global regulation will be necessary.[63]

4. Conclusion

The primary takeaway from the happenings at OpenAI should be: steward-ownership alone does not solve everything; it needs to be actively and consciously designed to come to life. Simply changing your ownership structure is not enough to guarantee a purpose-driven and well-functioning business; functioning governance, aligned incentives, the right people and a conducive environment matter as well.[64]

In the case of OpenAI, there is a lot of untapped potential for bringing steward-ownership and stewardship to life, a task that the board of directors will hopefully take on in the coming months. At the same time, long-time investor Vinod Khosla said right after the incident: “There is nothing wrong with the structure. There are non profits owning companies in Europe. I don’t think there is anything wrong with the structure but with the governance.” Sam Altman also still believes in the potential of the structure after his return, saying that he doesn’t think that “[money] will ever be our primary motivator”.[65]

This is a major factor differentiating OpenAI from other organizations in the sector. Even while the events at OpenAI unfolded, OpenAI under a shareholder-primacy model, or bought up by a large corporation, was not seen as an adequate alternative. Trust in big corporations to develop AI safely is very low.[66] Jessica Lessin, editor-in-chief of the tech news website The Information, reflects on the fear of AI solely controlled by large shareholder-value-oriented corporations, writing: “I think that’s why so many in tech were panicked about the near-collapse of OpenAI. Against the big tech behemoths, it has been the one insurgent that has really broken through in usage and revenue.”[67] And that is exactly what OpenAI was set up to do: counterbalance profit-seeking corporations, as “AI [is] too powerful a technology to be controlled by profit-seeking corporations or power-seeking states”.[68]

Particularly when talking about game-changing technologies like AI, we need different models and new answers to corporate ownership and to the questions of ‘who has power, and why?’ and ‘who benefits from the financial value created, and why?’. It seems apparent that in such a critical sector, the standard mechanism of money = power is no longer the answer.

The same goes for the financial drivers of corporate behavior. In 2003, the documentary ‘The Corporation’ analyzed the behavior of large corporations using the criteria typically employed to assess the mental state of patients. The result: the behavior of these companies met the criteria for severe mental illness. According to the documentary, corporations, or at least a significant portion of large corporations, operate like psychopaths: lacking compassion and devoid of responsibility towards others. A pretty scary thought in general that becomes even scarier in the face of AGI. This is the behavior that regulations are built around. It seems that the founders of OpenAI sought to build corporate safeguards so that the company would not develop such a psychopathic behavior and corporate identity. To put it in Sam Altman’s words, they were looking for a structure that was able to “play on the field of capitalism” while striving to move beyond it.

“I like capitalism. I think it has huge flaws, but relative to any other system the world has tried, I think it is still the best thing we came up with. But that doesn’t mean we shouldn’t still strive to do better.” — Sam Altman [69]

We are thrilled that Sam Altman, his co-founders and the other people responsible at OpenAI took the courageous step of challenging supposedly settled notions of corporate ownership and entrepreneurship. Working on a technology that can easily change not only our worldview but our reality requires not only cutting-edge technological innovation and foresight, but also proven social innovations. In building a steward-ownership model, they have turned to “corporate therapy”, as the New Yorker phrases it.[70] When it fully comes to life, steward-ownership has the potential to address some of the flaws of capitalism whilst allowing companies to play on the “field of capitalism”. They just play a different game, challenging the (imagined and established) laws of corporate physics. So let’s hope they don’t draw the wrong lessons and get discouraged.

Disclaimer: we have no inside knowledge of OpenAI’s inner workings and have not had any informants for this article. Everything is based on publicly available information and is in that sense only as good as its sources.

This content is Creative Commons “CC BY-ND 4.0” licensed. For more information, please visit our website here.

Sources

[1] Peers, M. (2023): OpenAI Drama’s First Season Ends but Second Season Is Possible. The Information. Accessed 23.11.2023

[2] Dave, P. (2023): How OpenAI’s bizarre structure gave 4 people the power to fire Sam Altman. WIRED.

[3] Dwoskin, E. and Tiku, N. (2023): Altman’s polarizing past hints at OpenAI board’s reason for firing him. The Washington Post. Accessed 20.12.2023

[4] OpenAI (2023): OpenAI announces leadership transition. https://openai.com/blog/. Accessed 20.11.2023

[5] see Tong, A., Dastin, J. and Hu, R. (2023): OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say. Reuters. Accessed 20.12.2023 and Biggs, T. (2023): Helen Toner, the effective altruist who sparked the OpenAI coup. The Sydney Morning Herald. Accessed 20.12.2023

[6] Biggs, T. (2023): Helen Toner, the effective altruist who sparked the OpenAI coup. The Sydney Morning Herald. Accessed 20.12.2023

[7] Roose, K. (2023): A.I. Belongs to the Capitalists Now. New York Times. Accessed 20.12.2023

[8] Sweeney, L. and Clark, E. (2023): Inside OpenAI, a rift between billionaires and altruistic researchers unravelled over the future of artificial intelligence. ABC News. Accessed 02.12.2023

[9] Victor, J. and Mascarenhas, N. (2023): Microsoft to Become Non-Voting Observer in Latest Shake-up of OpenAI Board. The Information. Accessed 29.11.2023

[10] Altman, S. (2023): Interview with Sam Altman. What now? with Trevor Noah. Accessed 18.12.2023

[11] The Economist (2023): The fallout from the weirdness at OpenAI. Accessed 13.12.2023

[12] Roose, K. (2023): A.I. Belongs to the Capitalists Now. New York Times. Accessed 20.12.2023

[13] Bansal, T. (2023): Does OpenAI’s Non-Profit Ownership Structure Actually Matter? Forbes. Accessed 29.11.2023

[14] Victor, J. et al. (2023): Before OpenAI Ousted Altman, Employees Disagreed Over AI ‘Safety’. The Information. Accessed 17.11.2023

[15] Victor, J. and Mascarenhas, N. (2023): Microsoft to Become Non-Voting Observer in Latest Shake-up of OpenAI Board. The Information. Accessed 29.11.2023

[16] The Economist (2023): The fallout from the weirdness at OpenAI. Accessed 13.12.2023

[17] Today, Explained (2023): Chaos at OpenAI. Accessed 18.12.2023.

[18] Roose, K. and Newton, C. (2023): Mayhem at OpenAI. Hard Fork. Accessed 20.12.2023

[19] Waters, R. and Thornhill, J. (2023): Tech’s philosophical rift over AI. Financial Times. Accessed 26.11.2023

[20] The Economist (2023): With Sam Altman’s return, a shift in AI from idealism to pragmatism. Accessed 18.12.2023

[21] The Economist (2023): Inside OpenAI’s weird governance structure. Accessed 29.11.2023

[22] Palazzolo, S. (2023): The Unknowns Left in the OpenAI Saga; A Silver Lining in Nvidia’s Blowout Earnings. The Information. Accessed 18.12.2023

[23] The Economist (2023): With Sam Altman’s return, a shift in AI from idealism to pragmatism. Accessed 18.12.2023

[24] The Economist (2023): Inside OpenAI’s weird governance structure. Accessed 29.11.2023

[25] Roose, K. (2023): A.I. Belongs to the Capitalists Now. New York Times. Accessed 24.11.2023

[26] see Dwoskin, E. and Tiku, N. (2023): Altman’s polarizing past hints at OpenAI board’s reason for firing him. The Washington Post. Accessed 20.12.2023, Altman, S. (2023): Interview with Sam Altman. What now? with Trevor Noah. Accessed 18.12.2023 and Swisher, K. (2023): Microsoft CEO Satya Nadella on the OpenAI Debacle. Accessed 18.12.2023

[27] The Economist (2023): With Sam Altman’s return, a shift in AI from idealism to pragmatism. Accessed 18.12.2023

[28] The Economist (2023): With Sam Altman’s return, a shift in AI from idealism to pragmatism. Accessed 18.12.2023

[29] see Metz, C. (2023): Inside the Coup at OpenAI. The Daily. Accessed 23.11.2023 and Desai, K. (2023): To Continue Innovating, OpenAI Should Return to Its Nonprofit Roots. The Information. Accessed 18.12.2023

[30] Altman, S. (2023): Interview with Sam Altman. What now? with Trevor Noah. Accessed 18.12.2023

[31] Palazzolo, S., Gardizy, A., Clark, K. and Victor, J. (2023): What Comes Next for Sam Altman’s OpenAI. The Information. Accessed 18.12.2023

[32] Desai, K. (2023): To Continue Innovating, OpenAI Should Return to Its Nonprofit Roots. The Information. Accessed 18.12.2023

[33] O’Laughlin, D. (2023): OpenAI Boardroom Battle: Safety First. FabricatedKnowledge. Accessed 21.11.2023

[34] Isige, J. (2023): OpenAI Went From Non-Profit To A For-Profit Company With Profit Caps — Now That Cap May Not Last Long. Business2Community. Accessed 02.12.2023

[35] Simmons, B. (2023): Part 1: Jaylen Brown’s Crucial Season, an OpenAI Rebellion, and KC’s WR Shortage. The Bill Simmons Podcast. Accessed 18.12.2023

[36] Simmons, B. (2023): Part 1: Jaylen Brown’s Crucial Season, an OpenAI Rebellion, and KC’s WR Shortage. The Bill Simmons Podcast. Accessed 18.12.2023

[37] Bansal, T. (2023): Does OpenAI’s Non-Profit Ownership Structure Actually Matter? Forbes. Accessed 29.11.2023

[38] Patagonia (2023): Earth is now our only shareholder. Some Questions and Answers. https://www.patagonia.com/ownership/. Accessed 07.12.2023

[39] Bansal, T. (2023): Does OpenAI’s Non-Profit Ownership Structure Actually Matter? Forbes. Accessed 29.11.2023

[40] Dave, P. (2023): How OpenAI’s bizarre structure gave 4 people the power to fire Sam Altman. WIRED.

[41] see Roose, K. (2023): A.I. Belongs to the Capitalists Now. New York Times. Accessed 24.11.2023 and Victor, J. and Mascarenhas, N. (2023): Microsoft to Become Non-Voting Observer in Latest Shake-up of OpenAI Board. The Information. Accessed 01.12.2023

[42] The Economist (2023): Inside OpenAI’s weird governance structure. Accessed 18.12.2023

[43] Levine, M. (2023): Who Controls OpenAI? Bloomberg.com. Accessed 20.12.2023

[44] Metz, C., Mickle, T. and Isaac, M. (2023): Before Altman’s Ouster, OpenAI’s Board Was Divided and Feuding. The New York Times. Accessed 20.12.2023

[45] O’Laughlin, D. (2023): OpenAI Boardroom Battle: Safety First. FabricatedKnowledge. Accessed 21.11.2023

[46] Palazzolo, S. (2023): The Unknowns Left in the OpenAI Saga; A Silver Lining in Nvidia’s Blowout Earnings. The Information. Accessed 18.12.2023

[47] Heath, A. (2023): Interview: Sam Altman on being fired and rehired by OpenAI. The Verge. Accessed 18.12.2023

[48] Victor, J. and Mascarenhas, N. (2023): Microsoft to Become Non-Voting Observer in Latest Shake-up of OpenAI Board. The Information. Accessed 29.11.2023

[49] Swisher, K. (2023): Microsoft CEO Satya Nadella on the OpenAI Debacle. Accessed 18.12.2023

[50] Victor, J. and Mascarenhas, N. (2023): Microsoft to Become Non-Voting Observer in Latest Shake-up of OpenAI Board. The Information. Accessed 29.11.2023

[51] Hao, K. (2023): The Chaos Inside OpenAI — Sam Altman, Elon Musk and existential risk explained. Big Think. Accessed 20.12.2023

[52] Levine, M. (2023): Who Controls OpenAI? Bloomberg.com. Accessed 18.12.2023

[53] Levine, M. (2023): Who Controls OpenAI? Bloomberg.com. Accessed 18.12.2023

[54] Klein, E. (2023): The Unsettling Lesson of the OpenAI Mess. New York Times. Accessed 27.11.2023

[55] Dwoskin, E. and Tiku, N. (2023): Altman’s polarizing past hints at OpenAI board’s reason for firing him. The Washington Post. Accessed 20.12.2023

[56] Waters, R. and Thornhill, J. (2023): Tech’s philosophical rift over AI. Financial Times. Accessed 26.11.2023

[57] Fulterer, R. (2023): Ringen um Sam Altman: Open AI wollte selbstlos dem Wohle der Menschheit dienen. Diese Illusion musste zerplatzen [The struggle over Sam Altman: OpenAI wanted to selflessly serve the good of humanity. That illusion had to burst]. NZZ. Accessed 20.11.2023

[58] Roose, K. (2023): A.I. Belongs to the Capitalists Now. New York Times. Accessed 20.12.2023

[59] see Bansal, T. (2023): Does OpenAI’s Non-Profit Ownership Structure Actually Matter? Forbes. Accessed 29.11.2023 and Altman, S. (2023): Interview with Sam Altman. What now? with Trevor Noah. Accessed 18.12.2023

[60] Dwoskin, E. and Tiku, N. (2023): Altman’s polarizing past hints at OpenAI board’s reason for firing him. The Washington Post. Accessed 20.12.2023

[61] The Economist (2023): The fallout from the weirdness at OpenAI. Accessed 13.12.2023

[62] Klein, E. (2023): The Unsettling Lesson of the OpenAI Mess. New York Times. Accessed 27.11.2023

[63] Anagnost, A. (2023): AI Can Reshape the Physical World — If We Regulate It Properly. The Information. Accessed 06.12.2023

[64] Bloomberg Technology (2023): First OpenAI Investor Weighs In on Altman Comeback. Accessed 27.11.2023

[65] Altman, S. (2023): Interview with Sam Altman. What now? with Trevor Noah. Accessed 18.12.2023

[66] Roose, K. and Newton, C. (2023): Mayhem at OpenAI. Hard Fork. Accessed 20.12.2023

[67] Lessin, J. (2023): The Next AI Battle: Adding It to Existing Products. The Information. Accessed 18.12.2023

[68] Klein, E. (2023): The Unsettling Lesson of the OpenAI Mess. New York Times. Accessed 27.11.2023

[69] Altman, S. (2023): Interview with Sam Altman. What now? with Trevor Noah. Accessed 18.12.2023

[70] Romeo, N. (2022): Can Companies Force Themselves to Do Good? The New Yorker. Accessed 05.11.2023


Purpose

Purpose serves a global community of entrepreneurs, investors, and citizens who believe companies should remain independent and purpose-driven.