OpenAI in the Context of Steward-Ownership — Blog Post 1 on the “OpenAI Saga”

Purpose
Dec 20, 2023



This blog post is part one of our two-part series on OpenAI, its ownership structure and the events around Sam Altman’s ousting in November 2023.

You can read our introductory thoughts, and learn more about the framework we use to study this case in this overview.

This blog post focuses on the history and ownership structure of OpenAI and analyzes it in the context of steward-ownership.

The second blog post looks at the events around Sam Altman’s ousting in November 2023 and how the structure and stewardship of the people in charge came to life.

1. Recap: Founding Mission & History of OpenAI

“AI is too important to get this [ownership and governance] wrong”
Marissa Mayer, former CEO at Yahoo[1]

To understand the ownership structure of OpenAI and evaluate it in the context of steward-ownership, we need to head back to 2015, the year OpenAI was founded. Artificial intelligence was mostly still — at least outside of the academic realm — in its infancy, with limited public awareness and applications. Not in Silicon Valley, though, where tech giants had begun to see massive commercial potential in AI and started pulling it out of the scientific and academic realm into products at high speed.[2]

OpenAI co-founders Greg Brockman and Ilya Sutskever described AI as a surprising field that had developed from a technique of hand-coded algorithms for individual problems to one showcasing state-of-the-art achievements across a variety of problems thanks to advances in deep learning.[3] With the possibility of AI reaching human-level intellectual performance, they stated that “[i]t’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly”[4], describing the complexity of AGI and capturing, in essence, the founding spirit of OpenAI.

1.1. OpenAI as an AI research laboratory

OpenAI was founded in 2015. The founding group included leading scientists, research engineers and highly successful entrepreneurs such as Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman and Reid Hoffman. Their collective intention was to form “a leading research institution which can prioritize a good outcome for all over its own self-interest”[5] with the “mission to develop artificial intelligence that would be safe and beneficial for humanity and provide a counterweight to profit-driven AI labs”[6].

At the same time, for-profit companies such as Google were not only progressing fast in AI development, but had also recruited some of the world’s best AI researchers[7] — often coming from the academic field, now working towards a commercialization of AI products.[8] OpenAI’s founders, however, were convinced from the start that “AI was too powerful a technology to be controlled by profit-seeking corporations or power-seeking states”[9]. To counter this, they knew that they had to depart from conventional ownership structures focused on shareholder value. Convinced that for-profit incentives could corrupt the beneficial use of this very powerful technology,[10] they built OpenAI as a nonprofit organization. The goal was to establish a counterweight with the mission to develop AGI technology in the open, for the benefit of humanity and free from corporate pressures[11] “to build value for everyone rather than shareholders”[12].

In fact, OpenAI’s founding spirit and the founders’ understanding of safe and responsible development of AGI was to play a big part in recruiting employees over the years.[13] AI researcher Nathan Lambert notes that, in its early days, OpenAI succeeded in bringing together three cultures that cemented the company: a researcher culture interested in new ideas and openness, a non-profit culture interested in safety and wealth distribution, and a Bay-Area technologist culture interested in publishing cool ideas.[14] According to Lambert, these cultures melded together well “until the company shifted its ownership structure”[15] away from a purely nonprofit structure.

1.2. OpenAI’s ownership shift

With an initial pledge of $1 billion in donations, OpenAI had planned to create machines that simulated human learning and reasoning, intending to only spend “a tiny fraction of its $1-billion initial investment in the first few years”.[16] However, the need for extensive computational power and further innovation soon exposed a tension: the organization had to scale much faster than originally planned — also to keep pace with other cutting-edge AI companies and products, such as Google Brain’s “Transformer”.[17]

Against the backdrop of ongoing tensions regarding AI and AGI safety concerns and the speed at which AI development was being conducted, co-founder and board member Elon Musk left the organization in 2018, reneging on outstanding donations that were part of the original “$1 billion in funding, contributing only $100 million before he walked.”[18]

Meanwhile, it was becoming apparent that donations would not suffice to “pay for expensive computing-capacity and top-notch talent”[19] to keep up with players in the field. OpenAI’s leadership therefore initiated the transformation of OpenAI into a dual structure “where the nonprofit — controlled by a board chosen for its commitment to OpenAI’s founding mission — would govern the for-profit, which would raise the money and commercialize the AI applications necessary to finance the mission.”[20] By 2019, OpenAI had created a for-profit subsidiary with capped returns for investors, with returns above the cap to be funneled back into the OpenAI nonprofit.[21] In an initial post about this structure, Greg Brockman and Ilya Sutskever stated:

“We want to increase our ability to raise capital while still serving our mission, and no pre-existing legal structure we know of strikes the right balance. Our solution is to create OpenAI LP as a hybrid of a for-profit and nonprofit — which we are calling a “capped-profit” company.” [22]

1.3. Investments, Collaboration and Partnership

In 2019, the “capped-profit” part of OpenAI successfully secured the investments needed to further advance and build technologies and products like ChatGPT. This included a $1 billion investment from Microsoft.[23] The initial investment from Microsoft in July 2019 marked the first of three multi-billion-dollar investments (2019, 2021, 2023), forming the basis of their collaboration and partnership in AI and AI supercomputing[24] and allowing OpenAI “to continue (…) independent research and develop AI that is increasingly safe, useful, and powerful”.[25]

Since its initial investment, Microsoft has invested another $12 billion into the company[26], a large part of it not as cash but as cloud computing credits. The investment deal entails capped dividend rights for Microsoft as well as an exclusive arrangement between the two companies “with Microsoft supplying cloud services to OpenAI and OpenAI supplying AI tech to Azure cloud services and other Microsoft offerings”[27]. Each organization is able “to independently commercialize the resulting advanced AI technologies”.[28]

This capped-profit structure with large equity rounds was a significant departure from OpenAI’s initial nonprofit structure. At the same time, it was in line with the Nonprofit’s mission to ensure that AI is built in a way that serves humanity. Their approach to fulfilling this mission was to have OpenAI entrepreneurially and independently research AI and develop and test AI products on the market, bringing them into the open. The ownership shift was the team’s way of building a structure that allowed them to do so and receive the necessary investment for this approach whilst preserving the prioritization of mission and original values of the organization. In that approach, making profits was a means to an end, not the goal of the structure in itself.

2. Deep Dive into OpenAI’s Structure

This brings us to the current structure of OpenAI. OpenAI describes its own structure in quite a lot of detail on its website. The following is a deep dive to establish a common understanding of the structure based on our understanding from OpenAI’s website and press releases and to put it in the context of steward-ownership.

2.1. OpenAI’s structure

OpenAI set up a structure that combines a nonprofit with a for-profit structure owned by the nonprofit entity. This structure is shown in the following infographic.

Graphic 1: Open AI’s corporate structure

The founding entity is OpenAI, Inc., a 501(c)(3) public charity (from here on: OpenAI Nonprofit) without shareholders, which is controlled by a self-regulating board of directors. OpenAI Nonprofit owns and controls three other entities: OpenAI GP LLC, a holding company, and OpenAI Global LLC. From what we understand, the latter is where most employees are employed and where the operations happen — the operating entity.

The most relevant entities are the OpenAI Nonprofit (top) and OpenAI Global LLC (bottom). They form a limited partnership in which the general partner (managing entity) is controlled by the OpenAI Nonprofit, while employees and investors like Microsoft are limited partners.

2.2. Ownership rights

We will now look at OpenAI’s structure more closely from an ownership perspective. But what does this mean? While ownership is often considered as one inseparable package, it is in fact a bundle of different rights:

(1) The right to extract economic value (dividend right)

(2) The control right (voting rights)

(3) The right to sell, transfer or destroy it.

Often, these legal ownership rights are bundled together and not differentiated, but in fact they can also be unbundled and distributed separately.

The setup of an ownership structure thus answers two questions: Who has legal control, and why? And who benefits from the value created, and to what extent?

Steward-ownership is a specific way of addressing these questions.

2.3. Steward-ownership

Steward-ownership is an ownership structure that provides an alternative to shareholder primacy. “Its ambitious goal is to dethrone shareholder primacy and profit maximization as defining features of capitalism” wrote the New Yorker, and added “it rewrites the psychology of companies, changing the deep structures that shape their behavior. Business owners now have a potent new tool to translate their ideas for a better future into reality.”[29] What sounds like a new model actually has a surprisingly long history: Steward-ownership structures have been used for centuries by companies like Bosch, Zeiss, Novo Nordisk or Carlsberg, particularly in Northern Europe.[30]

Steward-ownership enshrines two principles in the legal structure of the company:

  • Self-governance: The control over the company — i.e. the majority of the voting rights — is always held by people who are closely connected to the company, its operation, purpose and values. Voting rights are neither automatically inherited nor can they be speculated with and sold for financial gain of the shareholders. They are passed on from generation to generation of stewards not based on genetic relation or wealth but based on aligned abilities, values and familiarity with the company.
  • Capital lock/ Purpose-orientation: The company’s value as well as its profits cannot be extracted by the shareholders. Instead, profits serve the purpose of the company and are either reinvested in the company and its stakeholders, used to cover capital costs and provide investors with risk-adequate returns, or donated. The company is no longer an asset whose main purpose is to create wealth for its shareholders (legally speaking) but serves a purpose. Profits are not an end in themselves, but a means to this purpose.

By enshrining these principles, the company’s structure ensures that money and power, economic rights and voting rights of the company are decoupled. And this is legally binding in the long run.

So along these questions and principles, let’s dive deeper into the structure of OpenAI.

2.4. Self-governance in OpenAI: Who has legal control within the OpenAI structure and why?

2.4.1. Legal control in OpenAI
We will first look at the allocation of legal control within the OpenAI structure to see how the structure relates to the principle of self-governance.

Infographic 2 depicts the flow of legal control (majority of voting rights) within OpenAI.

  • OpenAI Nonprofit is fully controlled by the board of directors.
  • OpenAI Nonprofit fully owns and controls the OpenAI General Partner LLC.
  • The OpenAI General Partner LLC acts as the general (managing) partner for the holding company as well as for OpenAI Global LLC. This gives it full power to control and govern both entities.
  • All investments are structured in a way that investors do not automatically have legal control following their investment.
Graphic 2: Legal control at OpenAI

The structure was set up to ensure that when capital was raised from investors, the OpenAI Nonprofit would still remain in control of the company’s operations and research in the long run. Legally, this is achieved: both the OpenAI Holding and the operating OpenAI Global LLC are controlled by the OpenAI Nonprofit — which is controlled by the board of directors.

2.4.2. Board of directors of OpenAI Nonprofit
This obviously makes the board — and governance of the board — absolutely crucial for safeguarding the long-term purpose of OpenAI.

So for a short detour into board governance: The board has the power to control the operations of OpenAI and to replace management.[31] The “fundamental governance responsibility of the board [is] to advance OpenAI’s mission and preserve the principles of its Charter.”[32] It has no fiduciary duty to investors in the company and is solely bound to the mission of the nonprofit: to make sure that OpenAI builds AGI that benefits all of humanity.[33]

The governance of the board is set up to be independent of the for-profit company and profit or shareholder value interests. “OpenAI’s board members are not venture capitalists, don’t own equity at all, are not motivated by hopes of a trillion-dollar valuation, (…)”[34]. Essentially, the board is self-regulating and self-governing: Board members elect and remove themselves with a simple majority of the votes.[35] The board governance can only be changed by the board majority.[36] The majority of the board has to be independent, meaning they must not be financially invested in the for-profit entity.[37] As far as we know, there is no other specification for the composition of board members.

2.4.3. Self-governance in OpenAI
The legal control/majority of voting rights steering OpenAI lies with the board of directors. This means that the directors of OpenAI Nonprofit are not just board members, they are the stewards of OpenAI. They fulfill the role that in conventional ownership structures shareholders would fulfill: holding the ultimate instance of power and responsibility over the company, its operations, purpose and future. We state this so clearly because the role of board members can sound more like an advising or supervisory position, but in the case of OpenAI, the board of directors is even more crucial.

Looking at it from a technical side, the directors are …

  • at least in theory chosen based on the question of who is best to steer the mission of OpenAI in the future;
  • for the most part not financially incentivized (a minority can have financial stakes in OpenAI);
  • not chosen based on the family they are born in or their financial wealth.

To summarize: voting power remains with people closely connected to the mission ✅, and voting rights are neither inherited ✅ nor speculatively sold ✅. The stewards are not primarily financially incentivized but act in the interest of the purpose of the company ✅. Investors with high financial interest do not automatically hold voting rights as a result of their investment ✅.

One definition of self-governance that many companies use is that the majority (or all) of the people holding voting power have to be actively connected to the organization and its leadership. While this is not determinative for the “steward-ownership: yes or no?” question, a structure can also intentionally incorporate external checks and balances. At OpenAI, this connection was almost the case until the events in November 2023, with 50% of the board actively engaged in OpenAI.

However, this seems not to have been part of the governance structure but rather a coincidence. It is also no longer the case with the transformed board in place since the beginning of December 2023, in which all three board seats are held by external directors.

→ In the broadest definition, the principle of self-governance seems to be fulfilled in OpenAI.

However, there is no clear separation between economic rights and voting rights beyond the nonprofit structure itself. In the future, this could potentially undermine self-governance and lead to a prioritization of the Nonprofit over the purpose of OpenAI.

At the same time, whether stewardship of a steward-owned company can really come to life depends on an organization’s grasp of the principle of self-governance, on what stewardship means for it, and on whether the right people (with “right” subjectively defined as most fitting for the steward position in a given company) are in control. Here, the governance and inner workings of the board of directors are very relevant, and from the outside, it seems like there is untapped potential in the structure of OpenAI.

2.5. Capital lock in OpenAI: Who benefits from financial value created (to what extent) and why?

So let’s look at the distribution and structuring of financial rights in OpenAI, the flow of money, and see how it relates to the second principle of steward-ownership, the capital lock. Who benefits from financial value created in OpenAI and why?

2.5.1. Capital flow in OpenAI
To set the base, let’s first look at the flow of capital within the organization. As depicted in Infographic 3, the capital for operations stems from donations made into OpenAI Nonprofit as well as from investors, including Microsoft.

Graphic 3: Capital flow

2.5.2. Profit rights in OpenAI
The economic rights to any future profits or increase in value of OpenAI are structured as shown in Infographic 4.

Graphic 4: Profit rights

Investors as well as employees in OpenAI can potentially receive returns from the value generated in the company, both from dividend payouts and from selling their economic shares in the company on secondary markets. However, this is structured as a “capped” for-profit structure. Investors’ returns on their investment are capped at a maximum of 100x.[38] This means that if an investor had invested $1 million early on, the maximum that they could ever receive back is $100 million — if the company ever generates this amount of distributable money. This maximum cap will start to increase by 20% every year starting in 2025.[39]

This maximum cap does not mean that all investors receive it; while early investors might receive 100x if the company is ever able to generate this, later investors might receive significantly lower multiples. Any profits generated beyond the individually specified cap flow to the OpenAI Nonprofit.[40]

OpenAI’s description of this relationship states:

“Investors and employees can get a capped return if we succeed at our mission, which allows us to raise investment capital and attract employees with startup-like equity. But any returns beyond that amount — and if we are successful, we expect to generate orders of magnitude more value than we’d owe to people who invest in or work at OpenAI LP — are owned by the original OpenAI Nonprofit entity.” [41]

We understand this to be true both in the case of dividends being paid out and in the case of shares being sold on secondary markets — if the share price exceeds the agreed cap, the excess goes to the nonprofit, thereby being used for its mission of developing AI that serves societal good.
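For illustration, the capped-return logic can be sketched in a few lines of Python. This is a simplified model based only on the publicly reported figures (a 100x cap for early investors, growing by 20% per year from 2025); the actual contract terms are not public, so the function names and exact mechanics here are our assumptions, not OpenAI’s implementation.

```python
def capped_payout(invested, gross_return, cap_multiple=100):
    """Split a gross return between the investor (up to the cap)
    and the OpenAI Nonprofit (everything beyond the cap)."""
    cap = invested * cap_multiple
    to_investor = min(gross_return, cap)
    to_nonprofit = max(gross_return - cap, 0)
    return to_investor, to_nonprofit


def cap_multiple_in_year(year, base_multiple=100, growth=0.20, start_year=2025):
    """Reported rule: the maximum cap grows by 20% per year starting in 2025."""
    years_elapsed = max(year - start_year, 0)
    return base_multiple * (1 + growth) ** years_elapsed


# A $1M early investment against a hypothetical $150M gross return:
to_investor, to_nonprofit = capped_payout(1_000_000, 150_000_000)
# to_investor is capped at $100M (100x); the remaining $50M goes to the Nonprofit
```

The key design point is visible in `capped_payout`: the investor’s claim is bounded, while the Nonprofit’s claim is unbounded above the cap.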

Microsoft’s unique role as the most important financial partner of OpenAI gives it a special position in the payback waterfall of investments. Forbes reports that if OpenAI makes profits, 75% of the profits are used to pay back Microsoft’s initial investment sum. After break-even is reached, 49% of profits go to Microsoft until its agreed cap of reportedly 20x is reached.[42] Additionally, Microsoft and OpenAI’s partnership also includes non-limited licenses that allow Microsoft to incorporate OpenAI’s technologies in all of its own products.[43]
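This reported waterfall can likewise be sketched as a simple per-period split. The 75%/49% rates and the roughly 20x cap are figures reported by Forbes, not confirmed contract terms, and the real agreement is surely more intricate; this is a toy model under those assumptions.

```python
def waterfall_step(profit, invested, returned_so_far, cap_multiple=20):
    """Split one period's distributable profit between Microsoft and the
    Nonprofit, following the reported three-phase waterfall."""
    cap = invested * cap_multiple
    if returned_so_far < invested:   # phase 1: repaying the initial investment
        share = 0.75
    elif returned_so_far < cap:      # phase 2: capped returns after break-even
        share = 0.49
    else:                            # phase 3: cap reached, everything to Nonprofit
        share = 0.0
    # never pay out more than the remaining headroom under the cap
    to_microsoft = max(min(profit * share, cap - returned_so_far), 0.0)
    return to_microsoft, profit - to_microsoft


# Before break-even, 75% of a period's profit goes towards repayment:
ms, npo = waterfall_step(100.0, 1_000.0, 0.0)  # ms == 75.0, npo == 25.0
```

The sketch makes the structural claim of the section concrete: whatever the exact rates turn out to be, the Nonprofit’s share is the residual claimant once the cap binds.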

The decision of whether profits are paid out lies with the board of directors. Until OpenAI has created AGI (artificial general intelligence akin to human intelligence), the board has “reserve[d] the right to reinvest all profits back into the firm”.[44] This decision specifically lies with the majority of the board that has not invested in OpenAI.[45]

2.5.3. Capital-lock in OpenAI
For the principle of capital lock/purpose-orientation, we are looking at whether financial value created serves the purpose of the company in the long run. To answer this, we have to both look at financial incentives of people in control (see above) as well as the financing structure. Crucial questions to ask are:

  • Is it ensured that the people exercising voting rights can independently, detached from personal financial and external incentives, decide what is best for the company?

Technically, the answer is YES. Formal control over OpenAI’s operations lies with the board of directors. The majority of the board cannot hold any financial interest in the company ✅. Only the directors without financial stakes in OpenAI can vote on matters that might affect returns for investors or influence potentially conflicting interests between investors and OpenAI’s societal mission ✅.[46]

“The non-profit board controls the for-profit unit and is beholden to a mission of ‘ensuring the creation and adoption of safe and beneficial’ artificial general intelligence, rather than financial incentives.”[47]

  • Is it ensured that while investors and other early supporters can receive risk-adequate returns, the question of “how much is enough?” is answered, and that there is no unlimited extraction of value from the company at the cost of its purpose?

This question pushes us into the field of steward-ownership-aligned financing: How can investments be structured in line with the principles of steward-ownership? OpenAI’s “capped return” model includes a maximum cap as well as individual caps for each investment relationship. This means that every investor had to answer the question of “how much is enough/how much is risk-adequate?” as part of the investment decision ✅. While the cap might seem exorbitantly high at a maximum of 100x (and above, with the maximum cap potentially rising by 20% each year), a limit is still in place — and OpenAI is also an exorbitantly risky company with exorbitant potential.

Until an investor’s individually agreed-upon cap is reached, a certain percentage of distributable profits needs to be used to redeem the investment. However, until AGI is reached, the decision of whether profits are paid out remains with the board of directors at OpenAI, ensuring that the purpose always comes first ✅. There is a scenario and a clear process ensuring that, in the long-term future, OpenAI can again be free of financial obligations to external investors, with profits fully flowing to the Nonprofit. The use of profits in the Nonprofit is limited to what is deemed charitable under its incorporation as a 501(c)(3) in the US as well as bound to the mission of OpenAI Nonprofit ✅.

→ From what we know, the principle of capital lock seems to be technically fulfilled in OpenAI.

However, we are not sure about the concrete structuring of the contracts around Microsoft’s investment and its partnership with OpenAI. This applies in particular to the exclusive licensing rights granted to Microsoft around GPT-3 and the value created through access to the most advanced AI technologies.[48] From this partnership, Microsoft can financially benefit beyond the agreed capped returns. Through this “close entanglement”, Microsoft was able to “dramatically reverse its fortunes in AI”.[49] The implications of this might undermine the capital lock. However, “Open AI’s licensing terms with Microsoft (…) cover only pre-agi technology.”[50] But the definition of AGI and the board of directors’ decision-making process for determining whether AGI has been reached seem somewhat unclear.[51]

Another consideration is that because OpenAI Nonprofit indirectly holds both voting rights and economic rights, a future board of directors might be incentivized to maximize profits for the charitable entity and even sell OpenAI to finance the mission of the Nonprofit. This is currently unlikely as OpenAI’s operations serve the mission of OpenAI Nonprofit. Nevertheless, this could undermine the mission of the profit-for-purpose subsidiary at some point in the future.

2.6. Legal lock: Are these principles secured in the long run?

In steward-owned businesses, the principles above are not merely stated, but are secured in the long run. In most models this means: it is made as difficult as legally possible to revert the principles in the future.

As the control over OpenAI’s operations (including investment contracts) lies with the board of directors, the long-term safeguarding of the principles of steward-ownership is embedded here.

In terms of the principle of capital lock, OpenAI Nonprofit is a 501(c)(3) public charity and is obligated to pursue its mission. Ultimately, the financial value created in OpenAI thus serves the Nonprofit’s mission. However, because OpenAI is structured as a single-foundation model without two separate boards governing the economic rights (money) and the voting rights (power) over OpenAI, the OpenAI Nonprofit could theoretically decide to sell OpenAI or maximize its profits to maximize charitable money. This is structurally more secure in double-entity structures of companies like Patagonia or Bosch, which actively separate power (the stewardship for the mission) and money (excess profits for philanthropic activity) on an entity level (see graphic below).

Nevertheless, given that it currently serves OpenAI Nonprofit’s mission to have OpenAI create AI products to serve the benefit of humanity while being controlled by the Nonprofit, this sell-out scenario is currently quite unlikely.

As for the principle of self-governance, there is no legal lock in the OpenAI Nonprofit structure that ensures that self-governance is upheld in the long run. However, the 501(c)(3) does not come with shares that are inherited or sold; instead, board seats are passed on to the people that the board deems most fitting to the mission of the company. There are no (financial) incentives for the board of directors to change this. On the contrary, the whole founding origin and history of OpenAI point towards a long-term commitment to self-governance, deeply rooted in the bylaws, structure and motivation of many people involved.

It should be considered that after the upheaval at OpenAI in November 2023, the board composition completely changed. For example, Microsoft obtained a non-voting observer seat on the Nonprofit board.[52] While the original structure is still in place, it remains to be seen whether the newly appointed directors share this motivation and understanding of the founding mission. At the same time, the previous board chose the new directors looking specifically for people who would lead OpenAI in the future and ensure the long-term creation of artificial intelligence in a way that benefits society.

A remaining question is whether the board of directors could theoretically turn the OpenAI Nonprofit, which secures the mission, into a for-profit entity. In theory, this is possible; in practice, it is close to impossible. In general, a nonprofit 501(c)(3) organization can decide to give up its tax-exempt status and transition to a for-profit entity. However, nonprofits have a legal obligation to ensure that assets dedicated to charitable purposes are used in accordance with those purposes. A conversion would therefore have significant tax and legal implications, making this scenario quite unlikely — albeit there might be scenarios where it is possible.

→ The 501(c)(3) structure provides a light long-term lock.

2.7. Summary: OpenAI’s structure from the perspective of steward-ownership

From our understanding, the OpenAI structure is — in a technical sense — a steward-ownership structure, upholding the principles of steward-ownership.

We do have a few open questions about the structure, particularly around its long-term legal security. We believe we have a good understanding of the ownership and shareholding structure (the distribution of voting rights and economic rights). However, we do not know to what extent the legal quality and the idea behind the structure, in terms of self-governance and capital lock, are undermined through contracts. Even if the ownership structure has not yet been substantially undermined contractually, this might change through future financing rounds or even through re-negotiations of the current investments, given the uproar of investors over the events in November.

If OpenAI wants to leverage the potential of its ownership structure, stewardship will have to fully come to life, with people who feel and act fully responsible for the purpose of OpenAI as a whole and who have a clear and sturdy understanding of the envisioned ownership structure.

Before we come to a close, we want to restate here that our evaluation is based on what we know and can publicly access about the OpenAI structure; there might be underlying details that we are not aware of.

Takeaways:

→ OpenAI can technically be considered a steward-owned company.

→ Whether the structure comes into its potential depends on the degree to which stewardship comes to life.

→ Whilst a comprehensive capital lock seems to be in place, there might be contractual details that undermine the principles of steward-ownership.

3. Conclusion

From our perspective, entrepreneurs who explicitly pose the questions of “who should have power and why” and “who should benefit from the financial value created and why”, and then find fitting answers for their situation, are already creating value. The (legal) focus that comes with classic corporate ownership structures is on profit maximization for shareholders, while power remains with those paying the most money or born into the right family. AI seems too important for such an unsophisticated answer to the question of how to allocate power over its development and purpose.

The founders of OpenAI realized this and, assuming responsibility for the potentials and risks connected to AI, set out to build a structure that would establish different power dynamics and incentives within their organization.[53] In doing so, they not only challenged technological paradigms, but also common notions about corporate ownership and entrepreneurship.

Nathan Benaich, partner at Air Street Capital, was quoted saying that OpenAI’s corporate structure “was an experiment to defy the laws of corporate physics, and it appears that physics won out.”[54] Building on this analogy, one of Sam Altman’s learnings of 2023 was “don’t fight the business equivalent of the laws of physics.”[55] Luckily, our understanding of the laws of physics has continued to evolve (and will most likely continue to do so). Einstein, for example, had to challenge the conception of time and space to develop the theory of relativity. As we have seen throughout this article and beyond, steward-ownership challenges and expands the “laws of corporate physics”, and it is working well in many organizations around the world. And at least in its origins, OpenAI’s founding team sought different answers — and we believe they had compelling reasons to do so.

Looking at OpenAI's origins, it is conceptually logical that the organization adopted a steward-ownership structure. Steward-ownership challenges conventional corporate ownership paradigms and the allocation of money and power in companies. It shifts the perception of a company from being solely an asset of its shareholders to an organization in which individuals come together to work towards a common purpose and use profits to solve problems for people, planet and society. Moreover, it establishes a framework for companies to remain independent in the long run: independent of shareholder-value pressures and of the power plays of large corporations. An independence that OpenAI clearly wanted to maintain.

OpenAI's structure can clearly be linked to a long history of steward-owned companies that rethink ownership at a deep design level. At the same time, it marks just the beginning of steward-ownership in the realm of artificial intelligence. AI has profound implications for humanity, and further technical advances, a known unknown, carry the potential for a big-picture power shift. This particular nature of artificial intelligence lends new relevance to the questions of who holds power and who benefits financially. It is a sphere in which alternative corporate ownership models such as steward-ownership will play a substantial role.

But of course, this story is not over yet. This article aimed to establish a deeper foundational understanding of OpenAI's structure from the perspective of steward-ownership. But what happened at OpenAI in November 2023? What can we learn from the events around Sam Altman's ousting in relation to the ownership model? And what further potential lies within OpenAI's steward-ownership structure?

While many different factors played a role in the events of November 2023, at their heart lie "money, power and ego" and the clash between visions of AI safety research and the ambition to create viable AI products, a clash also expressed in an internal dichotomy between a nonprofit and a for-profit culture. There is much to discuss, which we delve into in our next blog article.

This content is Creative Commons “CC BY-ND 4.0” licensed. For more information, please visit our website here.

Sources

[1] Dave, P. (2023): How OpenAI’s Bizarre Structure Gave 4 People the Power to Fire Sam Altman. Wired. Accessed 19.12.2023

[2] Hao, K. (2023): The chaos inside OpenAI — Sam Altman, Elon Musk, and existential risk explained. Big Think. Accessed 19.12.2023

[3] Brockman, G., Sutskever, I., OpenAI (2023): Introducing OpenAI. https://openai.com. Accessed 26.11.2023

[4] Brockman, G., Sutskever, I., OpenAI (2023): Introducing OpenAI. https://openai.com. Accessed 26.11.2023

[5] Brockman, G., Sutskever, I., OpenAI (2023): Introducing OpenAI. https://openai.com. Accessed 26.11.2023

[6] Dave, P. (2023): How OpenAI's Bizarre Structure Gave 4 People the Power to Fire Sam Altman. Wired. Accessed 19.11.2023

[7] Victor, J., Palazzolo, S., Gardizy, A. & Efrati, A. (2023): Before OpenAI Ousted Altman, Employees Disagreed Over AI ‘Safety’. The Information. Accessed 30.11.2023

[8] Hao, K. (2023): The chaos inside OpenAI — Sam Altman, Elon Musk, and existential risk explained. Big Think. Accessed 19.12.2023

[9] Klein, E. (2023): The Unsettling Lesson of the OpenAI Mess. The New York Times. Accessed 27.11.2023

[10] Hao, K. (2023): The chaos inside OpenAI — Sam Altman, Elon Musk, and existential risk explained. Big Think. Accessed 19.12.2023

[11] Metz, C. (2023): Inside the Coup at OpenAI. The Daily, Spotify. Accessed 25.11.2023

[12] Brockman, G., Sutskever, I., OpenAI (2023): Introducing OpenAI. https://openai.com. Accessed 26.11.2023

[13] Victor, J., Palazzolo, S., Gardizy, A. & Efrati, A. (2023): Before OpenAI Ousted Altman, Employees Disagreed Over AI ‘Safety’. The Information. Accessed 30.11.2023

[14] Bansal, T. (2023, October 12): Does Open AI’s Non-Profit Ownership Structure Actually Matter?. Forbes. Accessed 29.11.2023

[15] Bansal, T. (2023, October 12): Does Open AI’s Non-Profit Ownership Structure Actually Matter?. Forbes. Accessed 29.11.2023

[16] Bansal, T. (2023, October 12): Does Open AI’s Non-Profit Ownership Structure Actually Matter?. Forbes. Accessed 29.11.2023

[17] OpenAI (2019): OpenAI LP Blog. https://openai.com. Accessed 29.11.2023

[18] James, V. (2023): Elon Musk reportedly tried and failed to take over OpenAI in 2018. The Verge. Accessed 29.11.2023

[19] The Economist (2023): Inside OpenAI’s weird governance structure. Accessed 29.11.2023

[20] Klein, E. (2023): The Unsettling Lesson of the OpenAI Mess. The New York Times. Accessed 27.11.2023

[21] Metz, C. (2023): The fear and tension that led to Sam Altman’s ouster at OpenAI. The New York Times. Accessed 27.11.2023

[22] OpenAI (2019): OpenAI LP Blog. https://openai.com/blog/openai-lp. Accessed 29.11.2023

[23] Metz, C. (2023): The fear and tension that led to Sam Altman’s ouster at OpenAI. The New York Times. Accessed 27.11.2023

[24] Microsoft (2023): Microsoft and OpenAI extend partnership. https://blogs.microsoft.com. Accessed 29.11.2023

[25] OpenAI (2023): OpenAI and Microsoft extend partnership. https://openai.com. Accessed 29.11.2023

[26] Metz, C. (2023): The fear and tension that led to Sam Altman’s ouster at OpenAI. The New York Times. Accessed 27.11.2023

[27] Ramel, D. (2023): How Important Is OpenAI Tech to Azure Cloud? Ask Nadella. https://virtualizationreview.com. Accessed 30.11.2023

[28] Microsoft (2023): Microsoft and OpenAI extend partnership. https://blogs.microsoft.com. Accessed 29.11.2023

[29] Romeo, N. (2022): Can Companies Force Themselves to Do Good. The New Yorker. Accessed 05.11.2023

[30] Thoren, B. (2023): Don’t believe the podium talk at Davos–but capitalism is really starting to change. Fortune. Accessed 28.11.2023

[31] Roose, K. (2023): A.I. Belongs to the Capitalists Now. The New York Times. Accessed 24.11.2023

[32] OpenAI (2023): OpenAI announces leadership transition. https://openai.com/blog/. Accessed 20.11.2023

[33] Victor, J., Palazzolo, S., Gardizy, A. & Efrati, A. (2023): Before OpenAI Ousted Altman, Employees Disagreed Over AI 'Safety'. The Information. Accessed 20.11.2023

[34] Levine, M. (2023): OpenAI Is Still an $86 Billion Nonprofit. Bloomberg. Accessed 28.11.2023

[35] Dave, P. (2023): How OpenAI's Bizarre Structure Gave 4 People the Power to Fire Sam Altman. Wired. Accessed 19.12.2023

[36] Dave, P. (2023): How OpenAI's Bizarre Structure Gave 4 People the Power to Fire Sam Altman. Wired. Accessed 19.12.2023

[37] Dave, P. (2023): How OpenAI's Bizarre Structure Gave 4 People the Power to Fire Sam Altman. Wired. Accessed 19.12.2023

[38] Bansal, T. (2023, October 12): Does Open AI’s Non-Profit Ownership Structure Actually Matter?. Forbes. Accessed 29.11.2023

[39] The Economist (2023): Inside OpenAI’s weird governance structure. Accessed 18.12.2023

[40] Phan, T. (2023): OpenAI Saga: The Best Links and Memes. Readtrung.com. Accessed 26.11.2023

[41] OpenAI (2019): OpenAI LP Blog. https://openai.com. Accessed 29.11.2023

[42] see Q.ai. (2023): Microsoft Considers Investing $10 Billion In OpenAI, Maker Of ChatGPT: Here’s What It Means For Investors. Forbes. Accessed 24.11.2023 and Isige, J. (2023): OpenAI Went From Non-Profit To A For-Profit Company With Profit Caps — Now That Cap May Not Last Long. Business2Community. Accessed 02.12.2023

[43] Levine, M. (2023): Who Controls OpenAI? Bloomberg.com. Accessed 20.12.2023

[44] The Economist (2023): Inside OpenAI’s weird governance structure. Accessed 18.12.2023

[45] Efrati, A., Gardizy, A. & Woo, E. (2023): Altman Agrees to Internal Investigation Upon Return to OpenAI. The Information. Accessed 23.11.2023

[46] see Dave, P. (2023): How OpenAI's Bizarre Structure Gave 4 People the Power to Fire Sam Altman. Wired. Accessed 19.11.2023 and Levine, M. (2023): Who Controls OpenAI? Bloomberg.com. Accessed 18.12.2023

[47] Efrati, A., Gardizy, A. & Woo, E. (2023): Altman Agrees to Internal Investigation Upon Return to OpenAI. The Information. Accessed 23.11.2023

[48] Dickson, B. (2020): The implications of Microsoft’s exclusive GPT-3 license. BDTechtalks. Accessed 15.12.2023

[49] Novet, J. (2023): Microsoft’s $13 billion bet on OpenAI carries huge potential along with plenty of uncertainty. CNBC. Accessed 27.11.2023

[50] The Economist (2023): Inside OpenAI’s weird governance structure. Accessed 18.12.2023

[51] Mawira, B. (2023): OpenAI’s Board to Determine AGI: Implications for Microsoft and the World. MSN.com. Accessed 21.11.2023

[52] Victor, J. & Mascarenhas, N. (2023): Microsoft to Become Non-Voting Observer in Latest Shake-up of OpenAI Board. The Information. Accessed 29.11.2023

[53] OpenAI (2019): OpenAI LP Blog. https://openai.com. Accessed 29.11.2023

[54] Dave, P. (2023): How OpenAI's Bizarre Structure Gave 4 People the Power to Fire Sam Altman. Wired. Accessed 19.12.2023

[55] Altman, S. (2023): What I Wish Someone Had Told Me. https://blog.samaltman.com/. Accessed 09.01.2024

--

Purpose

Purpose serves a global community of entrepreneurs, investors, and citizens who believe companies should remain independent and purpose-driven.