The OpenAI Board’s Role in Determining the Achievement of AGI

OpenAI’s nonprofit board of directors will decide when the company has achieved artificial general intelligence (AGI), which OpenAI defines as a highly autonomous system that outperforms humans at most economically valuable work. OpenAI’s for-profit arm is legally bound to the nonprofit’s mission, and once the board determines AGI has been reached, that technology will be carved out of the intellectual property licenses and other commercial agreements with Microsoft, which apply only to pre-AGI technology.

Because the very concept of AGI is still debated, it is striking that six individuals on OpenAI’s board will make this monumental determination, one with significant implications not only for OpenAI but also for its major investor, Microsoft.

Details of OpenAI’s structure were highlighted by developer advocate Logan Kilpatrick in response to a comment by Microsoft president Brad Smith. Smith had sought to portray OpenAI as more trustworthy because of its nonprofit status, despite the Wall Street Journal reporting that OpenAI was seeking a valuation of up to $90 billion in a sale of existing shares. Kilpatrick pointed to OpenAI’s website, which details its complex nonprofit/capped-profit structure: the for-profit subsidiary, OpenAI Global, LLC, may make and distribute profits but remains subject to the nonprofit’s mission.

Despite the strong partnership between OpenAI and Microsoft, Microsoft’s involvement may diminish once OpenAI reaches AGI. OpenAI CEO Sam Altman recently expressed enthusiasm about building AGI in collaboration with Microsoft, telling the Financial Times that the partnership was thriving and that he anticipated further significant investment.

OpenAI’s structure reveals that Microsoft accepted capped equity and agreed to leave AGI technologies for OpenAI’s nonprofit and humanity. An OpenAI spokesperson emphasized that the company’s mission is to build AGI that is safe and beneficial for everyone. The board consults diverse perspectives from experts and stakeholders to inform its decisions.

The nonprofit board comprises Greg Brockman, Ilya Sutskever, Sam Altman, and non-employees Adam D’Angelo, Tasha McCauley, and Helen Toner. Some board members have connections to the Effective Altruism movement, which has faced criticism. However, OpenAI clarified that its board members are not active effective altruists and that their interactions with the EA community have focused on AI safety.

Legal experts find it unusual for a board to rule on something like the achievement of AGI, but they recognize that it fits OpenAI’s mission of providing safe AGI that benefits humanity. The board’s fiduciary duty runs to that mission rather than to investors’ interests.

Some experts doubt AGI’s near-term feasibility and question OpenAI’s focus on it, suggesting it diverts attention from the current impact of AI models. OpenAI’s definition of AGI remains vague, describing it as a point along the continuum of intelligence rather than a clear-cut achievement.

The relationship between OpenAI and Microsoft could produce conflicts, especially where OpenAI’s nonprofit mission clashes with Microsoft’s for-profit interests. Enforcing a cap on profits is straightforward; balancing profit-seeking against the nonprofit mission is not. How such conflicts will be managed remains uncertain, raising questions about the future of this unusual partnership.