'The OpenAI Files,' a detailed investigative report revealing the little-known reality of OpenAI and the dangers posed by the gap between its non-profit ideals and its for-profit reality, has been released



This article, originally posted in Japanese at 20:00 on June 19, 2025, may contain some machine-translated parts.

OpenAI, known for developing the advanced conversational AI 'ChatGPT,' was founded in 2015, amid rapid progress in artificial intelligence and robotics, as a non-profit research institute dedicated to open-sourcing AI and preventing its misuse. Its corporate structure has since drawn close attention: in December 2024 the company announced a plan to transition to a for-profit business, and in May 2025 it announced that it would abandon that plan and keep the organization under non-profit control. 'The OpenAI Files,' jointly created by the non-profit technology watchdog organizations 'The Midas Project' and 'The Tech Oversight Project,' reports the results of roughly a year of research into OpenAI's internal structure.

The OpenAI Files
https://www.openaifiles.org/



Although OpenAI is a non-profit organization, ChatGPT has grown into a paid service with a large user base, and for a long time OpenAI took the form of a 'non-profit organization with a for-profit subsidiary.' Developing AI models is extremely costly, however, and a non-profit cannot prioritize shareholder returns, which limits its ability to raise capital, so CEO Sam Altman and others sought to convert OpenAI into a commercial company. When the company officially announced its plan to switch to a for-profit-led structure, co-founder Elon Musk and AI rival Meta, along with former OpenAI employees, Nobel laureates, law professors, and civil society organizations, petitioned governments and courts to block the conversion on safety grounds. As a result, OpenAI announced that it would abandon the plan and keep the organization under non-profit control.

OpenAI decides to abandon commercialization and continue to be managed by a non-profit organization - GIGAZINE



Although OpenAI will continue to operate under non-profit control rather than as a for-profit company, it still has a for-profit subsidiary beneath the non-profit, and its valuation exceeds $300 billion (about 43 trillion yen). The Midas Project and The Tech Oversight Project argue that 'OpenAI is abandoning its legal obligations to humanity and transforming itself into a for-profit company in exchange for the right to bring unlimited profits to investors,' and they conducted an investigation focused on the organizational restructuring under way at OpenAI.

The investigative report, 'The OpenAI Files,' covers four areas: organizational restructuring, CEO integrity, transparency and safety, and conflicts of interest.

・Organizational restructuring
Beyond the attempted conversion itself, the change to the 'profit cap' is a key point in the relationship between the non-profit organization and its for-profit subsidiary. In 2019, OpenAI established a rule capping investors' returns at 100 times their investment. The restriction was meant to prevent a scenario in which, should OpenAI succeed in developing AI that can automate all human labor, only those who had invested in AI would capture the wealth extracted from humanity. In 2023, however, it was reported that OpenAI had quietly changed the rules to allow the cap to be raised by 20% each year, and in 2025 OpenAI moved toward becoming a for-profit company with no profit cap at all, a major departure from its original stance, the watchdog groups point out.
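To make the arithmetic concrete, the short sketch below (our own back-of-the-envelope illustration, not code from the report) compounds the reported 20% annual raise on the original 100x cap. Under that rule alone, the cap would roughly sextuple within a decade.

```python
# Back-of-the-envelope illustration (assumption: the 20% raise compounds
# annually, as the 2023 reporting suggests; the report gives no formula).

def projected_cap(initial_cap: float = 100.0,
                  annual_raise: float = 0.20,
                  years: int = 10) -> list[float]:
    """Return the return-multiple cap for year 0 through `years`."""
    caps = [initial_cap]
    for _ in range(years):
        caps.append(caps[-1] * (1 + annual_raise))
    return caps

if __name__ == "__main__":
    for year, cap in enumerate(projected_cap()):
        print(f"Year {year:2d}: {cap:7.1f}x cap")
    # Year 10 prints ~619.2x: the original 100x limit grows by a factor
    # of 1.2**10 ≈ 6.19 in ten years, steadily hollowing out the cap.
```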

According to the report, internal emails from around OpenAI's founding show that the company's biggest concern was developing artificial general intelligence (AGI) before Google, and the profit cap is believed to have been imposed so that the enormous wealth generated by AGI, once developed, would be distributed to the public. The cap may have been loosened because far more companies now compete in the AGI market than at OpenAI's founding, making a single-company monopoly less likely and the restriction seemingly unnecessary. Nevertheless, the watchdog groups are critical: 'The intense pressure from investors to remove the profit cap from OpenAI should make it clear how bad a deal this could be for humanity.'



・CEO integrity
The watchdog groups have raised concerns about OpenAI CEO Sam Altman's leadership practices and misleading statements. At his first startup, senior executives twice tried to have Altman removed as CEO, citing 'deceptive and confusing behavior.' At Y Combinator, where he previously served as president, he was reportedly dismissed over chronic absenteeism and prioritizing his personal interests.

The watchdog groups examined past reporting and internal documents bearing on Altman's credibility and integrity, citing cases in which Altman made false claims and instances of dishonest or manipulative behavior that were presented to the board of directors. Dozens of former OpenAI employees have reportedly resigned out of disillusionment with Altman's dishonesty, and former senior employees Dario Amodei and Ilya Sutskever have described his behavior as 'abusive.'



Further specific cases and reports concerning Altman are summarized on the following page. Taken together, the watchdog groups said, 'We question Mr. Altman's integrity and whether he is suitable to oversee OpenAI.'

CEO Integrity
https://www.openaifiles.org/ceo-integrity

・Transparency and safety
According to the American business magazine Fortune, OpenAI promised in 2023 to devote 20% of its computing resources to a newly formed AI safety research team, but in reality the resources were never allocated; in 2024 it was reported that the safety team's repeated requests for compute were refused. Various media outlets have also reported a marked decline in the company's attention to safety and transparency, with safety evaluations of AI models rushed to meet product launch deadlines and former employees claiming the company barred staff from warning regulators about safety risks.

Concerns have also been raised about OpenAI's organizational culture: the watchdog groups report that the company pressured employees into signing restrictive non-disclosure agreements under which even departing employees would forfeit all of their vested equity if they criticized the company.



・Conflicts of interest
OpenAI's board members are seen as having potential conflicts of interest: many invest in, or directly manage, companies that do business with OpenAI or benefit broadly from its influence on the industry. CEO Altman, for example, has invested heavily in companies that partner with OpenAI or are rumored to be doing so; board chairman Bret Taylor runs an AI startup built on OpenAI's models; and board member Adebayo Ogunlesi runs a $30 billion AI infrastructure fund. Even if OpenAI remains entirely non-profit, some board members stand to benefit substantially, if indirectly, from its success.

If OpenAI were to restructure from a profit-capped corporation into a public benefit corporation without profit limits, it could unlock tens of billions of dollars in new investment and accelerate commercialization, which would be hugely beneficial to the businesses its board members run or invest in. Such an outcome could harm humanity as a whole, which is precisely what OpenAI was founded to prevent.

The watchdog groups point out that OpenAI's 'advancement of charitable purposes' may conflict with board members' 'own economic interests,' creating conflicts of interest. Altman himself has acknowledged the risk that conflicted board members pose, having argued in the past that a board member who founded a competing AI lab should leave the board.

'OpenAI has repeatedly crumbled under market pressure despite its lofty promises of safe and responsible AI development. OpenAI's reorganization is both a final exposé of a company that can no longer uphold its founding myth and a natural experiment in what happens when idealism and economic power collide. We believe OpenAI still has a small window of opportunity to reclaim its mission,' the watchdog groups write in the report.

in Software, Posted by log1e_dh