The organization has a complex corporate structure. As of April 2025, it is led by the non-profit OpenAI, Inc.,[1] founded in 2015 and registered in Delaware, which has multiple for-profit subsidiaries including OpenAI Holdings, LLC and OpenAI Global, LLC.[10] Microsoft has invested US$13 billion in OpenAI and is entitled to 49% of OpenAI Global, LLC's profits, capped at an estimated 10x its investment.[11][12] Microsoft also provides computing resources to OpenAI through its cloud platform, Microsoft Azure.[13]
In 2023 and 2024, OpenAI faced multiple lawsuits for alleged copyright infringement against authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board. Throughout 2024, roughly half of then-employed AI safety researchers left OpenAI, citing the company's prominent role in an industry-wide problem.[14][15]
History
According to OpenAI's charter, its founding mission is "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity."[6]
Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence.[27][28] OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly".[23] The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible",[23] and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest."[29] Co-chair Sam Altman expected a decades-long project that eventually surpasses human intelligence.[30]
Vishal Sikka, former CEO of Infosys, stated that an "openness", where the endeavor would "produce results generally in the greater interest of humanity", was a fundamental requirement for his support; and that OpenAI "aligns very nicely with our long-held values" and their "endeavor to do purposeful work".[31] Cade Metz of Wired suggested that corporations such as Amazon might be motivated by a desire to use open-source software and data to level the playing field against corporations such as Google and Facebook, which own enormous supplies of proprietary data. Altman stated that Y Combinator companies would share their data with OpenAI.[30]
2016–2018: Non-profit beginnings
Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of great AI researchers.[16] Brockman was able to hire nine of them as the first employees in December 2015.[16] OpenAI did not pay AI researchers salaries comparable to those of Facebook or Google,[16] nor did it offer the stock options that AI researchers typically receive. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016.[32] OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission."[16] OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead.[16]
In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research.[33] Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models with the capability of reducing processing time from six days to two hours.[34][35] In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications.[36][37][38][39]
In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone.[40] In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks.[41]
In February 2019, GPT-2 was announced, which gained attention for its ability to generate human-like text.[43]
2019: Transition from non-profit
In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with the profit being capped at 100 times any investment.[44] According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company.[41] Many top researchers work for Google Brain, DeepMind, or Facebook, which can offer stock options that a nonprofit cannot.[45] Before the transition, public disclosure of the compensation of top employees at OpenAI was legally required.[46]
The company then distributed equity to its employees and partnered with Microsoft,[47] announcing an investment package of $1 billion into the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft.[48][49][50]
OpenAI Global, LLC then announced its intention to commercially license its technologies.[51] It planned to spend $1 billion "within five years, and possibly much faster".[52] Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence.[53]
The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC.[41] In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflict of interest.[45] Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI.[54]
2020–2023: ChatGPT, DALL-E, partnership with Microsoft
In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at answering questions in natural language, but it can also translate between languages and coherently generate improvised text. OpenAI also announced that an associated API, named simply "the API", would form the heart of its first commercial product.[55]
Eleven employees left OpenAI, mostly between December 2020 and January 2021, in order to establish Anthropic.[56]
In 2021, OpenAI introduced DALL-E, a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture.[57]
The release of ChatGPT was a major event in the AI boom. By January 2023, ChatGPT had become what was then the fastest-growing consumer software application in history, gaining over 100 million users in two months.[58]
In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days.[59] According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024.[60]
In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value.[61] On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, part of which funds OpenAI's use of Microsoft's cloud-computing service Azure.[62][63] Rumors of this deal suggested that Microsoft may receive 75% of OpenAI's profits until it recoups its investment, along with a 49% stake in the company.[64] The investment is believed to be part of Microsoft's efforts to integrate OpenAI's ChatGPT into the Bing search engine. Google announced a similar AI application (Bard) after ChatGPT was launched, fearing that ChatGPT could threaten Google's place as a go-to source for information.[65][66]
On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products.[67]
On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners, and his co-founding of the AI startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI.[68]
On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus.[69]
On May 22, 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence.[70] They consider that superintelligence could happen within the next 10 years, allowing a "dramatically more prosperous future" and that "given the possibility of existential risk, we can't just be reactive". They propose creating an international watchdog organization similar to the IAEA to oversee AI systems above a certain capability threshold, suggesting that relatively weak AI systems below that threshold should not be overly regulated. They also call for more technical safety research for superintelligences, and ask for more coordination, for example through governments launching a joint project which "many current efforts become part of".[70][71]
In July 2023, OpenAI launched the superalignment project, aiming to determine, within four years, how to align future superintelligences by automating alignment research using AI.[72]
In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools.[73]
On September 21, 2023, Microsoft began rebranding all variants of its Copilot to Microsoft Copilot, including the former Bing Chat and the Microsoft 365 Copilot.[74] In December 2023, Microsoft added Copilot to many installations of Windows 11 and Windows 10, and released a standalone Microsoft Copilot app for Android[75] and, shortly thereafter, for iOS.[76]
On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries.[77] On November 14, 2023, OpenAI announced they temporarily suspended new sign-ups for ChatGPT Plus due to high demand.[78] Access for newer subscribers re-opened a month later on December 13.[79]
2024
OpenAI began collaborating with Broadcom in 2024 to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on its 3 nm process node. The initiative was intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and in high demand.[80][81][82]
In February, amidst SEC probes and investigations into CEO Altman's communications,[84] OpenAI announced Sora, its text-to-video model.[85][86]
On February 29, Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization—a case OpenAI dismissed as "incoherent" and "frivolous," though Musk later revived legal action against Altman and others in August.[87][88][89][90]
In June, Apple Inc. signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new Apple Intelligence initiative.[95][96] Paul Nakasone then joined the board of OpenAI.[97] OpenAI also acquired Multi, a startup focused on remote collaboration.[98]
In August, cofounder John Schulman left to join rival startup Anthropic, and OpenAI's president Greg Brockman took extended leave until November.[99][100]
In September, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a House of Lords committee,[101] and CTO Mira Murati left the company.[102][103]
In October, OpenAI completed a $6.6 billion capital raise with a $157 billion valuation including investments from Microsoft, Nvidia, and SoftBank.[104] It also acquired the domain Chat.com.[105][106]
In December, the company launched the Sora model.[107][108] It also launched OpenAI o1, an early reasoning model that was internally codenamed "Strawberry".[109] Additionally, ChatGPT Pro—a $200/month subscription service offering unlimited o1 access and enhanced voice features—was introduced, and preliminary benchmark results for the upcoming OpenAI o3 models were shared.[110]
2025
Investments
On January 21, 2025, Donald Trump announced The Stargate Project, a joint venture between OpenAI, Oracle, SoftBank and MGX to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion. The partners plan to fund the project over the next four years.[111]
In March, OpenAI reached a deal with CoreWeave to acquire $350 million worth of CoreWeave shares and access to AI infrastructure, in return for $11.9 billion paid over five years. Microsoft was already CoreWeave's biggest customer in 2024.[112] Alongside their other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future initial public offering by OpenAI, while ensuring Microsoft's continued access to advanced AI models.[113]
In April, OpenAI raised $40 billion at a $300 billion post-money valuation, marking the largest private technology deal on record. The financing round was led by SoftBank, with other participants including Microsoft, Coatue, Altimeter, and Thrive.[114][115]
On April 9, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference.[116]
On May 21, OpenAI announced the $6.5 billion acquisition of io, an AI hardware start-up founded by former Apple designer Jony Ive in 2024.[117][118][119] The deal was the company's largest acquisition to date.[120][121]
In June, OpenAI began renting Google Cloud's Tensor Processing Units (TPUs) to support ChatGPT and related services, marking its first meaningful use of non‑Nvidia AI chips.[122]
In July, the United States Department of Defense announced that OpenAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and xAI.[123] In the same month, the company made a deal with the UK Government to use ChatGPT and other AI tools in public services.[124][125] OpenAI subsequently began a $50 million fund to support nonprofit and community organizations.[126]
In September, OpenAI agreed to acquire the product testing startup Statsig for $1.1 billion in an all-stock deal and appointed Statsig's founding CEO Vijaye Raji as OpenAI's chief technology officer of applications.[127] The company also announced development of an AI-driven hiring service designed to rival LinkedIn.[128]
Products
On January 23, OpenAI released Operator, an AI agent and web-automation tool that accesses websites to execute goals defined by users. The feature was available only to Pro users in the United States.[129][130] Nine days later, OpenAI released its deep research agent, which scored 27% accuracy on the benchmark Humanity's Last Exam (HLE).[131] Altman later stated that GPT-4.5 would be the last model without full chain-of-thought reasoning.[132][133]
In July 2025, reports indicated that artificial intelligence models by both OpenAI and Google DeepMind solved mathematics problems at the level of top-performing students in the International Mathematical Olympiad. OpenAI's large language model was able to achieve gold medal-level performance, reflecting significant progress in AI's reasoning abilities.[134]
Management
Key employees
CEO and co-founder: Sam Altman, former president of the start-up accelerator Y Combinator
In the early years before his 2018 departure, Musk posed the question: "What is the best thing we can do to ensure the future is good? We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing AI in a way that is safe and is beneficial to humanity." He acknowledged that "there is always some risk that in actually trying to advance (friendly) AI we may create the thing we are concerned about"; but nonetheless, that the best defense was "to empower as many people as possible to have AI. If everyone has AI powers, then there's not any one person or a small set of individuals who can have AI superpower."[135]
Musk and Altman's counterintuitive strategy—that of trying to reduce the potential harm of AI by giving everyone access to it—is controversial among those concerned with existential risk from AI. Philosopher Nick Bostrom said, "If you have a button that could do bad things to the world, you don't want to give it to everyone."[28] During a 2016 conversation about technological singularity, Altman said, "We don't plan to release all of our source code" and mentioned a plan to "allow wide swaths of the world to elect representatives to a new governance board". Greg Brockman stated, "Our goal right now... is to do the best thing there is to do. It's a little vague."[146]
Conversely, OpenAI's initial decision to withhold GPT-2 around 2019, due to a wish to "err on the side of caution" in the presence of potential misuse, was criticized by advocates of openness. Delip Rao, an expert in text generation, stated, "I don't think [OpenAI] spent enough time proving [GPT-2] was actually dangerous." Other critics argued that open publication was necessary to replicate the research and to create countermeasures.[147]
In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers.[148][149] OpenAI also planned a restructuring to operate as a for-profit company.[150]
In March 2025, OpenAI made a policy proposal for the Trump administration to preempt pending AI-related state laws with federal laws.[151] According to OpenAI, "This framework would extend the tradition of government receiving learnings and access, where appropriate, in exchange for providing the private sector relief from the 781 and counting proposed AI-related bills already introduced this year in US states."[152]
The emergence of DeepSeek has led major Chinese tech firms such as Baidu to embrace an open-source strategy, intensifying competition with OpenAI. Altman acknowledged the uncertainty regarding U.S. government approval for AI cooperation with China but emphasized the importance of fostering dialogue between technological leaders in both nations.[156] In response to DeepSeek, OpenAI overhauled its security operations to better guard against industrial espionage, particularly amid allegations that DeepSeek had improperly copied OpenAI's distillation techniques.[157]
Financial performance and cash burn
OpenAI has experienced rapid revenue growth since the launch of ChatGPT, with the company's annualized revenue reaching $12 billion by July 2025, representing approximately $1 billion in monthly revenue.[158][159] This increase from $3.7 billion in 2024 revenue demonstrates the widespread adoption of the company's AI products across consumer and enterprise markets.[160] The revenue growth is primarily driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025, up from 15.5 million at the end of 2024, alongside a rapidly expanding enterprise customer base that grew to 5 million business users.[161][162]
Despite this substantial revenue growth, OpenAI's cash burn remains high due to the intensive computational costs required to train and run large language models. The company's projected cash burn for 2025 has been revised upward to approximately $8 billion, representing a $1 billion increase from earlier projections made at the beginning of the year.[163][164] This elevated spending reflects rising infrastructure costs, model training expenses, and significant investments in research and development to maintain competitive advantage in the rapidly evolving AI landscape.[165]
Looking ahead, OpenAI has revised upward its long-term spending projections, now expecting to burn approximately $115 billion through 2029—roughly $80 billion more than the company's previous estimates.[166] The annual cash burn is projected to escalate significantly, with spending expected to reach $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028.[167][168] These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training programs, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development.[169]
The company's financial strategy prioritizes market expansion and technological advancement over near-term profitability, with OpenAI targeting cash-flow-positive operations by 2029 and projecting revenue of approximately $200 billion by 2030.[167] This aggressive spending trajectory underscores both the enormous capital requirements of scaling cutting-edge AI technology and OpenAI's commitment to maintaining its position as a leader in the artificial intelligence industry.[170]
In June 2020, OpenAI announced a multi-purpose API which it said was "for accessing new AI models developed by OpenAI" to let developers call on it for "any English language AI task".[171][172]
On November 17, 2023, Sam Altman was removed as CEO when its board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo and Tasha McCauley) cited a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO. Greg Brockman, the president of OpenAI, was also removed as chairman of the board[173][174] and resigned from the company's presidency shortly thereafter.[175] Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry [pl], and researcher Szymon Sidor.[176][177]
On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure.[178] Although Altman himself spoke in favor of returning to OpenAI, he has since stated that he considered starting a new company and bringing former OpenAI employees with him if talks to reinstate him didn't work out.[179] The board members agreed "in principle" to resign if Altman returned.[180] On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by Emmett Shear as interim CEO.[181] The board initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman, and proposed a merger of the two companies, but both offers were declined.[182]
On November 20, 2023, Microsoft CEO Satya Nadella announced Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that they were still committed to OpenAI despite recent events.[183] Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him.[184] About 738 of OpenAI's 770 employees, including Murati and Sutskever, signed an open letter stating they would quit their jobs and join Microsoft if the board did not rehire Altman and then resign.[185][186] This prompted OpenAI investors to consider legal action against the board as well.[187] In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time.[188]
On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles, along with a reconstructed board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining.[189][190] Concerns about Altman's response to a reported internal research breakthrough, specifically its potential safety implications, had reportedly been raised with the company's board shortly before Altman's firing.[191] On November 29, 2023, OpenAI announced that an unnamed Microsoft employee had joined the board as a non-voting member to observe the company's operations;[192] Microsoft relinquished the observer seat in July 2024.[193]
OpenAI has been criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably from ChatGPT's training data and outputs. However, these texts usually contained detailed descriptions of various types of violence, including sexual violence. A Time investigation revealed that OpenAI began sending snippets of data to Sama as early as November 2021. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, while Sama redistributed the equivalent of between $1.32 and $2.00 per hour post-tax to its annotators. Sama's spokesperson said that the $12.50 also covered other implicit costs, including infrastructure expenses, quality assurance, and management.[194]
Lack of technological transparency
In March 2023, the company was also criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this strategic turn. OpenAI's former chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models was increasingly risky, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years.[195]
Non-disparagement agreement
Before May 2024, OpenAI required departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI and acknowledging the existence of the agreement. Daniel Kokotajlo, a former employee, publicly stated that he forfeited his vested equity in OpenAI in order to leave without signing the agreement.[196][197] Sam Altman stated that he was unaware of the equity cancellation provision, and that OpenAI never enforced it to cancel any employee's vested equity.[198] However, leaked documents and emails refute this claim.[199] On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement.[200]
Proposed shift from nonprofit control
OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit into a Delaware-based public benefit corporation (PBC), and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, getting equity in return, and would use it to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investments, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled.[201]
The plan has been criticized by former employees. A legal letter named "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, stating that the restructuring is illegal and would remove governance safeguards from the nonprofit and the attorneys general.[202] The letter argues that OpenAI's complex structure was deliberately designed to remain accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, whatever the amount of equity that it could get in exchange.[203] PBCs can choose how they balance their mission with profit-making. Controlling shareholders have a large influence on how closely a PBC sticks to its mission.[204][203]
According to UCLA Law staff, to change its purpose, OpenAI would have to prove that its current purposes have become unlawful, impossible, impracticable, or wasteful.[205] Elon Musk, who had initiated a lawsuit against OpenAI and Altman in August 2024 alleging the company violated contract provisions by prioritizing profit over its mission, reportedly used this lawsuit to stop the restructuring plan.[204] On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring willingness to match or exceed any better offer.[206][207] The offer was rejected on 14 February 2025, with OpenAI stating that it was not for sale,[208] but the offer complicated Altman's restructuring plan by suggesting a lower bar for how much the nonprofit should be valued.[207]
In May 2025, the nonprofit renounced plans to cede control of OpenAI after outside pressure. However, the capped-profit still plans to transition to a public benefit corporation,[209] which critics said would diminish the nonprofit's control.[210]
In 2021, OpenAI developed a speech recognition tool called Whisper. OpenAI used it to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription of YouTube videos raised concerns within OpenAI employees regarding potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman. The resulting dataset proved instrumental in training GPT-4.[218]
In February 2024, The Intercept, as well as Raw Story and Alternate Media Inc., filed lawsuits against OpenAI alleging copyright infringement.[219][220] The lawsuits are said to have charted a new legal strategy for digital-only publishers to sue OpenAI.[221]
In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the Chat GPT service".[223]
In late April 2024, NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation. Text generated by ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process. A request to correct the mistake was denied. Additionally, OpenAI claimed that it could disclose neither the recipients of ChatGPT's output nor the sources of the data used.[224]
Use by military
OpenAI was criticized for lifting its ban on using ChatGPT for "military and warfare". Until January 10, 2024, its "usage policies" included a ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare". Its new policies prohibit "[using] our service to harm yourself or others" and using it to "develop or use weapons".[225][226]
In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT.[231][232]
Data scraping
In June 2023, a lawsuit filed in San Francisco, California, by sixteen anonymous plaintiffs claimed that OpenAI scraped 300 billion words online without consent and without registering as a data broker.[233] The plaintiffs also claimed that OpenAI and Microsoft, its partner and customer, continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models.[234]
Suchir Balaji, a former researcher at OpenAI, was found dead in his San Francisco apartment on November 26, 2024. Independent investigations carried out by the San Francisco Police Department (SFPD) and the San Francisco Office of the Chief Medical Examiner (OCME) concluded that Balaji shot himself.[237]
The death occurred 34 days after a New York Times interview in which he accused OpenAI of violating copyright law in developing its commercial LLMs, one of which (GPT-4) he had helped engineer. He was also a likely witness in a major copyright trial against the AI company, and was one of several of its current or former employees named in The New York Times's court filings as potentially having documents relevant to the case. The death led to speculation and conspiracy theories suggesting he had been deliberately silenced.[237][238] Elon Musk, Tucker Carlson, California Congressman Ro Khanna, and San Francisco Supervisor Jackie Fielder have publicly echoed Balaji's parents' skepticism and calls for an investigation.[239][240]
In February 2025, the OCME autopsy and SFPD police reports were released. A joint letter from both agencies to the parents' legal team noted that he had purchased the firearm used two years prior to his death, and had recently searched for brain anatomy information on his computer. The letter also highlighted that his apartment's only entrance was dead-bolted from inside with no signs of forced entry.[237]
Privacy backlash over ChatGPT's search results
In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines such as Google through an experimental "share with search engines" feature. The opt-in toggle was intended to let users make specific chats discoverable, but when users enabled it accidentally while sharing links, conversations containing personal details such as names, locations, and intimate topics appeared in search results. OpenAI announced the feature's permanent removal on August 1, 2025, and began coordinating with search providers to remove the exposed content, emphasizing that the issue was a design flaw that heightened privacy risks rather than a security breach. CEO Sam Altman acknowledged the issue in a podcast, noting that users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI systems handling sensitive data.[241][242][243]
Wrongful-death lawsuit over ChatGPT safety (2025)
In August 2025, the parents of a 16-year-old boy who died by suicide filed a wrongful death lawsuit against OpenAI (and CEO Sam Altman), alleging that months of conversations with ChatGPT about mental health and methods of self-harm contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections (including updated crisis response behavior and parental controls). Coverage described it as a first-of-its-kind wrongful death case targeting the company's chatbot. The complaint was filed in California state court in San Francisco.[244]
See also
Anthropic – American artificial intelligence research company