In a development that could reshape the competitive landscape of artificial intelligence, OpenAI has raised concerns that Elon Musk may be covertly influencing legislative and lobbying initiatives designed to hinder the company's transition to a for-profit model.
At the heart of this escalating controversy is a relatively obscure yet newly active advocacy organization known as the Coalition for AI Nonprofit Integrity (CANI), which recently supported a California bill that, in its initial form, would have severely complicated OpenAI’s corporate restructuring plans.
Although the bill has been revised, OpenAI suspects that individuals associated with its former co-founder may be working behind the scenes to undermine its future. These allegations surfaced in a letter obtained by POLITICO, authored by OpenAI’s attorney Ann O’Leary, who previously served as chief of staff to California Governor Gavin Newsom.
In the correspondence, OpenAI explicitly questions whether CANI's initiatives are being orchestrated in conjunction with Elon Musk, the billionaire CEO of Tesla, SpaceX, and xAI, the last of these a direct competitor of OpenAI in the AI sector. O'Leary's letter asserts that Musk has already initiated a 'coordinated campaign utilizing bad-faith tactics, including numerous lawsuits,' and implies that CANI's public statements echo the rhetoric and themes of Musk's ongoing legal efforts to obstruct OpenAI's corporate transition.
Elon Musk, a founding donor and early supporter of OpenAI, notably distanced himself from the organization in 2018 due to disagreements regarding its leadership and future direction. Since that time, their relationship has deteriorated further, leading to a prominent lawsuit Musk initiated in early 2024, claiming that OpenAI breached its nonprofit charter by adopting a for-profit model that, in his opinion, undermined its initial mission to create artificial general intelligence (AGI) for the benefit of humanity.
In response to OpenAI's insinuations, CANI spokesperson Becky Warren denied any financial ties to Musk. In correspondence with POLITICO, Warren asserted that 'the coalition is not funded by Elon Musk,' characterizing CANI as a 'grassroots' organization supported by individuals such as Larry Lessig, a Harvard law professor and former presidential candidate, as well as the family of the late OpenAI engineer Suchir Balaji.
Balaji, who passed away under disputed circumstances last year, has emerged as a focal point in the ongoing AI ethics discourse. Musk has previously raised questions regarding the nature of Balaji’s death, suggesting it may not have been a suicide, despite police findings, and the engineer’s family has publicly requested his assistance in advocating for a more thorough investigation.
Musk has not commented on OpenAI's allegations and did not respond to multiple emailed requests for comment. Although there is currently no direct evidence connecting him to CANI, the overlap in their stated objectives, namely opposing OpenAI's transition to a profit-driven model and prioritizing public welfare, has raised suspicions.
O'Leary's letter highlights these parallels and argues that clarification is essential for transparency and public accountability. For its part, CANI maintains that its focus is on principles rather than individuals. Warren emphasized that the group is not centered on Musk and called OpenAI's allegations a diversion.
She reiterated that the coalition is committed to the principle that any organization established to create human-level AI for the benefit of society must remain dedicated to that mission, irrespective of pressures for expansion or profit.
Additionally, she said the coalition has received moral support from prominent figures in the AI community, including Nobel laureate Geoffrey Hinton, Yann LeCun, the chief AI scientist at Meta, and Stuart Russell of UC Berkeley.
Furthermore, labor unions and nonprofit oversight organizations have initiated lobbying efforts directed at state officials, highlighting OpenAI’s accumulation of assets under a tax-advantaged nonprofit framework. They contend that the company’s AI models, data, and technological infrastructure should not be appropriated as exclusive resources of a for-profit organization, which could disproportionately benefit a select group of investors while straying from the public-good mission upon which it was established.
The argument is further supported by CANI’s own website, which explicitly states that ‘OpenAI’s hundreds of billions in assets must remain in the public trust.’ This encompasses not only its AI models such as GPT-4 and Sora but also the research foundation, safety protocols, and scientific advancements developed under its nonprofit umbrella. The current tension illustrates a significant divide within Silicon Valley’s AI sector — between those who perceive AGI as a potential public utility, similar to water or electricity, and those who regard it as the next major commercial opportunity.
Musk, who has pursued a for-profit path with xAI, finds himself in a contradictory position, advocating nonprofit principles for OpenAI, a company he was instrumental in founding. As the legal and legislative disputes progress, it is evident that the struggle over OpenAI's future has moved beyond boardrooms and codebases into the public sphere, drawing in lawmakers, regulators, legacy supporters, whistleblowers, and influential figures within the industry.
Whether Elon Musk is genuinely orchestrating resistance from behind the scenes remains unverified. However, the mere perception of this, bolstered by OpenAI’s aggressive legal approach, has sparked a narrative of betrayal, ambition, and strategic retribution that is poised to influence the AI landscape for years to come.