This post is outside my usual focus on practical AI tools for nonprofits and educators. But I think the topic is important enough to warrant a departure. I recently cancelled my OpenAI subscription after nearly three years as a paying customer. This post explains why, and why it matters to the nonprofit and education communities I serve.
What OpenAI Was Founded to Be
OpenAI was incorporated in 2015 as a nonprofit. Its stated mission was to develop artificial general intelligence that "safely benefits humanity." Notably, it was structured with no investors, no equity, and no expectation of financial return. The founders, including Sam Altman and Elon Musk, wanted to build something outside the profit motive entirely. The logic was straightforward: technology this powerful should serve people, not shareholders.
That structure lasted four years. In 2019, OpenAI pivoted to what it called a "capped-profit" model, allowing investors to earn returns up to 100 times their initial investment, with any excess flowing back to the nonprofit. The company said no existing legal structure "strikes the right balance," and that the change was necessary to raise the capital needed to compete.
In October 2025, the restructuring was completed. OpenAI is now a Public Benefit Corporation. The original nonprofit holds a 26% equity stake (valued at roughly $130 billion). Microsoft holds 27%. The remaining shares belong to current and former employees and outside investors. Sam Altman, who received no equity during the nonprofit era, now holds a personal stake in the company.
Along the way, OpenAI changed its mission statement six times. The most recent version reads: "to ensure that artificial general intelligence benefits all of humanity." The previous version included two phrases that are now absent: the word "safely," and the phrase "unconstrained by a need to generate financial return."
Those aren't small edits.
The Military Pivot
In January 2024, OpenAI quietly updated its usage policy. The previous policy explicitly banned uses related to "military and warfare" and "weapons development." The new policy replaced that language with a vaguer prohibition on using its technology to "harm yourself or others." There was no press release or public announcement; journalists at The Intercept noticed the change and reported it.
Within days, OpenAI confirmed the change was intentional and announced it was already working with DARPA on cybersecurity projects. By December 2024, the company had formalized a "strategic partnership" with Anduril Industries, a defense technology company that makes AI-powered drones, radar systems, and missiles. The partnership focused explicitly on counter-drone systems, using OpenAI models to detect, assess, and respond to aerial threats in real time. MIT Technology Review called it the completion of OpenAI's "military pivot."
In February 2026, OpenAI struck a deal with the Department of Defense to deploy its AI models inside the Pentagon's classified computing systems. That deal was announced the same day the Trump administration banned Anthropic (the maker of Claude) from all federal use, a piece of timing that drew immediate criticism. Sam Altman later acknowledged in a memo that the announcement "looked opportunistic and sloppy."
The Guardrail Problem
OpenAI has publicly stated that it will not allow its technology to be used in fully autonomous weapons systems or for domestic mass surveillance of American citizens. Those are the right lines to draw. The problem is that its contract language doesn't actually enforce them.
On autonomous weapons: OpenAI's stated red line prohibits use of its technology in autonomous weapons only "where law, regulation, or Department policy requires human control." But current Pentagon policy does not actually require that. DoD Directive 3000.09, which governs autonomy in weapon systems, calls for "appropriate levels of human judgment over the use of force," which is not the same as requiring human approval before an autonomous weapon uses lethal force. The Electronic Frontier Foundation published a detailed analysis in March 2026 titled "Weasel Words: OpenAI's Pentagon Deal Won't Stop AI-Powered Surveillance," arguing that the contract language on both of OpenAI's red lines is far too vague to provide real protection.
On domestic surveillance: the Pentagon initially demanded that AI vendors allow their tools to be used for "any lawful purpose." Under Executive Order 12333, that would include the well-documented intelligence community practice of purchasing and analyzing large commercial data sets on American citizens. OpenAI agreed to that language. Only after significant public backlash was the contract amended to prohibit "intentional" domestic surveillance of U.S. persons. Critics, including the EFF, note that the word "intentional" leaves considerable room for indirect or incidental surveillance, and that the amendment came only in response to public pressure, not as a condition of signing.
These are not hypothetical concerns. In March 2026, the U.S. military struck over 1,000 targets in Iran in an operation officials described as the most AI-assisted in history. The Pentagon confirmed use of "advanced AI tools." The full accounting of which companies' technology contributed to which decisions has not been publicly released.
What Anthropic Did Instead
Anthropic, the company that makes Claude, took a different path. When the Pentagon demanded that Anthropic drop its safeguards against autonomous weapons and mass surveillance as a condition of contract renewal, Anthropic refused. In February 2026, the Trump administration responded by banning all federal agencies from using Anthropic products and designating the company a "supply chain risk to national security," a designation previously reserved for foreign adversaries like Huawei.
Anthropic filed federal lawsuits against the administration in March 2026, alleging illegal retaliation. Employees from both Google and OpenAI (the company Anthropic was competing against for the Pentagon contract) signed a letter supporting Anthropic's position. Anthropic's own public statement was direct: "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons."
Irony arrived quickly. Despite the ban, military users reported that Claude had become so embedded in Pentagon workflows that switching it off was proving harder than Defense Secretary Hegseth expected. The tool banned for having too many guardrails had apparently become indispensable precisely because it was reliable and safe to use.
Why This Matters for Nonprofits
Cochise AI exists to help nonprofits and educators use AI responsibly and effectively. Most of the organizations I work with are deeply values-driven. They are food banks, shelters, schools, and community groups that operate on thin margins and high trust. The tools they adopt reflect, at least implicitly, something about what they stand for.
I am not suggesting that every nonprofit needs to take a political stance on military AI. That is not my call to make for anyone else. What I am suggesting is that the values and governance structure of an AI company are worth paying attention to when choosing whose tools to rely on. OpenAI was founded as a nonprofit specifically because its founders believed that a profit motive was incompatible with the responsible development of powerful AI. They were right about that. The subsequent decade has demonstrated it.
OpenAI still makes excellent products. ChatGPT remains impressive. But the company that built it has changed in ways that matter, and those changes happened incrementally and quietly, with minimal transparency and no meaningful public accountability. That pattern concerns me more than any single contract decision.
What I Use Now
I now use Anthropic's Claude as my primary AI tool, and it is what I recommend to the organizations I consult with. Claude performs at least as well as ChatGPT on the tasks most relevant to nonprofit and education work: writing, summarizing, drafting communications, analyzing documents, and generating structured content from instructions. Anthropic's policies on autonomous weapons, surveillance, and safety are publicly stated and, as recent events have demonstrated, defended under pressure.
That matters to me. It may matter to you too.
If you have questions about transitioning from ChatGPT to Claude, or about evaluating AI tools for your organization more broadly, I am happy to help. Use the contact form to start a conversation.