Bay Area artificial intelligence giant Anthropic was assailed by President Donald Trump as a company of “Left-wing nut jobs” after it sought to prevent certain uses of its technology by the U.S. military. The firm’s rival, AI titan OpenAI, promptly cut a deal with the Pentagon.
The fracas embroiling the San Francisco AI companies over work for the U.S. military has highlighted Silicon Valley’s increasingly consequential contracts with the U.S. Department of Defense. The high-profile dispute has also raised questions about companies’ willingness to assist with government surveillance of Americans, and to allow their products to be used in military weapons systems that can kill on their own.
Last week, Anthropic pulled out of a pending agreement with the Defense Department over stated concerns about how its technologies could be used for autonomous warfare and mass domestic surveillance. OpenAI, the maker of ChatGPT, the next day announced a deal with the Pentagon that some experts say lacks safeguards to protect citizens’ privacy and limit robot-aided war.
The dealings sparked worries that Silicon Valley AI firms, competing with one another and with China, will leave issues of safety and privacy behind in the frenzied scramble for technological supremacy.
“There have been some dynamics of this that have been race to the bottom-y,” said Nathan Calvin, vice president of state affairs and chief attorney at Encode, an AI education and legislative advocacy group based in Washington, D.C.
“There’s a concern that there’s a sense of inevitability, that, ‘If we don’t do it, then somebody else will.’”
Silicon Valley tech companies have a long and occasionally fractious history of working with the U.S. military. Protests in 2018 by Google employees led the Mountain View firm to decline to renew its contract with the Pentagon for its “Project Maven” AI-boosted warfare initiative. But Google’s venture capital arm Gradient Ventures invested in Cogniac, a San Jose firm that worked for the U.S. Army on similar, “lethality”-focused technology.
The Pentagon in 2022 awarded $9 billion in contracts to Google, Microsoft, Amazon and Oracle (a software giant until recently headquartered in Redwood City) for a computing project called the “Joint Warfighting Capability Cloud.”
Anthropic, founded five years ago, “has leaned into and has been very interested in working with the military,” Calvin noted.
According to Anthropic, the schism with the Trump administration opened up last month when the company demanded guardrails around future work. Anthropic had signed a two-year, $200 million contract with the U.S. Department of Defense in July 2025 to provide “AI capabilities that advance U.S. national security.”
The proposed contract and the agreement between the Pentagon and OpenAI have not been released, and news reports about their terms have relied on statements by company and government officials.
On Feb. 26, Anthropic CEO Dario Amodei said in a statement that in certain cases, “AI can undermine, rather than defend, democratic values.” Using AI for “mass domestic surveillance is incompatible with democratic values,” Amodei wrote.
“To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI.”
Amodei said autonomous weapons that select targets and attack by themselves “may prove critical for our national defense,” but today’s AI systems “are simply not reliable enough” to use for such weapons, and lack the “critical judgment” of highly trained, professional troops.
On Feb. 27, OpenAI CEO Sam Altman announced his company had cut a deal with the Pentagon.
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote on X. “We put them into our agreement.”
But doubts about those prohibitions began to swirl after senior State Department official Jeremy Lewin posted on X that the deal “flows from the touchstone of ‘all lawful use.’”
“It looks like the government is doing what the government usually does, which is interjecting enough ambiguity into the language of agreements to leave surveillance and autonomous weapons systems still on the table,” said Matthew Guariglia, a senior policy analyst at civil liberties nonprofit the Electronic Frontier Foundation.
Anthropic, OpenAI and the Defense Department did not immediately respond to questions from this news organization.
Altman on Monday sought to allay concerns about OpenAI’s technology being used to track Americans, saying on X that the Pentagon understood that “deliberate tracking, surveillance, or monitoring of U.S. persons” isn’t allowed under the contract. But Altman also told OpenAI employees that the government would make the “operational decisions” about the technology’s use, CNBC reported.
Surveillance and privacy concerns revolve around AI’s capability to rapidly process massive amounts of data and identify patterns and connections.
Federal agencies, including U.S. Immigration and Customs Enforcement, are already buying information on Americans from data brokers, and such information — which can include detailed location data and private information bought on the “dark web” after data breaches — can be sucked into AI along with people’s social media posts and other information, Guariglia said.
“You could ask AI to sift through all of it to create a file on a specific person,” Guariglia said, “which could be up to and including grading you on how loyal you are to the current administration.”
Without AI, it would take huge numbers of people to conduct such surveillance, Calvin said, and, “there might be a lot of individual human beings who would say, ‘Hey, I’m not really comfortable with that.’”
Altman’s statements did not appear to address autonomous warfare, Calvin noted.
Anthropic’s pushback against the Pentagon drew praise from Silicon Valley Democratic Rep. Ro Khanna, who told this news organization Amodei had shown “enormous courage.” But the company’s move brought fury from the White House.

“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War,” President Donald Trump wrote Feb. 27 on social media, referring to his administration’s name for the Department of Defense.
That same day, Amodei sent a memo to Anthropic staff, saying the “real reasons” the Trump administration and Defense Department “don’t like us is that we haven’t donated to Trump (while OpenAI/Greg have donated a lot),” tech website The Information reported. OpenAI president Greg Brockman and his wife each donated $12.5 million to a Trump super PAC, federal election data show. Amodei added, “We haven’t given dictator-style praise to Trump (while Sam has),” an apparent reference to Altman, who in September lauded the President as “a very refreshing change.” Amodei referred to OpenAI employees as “sort of a gullible bunch.”
On Wednesday, the Defense Department officially designated Anthropic a “supply chain risk,” a label typically reserved for companies connected to adversaries, such as China’s Huawei. The label bars defense contractors and vendors from using Anthropic’s technology in their Pentagon work. Amodei said his company would challenge the designation in court. On Thursday, he apologized in a public statement for “the tone” of the Feb. 27 memo.
It was unclear how the designation might affect the Pentagon’s use of Anthropic’s Claude AI tool, which the Washington Post reported has been used for air attacks in the war against Iran, and was deployed in the U.S. attack on Venezuela and seizure of its president, Nicolás Maduro, according to the Wall Street Journal.
Meanwhile, Palo Alto AI company xAI, led by Elon Musk, has, according to news reports, signed a deal with the Pentagon that grants “all lawful use,” and the New York Times has reported that Google is eager to sell its flagship AI tool Gemini to the Defense Department.
Google did not immediately respond to a request for comment.
Silicon Valley’s AI will play a key role in America’s military capabilities, Calvin said.
“How do we incorporate this into defense without jeopardizing civil liberties and privacy and safety and liberty and things like that?” he said. “These are really hard and high-stakes questions.”
Khanna said the U.S. needs an “AI bill of rights” that would rein in intrusive and dangerous collaborations between Silicon Valley and the Pentagon, to safeguard Americans’ privacy and keep “humans in the loop” for lethal military force.
“There has to be an ethics framework,” Khanna said.