$200 million xAI government contract was late addition to program

By David Ingram



The controversial inclusion of Elon Musk’s xAI in a set of Defense Department contracts worth up to $200 million each was a late-in-the-game decision under the Trump administration, according to a former Pentagon employee. The contracts had been in the works for months, with planning dating to the Biden administration, said Glenn Parham, who worked on the early stages of the initiative.

Before Parham took a government buyout in March, he said, planning for the contracts hadn’t included xAI. Parham was a generative artificial intelligence technical lead at the Pentagon’s Chief Digital and Artificial Intelligence Office and helped negotiate deals and integrate AI into Defense Department initiatives.

“There had not been a single discussion with anyone from X or xAI, up until the time I left,” he said. “It kind of came out of nowhere.”

The Pentagon wound up announcing contracts with four companies last week: Anthropic, Google, OpenAI and xAI. Each contract has a floor of $2 million and a ceiling of $200 million, with the amount of the payout depending on how each partnership goes. (The OpenAI contract was initially announced last month.) Including Musk’s xAI raised questions among artificial intelligence experts.

Days before the announcement, Grok, xAI’s chatbot, had gone on an antisemitic tirade that the company struggled to control. The company was also launching controversial animated AI “companions” that can be sexually suggestive and violent. Musk said he merged X and xAI in March.

In short, xAI didn’t have the kind of reputation or track record that typically leads to lucrative government contracts, even as Musk had a long history of working with the government. Critics wondered whether xAI’s models were reliable enough for government work.

Last Tuesday, Senate Minority Leader Chuck Schumer, D-N.Y., called the contract “wrong” and “dangerous” on the Senate floor, bringing up Grok’s antisemitic incident, during which it called itself “MechaHitler.” He insisted that “the Trump administration must explain how this happened, the parameters of the deal and why they think our national security isn’t worth meeting a higher standard.”

Parham said the program, billed as a partnership between the Defense Department and U.S. tech companies at the frontier of artificial intelligence development, originally focused on more established AI firms, including OpenAI and Anthropic. Besides being older than xAI, those companies have long-term deals with major cloud computing providers and established relationships with the military.

It’s not clear what prompted Pentagon officials to add xAI to the mix of contractors after March. The department’s Chief Digital and Artificial Intelligence Office, which announced the contracts, didn’t answer written questions about why it chose xAI, but the Pentagon said in a statement that the antisemitism episode wasn’t enough to disqualify it.

“Several frontier AI models have produced questionable outputs over the course of their ongoing development and the Department will manage risks associated with this emerging technology area throughout the prototype process,” the Defense Department told NBC News in a statement Friday.

“These risks did not warrant excluding use of these capabilities as part of DoD’s prototyping efforts,” it said.

The department said “frontier AI models,” by their nature, are at the cutting edge and so offer both opportunity and risk.

xAI didn’t respond to requests for comment Friday and Monday.

Including xAI adds a wrinkle to Musk’s complicated relationship with the federal government. Even before Musk’s time as a White House adviser this year to President Donald Trump, his business empire already had deep ties inside the government, including contracts for Musk’s rocket company, SpaceX. Musk and Trump are now locked in an on-again, off-again feud, and Musk has vowed to launch a third political party focused on reducing the federal debt. He repeated the vow as recently as July 6, though he doesn’t appear to have taken concrete public steps to set it up. Trump has threatened Musk’s government contracts during the dispute.

Some experts said they could see why the Defense Department might want to include xAI as a partner, despite its flaws.

“I think the department benefits when it’s engaged with as many organizations as possible,” said Morgan Plummer, the policy director for Americans for Responsible Innovation, an advocacy group that generally favors a middle ground on regulating AI.

Parham said that the idea for the $800 million program predates the Trump administration and that work on it began in October after President Joe Biden issued an executive order on AI and national security. He said that he worked on it for about five months before he left and that, in all, he spent nearly three years at the Defense Department working on AI.

The contracts with the four AI companies also significantly deepen the military’s relationship with the buzziest of emerging technologies. In exchange for the millions of dollars, the military will get use of each company’s large language model (LLM), which for many users takes the form of a chatbot. Experts said they expect the military to use the LLMs for a variety of purposes, from mundane tasks like summarizing emails to more complicated ones like translating languages or analyzing intelligence.

Other AI projects spearheaded by the Defense Department include Project Maven, a system that uses machine learning to integrate large amounts of data from many sources for display and use during conflict.

Within the AI industry, xAI’s capabilities are hotly debated. Grok scores highly on some benchmarks of artificial intelligence, such as a benchmark named “Humanity’s Last Exam,” which consists of questions submitted by subject matter experts. But its recent dalliance with neo-Nazism — and, before that, with race relations in Musk’s native South Africa — made the chatbot an object of derision in the industry and among the broader public.

“Grok is probably the least safe of these systems. It’s doing some really weird stuff,” said AI critic Gary Marcus, an emeritus professor of psychology at New York University.

Marcus pointed to Grok’s ideological diatribes and to xAI’s decision not to release the safety reports that have become an industry standard for leading AI models.

Parham said he believes xAI may need more time than the three other Pentagon contractors to make its technology fully available to the military. He said other companies, including Anthropic and OpenAI, have already gone through a lengthy government review and compliance process to have their software — including their application programming interfaces, which coders use to build on top of LLMs — authorized for use. He said that, up through March, when he left, xAI hadn’t done the same.

“It’s going to take them much longer, I think, to actually [get] their models rolled out in government environments,” he said. “It’s not impossible. It’s just they’re far, far, far, far behind from everybody else.”

Parham said the approval process for Anthropic and OpenAI took more than a year from the submission of paperwork to the granting of authorization.

The Pentagon’s use of commercial LLMs has drawn some criticism, in part because AI models are generally trained on enormous sets of data that may include personal information on the open web. Mixing that information with military applications is too risky, said Sarah Myers West, a co-executive director of the AI Now Institute, a research organization.

“It introduces security and privacy vulnerabilities into our critical infrastructure,” she said.

xAI is a relatively young startup. Musk started it in 2023, years after co-founding OpenAI and then falling out with its CEO, Sam Altman.

Some experts in AI and defense systems said they were shocked by Grok’s recent antisemitic meltdown and wondered whether something similar might recur as part of government use.

“I would have some safety-associated concerns based on the release of their most recent model,” said Josh Wallin, who researches the intersection of AI and the military at the Center for a New American Security, a Democratic-leaning think tank.

Wallin said Grok’s antisemitic tirades demonstrate a potential for unpredictable or risky behavior, such as presenting false or misleading information as fact, a phenomenon known as hallucination.

“Let’s say you’re automatically generating reports from different intelligence sources or you’re producing a daily report for a commander. There’d be concern about whether what you’re getting is a hallucination,” he said.



NBC News
