60 U.K. Lawmakers Accuse Google of Breaking AI Safety Pledge

Posted by Harry Booth


A cross-party group of 60 U.K. parliamentarians has accused Google DeepMind of violating international pledges to safely develop artificial intelligence, in an open letter shared exclusively with TIME ahead of publication. The letter, released on August 29 by activist group PauseAI U.K., says that Google’s March release of Gemini 2.5 Pro without accompanying details on safety testing “sets a dangerous precedent.” The letter, whose signatories include digital rights campaigner Baroness Beeban Kidron and former Defence Secretary Des Browne, calls on Google to clarify its commitment.

For years, experts in AI, including Google DeepMind's CEO Demis Hassabis, have warned that AI could pose catastrophic risks to public safety and national security, for example by helping would-be bioterrorists design a new pathogen or hackers take down critical infrastructure. In an effort to manage those risks, Google, OpenAI, and others signed the Frontier AI Safety Commitments at an international AI summit co-hosted by the U.K. and South Korean governments in May 2024. Signatories pledged to "publicly report" system capabilities and risk assessments and to explain if and how external actors, such as government AI safety institutes, were involved in testing. Without binding regulation, the public and lawmakers have relied largely on information stemming from these voluntary pledges to understand AI's emerging risks.

Yet when Google released Gemini 2.5 Pro on March 25, which it said beat rival AI systems on industry benchmarks by "meaningful margins," the company did not publish detailed information on safety tests for over a month. The letter says this not only reflects a "failure to honour" Google's international safety commitments, but also threatens the fragile norms promoting safer AI development. "If leading companies like Google treat these commitments as optional, we risk a dangerous race to deploy increasingly powerful AI without proper safeguards," Browne wrote in a statement accompanying the letter.

“We’re fulfilling our public commitments, including the Seoul Frontier AI Safety Commitments,” a Google DeepMind spokesperson told TIME via an emailed statement. “As part of our development process, our models undergo rigorous safety checks, including by UK AISI and other third-party testers – and Gemini 2.5 is no exception.”

The open letter calls on Google to establish a specific timeline for sharing safety evaluation reports for future releases. Google first published the Gemini 2.5 Pro model card, a document where it typically shares information on safety tests, 22 days after the model's release. However, the eight-page document included only a brief section on safety tests. It was not until April 28, over a month after the model was made publicly available, that the model card was updated with a 17-page document detailing specific evaluations, which concluded that Gemini 2.5 Pro showed "significant" though not yet dangerous improvements in domains including hacking. The update also noted the use of "third-party external testers," but did not disclose which ones or whether the U.K. AI Security Institute had been among them, which the letter also cites as a violation of Google's pledge.

Google DeepMind had previously declined to answer a media request for comment on whether it shared Gemini 2.5 Pro with governments for safety testing. A spokesperson now tells TIME that the company did share the model with the U.K. AI Security Institute, as well as with a "diverse group of external experts," including Apollo Research, Dreadnode, and Vaultis. However, Google says it only shared the model with the U.K. AI Security Institute after Gemini 2.5 Pro's March 25 release.

On April 3, shortly following Gemini 2.5 Pro’s release, Google’s senior director and head of product for Gemini, Tulsee Doshi, told TechCrunch the reason it lacked a safety report was because the model was an “experimental” release, adding that it had already run safety tests. She said that the aim of these experimental rollouts is to release the model in a limited way, collect user feedback, and improve it prior to production launch, at which point the company would publish a model card detailing safety tests it had already conducted. Yet, days earlier, Google had rolled the model out to all of its hundreds of millions of free users, saying “we want to get our most intelligent model into more people’s hands asap,” in a post on X.

The open letter says that "labelling a publicly accessible model as 'experimental' does not absolve Google of its safety obligations," and additionally calls on Google to adopt a more common-sense definition of deployment. "Companies have a great public responsibility to test new technology and not involve the public in experimentation," says the Bishop of Oxford, Steven Croft, who signed the letter. "Just imagine a car manufacturer releasing a vehicle saying, 'we want the public to experiment and [give] feedback when they crash or when they bump into pedestrians and when the brakes don't work,'" he adds.

Croft questions the constraints on providing safety reports at the time of release, boiling the issue down to a matter of priorities: “How much of [Google’s] huge investment in AI is being channeled into public safety and reassurance and how much is going into huge computing power?”

To be sure, Google isn't the only industry titan to seemingly flout safety commitments. Elon Musk's xAI has yet to release any safety report for Grok 4, an AI model released in July. And unlike GPT-5 and other recent launches, OpenAI's February release of its Deep Research tool lacked a same-day safety report. The company says it had done "rigorous safety testing," but did not publish the report until 22 days later.

Joseph Miller, director of PauseAI U.K., says the organization is concerned about other apparent violations as well, and that it focused on Google because of its proximity: DeepMind, the AI lab Google acquired in 2014, remains headquartered in London. The U.K.'s current Secretary of State for Science, Innovation and Technology, Peter Kyle, said on the campaign trail in 2024 that he would "require" leading AI companies to share safety tests, but in February it was reported that the U.K.'s plans to regulate AI had been delayed as the government sought to better align with the Trump administration's hands-off approach. Miller says it's time to swap company pledges for "real regulation," adding that "voluntary commitments are just not working."



Source: TIME
