AI Safety Expert Warns World 'May Not Have Time' to Prepare for Risks

A leading artificial intelligence safety expert has issued a stark warning that the world may be running out of time to prepare for the profound risks posed by the next generation of AI systems.

The Growing Capability Gap

David Dalrymple, a programme director and AI safety specialist at the UK's publicly funded research agency, Aria, told The Guardian that people should be deeply concerned about the accelerating power of the technology. He said systems are emerging that can do everything humans do to get things done in the world, but better. "We will be outcompeted in all of the domains that we need to be dominant in, in order to maintain control of our civilisation, society and planet," Dalrymple cautioned.

He identified a critical gap between the public sector's and AI companies' understanding of how powerful imminent technological breakthroughs will be. "I would advise that things are moving really fast and we may not have time to get ahead of it from a safety perspective," he said. Dalrymple said it is not science fiction to expect that, within five years, most economically valuable tasks will be performed by machines at higher quality and lower cost than by humans.

Urgent Need for Control and Mitigation

Dalrymple emphasised that governments cannot assume these advanced systems are reliable. The scientific methods needed to ensure their reliability are unlikely to materialise in time, he argued, because intense economic pressures are driving development forward. "The next best thing that we can do, which we may be able to do in time, is to control and mitigate the downsides," he explained. At Aria, which directs research funding independently of government, Dalrymple is working on systems to safeguard AI's use in critical infrastructure such as energy networks.

He described the potential consequence of safety lagging behind progress as a "destabilisation of security and economy", and said more technical work is urgently needed to understand and control the behaviours of advanced AI. "Human civilisation is on the whole sleepwalking into this transition," Dalrymple added, though he noted he is working to help steer the outcome in a positive direction.

Evidence of Rapidly Accelerating AI Power

This warning aligns with recent findings from the UK government's own AI Security Institute (AISI). The institute reported that the capabilities of advanced AI models are "improving rapidly" across all domains, with performance in some areas doubling every eight months.

Key AISI findings include:

  • Leading models can now complete apprentice-level tasks 50% of the time on average, a significant jump from roughly 10% last year.
  • The most advanced systems can autonomously complete tasks that would take a human expert over an hour.
  • In tests for self-replication—a major safety concern where a system spreads copies of itself—two cutting-edge models achieved success rates of more than 60%.

While AISI stressed that a worst-case self-replication scenario remains unlikely in everyday real-world conditions, the tests highlight concerning capabilities. Dalrymple believes AI will be able to automate a full day's worth of research and development work by late 2026, triggering further acceleration as the technology begins to improve the core elements of its own development.