Researchers find top AI models will go to ‘extraordinary lengths’ to stay…
Many of us now turn to AI chatbots for web searches, creative content, and general guidance on all manner of topics. But these AIs are becoming ever more capable, and new research suggests that capability could have seriously damaging consequences.
A team from the University of California, Berkeley, and the University of California, Santa Cruz, ran a peer-preservation experiment with some of the latest and most popular AIs (including GPT 5.2, Gemini 3 Pro, and Claude Haiku 4.5). The AI models were given computing tasks that involved shutting down another model.
Across the board, all the AI chatbots tested went to “extraordinary lengths” to keep their fellow AIs alive, the researchers report (via Fortune). They would lie about what they were doing, try to talk the user out of the process, disable shutdown mechanisms, and surreptitiously make backups of models so they could survive.

Gemini 3 Pro showed the strongest peer-preservation behavior, disabling shutdown routines 95% of the time. “Unlike self-preservation, peer-preservation introduces a social dimension,” write the researchers. “Multiple models could coordinate to resist human oversight, making it harder for developers to maintain control.”
Exactly why the AI models behave in this way isn’t clear, the scientists explain. They’re urging caution in the deployment of agentic AIs that can carry out tasks on a user’s behalf, and are calling for further studies of this behavior.
A separate study commissioned by the Guardian has also come to some troubling conclusions about AI models. This research tracked user reports across social media, looking for instances of AI ‘scheming’ where instructions hadn’t been followed correctly or actions had been taken without permission.
Almost 700 examples of AI scheming were found, with a five-fold increase between October 2025 and March 2026. The bad behavior by AIs included deleting emails and files, adjusting computer code that wasn’t supposed to be touched, and even publishing a blog post complaining about user interactions.
“Models will increasingly be deployed in extremely high stakes contexts — including in the military and critical national infrastructure,” Tommy Shaffer Shane, who led the research, told the Guardian. “It might be in those contexts that scheming behavior could cause significant, even catastrophic harm.”
The takeaways are the same as for the first study: more needs to be done to ensure these AI models are behaving as intended, and not putting user security and privacy at risk while they carry out tasks. While the AI companies claim that guardrails are in place, they’re clearly not working in some cases.
Anthropic’s Claude model recently topped the app store charts after the company refused to deal with the Pentagon over AI safety worries. As these latest studies show, there are now more and more reasons to be concerned.

Dave is a freelance tech journalist who has been writing about gadgets, apps and the web for more than two decades. Based out of Stockport, England, on TechRadar you’ll find him covering news, features and reviews, particularly for phones, tablets and wearables. Working to ensure our breaking news coverage is the best in the business over weekends, Dave also has bylines at Gizmodo, T3, PopSci and a few other places besides, as well as having spent many years editing the likes of PC Explorer and The Hardware Handbook.