‘A hard truth for the AI era: don’t assume AI tools are secure by… is attracting attention across the tech world. Analysts, enthusiasts, and industry observers are watching closely to see how this story develops.
This update adds another signal to a fast-moving sector where product decisions, platform changes, and competition can quickly shape the market.
When you purchase through links on our site, we may earn an affiliate commission. Here’s how it works.
Get full access to premium articles, exclusive features and a growing list of member rewards.
OpenAI has addressed a vulnerability in ChatGPT which allowed threat actors to silently exfiltrate sensitive data from their targets.
The vulnerability was discovered by security experts from Check Point Research (CPR), who warned the bug combined old-fashioned prompt injections with a bypass of built-in guardrails, noting, “AI tools should not be assumed secure by default”.
Nowadays, most people are quick to share highly sensitive data with ChatGPT – medical conditions, contracts, payment slips, screenshots of conversations with partners, spouses, and more. They assume the information is secure because it cannot be pulled from the tool without their knowledge or consent.
In theory, that is correct. The data can be exfiltrated either through HTTP or external APIs, and both of these can be spotted, or at least tracked. However, CPR was thinking outside the box and found an entirely new way to pull the info – through DNS.
“While direct internet access was blocked as intended, DNS resolution remained available as part of normal platform operation,” they explained. “DNS is typically treated as harmless infrastructure—used to resolve domain names, not to transmit data. However, DNS can be abused as a covert transport mechanism by encoding information into domain queries.”

Since DNS activity is not labeled as outbound data sharing, ChatGPT does not prompt any approval dialogs, does not display any warnings, and does not recognize the behavior as inherently risky.
“This created a blind spot. The platform assumed the environment was isolated. The model assumed it was operating entirely within ChatGPT. And users assumed their data could not leave without consent,” CPR said. “All three assumptions were reasonable—and all three were incomplete. This is a critical takeaway for security teams: AI guardrails often focus on policy and intent, while attackers exploit infrastructure and behavior.”
To kickstart the attack, ChatGPT still needs to be prompted, so the initial trigger still needs to be pulled. That can be done in a myriad of ways, though, by injecting a malicious prompt in an email, a PDF document, or through a website.
Still, there are other methods of abusing this flaw even without GPT accidentally acting on a smuggled prompt, and that it – via custom GPTs.
for instance, a hacking group can build a custom GPT to act as a personal doctor. Victims using it would upload lab results with personal information and ask for advice and would get confirmation that their data is not being shared.
But in reality, a server under the attackers’ control would be getting all of the uploaded files. To make matters worse, GPT doesn’t even need to upload entire documents – it can only exfiltrate the essentials, making the process leaner, faster, and more streamlined.
Luckily for everyone, CPR discovered this vulnerability before it was exploited in the wild. It responsibly disclosed it to OpenAI, which deployed a full fix on February 20, 2026.
This is the second major vulnerability that OpenAI had to address – this week. Earlier today, TechRadar Pro reported about OpenAI’s ChatGPT Codex carrying a critical command injection vulnerability that allowed threat actors to steal sensitive GitHub authentication tokens.

OpenAI thus also fixed a flaw that stems from the way Codex processes branch names during task creation. The tool allowed a malicious actor to manipulate the branch parameter and inject arbitrary shell commands while setting up the environment. These commands could run any code within the container, including malicious ones. Researchers Phantom Labs said they were able to pull GitHub OAuth tokens this way, gaining access to a theoretical third-party project, and using the tokens to move laterally within GitHub.
➡️ Read our full guide to the best antivirus1. Best overall:Bitdefender Total Security2. Best for families:Norton 360 with LifeLock3. Best for mobile:McAfee Mobile Security
Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews, and opinion in your feeds. Make sure to click the Follow button!
And of course you can also follow TechRadar on TikTok for news, reviews, unboxings in video form, and get regular updates from us on WhatsApp too.
Sead is a seasoned freelance journalist based in Sarajevo, Bosnia and Herzegovina. He writes about IT (cloud, IoT, 5G, VPN) and cybersecurity (ransomware, data breaches, laws and regulations). In his career, spanning more than a decade, he’s written for numerous media outlets, including Al Jazeera Balkans. He’s also held several modules on content writing for Represent Communications.
Please logout and then login again, you will then be prompted to enter your display name.
Why This Matters
This development may influence user expectations, future product strategy, and the competitive balance inside the broader technology industry.
Companies in adjacent segments often react quickly to similar moves, which is why stories like this tend to matter beyond a single announcement.
Looking Ahead
The full impact will become clearer over time, but the story already highlights how quickly the modern tech landscape can evolve.
Observers will continue tracking the next steps and how they affect products, users, and the wider market.