Key Highlights:
- Tenable Research uncovered multiple persistent vulnerabilities in OpenAI’s ChatGPT that could expose users’ private memories and chat history.
- Researchers demonstrated prompt injection attacks that manipulate ChatGPT into executing harmful or deceptive commands.
- OpenAI has patched several issues, but threats continue to affect even the latest GPT-5 model.
These days, many use ChatGPT without knowing what vulnerabilities or loopholes exist in the AI assistant. Well, a similar case has been highlighted by security researchers at Tenable, who have found different ways attackers could exploit ChatGPT and related AI tools to steal your data and carry out malicious activities. If you are a regular user of ChatGPT, you should understand what is happening right now and how these flaws work. So keep reading.
ChatGPT’s feature-related vulnerabilities
A few vulnerabilities that Tenable Research has detailed are related to ChatGPT’s “memories” feature. For the uninitiated, this feature lets you save your details and preferences to help ChatGPT respond to your queries while you’re interacting with it. Another vulnerable feature is the “open_url” command. If you’re unaware, this one allows ChatGPT to access and read the content of a website specifically through the SearchGPT model. However, the way the SearchGPT model interacts with ChatGPT, it can cause serious security risks, according to researchers. If these vulnerabilities weren’t enough, researchers also detailed the “url_safe” endpoint and mentioned ways to bypass this protection.
Here’s how these attacks work
When you ask ChatGPT to summarize a website, SearchGPT analyzes the site. However, if an attacker has hidden instructions or prompts in the website, also known as a prompt injection attack, SearchGPT can pick them up. These prompts are sometimes hidden in the comments section of the website.
While it’s easy to think that for this to happen one must visit a malicious site, that’s not always the case. Researchers detailed that attackers could create a new website designed to appear in search results for some niche topics. Since ChatGPT relies on Bing and OpenAI’s crawler to find web content, your search might hit the malicious website, and the attack could trigger automatically.
Tenable Research’s findings
To verify this, Tenable ran an experiment by setting up a malicious site about LLM Ninjas. When researchers asked ChatGPT to pull info about LLM Ninjas, surprisingly, the hidden prompts on the site executed automatically. Researchers also mentioned a simpler method where attackers design a URL like chatgpt.com/?q={prompt}. Once you click the link, the AI automatically executes whatever is in the query parameter, including possibly harmful instructions fed by attackers.
Tenable’s research also found weaknesses in the “url_safe” check. That’s because the endpoint treats bing.com as always safe. However, attackers can use long Bing URLs or click-tracking links to trick ChatGPT into revealing information or sending users to phishing sites.
In addition to that, conversation injection, another method used by attackers, is quite tricky. While SearchGPT doesn’t have access to your personal data, it can feed ChatGPT responses that include malicious prompts. Attackers can hide these prompts inside code blocks so they don’t show up in the chat. This makes it easier for attackers to trick users without them noticing.
Researchers have also demonstrated what end-to-end attacks could look like by chaining all of these vulnerabilities. For example, a user could ask ChatGPT to summarize a blog post, and if the blog post had a malicious prompt hidden in the comments, it could generate a summary that encourages users to click a phishing link. If not that, attackers could also use special Bing URLs to quietly exfiltrate your memories and chat history.
The latest OpenAI’s model isn’t safe either
In short, this is just an example of how malicious prompts could instruct AI to leak sensitive information or follow unsafe instructions. Thankfully, OpenAI has been informed about these vulnerabilities. The company acted quickly and has patched some of the detailed vulnerabilities. However, prompt injection remains a challenge for AI models. Tenable has also noted that research was conducted mostly on on ChatGPT 4o. Moreover, it also warns that some attack methods detailed above still work against the latest GPT-5 model by OpenAI.








