ChatGPT Memory Vulnerability: What Hamilton Businesses Need to Know
You’ve probably heard a lot about the benefits of AI tools like ChatGPT. From helping with customer service to drafting documents, AI is making its way into the day-to-day operations of many businesses here in Hamilton. But as with any tool, there are risks that come along with the rewards. Recently, a researcher named Johann Rehberger uncovered a pretty serious vulnerability in ChatGPT that we all need to pay attention to.
Rehberger found a way to exploit a feature in ChatGPT that allows it to remember conversations over time. This "memory" feature is designed to make interactions smoother, like remembering your preferences for future chats. But he discovered that this same feature could be manipulated by attackers to plant false information or even steal what you type into ChatGPT. It’s called a prompt injection attack, and here's how it works:
The Exploit: Let’s say an attacker uploads a malicious file or includes a harmful link in a document that you or your team unknowingly interact with. When ChatGPT is exposed to that content, the malicious code instructs it to store false information or, worse, send all your inputs—everything you’re typing—straight to the attacker’s server.
Targeting Memory: Once this prompt injection occurs, it doesn't just mess with one conversation. ChatGPT remembers it across all future sessions. Every time you interact with it after the attack, that false memory continues to influence responses, and data could be exfiltrated without you even knowing it.
App-Specific Vulnerability: It’s important to note that this particular attack targeted the macOS ChatGPT app, not the web version. OpenAI responded with a partial fix to block this specific kind of data-stealing memory attack in the app. However, vulnerabilities related to how memories are stored and manipulated in AI systems could still exist, even in the web version.
Take a look at the attack in action.
If your team is using AI tools like ChatGPT (and they are) without oversight or security guidelines (again... they are), you’re potentially opening the door to new kinds of cyberattacks. Many businesses that we speak with are not even properly taking steps to secure their email, file storage, or their networksnetwork. AI tools should be no different , but they often fly under the radar because they’re new and rapidly evolving.
ChatGPT’s memory could be used to remember meeting details, preferences, and sensitive business data. Without the right policies in place, anyone in your company could unknowingly expose that information to an attacker via a prompt injection. Worse yet, once false information is planted, it could skew future interactions and cause serious headaches down the line.
What Can You Do?
Implement an AI Usage Policy: Make sure your team knows how and when they should be using tools like ChatGPT. Don’t let employees casually input sensitive information into AI tools without clear guidelines.
Monitor Memory Settings: ChatGPT now lets users review and manage stored memories. Encourage your employees to regularly check the memory settings and delete any stored information that doesn’t belong. Make this part of your IT department’s regular security audits.
Be Cautious with Links and Uploads: Educate your team about the dangers of interacting with untrusted content, like clicking on suspicious links or uploading documents from unknown sources, especially in AI platforms. These are common ways prompt injections can occur.
Use AI Supervision: Just like any other tool that deals with sensitive information, AI should be supervised. Set up monitoring to make sure employees are following proper procedures and that AI tools aren’t storing or mishandling critical data.
Employ a proper MSP to ensure your business stays ahead: I spoke with a company they other day that I could see desparately needed our services. They had not EDR, no Email Protection, and they were still using POP mail. (umm hello... 1999 called... they want their email back. They left two non-consecutive messages... they also sent a fax... because it was 1999) The owner of the business told me: "We don't really have a lot of IT problems so I think we'll just not bother with a meeting" Yowza...
A New Attack Vector to Watch
This memory vulnerability in ChatGPT is a good reminder that AI tools, while helpful, can introduce new risks if not handled properly. Businesses in Hamilton should be aware that these systems are not infallible. The attack on the ChatGPT app is a clear example of how things can go wrong. And while the web version of ChatGPT wasn’t directly impacted, we can’t assume it’s completely immune to similar risks in the future.
At the end of the day, this is just one more reason to stay on top of your company’s AI usage and security practices. The convenience of AI is great—but not at the cost of your business’s security. I would also argue that this is another sign that public models shouldn't be used AT ALL for business. But that's the topic for another day.
By staying informed and taking the right precautions, you can continue using AI to streamline your operations while minimizing the risks that come with it.
Comments