Large language models (LLMs), the engines behind AI chatbots, are probably some of the largest repositories of information gathered from across the internet. That information can range from the innocuous to the highly sensitive. OpenAI, the company behind ChatGPT, is now dealing with the additional drama of leaked conversations spilling the beans on some sensitive information.
It’s important to know that this is still a developing story, so it’s very possible that new details could emerge and completely change its trajectory. For the time being, there are two parts to the story.
OpenAI is still dealing with leaked conversations through ChatGPT
The first part of the story involves an Ars Technica reader who shared some pretty troubling screenshots. According to the publication, the reader in question uses ChatGPT extensively. After logging on to the service one day, the reader noticed several conversations in their chat history that seemed to have materialized out of nowhere. They had never had these conversations with ChatGPT.
That’s weird enough on its own, but things get worse when we consider the contents of those conversations. One of them involved a pharmacy worker using ChatGPT to troubleshoot a drug portal. Even in the redacted conversation, it’s evident that the worker was very frustrated.
However, what’s more shocking is that the worker seems to have unloaded a ton of sensitive information into the conversation, including several usernames and passwords along with the name of the portal they were using. In the wrong hands, this could prove devastating for the pharmacy in question and for anyone who relies on it for medication.
This is only one of the conversations leaked through ChatGPT, and we’re not entirely sure how widespread the issue is. In any case, it’s not something anyone wants to see. Even though they’re not supposed to, people sometimes put sensitive information into their chatbot conversations.
Now for the second part of the story. Right off the bat, this seems like a devastating turn of events. However, it appears that Ars Technica published a bit too early. Android Authority reached out to OpenAI about the situation, and the company responded.
“ArsTechnica published before our fraud and security teams were able to finish their investigation, and their reporting is unfortunately inaccurate. Based on our findings, the users’ account login credentials were compromised and a bad actor then used the account. The chat history and files being displayed are conversations from misuse of this account, and was not a case of ChatGPT showing another users’ history.”
So, it appears that ChatGPT is not haphazardly handing out conversations to other users like it’s dealing poker cards. The reader’s account seems to have been compromised by a bad actor, and the mystery conversations came from that hacker’s misuse of the account.
While we have no evidence to dispute OpenAI, the explanation sounds a bit odd; more precisely, it seems incomplete. Why a bad actor would use a stolen account to hold those conversations is still unclear. In any case, it seems that ChatGPT is not arbitrarily leaking conversations to other ChatGPT users. There’s obviously more to this bad actor story, but users can hopefully rest assured.
It’s important not to share personal information with any AI chatbot; you never know where that data will end up. Unfortunately for the pharmacy worker, those login credentials are now sitting on OpenAI’s servers.
We’ll keep you updated on this story as more details come out.