A ChatGPT flaw exposed users’ conversation records.

According to the chief executive of the company behind the artificial intelligence chatbot, a ChatGPT bug allowed some users to see the titles of other users’ conversations.

Users on the social media sites Reddit and Twitter shared images of chat histories that they claimed were not theirs.

OpenAI CEO Sam Altman stated that the company feels “awful,” but that the “significant” error has been corrected.

However, many users are still concerned about their privacy on the platform.

Since its launch in November of last year, millions of people have used ChatGPT to draft messages, write songs, and even code.

Each conversation with the chatbot is saved in the user’s chat history bar and can be accessed at any time.

However, users began to notice conversations in their history that they claimed they had not had with the chatbot as early as Monday.

One Reddit user shared a screenshot of their chat history, which included titles like “Chinese Socialism Development” and Mandarin conversations.

The company told Bloomberg on Tuesday that it temporarily disabled the chatbot late on Monday to fix the error.

The company also said that users had not been able to access the actual content of the chats.

The CEO of OpenAI announced on Twitter that a “technical postmortem” would follow soon. Still, the error has unsettled users, who worry that the tool could expose their personal information.

The bug also appeared to indicate that OpenAI has access to users’ chats.

According to the company’s privacy policies, user data such as prompts and responses may be used to continue training the model.

However, that information is only used after any personally identifiable information has been removed.

The gaffe comes just one day after Google unveiled Bard, its chatbot, to a group of beta testers and journalists.


