
OpenAI says ChatGPT's private conversations were publicly exposed on the web, prompting the company to remove the feature from the platform.


ChatGPT's shared conversations leaked online, prompting OpenAI to discontinue the feature that made them discoverable. The discussions were picked up by web crawlers and surfaced on Google.


In a move to safeguard user privacy, OpenAI has announced the removal of a feature that allowed private conversations with its AI model, ChatGPT, to be discoverable on search engines like Google. The decision comes after concerns were raised about the potential for accidental data leaks and the indexing of sensitive information.

Journalist Luiza Jarovsky reported on this development, highlighting cases where users, often unknowingly, checked a box that said "Make this conversation discoverable," thereby opening their chats to indexing by search engines. This could lead to embarrassing or private information, such as names, resumes, emotional reflections, or confidential work information, becoming publicly accessible.

Dane Stuckey, OpenAI's Chief Information Security Officer, announced the removal of the feature on Thursday. He explained that the feature, initially designed to help users discover useful conversations, had posed significant privacy risks. Stuckey also mentioned that the company is working to have already indexed content removed from search engines.

The short-lived experiment, part of the ChatGPT app, was opt-in, meaning users had to actively consent to make their conversations searchable. However, the shared chats, though anonymized, sometimes contained enough specific detail to identify the people involved, causing concern among users.

Users responding to Jarovsky's posts expressed their concern that some people might carelessly check the box without reading the fine print and unintentionally share sensitive information. The removal of the feature will be completed by tomorrow morning for all users.

The privacy concerns centered on the fact that publicly shared ChatGPT conversation links were being indexed by search engines, resulting in thousands of chats becoming publicly accessible with potentially identifying or confidential details included. Users sometimes created shareable links to revisit chats or share with trusted parties but did not intend for these to be broadly searchable online.

Although OpenAI did not explicitly attach user identities to the conversations, the content itself sometimes included enough specific information for identification. After public outcry and media reports highlighting these risks, OpenAI decided to disable the toggle and focus on user security and privacy.

Neither Jarovsky nor OpenAI representatives responded to the website's requests for comment. Still, the decision to remove the feature and prioritize user privacy signals OpenAI's commitment to protecting its users' information.

  1. The discoverability feature in ChatGPT, OpenAI's AI chatbot, was found to pose significant privacy risks because it allowed search engines to index users' conversations, which could lead to accidental data leaks.
  2. Journalist Luiza Jarovsky's reporting on the feature's removal highlighted that some users had unknowingly checked a box, exposing sensitive information such as names, resumes, emotional reflections, or confidential work details to the public.
