OpenAI is giving users more control over their data when using its popular AI chatbot, ChatGPT. The startup has added an option to turn off chat history, which also prevents those conversations from being used to train OpenAI’s models. The move is meant to offer greater privacy to people who share sensitive information with the chatbot, making them more comfortable using the technology for a wider range of tasks.

With millions of people experimenting with AI chatbots like ChatGPT and Google’s Bard, questions are being raised about how these systems process user data. OpenAI has said that its software filters out personally identifiable information, but concerns remain about the use of conversational data to train AI models. By giving users more control over their data, OpenAI hopes to allay some of these concerns.
In a demo of the new feature, OpenAI used the example of planning a surprise birthday party to illustrate how users can now turn off their chat histories in ChatGPT. With history disabled, conversations no longer appear in the chatbot’s sidebar, and OpenAI’s models won’t use that data to improve over time.
OpenAI will continue to train its models on user conversations by default, but chats started with history turned off are excluded. Even those conversations will be stored for 30 days before being deleted, a window the company says it needs to monitor for abuse. OpenAI has also announced that users can now email themselves a downloadable copy of the data they’ve produced while using ChatGPT, including their conversations with the chatbot.
Looking ahead, OpenAI plans to roll out a business subscription in the coming months that will not train on users’ data by default. Giving users more control over their data is a step toward greater transparency and accountability in how conversational data is used to train AI models, and it is likely to be welcomed by users who value their privacy and want their data handled responsibly.