
Key Highlights:
- AI company Anthropic will begin using user chats with its Claude AI to train future models by default after September 28, 2025.
- This policy aligns Anthropic’s practices with those of major competitors like Google and Meta.
- Users of Claude’s free and premium tiers must now actively navigate to their settings to prevent their conversation data from being used.
In a major move that follows a broader trend in the AI industry, Anthropic has officially changed its privacy policy, announcing that it will now use data from user conversations with its Claude AI models to train and improve its systems.
The new policy, which takes effect immediately for new users, requires existing users to actively opt out by September 28, 2025, if they do not want their data included in the training process. This shift from an opt-in to an opt-out model raises questions, given that Anthropic’s earlier policies took a more privacy-centric approach.
The updated policy applies to all users on the free, Pro, and Max tiers of the Claude AI service. Previously, Anthropic had a firm policy against using user data for model training unless a user had explicitly consented.
The company’s new terms state that user inputs and outputs may be used for model training and service improvement unless the user actively selects the opt-out setting in their account.
The company has justified the change by stating that real-world interaction data is crucial for delivering “even more capable, useful AI models” and strengthening safeguards against harmful usage.
The Shift in Anthropic’s Policy
Anthropic’s decision to transition to an opt-out policy has been questioned by the AI community, which has long viewed the company as a leader in ethical AI development. The new policy not only defaults to data collection but also extends the data retention period for those who opt in.
Under the updated terms, Anthropic will retain user data for up to five years for training purposes, a substantial increase from the previous 30-day retention period that applied to users who had not explicitly consented.
However, the policy for commercial users remains unchanged. Data from commercial products like Claude for Work and the company’s API services will continue to be governed by separate commercial agreements and will not be used for model training without explicit consent.
How to Opt Out
For existing users, the transition period is already underway. A pop-up notification is being rolled out within the Claude application, giving users a clear choice to either accept the new terms or opt out of data sharing. Users must make a selection by September 28, 2025, to continue using the service.
The company has clarified that users are “always in control” of their data sharing settings and can change their preference at any time in the Privacy section of their Claude account settings. Additionally, deleting a conversation with Claude will ensure that it is not used for future model training, regardless of the user’s opt-in status.
Why Is Opt-Out The New Industry Standard?
Anthropic’s policy change, though late, follows the prevailing trend among the largest AI developers. The move places Anthropic on par with tech giants like Google and Meta, both of which have already adopted a similar opt-out approach for their consumer-facing AI products.
This shift, however, places the burden of privacy protection on the user. Critics argue that defaulting to data collection exploits user inertia, as many people may not read the updated terms or take the time to change their settings. This could lead to a massive increase in the volume of user data used for AI training, raising new questions about data security and the potential for sensitive information to be inadvertently included in future AI models.
The policy shift also arrives amid heightened scrutiny of how users interact with AI tools. In recent headlines, OpenAI and its CEO Sam Altman were sued by the parents of a teenager who died by suicide, with the lawsuit claiming ChatGPT was responsible for his death.