AI News

China Enforces Strict AI Content Labelling To Combat Misinformation, Fraud

China implements strict AI content labelling in social media apps

Key Highlights – 

  • China’s new law requires all AI-generated content to be labelled in order to combat misinformation and online fraud.
  • Major social media platforms like WeChat and Douyin are now enforcing the regulation.
  • The move is part of the government’s broader “Qinglang” campaign to clean up cyberspace.

China’s largest social media platforms are rushing to comply with a landmark regulation requiring all AI-generated content to be clearly labelled. The law was issued by the country’s top internet watchdog, the Cyberspace Administration of China (CAC), in March.

According to a report by the South China Morning Post, the new policy mandates both explicit, visible markings for social media users and implicit identifiers, such as digital watermarks embedded in metadata. With this move, Beijing plans to address growing concerns over AI misinformation, fraud, and a possible threat to national security.
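The split between explicit markings and implicit metadata identifiers can be illustrated with a short sketch. This is a minimal, hypothetical example, the field names (such as an `AIGC` metadata key and a `label_source` field) are illustrative assumptions, not the exact fields mandated by the CAC regulation:

```python
# Hypothetical sketch of an "implicit identifier" check: a platform
# inspects a piece of content's metadata for an embedded AI-generation
# label. The key names ("AIGC", "label_source") are illustrative
# assumptions, not the exact fields defined by the Chinese standard.

def has_ai_label(metadata: dict) -> bool:
    """Return True if the metadata declares the content as AI-generated."""
    label = metadata.get("AIGC", "")
    return str(label).lower() in {"true", "1", "yes"}

def enforce_label(metadata: dict) -> dict:
    """If the content is undeclared, add a platform-side identifier,
    mirroring platforms that reserve the right to label undeclared
    AI-generated content themselves."""
    if not has_ai_label(metadata):
        metadata = {**metadata, "AIGC": "true", "label_source": "platform"}
    return metadata

# Example: an undeclared piece of content gets a platform-added label.
meta = {"generator": "some-model"}
labelled = enforce_label(meta)
print(labelled["AIGC"])          # "true"
print(labelled["label_source"])  # "platform"
```

In practice, such identifiers would live inside a file's real metadata containers (for instance, EXIF or XMP fields for images), but the two-step pattern above, check for a declared label, then apply one if missing, reflects the enforcement behaviour the article describes.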

The new rules, drafted in conjunction with the Ministry of Industry and Information Technology, the Ministry of Public Security, and the National Radio and Television Administration, represent a significant escalation in Beijing’s oversight of digital content.

The move aligns with a broader push to tighten AI governance, which has been a key focus of the CAC’s 2025 “Qinglang,” or “clear and bright,” campaign, an annual initiative aimed at cleaning up China’s cyberspace. Chinese regulators have repeatedly cited deepfake technology, which uses AI to manipulate images, audio, and video, as a direct threat to both individual privacy and national security.

What China’s New Policy For Social Media Means

To comply with the new rules now in effect, China’s social media platforms have started rolling out new policies for their millions of users. WeChat, with more than 1.4 billion combined monthly active users, has asked content creators to voluntarily declare all AI-generated content upon publication. The platform has also stated that it “strictly prohibits the deletion, tampering, forgery, or concealment of AI labels added by the platform,” and reminds users to “exercise their own judgment” on content that has not been flagged.

Similarly, Douyin, the Chinese sibling of TikTok with approximately 766.5 million monthly active users, has encouraged creators to add clear, visible labels to every AI-generated piece of content they post. The platform has also implemented its own technology to detect the source of content through metadata, ensuring compliance even if a user attempts to conceal the origin.

Other popular platforms are following suit. Chinese microblogging site Weibo added an “unlabelled AI content” option to its reporting menu, while RedNote, also known as Xiaohongshu, reminded its user base of the new rule and reserved the right to add its own identifiers to any undeclared AI-generated content.


CAC’s Objective Behind Labelling AI Content

The mandatory content labelling law is just one part of the CAC’s key objectives for the year, which include AI content monitoring, strict enforcement of the new rules, and penalties for those using AI to disseminate misinformation or manipulate public opinion.

The regulators are particularly focused on keeping tabs on deceptive marketing on short video platforms and misinformation from social media influencers, along with protecting underage users online.

This move puts China at the forefront of AI regulation. While the Chinese government asserts the law is a necessary measure to protect public order and national security, it can be argued that such strict oversight could dampen the very innovation it seeks to monitor.

The regulation implemented in China may serve as a blueprint for how other nations approach AI governance for public safety. Recently, Meta AI made headlines for creating and hosting celebrity-impersonating chatbots without consent. Such incidents raise questions about when better-structured AI regulations could take effect in the U.S. or globally.

Abhijay Singh Rawat
Abhijay is the News Editor at TimesofAI and TimesofGames, who loves to follow the latest tech and AI trends. After office hours, you would find him either grinding competitive ranked games or trekking through the hills of Uttarakhand.