
As AI models grow increasingly capable, it has become difficult to tell what’s genuine and what’s fake on the internet. While some measures exist to detect AI-generated content, they aren’t practical for the general public, who tend to believe what they see. The concern, however, goes beyond misinformation or entertainment: AI-generated content can be weaponized to produce harmful or illegal material, especially involving minors.
Spain to investigate major social media platforms over circulation of AI-generated child sexual abuse material
Spain isn’t taking this lightly: it has ordered an investigation into social media platforms including X, Meta, and TikTok for allegedly spreading AI-generated child sexual abuse material. The move comes as European regulators increasingly scrutinize social media platforms over harmful content, anti-competitive behavior, and features designed to keep minors hooked on their apps.
As reported by Reuters, Prime Minister Pedro Sánchez emphasized that authorities “cannot allow algorithms to amplify or shelter” such heinous crimes. The investigation was prompted by a technical report from three government bodies and forms part of a broader package of social media regulations unveiled earlier this month. The Justice Ministry will ask prosecutors to examine whether X, Meta, and TikTok are involved in the creation or distribution of illegal AI-generated content.
The concern around illegal AI-generated child abuse content is backed by a chilling statistic: one in five young people in Spain, mostly girls, report that AI-generated fake nude images of them were created and shared online while they were minors. Other countries are also ramping up efforts to curb the distribution of illegal AI-generated content.
Grok is already under heavy scrutiny from regulators around the world
Earlier today, Ireland’s Data Protection Commission (DPC) also opened a formal probe into X’s AI chatbot Grok, examining how it handles personal data and whether it can generate harmful sexualized content. In recent months, Grok has drawn scrutiny from major regulators worldwide: India has warned X over Grok, while Malaysia and Indonesia have blocked access to the chatbot in their countries.
Meanwhile, France, Brazil, and Canada have filed several complaints against similar platforms with the European Commission, and major tech companies are already under investigation for failing to comply with the EU’s Digital Services Act (DSA). With AI tools advancing rapidly, it’s clear that traditional moderation efforts aren’t enough. Platforms will need to keep improving their labeling, detection, and reporting mechanisms; otherwise, the risk of misuse remains high, and governments will likely have to step in more forcefully.









