Australia: Background and Purpose of the Emergency Forum on the Surge in AI-Generated Child Sexual Abuse Material (CSAM)
On July 14, 2025, Canberra hosted an “Emergency Roundtable on the Surge in AI-Generated Child Sexual Abuse Material (CSAM),” organized by Australia’s National Children’s Commissioner. The forum was convened with the following objectives:
- Assess the Current Situation and Share Trends
Based on data from the U.S. National Center for Missing and Exploited Children showing a 1,325% year-over-year increase in reported cases, exceeding 67,000 in 2024, the forum discussed the deepening crisis driven by misuse of cutting-edge technologies.
- Examine Technical and Regulatory Countermeasures
Explore collaboration with platform operators, standardization of AI-based detection technologies, and the direction of legislative reforms.
- Build an International Framework
Bring together domestic and international experts, including 2025 Australian of the Year Grace Tame, the eSafety Commissioner, Bravehearts, and Childlight Australia, to establish global leadership on this issue.
Responses by Major Countries
European Union (EU)
- Addition of CSAM Provisions to the AI Act
The European Parliament supports amendments to the AI Act that comprehensively ban AI-generated CSAM and criminalize possession or distribution of generative tools and instructional manuals used to produce such material, with no exception for personal use. Final trilogue negotiations are underway.
United Kingdom
- Tougher Penalties and Enhanced Preventive Measures
The Internet Watch Foundation (IWF) reported 1,286 verified AI-generated CSAM videos in the first half of 2025. The UK government has passed legislation imposing up to five years’ imprisonment for possession, creation, or distribution of AI tools used to produce CSAM.
United States
- State-Level Legislative Activity
While Congress has yet to enact comprehensive federal AI legislation, more than 550 AI-related bills have been introduced in at least 45 states. These proposals focus on public safety and privacy protections, underscoring the need for unified federal standards.
Other Developments
- Canada
In spring 2025, the cabinet approved the proposed Online Child Protection Enhancement Act, which highlights prevention of AI-generated CSAM and considers mandating detection obligations for telecommunications providers.
- Japan
The Ministry of Internal Affairs and Communications, together with e-Gov, revised platform operator guidelines to explicitly require detection of CSAM produced through AI misuse, and strengthened administrative sanctions to fines of up to ¥50 million for violations.
Key Challenges Ahead
- Keeping Pace with Technological Evolution
Develop detection algorithms and platform implementation frameworks that can keep pace with the rapid advancement of AI models.
- Legal and International Coordination
Establish mechanisms for cross-border information sharing and joint investigations that transcend differing national legal systems.
- Victim Support
Expand not only measures to prevent circulation of abusive materials but also trauma care and rehabilitation services for survivors.