- Who: China’s Cyberspace Administration
- Why: To promote AI content transparency and growth while safeguarding public interests
- When: Draft proposal announced in September, effective January 2025
- How: By mandating clear labelling and metadata for all AI-generated content
China’s Cyberspace Administration has unveiled a draft plan for regulating digital platforms. The plan requires that any content produced by artificial intelligence be distinctly labelled, using both visible tags and embedded metadata to ensure transparency and traceability.
AI Content Transparency Background and Purpose
The draft plan aligns with existing cybersecurity laws to foster the advancement of AI while keeping its operation within acceptable boundaries. By requiring that AI-generated content be labelled, the measure aims to protect citizens’ rights and uphold the public interest.
These guidelines apply to internet service providers responsible for identifying AI-derived content under the existing algorithm recommendation and deep synthesis management regulations. Entities that do not provide services to the domestic public are exempt.
Definitions
- AI-generated synthetic content covers all forms of media—text, audio, video—created using AI technology.
- There are two types of identifier: explicit and implicit.
- Explicit: Visible or audible indicators, such as text labels or sound prompts.
- Implicit: Metadata tags that are not immediately apparent to users.
Requirements for Service Providers
- Explicit Labels: Service providers must add clear labels or prompts at the beginning of text, audio, and video content.
- Implicit Identifiers: Metadata must include production details such as the provider’s name and a content number (see the sketch after this list).
- Verification Processes: Platforms must ensure implicit tags exist and add explicit labels if necessary.
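To make these two requirements concrete, the following Python sketch shows how a provider might attach a visible label to generated text while recording an implicit metadata identifier alongside it. The label wording and the field names (provider, content_id, generated_at, synthetic) are hypothetical; the measures do not prescribe a specific schema.

```python
import json
from datetime import datetime, timezone

EXPLICIT_LABEL = "[AI-generated content]"  # hypothetical wording for the visible tag

def label_generated_text(text: str, provider: str, content_id: str) -> dict:
    """Attach an explicit label to the text and build an implicit metadata record.

    The schema is illustrative only: the measures call for production details
    such as the provider name and a content number, but do not fix field names.
    """
    implicit_metadata = {
        "provider": provider,          # service provider name
        "content_id": content_id,      # content number assigned by the provider
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "synthetic": True,             # marks the item as AI-generated
    }
    return {
        "display_text": f"{EXPLICIT_LABEL} {text}",  # explicit label at the start of the text
        "metadata": implicit_metadata,               # implicit identifier, not shown to users
    }

if __name__ == "__main__":
    item = label_generated_text("Sample paragraph produced by a model.",
                                provider="ExampleAI", content_id="0001")
    print(item["display_text"])
    print(json.dumps(item["metadata"], indent=2))
```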
Responsibilities of Content Platforms
- Identify and label AI-generated content before dissemination.
- Incorporate metadata indicating content attributes and details of the disseminating platform (see the sketch after this list).
- Educate users about these requirements and ensure compliance.
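Building on the hypothetical schema above, the sketch below illustrates a platform-side check before dissemination: if an implicit identifier is present, the platform appends its own dissemination details; if not, it falls back to an explicit warning. Again, the field names and label wording are assumptions made for illustration only.

```python
def process_before_dissemination(item: dict, platform_name: str) -> dict:
    """Platform-side check before publishing, using the same hypothetical schema
    as the provider-side sketch above."""
    metadata = item.get("metadata") or {}
    if metadata.get("synthetic"):
        # Implicit identifier present: record which platform disseminates the item.
        metadata["disseminated_by"] = platform_name
        item["metadata"] = metadata
    else:
        # No identifier found: flag the item as possibly AI-generated.
        item["display_text"] = "[Possibly AI-generated] " + item.get("display_text", "")
    return item

if __name__ == "__main__":
    unmarked = {"display_text": "A paragraph with no metadata attached."}
    print(process_before_dissemination(unmarked, "ExamplePlatform")["display_text"])
```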
Metadata and Enforcement
- Service providers are encouraged to use digital watermarks for identification (a toy watermarking example follows this list).
- Authorities may enforce these rules, and non-compliance can lead to penalties.
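Digital watermarking can take many forms, and the measures do not specify a particular scheme. As a purely illustrative toy example, the sketch below hides a short provider string in the least significant bits of an image’s red channel using Pillow; the payload string is hypothetical, and a production watermark would need to survive compression, resizing, and editing.

```python
from PIL import Image

def embed_watermark(img: Image.Image, payload: str) -> Image.Image:
    """Hide the payload bytes in the least significant bit of each pixel's red channel."""
    bits = [(byte >> (7 - i)) & 1 for byte in payload.encode("utf-8") for i in range(8)]
    out = img.convert("RGB")                 # work on an RGB copy of the input image
    width, height = out.size
    if len(bits) > width * height:
        raise ValueError("payload too large for this image")
    pixels = out.load()
    for idx, bit in enumerate(bits):
        x, y = idx % width, idx // width
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | bit, g, b)  # overwrite the red channel's lowest bit
    return out

def extract_watermark(img: Image.Image, num_bytes: int) -> str:
    """Read num_bytes of payload back out of the red-channel LSBs."""
    pixels = img.convert("RGB").load()
    width, _ = img.size
    data = bytearray()
    for i in range(num_bytes):
        byte = 0
        for j in range(8):
            idx = i * 8 + j
            byte = (byte << 1) | (pixels[idx % width, idx // width][0] & 1)
        data.append(byte)
    return data.decode("utf-8")

if __name__ == "__main__":
    payload = "provider=ExampleAI;content_id=0001"  # hypothetical identifier string
    image = Image.new("RGB", (64, 64), (200, 200, 200))
    marked = embed_watermark(image, payload)
    print(extract_watermark(marked, len(payload.encode("utf-8"))))
```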
This strategic measure underscores China’s commitment to responsible AI usage, balancing technological progress with societal protection. When sharing AI-generated content, service providers must implement these identifiers to ensure clarity and safety.
Articles Detailing the Proposed Regulations
- Article 1: Outlines the framework based on national laws to guide healthy AI development and ensure transparent identification of synthetic content.
- Article 2: Establishes the rules for service providers involved in identifying AI-generated content, exempting entities that do not provide services to the domestic public.
- Article 3: Defines AI-generated content as any text, image, sound, or video crafted by artificial intelligence, with identification markers classified as explicit or implicit.
- Article 4: Specifies criteria for clearly marking AI-generated content through text, sound, or visual indicators in various digital media forms.
- Article 5: Demands implicit identifiers within metadata to capture production information, encouraging the use of digital watermarks.
- Article 6: Requires platforms to verify metadata for identifiers and apply warnings to content potentially generated by AI.
- Article 7: Mandates verification of whether internet applications offer synthetic content generation capabilities before they are distributed.
- Article 8: Requires that service agreements include specifications on synthetic content identifiers and user responsibilities.
- Article 9: Allows users to request unmarked content under strict conditions and logging practices by service providers.
- Article 10: Obligates users to declare and use platform identification tools for synthetic content and prohibits malicious manipulation of identifiers.
- Article 11: Calls for compliance with mandatory national standards in labelling practices.
- Article 12: Encourages information sharing to prevent illegal activities tied to synthetic content.
- Article 13: Details penalties for failure to appropriately mark AI-generated content.
- Article 14: Sets the enforcement date for these measures as January 2025.