EU Regulation Inquiry: Does the AI Act encompass deepfake controls?
The EU AI Act, a landmark regulation approved by the European Parliament on March 13, 2024, includes provisions aimed at the growing problem of deepfakes. The Act requires providers of AI systems that generate synthetic audio, images, video, or text, including deepfake generators, to mark their outputs as artificially generated or manipulated, in a machine-readable format that allows the content to be detected as synthetic.
This transparency measure is a crucial step towards fighting misinformation and impersonation. However, experts highlight that watermarking, the primary labelling method, should be combined with other measures as watermarking alone has limitations in preventing misuse and ensuring traceability.
In practice, the provider obligation centres on watermarking: a transparency technique that embeds a "unique signature" in the output of an AI model so that deepfake and AI-generated content can be clearly identified.
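As an illustration only, a machine-readable label can be thought of as structured metadata bound to the content it describes, for example by keying it to a hash of the output. The function names and label format below are hypothetical; the Act does not prescribe any particular scheme, and real watermarks are typically embedded in the content itself rather than carried alongside it:

```python
import hashlib

def label_output(content: bytes) -> dict:
    """Attach a hypothetical machine-readable 'AI-generated' label.

    The label binds itself to the content via a SHA-256 hash, so a
    verifier can confirm that it refers to this exact output.
    (Illustrative sketch; not the scheme mandated by the EU AI Act.)
    """
    return {
        "ai_generated": True,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def is_labelled(content: bytes, label: dict) -> bool:
    """Check that the label is present and matches the content."""
    return (
        label.get("ai_generated") is True
        and label.get("content_sha256") == hashlib.sha256(content).hexdigest()
    )

output = b"synthetic image bytes"
label = label_output(output)
print(is_labelled(output, label))           # True
print(is_labelled(b"edited bytes", label))  # False: label no longer matches
```

Note that a sidecar label like this illustrates exactly the weakness discussed below: anyone who copies the content without its metadata has silently removed the disclosure.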
However, concerns have been raised about the effectiveness, technical implementation, and robustness of watermarking as a remedy against deepfakes. On its own, watermarking is insufficient: sophisticated actors can circumvent or remove watermarks, and AI watermarking is still an evolving field that lacks robust, tamper-resistant, globally standardised methods.
To be fully effective, watermarking requires broad adoption across platforms, transparency about the methods used, and integration with complementary technologies such as cryptographic signatures, provenance metadata standards, model fingerprinting, and AI content detectors. The ongoing arms race between deepfake generation and detection also means watermarking will need continual improvement and cannot guarantee complete prevention of misuse.
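To make the "complementary technologies" point concrete, a verifier might require several independent signals to agree before trusting a provenance claim, so that stripping any single layer is not enough to defeat the check. The checks below are stubs with hypothetical names and thresholds; real systems would decode an actual embedded watermark, verify cryptographic signatures over a provenance standard such as a C2PA manifest, and run a trained detector:

```python
def watermark_present(content: bytes) -> bool:
    # Stub: a real check would decode an embedded watermark signal.
    return content.startswith(b"WM:")

def provenance_valid(metadata: dict) -> bool:
    # Stub: a real check would verify a cryptographic signature
    # over the provenance record (e.g. a C2PA-style manifest).
    return metadata.get("signed") is True

def detector_score(content: bytes) -> float:
    # Stub: a real detector is a trained classifier returning the
    # probability that the content is synthetic.
    return 0.9 if b"synthetic" in content else 0.1

def is_disclosed_ai_content(content: bytes, metadata: dict) -> bool:
    """Accept the 'AI-generated' claim only when at least two of the
    three independent signals agree (threshold chosen arbitrarily
    for illustration)."""
    signals = [
        watermark_present(content),
        provenance_valid(metadata),
        detector_score(content) > 0.5,
    ]
    return sum(signals) >= 2

content = b"WM:synthetic frame"
print(is_disclosed_ai_content(content, {"signed": True}))  # True: all three signals
print(is_disclosed_ai_content(b"synthetic frame", {}))     # False: detector alone
```

The design point is redundancy: removing the watermark or stripping the metadata degrades one signal, but the combined check can still flag the content.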
The EU AI Act also requires deployers (users) of deepfake generators to disclose that the content has been artificially generated or manipulated. However, the Act does not spell out the circumstances under which this disclosure requirement may be relaxed, which creates uncertainty, nor does it set out concrete measures for dealing with non-compliance.
The Q&A series hosted on The Sumsuber and on social media aims to address these concerns and shed light on the EU AI Act and deepfake regulation. This week's instalment features AI Policy and Compliance Specialist Natalia Fritzen, who discusses the EU AI Act and its deepfake provisions. Questions for the series can be submitted via Sumsub's Instagram and LinkedIn.
The Sumsub Q&A series runs bi-weekly, with new answers released every other Thursday. It provides a platform for experts from legal, tech, and other fields to discuss the implications of, and solutions to, the growing problem of deepfakes.
In short, by mandating watermarking for AI-generated content, the EU AI Act introduces a technology aimed at increasing transparency and combating deepfakes. Yet the effectiveness of watermarking as a standalone solution remains questionable, and its full potential will only be realised through broader adoption, integration with complementary technologies, and continual improvement.