ChatGPT and similar AI platforms now face EU regulations demanding transparency about their data practices
The European Commission has introduced new legal guidelines and a voluntary code of conduct for the AI industry, effective from tomorrow. These rules, based on the EU AI Act adopted in May 2024, aim to strengthen copyright protection and ensure the safety of AI models like ChatGPT and Gemini.
Under the new rules, developers of particularly powerful models that could pose a risk to the public will have to document their safety measures. They will also have to specify what measures they have taken to protect intellectual property and disclose how their systems work and what data they were trained on.
Operators of AI systems will now have to report which sources they used for their training data and whether they scraped websites automatically. Each company will have to provide a contact point for rights holders, and adherence to the voluntary code of conduct could, according to the Commission's assessment, give providers greater legal certainty and a lower administrative burden.
However, these new rules have sparked debate among creators and developers. The EU AI Act's intellectual property (IP) protection provisions, especially Article 53, have been criticized as inadequate and favoring AI model providers over creators.
Article 53 was designed to facilitate enforcement of copyright and related rights by requiring transparency around training data and AI-generated content. However, creative industries including CISAC, ICMP, IFPI, and IMPALA have condemned the current implementation as a "missed opportunity" and a "betrayal." They argue the provisions fail to provide meaningful protection or effective legal tools against unauthorized use of protected works in generative AI training.
The General-Purpose AI Code of Practice, linked to Article 53 compliance, includes chapters on Transparency, Copyright, and Safety and Security for AI providers. However, rightsholder groups criticize these guidelines as insufficiently addressing IP concerns and as lacking mechanisms that would give authors and publishers fair compensation or meaningful control.
Other EU copyright rules relevant in the AI context include the Copyright in the Digital Single Market Directive (CDSM), which permits text and data mining (TDM) and, for uses outside scientific research, gives rights holders an opt-out mechanism to reserve their rights. Critics highlight this mechanism as insufficient and not a substitute for direct licensing or more robust IP enforcement.
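In practice, a TDM opt-out is usually expressed in machine-readable form at the website level. The sketch below shows two common approaches, assuming the crawler honors them: a robots.txt rule addressing OpenAI's GPTBot training crawler, and a response header from the draft TDM Reservation Protocol (TDMRep). The CDSM Directive itself does not mandate any particular technical format, so these are illustrative conventions, not legal requirements.

```text
# robots.txt — ask an AI training crawler not to fetch any pages
User-agent: GPTBot
Disallow: /

# HTTP response header per the draft TDM Reservation Protocol (TDMRep):
# a value of 1 signals that text-and-data-mining rights are reserved
tdm-reservation: 1
```

Whether such signals are effective depends entirely on crawler compliance, which is the core of the criticism: an opt-out that can be ignored is weaker than a licensing requirement.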
In summary, while the EU AI Act's Article 53 theoretically requires AI model providers to disclose training data sources and facilitate copyright enforcement, authors, artists, and publishers feel the current implementation inadequately addresses their concerns about unauthorized use and compensation. They call for stronger, enforceable protections and collaborative licensing approaches to better safeguard their intellectual property rights in the age of generative AI.
The European AI Office will start enforcing the new AI Act rules in August 2026 for new models, and in August 2027 for models that were already on the market before August 2, 2025. Non-compliance could result in fines of up to 15 million euros or three percent of a company's global annual turnover, whichever is higher.
- Starting tomorrow, developers will have to disclose what data their AI systems were trained on, including the sources of the training data, in order to comply with the new AI Act rules.
- The European AI Office can impose fines of up to 15 million euros or three percent of a company's global annual turnover on AI model providers that fail to follow the safety and intellectual-property requirements established by the AI Act.