EU Imposes New Regulations on Artificial Intelligence Applications, Including ChatGPT
Starting August 2, 2025, providers and deployers of general-purpose AI (GPAI) models, such as ChatGPT, Google Gemini, Anthropic's Claude, and Mistral's generative models, will be subject to new security and transparency rules in the European Union under the EU AI Act.
According to the European Commission, these rules will bring greater transparency, security, and accountability to AI systems on the market.
Key aspects of these new regulations include:
- Transparency Requirements: Providers must publish a public summary detailing the training data used for their AI models. This includes disclosing the sources of large datasets and relevant domain names, as well as explaining data processing aspects to allow legitimate parties to exercise their rights.
- Copyright Protections: The AI Act establishes clear rules ensuring that GPAI models respect intellectual property rights.
- Safety and Security Measures: Providers must assess and mitigate any systemic risks linked to the capabilities and deployment of their AI models.
- Voluntary Code of Practice: There is a voluntary General-Purpose AI Code of Practice launched by the European Commission in July 2025. Signing this code helps companies demonstrate compliance, offers legal certainty, reduces administrative burdens, and provides clear implementation guidelines. OpenAI and Anthropic are among the first to sign this code, setting a regulatory benchmark.
- Enforcement Timeline: While the AI Act entered into force on August 1, 2024, the specific GPAI obligations become effective on August 2, 2025, with enforcement by the EU AI Office planned to start one year later for new models.
These rules apply to companies that develop AI systems used in the EU ("providers") as well as those that deploy such AI systems in the EU ("deployers") regardless of where they are headquartered.
In summary, from Saturday, August 2, 2025, providers and deployers of general-purpose AI models must ensure transparency about training data and safety measures, comply with copyright rules, and are encouraged to adopt the voluntary Code of Practice to align with the EU’s AI Act requirements and maintain market access.
It is important to note that the EU AI Act aims to safeguard fundamental rights within the EU, but not all of its provisions are yet applicable. The EU also foresees fines for infringements. AI providers that place models covered by this regime on the market must comply with the new rules, and those operating models posing systemic risk must notify the European Commission's AI Office and mitigate those risks.
In the case of ChatGPT, OpenAI has two more years to fully adapt, and the EU strongly recommends that providers like OpenAI adopt the Code of Practice now in preparation for full compliance by 2027. The new rules will involve increased documentation, transparency, security, and copyright compliance for AI models, as well as risk assessment requirements for AI models in the EU.