New EU rules for Artificial Intelligence: what changes for ChatGPT and similar platforms
The European Union (EU) is taking a bold step in regulating Artificial Intelligence (AI), positioning itself as a pioneer in the field with clear legal guidelines. Under the new AI Act, obligations for providers of general-purpose AI (GPAI) models apply from August 2, 2025, imposing stricter transparency requirements on those providers.
Under the AI Act, GPAI providers such as OpenAI (behind ChatGPT), Meta (Facebook and Instagram), and Google will have specific obligations. These include maintaining detailed technical documentation, publishing summaries of the training data used, complying with EU copyright law, and sharing relevant information with regulators and downstream users.
The transparency obligations aim to ensure clarity about how AI models are built and trained, contributing to greater safety, accountability, and legal certainty in AI deployment within the EU. Articles 53 and 55 of Regulation (EU) 2024/1689 (the AI Act) mandate clear disclosures on a model’s capabilities, limitations, and training data characteristics.
Providers must produce and keep up-to-date comprehensive technical documentation that can be inspected by regulatory authorities. They are also required to publish summaries of their training datasets, including information sufficient to understand the nature and provenance of the training data, helping users and regulators assess potential risks or biases.
For models classified as presenting systemic risk, such as very large models whose cumulative training compute exceeds 10^25 floating-point operations (FLOPs), there are additional transparency and safety measures. These include formal notifications to the European Commission, ongoing model evaluations, incident reporting, and cybersecurity safeguards.
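The compute threshold above is a simple numeric test. The following sketch illustrates how it could be expressed; the function and constant names are ours, while the 10^25 FLOP figure comes from the Act's systemic-risk presumption:

```python
# Illustrative sketch (not legal advice): the AI Act presumes systemic risk
# for GPAI models whose cumulative training compute exceeds 10^25 FLOPs.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if a model's training compute crosses the threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# A model trained with 5 x 10^25 FLOPs crosses the threshold:
print(presumed_systemic_risk(5e25))  # True
# One trained with 10^24 FLOPs does not:
print(presumed_systemic_risk(1e24))  # False
```

In practice the Commission can also designate a model as systemic-risk on other grounds, so the threshold is a presumption rather than the only trigger.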
The purpose of these transparency rules is to facilitate responsible AI development and usage, ensuring AI systems deployed in the EU market operate with an openness that enables auditing, oversight, and redress where needed. Models already on the market, such as ChatGPT, benefit from a grace period until August 2, 2027.
The new AI regulation directly affects millions of Germans who use AI tools daily. The EU AI Office will coordinate oversight at the European level, while each EU member state must designate a market surveillance authority by August 2, 2025.
Automatic labeling of AI-generated content is the goal, and social networks are already labeling such content more often. The labeling requirement is primarily intended to make deepfakes and manipulation more difficult. Violations of the new AI rules can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher.
Some tech companies, like Anthropic (the provider of ChatGPT competitor Claude), are more cooperative and intend to sign the voluntary Code of Practice. Others, such as Meta, refuse outright, accusing the EU of going beyond what the law itself requires.
These measures establish a foundation of traceability, accountability, and informed user interaction aligned with broader EU goals for trustworthy AI governance. The new rules will influence the behavior of major tech companies, affecting millions of AI users in Germany and Europe.