
Tech companies from across the globe endorse the European Commission's AI Ethics Guidelines

Companies adopting the Code must comply with their AI Act obligations, which apply from August 2, 2025.


From August 2, 2025, companies providing General-Purpose AI (GPAI) models must comply with specific obligations under the EU AI Act, as detailed in the newly published Code of Practice and related guidelines. These obligations focus mainly on transparency, copyright compliance, and—if applicable—safety and security measures for GPAI models with systemic risk.

Transparency Obligations

Under Article 53(1)(a) and (b) of the AI Act, GPAI model providers must draw up and keep up to date technical documentation on their models. This documentation must be made available to the AI Office, national authorities, and downstream providers that integrate the GPAI models into their own AI systems. In addition, providers must publish a sufficiently detailed summary of the content used to train the model. These requirements ensure that the AI's behaviour and development are traceable and understandable by regulators and users.

Copyright Compliance

Article 53 of the AI Act also requires GPAI providers to put in place a policy to comply with EU copyright law, including adopting "appropriate and proportionate technical safeguards" to prevent output that infringes copyright. Providers must also prohibit or mitigate the generation of infringing outputs. In practice, GPAI providers need clear processes and technical tools to respect third-party rights both in the data used for training and in model outputs.

Safety and Security Requirements for GPAI Models with Systemic Risk

Providers of GPAI models classified as posing systemic risk must fulfil stricter obligations, including model and risk evaluations, risk mitigation plans, serious-incident reporting, and adequate cybersecurity protection. These measures address the potential for widespread impact or misuse posed by the most capable large-scale AI models.

The AI Code of Practice on GPAI

The Code of Practice, published on July 10, 2025, serves as a voluntary but detailed compliance framework to help companies align with these legal duties. It does not, however, provide a "safe harbour" or automatic compliance certification; signatories must still diligently implement the transparency, copyright, and safety measures.

Enforcement and Compliance

This marks the start of an enforceable regulatory phase under the AI Act for GPAI providers. Providers that already have a GPAI model on the market and wished to adhere to the Code were expected to sign before August 1. By August 2, all 27 EU member states were required to have designated national oversight authorities to ensure businesses comply with the AI Act.

Key Players and Responses

Some 26 companies, including Amazon, Microsoft, IBM, OpenAI, Mistral AI, and Aleph Alpha, have signed up to the AI Code of Practice on GPAI. Google did not sign before August 1 but has stated it will do so. xAI, which signed only the Code's safety and security chapter, will have to demonstrate compliance with the AI Act's transparency and copyright obligations via alternative means.

Kent Walker, the president of global affairs at Google’s parent company Alphabet, voiced concerns about the AI Act's potential impact on innovation in a blogpost on Wednesday. He emphasized the need for a balanced approach that encourages innovation while ensuring safety and transparency.

Covering transparency, copyright, and safety and security, the Code of Practice is intended to guide providers of GPAI models through the AI Act's regulatory process, offering a detailed compliance framework that also responds to concerns about the law's impact on innovation.


