AI and GDPR Compliance by Design - Ep. 3: Development Phase Guidelines
In Artificial Intelligence (AI), compliance with data protection regulations, such as the General Data Protection Regulation (GDPR) and the EU Artificial Intelligence Act (AI Act), is paramount. During the development phase of an AI system, teams implement key measures to ensure compliance, focusing in particular on data management, privacy, and transparency.
During this critical stage, compliance involves the implementation of data protection by design and by default. This encompasses several key aspects:
- The use of privacy-enhancing technologies, such as differential privacy, federated learning, synthetic data, homomorphic encryption, and secure multiparty computation, to minimize and anonymize personal data used in training or validation (a minimal differential-privacy sketch follows this list).
- Ensuring data quality and relevance, aligning with the AI system’s purpose and legal basis for processing.
- Embedding technical and organizational security measures, such as pseudonymization and encryption, to protect personal data throughout development (see the keyed-hash sketch after this list).
- Maintaining documentation and records demonstrating GDPR compliance efforts, including data lineage, GDPR records of processing activities, and the technical documentation stipulated by the AI Act.
- Preparing the AI system architecture to support individuals’ rights, such as access, rectification, erasure, objection, and portability (see the subject-store sketch after this list).
- Addressing outputs of the AI model that might affect data subjects’ rights or create bias or discrimination risks.
- Incorporating ongoing risk assessments to mitigate potential systemic or privacy risks emerging during development.
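To make the first bullet concrete, here is a minimal sketch of one privacy-enhancing technique: the Laplace mechanism for differential privacy applied to a counting query. The records, predicate, and epsilon value are illustrative assumptions, not anything prescribed by the GDPR or the AI Act.

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponential draws follows a
    # Laplace(0, scale) distribution.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(records, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1 (adding or removing one
    # individual changes the count by at most 1), so Laplace noise
    # with scale 1/epsilon gives epsilon-differential privacy for
    # this single query.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical training records and privacy budget, for illustration.
records = [{"age": a} for a in (23, 35, 41, 52, 29, 60)]
noisy = dp_count(records, lambda r: r["age"] > 40, epsilon=0.5)
print(f"Noisy count of records with age > 40: {noisy:.2f}")
```

In practice the overall guarantee is governed by a privacy budget tracked across all queries and training runs, not a single epsilon.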
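As one example of a technical security measure from the list above, the following sketch pseudonymizes a direct identifier with a keyed hash (HMAC-SHA256). The key name and value are placeholders; a real deployment would load the key from a key-management service and store it separately from the data.

```python
import hashlib
import hmac

# Hypothetical key for illustration only; never hard-code a real one.
PSEUDONYM_KEY = b"replace-with-a-managed-secret"

def pseudonymize(identifier: str) -> str:
    # HMAC-SHA256 maps a direct identifier to a stable pseudonym.
    # Unlike a plain hash, the mapping cannot be reversed or
    # brute-forced from the pseudonyms alone without the key, and
    # destroying the key later further weakens re-identification.
    return hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

print(pseudonymize("alice@example.com"))
```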
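Finally, a hedged illustration of architecting for data-subject rights: an in-memory store indexed by subject ID so that access and erasure requests can be served directly. The class and method names are invented for this example; a production system would need the same indexing across databases, backups, and any derived training data.

```python
from dataclasses import dataclass, field

@dataclass
class SubjectStore:
    # Index records by data-subject ID so access (GDPR Art. 15) and
    # erasure (Art. 17) requests can be served without a full scan.
    _records: dict[str, list[dict]] = field(default_factory=dict)

    def add(self, subject_id: str, record: dict) -> None:
        self._records.setdefault(subject_id, []).append(record)

    def access(self, subject_id: str) -> list[dict]:
        # Right of access: return every record held on the subject.
        return list(self._records.get(subject_id, []))

    def erase(self, subject_id: str) -> int:
        # Right to erasure: remove all records for the subject and
        # report the count for the audit trail.
        return len(self._records.pop(subject_id, []))

store = SubjectStore()
store.add("ds-001", {"feature": "income", "value": 52000})
store.add("ds-001", {"feature": "age", "value": 41})
print(store.access("ds-001"))
print(f"Erased {store.erase('ds-001')} record(s) for ds-001")
```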
A recent legal analysis by JD Supra (2025) highlights that the development phase builds upon the data preparation and anonymization strategies established in the design phase and focuses on refining the model and system while embedding GDPR principles. Guidance from the European Data Protection Board (EDPB) and CNIL recommendations likewise emphasize tailoring technical security measures and data-annotation practices to maintain compliance.
Under the EU AI Act, whose obligations for providers of general-purpose AI models apply from August 2025, providers of AI models must perform risk assessments, ensure transparency (e.g., disclosure of training data sources), and enable compliance mechanisms during this development stage. The AI Act’s transparency and risk mitigation requirements complement GDPR’s data protection mandates, underscoring the need for closely aligned compliance work throughout development.
In summary, GDPR compliance during the AI development phase, read together with the AI Act, focuses on embedding privacy and security by design, handling personal data on a lawful basis, maintaining ongoing compliance documentation and risk mitigation, and preparing the AI system to respect fundamental rights once deployed.
Sources:
- JD Supra, “AI and GDPR: A Road Map to Compliance by Design – Development Phase” (2025)
- EU Commission, Regulatory Framework on AI and GPAI obligations (2025)
- CNIL Recommendations on AI system development under GDPR (2025)