Examining AI Risks: New Strategies and Obstacles Facing Organizations, According to Our Recent Report
The FPF Center for Artificial Intelligence has published a comprehensive report titled "AI Governance Behind the Scenes: Emerging Practices For AI Impact Assessments". The report, which was created with input from dozens of expert stakeholders over six months, offers insights into the practices and challenges companies face when conducting AI impact assessments.
As AI models become more widespread and powerful, many organizations follow a four-step approach to AI impact assessments. This approach includes performing structured assessments, integrating them into broader risk management frameworks, engaging multidisciplinary teams, and iteratively updating assessments as AI models evolve or new use cases emerge.
However, companies face several challenges in this process. One is the difficulty of anticipating all potential harms from complex AI systems, given the fast-evolving nature of the technology. Another is managing and mitigating risks across different stakeholders and regulatory regimes simultaneously. Companies must also translate high-level AI governance principles into actionable requirements understood throughout the organization, and address gaps in workforce skills and experience related to AI ethics, accountability, and compliance.
The report suggests that while AI impact assessments are becoming a central governance practice, companies are still in the early stages of refining methods and overcoming hurdles to make AI risk management effective and practical. To strengthen their AI impact assessments, organizations should consider bolstering information-gathering processes from third-party model developers and system vendors, improving internal education about AI risks, and enhancing techniques for measuring the effectiveness of risk management strategies.
For more information, read the full FPF report. To learn more about FPF, visit our website. For media inquiries, contact Media@our website.
FPF is a global non-profit organization that brings together academics, civil society, government officials, and industry to evaluate the societal, policy, and legal implications of data use. The organization has offices in Washington D.C., Brussels, and Tel Aviv.
Novel uses of AI can create uncertainty about when risk has been brought within acceptable levels. Many organizations struggle to obtain complete information from model developers and system providers, adding to the complexity of assessing diverse AI risks and operationalizing assessments into practical governance measures.
- The FPF Center for Artificial Intelligence's report, "AI Governance Behind the Scenes: Emerging Practices For AI Impact Assessments," suggests that companies are still refining methods and overcoming challenges to make AI risk management effective and practical.
- As AI models evolve and new use cases emerge, organizations should consider enhancing techniques that measure the effectiveness of their AI impact assessment strategies.
- To make AI impact assessments more comprehensive, organizations may need to bolster information-gathering processes from third-party model developers and system vendors.
- The FPF, a global non-profit organization, brings together various stakeholders to evaluate the implications of data use, including societal, policy, and legal impacts.
- The challenge of obtaining complete information from model developers and system providers adds to the complexity of assessing diverse AI risks and operationalizing assessments into practical governance measures.