Continuous surveillance after deployment: analyzing AI performance
In the rapidly evolving world of Artificial Intelligence (AI), post-deployment monitoring and reporting have become crucial for understanding the impacts of AI models and applications, ensuring public trust, and maintaining safety.
Across sectors and geographies, the need for post-deployment oversight is widely emphasised, with a focus on risk-based approaches, transparency, accountability, and safety. However, practices and regulations vary significantly.
In the United States, the "America’s AI Action Plan" issued by the White House in July 2025, under EO 14179, prioritises AI development with a deregulatory approach. The plan encourages federal agencies to eliminate unnecessary rules and promotes ideologically neutral AI systems, favouring rapid deployment over stringent post-market oversight.
In contrast, the European Union has adopted a more rigorous approach with the AI Act, a risk-based regulatory framework that mandates stringent post-market monitoring, transparency, and human oversight for medical AI systems, especially in critical care medicine.
Effective post-deployment practices often include continuous monitoring of AI model performance, transparent reporting of outcomes and incidents, human oversight to intervene or override automated decisions, establishment of governance bodies or committees, and collaboration between public regulators, private developers, and sector experts.
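To make the first of these practices concrete, the sketch below shows one way continuous performance monitoring might be implemented: a deployed model's predictions are compared against observed outcomes over a sliding window, and an alert is raised when accuracy drops below a threshold. The class names, window size, and threshold here are illustrative assumptions rather than a prescribed standard.

```python
from collections import deque
from dataclasses import dataclass
from typing import Optional


@dataclass
class DriftAlert:
    """Returned when rolling accuracy falls below the acceptable threshold."""
    window_accuracy: float
    threshold: float


class PerformanceMonitor:
    """Tracks a deployed model's accuracy over a sliding window of
    labelled outcomes and flags degradation for human review."""

    def __init__(self, window_size: int = 500, alert_threshold: float = 0.90):
        self.window = deque(maxlen=window_size)   # most recent correct/incorrect flags
        self.alert_threshold = alert_threshold    # minimum acceptable rolling accuracy

    def record(self, prediction, actual) -> Optional[DriftAlert]:
        """Log one prediction/outcome pair and check the rolling accuracy."""
        self.window.append(prediction == actual)
        if len(self.window) < self.window.maxlen:
            return None                           # not enough data to judge yet
        accuracy = sum(self.window) / len(self.window)
        if accuracy < self.alert_threshold:
            # In practice this would open an incident report or page a
            # reviewer rather than simply return an alert object.
            return DriftAlert(window_accuracy=accuracy, threshold=self.alert_threshold)
        return None
```

Crucially, a monitor like this only flags degradation; a human reviewer decides whether to retrain, recalibrate, or withdraw the model, which keeps the human-oversight role described above intact.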
However, challenges remain. Regulatory efforts are sometimes fragmented and reactive, and the operational burden of continuous monitoring can be substantial, especially in complex, high-stakes environments. The effectiveness of these measures depends heavily on clear regulatory mandates, interdisciplinary governance, resource commitment for monitoring, and adaptable frameworks that reflect evolving AI risks.
Looking ahead, it will be essential to develop post-deployment measures that work well across sectors. This will require ongoing testing and monitoring to understand the impacts of specific AI deployments, since many harms cannot be reliably anticipated before a model becomes available.
Large AI companies could do more to roll out research grant programmes for post-deployment assessment. Governments, too, have a role to play in securing access to deployment information and deciding how it is shared, especially in high-risk sectors like healthcare.
As AI continues to be deployed in various sectors, from mundane tasks to complex applications, it is crucial to maintain transparency, accountability, and safety to ensure public trust and confidence in this transformative technology.
- To maintain trust and confidence in AI as it permeates various sectors, from simple tasks to complex applications, large AI and technology companies could establish research grant programmes focused on post-deployment assessment.
- In AI regulation, the United States favours a deregulatory, rapid-deployment approach, while the European Union takes a more stringent stance: the AI Act mandates extensive post-market monitoring, transparency, and human oversight for high-risk systems such as those used in critical care medicine, illustrating how differing regulatory strategies produce differing oversight outcomes.