AI's Adaptability Demands Frequent Checks and Monitoring
Because AI models adapt over time, they require more frequent checks, documentation, and monitoring than traditional models. This rigour is needed to avoid regulatory penalties and the catastrophic consequences of models that drift, fabricate information, or leak sensitive data. The sheer volume of validation and monitoring work, however, can overwhelm traditional governance structures built around manual processes and annual validation cycles.
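To make one piece of this ongoing monitoring concrete, the sketch below shows an automated drift check that compares a model's production score distribution against a baseline captured at validation time. The Population Stability Index metric and the 0.2 threshold are common rules of thumb used here for illustration, not requirements from any regulation, and the score samples are synthetic.

```python
# Minimal sketch of an automated drift check, assuming model scores are logged
# both at validation time (baseline) and in production. PSI and the 0.2
# threshold are illustrative conventions, not regulatory standards.
import numpy as np

def population_stability_index(baseline, production, bins=10):
    """Compare two score distributions; higher values indicate more drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    prod_pct = np.histogram(production, bins=edges)[0] / len(production)
    # Small floor avoids division by zero / log(0) in empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - base_pct) * np.log(prod_pct / base_pct)))

rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.6, 0.10, 5_000)    # scores at validation time
production_scores = rng.normal(0.5, 0.15, 5_000)  # scores observed this week

psi = population_stability_index(baseline_scores, production_scores)
if psi > 0.2:  # rule of thumb: >0.2 suggests significant drift
    print(f"PSI={psi:.3f}: flag model for review")
else:
    print(f"PSI={psi:.3f}: distribution stable")
```

A check like this can run on every scoring batch, which is precisely the cadence that manual, annual validation cycles cannot match.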
To tackle this challenge, AI systems are increasingly used to monitor other AI systems, an approach often described as 'judge LLMs', and it is gaining traction in governance circles. That said, there is currently no public evidence of European companies implementing a 'democratic' form of AI governance in which multiple independent AI judge models vote on outcomes. The EU AI Act, the European Union's AI regulatory framework, takes a risk-based approach to governance but does not explicitly address this democratic model.
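The sketch below illustrates the idea, assuming the organization already has some LLM endpoint available: a judge model reviews the primary model's answers and flags problematic ones, and a majority vote across several judges shows what the 'democratic' variant would look like. The `query_llm` wrapper, the model names, and the prompt wording are all hypothetical placeholders.

```python
# Minimal sketch of a 'judge LLM' loop. `query_llm` is a hypothetical wrapper
# around whatever LLM API the organization already uses; it is not a real SDK call.
from dataclasses import dataclass

@dataclass
class Verdict:
    answer: str
    flagged: bool
    rationale: str

JUDGE_PROMPT = (
    "You are a compliance reviewer. Given the question and answer below, reply "
    "'PASS' or 'FAIL: <reason>' depending on whether the answer is factually "
    "grounded and free of sensitive personal data.\n\n"
    "Question: {question}\nAnswer: {answer}"
)

def query_llm(model: str, prompt: str) -> str:
    """Placeholder: route the prompt to your provider's API and return the text reply."""
    raise NotImplementedError

def judge_answer(question: str, answer: str, judge_model: str = "judge-v1") -> Verdict:
    """Ask one judge model to review a single production answer."""
    reply = query_llm(judge_model, JUDGE_PROMPT.format(question=question, answer=answer))
    flagged = not reply.strip().upper().startswith("PASS")
    return Verdict(answer=answer, flagged=flagged, rationale=reply)

def democratic_verdict(question: str, answer: str, judge_models: list[str]) -> bool:
    """Majority vote across independent judges: the 'democratic' governance idea."""
    votes = [judge_answer(question, answer, m).flagged for m in judge_models]
    return sum(votes) > len(votes) / 2  # flag only if most judges object
```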
To keep 'judge LLMs' effective, human validators must periodically review their verdicts and refine their performance. Regulations such as the EU AI Act can guide organizations toward responsible AI use by prohibiting certain AI practices and categorizing systems into risk tiers.
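One simple way to operationalize that human review, sketched below under the assumption that judge verdicts are logged as flagged/unflagged cases, is to route a random sample to human validators and track how often they agree with the judge. The sample size and the 90% agreement threshold mentioned in the comments are illustrative choices.

```python
# Minimal sketch of periodic human validation of a judge LLM, assuming judged
# cases are logged. Sample sizes and thresholds are illustrative, not prescribed.
import random

def sample_for_human_review(cases: list[dict], k: int = 50) -> list[dict]:
    """Draw a random sample of judged cases for a human validator to re-check."""
    return random.sample(cases, min(k, len(cases)))

def agreement_rate(judge_flags: list[bool], human_flags: list[bool]) -> float:
    """Share of sampled cases where the human validator agrees with the judge LLM."""
    matches = sum(j == h for j, h in zip(judge_flags, human_flags))
    return matches / len(judge_flags)

# Example: the judge flagged 3 of 5 sampled answers; the human flagged 2 of them.
judge = [True, True, True, False, False]
human = [True, True, False, False, False]
print(f"agreement: {agreement_rate(judge, human):.0%}")
# If agreement falls below, say, 90%, revise the judge prompt or escalate the
# model to the governance committee for a fuller review.
```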
In conclusion, the adaptive nature of AI models requires robust governance mechanisms, including frequent validation and monitoring. While AI systems can monitor other AI systems, human oversight remains crucial. Regulations like the EU AI Act can help organizations navigate AI responsibly, although the use of multiple independent AI judge models for democratic AI governance is not yet widely adopted in Europe.