Unmonitored "shadow AI" tools are escalating the financial impact of data breaches, according to a recent report
In a recent report, IBM highlights the growing threat that unmonitored artificial intelligence (AI) tools, also known as "shadow AI," pose to business security. The report, based on 470 interviews with individuals at 600 organizations that suffered a data breach between March 2024 and February 2025, emphasizes that organizations must take AI security seriously to prevent costly data breaches.
The report indicates that supply-chain intrusion is the most common origin point for attacks on AI platforms, with hackers accessing the AI tool through compromised apps, APIs, or plug-ins. Weak authentication controls are a significant factor in these hacks, as 97% of organizations reporting AI-related breaches lacked adequate AI access restrictions [1].
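To make the access-restriction gap concrete, here is a minimal Python sketch of the deny-by-default credential check that the report implies most breached organizations lacked. The plug-in names, key sources, and the authorize_plugin helper are hypothetical illustrations, not details from the report.

```python
import hmac
import os

# Hypothetical allowlist of API keys issued to approved plug-ins; in
# practice these would live in a secrets manager, not the environment.
APPROVED_PLUGIN_KEYS = {
    "reporting-plugin": os.environ.get("REPORTING_PLUGIN_KEY", ""),
    "search-plugin": os.environ.get("SEARCH_PLUGIN_KEY", ""),
}

def authorize_plugin(plugin_name: str, presented_key: str) -> bool:
    """Deny-by-default check: a plug-in may call the AI tool only if it
    is on the allowlist and presents the exact key issued to it."""
    expected = APPROVED_PLUGIN_KEYS.get(plugin_name)
    if not expected or not presented_key:
        return False
    # Constant-time comparison avoids leaking key material via timing.
    return hmac.compare_digest(expected, presented_key)

if __name__ == "__main__":
    # A plug-in added through a compromised supply chain is not on the
    # allowlist, so it is rejected regardless of the key it presents.
    print(authorize_plugin("rogue-plugin", "stolen-key"))  # False
```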
The report also reveals that hackers find generative AI valuable for launching attacks: overall, 16% of data breaches involved attackers using AI, most often for AI-generated phishing (37% of those breaches) and deepfake impersonation attacks (35%).
Unmonitored AI tools contribute to costlier data breaches by introducing significant security vulnerabilities that hackers exploit. Organizations that experienced breaches involving shadow AI tools paid, on average, roughly $670,000 more in breach costs than firms with little or no shadow AI usage [1].
The report further highlights that a lack of proper AI governance policies is a common issue in organizations. Even among organizations with AI governance policies, only 34% regularly check their networks for unsanctioned tools, and fewer than half have an approval process for AI deployments.
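As an illustration of the kind of audit the report says only a third of organizations perform, the following Python sketch compares AI tools observed on the network against a sanctioned inventory. The tool names and the find_shadow_ai helper are invented for this example.

```python
# Hypothetical inventory of AI tools approved through a formal
# deployment process; real inventories would come from asset management.
SANCTIONED_AI_TOOLS = {"approved-llm-gateway", "internal-copilot"}

def find_shadow_ai(observed_tools):
    """Return tools seen in network or endpoint scans that were never
    approved, i.e. candidate shadow AI."""
    return sorted(set(observed_tools) - SANCTIONED_AI_TOOLS)

observed = [
    "approved-llm-gateway",
    "browser-chatbot-extension",
    "internal-copilot",
    "personal-genai-app",
]
print(find_shadow_ai(observed))
# ['browser-chatbot-extension', 'personal-genai-app']
```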
The findings underscore that failing to properly monitor and secure AI platforms enables attackers to exploit these tools as entry points, leading to more extensive, expensive breaches. Basic security practices, such as enforcing zero-trust principles and network segmentation around AI tools, are critical to mitigating these risks [1].
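The zero-trust and segmentation advice can be sketched in code, with the caveat that real deployments enforce this in network policy and identity infrastructure rather than application logic. The segment names and the may_reach_ai_segment policy below are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_authenticated: bool
    device_compliant: bool
    source_segment: str

# Hypothetical policy: only one network segment is routed to the AI
# tools segment, and every request must re-prove identity and device
# posture (zero trust: never trust, always verify).
ALLOWED_SOURCE_SEGMENTS = {"app-tier"}

def may_reach_ai_segment(req: Request) -> bool:
    # Segmentation: other segments simply have no path to the AI tools.
    if req.source_segment not in ALLOWED_SOURCE_SEGMENTS:
        return False
    # Zero trust: location alone is never sufficient.
    return req.user_authenticated and req.device_compliant

print(may_reach_ai_segment(Request(True, True, "app-tier")))    # True
print(may_reach_ai_segment(Request(True, True, "guest-wifi")))  # False
```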
In summary, the report suggests that unmonitored AI tools increase data breach costs by providing weakly defended entry points exploited through supply-chain intrusions and poor access control, leading to prolonged incident duration, broader data compromise, and higher operational disruption costs [1][2].
[1] IBM's annual Cost of a Data Breach Report.
[2] Ponemon Institute, 2025 Cost of a Data Breach Report.