
Cyber experts race to fix AI security oversights that compromise system integrity.

Businesses increasingly rely on generative AI as a crucial tool, leaving cybersecurity experts to face new challenges and potential threats.

Businesses increasingly rely on generative AI, transforming it from a novelty into an essential tool. However, this shift poses challenges for cybersecurity experts, who now grapple with new security concerns.


In the current business landscape, generative AI is no longer a novelty but a fundamental necessity, leading to headaches for cybersecurity professionals. According to Palo Alto Networks' latest report, the traffic from generative AI skyrocketed by over 890% in 2024, with usage primarily as a writing assistant (34%), conversational agent (29%), and enterprise search tool (11%).

Popular apps such as ChatGPT, Microsoft 365 Copilot, and Microsoft Power Apps are driving this rise, but the surge is creating significant security problems. Data loss prevention (DLP) incidents related to generative AI more than doubled in early 2025, making up 14% of all data security incidents across SaaS traffic.

Organizations have, on average, about 66 GenAI applications in use, roughly 10% of which are classified as high-risk. Shadow AI and unauthorized access to data pose major challenges, raising concerns about malicious links, malware, and unintended usage. The regulatory landscape is also rapidly evolving, with non-compliance carrying severe penalties.
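Discovering shadow AI usually starts with the traffic data an organization already has. As a minimal sketch of the idea (the log format, domain names, and keyword heuristic below are illustrative assumptions, not taken from any specific vendor's product), unsanctioned GenAI services can be surfaced by comparing domains seen in proxy logs against an approved allowlist:

```python
# Minimal sketch: flag GenAI domains in proxy logs that are not on the
# organization's sanctioned allowlist. Log format, domains, and the
# keyword heuristic are illustrative assumptions.

SANCTIONED = {"chat.openai.com", "copilot.microsoft.com"}
GENAI_KEYWORDS = ("ai", "gpt", "llm", "copilot")

def find_shadow_ai(proxy_log_lines):
    """Return unsanctioned GenAI domains observed in proxy logs."""
    seen = set()
    for line in proxy_log_lines:
        # Assume each log line looks like: "<user> <domain> <bytes>"
        parts = line.split()
        if len(parts) < 2:
            continue
        domain = parts[1].lower()
        if any(k in domain for k in GENAI_KEYWORDS) and domain not in SANCTIONED:
            seen.add(domain)
    return sorted(seen)

logs = [
    "alice chat.openai.com 1024",
    "bob unapproved-gpt.example.com 2048",
    "carol copilot.microsoft.com 512",
]
print(find_shadow_ai(logs))  # ['unapproved-gpt.example.com']
```

A real deployment would replace the keyword heuristic with a categorized domain feed, but the allowlist comparison is the core of the visibility problem the researchers describe.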

Researchers point to limited visibility into AI usage as a key problem: security teams struggle to monitor and control these tools across the organization. While the benefits of GenAI are clear, it creates new opportunities for data leakage, compliance failures, and security challenges.

Organizations need to take action: implement conditional access management, guard sensitive data from unauthorized access and leakage, use real-time content inspection, and adopt a zero trust security framework to identify and block sophisticated, evasive, stealthy malware, including threats within generative AI responses. Leveraging advanced security tools can help secure AI ecosystems.
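The real-time content inspection step above can be illustrated with a small sketch: scan outbound GenAI prompts for sensitive patterns before they leave the network. The patterns and blocking policy here are illustrative assumptions only; production DLP engines use far richer detectors.

```python
import re

# Illustrative sensitive-data detectors; a real DLP engine would use
# validated, tuned patterns rather than these simplified ones.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b[A-Za-z0-9_-]{32,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def inspect_prompt(prompt: str):
    """Return the sensitive-data categories found in an outbound prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def should_block(prompt: str) -> bool:
    """Hypothetical policy: block if any sensitive category is detected."""
    return bool(inspect_prompt(prompt))

print(inspect_prompt("Summarize this: card 4111 1111 1111 1111"))
# ['credit_card']
```

In practice this check would sit in a forward proxy or browser plug-in, alongside the conditional access and zero trust controls described above.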

As enterprises race to embrace AI, it's crucial they keep up with security measures to stay ahead of these emerging risks.

To stay informed about the latest trends and challenges in AI and cybersecurity, sign up for our daily newsletter and receive a free copy of our Future Focus 2025 report.

  • AI adoption drives productivity but poses new cybersecurity challenges
  • Public sector uneasy about AI security risks
  • Enterprises concerned about agentic AI risks, Gartner suggests addressing it with more AI agents

Key takeaways:

  • Rapid growth in generative AI usage and traffic
  • Increased DLP and data breach incidents
  • Proliferation of high-risk generative AI tools
  • Lack of clear AI policies
  • Main use cases and associated risks: writing assistant, conversational agent, and enterprise search
  • Recommended solutions: strengthen DLP, improve AI governance, audit and restrict risky applications, train employees, and leverage advanced security tools
  1. Given the surge in generative AI usage in enterprises, the rise in DLP incidents, and the proliferation of high-risk AI tools, promoting compliance with cybersecurity regulations and strengthening data loss prevention practices is critical to mitigating security risks.
  2. As concerns about agentic AI risks escalate, organizations should take a multi-faceted approach to AI security: train employees, audit and restrict risky applications, strengthen data governance, and leverage advanced security tools.
