Title: Three Critical AI and Cybersecurity Discussions Business Leaders Often Neglect
Joseph Ours serves as the head of the AI Strategy Practice at Centric Consulting. Experts predict that by 2028, companies will spend more than $30 billion to combat sophisticated information-based threats, pulling from both cybersecurity and marketing budgets. Despite this, many organizations remain unprepared for the evolving risk of AI-driven deception.
Understanding the Threat Landscape
Although misinformation, disinformation, and malinformation are often used interchangeably, each poses a distinct threat. Misinformation is incorrect information shared without ill intent, often out of a lack of knowledge or understanding. Disinformation is deliberately false information designed to cause harm. Malinformation is rooted in truth but stripped of context or manipulated to mislead. AI-generated malinformation, such as deepfakes, poses unique challenges because it can deceive both AI systems and human judgment.
Collectively referred to as "weaponized information," misinformation, disinformation, and malinformation are gaining traction, increasingly causing harm through the use of advanced AI tools. As leaders focus on conventional cybersecurity threats, critical discussions about information security are being overlooked.
The Accessibility of Information Warfare
Most business leaders underestimate the ease with which information threats can now be created. While traditional disinformation required state-level resources and coordination, today's threats can be generated by nearly anyone with a computer and AI tools. This accessibility means threats can originate from various sources such as disgruntled employees, competitors, or opportunistic individuals, not just sophisticated state actors or cybercrime rings.
The impact can be devastating. An Indian restaurant falsely accused on Facebook of serving human meat lost half its revenue. A furniture retailer falsely linked to a child trafficking ring suffered lasting reputational damage. And a Canadian couple used AI-generated content to inflate stock prices and turn a profit.
The Legal Wild West
The second overlooked conversation is the legal void concerning these technologies. With limited regulations and no global framework in place, businesses operate in a regulatory vacuum, making it challenging to create clear policies, enforce consequences, protect themselves from fraud, and hold bad actors accountable.
The Expanding Gray Area
One of the most uncomfortable conversations to address is the ethical confusion surrounding these tools, which blur lines that were once clear-cut violations. These threats are effective because they exploit human psychology: people are 70% more likely to share falsehoods than truths.
In response to these threats, organizations should take several steps:
- Adopt zero-trust principles: Apply cybersecurity's zero-trust framework to information verification, ensuring that nothing is trusted without validation, from candidate responses to voice authentication attempts.
- Update HR protocols: Develop new verification methods in areas like hiring and internal communications.
- Enhance authentication systems: Implement multifactor authentication, combining multiple verification methods for increased security.
- Train for skepticism: Invest in employee training, focusing on AI literacy, identifying information threats, and verifying sources. Develop organizational awareness of confirmation bias and encourage appropriate skepticism.
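The zero-trust recommendation above can be sketched as a simple scoring check: no single signal, not even a voice match, is trusted on its own, and a request is approved only when enough independent verifications pass. The signal names, weights, and threshold below are illustrative assumptions, not an actual framework or product.

```python
from dataclasses import dataclass

@dataclass
class VerificationSignal:
    name: str
    passed: bool
    weight: float  # relative trust contribution of this check

def zero_trust_decision(signals, threshold=1.0):
    """Trust nothing by default; approve only when enough
    independent verification signals pass in combination."""
    score = sum(s.weight for s in signals if s.passed)
    return score >= threshold

# Hypothetical example: verifying a voice-call request to transfer funds.
# A deepfaked voice can pass the biometric check, so it is weighted such
# that it can never grant approval by itself.
signals = [
    VerificationSignal("voice_match", True, 0.4),
    VerificationSignal("callback_on_known_number", True, 0.4),
    VerificationSignal("manager_approval", False, 0.4),
]

print(zero_trust_decision(signals))  # prints False: 0.8 < 1.0
```

The design point is that each weight is below the threshold, so even a convincing deepfake must still clear at least two independent checks before anything is trusted.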
To combat information threats, a collective effort is required, involving regulatory changes, collaboration with industry partners, and individual accountability. While cyber threats will always evolve faster than our ability to combat them, businesses that establish a foundation of informed skepticism will be better equipped to face the challenges of tomorrow.
Joseph Ours and his team at Centric Consulting help organizations apply zero-trust principles to information verification and defend against misinformation and disinformation attacks. Because AI tools now allow nearly anyone to generate malinformation, Ours stresses educating employees to build a culture of skepticism and fact-checking that mitigates the risks.