Inside AI Policy

Security group flags AI as emerging tool for countering insider threats

By Charlie Mitchell / August 11, 2025

The Intelligence and National Security Alliance says in a new paper that artificial intelligence tools are increasingly being deployed across the Defense Industrial Base to counter “persistent” insider threats, and it emphasizes the need for “human oversight” in its recommendations for deployment.

“Insider threats, whether caused by negligence, malicious intent, or compromised credentials, remain a top concern for government, academia, and industry. With traditional tools struggling to keep pace, many organizations are turning to AI to strengthen detection and decision-making within their insider risk programs,” INSA said in an Aug. 6 release.

“The paper outlines how AI can bolster insider risk management (IRM) by analyzing behavioral patterns, detecting early signs of disengagement, and enabling more timely, informed responses. It highlights key factors for successful implementation, including cross-functional collaboration, strong human oversight, continuous model validation, and clear policies that respect both security and privacy,” INSA said.

The paper, “Recommendations when Using AI in Insider Risk Management,” was developed by INSA’s Insider Threat Subcommittee.

INSA “is a nonpartisan, nonprofit membership organization dedicated to advancing collaborative, public-private approaches to intelligence and national security priorities,” with more than 175 member organizations. The group’s board includes executives from major technology, defense and government services firms.

“Artificial Intelligence is influencing how practitioners approach IRM. Where traditional methods relied heavily on static rules and manual log reviews, AI introduces capabilities for analyzing large datasets, detecting subtle behavioral anomalies, and enabling faster responses,” the paper says.

“However,” it says, “these advances come with their own challenges: concerns over bias, privacy implications, operational complexity, and the need for rigorous governance.”
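
To make that contrast with static rules concrete, here is a minimal, hypothetical sketch of the shift the paper describes: a static rule checks one fixed condition, while an unsupervised model scores each record against a learned baseline across several behavioral features at once. The features, thresholds, and choice of scikit-learn’s IsolationForest are illustrative assumptions, not anything specified in the INSA paper.

```python
# Minimal sketch: scoring user-behavior records for anomalies with an
# unsupervised model, versus a single static rule. Feature names and
# thresholds are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-user daily features: [logins, MB downloaded, after-hours actions]
normal = rng.normal(loc=[8, 120, 1], scale=[2, 40, 1], size=(500, 3))
suspect = np.array([[9, 115, 14],     # heavy after-hours activity, normal volume
                    [7, 2500, 2]])    # large download spike
X = np.vstack([normal, suspect])

# Static rule: flag only when downloads exceed a fixed threshold.
static_flags = X[:, 1] > 1000        # misses the after-hours pattern entirely

# Learned baseline: flags records that deviate across all features jointly.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = model.decision_function(X)  # lower = more anomalous
ai_flags = model.predict(X) == -1

print("static rule flagged:", static_flags.sum())
print("anomaly model flagged:", ai_flags.sum())
print("lowest anomaly score:", scores.min().round(3))
```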

The paper says AI’s “potential applications” in insider risk management include detection, response and investigation, while highlighting a handful of “considerations and concerns”:

  • ACCURACY & FALSE POSITIVES: AI can reduce noise compared to static rules, but like any automated detection approach, it can still misinterpret behavior. Poorly designed or trained models may generate false positives, creating reputational or legal risks if actions are taken without sufficient human validation.
  • BIAS & OBJECTIVITY: AI systems reflect the data and assumptions on which they are built. Without careful design and oversight, models may reinforce existing biases or produce inconsistent results.
  • PRIVACY & CULTURE: Continuous behavioral monitoring raises legitimate concerns about employee privacy and trust. Overly intrusive practices can erode workplace morale if not balanced with clear policies, transparency, and proportional safeguards.
  • ADVERSARIAL EVASION: Sophisticated insiders may attempt to manipulate detection mechanisms or disguise activities to evade monitoring.
  • OPERATIONAL READINESS: Effective use of AI requires clean, well-integrated data sources and staff who can interpret outputs accurately. AI should complement human judgment, not replace it (a minimal triage sketch follows this list).
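
One way to read the paper’s points on human validation is as a routing rule: a model score opens a case for analyst review rather than triggering action on its own. The sketch below is an illustrative assumption of how that might look in code; the class, threshold, and queue names are hypothetical and not drawn from the paper.

```python
# Minimal sketch of "decision support, not definitive judgments": model
# output only queues a case for analyst review; no automated action is
# taken on the score alone. All names and thresholds are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Alert:
    user: str
    score: float                  # anomaly score from the detection model
    evidence: list[str] = field(default_factory=list)

REVIEW_THRESHOLD = 0.8            # hypothetical cutoff for analyst triage

def triage(alert: Alert) -> str:
    """Route alerts to a human queue; never act on the model alone."""
    if alert.score < REVIEW_THRESHOLD:
        return "log_only"         # retained for trend analysis, no action
    # High score: escalate with supporting evidence so a human can weigh
    # context (role change, approved project, etc.) before any response.
    return "analyst_review"

alert = Alert(user="u1042", score=0.91,
              evidence=["after-hours bulk export", "new USB device"])
print(triage(alert))              # -> analyst_review
```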

The paper also provides five recommendations:

  1. PROVIDE HUMAN OVERSIGHT: AI-generated insights should be treated as decision support, not definitive judgments. Human analysts remain essential for interpreting context and determining appropriate actions.
  2. ESTABLISH COLLABORATIVE GOVERNANCE: Successful AI-enhanced IRM programs need cross-functional governance (across IT, HR, Compliance, etc.) to ensure coordinated detection, risk scoring, and response.
  3. MONITOR DISENGAGEMENT: Organizations may wish to examine how to responsibly identify patterns of disengagement while respecting privacy and avoiding undue assumptions about intent.
  4. MAINTAIN & VALIDATE MODELS: AI models require ongoing tuning to remain effective as threats and behaviors evolve. Regular testing and validation can help maintain relevance and accuracy (see the sketch after this list).
  5. PRIORITIZE ETHICAL USE & TRANSPARENCY: Clear policies on monitoring practices, data handling, and employee communications are essential to maintain trust and comply with legal and ethical standards.
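
Recommendation 4 implies a recurring feedback loop: compare a deployed model’s recent flags against analyst-adjudicated outcomes and retune when accuracy degrades. The sketch below illustrates one such check under assumed metrics and thresholds; nothing in it comes from the INSA paper.

```python
# Minimal sketch of periodic model validation: score the model against
# recent, analyst-labeled alerts and flag degradation for retuning.
# Metric choice and thresholds are hypothetical.
from sklearn.metrics import precision_score, recall_score

BASELINE_PRECISION = 0.75   # hypothetical precision at initial deployment
ALLOWED_DROP = 0.10         # retune if precision falls this far below baseline

def validate(y_true, y_pred) -> dict:
    """Score the model on recent analyst-adjudicated alerts."""
    precision = precision_score(y_true, y_pred, zero_division=0)
    recall = recall_score(y_true, y_pred, zero_division=0)
    needs_retuning = precision < BASELINE_PRECISION - ALLOWED_DROP
    return {"precision": precision, "recall": recall,
            "needs_retuning": needs_retuning}

# Last month's alerts: 1 = analyst confirmed an insider-risk concern.
y_true = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
y_pred = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]   # model's flags on the same cases
print(validate(y_true, y_pred))
```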

INSA says, “Ultimately, AI should be seen as a multiplier, amplifying your team’s ability to detect, interpret, and respond to insider risk. The organizations that will thrive are those that combine data, people, and ethical intelligence: deliberately, thoughtfully, and decisively.”