In a new report, the Institute for Security and Technology, a nonprofit think tank, details artificial intelligence risk mitigation strategies that organizations can employ against threats in both the technical and policy realms.
“This report, the second in a two-part series, presents 39 risk mitigation strategies for avoiding institutional, procedural, and performance failures of AI systems,” according to the report, “Risk Mitigation Strategies for Safeguarding Against Future Failures,” released March 20.
“These strategies aim to enhance user trust in AI systems and maximize product utilization. AI builders and users, including AI labs, enterprises deploying AI systems, as well as state and local governments, can use and implement a selection of the 22 technical and 17 policy-oriented risk mitigation strategies presented in this report according to their needs and risk thresholds,” it says.
The report is the second in a two-volume set on “Navigating AI Compliance,” following the release of “Tracing Failure Patterns in History” in December.
“Through implementing these practices,” the think tank says, “organizations building and utilizing AI systems not only reduce regulatory risk exposure and build user trust for their product, but they could also attract top talent, gain a competitive edge, enhance their financial performance, and increase the lifetime value of their solutions.”
In its executive summary, the report highlights nine recommendations for "AI builders and users":
- Implement proportional compliance measures for high-impact AI applications. AI builders and users should consider which compliance measures are most appropriate for their work, especially when building or deploying AI systems in sensitive or high-impact areas.
- Acknowledge and address acceptable risks in AI development and deployment. Unintended consequences are not to be confused with compliance failure. Still, these unplanned effects should be acknowledged by developers, builders, and regulators as they consider thresholds of acceptable tolerance for the enhanced risks associated with exposed attack surfaces and features or functionalities of AI that are not yet thoroughly understood or anticipated.
- Prioritize data management and privacy practices to maintain user trust. Implementing proper data management and privacy-enhancing practices will protect user rights, maintain trust, and comply with data protection regulations.
- Implement robust cybersecurity controls for AI infrastructure protection and enhanced reliability. Cybersecurity controls, red-teaming, fail-safe mechanisms, and other techniques protect AI systems from attacks and strengthen their reliability in various scenarios (a minimal fail-safe wrapper is sketched after this list).
- Utilize safety and risk assessments to proactively mitigate AI harms. Safety and risk assessment procedures, such as incident reporting frameworks and AI safety benchmarks at different stages of the lifecycle, identify and mitigate possible harms before they occur, potentially guarding against both procedural and performance failures.
- Design and implement compliance and AI literacy training for staff. Training should be mandatory for all staff members involved in the AI supply chain, from data providers to model developers and deployers.
- Build trust by implementing transparency mechanisms. Transparency and interpretability mechanisms such as model cards, data cards, and disclosure frameworks are necessary to build user and stakeholder trust, facilitate accountability, and enable informed decision-making (a model card sketch appears after this list).
- Enhance AI explainability and disclosure frameworks to improve understanding of system behavior. Efforts to increase the explainability of AI systems, supplemented with disclosure frameworks for model evaluation, allow both builders and users to better understand the behavior patterns and outputs of these systems and potentially safeguard against performance failures.
- Employ strategies for non-discriminatory AI. Bias mitigation strategies across model training, data collection, and ongoing monitoring and maintenance, along with adversarial debiasing, can help prevent performance failures and discriminatory outcomes while promoting fairness in AI systems (a simple fairness-metric sketch appears below).
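The fail-safe mechanisms the report points to generally mean catching failed or unacceptable model outputs before they reach users. As a rough illustration only (the function names, validator, and fallback message below are assumptions, not details from the IST report), such a wrapper might look like this:

```python
import logging

logger = logging.getLogger("ai_failsafe")
FALLBACK_MESSAGE = "Unable to provide a reliable answer; escalating to a human reviewer."

def answer_with_failsafe(prompt: str, model_call, is_acceptable) -> str:
    """Call an AI model, but fall back to a safe default when the call errors
    out or the output fails a domain-specific validation check.

    `model_call` and `is_acceptable` are placeholders for a real model client
    and output validator.
    """
    try:
        answer = model_call(prompt)
    except Exception as exc:
        logger.warning("model call failed: %s", exc)  # feeds incident reporting
        return FALLBACK_MESSAGE
    if not is_acceptable(answer):
        logger.warning("output rejected by validator: %r", answer)
        return FALLBACK_MESSAGE
    return answer

# Toy usage with a stubbed model and a simple length check.
print(answer_with_failsafe(
    "Summarize the ticket",
    model_call=lambda p: "Short summary of the ticket.",
    is_acceptable=lambda a: 0 < len(a) < 500,
))
```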
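Model cards, one of the transparency mechanisms named in the report, are structured summaries of what a model is for, what it was trained on, and where it should not be used. A minimal card could be represented as simply as the sketch below; the fields and values are invented for illustration and are not drawn from the IST report.

```python
from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelCard:
    """Minimal, illustrative model card; fields are assumed, not from the IST report."""
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: list[str] = field(default_factory=list)
    training_data_summary: str = ""
    evaluation_metrics: dict[str, float] = field(default_factory=dict)
    known_limitations: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        # Serialize for publication alongside the model artifact.
        return json.dumps(asdict(self), indent=2)


# Hypothetical example values for a toy internal classifier.
card = ModelCard(
    model_name="support-ticket-classifier",
    version="1.2.0",
    intended_use="Routing internal IT support tickets by topic.",
    out_of_scope_uses=["Employment or credit decisions"],
    training_data_summary="Anonymized tickets from 2022-2024; PII removed before training.",
    evaluation_metrics={"accuracy": 0.91, "macro_f1": 0.88},
    known_limitations=["Performance degrades on tickets shorter than 10 words."],
)
print(card.to_json())
```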
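Bias mitigation typically starts with measuring a disparity. One common check, demographic parity difference (a general fairness metric, not something prescribed by the IST report), compares positive-prediction rates across groups; the toy numbers below are invented for illustration.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Gap in positive-prediction rates between the two groups in `group`.

    A value near 0 suggests the model selects both groups at similar rates;
    larger values flag a disparity worth investigating.
    """
    groups = np.unique(group)
    assert len(groups) == 2, "Sketch assumes exactly two groups."
    rate_a = y_pred[group == groups[0]].mean()
    rate_b = y_pred[group == groups[1]].mean()
    return abs(rate_a - rate_b)

# Toy example: binary predictions for applicants in groups "A" and "B".
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
grps = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(demographic_parity_difference(preds, grps))  # 0.5 -> a large gap
```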
The report aims to "provide AI builders with technical and policy-oriented risk mitigation strategies for avoiding compliance failures in the future." It also offers "technical and policy-oriented risk mitigation strategies for responsible deployment of AI systems" and seeks to "illuminate the various ways in which sound compliance practices can generate return on investment (ROI)."
IST has delved frequently into the nexus of AI and security, including in another December 2024 report that examined possible liability reform related to generative AI, and an October report addressing uses of AI in network defense.