Inside AI Policy


Software group says artificial intelligence regulation must promote uses for cyber defense

By Charlie Mitchell  / October 11, 2023

Artificial intelligence is increasingly important to countering sophisticated cyber attacks, according to BSA | The Software Alliance, which in a new paper urges governments worldwide to ensure regulatory efforts encourage ongoing innovation in AI-enabled cyber defense tools.

“BSA members are developing AI-enabled tools that help to better counter malicious actors, and better protect customers and citizens. As policymakers worldwide form rules governing the development and use of artificial intelligence, a risk-based approach will help to promote rather than inhibit the use of AI for cybersecurity,” said BSA director of policy Henry Young.

The two-page BSA paper, “AI for Cybersecurity: Ensuring Cyber Defenders Can Leverage AI to Protect Customers and Citizens,” was released Oct. 10 and discusses cybersecurity needs that AI can help address and policy approaches to help promote such uses.

BSA said the key elements for promoting AI use in cyber include:

  • Risk-based regulation: Policymakers should focus regulation on high-risk AI, and require developers and deployers of high-risk AI to establish risk management programs and conduct impact assessments to mitigate risks.
  • Enable cybersecurity innovation: Policymakers should promote the use of AI as a key cyber defense tool.
  • Protect data transfers: The ability to move data across borders helps analyze normal behavior, detect malicious behavior, and deliver the best cybersecurity outcomes.
  • Harmonize laws and policies: Policymakers in like-minded countries should coordinate efforts to ensure laws and policies advance the use of AI for cybersecurity, including through the use of internationally recognized standards.

The paper says “malicious actors are on the march,” citing adversaries’ uses of AI to enhance their social engineering attacks, create malware and more.

Henry Young, Director, Policy, BSA | The Software Alliance

BSA says “enterprise software companies,” in turn, are using AI to develop more secure code, guard against malware and detect vulnerabilities, among other uses.

“Policymakers should promote using AI to bolster cybersecurity, which can be done while ensuring appropriate regulation around its high-risk uses,” BSA says in the paper. “As the importance of AI for cybersecurity increases, it is also critical that laws and policies improve security, and security is not used as a justification for other political or protectionist objectives.”

According to BSA, “Policymakers should require developers and deployers of AI intended for high-risk uses to establish risk management programs and conduct impact assessments to mitigate those risks.”

Further, the group says, “Policymakers should work with like-minded countries in a globally coordinated effort to ensure laws and policies advance the ability for cyber defenders to use AI for cybersecurity. This coordination should aim to harmonize rules across policy domains, such as cybersecurity and privacy, and leverage or develop internationally recognized standards where appropriate.”

BSA has promoted its approach in congressional testimony, urging lawmakers to embrace a federal requirement for companies to perform impact assessments in “high-risk” use cases and to maintain AI risk management programs, and to require federal agencies to use the NIST AI risk management framework in areas like procurement.

BSA also has been focusing on the European Union’s AI efforts, as well as on state-level work to regulate AI.

In the U.S., the legislative process is just beginning to unfold in Congress with lawmakers of both parties supporting some level of AI regulation, but with little agreement yet on what that might be. Sen. John Hickenlooper (D-CO), chair of a pivotal Commerce subcommittee on AI issues, has emerged as a key player in trying to pull together a consensus approach.

He says regulation is essential but needs to be crafted in partnership with industry and designed to ensure the U.S. maintains its global edge on artificial intelligence. Hickenlooper says he has “the most anxiety” about cybersecurity issues related to AI but that “constant engagement” with stakeholders “will allow us to build the standards” necessary to secure AI systems and counter harms.