The Shift to Continuous AI Model Security and Pen Testing
Aaron Shilts of NetSPI on Security Challenges, Threats of AI Models

The widespread adoption of AI models has brought about a paradigm shift in enterprise security, along with the challenge of securing proprietary data within those models. Adversaries are exploiting vulnerabilities in AI models, employing techniques such as "jailbreaking" to extract or manipulate proprietary information, said Aaron Shilts, president and CEO of NetSPI.
Jailbreaking could pose serious threats, particularly in sensitive industries such as healthcare, where patient records and health data must remain confidential, he said.
"There are different techniques that bad actors can use to get the wrong information out and that leads to a data breach. Another example is using an AI model to generate something nefarious that you don't want it to create - for instance, information on weapons or making drugs and things like that," Shilts said. "You don't necessarily want an AI model to inform a malicious actor on what they could do. So putting guardrails in there is important."
In this video interview with Information Security Media Group at RSA Conference 2024, Shilts also discussed:
- The shortage of skilled professionals in AI security;
- The need for continuous security assessments over one-time security audits;
- The importance of asset discovery and full visibility into IT infrastructure to prevent data breaches.
In his more than 20 years of industry leadership, Shilts has built innovative and high-performing organizations. Prior to joining NetSPI, he was the executive vice president of worldwide services at Optiv, where he led one of the industry's largest mergers.