
Popular AI Tools Contain Critical, Sometimes Unpatched, Bugs

Hackers Can Target Vulnerable Infrastructure to Take Over AI Models, Host Systems
Researchers disclosed critical vulnerabilities in the technical infrastructure used to build artificial intelligence models. (Image: Shutterstock)

Nearly a dozen critical vulnerabilities in the technical infrastructure that companies use to build artificial intelligence models could allow hackers to access the tools and use them as gateways into the systems in which they are housed.


Six of the 15 disclosed vulnerabilities remain unpatched.

The vulnerabilities are in tools that are downloaded hundreds of thousands to millions of times per month and are used to host, deploy and share large language models and machine learning platforms, said machine learning security firm Protect AI.

"Many OSS tools, frameworks and artifacts come out of the box with vulnerabilities that can lead directly to complete system takeovers such as unauthenticated remote code execution or local file inclusion vulnerabilities. What does this mean for you? You are likely at risk of theft of models, data and credentials," the company said on Thursday.

Among the affected platforms are Ray, which is used to train machine learning models; MLflow, a machine learning life cycle platform; ModelDB, a machine learning model management platform; and H2O-3, an open-source machine learning platform. These four platforms account for 12 of the 15 disclosed bugs.
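For teams running any of these tools, a reasonable first step is simply to inventory what is installed and at which version. The snippet below is a minimal sketch using Python's standard importlib.metadata; the PyPI distribution names are assumptions (ModelDB in particular may be packaged under a different name), and no patched version numbers are asserted here because the advisory details are not reproduced in this article.

# Minimal local inventory of the tools named above. The package names are
# assumed PyPI distribution names -- verify them, and the affected version
# ranges, against each vendor's advisory.
from importlib import metadata

PACKAGES = ["ray", "mlflow", "modeldb", "h2o"]

for name in PACKAGES:
    try:
        version = metadata.version(name)
        print(f"{name} {version} is installed -- compare against the advisory")
    except metadata.PackageNotFoundError:
        print(f"{name} is not installed")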

The vulnerabilities could allow attackers to gain unauthorized access to the AI models, steal credentials and data, and take over the servers hosting the models.

In the past year, organizations and their adversaries have scrambled to deploy AI, especially generative AI, into their operations.

"The AI industry has a security problem, and it's not in the prompts you type into chatbots," Protect AI researchers Dan McInerney and Marcello Salvati said.

Protect AI said it disclosed the vulnerabilities to vendors 45 days before publishing its advisory on Thursday and has shared workarounds for the six unpatched bugs, four of which are rated critical.


About the Author

Rashmi Ramesh

Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.



