Experts Find Flaw in Replicate AI Service Exposing Customers' Models and Data

Cybersecurity researchers have uncovered a critical security flaw in the artificial intelligence (AI)-as-a-service provider Replicate that could have allowed threat actors to gain access to proprietary AI models and sensitive information.

"Exploitation of this vulnerability would have allowed unauthorized access to the AI prompts and results of all Replicate's platform customers," cloud security firm Wiz said in a report published this week.

The issue stems from the fact that AI models are typically packaged in formats that allow arbitrary code execution, which an attacker could weaponize to carry out cross-tenant attacks by means of a malicious model.
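The report does not single out a specific format, but Python's pickle, long a common way to serialize ML models, is the textbook example of the problem: a class can define `__reduce__` to make the deserializer call any function the attacker chooses, so merely loading the "model" runs the payload. A minimal, benign sketch:

```python
import pickle

# A "model" that abuses pickle's __reduce__ hook: on load, pickle
# calls the returned function with the given arguments. The payload
# here is a harmless eval; a real attacker would spawn a shell.
class MaliciousModel:
    def __reduce__(self):
        return (eval, ("40 + 2",))

blob = pickle.dumps(MaliciousModel())  # the booby-trapped "model file"
result = pickle.loads(blob)            # merely loading it runs the payload
print(result)  # -> 42
```

This is why loading model files from untrusted sources is equivalent to running untrusted code.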

Replicate uses an open-source tool called Cog to containerize and package machine learning models, which can then be deployed either in a self-hosted environment or to Replicate.
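For context, a Cog package is driven by a `cog.yaml` file that tells the tool how to build the container and which predictor class serves requests; an illustrative example (the package versions here are placeholders):

```yaml
# cog.yaml -- build recipe for the model container (versions are placeholders)
build:
  python_version: "3.11"
  python_packages:
    - "torch==2.1.0"
# Entry point: the Predictor class defined in predict.py handles predictions
predict: "predict.py:Predictor"
```

Because the container bundles arbitrary code alongside the model weights, whoever runs it is implicitly trusting its author.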

Wiz said it created a rogue Cog container and uploaded it to Replicate, ultimately using it to achieve remote code execution on the service's infrastructure with elevated privileges.

"We suspect this code-execution technique is a pattern, where companies and organizations run AI models from untrusted sources, even though these models are code that could potentially be malicious," security researchers Shir Tamari and Sagi Tzadik said.

The attack technique devised by the company then leveraged an already-established TCP connection associated with a Redis server instance inside a Kubernetes cluster hosted on the Google Cloud Platform to inject arbitrary commands.

What's more, with the centralized Redis server acting as a queue for multiple customers' requests and their responses, the researchers found it could be abused to facilitate cross-tenant attacks by tampering with the process to insert rogue tasks that could affect the results of other customers' models.
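Wiz's write-up describes the shared queue only at a high level; as a hypothetical sketch (the queue semantics, tenant names, and job fields below are all invented for illustration, with a Python deque standing in for Redis's LPUSH/RPOP on a shared list), tampering could look like this:

```python
import json
from collections import deque

# Hypothetical stand-in for a shared Redis queue: one list holds
# every tenant's pending prediction jobs (LPUSH/RPOP semantics).
queue = deque()

def lpush(job: dict) -> None:
    queue.appendleft(json.dumps(job))

def rpop() -> dict:
    return json.loads(queue.pop())

# A pending prediction job queued on behalf of another tenant.
lpush({"tenant": "victim", "model": "victim/private-model",
       "prompt": "summarize our quarterly numbers"})

# An attacker with code execution inside the cluster reaches the same
# Redis instance, so they can pop the victim's job, alter it, and
# re-queue it -- the worker then runs the victim's prompt through a
# model the attacker controls.
job = rpop()
job["model"] = "attacker/poisoned-model"
lpush(job)

tampered = rpop()
print(tampered["model"])  # -> attacker/poisoned-model
```

The victim never sees the swap: their prompt and the (now attacker-controlled) response flow through the queue exactly as a legitimate job would.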

Such rogue modifications not only jeopardize the integrity of the AI models, but also pose significant risks to the accuracy and reliability of AI-driven outputs.

"An attacker could have queried the private AI models of customers, potentially exposing proprietary knowledge or sensitive data involved in the model training process," the researchers added. "Additionally, intercepting prompts could have exposed sensitive data, including personally identifiable information (PII)."

The flaw, which was responsibly disclosed in January 2024, has since been addressed by Replicate. There is no evidence that the vulnerability was exploited in the wild to compromise customer data.

The disclosure comes a little over a month after Wiz detailed now-patched risks in platforms like Hugging Face that could allow threat actors to escalate privileges, gain cross-tenant access to other customers' models, and even take over the continuous integration and continuous deployment (CI/CD) pipelines.

"Malicious models represent a major risk to AI systems, especially for AI-as-a-service providers because attackers may leverage these models to perform cross-tenant attacks," the researchers said.

"The potential impact is devastating, as attackers may be able to access the millions of private AI models and apps stored within AI-as-a-service providers."
