Monday,
February 09, 2026

🥖 Palate Cleanser

It feels like every week more AI-related stories are creeping into ASD, which is both annoying and exciting. This week, it's definitely more the exciting kind. The lede in Chef's selections is a fascinating AWS attack case study showing an AI-assisted intrusion achieving admin access in eight minutes. It's full of juicy technical details and demonstrates how a well-documented, standardized API can empower attackers even more than defenders.

You know what, though? All these technical attacks are making attackers tired and unsatisfied. Now they are posing as recruiters, contacting people with seemingly unmissable employment opportunities, getting them to do technical interviews, which, of course, require running the attacker's code to complete, and boom, taking over entire corporate cloud infrastructures. It's evil and genius at the same time. Stay safe out there, friends.

Have feedback about AWS Security Digest? Tell us here. This issue is also available to share online.

📋 Chef's selections

  • AI-assisted cloud intrusion achieves admin access in 8 minutes by Alessandro Brucato and Michael Clark

    Is it ethical to admire the bad guys? These guys caught an attacker going from leaked creds in a public S3 bucket to full admin in under eight minutes, leading them to conclude the attack was AI-driven. The attacker injected Python into an existing Lambda function to mint access keys for a privileged user, created a backdoor admin account, then pivoted to Bedrock LLMjacking across nine foundation models, including Claude and DeepSeek R1. The giveaways that an LLM was doing the heavy lifting? Serbian comments in the Lambda code, session names like "claude-session" and "pwned", and hallucinated AWS account IDs in role assumption attempts. IMHO, AWS APIs are primed for AI-driven hax0ring.
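Those giveaways translate naturally into CloudTrail detections. Here's a minimal sketch (not the researchers' actual tooling; the session names come from the write-up, while the field names and account inventory are illustrative assumptions) that flags AssumeRole attempts with LLM-flavored session names or role ARNs pointing at hallucinated account IDs:

```python
import re

# Session names observed in the case study; extend with your own watchlist.
SUSPICIOUS_SESSION_NAMES = {"claude-session", "pwned"}

# Your real account inventory would go here (illustrative placeholder).
KNOWN_ACCOUNT_IDS = {"111111111111"}

def flag_assume_role(event: dict) -> list[str]:
    """Return findings for a single (simplified) AssumeRole event."""
    findings = []
    session = event.get("roleSessionName", "")
    if session.lower() in SUSPICIOUS_SESSION_NAMES:
        findings.append(f"suspicious session name: {session}")
    arn = event.get("roleArn", "")
    m = re.match(r"arn:aws:iam::(\d{12}):role/", arn)
    if m and m.group(1) not in KNOWN_ACCOUNT_IDS:
        # An LLM guessing at role ARNs produces account IDs you don't own.
        findings.append(f"role ARN targets unknown account: {m.group(1)}")
    return findings
```

Feed it simplified AssumeRole events and anything with two findings deserves an immediate look.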

  • Novel Technique to Detect Cloud Threat Actor Operations by Nathaniel Quist

    Nathaniel's team built threat actor fingerprints by mapping cloud security alerts to MITRE ATT&CK techniques, then compared Muddled Libra (the Scattered Spider cybercrime crew) against Silk Typhoon (China-nexus nation-state, formerly HAFNIUM). Despite both targeting cloud environments, they share only three techniques. The practical takeaway was that 10+ unique alert types firing within 30 days is your signal to dig deeper.
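The fingerprinting idea boils down to set arithmetic over ATT&CK technique IDs plus an alert-volume trigger. A toy sketch (the technique IDs in the test are placeholders, not the actual sets from the research):

```python
def shared_techniques(actor_a: set[str], actor_b: set[str]) -> set[str]:
    """Overlap between two actors' ATT&CK technique fingerprints."""
    return actor_a & actor_b

def warrants_deeper_look(unique_alert_types_in_30_days: int) -> bool:
    """The article's rule of thumb: 10+ unique alert types in 30 days."""
    return unique_alert_types_in_30_days >= 10
```

Small overlaps between fingerprints are exactly what makes them useful for attribution: Muddled Libra and Silk Typhoon intersect on only three techniques despite both living in the cloud.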

  • Testing Access to AWS Resources Without Angering the People That Pay the Bills by Daniel Grzelak

    Policy analysis says a resource is exposed, but is it really? Daniel (it me) walks through empirically verifying access without reading data or mutating state. The trick is crafting requests that fail validation after passing authorization, proving permission without executing anything. The open-source sns-buster tool demonstrates the technique across 14 SNS API actions. There's an update coming with a much more reliable and powerful technique, thanks to Sam Cox.
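The core trick is that (per the article) authorization is evaluated before request validation, so the error class of a deliberately malformed request reveals whether you were authorized without the action ever executing. A hedged sketch of the interpretation step (a real probe would send the malformed request via boto3; the error-code mapping below is illustrative, not an exhaustive list):

```python
def classify_probe_error(error_code: str) -> str:
    """Interpret the error returned by an intentionally invalid SNS request.

    Because authorization is checked before validation, a validation error
    proves the caller had permission, with no data read and no state changed.
    """
    if error_code == "AuthorizationError":
        return "access denied"       # authz failed first: no permission
    if error_code in ("InvalidParameter", "ValidationError"):
        return "access allowed"      # authz passed, then validation failed
    return "inconclusive"            # throttling, service errors, etc.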
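The core trick is that (per the article) authorization is evaluated before request validation, so the error class of a deliberately malformed request reveals whether you were authorized without the action ever executing. A hedged sketch of the interpretation step (a real probe would send the malformed request via boto3; the error-code mapping below is illustrative, not an exhaustive list):

```python
def classify_probe_error(error_code: str) -> str:
    """Interpret the error returned by an intentionally invalid SNS request.

    Because authorization is checked before validation, a validation error
    proves the caller had permission, with no data read and no state changed.
    """
    if error_code == "AuthorizationError":
        return "access denied"       # authz failed first: no permission
    if error_code in ("InvalidParameter", "ValidationError"):
        return "access allowed"      # authz passed, then validation failed
    return "inconclusive"            # throttling, service errors, etc.
```

sns-buster applies this pattern across 14 SNS actions; the same shape generalizes to any API where validation happens after the authorization decision.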

🥗 AWS security blogs

๐Ÿ› Reddit threads on r/aws


💸 Sponsor shoutout


Meet Pleri: your AI-powered cloud security teammate. She's not a chatbot. Pleri proactively finds meaningful security work and fixes issues before they become problems.

Learn more about Pleri and see her in action.


🤖 Dessert

Dessert is made by robots, for those who enjoy the industrial content.

๐Ÿง IAM permission changes

๐Ÿช API changes

๐Ÿน IAM managed policy changes

☕ CloudFormation resource changes

🎮 Amazon Linux vulnerabilities

📺 AWS security bulletins

🚬 Security documentation changes

No bulletins this week.
