Introduction
If an AWS EC2 instance (or other AWS service) is configured with an IAM role, and an attacker can access the metadata service at 169.254.169.254 from that instance, the attacker can use the credentials available there to advance their attack further. In this post I'll cover the options defenders have against this problem.
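To make the attack concrete, here is a minimal sketch of what that credential retrieval looks like on an affected instance (the role name in the second request is hypothetical; the first request returns the actual name):

# List the role(s) attached to this instance:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Retrieve temporary credentials for that role (role name is an example):
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/my-ec2-role

The second request returns JSON containing an AccessKeyId, SecretAccessKey, and session Token, which can be used from anywhere to make AWS API calls as that role.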
History
In 2006, Amazon launched the EC2 service. In June 2012, Amazon launched the ability for EC2 instances to have IAM roles, so that instead of leaving AWS keys sitting around, the EC2 instance could query the metadata service at 169.254.169.254 (a link-local address, as defined by RFC 3927, with Amazon's own implementation behind it) in order to obtain credentials for the AWS API calls that the EC2 was given permission to perform.
In November 2013, at DeepSec in Vienna, Austria, Andres Riancho presented "Pivoting In Amazon Clouds", where he showed how he abused this feature in a pentest. If an attacker could get the EC2 to return the results of that metadata service to them, they would be able to (from their laptop anywhere in the world) take on the privileges of that EC2 and make calls against the AWS APIs. Andres gave roughly the same talk at Black Hat US 2014, which is easier to watch (video and paper).
A more detailed discussion of abusing the metadata service came shortly thereafter from Nicolas Gregoire, who abused it in 2014 to obtain a bug bounty from Prezi. Prezi's write-up is here, and Nicolas gave a presentation titled "Lurking in the clouds" at Insomni'hack 2014. IAM roles were not abused; instead, Nicolas gained access to some userdata hosted on this service that was meant to be private to the EC2.
Earlier this year, to help inform others about this and other common AWS issues, I released flAWS.cloud, where this issue features as one of the levels. A number of blog posts have also been released about this issue, such as this recent one from NCC Group. The best defense is to know about the issue so you can avoid bugs in your code that allow access to this service (which has been the focus of many articles), but this post will discuss a range of options for further mitigating the problem.
Options for defense
Limit the AWS privileges granted
The most important thing you can do is limit the privileges granted by this role, so that if the credentials are compromised, the attacker won't be able to do much. You should also aim to keep the attacker blind: if you've restricted the IAM role to only allow GetObject permissions on a specific bucket, for example, and those credentials are compromised, the attacker will not even know which bucket they have access to.
This is an important distinction between the results of a pentest and a likely real attack. In a pentest, when the consultant gets these credentials, they'll just look at the code they were given to identify what the key is used for and what resources it grants access to. In a real attack, the attacker is likely going to be blind (unless they have also compromised the company's source code), and will not know what actions the key grants or what resources it applies to.
The first thing an attacker is likely to do when they get an AWS credential is run something like dump_account_data.sh from Daniel Grzelak in order to see what is in the account and what they might do. If they don't know what they have access to, the situation is not much different from having no access at all.
You can use logs from CloudTrail to identify what API calls a service has historically made, and against what resources, in order to better scope its IAM privileges.
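For example, something like the following pulls recent events for review (a sketch: the instance ID is a placeholder, and this assumes the role's sessions appear with the instance ID as the username, which is the usual session name for EC2 instance roles):

# Look up recent API calls made by a particular instance's role session:
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=Username,AttributeValue=i-0123456789abcdef0 \
    --max-results 50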
Also, be sure you have alerting on your CloudTrail logs to detect "Access Denied" errors and services making API calls they shouldn't! How to set that up is a story for another day.
Segment your service
Sometimes the best privileges for a service are no privileges. One security strategy is to break up your service into multiple parts. One part (Service A) might run on an EC2 with no AWS privileges and would perform the riskier/more complex/less trusted work (such as using ImageMagick). Another part (Service B) would run on an EC2 that does have the AWS privileges. Using this strategy, Service B could enforce additional restrictions, checks, rate limiting, auditing, etc. on the requests made by Service A, so even if Service A is fully compromised (ex. RCE), the attacker could still be limited in what they can do. Blast radius reduction FTW!
Deny access to the metadata service
Other IaaS platforms have additional requirements for clients hitting their metadata service. For example, Google's GCE requires clients to include the HTTP header Metadata-Flavor: Google, which attackers are less likely to be able to include in their requests (link).
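For example, a request to GCE's metadata service only succeeds when that header is present:

# Without the Metadata-Flavor header, GCE's metadata service refuses the request:
curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/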
AWS does not offer an option like that, but if you do not use the metadata service at all, you can set iptables rules to deny access to it, for example:
iptables -A OUTPUT -m owner ! --uid-owner root -d 169.254.169.254 -j DROP
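With that rule in place, a metadata request from any non-root user should hang and time out rather than return data:

# As a non-root user, this should time out instead of returning metadata:
curl --max-time 3 http://169.254.169.254/latest/meta-data/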
If you do use IAM roles, then you can't block access to this IP entirely, because the session token is updated periodically. One possibility would be to use lyft/metadataproxy and add additional checks, such as a "Metadata-Flavor" header check; however, I haven't heard of the metadata service being proxied for this security purpose. A simpler partial measure, reusing the owner matching from the rule above, is described next.
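The idea is to drop metadata requests only from the account that runs the exposed service, so the processes that legitimately refresh the role's credentials (running as other users) are unaffected. The user name below is an assumption:

# Drop metadata requests from the web application's account only;
# root and other users can still refresh the role's session credentials.
iptables -A OUTPUT -m owner --uid-owner www-data -d 169.254.169.254 -j DROP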
If, however, part of your service calls out to another process (again, maybe it's ImageMagick), you could sandbox that second process.
Restricting access based on Condition Keys
At Black Hat Europe 2014, Erik Peterson presented "Bringing a Machete to the Amazon", a must-watch talk on AWS issues, in which he mentions how you can lock IAM roles down to specific IPs.
From the AWS docs on Condition Keys, there are only three that you'll find useful for our goal here: aws:SourceIp, aws:SourceVpc, and aws:UserAgent. The idea is that even if the attacker gets access to the AWS credentials, they won't be able to use them because of the additional restrictions placed on them.
Using aws:SourceIp
The AWS docs provide guidance on restricting access to specific IPs (link). You can use the public IP of public instances, or the NAT gateway IP for instances in a private subnet. A lot of people are confused and frustrated that they can't just restrict the IP to the private range they use, such as 10.0.0.0/8. The reason is that AWS services are accessed over the public Internet: every time you make a request to the AWS API, the request has to leave your network, so the source IP that AWS sees is whatever the last hop out of your network is (such as an internal proxy, a NAT gateway, or just the public IP of the instance).
AWS provides a condition operator for checking IP addresses, NotIpAddress, which matches addresses using CIDR notation.
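As a minimal sketch (the NAT gateway IP below is a placeholder), a policy using it denies everything when the request comes from anywhere else:

{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": {
      "NotIpAddress": {
        "aws:SourceIp": "203.0.113.10/32"
      }
    }
  }
}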
One risk of this method is that you need to keep the IP addresses you specify up to date. If you're scaling public instances up and down regularly, you may need to either acquire a lot of Elastic IPs to hand out or have some sort of Lambda function update your policies whenever new instances start. Ideally, though, nearly all your instances should be behind a NAT gateway.
Using aws:SourceVpc
My previous point about requests having to venture out across the public Internet wasn't entirely true. For requests to S3 or DynamoDB, you can set up VPC endpoints. In addition to some other benefits (ex. bandwidth related), these allow you to limit IAM policies to specific VPCs or VPC endpoints.
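A sketch of such a policy (the VPC ID is a placeholder) denies S3 calls that don't arrive from the expected VPC:

{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Deny",
    "Action": "s3:*",
    "Resource": "*",
    "Condition": {
      "StringNotEquals": {
        "aws:SourceVpc": "vpc-0123abcd"
      }
    }
  }
}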
Using aws:UserAgent
Restricting access based on user agent is in some ways weaker than using the source IP, because once an attacker knows the user agent string used, they can simply modify their own to match it. However, discovering the user agent is likely to additionally require compromising the source code of the service. At first, you might think that if an attacker can use an SSRF attack to read the metadata service, they'll just record the user agent string used there as well, but it's likely that the library querying the metadata service (ex. boto) and whatever is making the other network requests are different libraries with different user agent strings.
Getting your user agent
I created an EC2 instance with an IAM role that allows it to list and get the objects in an S3 bucket called summitroutetest. For an actual service, you likely don't even need the ability to list the objects.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::summitroutetest"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::summitroutetest/*"
      ]
    }
  ]
}
I then SSHed into the EC2 instance in two different sessions. In one session, I ran a simple local-only listening service:
sudo nc -l 127.0.0.1 80
In the other session, I made a request with the AWS CLI, setting the endpoint to localhost:
aws --endpoint-url http://127.0.0.1:80 s3api list-objects --bucket summitroutetest
In the original session, I saw:
GET /summitroutetest?encoding-type=url HTTP/1.1
Host: 127.0.0.1:80
Accept-Encoding: identity
Date: Sun, 13 Aug 2017 03:42:16 GMT
x-amz-security-token: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
User-Agent: aws-cli/1.11.83 Python/2.7.12 Linux/4.9.32-15.41.amzn1.x86_64 botocore/1.5.46
The last line shows the user agent used by the AWS CLI. To get the user agent used by the boto library, I created a ~/.boto file with the contents:
[s3]
host = localhost
calling_format = boto.s3.connection.OrdinaryCallingFormat
[Boto]
is_secure = False
Then I created a file test.py with the contents:
#!/usr/bin/env python
import boto

# Connect using the instance's IAM role credentials and print each key in the bucket.
conn = boto.connect_s3()
bucket = conn.lookup('summitroutetest')
for key in bucket:
    print key.name
I ran python test.py, and in the netcat session I saw:
HEAD /summitroutetest/ HTTP/1.1
Host: localhost
Accept-Encoding: identity
Date: Sun, 13 Aug 2017 04:08:38 GMT
Content-Length: 0
Authorization: AWS ASIAITQNW7WDRKYDBGDA:VmebAUU3pl1MkPqu7+nuBU+QsEQ=
x-amz-security-token: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
User-Agent: Boto/2.42.0 Python/2.7.12 Linux/4.9.32-15.41.amzn1.x86_64
As you can see, I have two user-agent strings:
- aws-cli:
aws-cli/1.11.83 Python/2.7.12 Linux/4.9.32-15.41.amzn1.x86_64 botocore/1.5.46
- python code:
Boto/2.42.0 Python/2.7.12 Linux/4.9.32-15.41.amzn1.x86_64
I then set an IAM policy to only allow the Python code, and attached it to my role:
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": {
      "StringNotEquals": {
        "aws:UserAgent": "Boto/2.42.0 Python/2.7.12 Linux/4.9.32-15.41.amzn1.x86_64"
      }
    }
  }
}
Then I got rid of the ~/.boto file I was using to check the user agent and tried some tests:
[ec2-user@ip-172-31-19-174 ~]$ python test.py
file1.txt
[ec2-user@ip-172-31-19-174 ~]$ aws s3api list-objects --bucket summitroutetest
An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
Changing your user agent
Now, using this concept, I can change the user agent for the boto library:
#!/usr/bin/python
import boto

# Override boto's default user agent string before making any connections.
boto.UserAgent = "MyS3Reader v1.0"

conn = boto.connect_s3()
bucket = conn.lookup('summitroutetest')
for key in bucket:
    print key.name
Then I change the condition in the policy:
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": {
      "StringNotLike": {
        "aws:UserAgent": "MyS3Reader*"
      }
    }
  }
}
Notice that I'm using "StringNotLike", so I'm only checking whether the user agent starts with "MyS3Reader". That way I can change the version number of the app, which will let me debug issues if, say, version 1.0 of the app works but 1.1 does not.
Additionally, I could set different policies and user agents for different code paths. This also helps when, for whatever reason, multiple services with different permission needs run on the same EC2 instance: normally you would either need to give each service an access key (which is bad) or make your IAM role a superset of all the permissions of the different services on that host. A sketch of that idea follows.
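Here, each code path sets its own user agent, and each deny statement guards a different slice of the role's permissions (the service names and actions are hypothetical):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "s3:*",
      "Resource": "*",
      "Condition": {
        "StringNotLike": {
          "aws:UserAgent": "MyS3Reader*"
        }
      }
    },
    {
      "Effect": "Deny",
      "Action": "sqs:*",
      "Resource": "*",
      "Condition": {
        "StringNotLike": {
          "aws:UserAgent": "MyQueueWorker*"
        }
      }
    }
  ]
}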
Canarytokens
In Nicolas Gregoire's bug bounty from Prezi, he was able to access the userdata available from the metadata service, which is sometimes used to pass secrets to the EC2 (you should not store secrets there). Thinkst, with their free Canarytokens service, supports AWS keys as canary tokens, which means that if these "fake" keys are ever used, they will generate alerts. You could put such keys in the userdata as if they were used by the EC2, which would help you identify whether an attacker has accessed the metadata service. You could also create these keys yourself and tie your own alerting to them, but make sure the keys have no privileges.
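A rough sketch of that do-it-yourself version (the user name and the alerting hookup are assumptions):

# Create a dedicated IAM user with no policies attached, and a key to use as bait:
aws iam create-user --user-name canary-user
aws iam create-access-key --user-name canary-user
# Place the returned key pair in the instance's userdata, then alert on any
# CloudTrail event whose userIdentity.userName is "canary-user".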
Conclusion
I hope this post gave you some additional ideas for what you can do to limit the damage if an attacker gets access to the metadata service.