Denial of Wallet Attacks on AWS

2020.06.08


The AWS incidents that make the news are usually data loss incidents (ex. a public S3 bucket), but one of the common ways people find out about a compromise is through their AWS bill: a frequent incident that isn’t made public is a compromised AWS key being used to spin up EC2s to mine bitcoin. That attack is for the attacker’s personal gain, but it’s also possible that an attacker simply wants to hurt you. Historically, that would have taken the form of a DDoS attack, but in the age of the cloud, it can be modified into a Denial of Wallet attack, where the goal is to run up a bill so high that you run out of money.

When you have servers in a datacenter and an attacker wants to hurt you, they can DDoS you and your site goes down. When you run in the cloud, an attacker can act in ways that keep your site up but leave you bankrupt. This post describes the concept, how it can be abused, and how it can be mitigated.

This post comes about from this tweet:

[Image: denial of wallet attack tweet]

As I’m going to be discussing improvements on the attack, mitigations, and other technical issues related to this, I want to start by briefly touching on some of the non-technical issues involved here:

  • Black lives matter.
  • I don’t condone illegal or unjust activities, whether that is done by protestors or police officers.
  • Increasing a police department’s expenses may only shift the burden to taxpayers.
  • This tweet could instead have been about an app that allows protestors to report on the police being overwhelmed in the same way.

Now onto the technical issues.

Billing alerts

One of the first things you should do on an AWS account is create a billing alert. It is better to find out about a dozen expensive EC2s on the day they are spun up, so you can take action, than 20 days later, after they’ve incurred 20 days’ worth of charges against your account.

If you’re trying AWS out for the first time and you hear about this “free tier” concept, you might be surprised to discover that nothing stops you from doing things that actually cost money, and you won’t find out until the end of the billing cycle. There is no way to tell AWS “Don’t charge me more than $X for the month, and terminate my application if it exceeds that amount.” You can, however, set alerts for when your estimated charges exceed a threshold. I normally set this to something like $10 for accounts I expect to be free (you often end up with small bills, say $0.49, just from using AWS, so setting the threshold to $0 generates alerts you aren’t really worried about).

For enterprises with monthly spend in the millions, this can be more difficult, but it can still be helpful. If you’re in that category, you should talk to folks like Corey Quinn of the Duckbill Group and Thomas Dullien (aka halvarflake) of optimyze, who each focus on different areas of cloud spend. This is not a paid ad; I have a lot of respect for both of those people.

One strategy commonly used is to set multiple alarms that you expect to trigger throughout the month. For example, if you expect your bill to be $1000, then set the following:

  • Alarms at $250, $500, $750, and $1000, named for the week of the month you expect each to trigger. If your $500 alarm fires in week 1, you need to investigate.
  • Alarms for $1500 and $2000 so you know when your expenses have increased dramatically.

For my free tier accounts I have alerts at $10 and $100. If the $10 alert triggers while I’m away from my computer, I casually make a note to look into it when I get back; usually it’s a domain name being renewed for $12. When the $100 alert triggers, I run to my computer, because something bad must be happening.

These alerts can not only email you, but also text you. You can set up to 10 alarms for free, after which each alarm costs $0.10/mo.
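As a concrete sketch of setting this up with boto3 (the topic name, email address, and thresholds here are placeholders; billing metrics only exist in us-east-1, and “Receive Billing Alerts” must be enabled in the account’s billing preferences):

```python
import boto3

# Billing metrics are only published to us-east-1.
sns = boto3.client("sns", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Hypothetical topic and email; SNS will send a confirmation email
# that must be accepted before notifications are delivered.
topic_arn = sns.create_topic(Name="billing-alerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="you@example.com")

# One alarm per threshold, so a week-2 threshold firing in week 1
# stands out immediately.
for threshold in [250, 500, 750, 1000, 1500, 2000]:
    cloudwatch.put_metric_alarm(
        AlarmName=f"billing-over-{threshold}-usd",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,  # estimated charges update roughly every 6 hours
        EvaluationPeriods=1,
        Threshold=float(threshold),
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[topic_arn],
    )
```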

Service Quotas

AWS has a number of hard and soft limits that, among other things, help mitigate runaway code. Many people have stories about infinite loops in AWS that caused resources to be created over and over, or a Lambda that triggered itself again. This is a common enough problem that the CloudWatch Event Rule documentation even has a warning about it.

[Image: CloudWatch documentation warning about infinite loops]

These limits help stop such coding mistakes. They can also cause outages when you hit them as your application attempts to scale. For this reason, operations teams should have some familiarity with the Service Quotas service.

These limits also play a role in mitigating the impact of someone using your account for bitcoin mining. Without them an attacker could try to spin up a million EC2s; with them, the attacker might only manage a few dozen.
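You can review these limits programmatically. A minimal sketch with boto3; the quota code below is what I believe identifies the running on-demand standard instances limit, but verify it against list_service_quotas before relying on it:

```python
import boto3

quotas = boto3.client("service-quotas", region_name="us-east-1")

# L-1216C47A: "Running On-Demand Standard (A, C, D, H, I, M, R, T, Z)
# instances" at the time of writing (an assumption to verify).
quota = quotas.get_service_quota(ServiceCode="ec2", QuotaCode="L-1216C47A")
print(quota["Quota"]["QuotaName"], quota["Quota"]["Value"])

# Enumerate all EC2 quotas to review what an attacker (or runaway
# code) could consume before hitting a limit.
paginator = quotas.get_paginator("list_service_quotas")
for page in paginator.paginate(ServiceCode="ec2"):
    for q in page["Quotas"]:
        print(q["QuotaCode"], q["QuotaName"], q["Value"])
```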

Compromised access

One hit knockouts

There are a lot of ways to spend money in AWS, and a sort of game that comes up periodically is finding the single most expensive API call that could be made in an account. The largest memory optimized EC2, the x1e.32xlarge with 3TB of memory, costs $26.688/hr in us-east-1, or over $19K/mo, for an on-demand instance. As Corey Quinn has pointed out though, you can instead make a single call that costs $3M by reserving the instance, using one in a more expensive region, and with an associated license.

[Image: tweet showing the expensive API call]

That call does allow you to specify an instance count, which is initially limited to 20, so that single call could cost $62M. From the comments on that thread, another expensive option is to create a Savings Plan for $26M.

Making multiple calls to bump up service limits, placing actual phone calls to AWS salespeople to get access to things like Outposts or Snowmobile, or exploring AWS Marketplace or IQ could push the costs even higher. However, chances are a conversation with AWS can get all of those charges reversed.
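One defensive takeaway is to alert on these big-ticket purchase calls. A sketch using CloudWatch Events (EventBridge) and boto3; the rule name, the list of event names, and the SNS topic ARN are assumptions you’d adapt to your own account:

```python
import json

import boto3

events = boto3.client("events", region_name="us-east-1")

# Matches CloudTrail records for a few one-call, big-ticket purchase
# APIs; requires a CloudTrail trail logging management events.
pattern = {
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventName": [
            "PurchaseReservedInstancesOffering",
            "PurchaseHostReservation",
            "CreateSavingsPlan",
        ]
    },
}

events.put_rule(Name="alert-expensive-purchases", EventPattern=json.dumps(pattern))

# Hypothetical topic ARN; the topic's resource policy must allow
# events.amazonaws.com to publish to it.
events.put_targets(
    Rule="alert-expensive-purchases",
    Targets=[{"Id": "notify", "Arn": "arn:aws:sns:us-east-1:123456789012:billing-alerts"}],
)
```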

Business costs

If someone has the ability to make those types of calls in your AWS account, they likely also have the ability to delete all your files in S3, terminate all your instances, and cause other mayhem with the potential for worse business impact. We’re reminded here of Code Spaces, a company that had all of its files deleted, including backups, and had to shut down within 24 hours of their incident.

Mitigations

Both of these scenarios, running arbitrary API calls and deleting files, require the attacker to have compromised access. You can mitigate them by implementing least privilege on services, enforcing MFA for people, and implementing SCPs. You can mitigate an attacker’s ability to destroy a business by limiting the blast radius through the use of multiple AWS accounts, and by keeping WORM backups through S3 Object Lock or, again, multiple AWS accounts and SCPs. Those concepts are discussed further in my AWS Security Maturity Roadmap.
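As one example, an SCP can deny the expensive one-call purchases discussed earlier across an entire organization. A sketch with boto3, run from the organization’s management account; the policy name and target OU ID are hypothetical:

```python
import json

import boto3

# Deny a few big-ticket purchase actions everywhere the SCP is attached.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "ec2:PurchaseReservedInstancesOffering",
                "ec2:PurchaseHostReservation",
                "savingsplans:CreateSavingsPlan",
            ],
            "Resource": "*",
        }
    ],
}

orgs = boto3.client("organizations")
policy = orgs.create_policy(
    Name="deny-expensive-purchases",
    Description="Deny one-call big-ticket purchases",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
# "ou-example-id" is a placeholder for the OU (or account) to protect.
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-example-id",
)
```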

Beware of S3 Object Lock, though: an attacker could abuse it to apply an object lock to your files such that they can’t be deleted, even by the root user, so lifecycle policies you apply to clear out a bucket of user uploads would have no effect. Not even AWS Support can undo this. The API call for this is s3:PutObjectRetention, so someone with s3:Put*, who you might think can only write files, can also write files and lock them so they can’t be deleted.
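To make the risk concrete, here is a sketch of the single call involved; the bucket and key are hypothetical, and the bucket must have been created with Object Lock enabled:

```python
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

# In COMPLIANCE mode, the locked object version cannot be deleted by
# anyone, root user included, until RetainUntilDate passes.
s3.put_object_retention(
    Bucket="victim-uploads",
    Key="some-user-upload.bin",
    Retention={
        "Mode": "COMPLIANCE",
        "RetainUntilDate": datetime(2120, 1, 1, tzinfo=timezone.utc),
    },
)
```

Given this, consider explicitly denying s3:PutObjectRetention to any principal that only needs to write objects.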

Auto-scaling

In the face of a DDoS, depending on how the attack is performed, you can likely overcome it by auto-scaling resources, though auto-scaling results in a higher bill. AWS Shield is AWS’s DDoS solution. By default, and for free, CloudFront and Route53 come with protections against Layer 3 and 4 attacks (ex. SYN floods). For higher-layer attacks, and attacks against EC2 and ELB, you can get Shield Advanced for $3K/mo plus an additional per-GB cost for data transferred out. Shield Advanced includes access to the AWS DDoS Response Team, who can apply mitigations on your behalf. Its biggest benefit is that in the event of a DDoS where you have to scale up services to handle it, you can get credits back to your account for that additional cost, so the service acts as an insurance policy your finance team might be interested in.

You may also be able to use AWS WAF to block certain types of attacks, as sketched below. The AWS Best Practices for DDoS Resiliency whitepaper goes into further detail about the AWS solutions for DDoS mitigation.
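For example, a rate-based rule blocks source IPs that exceed a request threshold. A minimal sketch with boto3 and WAFv2; the ACL name, scope, and limit are placeholders to tune for your own traffic:

```python
import boto3

# CLOUDFRONT-scoped web ACLs must be created in us-east-1.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="rate-limit-acl",
    Scope="CLOUDFRONT",  # use "REGIONAL" for ALB or API Gateway
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "rate-limit-per-ip",
            "Priority": 0,
            # Block any source IP exceeding ~2000 requests per 5 minutes.
            "Statement": {
                "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "rate-limit-per-ip",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "rate-limit-acl",
    },
)
```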

Architecture choices

A common application need, and what the original tweet was about, is the ability to upload files. One way of doing this is to allow users to upload directly to an S3 bucket: using javascript in a web app, you can have the browser upload files to S3 directly, and the same can be done from a mobile app. S3 has no service limits that I am aware of, and there is no way, through IAM policies or resource policies on the bucket, to limit the size of uploaded files, the rate at which they are uploaded, or much of anything else. This means that if you grant an IAM role the ability to upload files to an S3 bucket, that ability can be abused.

One attack that can be performed is to upload large numbers of files, or very large files. If you actually have AWS credentials, you can do this using aws s3 sync: by specifying a public bucket as the source and the victim bucket as the destination, this technique uses AWS’s own resources against itself, achieving much faster transfer speeds than you could get uploading files from your own laptop. To defend against this, you need to be able to differentiate and block specific users that may be abusing your service. You also will want to restrict the file name and size so that a single user cannot upload many files or one enormous file. To do this, use presigned POSTs, which allow you to use POST policies that specify name and size limits (a sketch follows). You’ll also want to monitor how many of these presigned URLs you’re giving out to each user of your application.
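A minimal sketch with boto3, assuming a hypothetical bucket and a server-side-chosen key; the size cap and expiry are arbitrary values to adjust:

```python
import uuid

import boto3

s3 = boto3.client("s3")

# Choose the key server-side so the client can't pick file names.
key = f"uploads/{uuid.uuid4()}"

presigned = s3.generate_presigned_post(
    Bucket="example-user-uploads",
    Key=key,
    Conditions=[
        {"key": key},  # must match exactly
        ["content-length-range", 1, 10 * 1024 * 1024],  # 1 byte to 10 MB
    ],
    ExpiresIn=300,  # the form is only valid for 5 minutes
)

# presigned["url"] and presigned["fields"] go to the client, which
# POSTs the file along with the fields as multipart form data.
```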

Another attack is to partially upload files. Most applications only take action once a file has completely uploaded, so if the attacker never completes the upload, the victim is charged for storing the uploaded parts until they are removed. You can mitigate this by using a lifecycle policy to abort incomplete multipart uploads, as sketched below.
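A sketch of that lifecycle policy with boto3; the bucket name and the 7-day window are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Abort any multipart upload that hasn't completed within 7 days so
# abandoned parts stop accruing storage charges.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-user-uploads",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "abort-incomplete-multipart-uploads",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)
```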

Abusing AWS for the purpose of increasing a victim’s expenses has been explored very little. Any time you provide AWS credentials to a potentially malicious client there is the possibility of abuse, especially because the restrictions you can express with IAM policies are so limited. The alternative is to proxy everything through an application where you can impose stronger controls. For example, instead of presigned URLs for S3, you could architect things so that the client uploads files to one or more EC2s that then copy them to S3. This imposes additional costs, in terms of running the EC2s and building the application, and has a different set of problems that need to be considered.

Whatever architecture you choose, design your application so that you can monitor and block specific users, and limit those users from coming back under new accounts. You should also have feature flags so that if a feature is abused, you can disable that functionality without turning off your entire application, as in the sketch below.
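A minimal sketch of such a flag, backed by a hypothetical SSM parameter so it can be flipped off without a deploy:

```python
import boto3

ssm = boto3.client("ssm")

def uploads_enabled() -> bool:
    """Feature flag backed by a hypothetical SSM parameter; set it to
    'false' to stop issuing presigned URLs without redeploying."""
    value = ssm.get_parameter(Name="/myapp/flags/uploads-enabled")
    return value["Parameter"]["Value"].lower() == "true"

# Before handing out a presigned POST:
if not uploads_enabled():
    raise RuntimeError("uploads are temporarily disabled")
```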