AWS's biggest event of the year, re:Invent, happened last week, with a number of new announcements made at the conference and in the weeks leading up to it. This post summarizes the AWS security news for November 2018.
AWS announced a new security-focused conference, re:Inforce, to be held in Boston in late June, with tickets priced at $1,099.
AWS announced two new security services, Security Hub and Control Tower. Control Tower is still in a preview that you have to be accepted into, and few details have been announced, but it appears it will be a more managed version of Landing Zone, which has been a complex monster that has pretty much required someone from AWS Professional Services to set it up for you.
Security Hub is accessible, but still in preview, and therefore currently free, with no details on future pricing. Security Hub aims to be an aggregation point for all of your security alerts on AWS, across all your AWS accounts and all the different security tools you run, including AWS services (GuardDuty, Macie, etc.) and third-party tools (CrowdStrike, Alert Logic, etc.). I like that AWS is working to consolidate and normalize all of the alerts you might get on AWS, but without features like a ticketing system, you're still going to point the Security Hub alerts at a third-party solution (StreamAlert, Splunk, etc.). So the big question I have is: how much easier will it be to aggregate alerts to Security Hub than to aggregate them directly to a third-party tool? Like GuardDuty, it uses a master/member relationship model with other accounts, which is AWS's preferred workflow this year, as opposed to just pointing an account at it.
The major weakness of Security Hub right now is that it is regional: if you have alerts in two different regions, you cannot see both in a single view, which is another reason to send the alerts to a third-party tool.
CloudTrail now has Organization support (link). The biggest benefit here is that member accounts cannot change these trails; they can only be managed at the Organization level.
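As a sketch of how an organization trail is created, the parameters below are what you would pass to boto3's `create_trail` from the Organization's management account. The trail and bucket names are placeholders, and the S3 bucket is assumed to already exist with a suitable bucket policy.

```python
import json

# Parameters for an organization-wide trail, as would be passed to
# boto3's cloudtrail.create_trail(). Names are placeholders.
org_trail_params = {
    "Name": "org-wide-trail",
    "S3BucketName": "example-org-cloudtrail-logs",  # hypothetical bucket
    "IsMultiRegionTrail": True,
    "IsOrganizationTrail": True,  # the new flag: applies the trail to all member accounts
}

# In a real session:
#   import boto3
#   boto3.client("cloudtrail").create_trail(**org_trail_params)
print(json.dumps(org_trail_params, indent=2))
```

Because the trail is defined at the Organization level, a member account cannot call `delete_trail` or `stop_logging` on it.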
AWS has added a capability to Inspector that tells you whether an EC2 instance is publicly accessible (link).
For those using CloudWatch Logs, you can now search them more easily using CloudWatch Logs Insights, which is especially useful for JSON data. A number of services, such as Lambda, write to CloudWatch Logs by default, so this will be very helpful. Unfortunately, it uses its own query syntax instead of something like Athena's SQL. The cost is $0.005/GB of data scanned, the same as Athena, but CloudWatch Logs itself is much more expensive than S3 (which Athena works off of), with ingestion at $0.50/GB and storage at $0.03/GB.
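To give a feel for that syntax, here is a sketch of an Insights query that pulls recent Lambda errors, wrapped in the parameters you would pass to boto3's `start_query`. The log group name is a placeholder.

```python
import time

# A CloudWatch Logs Insights query: its own pipe-based syntax, not SQL.
# This one finds the 20 most recent error lines in a Lambda log group.
insights_query = """
fields @timestamp, @message
| filter @message like /ERROR/
| sort @timestamp desc
| limit 20
""".strip()

# As would be passed to boto3's logs.start_query() -- the log group
# name here is a hypothetical example.
start_query_params = {
    "logGroupName": "/aws/lambda/example-function",
    "startTime": int(time.time()) - 3600,  # last hour
    "endTime": int(time.time()),
    "queryString": insights_query,
}
print(insights_query)
```

The same query string can be pasted directly into the Insights console; for JSON log events, discovered fields can be referenced by name in `fields` and `filter` clauses.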
You can now tag IAM users and roles, which makes it possible to better restrict IAM principals to only certain resources (link).
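As a sketch of what this enables, the policy below only allows a principal to stop or start EC2 instances whose `team` tag matches the caller's own `team` tag. The tag key `team` is an illustrative choice, not anything AWS mandates.

```python
import json

# An IAM policy matching a resource tag against the calling principal's
# tag -- only possible now that IAM users and roles can be tagged.
# The tag key "team" is an illustrative example.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:StopInstances", "ec2:StartInstances"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/team": "${aws:PrincipalTag/team}"
                }
            },
        }
    ],
}
print(json.dumps(policy, indent=2))
```

The `${aws:PrincipalTag/team}` variable resolves at request time to the caller's tag value, so one policy can serve many teams.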
S3 Block Public Access was announced to prevent S3 buckets, or the objects within them, from being made public. This works very well and can be applied at the account level so that no S3 buckets within the account can be made public. Now when you create a bucket and want to make it, or any objects within it, public, you first have to disable S3 Block Public Access. This is awesome because previously, if you wanted to give someone the ability to change a bucket's policy, but not allow them to make the bucket public, you had no way of doing that other than auto-remediating changes. It is odd, though, that AWS only applied this to S3 buckets and not to all resource policies. That is a common theme of the past few weeks that I'll explore further in this post.
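Applying it account-wide can be sketched with the configuration below, as would be passed to boto3's `put_public_access_block` on the `s3control` client; the account ID is a placeholder.

```python
import json

# The account-level Block Public Access configuration. With all four
# flags set, public ACLs and public bucket policies are denied or
# ignored across every bucket in the account.
public_access_block = {
    "BlockPublicAcls": True,        # reject new public ACLs on buckets/objects
    "IgnorePublicAcls": True,       # treat any existing public ACLs as non-public
    "BlockPublicPolicy": True,      # reject bucket policies that grant public access
    "RestrictPublicBuckets": True,  # limit access to buckets that already have public policies
}

# In a real session (account ID is a placeholder):
#   import boto3
#   boto3.client("s3control").put_public_access_block(
#       AccountId="123456789012",
#       PublicAccessBlockConfiguration=public_access_block,
#   )
print(json.dumps(public_access_block, indent=2))
```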
S3 Object Lock is a new feature to set legal holds on objects or turn an S3 bucket into a write-once-read-many (WORM) system. Other new features for S3 are S3 Batch, to make changes across all objects in a bucket (changing ACLs or metadata, and copying objects), and DataSync, for copying massive data sets to S3.
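The two Object Lock mechanisms can be sketched as the request bodies you would pass to boto3's `put_object_legal_hold` and `put_object_retention`. This assumes a bucket created with Object Lock enabled; the retention period shown is an arbitrary example.

```python
from datetime import datetime, timedelta, timezone

# Legal hold: the object cannot be deleted until the hold is removed.
# As would be passed to s3.put_object_legal_hold(..., LegalHold=legal_hold).
legal_hold = {"Status": "ON"}

# WORM retention: in COMPLIANCE mode, no user (not even root) can
# shorten or remove the retention period, unlike GOVERNANCE mode.
# As would be passed to s3.put_object_retention(..., Retention=retention).
retention = {
    "Mode": "COMPLIANCE",
    "RetainUntilDate": datetime.now(timezone.utc) + timedelta(days=365),
}
print(retention["Mode"])
```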
To reduce the costs of storing data, AWS now offers Glacier Deep Archive, a cheaper and slower version of Glacier that costs $1/TB/mo.
A new storage service is a managed SFTP server (link). It is more expensive than expected at $216/mo plus data transfer (including ingress fees). It does not support a static IP or Security Groups. It supports LDAP or storing SSH keys directly in it, but has no password support (except through LDAP) and no anonymous users.
They are also offering a managed Windows file server (Amazon FSx), similar to their EFS offering.
AWS is also now offering a blockchain solution, and I really like the way they thought about it. When AWS CEO Andy Jassy announced it, he explained how they hadn't seen many uses of a blockchain on AWS that couldn't easily be solved with a database instead. So the solution they developed, Quantum Ledger Database (QLDB), is a cryptographically verifiable transaction log.
DynamoDB became the first and only AWS service to encrypt all data at rest by default (link). Until now, no AWS service encrypted data at rest by default; you always had to, at a minimum, turn on a special flag somewhere, which has felt weird, since AWS could just encrypt the data for all services by default. So I'm happy to see DynamoDB doing this and hope AWS starts doing it for more services.
In contrast, it came as a surprise to some that AWS Neptune only started supporting any form of encryption in transit two weeks ago (link). Most AWS announcements about encryption take this form: you read them and realize, "OMG, you weren't doing this already?" AWS has no baseline security requirements when releasing new services. For example, they announced a new managed Kafka service that does not support any form of communication encryption, and I'm told App Mesh doesn't either. One reason this is so problematic is that AWS does not encrypt communications between their own datacenters, so even services in the same AZ and subnet might be communicating in plain text between buildings.
When connecting AWS networks, companies often create a transit VPC to route traffic between VPCs in ways that AWS isn't natively capable of. AWS now has a Transit Gateway to do this. Unlike VPC peering, though, you cannot reference Security Groups in other VPCs, and you also cannot connect Direct Connect to it.
As you segment your AWS accounts in order to reduce the blast radius of issues, you often still want to share resources between some of those accounts. To do that, AWS now provides Resource Access Manager. Currently, this service only works with a new feature called Route 53 Resolver Rules, so it's still very unclear how the service will evolve, what resources can be shared, and what the permissions around them will be.
You can now run your own AWS datacenter with Outposts. AWS will provide you with the same hardware and software as they use. They also open-sourced the virtualization software used by Lambda, called Firecracker.
As a minor update, ECS (the container service) can now have secrets injected into containers via Parameter Store, which can reference Secrets Manager (link).
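A minimal sketch of what this looks like in a task definition is below; the container name, image, environment variable name, and parameter ARN are all placeholders. The secret value itself never appears in the task definition.

```python
import json

# Fragment of an ECS container definition: the "secrets" list maps a
# Parameter Store parameter (which may itself reference Secrets Manager)
# to an environment variable inside the container. All names/ARNs here
# are hypothetical examples.
container_definition = {
    "name": "web",
    "image": "example/web:latest",
    "secrets": [
        {
            "name": "DB_PASSWORD",  # env var name inside the container
            "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/prod/db_password",
        }
    ],
}
print(json.dumps(container_definition, indent=2))
```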
Outside of the announcements from Amazon, AWS customers also released some new things. Will Bengtson of Netflix has continued his work on detecting and preventing the use of AWS credentials outside of the EC2 instances they were issued to, in this blog post and video. He released a metadata proxy, used with iptables on the EC2 instance, that checks the user-agent of requests to 169.254.169.254. Thinkst recently released a similar (commercial) solution that acts as a canary, with their apeeper feature.
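The core idea of that user-agent check can be sketched as below. This is only an illustration of the concept, not Bengtson's actual implementation, and the prefix allowlist is my own example: metadata requests redirected through the proxy by iptables are passed along only if they look like they come from an AWS SDK or CLI on the host.

```python
# Sketch: a metadata proxy inspects the User-Agent of requests bound
# for 169.254.169.254 and only allows ones matching known AWS SDK/CLI
# prefixes. The allowlist here is illustrative, not exhaustive.
ALLOWED_UA_PREFIXES = ("aws-sdk-", "Boto3/", "Botocore/", "aws-cli/")

def is_expected_metadata_client(user_agent: str) -> bool:
    """Return True if the User-Agent matches a known AWS SDK/CLI prefix."""
    return user_agent.startswith(ALLOWED_UA_PREFIXES)

# An SDK on the host passes; an attacker's curl does not:
print(is_expected_metadata_client("Boto3/1.9.0 Python/3.6"))  # True
print(is_expected_metadata_client("curl/7.61.0"))             # False
```

This is a weak signal on its own (user-agents can be spoofed), which is why it is paired with detection of credential use from unexpected source IPs.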
Andrew Krug of Mozilla released ssm-acquire for collecting memory dumps and other information from EC2 instances using AWS Systems Manager. There are also some other open-source capabilities mentioned in his talk (video).
I work as an independent AWS security consultant. If you're looking for help with your AWS security, reach out to me!