S3 Security Part 3 — Talking to the Service(s)

BenH
Oct 23, 2017

Disclaimers as before!

Introduction

All cloud providers take security incredibly seriously. You are, after all, entrusting core business activities to a third party that you don’t have full control over, and it is critical that your workloads don’t interact with anyone else’s resources, data or systems, even when those components sit in the same account.

One of the mechanisms AWS provides to help you enforce this separation is the Role. Available within the IAM console, roles let you define what a system can or cannot do with the resources in an account, as well as granting cross-account access to users in other accounts. Think of them as user accounts for machines. In this article we will define one that allows EC2 to get objects from one bucket and upload them to another.

Bad Behaviors

It’s been argued that software development is one of the hardest activities known to man. Personally I’ve always thought this is a little overstated, but nevertheless…

Because a developer will spend most of their time trying to get the code to do what it is meant to do, other aspects tend to suffer, security among them. “Let’s just allow this to run as root/access anything/open up the firewall” is a common theme during development, along with “we’ll fix the security when it’s working”.

And in all honesty, I do similar things in my personal development AWS account, and I genuinely know better and do feel guilty about it.

One thing I absolutely do not do, even in my development account, is set up a programmatic user for an application to use within the account.

Programmatic users provide access keys that allow you to interact with the AWS APIs securely; here are mine -

Each ties back to either my personal account or one of my machines, so I have a quick and easy way of tracing what was done and from where. None of these keys are in any form of version control or, heaven forbid, GitHub!

Here’s what they look like -

[Default]
Access key Id: AKIAJA2PBNZX43FQXQ6Q
Secret Access Key: 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

Furthermore, I never use the root account for anything other than correcting a pretty severe mistake or making sure I didn’t leave an X1 instance running (again).

I cannot recommend enough that you do the same.

However, on several occasions I have seen someone create a pair of API keys and embed them within an application. The problem with this is that if a bad actor obtains those keys, they have whatever level of access the keys provide (see what I said above about enabling full access).

If someone ever asks for a key pair to put into an application, remind them that this is worse than a username and password, and point them to the roles documentation.

Alternatively, if you are feeling mischievous, I have just been pointed at this for generating fake keys and monitoring who uses them -

http://canarytokens.org/generate

Roles

Roles are made up of policies, functionally identical to the bucket and VPC policies that were shown earlier. Here’s the role that allows me to stream logs into S3, in JSON -
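
A minimal sketch of that sort of policy (the bucket name below is a placeholder rather than my real logging bucket) looks something like this -

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowLogStreamingToS3",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-logging-bucket/*"
    }
  ]
}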

Unlike the VPC and bucket policies, we can attach multiple policies to each role. This role is made up of three that allow me to stream DynamoDB activity into Elasticsearch via Lambda -
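
Sketched together as a single policy document (the table, domain, account and region details below are all placeholders), the three parts cover reading the DynamoDB stream, posting to the Elasticsearch domain, and writing the Lambda’s own logs -

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadDynamoDBStream",
      "Effect": "Allow",
      "Action": [
        "dynamodb:DescribeStream",
        "dynamodb:GetRecords",
        "dynamodb:GetShardIterator",
        "dynamodb:ListStreams"
      ],
      "Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/ExampleTable/stream/*"
    },
    {
      "Sid": "PostToElasticsearch",
      "Effect": "Allow",
      "Action": "es:ESHttpPost",
      "Resource": "arn:aws:es:eu-west-1:123456789012:domain/example-domain/*"
    },
    {
      "Sid": "WriteLambdaLogs",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}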

Along the top of each role we have a number of tools for managing additional trust relationships between components, and the Access Advisor, which lets you see which services the role accessed and when it last did so.

And finally a red button for us to hit if we need to cut all activity associated with that role.

Building a role

To create a role, there is a straightforward wizard where we choose the service that will use the role; in our case, the EC2 fleet that will access static objects in S3.
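
Behind the scenes, choosing EC2 here sets up a trust relationship that allows the EC2 service to assume the role; it looks something like this -

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}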

We can then select from the list of available policies in the account, or build one directly

And finally name and review

And we are done -

One policy allows us to get objects from the Public bucket, like so -
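
A sketch of that policy, with a placeholder standing in for the real public bucket name, would be along these lines -

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GetFromPublicBucket",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-public-bucket/*"
    }
  ]
}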

And the other allows us to put objects into the secret bucket -
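
And its counterpart, again with a placeholder in place of the real secret bucket name -

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PutToSecretBucket",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-secret-bucket/*"
    }
  ]
}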

Both follow the same format: we state whether we are allowing or denying, then the actions being allowed or denied, and then the resource those actions can be carried out on.

It is left as an exercise for the reader to add in a suitable deny all

Finally we configure our launch configuration (for an Auto Scaling group) or instance (or other originating resource) with the role to allow it access -

For an ASG Launch Configuration -

For an instance -

And that’s it: those resources have only the permissions we have set!

Conclusion

“Ben!” I hear you say, “we have already told the VPC which buckets it can access in S3 and how, and we have also told the buckets what can access them and how. This seems a little excessive.”

And yes, from one perspective it is. The concept I want to introduce here is ‘Defense in Depth’: if you lock everything down to the finest grain at every level you can, a single mistake at one level has a massively reduced blast radius.

Mistakes will always happen, systems will always fail, and checks and balances will not always be performed; but if, from the very start, we build in security at every level, layered on again and again, a mistake may not cost you your business.

Of course, if someone drops something from the secret bucket into the public bucket, that artifact is then available to all and sundry; but with naming conventions, roles, policies and, most critical of all, proper training of the staff who have that level of access, the remaining risk is pretty low.
