IAM, S3 and Principle of Least Privilege Part Two — Remediation

BenH
6 min read · Jan 4, 2020

So you now have legions of business users and people with access to the CloudWatch Logs console suffering as they attempt every permutation of action they could possibly ever use.

Congratulations, you are now beginning to understand why security is hated — and why we feed off and embrace the hate

Now let's look at the case where you enter an organisation where the developers have decided "This is hard, let's use *".

Remember, it is considered to be murder in these circumstances, no matter how many times I've protested. The spoilsports.

Now, at the recent re:Invent, AWS Access Analyzer was announced. I've not had a chance to fully investigate it yet (waiting on the SCPs to be updated on the default deny; yep, hoist by my own petard…), but I was involved with the initial evaluation of it about 3 years ago, the underpinning analysis engine was very cool, and it should be able to do a 'service last accessed' query.
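
In the meantime, the plain IAM APIs will already hand you 'service last accessed' data for a role (this is the Access Advisor data rather than Access Analyzer itself). A minimal boto3 sketch, with the role ARN as a placeholder:

```python
import time

import boto3

iam = boto3.client("iam")

# Placeholder ARN: swap in the role you are investigating
role_arn = "arn:aws:iam::123456789012:role/suspiciously-broad-role"

# Kick off the asynchronous "service last accessed" report
job_id = iam.generate_service_last_accessed_details(Arn=role_arn)["JobId"]

# Poll until IAM has finished generating the report
while True:
    report = iam.get_service_last_accessed_details(JobId=job_id)
    if report["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(2)

# Print each service the role has authenticated to, and when it last did so
for svc in report["ServicesLastAccessed"]:
    if svc.get("TotalAuthenticatedEntities", 0) > 0:
        print(svc["ServiceNamespace"], svc.get("LastAuthenticated"))
    else:
        print(svc["ServiceNamespace"], "never used")
```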

But for now we are going to do this the hard way

For this we will be using the Elasticsearch Service (while it's still called that). Athena can be used against the S3 buckets holding the full CloudTrail logs corpus instead, and if you like complex SQL queries and filters, fill your boots.
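
If you do go the Athena route, the query itself doesn't have to be terrifying. A rough boto3 sketch, assuming you've already created a cloudtrail_logs table over the bucket as per the AWS docs (the database name and results location below are placeholders):

```python
import boto3

athena = boto3.client("athena")

# Assumes a cloudtrail_logs table has already been created over the CloudTrail
# bucket (the AWS docs have the CREATE TABLE statement); the database name and
# results location below are placeholders.
query = """
SELECT useridentity.arn AS principal, eventsource, eventname, count(*) AS calls
FROM cloudtrail_logs
WHERE eventtime > '2019-01-01T00:00:00Z'
GROUP BY useridentity.arn, eventsource, eventname
ORDER BY calls DESC
"""

resp = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/cloudtrail/"},
)
print("Query started:", resp["QueryExecutionId"])
```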

If you are running ES in the same account as the one generating the CT logs, then happy days: in the CloudWatch Logs console you can create a subscription that feeds everything into ES.
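
Under the covers that is just a CloudWatch Logs subscription filter pointing at a forwarder (the console wizard creates a Lambda that ships events to the ES domain). A rough boto3 equivalent, with every name and ARN below being a placeholder:

```python
import boto3

logs = boto3.client("logs")

# Assumes the forwarder Lambda already exists (the console's "Stream to Amazon
# Elasticsearch Service" wizard creates one), and that it has a resource policy
# allowing logs.amazonaws.com to invoke it. All names/ARNs are placeholders.
logs.put_subscription_filter(
    logGroupName="CloudTrail/DefaultLogGroup",
    filterName="cloudtrail-to-es",
    filterPattern="",  # empty pattern = send every event
    destinationArn="arn:aws:lambda:eu-west-1:123456789012:function:LogsToElasticsearch",
)
```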

If however you have a separate security account where you handle this sort of thing (and run bitcoin miners), things get slightly more complex. Either copy the logs into the other account's S3 bucket (this duplicates the data, which isn't the end of the world, especially for chain of custody/forensics when you find an evildoer), or set up a cross-account access role that lets you read the logs from the origin account, or do something more complex with Kinesis streams.
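
For the cross-account role option, the read side is a straightforward STS dance. A sketch, with the role ARN, bucket and prefix all being placeholders:

```python
import boto3

# Assume a read-only role in the origin account and list the CloudTrail
# objects from its bucket. Role ARN, bucket and prefix are placeholders.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111111111111:role/cloudtrail-readonly",
    RoleSessionName="ct-log-ingest",
)["Credentials"]

s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="origin-account-cloudtrail", Prefix="AWSLogs/"):
    for obj in page.get("Contents", []):
        print(obj["Key"])
```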

Elasticsearch, for those who don't know, is one of those genuine magic technologies. It started out as a wrapper to make Apache Lucene slightly friendlier (and as someone who had to use Lucene, ES is a saviour), providing free-text indexing and search against unstructured data (although you do have to provide the index structure it can match against, which if you have something like JSON or XML 'should' be straightforward). The tool we will be discussing below provides you with an index for CloudTrail.

ES has become one of those foundation level technologies, just about any website with a search function is probably using it beneath the covers

Still, once you have the logs they need to be loaded into ES. The general recommendation is a Mozilla project called Hindsight; to use it, save yourself a world of pain and follow the link to CloudTracker below.

One other consideration is that Amazon's Elasticsearch service has a hardcoded, fixed 60s timeout that they don't exactly publish, so if you have catted all the CT log files together into a multi-gig file, maybe don't try to upload it to the ES API all in one go.
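
If you do end up rolling your own loader rather than following the tooling above, a rough sketch with the elasticsearch Python client looks something like this, assuming the domain is reachable without request signing (the endpoint and index name are placeholders) and keeping each bulk batch small enough to stay well under that 60 second timeout:

```python
import gzip
import json
from pathlib import Path

from elasticsearch import Elasticsearch, helpers

# Assumes the domain can be reached without SigV4 signing (e.g. a VPC domain
# you can hit from this host); endpoint and index name are placeholders.
es = Elasticsearch(["https://vpc-ct-logs-xxxx.eu-west-1.es.amazonaws.com:443"])

def cloudtrail_events(log_dir):
    """Yield one bulk action per CloudTrail record in the .json.gz files."""
    for path in Path(log_dir).glob("**/*.json.gz"):
        with gzip.open(path, "rt") as fh:
            for record in json.load(fh).get("Records", []):
                yield {"_index": "cloudtrail", "_source": record}

# Modest chunks keep each bulk request comfortably inside the 60s timeout
helpers.bulk(es, cloudtrail_events("./ct-logs"), chunk_size=500)
```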

Another nasty gotcha of ES is that it wants a security group. Most people might default to opening port 9300; this is sensible but wrong, as Amazon has put everything (I think, I suspect there are some insecure interfaces) on port 443, but again, not exactly well documented.
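
So the security group rule you actually want is 443, not 9300. A quick boto3 sketch, with the group ID and CIDR as placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder group ID and CIDR: open 443 (not 9300) to whatever needs to
# talk to the ES domain - the Kibana proxy, the ingest host, and so on.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "ES/Kibana over HTTPS"}],
    }],
)
```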

If you do have a security department demanding that a copy of the logs goes to their account 'for audit purposes', along with the VPC flow logs (Hi Steve!), insist they start analysing them, with the results to be reviewed monthly (yes, this is a petty and vicious retaliation; why are you surprised?).

Sizing the ES cluster can be something of a black art, especially if you need to index in near real time with minimal lag; however, in this scenario it's blessedly not as important. My normal configuration for, say, 20GB of CT logs would be one or two m5.large instances with 50GB EBS volumes each, which still provides a good amount of room for growth or additional indexes, such as the consolidated billing file (the finance people love this when they see it).
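
For reference, that sizing translates to something like the following with boto3 (the domain name is a placeholder, and access policies/VPC options are left out for brevity):

```python
import boto3

es = boto3.client("es")

# Roughly the sizing described above: two m5.large data nodes with 50GB of
# EBS each. Domain name is a placeholder; access policies and VPC options
# are omitted for brevity.
es.create_elasticsearch_domain(
    DomainName="cloudtrail-audit",
    ElasticsearchVersion="7.1",
    ElasticsearchClusterConfig={
        "InstanceType": "m5.large.elasticsearch",
        "InstanceCount": 2,
    },
    EBSOptions={
        "EBSEnabled": True,
        "VolumeType": "gp2",
        "VolumeSize": 50,
    },
)
```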

Uploading into ES is going to take some time depending on the size of the logs. You can monitor it with Kibana (remember, though, if you have created a private ES instance you need a VPN into your VPC or an EC2 proxy); personally I think it's time for a coffee.

The final piece of the puzzle is a tool called CloudTracker (which we mentioned briefly above) from the lovely people at duo-labs.

This has been an utter God-send: using ES (or Athena, with a smaller feature set), CloudTracker can pull all of this back with a single command (follow their install example, it's all Python venv based so quite straightforward). This example uses Athena because I'm a halfwit and deleted the EC2 instance I'd been using to manage the private ES cluster, with the screenshots still on it…

Here we are pulling back the users showing up in the CT logs for the past 12 months, and from that we can produce a report on what API calls have actually been made by the role, like so

And now we have a list of exactly the calls being made, providing an ideal skeleton of the permissions required by the role. It's time to find an eager young analyst/patsy to go through each and every API call (and Amazon now has ~1500) to make sure they make sense: a call to an S3 bucket makes sense; a call to spin up some X1 instances, err, maybe not.

Further down the list you will see symbols against the API calls

These are explained below

Because I know someone, a "friend" let's say, totes not me of course, who, knowing what role an EC2 instance was running under, told the DevOps team (yep, team, let's stop pretending shall we) and had them issue CLI commands under it to fix a production issue…

My "friend" should totally get a massive pay rise for that one, maybe a gong or something for services to IT and actually sorting the problem… Not that I'm envious… Definitely not about my mate's gong announced last week… I'm better looking anyway!!!
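
Anyway, once the list has been pruned down to calls that actually make sense, turning it into a skeleton policy is mostly mechanical. A hypothetical sketch, where the observed calls are made up for illustration and the wildcard resources are just a starting point:

```python
import json

# Hypothetical output of the CloudTracker review: the calls the role was
# actually observed making (and that survived the sanity check).
observed_calls = [
    "s3:GetObject",
    "s3:PutObject",
    "sqs:SendMessage",
    "logs:PutLogEvents",
]

# Turn the observed calls into a skeleton least-privilege policy.
# Resources are left as "*" purely as a starting point; tightening them
# to specific ARNs is the next pass.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "ObservedUsageOnly",
        "Effect": "Allow",
        "Action": sorted(set(observed_calls)),
        "Resource": "*",
    }],
}

print(json.dumps(policy, indent=2))
```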

And now we have a baseline, it's time to drop it into the development environment (remember, EC2 instances can dynamically pick up changes to their role), and wait for the screaming to begin.

At which point — see part one!
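
Mechanically, dropping the new baseline onto the dev role is just an inline policy update. A sketch with boto3, where the role and policy names are placeholders and the policy stands in for the skeleton built earlier:

```python
import json

import boto3

iam = boto3.client("iam")

# 'policy' here stands in for the skeleton built from the CloudTracker output;
# role and policy names are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "*",
    }],
}

# Attach it as an inline policy; instances using the role pick up the change
# on their next credential refresh, no restart needed.
iam.put_role_policy(
    RoleName="app-dev-role",
    PolicyName="least-privilege-baseline",
    PolicyDocument=json.dumps(policy),
)
```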

We now have a role that can follow the application through its complete development lifecycle, one whose behaviour we know exactly, and one that can only do what it needs to do.

TL;DR
Ben uses a clever tool, which it looks like Amazon has just superseded, to sort out over-privileged accounts, and makes smart-arse comments.

