Support HackTricks and get benefits!

Do you work in a cybersecurity company? Do you want to see your company advertised in HackTricks? Or do you want to have access to the latest version of PEASS or download HackTricks in PDF? Check the SUBSCRIPTION PLANS!

Discover The PEASS Family, our collection of exclusive NFTs

Get the official PEASS & HackTricks swag

Join the 💬 Discord group or the telegram group or follow me on Twitter 🐦@carlospolopm.

Share your hacking tricks by submitting PRs to the hacktricks github repo.

Concourse Lab Creation

Testing Environment

Running Concourse

With Docker-Compose

This docker-compose file simplifies the installation so you can run some tests with Concourse:

wget https://raw.githubusercontent.com/starkandwayne/concourse-tutorial/master/docker-compose.yml
docker-compose up -d

You can download the fly command line tool for your OS from the web interface at 127.0.0.1:8080.
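Once downloaded, you can target and log in to the local instance with fly (a minimal sketch; the test/test credentials are an assumption based on the defaults shipped with that tutorial docker-compose file):

```bash
# Register a target named "tutorial" and log in
# (test/test credentials are assumed from the tutorial's docker-compose defaults)
fly --target tutorial login --concourse-url http://127.0.0.1:8080 -u test -p test

# Verify the target and list the registered workers
fly --target tutorial status
fly --target tutorial workers
```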

With Kubernetes (Helm)

You can easily deploy Concourse in Kubernetes (in minikube, for example) using the helm chart concourse-chart:

brew install helm
helm repo add concourse https://concourse-charts.storage.googleapis.com/
helm install concourse-release concourse/concourse
# concourse-release will be the prefix name for the concourse elements in k8s
# After the installation, the console output will show how to connect to it

# If you need to delete it
helm delete concourse-release
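If the web UI isn't directly reachable, a quick way to access it locally is a port-forward (a sketch assuming the default service name derived from the concourse-release release name):

```bash
# Forward the Concourse web service to localhost:8080
# (the service name concourse-release-web is assumed from the release name used above)
kubectl port-forward svc/concourse-release-web 8080:8080
```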

After setting up the Concourse environment, you can create a secret and give the SA (service account) running in the Concourse web pod access to read K8s secrets:

echo 'apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: read-secrets
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-secrets-concourse
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: read-secrets
subjects:
- kind: ServiceAccount
  name: concourse-release-web
  namespace: default
  
---

apiVersion: v1
kind: Secret
metadata:
  name: super
  namespace: concourse-release-main
type: Opaque
data:
  secret: MWYyZDFlMmU2N2Rm

' | kubectl apply -f -
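You can sanity-check the result afterwards (assuming the RoleBinding was applied in the default namespace, as above):

```bash
# Check whether the Concourse web SA can now read secrets in the namespace
# where the RoleBinding was created
kubectl auth can-i get secrets \
  --as=system:serviceaccount:default:concourse-release-web

# Decode the test secret to confirm its value
kubectl get secret super -n concourse-release-main -o jsonpath='{.data.secret}' | base64 -d
```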

Create Pipeline

A pipeline is made of a list of Jobs, each of which contains an ordered list of Steps.

Steps

Several different types of steps can be used:

- the task step runs a task
- the get step fetches a resource
- the put step updates a resource
- the set_pipeline step configures a pipeline
- the load_var step loads a value into a local var
- the in_parallel step runs steps in parallel
- the do step runs steps in sequence
- the across step modifier runs a step multiple times, once for each combination of variable values
- the try step attempts to run a step and succeeds even if the step fails

Each step in a job plan runs in its own container. You can run anything you want inside the container (i.e. run my tests, run this bash script, build this image, etc.). So if you have a job with five steps, Concourse will create five containers, one for each step.

Therefore, it's possible to indicate the type of container each step needs to run in.

Simple Pipeline Example

jobs:
- name: simple
  plan:
  - task: simple-task
    privileged: true
    config:
      # Tells Concourse which type of worker this task should run on
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: busybox # images are pulled from docker hub by default
      run:
        path: sh
        args:
        - -cx
        - |
          sleep 1000
          echo "$SUPER_SECRET"          
      params:
        SUPER_SECRET: ((super.secret))

# Save the YAML above as hello-world.yml and create the pipeline
fly -t tutorial set-pipeline -p pipe-name -c hello-world.yml
# pipelines are paused when first created
fly -t tutorial unpause-pipeline -p pipe-name
# trigger the job and watch it run to completion
fly -t tutorial trigger-job --job pipe-name/simple --watch
# From another console
fly -t tutorial intercept --job pipe-name/simple

Check 127.0.0.1:8080 to see the pipeline flow.

Bash script with output/input pipeline

It's possible to save the results of one task in a file, declare it as an output, and then declare the input of the next task as the output of the previous task. What Concourse does is mount the directory of the previous task in the new task, where you can access the files created by it (see the sketch below).
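A minimal sketch of this pattern (the job, task and directory names are made up for illustration):

```yaml
jobs:
- name: output-input
  plan:
  - task: write-file
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: busybox
      outputs:
      - name: shared-dir            # directory exposed to later steps
      run:
        path: sh
        args:
        - -cx
        - echo "some data" > shared-dir/results.txt
  - task: read-file
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: busybox
      inputs:
      - name: shared-dir            # mounted from the previous task's output
      run:
        path: sh
        args:
        - -cx
        - cat shared-dir/results.txt
```

As before, set, unpause and trigger it with fly (e.g. fly -t tutorial set-pipeline -p pipe-artifacts -c output-input.yml).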

Triggers

You don't need to trigger the jobs manually every time you want to run them; you can also program them to run automatically, for example on a schedule (time resource) or whenever a resource changes (e.g. new commits in a git resource), as in the sketch below.

Check a YAML pipeline example that triggers on new commits to master in https://concourse-ci.org/tutorial-resources.html
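A minimal sketch of a scheduled trigger using the built-in time resource (the resource name and interval are arbitrary):

```yaml
resources:
- name: every-10m
  type: time
  source:
    interval: 10m

jobs:
- name: scheduled
  plan:
  - get: every-10m
    trigger: true               # new versions of the time resource trigger the job
  - task: say-hi
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: busybox
      run:
        path: echo
        args: ["hello from the scheduled job"]
```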

Support HackTricks and get benefits!

Do you work in a cybersecurity company? Do you want to see your company advertised in HackTricks? Or do you want to have access to the latest version of PEASS or download HackTricks in PDF? Check the SUBSCRIPTION PLANS!

Discover The PEASS Family, our collection of exclusive NFTs

Get the official PEASS & HackTricks swag

Join the 💬 Discord group or the telegram group or follow me on Twitter 🐦@carlospolopm.

Share your hacking tricks by submitting PRs to the hacktricks github repo.