KASM-2055 Add zone concept and refactor playbook to automate tedious tasks

Ryan Kuba 2023-08-10 13:55:18 +00:00
parent 5e5937ecfc
commit 2171d14ee1
26 changed files with 611 additions and 212 deletions

README.md (322 changed lines)

@ -4,7 +4,9 @@
This project requires ansible 2.9.24 or greater on the host running the ansible playbook. The target systems do not need Ansible installed.
The steps below for installing Ansible have been tested on CentOS 7.9.2009, CentOS 8.4.2105, Debian 9.13, Debian 10.10, Ubuntu 18.04.5, and Ubuntu 20.04.3.
#### Pip Method
The steps below for installing Ansible have been tested on CentOS 7.9.2009, CentOS 8.4.2105, Debian 9.13, Debian 10.10, Ubuntu 18.04.5, Ubuntu 20.04.3, and Ubuntu 22.04.2. This should function on any Linux distribution with Python3.
1. Ensure pip3 is installed
@ -14,38 +16,51 @@ The steps below for installing Ansible have been tested on CentOS 7.9.2009, Cent
2. Add local bin directory to path in bashrc
```
echo 'PATH=$PATH:$HOME/.local/bin' >> ~/.bashrc
source ~/.bashrc
```
3. Use pip to install ansible
`python3 -m pip install --user -U pip && python3 -m pip install --user -U ansible && python3 -m pip install --user -U jinja`
`python3 -m pip install --user -U pip && python3 -m pip install --user -U ansible`
4. Ensure that ansible version is greater than 2.9.24
`ansible --version`
#### Distribution Native
Ubuntu 22.04.2, Debian Bullseye, Alpine 3.17, RHEL 9 and derivatives (including Fedora 37), and Arch all have a late enough version of Ansible in their repositories.
Ubuntu/Debian: `sudo apt-get install -y ansible`
RHEL/Fedora: `sudo dnf -y install ansible-core`
Alpine: `sudo apk add ansible`
Arch: `sudo pacman -Sy --noconfirm ansible-core`
## Kasm Multi Server install
This playbook will deploy Kasm Workspaces in a multi-server deployment using Ansible.
* It installs the kasm components on the systems specified in the ansible `inventory` required for the respective roles (db, web, agent).
* It creates a new swapfile to ensure that the total swap space matches the size `desired_swap_size` specified in the files in group_vars/.
* It installs the kasm components on the systems specified in the ansible `inventory` required for the respective roles (db, web, agent, guac, proxy).
* It creates a new swapfile to ensure that the total swap space matches the size `desired_swap_size` specified in the inventory file for all agents (see the snippet below).
* It enables the docker daemon to run at boot to ensure that kasm services are started after a reboot.
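For reference, the swap target is controlled by a single inventory variable; a minimal sketch of the relevant setting (5g is the default shipped in this repository, and the group_vars guidance of roughly 1g per concurrent session divided across agents still applies):
```
  vars:
    # Generally (1g x number_of_sessions / number_of_agents)
    desired_swap_size: 5g
```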
It has been tested on CentOS 7.9.2009, CentOS 8.4.2105, Debian 9.13, Debian 10.10, Ubuntu 18.04.5, and Ubuntu 20.04.3
It has been tested on CentOS 7.9.2009, CentOS 8.4.2105, Debian 9.13, Debian 10.10, Ubuntu 18.04.5, Ubuntu 20.04.3, and Ubuntu 22.04.2 hosts.
![Diagram][Image_Diagram]
[Image_Diagram]: https://f.hubspotusercontent30.net/hubfs/5856039/Ansible/Ansible%20Multi%20Server.png "Diagram"
### Ansible Configuration
### Ansible Configuration and installation
1. Open `roles/install_common/vars/main.yml`, `group_vars/agent.yml` and update variables if desired.
1. Open `inventory` file and fill in the hostnames / ips for the servers that will be fulfilling the agent, web, db, and guac roles. Please take the time to get acquainted with the inventory file and its layout; it serves as the master file controlling how this multi server installation will be deployed. Every variable in this file has been designed to scale except for the database. Regardless of deployment size, there will only be one centralized database, `zone1_db_1`, or a remote type db that all "web" roles need direct access to.
2. Open `inventory` file and fill in the hostnames / ips for the servers that will be fulfilling the agent, webapp, db, and guac roles.
2. Ensure the variables for each host in the deployment are set properly (see the example entry below), specifically:
* ansible_host: (hostname or IP address)
* ansible_port: (ssh port)
* ansible_ssh_user: (ssh user to login as, recommended root or a user with passwordless sudo)
* ansible_ssh_private_key_file: (full path to the ssh private key file to use, which can include bash completion, IE ~/.ssh/mykey)
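For illustration, the connection settings above combine into a single host entry in the YAML inventory like the one below (hostname and key path are the placeholders used by the shipped example inventory):
```
    zone1_agent_1:
      ansible_host: zone1_agent_hostname
      ansible_port: 22
      ansible_ssh_user: ubuntu
      ansible_ssh_private_key_file: ~/.ssh/id_rsa
```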
3. Download the Kasm Workspaces installer from https://www.kasmweb.com/downloads.html and copy it to `roles/install_common/files`.
@ -53,127 +68,230 @@ It has been tested on CentOS 7.9.2009, CentOS 8.4.2105, Debian 9.13, Debian 10.1
4. Run the deployment.
`ansible-playbook -Kk -u [username] -i inventory install_kasm.yml`
Ansible will prompt you for the ssh password and sudo password (they will almost always be the same password).
Or, if you have ssh keys copied over to your servers and have NOPASSWD in sudoers, you can just run:
`ansible-playbook -u [username] -i inventory install_kasm.yml`
`ansible-playbook -i inventory install_kasm.yml`
Additionally, the deployment can be run in a "test" mode by passing the extra option `test=true`; this will not seed images, among other test mode optimizations.
`ansible-playbook -u [username] -i inventory install_kasm.yml -e "test=true"`
5. Make notes of the credentials generated during the installation to be able to login.
6. Login to the deployment as admin@kasm.local using the IP of one of the web servers (e.g. https://192.168.1.2)
7. Navigate to the Agents tab, and enable each Agent after it checks in. (May take a few minutes)
**Post installation your local inventory file will be modified with the appropriate credentials; please make a copy or keep it somewhere safe.**
**If any deployment errors occur, please run the uninstall_kasm.yml playbook against the same inventory file before trying again, as there might be half-set credentials leading to a broken deployment; see the helper playbooks section for more information.**
5. Login to the deployment as admin@kasm.local using the IP of one of the WebApp servers (e.g. https://192.168.1.2)
6. Navigate to the Agents tab, and enable each Agent after it checks in. (May take a few minutes)
### Adding Additional Agent / Webapp / Guac hosts to an existing installation
The installation can be "scaled up" after being installed by adding additional hosts to the agent, app, or guac roles in the inventory file and rerunning the playbook.
Please ensure that redis_password, manager_token and database_password are set in `roles/install_common/vars/main.yml`.
### Scaling the deployment
The installation can be "scaled up" after being installed by adding any additional hosts, including entire new zones. Once modified, run:
`ansible-playbook -i inventory install_kasm.yml`
Before running the installation against a modified inventory file, please ensure the credentials lines in your inventory were set and uncommented properly by the initial deployment, IE:
```
## Credentials ##
# If left commented secure passwords will be generated during the installation and substituted in upon completion
user_password: PASSWORD
admin_password: PASSWORD
database_password: PASSWORD
redis_password: PASSWORD
manager_token: PASSWORD
registration_token: PASSWORD
```
If you did not save the redis_password, manager_token or database_password for your existing installation, they can be obtained using the following methods.
#### Scaling examples
A common example of adding more Docker Agents:
```
zone1_agent:
  hosts:
    zone1_agent_1:
      ansible_host: zone1_agent_hostname
      ansible_port: 22
      ansible_ssh_user: ubuntu
      ansible_ssh_private_key_file: ~/.ssh/id_rsa
    zone1_agent_2:
      ansible_host: zone1_agent2_hostname
      ansible_port: 22
      ansible_ssh_user: ubuntu
      ansible_ssh_private_key_file: ~/.ssh/id_rsa
```
If you would like to scale up web/agent/guac/proxy servers as a group, where the agent/guac/proxy servers talk exclusively to their matching web server, set `default_web: false` in your inventory file. This requires entries with a matching integer for all hosts, IE:
```
zone1_web:
  hosts:
    zone1_web_1:
      ansible_host: zone1_web_hostname
      ansible_port: 22
      ansible_ssh_user: ubuntu
      ansible_ssh_private_key_file: ~/.ssh/id_rsa
    zone1_web_2:
      ansible_host: zone1_web2_hostname
      ansible_port: 22
      ansible_ssh_user: ubuntu
      ansible_ssh_private_key_file: ~/.ssh/id_rsa
zone1_agent:
  hosts:
    zone1_agent_1:
      ansible_host: zone1_agent_hostname
      ansible_port: 22
      ansible_ssh_user: ubuntu
      ansible_ssh_private_key_file: ~/.ssh/id_rsa
    zone1_agent_2:
      ansible_host: zone1_agent2_hostname
      ansible_port: 22
      ansible_ssh_user: ubuntu
      ansible_ssh_private_key_file: ~/.ssh/id_rsa
zone1_guac:
  hosts:
    zone1_guac_1:
      ansible_host: zone1_guac_hostname
      ansible_port: 22
      ansible_ssh_user: ubuntu
      ansible_ssh_private_key_file: ~/.ssh/id_rsa
    zone1_guac_2:
      ansible_host: zone1_guac2_hostname
      ansible_port: 22
      ansible_ssh_user: ubuntu
      ansible_ssh_private_key_file: ~/.ssh/id_rsa
```
Included in inventory is a commented section laying out a second zone. The names zone1 and zone2 were chosen arbitrarily and can be modified to suit your needs, but all items need to follow that naming pattern, IE:
```
# Second zone
# Optionally modify names to reference zone location IE west
west:
  children:
    west_web:
      hosts:
        west_web_1:
          ansible_host: HOST_OR_IP
          ansible_port: 22
          ansible_ssh_user: ubuntu
          ansible_ssh_private_key_file: ~/.ssh/id_rsa
    west_agent:
      hosts:
        west_agent_1:
          ansible_host: HOST_OR_IP
          ansible_port: 22
          ansible_ssh_user: ubuntu
          ansible_ssh_private_key_file: ~/.ssh/id_rsa
    west_guac:
      hosts:
        west_guac_1:
          ansible_host: HOST_OR_IP
          ansible_port: 22
          ansible_ssh_user: ubuntu
          ansible_ssh_private_key_file: ~/.ssh/id_rsa
vars:
  zones:
    - zone1
    - west
```
#### Missing credentials
If for any reason you have misplaced your inventory file post installation, the credentials for the installation can be recovered using the following methods:
- Existing Database password can be obtained by logging into a webapp host and running the following command:
```
sudo grep " password" /opt/kasm/current/conf/app/api.app.config.yaml
```
- Existing Redis password can be obtained by logging into a webapp host and running the following command:
```
sudo grep "redis_password" /opt/kasm/current/conf/app/api.app.config.yaml
```
- Existing Manager token can be obtained by logging into an agent host and running the following command:
```
sudo grep "token" /opt/kasm/current/conf/app/agent.app.config.yaml
```
## Kasm Uninstall playbook
This playbook uninstalls Kasm workspaces from DB, WebApp, Agent, and Guac servers specified in the `inventory` file.
It has been tested on CentOS 7.9.2009, CentOS 8.4.2105, Debian 9.13, Debian 10.10, Ubuntu 18.04.5, Ubuntu 20.04.3, and Ubuntu 22.04.1
### Ansible Configuration
1. Open `inventory` file and fill in the hostnames / ips for the servers that will be fulfilling the agent, webapp, db, and guac roles.
### Deploying with a remote database
In order to deploy with a dedicated remote database that is not managed by ansible, you will need to provide an endpoint and authentication credentials. To properly init the database, the superuser credentials need to be defined along with the credentials the application will use to access it.
1. First remove the `zone1_db` entry from inventory:
```
#zone1_db:
  #hosts:
    #zone1_db_1:
      #ansible_host: zone1_db_hostname
      #ansible_port: 22
      #ansible_ssh_user: ubuntu
      #ansible_ssh_private_key_file: ~/.ssh/id_rsa
```
3. Run the deployment.
`ansible-playbook -Kk -u [username] -i inventory uninstall_kasm.yml`
2. Set the relevant credentials and endpoints:
```
## PostgreSQL settings ##
##############################################
# PostgreSQL remote DB connection parameters #
##############################################
# The following parameters need to be set only once on database initialization
init_remote_db: true
database_master_user: postgres
database_master_password: PASSWORD
database_hostname: DATABASE_HOSTNAME
# The remaining variables can be modified to suit your needs or left as is in a normal deployment
database_user: kasmapp
database_name: kasm
database_port: 5432
database_ssl: true
## redis settings ##
# redis connection parameters; if hostname is set the web role will use a remote redis server
redis_hostname: REDIS_HOSTNAME
redis_password: REDIS_PASSWORD
```
Ansible will prompt you for the ssh password and sudo password (will almost always be the same password).
Or, if you have ssh keys copied over to your servers and have NOPASSWD in sudoers, you can just run:
`ansible-playbook -u [username] -i inventory uninstall_kasm.yml`
3. Run the deployment:
`ansible-playbook -i inventory install_kasm.yml`
**Post deployment, if `install_kasm.yml` needs to be run again to make scaling changes, it is important to set `init_remote_db: false`. This should happen automatically, but it is best to check.**
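For reference, after the first successful run against a remote database the inventory should read:
```
# The following parameters need to be set only once on database initialization
init_remote_db: false
```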
### Deploying a Dedicated Kasm Proxy
1. Before deployment or while scaling, open `inventory` and uncomment/add the relevant lines for the proxy server:
```
#zone1_proxy:
  #hosts:
    #zone1_proxy_1:
      #ansible_host: zone1_proxy_hostname
      #ansible_port: 22
      #ansible_ssh_user: ubuntu
      #ansible_ssh_private_key_file: ~/.ssh/id_rsa
```
2. Post deployment, follow the instructions [here](https://www.kasmweb.com/docs/latest/install/multi_server_install/multi_installation_proxy.html#post-install-configuration) to configure the proxy for use.
**It is important to use a DNS endpoint for the `web` and `proxy` roles, as during deployment the CORS settings will be linked to that domain.**
## Kasm Stop/Start/Restart playbooks
These playbooks can be used to start, stop or restart Kasm workspaces services on the DB, WebApp, Agent, and Guac servers specified in the `inventory` file.
It can be limited to run only on hosts in specific groups by passing the `-l [db, web, agent, or guac]` flag.
In the examples `restart_kasm.yml` can be substituted for `start_kasm.yml` or `stop_kasm.yml` for starting or stopping the kasm services respectively.
### Ansible Configuration
1. Open `inventory` file and fill in the hostnames / ips for the servers that will be fulfilling the agent, webapp, db, and guac roles.
2. Run the playbook.
`ansible-playbook -Kk -u [username] -i inventory restart_kasm.yml`
Ansible will prompt you for the ssh password and sudo password (will almost always be the same password).
Or, if you have ssh keys copied over to your servers and have NOPASSWD in sudoers, you can just run:
`ansible-playbook -u [username] -i inventory restart_kasm.yml`
If you only want to run it against hosts in the 'db' group, for example, you can run the following:
`ansible-playbook -u [username] -l db -i inventory restart_kasm.yml`
## Kasm Database Backup playbook
This playbook can be used to backup the Kasm Workspaces database to a location on the Database server specified by `remote_backup_dir` and optionally to a location on the ansible server specified by `local_backup_dir`. Backups older than `retention_days` are automatically cleaned up.
### Ansible Configuration
1. Open `roles/backup_db/vars/main.yml` and update variables if desired.
2. Open `inventory` file and fill in the hostnames / ips for the servers that will be fulfilling the agent, webapp, db, and guac roles.
3. Run the playbook.
`ansible-playbook -Kk -u [username] -i inventory backup_db.yml`
Ansible will prompt you for the ssh password and sudo password (will almost always be the same password).
Or, if you have ssh keys copied over to your servers and have NOPASSWD in sudoers, you can just run:
`ansible-playbook -u [username] -i inventory backup_db.yml`
## OS Patching Playbook
This playbook is used for patching the underlying OSes on the Kasm Workspace servers. It will patch and reboot the servers if needed.
### Ansible Configuration
1. Open `roles/patch_os/vars/main.yml` and update variables if desired.
2. Open `inventory` file and fill in the hostnames / ips for the servers that will be fulfilling the agent, webapp, db, and guac roles.
3. Run the playbook.
`ansible-playbook -Kk -u [username] -i inventory patch_os.yml`
Ansible will prompt you for the ssh password and sudo password (will almost always be the same password).
Or, if you have ssh keys copied over to your servers and have NOPASSWD in sudoers, you can just run:
`ansible-playbook -u [username] -i inventory patch_os.yml`
## Helper playbooks
Using these playbooks assumes you have already gone through the installation process and set up your inventory file properly. These playbooks run against that inventory to help administrators:
* Uninstall Kasm Workspaces (uninstall_kasm.yml) - This will completely purge your Kasm Workspaces installation on all hosts. If using a remote database, that data will stay intact; no remote queries will be executed. Example Usage: `ansible-playbook -i inventory uninstall_kasm.yml`
* Stop Kasm Workspaces (stop_kasm.yml) - This will stop Kasm Workspaces services on all hosts defined in inventory, or can optionally be limited to a zone, group or single server by passing the `--limit` flag. Example Usage: `ansible-playbook -i inventory --limit zone1_agent_1 stop_kasm.yml`
* Start Kasm Workspaces (start_kasm.yml) - This will start Kasm Workspaces services on all hosts defined in inventory, or can optionally be limited to a zone, group or single server by passing the `--limit` flag. Example Usage: `ansible-playbook -i inventory --limit zone1_agent_1 start_kasm.yml`
* Restart Kasm Workspaces (restart_kasm.yml) - This will restart Kasm Workspaces services on all hosts defined in inventory, or can optionally be limited to a zone, group or single server by passing the `--limit` flag. Example Usage: `ansible-playbook -i inventory --limit zone1_agent_1 restart_kasm.yml`
* Backup Database (backup_db.yml) - This will make a backup of a managed Docker based db server; this playbook will not function with a remote db type installation (see the example settings after this list). Example Usage: `ansible-playbook -i inventory backup_db.yml`
  * Modify `remote_backup_dir` in inventory to change the path where the remote server stores the backups
  * Modify `retention_days` in inventory to change the number of days that backups are retained on the db host
  * Set `local_backup_dir` to define a path on the local ansible host where backups will be stored; if unset, backups will only exist on the remote server
* OS Patching (patch_os.yml) - This will update system packages and reboot all hosts defined in inventory, or can optionally be limited to a zone, group or single server by passing the `--limit` flag. Example Usage: `ansible-playbook -i inventory --limit zone1_agent_1 patch_os.yml`
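For convenience, all of these settings live in the `vars:` section of the shipped inventory; a minimal sketch of the backup and patching related values (the values shown are the repository defaults):
```
  vars:
    # Directory where backups are placed on the db server
    remote_backup_dir: /srv/backup/kasm/
    # Number of days that backups are retained on the db host
    retention_days: 10
    # If this is uncommented, backups will be copied from the remote server to the local ansible host
    #local_backup_dir: backup/
    # Number of seconds to wait for a system to come up after reboot (used when patching)
    reboot_timeout_seconds: 600
```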

ansible.cfg (new file, 2 lines)

@ -0,0 +1,2 @@
[ssh_connection]
ssh_args = -o StrictHostKeyChecking=accept-new


@ -1,5 +1,5 @@
- hosts:
    - db
    - zone1_db_1
  roles:
    - backup_db


@ -1,2 +0,0 @@
# This generally should be (1g x number_of_sessions / number_of_agents)
desired_swap_size: 5g


@ -1 +0,0 @@
desired_swap_size: 4g


@ -1 +0,0 @@
desired_swap_size: 4g


@ -1 +0,0 @@
desired_swap_size: 4g


@ -1,8 +1,4 @@
- hosts:
    - db
    - web
    - agent
    - guac
- hosts: all
  roles:
    - install_common
  any_errors_fatal: true

inventory (150 changed lines)

@ -1,8 +1,142 @@
[web]
ubuntu18-web
[db]
ubuntu18-db
[agent]
ubuntu18-agent
[guac]
ubuntu18-guac
##################
# Host inventory #
##################
all:
  children:
    # First zone
    # Optionally modify names to reference zone location IE east
    zone1:
      children:
        # The database can only be defined once, if initializing a remote database omit this host
        # It will always be named zone1_db_1 if used, regardless of zone name modifications
        zone1_db:
          hosts:
            zone1_db_1:
              ansible_host: zone1_db_hostname
              ansible_port: 22
              ansible_ssh_user: ubuntu
              ansible_ssh_private_key_file: ~/.ssh/id_rsa
        zone1_web:
          hosts:
            zone1_web_1:
              ansible_host: zone1_web_hostname
              ansible_port: 22
              ansible_ssh_user: ubuntu
              ansible_ssh_private_key_file: ~/.ssh/id_rsa
        zone1_agent:
          hosts:
            zone1_agent_1:
              ansible_host: zone1_agent_hostname
              ansible_port: 22
              ansible_ssh_user: ubuntu
              ansible_ssh_private_key_file: ~/.ssh/id_rsa
        zone1_guac:
          hosts:
            zone1_guac_1:
              ansible_host: zone1_guac_hostname
              ansible_port: 22
              ansible_ssh_user: ubuntu
              ansible_ssh_private_key_file: ~/.ssh/id_rsa
        # Optional Web Proxy server
        #zone1_proxy:
          #hosts:
            #zone1_proxy_1:
              #ansible_host: zone1_proxy_hostname
              #ansible_port: 22
              #ansible_ssh_user: ubuntu
              #ansible_ssh_private_key_file: ~/.ssh/id_rsa
    # Second zone
    # Optionally modify names to reference zone location IE west
    #zone2:
      #children:
        #zone2_web:
          #hosts:
            #zone2_web_1:
              #ansible_host: zone2_web_hostname
              #ansible_port: 22
              #ansible_ssh_user: ubuntu
              #ansible_ssh_private_key_file: ~/.ssh/id_rsa
        #zone2_agent:
          #hosts:
            #zone2_agent_1:
              #ansible_host: zone2_agent_hostname
              #ansible_port: 22
              #ansible_ssh_user: ubuntu
              #ansible_ssh_private_key_file: ~/.ssh/id_rsa
        #zone2_guac:
          #hosts:
            #zone2_guac_1:
              #ansible_host: zone2_guac_hostname
              #ansible_port: 22
              #ansible_ssh_user: ubuntu
              #ansible_ssh_private_key_file: ~/.ssh/id_rsa
        # Optional Web Proxy server
        #zone2_proxy:
          #hosts:
            #zone2_proxy_1:
              #ansible_host: zone2_proxy_hostname
              #ansible_port: 22
              #ansible_ssh_user: ubuntu
              #ansible_ssh_private_key_file: ~/.ssh/id_rsa
  ##############################
  # Installation configuration #
  ##############################
  vars:
    ## Credentials ##
    # If left commented secure passwords will be generated during the installation and substituted in upon completion
    #user_password: {{ user_password }}
    #admin_password: {{ admin_password }}
    #database_password: {{ database_password }}
    #redis_password: {{ redis_password }}
    #manager_token: {{ manager_token }}
    #registration_token: {{ registration_token }}
    ## Scaling Configuration ##
    # Stick scaled agents/guacs/proxies to a default web server
    # IE when set to 1 all additional hosts in that zone will use zone1_web_1 as their webserver
    # Set to false to scale out as a linked group IE zone1_web_1/zone1_agent_1/zone1_guac_1/zone1_proxy_1
    default_web: 1
    ## Zone configuration ##
    # Define multiple zones here if defined in inventory above
    zones:
      - zone1
      #- zone2
    ## General settings ##
    proxy_port: 443
    start_docker_on_boot: true
    desired_swap_size: 5g # Default agent swap size for all agents
    ## PostgreSQL settings ##
    ##############################################
    # PostgreSQL remote DB connection parameters #
    ##############################################
    # The following parameters need to be set only once on database initialization
    init_remote_db: false # swap to true to activate
    #database_master_user: postgres
    #database_master_password: changeme
    database_hostname: false # swap to a string to activate
    # The remaining variables can be modified to suit your needs or left as is in a normal deployment
    database_user: kasmapp
    database_name: kasm
    database_port: 5432
    database_ssl: true
    ## redis settings ##
    # redis connection parameters; if hostname is set the web role will use a remote redis server
    redis_hostname: false
    ## Database Backup settings ##
    # These settings will only work when zone1_db_1 is set in host inventory, this does not support remote database type installations
    # Directory where backups are placed on db server
    remote_backup_dir: /srv/backup/kasm/
    # Number of days that backups are retained on the db host
    retention_days: 10
    # If this is uncommented, backups will be copied from remote server to the local ansible host
    #local_backup_dir: backup/
    # Number of seconds to wait for system to come up after reboot
    # Change this if you have a system that normally takes a long time to boot
    reboot_timeout_seconds: 600


@ -1,10 +1,6 @@
- import_playbook: stop_kasm.yml
- hosts:
    - db
    - web
    - agent
    - guac
- hosts: all
  roles:
    - patch_os


@ -1,8 +0,0 @@
# Directory where backups are placed on db server
remote_backup_dir: /srv/backup/kasm/
# Number of days that logs backups are retained on db host
retention_days: 10
# If this is uncommented, backups will be copied from remote server to the local ansible host
#local_backup_dir: backup/


@ -0,0 +1,23 @@
- name: Add additional zones
  when: i != 0
  loop: "{{ zones }}"
  loop_control:
    index_var: i
  blockinfile:
    marker: "# {mark} ANSIBLE MANAGED BLOCK {{ item }}"
    state: present
    insertafter: EOF
    dest: "{{ tempdir.path }}/kasm_release/conf/database/seed_data/default_properties.yaml"
    content: |
      - allow_origin_domain: $request_host$
        load_strategy: least_load
        primary_manager_id: null
        prioritize_static_agents: true
        proxy_connections: true
        proxy_hostname: $request_host$
        proxy_path: desktop
        proxy_port: {{ proxy_port }}
        search_alternate_zones: true
        upstream_auth_address: $request_host$
        zone_id: "${uuid:zone_id:{{ i + 1 }}}"
        zone_name: {{ item }}


@ -9,19 +9,18 @@
  delay: 5
- name: Install agent role
  command: "bash {{ tempdir.path }}/kasm_release/install.sh -S agent -e -L {{ proxy_port }} -p {{ target_ip }} -m {{ web_ip }} -M {{ manager_token }} {{ '-s ' ~ service_images_copy.dest if service_images_file }} {{ '-w ' ~ workspace_images_copy.dest if workspace_images_file }}"
  command: >
    bash {{ tempdir.path }}/kasm_release/install.sh
    --role agent
    --accept-eula
    --proxy-port {{ proxy_port }}
    --public-hostname {{ target_ip }}
    --manager-hostname {{ web_ip }}
    --manager-token {{ manager_token }}
    {{ '-s ' ~ service_images_copy.dest if service_images_file }}
    {{ '-w ' ~ workspace_images_copy.dest if workspace_images_file }}
  register: install_output
  become: true
  retries: 20
  delay: 10
  until: install_output is success or ('Failed to lock apt for exclusive operation' not in install_output.stderr and '/var/lib/dpkg/lock' not in install_output.stderr)
  when: test is not defined
- name: Install agent role - test
  command: "bash {{ tempdir.path }}/kasm_release/install.sh -S agent -I -e -L {{ proxy_port }} -p {{ target_ip }} -m {{ web_ip }} -M {{ manager_token }} {{ '-s ' ~ service_images_copy.dest if service_images_file }} {{ '-w ' ~ workspace_images_copy.dest if workspace_images_file }}"
  register: install_output
  become: true
  retries: 20
  delay: 10
  until: install_output is success or ('Failed to lock apt for exclusive operation' not in install_output.stderr and '/var/lib/dpkg/lock' not in install_output.stderr)
  when: test is defined


@ -1,17 +1,23 @@
- name: Install database role
  command: "bash {{ tempdir.path }}/kasm_release/install.sh -S db -e -L {{proxy_port}} -Q {{database_password}} -R {{redis_password}} -U {{user_password}} -P {{admin_password}} -M {{manager_token}} --registration-token {{registration_token}} {{ '-s ' ~ service_images_copy.dest if service_images_file }} {{ '-w ' ~ workspace_images_copy.dest if workspace_images_file }}"
  command: >
    bash {{ tempdir.path }}/kasm_release/install.sh
    --role db
    --accept-eula
    --proxy-port {{ proxy_port }}
    --database-user {{ database_user }}
    --database-name {{ database_name }}
    --db-password {{ database_password }}
    --redis-password {{ redis_password }}
    --user-password {{ user_password }}
    --admin-password {{ admin_password }}
    --manager-token {{ manager_token }}
    --registration-token {{ registration_token }}
    --server-zone {{ zones[0] }}
    {{ '--no-db-ssl ' if not database_ssl }}
    {{ '--offline-service ' ~ service_images_copy.dest if service_images_file }}
    {{ '--offline-workspaces ' ~ workspace_images_copy.dest if workspace_images_file }}
  register: install_output
  become: true
  retries: 20
  delay: 10
  until: install_output is success or ('Failed to lock apt for exclusive operation' not in install_output.stderr and '/var/lib/dpkg/lock' not in install_output.stderr)
  when: test is not defined
- name: Install database role - test
  command: "bash {{ tempdir.path }}/kasm_release/install.sh -S db -e -L {{proxy_port}} -I -Q {{database_password}} -R {{redis_password}} -U {{user_password}} -P {{admin_password}} -M {{manager_token}} --registration-token {{registration_token}} {{ '-s ' ~ service_images_copy.dest if service_images_file }} {{ '-w ' ~ workspace_images_copy.dest if workspace_images_file }}"
  register: install_output
  become: true
  retries: 20
  delay: 10
  until: install_output is success or ('Failed to lock apt for exclusive operation' not in install_output.stderr and '/var/lib/dpkg/lock' not in install_output.stderr)
  when: test is defined


@ -1,4 +1,4 @@
# Setup default creds if users don't set them in the vars/main.yml
# Setup default creds if users don't set them in the inventory
- set_fact:
    database_password: "{{ lookup('password', '/dev/null chars=ascii_letters,digits length=16') }}"


@ -9,7 +9,15 @@
  delay: 5
- name: Install guac role
  command: "bash {{ tempdir.path }}/kasm_release/install.sh -S guac -e -L {{ proxy_port }} --api-hostname {{ web_ip }} --public-hostname {{ guac_ip }} --registration-token {{ registration_token }} {{ '-s ' ~ service_images_copy.dest if service_images_file }}"
  command: >
    bash {{ tempdir.path }}/kasm_release/install.sh
    --role guac
    --accept-eula
    --proxy-port {{ proxy_port }}
    --api-hostname {{ web_ip }}
    --public-hostname {{ target_ip }}
    --registration-token {{ registration_token }}
    {{ '-s ' ~ service_images_copy.dest if service_images_file }}
  register: install_output
  become: true
  retries: 20


@ -10,12 +10,33 @@
    kasm_installed: "{{ kasm_path.stat.exists }}"
- set_fact:
    db_ip: "{{ hostvars[groups['db'][0]]['ansible_default_ipv4']['address'] }}"
    web_ip: "{{ hostvars[groups['web'][0]]['ansible_default_ipv4']['address'] }}"
    guac_ip: "{{ hostvars[groups['guac'][0]]['ansible_default_ipv4']['address'] }}"
    web_ip: "{{ hostvars[group_names[0] + '_web_' + inventory_hostname.split('_')[2]].ansible_default_ipv4.address }}"
    # IP of the host that ansible is being run against
    target_ip: "{{ ansible_default_ipv4.address }}"
  when: not default_web
- set_fact:
    web_ip: "{{ hostvars[group_names[0] + '_web_' + default_web|string].ansible_default_ipv4.address }}"
    # IP of the host that ansible is being run against
    target_ip: "{{ ansible_default_ipv4.address }}"
  when: default_web
- set_fact:
    db_ip: "{{ hostvars['zone1_db_1'].ansible_default_ipv4.address }}"
  when: not database_hostname
- set_fact:
    db_ip: "{{ database_hostname }}"
  when: database_hostname
- set_fact:
    redis_ip: "{{ hostvars['zone1_db_1'].ansible_default_ipv4.address }}"
  when: not redis_hostname
- set_fact:
    redis_ip: "{{ redis_hostname }}"
  when: redis_hostname
- name: Override manager hostname if configured
  set_fact:
    web_ip: "{{ manager_hostname }}"
@ -25,25 +46,34 @@
  stat:
    path: /mnt/kasm.swap
  register: kasm_swapfile
  when:
    - "'agent' in group_names[1].split('_')"
- name: Get current swapsize in bytes
  # Meminfo outputs in Kb for some reason so we convert to bytes
  shell: cat /proc/meminfo | grep SwapTotal | awk '{print $2 * 1024}'
  register: current_swap_size
  changed_when: false
  when:
    - "'agent' in group_names[1].split('_')"
- set_fact:
    # We only want to make a swapfile large enough to make up the difference between
    # the current swapsize and our desired size.
    new_swap_size: "{{ desired_swap_size | human_to_bytes - current_swap_size.stdout | int }}"
  when:
    - "'agent' in group_names[1].split('_')"
- debug:
    var: new_swap_size
  when:
    - "'agent' in group_names[1].split('_')"
- name: Run swap tasks
  include_tasks:
    file: mkswap.yml
  when:
    - "'agent' in group_names[1].split('_')"
    - new_swap_size | int > 0
    - not kasm_swapfile.stat.exists
@ -65,32 +95,54 @@
  when:
    - not kasm_installed
- name: Add additional zones tasks
  include_tasks:
    file: add_zones.yml
  when:
    - not kasm_installed
- name: Run Kasm db install tasks
  include_tasks:
    file: db_install.yml
  when:
    - "'db' in group_names"
  when:
    - "'db' in group_names[1].split('_')"
    - not kasm_installed
- name: Run remote db init tasks
  include_tasks:
    file: remote_db_init.yml
  when:
    - init_remote_db
    - database_hostname
    - "'web' in group_names[1].split('_')"
    - not kasm_installed
- name: Run Kasm web install tasks
  include_tasks:
    file: web_install.yml
  when:
    - "'web' in group_names"
  when:
    - "'web' in group_names[1].split('_')"
    - not kasm_installed
- name: Run Kasm agent install tasks
  include_tasks:
    file: agent_install.yml
  when:
    - "'agent' in group_names"
  when:
    - "'agent' in group_names[1].split('_')"
    - not kasm_installed
- name: Run Kasm guac install tasks
  include_tasks:
    file: guac_install.yml
  when:
    - "'guac' in group_names"
    - "'guac' in group_names[1].split('_')"
    - not kasm_installed
- name: Run Kasm proxy install tasks
  include_tasks:
    file: proxy_install.yml
  when:
    - "'proxy' in group_names[1].split('_')"
    - not kasm_installed
- name: enable the docker service to run at boot
@ -116,3 +168,36 @@
- "user@kasm.local password: {{ user_password }}"
- "admin@kasm.local password: {{ admin_password }}"
run_once: true
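# The next three tasks persist the generated credentials: the inventory is re-rendered against itself so the
# commented "{{ }}" placeholders pick up the generated values, the placeholder lines are then uncommented,
# and init_remote_db is flipped back to false after a remote database has been initialized.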
- name: Write credentials to inventory
  run_once: true
  delegate_to: localhost
  ansible.builtin.template:
    src: "{{ inventory_file }}"
    dest: "{{ inventory_file }}"
- name: Set credentials to active
  run_once: true
  delegate_to: localhost
  ansible.builtin.replace:
    dest: "{{ inventory_file }}"
    regexp: "{{ item.from }}"
    replace: "{{ item.to }}"
  loop:
    - {from: "#user_password", to: "user_password"}
    - {from: "#admin_password", to: "admin_password"}
    - {from: "#database_password", to: "database_password"}
    - {from: "#redis_password", to: "redis_password"}
    - {from: "#manager_token", to: "manager_token"}
    - {from: "#registration_token", to: "registration_token"}
- name: Turn off remote db init
  run_once: true
  delegate_to: localhost
  ansible.builtin.replace:
    dest: "{{ inventory_file }}"
    regexp: "init_remote_db: true"
    replace: "init_remote_db: false"
  when:
    - init_remote_db
    - database_hostname


@ -0,0 +1,23 @@
- name: Check connection from proxy to webserver
  uri:
    url: "https://{{ web_ip }}:{{ proxy_port }}/api/__healthcheck"
    timeout: 5
    validate_certs: false
  register: _result
  until: _result.status == 200
  retries: 7
  delay: 5
- name: Install proxy role
  command: >
    bash {{ tempdir.path }}/kasm_release/install.sh
    --role proxy
    --accept-eula
    --proxy-port {{ proxy_port }}
    --api-hostname {{ web_ip }}
    {{ '-s ' ~ service_images_copy.dest if service_images_file }}
  register: install_output
  become: true
  retries: 20
  delay: 10
  until: install_output is success or ('Failed to lock apt for exclusive operation' not in install_output.stderr and '/var/lib/dpkg/lock' not in install_output.stderr)


@ -0,0 +1,39 @@
- name: Check connection from web to postgres on db server
  wait_for:
    port: 5432
    host: "{{ db_ip }}"
    timeout: 60
- name: Check connection from web to redis on db server
  wait_for:
    port: 6379
    host: "{{ redis_ip }}"
    timeout: 60
- name: Init remote Database
  expect:
    timeout: 600
    command: >
      bash {{ tempdir.path }}/kasm_release/install.sh
      --role init_remote_db
      --accept-eula
      --proxy-port {{ proxy_port }}
      --db-hostname {{ database_hostname }}
      --db-password {{ database_password }}
      --database-user {{ database_user }}
      --database-name {{ database_name }}
      --db-master-user {{ database_master_user }}
      --db-master-password {{ database_master_password }}
      --db-port {{ database_port }}
      --server-zone {{ zones[0] }}
      --manager-token {{ manager_token }}
      --registration-token {{ registration_token }}
      --redis-password {{ redis_password }}
      --user-password {{ user_password }}
      --admin-password {{ admin_password }}
      {{ '--no-db-ssl ' if not database_ssl }}
      {{ '--offline-service ' ~ service_images_copy.dest if service_images_file }}
    responses:
      Continue(?i): "y"
  run_once: true
  become: true


@ -7,11 +7,27 @@
- name: Check connection from web to redis on db server
  wait_for:
    port: 6379
    host: "{{ db_ip }}"
    host: "{{ redis_ip }}"
    timeout: 60
- name: Install web role
  command: "bash {{ tempdir.path }}/kasm_release/install.sh -S app -e -L {{ proxy_port }} -q {{ db_ip }} -Q {{ database_password }} -R {{ redis_password }} -n {{ target_ip }} {{ '-s ' ~ service_images_copy.dest if service_images_file }} {{ '-w ' ~ workspace_images_copy.dest if workspace_images_file }}"
  command: >
    bash {{ tempdir.path }}/kasm_release/install.sh
    --role app
    --accept-eula
    --proxy-port {{ proxy_port }}
    --db-hostname {{ db_ip }}
    --db-password {{ database_password }}
    --redis-password {{ redis_password }}
    --api-hostname {{ target_ip }}
    --database-user {{ database_user }}
    --database-name {{ database_name }}
    --db-port {{ database_port }}
    --server-zone {{ group_names[0] }}
    --redis-hostname {{ redis_ip }}
    {{ '--no-db-ssl ' if not database_ssl }}
    {{ '--offline-service ' ~ service_images_copy.dest if service_images_file }}
    {{ '--offline-workspaces ' ~ workspace_images_copy.dest if workspace_images_file }}
  register: install_output
  become: true
  retries: 20


@ -1,23 +0,0 @@
# If you want custom passwords change them below, otherwise they will be auto generated and displayed
# in a message at the end of the run.
# Password for user@kasm.local in webui
#user_password: changeme
# Password for admin@kasm.local in webui
#admin_password: changeme
# Password that webapp uses to connect to postgres
#database_password: changeme
# Password that webapp uses to connect to redis
#redis_password: changeme
# Token that agents use to connect to webapp
#manager_token: changeme
# Port to listen on
proxy_port: 443
# Start docker daemon at boot
start_docker_on_boot: true


@ -1,3 +0,0 @@
# Number of seconds to wait for system to come up after reboot
# Change this if you have a system that normally takes a long time to boot
reboot_timeout_seconds: 600


@ -62,6 +62,10 @@
docker images kasmweb/manager-private -q
docker images kasmweb/api -q
docker images kasmweb/api-private -q
docker images kasmweb/guac -q
docker images kasmweb/guac-private -q
docker images kasmweb/proxy -q
docker images kasmweb/proxy-private -q
docker images redis -q
docker images postgres -q


@ -1,8 +1,4 @@
- hosts:
    - db
    - web
    - agent
    - guac
- hosts: all
  serial: 1
  gather_facts: no
  tasks:


@ -1,7 +1,4 @@
- hosts:
    - agent
    - web
    - db
- hosts: all
  serial: 1
  gather_facts: no
  tasks:


@ -1,7 +1,3 @@
- hosts:
    - db
    - web
    - agent
    - guac
- hosts: all
  roles:
    - uninstall