This set of playbooks prepares a small Kubernetes infrastructure in Google Compute Engine. All steps are based on the "Kubernetes The Hard Way" tutorial for Kubernetes 1.8.0.
- a configured Google Compute Engine account
- `gcloud` authorized to use the target project and zone as defaults (check: `gcloud info` should output something) — see the sketch right after this list
- a service account for accessing GCE instances
- `git clone https://github.com/vbrednikov/kthw-ansible.git`
- `pip install -r requirements.txt`
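
If the `gcloud` defaults are not set yet, something like the following should do. This is only a sketch: the project ID and zone are placeholders, not values this repository requires.

```sh
gcloud auth login                        # authorize gcloud for your account
gcloud config set project infra-XXXXXX   # placeholder: your own project ID
gcloud config set compute/region europe-west1
gcloud config set compute/zone europe-west1-b
gcloud info                              # should print the account, project, and default zone
```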
See the official Google Cloud Platform Guide for details on configuring access to GCE. In short, the steps are:
- Download `gce.py` and `gce.ini` from the Ansible contrib inventory folder (see the download sketch after this list).
- In the Google Cloud IAM service accounts section of your project, create an account named "inventory" with the "Project viewer" role.
- Write down the Service Account ID (e.g., `inventory@infra-XXXXXX.iam.gserviceaccount.com`).
- Enable the "Furnish a new private key" checkbox and select the "JSON" key format.
- Open the downloaded JSON file in any text editor, extract the value of the private key (from `-----BEGIN PRIVATE KEY-----` to `-----END PRIVATE KEY-----`) into a new file named `~/ansible_gce/inventory.pem`, replace each `\n` string with a real newline, and save the file (or use the one-liner after this list).
- In `gce.ini`, edit the following variables (a sample snippet follows this list):
  - `gce_service_account_email_address`: the Service Account ID
  - `gce_service_account_pem_file_path`: the full path to `~/ansible_gce/inventory.pem`
  - `gce_project_id`: the Google Cloud ID of your project
- `chmod +x ./gce.py` and run it as `./gce.py --list --pretty`. If everything is done correctly, JSON-formatted data about your environment will be printed.
- Create the file `inventory/private-vars.yml` with the following contents (based on the same settings as in `gce.ini`):
```yaml
---
all:
  vars:
    service_account_email: ansible@docker-181920.iam.gserviceaccount.com
    credentials_file: /Users/vbrednikov/otus-devops/kthw2/docker-7816912a55f9.json
    project_id: docker-181920
    region: europe-west1
    zone: europe-west1-b
    disk_size: 20
```
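
The two inventory files can be fetched directly from GitHub. This is a sketch: the URLs assume the contrib inventory scripts still live in the `stable-2.9` branch of the Ansible repository (they were removed from later branches).

```sh
# Assumed location: contrib inventory scripts in Ansible's stable-2.9 branch
curl -O https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/gce.py
curl -O https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/gce.ini
```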
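
Instead of editing the key by hand, the private key can be extracted with a Python one-liner; parsing the JSON turns the embedded `\n` escapes into real newlines automatically. The name `inventory-key.json` is a placeholder for whatever the downloaded key file is called:

```sh
mkdir -p ~/ansible_gce
# Read the "private_key" field from the service account key file
python -c 'import json,sys; sys.stdout.write(json.load(open(sys.argv[1]))["private_key"])' \
    inventory-key.json > ~/ansible_gce/inventory.pem
```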
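
For reference, the edited part of `gce.ini` might look like the following. This assumes the stock `[gce]` section layout, and all values are placeholders built from the examples above:

```ini
[gce]
gce_service_account_email_address = inventory@infra-XXXXXX.iam.gserviceaccount.com
gce_service_account_pem_file_path = /home/user/ansible_gce/inventory.pem
gce_project_id = infra-XXXXXX
```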
Please note that for the first playbook only this single file should be specified as the inventory, not the full `inventory/` folder:
```sh
ansible-playbook -i inventory/private-vars.yml 01_gce_instances.yml
ansible-playbook -i inventory/ 02_local_conf.yml
ansible-playbook -i inventory/ 03_configure_instances.yml
```
After all these steps, the local `kubectl` should be able to manage the k8s environment.
Run `kubectl get cs` and `kubectl get nodes` locally to check the status of the controllers and workers.
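
For example (`cs` is short for `componentstatuses`):

```sh
kubectl get componentstatuses   # health of the scheduler, controller manager, and etcd
kubectl get nodes               # worker nodes should be listed in the Ready state
```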
Finally, run some of the checks described on the Smoke Test page of the tutorial.