Creating VMs with Terraform on OSK for ResOps


  1. OpenStack CLI installed
  2. The name and location of the openrc file are hardcoded: ~/Downloads/
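Because the openrc location is hardcoded, a quick pre-flight check helps catch a missing file before any script runs. A minimal sketch, assuming a placeholder filename resops-openrc.sh under ~/Downloads/ (the real name depends on your tenancy):

```shell
#!/bin/sh
# Pre-flight check: confirm the openrc file exists, then load it.
# The path and filename below are assumptions; adjust to your tenancy's file.
OPENRC="${OPENRC:-$HOME/Downloads/resops-openrc.sh}"
if [ -f "$OPENRC" ]; then
  . "$OPENRC"                                      # exports the OS_* variables
  echo "openrc loaded, auth URL: ${OS_AUTH_URL:-unset}"
else
  echo "openrc not found at $OPENRC" >&2
fi
```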


  • Log onto OpenStack Horizon. You can see the overview of the tenancy, where no resops cluster has been created yet.
  • Inspect the variable values. Make note of important variables such as cluster_name, image, network_name, floatingip_pool, number_of_bastions, number_of_k8s_masters, number_of_k8s_nodes and number_of_k8s_nodes_no_floating_ip. They describe the structure of the cluster.
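For illustration, such variables are typically set in a tfvars-style file of the following shape (every value below is made up, not the one used in the course):

```hcl
cluster_name    = "resops"
image           = "Ubuntu-18.04"   # Glance image name (example)
network_name    = "resops-network"
floatingip_pool = "ext-net"        # pool that floating IPs are drawn from

number_of_bastions                 = 1
number_of_k8s_masters              = 1
number_of_k8s_nodes                = 0  # nodes that get floating IPs
number_of_k8s_nodes_no_floating_ip = 2
```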

  • Inspect the sample script. Note that this script cannot run anywhere other than on my laptop: it assumes the openrc file is at a certain location. Adapt the Bash script as needed.

  • Inspect the Terraform script for OpenStack. The script is well written, follows many best practices and is carefully modularized.

    • Default values are provided at the top level.
    • The infrastructure is divided into three modules at the top level: network, ips and compute.
    • Each module is organized with main.tf, variables.tf (input) and outputs.tf (output), following the standard Terraform module layout. The resource descriptions are in main.tf; the corresponding APIs are documented in the Terraform OpenStack provider reference.
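The modular layout above can be pictured with a top-level wiring sketch like this (module names from the text; source paths and arguments are illustrative, not the actual code):

```hcl
module "network" {
  source       = "./modules/network"
  network_name = var.network_name
}

module "ips" {
  source          = "./modules/ips"
  floatingip_pool = var.floatingip_pool
}

module "compute" {
  source       = "./modules/compute"
  cluster_name = var.cluster_name
  network_id   = module.network.network_id  # one module's output feeds another
}
```

Each module exposes its results as outputs, which is how values such as router_id and private_subnet_id end up in the final terraform output.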
  • Run the script under ~/IdeaProjects/tsi-ccdoc/tsi-cc/ResOps/scripts/kubespray/ in a terminal window. At the end, resources are created according to the configuration under ~/IdeaProjects/tsi-ccdoc/tsi-cc/ResOps/scripts/kubespray/:

    Apply complete! Resources: 20 added, 0 changed, 0 destroyed.
    bastion_fips = [
    floating_network_id = e25c3173-bb5c-4bbc-83a7-f0551099c8cd
    k8s_master_fips = []
    k8s_node_fips = []
    private_subnet_id = e457fd0b-c02d-4287-b049-4f143a32b2fb
    router_id = 470c37d6-1ec6-49bd-9a9e-cc7712dfcf07
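Output like the above appears at the end of the standard Terraform workflow, which a wrapper script of this kind typically drives; a minimal sketch (the working directory and openrc filename are assumptions):

```shell
cd ~/IdeaProjects/tsi-ccdoc/tsi-cc/ResOps/scripts/kubespray   # assumed working directory
. ~/Downloads/openrc.sh             # placeholder openrc filename
terraform init                      # fetch the OpenStack provider plugin
terraform plan -out=resops.plan     # preview the resources to be added
terraform apply resops.plan         # create them (20 added above)
terraform output                    # print bastion_fips, router_id, etc.
```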
  • Log back onto OpenStack Horizon to see that the VMs (a bastion and 2 nodes without floating IPs), the private network/subnet (resops) and the router (resops-1-router) have been created.
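The same check can be made from a terminal with the OpenStack CLI (after sourcing the openrc file); these are standard subcommands:

```shell
openstack server list        # should list the bastion and the two nodes
openstack network list       # should include the resops private network
openstack router list        # should include resops-1-router
openstack floating ip list   # the bastion's floating IP
```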

  • Run the script under ~/IdeaProjects/tsi-ccdoc/tsi-cc/ResOps/scripts/kubespray/ in a terminal window. It configures the VMs ready for the practicals, taking 6-7 minutes per VM. This script requires the OpenStack CLI to be installed locally; Kubespray does not provide enough information to modify the VMs remotely.
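For reference, Kubespray itself is driven by Ansible, so a wrapper script of this kind typically boils down to a playbook run of the following shape (the inventory path is an assumption):

```shell
# Run from a Kubespray checkout; the inventory path is a placeholder.
ansible-playbook -i inventory/resops/hosts \
    --become --become-user=root \
    -e ansible_ssh_private_key_file=~/.ssh/id_rsa \
    cluster.yml
```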
  • Access the VMs via SSH directly if they have public IPs attached. Otherwise, use an SSH tunnel via the bastion server, for example: ssh -i ~/.ssh/id_rsa -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ubuntu@" ubuntu@
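Rather than typing the ProxyCommand every time, the tunnel can be kept in ~/.ssh/config; all hostnames and addresses below are placeholders:

```
# ~/.ssh/config -- example entries with placeholder addresses
Host resops-bastion
    HostName 193.62.0.10            # bastion floating IP (example value)
    User ubuntu
    IdentityFile ~/.ssh/id_rsa

# private-subnet nodes, reached through the bastion
Host 10.0.0.*
    User ubuntu
    IdentityFile ~/.ssh/id_rsa
    ProxyJump resops-bastion        # shorthand for the ProxyCommand above (OpenSSH 7.3+)
```

With this in place, ssh 10.0.0.5 hops through the bastion automatically.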
  • After the Docker practical and before the Kubernetes practical, run the script under ~/IdeaProjects/tsi-ccdoc/tsi-cc/ResOps/scripts/kubespray/. This avoids users seeing an overwhelming number of Docker processes for Minikube in the early practical. The script configures Minikube so that users do not have to do it manually.


  • -o UserKnownHostsFile=/dev/null disables reading from and writing to ~/.ssh/known_hosts. This opens a security hole for man-in-the-middle attacks. The option -o StrictHostKeyChecking=no would not work with -o ProxyCommand, as the host keys still need to be exchanged. It is better security practice to edit the entries in ~/.ssh/known_hosts when duplicate host keys appear, but that is certainly less convenient.
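A middle ground is to delete only the stale entry with ssh-keygen -R and let SSH record the fresh key on the next connection. The demo below works on a scratch file so the real ~/.ssh/known_hosts is untouched; the hostname and key are throwaways:

```shell
#!/bin/sh
# Remove one host's entry from a known_hosts file with ssh-keygen -R.
TMPD=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -f "$TMPD/key" -q               # throwaway key pair
printf 'bastion.example.org %s\n' "$(cut -d' ' -f1,2 "$TMPD/key.pub")" > "$TMPD/known_hosts"
ssh-keygen -R bastion.example.org -f "$TMPD/known_hosts"    # drops the entry, keeps a .old backup
grep -q 'bastion' "$TMPD/known_hosts" || echo "stale entry removed"
```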
  • You can run the script under ~/IdeaProjects/tsi-ccdoc/tsi-cc/ResOps/scripts/kubespray/ to remove the cluster completely.
  • You can also re-run the script under ~/IdeaProjects/tsi-ccdoc/tsi-cc/ResOps/scripts/kubespray/ with different parameter values in ~/IdeaProjects/tsi-ccdoc/tsi-cc/ResOps/scripts/kubespray/ to modify an existing cluster.
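Teardown and modification both go through the same Terraform workflow; sketched below (run from the kubespray scripts directory after sourcing the openrc file):

```shell
# Remove the whole cluster:
terraform destroy

# Or modify it: edit the variable values (e.g. number_of_k8s_nodes),
# then re-apply -- Terraform changes only the affected resources:
terraform apply
```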