changes, it is recommended to add version = "…" constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
* provider.google: version = "~> 2.8"
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
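Following the suggestion above, the version constraint can be pinned directly in the provider block. A minimal sketch, using the interpolation syntax of this chapter and the same credentials file as below:
provider "google" {
  version = "~> 2.8"
  credentials = "${file("kubernetes_key.json")}"
  project = "node-cluster-243923"
  region = "europe-north1"
}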
Add a virtual machine:
essh@kubernetes-master:~/node-cluster$ cat main.tf
provider "google" {
  credentials = "${file("kubernetes_key.json")}"
  project = "node-cluster-243923"
  region = "europe-north1"
}
resource "google_compute_instance" "cluster" {
name = "cluster"
zone = "europe-north1-a"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "debian-cloud / debian-9"
}
}
network_interface {
network = "default"
access_config {}
}
essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# google_compute_instance.cluster will be created
+ resource "google_compute_instance" "cluster" {
+ can_ip_forward = false
+ cpu_platform = (known after apply)
+ deletion_protection = false
+ guest_accelerator = (known after apply)
+ id = (known after apply)
+ instance_id = (known after apply)
+ label_fingerprint = (known after apply)
+ machine_type = "f1-micro"
+ metadata_fingerprint = (known after apply)
+ name = "cluster"
+ project = (known after apply)
+ self_link = (known after apply)
+ tags_fingerprint = (known after apply)
+ zone = "europe-north1-a"
+ boot_disk {
+ auto_delete = true
+ device_name = (known after apply)
+ disk_encryption_key_sha256 = (known after apply)
+ source = (known after apply)
+ initialize_params {
+ image = "debian-cloud/debian-9"
+ size = (known after apply)
+ type = (known after apply)
}
}
+ network_interface {
+ address = (known after apply)
+ name = (known after apply)
+ network = "default"
+ network_ip = (known after apply)
+ subnetwork = (known after apply)
+ subnetwork_project = (known after apply)
+ access_config {
+ assigned_nat_ip = (known after apply)
+ nat_ip = (known after apply)
+ network_tier = (known after apply)
}
}
+ scheduling {
+ automatic_restart = (known after apply)
+ on_host_maintenance = (known after apply)
+ preemptible = (known after apply)
+ node_affinities {
+ key = (known after apply)
+ operator = (known after apply)
+ values = (known after apply)
}
}
}
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
google_compute_instance.cluster: Creating …
google_compute_instance.cluster: Still creating … [10s elapsed]
google_compute_instance.cluster: Creation complete after 11s [id=cluster]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
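At this point the newly created resource can also be inspected from Terraform's state. This command is not part of the original transcript, but it is standard Terraform usage:
sudo ./terraform show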
Add a public static IP address and SSH key to the node:
essh@kubernetes-master:~/node-cluster$ ssh-keygen -f node-cluster
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in node-cluster.
Your public key has been saved in node-cluster.pub.
The key fingerprint is:
SHA256:vUhDe7FOzykE5BSLOIhE7Xt9o+AwgM4ZKOCW4nsLG58 essh@kubernetes-master
The key's randomart image is:
+---[RSA 2048]----+
|   .o. +.        |
|  o. o. =.       |
|  * + o. =.      |
| = *. .. ...+ o  |
| B +. .  S *     |
| = + oo X +.     |
| o. =. + = +     |
| . = .... ..     |
|  ..E.           |
+----[SHA256]-----+
essh@kubernetes-master:~/node-cluster$ ls node-cluster.pub
node-cluster.pub
essh@kubernetes-master:~/node-cluster$ cat main.tf
provider "google" {
  credentials = "${file("kubernetes_key.json")}"
  project = "node-cluster-243923"
  region = "europe-north1"
}
resource "google_compute_address" "static-ip-address" {
  name = "static-ip-address"
}
resource "google_compute_instance" "cluster" {
  name = "cluster"
  zone = "europe-north1-a"
  machine_type = "f1-micro"
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }
  metadata = {
    ssh-keys = "essh:${file("./node-cluster.pub")}"
  }
  network_interface {
    network = "default"
    access_config {
      nat_ip = "${google_compute_address.static-ip-address.address}"
    }
  }
}
essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply
Let's check the SSH connection to the server:
essh@kubernetes-master:~/node-cluster$ ssh -i ./node-cluster essh@35.228.82.222
The authenticity of host '35.228.82.222 (35.228.82.222)' can't be established.
ECDSA key fingerprint is SHA256:o7ykujZp46IF+eu7SaIwXOlRRApiTY1YtXQzsGwO18A.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '35.228.82.222' (ECDSA) to the list of known hosts.
Linux cluster 4.9.0-9-amd64 #1 SMP Debian 4.9.168-1+deb9u2 (2019-05-13) x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
essh@cluster:~$ ls
essh@cluster:~$ exit
logout
Connection to 35.228.82.222 closed.
Install packages:
essh@kubernetes-master:~/node-cluster$ curl https://sdk.cloud.google.com | bash
essh@kubernetes-master:~/node-cluster$ exec -l $SHELL
essh@kubernetes-master:~/node-cluster$ gcloud init
Let's choose a project:
You are logged in as: [esschtolts@gmail.com].
Pick cloud project to use:
[1] agile-aleph-203917
[2] node-cluster-243923
[3] essch
[4] Create a new project
Please enter numeric choice or text value (must exactly match list
item):
Please enter a value between 1 and 4, or a value present in the list: 2
Your current project has been set to: [node-cluster-243923].
Let's choose a zone:
[50] europe-north1-a
Did not print [12] options.
Too many options [62]. Enter "list" at prompt to print choices fully.
Please enter numeric choice or text value (must exactly match list
item):
Please enter a value between 1 and 62, or a value present in the list: 50
essh@kubernetes-master:~/node-cluster$ PROJECT_ID="node-cluster-243923"
essh@kubernetes-master:~/node-cluster$ echo $PROJECT_ID
node-cluster-243923
essh@kubernetes-master:~/node-cluster$ export GOOGLE_APPLICATION_CREDENTIALS=$HOME/node-cluster/kubernetes_key.json
essh@kubernetes-master:~/node-cluster$ sudo docker-machine create --driver google --google-project $PROJECT_ID vm01
sudo GOOGLE_APPLICATION_CREDENTIALS=$HOME/node-cluster/kubernetes_key.json docker-machine create --driver google --google-project $PROJECT_ID vm01
// https://docs.docker.com/machine/drivers/gce/
// https://github.com/docker/machine/issues/4722
essh@kubernetes-master:~/node-cluster$ gcloud config list
[compute]
region = europe-north1
zone = europe-north1-a
[core]
account = esschtolts@gmail.com
disable_usage_reporting = False
project = node-cluster-243923
Your active configuration is: [default]
Let's add copying the file and executing the script:
essh@kubernetes-master:~/node-cluster$ cat main.tf
provider "google" {
  credentials = "${file("kubernetes_key.json")}"
  project = "node-cluster-243923"
  region = "europe-north1"
}
resource "google_compute_address" "static-ip-address" {
  name = "static-ip-address"
}
resource "google_compute_instance" "cluster" {
  name = "cluster"
  zone = "europe-north1-a"
  machine_type = "f1-micro"
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }
  metadata = {
    ssh-keys = "essh:${file("./node-cluster.pub")}"
  }
  network_interface {
    network = "default"
    access_config {
      nat_ip = "${google_compute_address.static-ip-address.address}"
    }
  }
}
resource "null_resource" "cluster" {
  triggers = {
    cluster_instance_ids = "${join(",", google_compute_instance.cluster.*.id)}"
  }
  connection {
    host = "${google_compute_address.static-ip-address.address}"
    type = "ssh"
    user = "essh"
    timeout = "2m"
    private_key = "${file("~/node-cluster/node-cluster")}"
    # agent = "false"
  }
  provisioner "file" {
    source = "client.js"
    destination = "~/client.js"
  }
  provisioner "remote-exec" {
    inline = [
      "cd ~ && echo 1 > test.txt"
    ]
  }
}
essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply
google_compute_address.static-ip-address: Creating …
google_compute_address.static-ip-address: Creation complete after 5s [id=node-cluster-243923/europe-north1/static-ip-address]
google_compute_instance.cluster: Creating …
google_compute_instance.cluster: Still creating … [10s elapsed]
google_compute_instance.cluster: Creation complete after 12s [id=cluster]
null_resource.cluster: Creating …
null_resource.cluster: Provisioning with 'file' …
null_resource.cluster: Provisioning with 'remote-exec' …
null_resource.cluster (remote-exec): Connecting to remote host via SSH …
null_resource.cluster (remote-exec): Host: 35.228.82.222
null_resource.cluster (remote-exec): User: essh
null_resource.cluster (remote-exec): Password: false
null_resource.cluster (remote-exec): Private key: true
null_resource.cluster (remote-exec): Certificate: false
null_resource.cluster (remote-exec): SSH Agent: false
null_resource.cluster (remote-exec): Checking Host Key: false
null_resource.cluster (remote-exec): Connected!
null_resource.cluster: Creation complete after 7s [id=816586071607403364]
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
esschtolts@cluster:~$ ls /home/essh/
client.js test.txt
essh@kubernetes-master:~/node-cluster$ sudo ./terraform destroy
[sudo] password for essh:
google_compute_address.static-ip-address: Refreshing state … [id=node-cluster-243923/europe-north1/static-ip-address]
google_compute_instance.cluster: Refreshing state … [id=cluster]
null_resource.cluster: Refreshing state … [id=816586071607403364]
Enter a value: yes
null_resource.cluster: Destroying … [id=816586071607403364]
null_resource.cluster: Destruction complete after 0s
google_compute_instance.cluster: Destroying … [id=cluster]
google_compute_instance.cluster: Still destroying … [id=cluster, 10s elapsed]
google_compute_instance.cluster: Still destroying … [id=cluster, 20s elapsed]
google_compute_instance.cluster: Destruction complete after 27s
google_compute_address.static-ip-address: Destroying … [id=node-cluster-243923/europe-north1/static-ip-address]
google_compute_address.static-ip-address: Destruction complete after 8s
To deploy the entire project, we can add it to a repository and then bring it up on the virtual machine by copying the installation script to that machine and running it:
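A minimal sketch of such a step as an additional remote-exec provisioner; the repository URL and the install.sh script name here are hypothetical, only to illustrate the idea:
provisioner "remote-exec" {
  inline = [
    "git clone https://github.com/ESSch/node-cluster.git",         # hypothetical repository
    "cd node-cluster && chmod +x install.sh && sudo ./install.sh"  # hypothetical install script
  ]
}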
Moving on to Kubernetes
In the minimal version, creating a cluster of three nodes looks like this:
essh@kubernetes-master:~/node-cluster/Kubernetes$ cat main.tf
provider "google" {
  credentials = "${file("../kubernetes_key.json")}"
  project = "node-cluster-243923"
  region = "europe-north1"
}
resource "google_container_cluster" "node-ks" {
  name = "node-ks"
  location = "europe-north1-a"
  initial_node_count = 3
}
essh@kubernetes-master:~/node-cluster/Kubernetes$ sudo ../terraform init
essh@kubernetes-master:~/node-cluster/Kubernetes$ sudo ../terraform apply
The cluster was created in 2:15. After I added two more zones, europe-north1-b and europe-north1-c, alongside europe-north1-a and set the number of instances created per zone to one, the cluster was created in 3:13, because, for higher availability, the nodes were spread across different data centers: europe-north1-a, europe-north1-b and europe-north1-c:
provider "google" {
credentials = "$ {file (" ../ kubernetes_key.json ")}"
project = "node-cluster-243923"
region = "europe-north1"
}
resource "google_container_cluster" "node-ks" {
name = "node-ks"
location = "europe-north1-a"
node_locations = ["europe-north1-b", "europe-north1-c"]
initial_node_count = 1
}
Now let's split our cluster in two: the control cluster with Kubernetes itself and the cluster for our PODs. Both will be distributed across three data centers. The cluster for our PODs can autoscale under load up to two nodes per zone (from three to six nodes in total):
essh@kubernetes-master:~/node-cluster/Kubernetes$ cat main.tf
provider "google" {
  credentials = "${file("../kubernetes_key.json")}"
  project = "node-cluster-243923"
  region = "europe-north1"
}
resource "google_container_cluster" "node-ks" {
  name = "node-ks"
  location = "europe-north1-a"
  node_locations = ["europe-north1-b", "europe-north1-c"]
  initial_node_count = 1
}
resource "google_container_node_pool" "node-ks-pool" {
  name = "node-ks-pool"
  cluster = "${google_container_cluster.node-ks.name}"
  location = "europe-north1-a"
  node_count = "1"
  node_config {
    machine_type = "n1-standard-1"
  }
  autoscaling {
    min_node_count = 1
    max_node_count = 2
  }
}
Let's see what happened and look for the IP address of the cluster entry point:
essh@kubernetes-master:~/node-cluster/Kubernetes$ gcloud container clusters list
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
node-ks europe-north1-a 1.12.8-gke.6 35.228.20.35 n1-standard-1 1.12.8-gke.6 6 RECONCILING
essh@kubernetes-master:~/node-cluster/Kubernetes$ gcloud container clusters describe node-ks | grep '^endpoint'
endpoint: 35.228.20.35
essh@kubernetes-master:~/node-cluster/Kubernetes$ ping 35.228.20.35 -c 2
PING 35.228.20.35 (35.228.20.35) 56(84) bytes of data.
64 bytes from 35.228.20.35: icmp_seq=1 ttl=59 time=8.33 ms
64 bytes from 35.228.20.35: icmp_seq=2 ttl=59 time=7.09 ms
--- 35.228.20.35 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 7.094/7.714/8.334/0.620 ms
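The two commands below are not part of the original transcript, but they are the standard way to fetch kubeconfig credentials for this cluster and check its nodes:
gcloud container clusters get-credentials node-ks --zone europe-north1-a
kubectl get nodes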
By adding variables, which I put in a separate file just for clarity and which parameterize our config for different uses, we can use it, for example, to create both test and production clusters. A variable can be added as var.name_value and inserted into the text, similarly to JS, as ${var.name_value}; references such as path.root can be used in the same way.
essh@kubernetes-master:~/node-cluster/Kubernetes$ cat variables.tf
variable "region" {
  default = "europe-north1"
}
variable "project_name" {
  type = string
  default = ""
}
variable "gce_key" {
  default = "./kubernetes_key.json"
}
variable "node_count_zone" {
  default = 1
}
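Inside the module these variables are then referenced instead of the hard-coded values. A sketch of how main.tf could consume them; the exact attribute wiring is my assumption, not a quote from the book's main.tf:
provider "google" {
  credentials = "${file("${path.root}/${var.gce_key}")}"
  project = "${var.project_name}"
  region = "${var.region}"
}
resource "google_container_cluster" "node-ks" {
  name = "node-ks"
  location = "${var.region}-a"
  node_locations = ["${var.region}-b", "${var.region}-c"]
  initial_node_count = "${var.node_count_zone}"
}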
They can be passed in via the -var switch, for example: sudo ./terraform apply -var="project_name=node-cluster-243923".
essh@kubernetes-master:~/node-cluster/Kubernetes$ cp ../kubernetes_key.json .
essh@kubernetes-master:~/node-cluster/Kubernetes$ sudo ../terraform apply -var="project_name=node-cluster-243923"
The project in this folder is not only a project but also a ready-to-use module:
essh@kubernetes-master:~/node-cluster/Kubernetes$ cd ..
essh@kubernetes-master:~/node-cluster$ cat main.tf
module "Kubernetes" {
  source = "./Kubernetes"
  project_name = "node-cluster-243923"
}
essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply
Or upload to the public repository:
essh@kubernetes-master:~/node-cluster/Kubernetes$ git init
Initialized empty Git repository in /home/essh/node-cluster/Kubernetes/.git/
essh@kubernetes-master:~/node-cluster/Kubernetes$ echo "terraform.tfstate" >> .gitignore
essh@kubernetes-master:~/node-cluster/Kubernetes$ echo "terraform.tfstate.backup" >> .gitignore
essh@kubernetes-master:~/node-cluster/Kubernetes$ echo ".terraform/" >> .gitignore
essh@kubernetes-master:~/node-cluster/Kubernetes$ rm -f kubernetes_key.json
essh@kubernetes-master:~/node-cluster/Kubernetes$ git remote add origin https://github.com/ESSch/terraform-google-kubernetes.git
essh@kubernetes-master:~/node-cluster/Kubernetes$ git add .
essh@kubernetes-master:~/node-cluster/Kubernetes$ git commit -m 'create a Kubernetes Terraform module'
[master (root-commit) 4f73c64] create a Kubernetes Terraform module
3 files changed, 48 insertions(+)
create mode 100644 .gitignore
create mode 100644 main.tf
create mode 100644 variables.tf
essh@kubernetes-master:~/node-cluster/Kubernetes$ git push -u origin master
essh@kubernetes-master:~/node-cluster/Kubernetes$ git tag -a v0.0.2 -m 'publish'
essh@kubernetes-master:~/node-cluster/Kubernetes$ git push origin v0.0.2
After publishing it in the module registry at https://registry.terraform.io/, having met requirements such as providing a description, we can use it:
essh@kubernetes-master:~/node-cluster$ cat main.tf
module "kubernetes" {
  # source = "./Kubernetes"
  source = "ESSch/kubernetes/google"
  version = "0.0.2"
  project_name = "node-cluster-243923"
}
essh@kubernetes-master:~/node-cluster$ sudo ./terraform init
essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply
On the next creation of the cluster I got the error ZONE_RESOURCE_POOL_EXHAUSTED ("does not have enough resources available to fulfill the request. Try a different zone, or try again later"), indicating that the required servers are not available in this region. This is not a problem for me, and I do not need to edit the module's code, because I parameterized the module with the region: if I simply pass region = "europe-west2" to the module as a parameter, Terraform will move my cluster to the specified region after the initialization command ./terraform init and the apply command ./terraform apply. Let's improve our module a little by moving the provider from the Kubernetes child module into the main module (the main script is also a module). Having moved it to the main module, we will be able to use one more module; otherwise the provider in one module would conflict with the provider in another. Inheritance from the main module into child modules, and their transparency, applies only to providers. For the rest of the data, passing from child to parent requires output variables, and passing from parent to child requires parameterizing the child, but that comes later, when we create another module. Moving the provider to the parent module will also be useful for the next module we create, since it will create Kubernetes elements that do not depend on the provider; this way we can untie the Google provider from our module, and it can be used with other providers that support Kubernetes. Now we no longer need to pass the project name in a variable: it is set in the provider. For convenience of development, I will use the local connection of the module for now. I created a folder and a file for the new module:
essh@kubernetes-master:~/node-cluster$ ls nodejs/
main.tf
essh@kubernetes-master:~/node-cluster$ cat main.tf
// module "kubernetes" {
//   source = "ESSch/kubernetes/google"
//   version = "0.0.2"
//
//   project_name = "node-cluster-243923"
//   region = "europe-west2"
// }
provider "google" {
  credentials = "${file("./kubernetes_key.json")}"
  project = "node-cluster-243923"
  region = "europe-west2"
}
module "Kubernetes" {
  source = "./Kubernetes"
  project_name = "node-cluster-243923"
  region = "europe-west2"
}
module "nodejs" {
  source = "./nodejs"
}
essh@kubernetes-master:~/node-cluster$ sudo ./terraform init
essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply
Now let's transfer data from the Kubernetes infrastructure module to the application module:
essh@kubernetes-master:~/node-cluster$ cat Kubernetes/outputs.tf
output "endpoint" {
  value = google_container_cluster.node-ks.endpoint
  sensitive = true
}
output "name" {
  value = google_container_cluster.node-ks.name
  sensitive = true
}
output "cluster_ca_certificate" {
  value = base64decode(google_container_cluster.node-ks.master_auth.0.cluster_ca_certificate)
}
essh@kubernetes-master:~/node-cluster$ cat main.tf
// module "kubernetes" {
//   source = "ESSch/kubernetes/google"
//   version = "0.0.2"
//
//   project_name = "node-cluster-243923"
//   region = "europe-west2"
// }
provider "google" {
  credentials = file("./kubernetes_key.json")
  project = "node-cluster-243923"
  region = "europe-west2"
}
module "Kubernetes" {
  source = "./Kubernetes"
  project_name = "node-cluster-243923"
  region = "europe-west2"
}
module "nodejs" {
  source = "./nodejs"
  endpoint = module.Kubernetes.endpoint
  cluster_ca_certificate = module.Kubernetes.cluster_ca_certificate
}
essh@kubernetes-master:~/node-cluster$ cat nodejs/variable.tf
variable "endpoint" {}
variable "cluster_ca_certificate" {}
To check the balancing of traffic across all nodes, let's start NGINX, replacing the standard page with the hostname. We'll replace the page with a simple command call and restart the server. To see how the server is started, let's look at its call in the Dockerfile: CMD ["nginx", "-g", "daemon off;"], which is equivalent to running nginx -g 'daemon off;' at the command line. As you can see, the Dockerfile does not use BASH as the launch environment but starts the server itself; if the server were launched from a shell, the shell would stay alive when the server process crashed, and the container would not be stopped and re-created. But for our experiments BASH is fine:
essh@kubernetes-master:~/node-cluster$ sudo docker run -it nginx:1.17.0 which nginx
/usr/sbin/nginx
sudo docker run -it --rm -p 8333:80 nginx:1.17.0 /bin/bash -c "echo \$HOSTNAME > /usr/share/nginx/html/index2.html && /usr/sbin/nginx -g 'daemon off;'"
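To verify it locally (this request is not in the original transcript), the temporary container can be queried on the published port; it should return the container's hostname:
curl http://localhost:8333/index2.html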
Now let's create our NGINX PODs in triplicate, which Kubernetes will by default try to distribute across different servers. Let's also add a service as a balancer:
essh@kubernetes-master:~/node-cluster$ cat nodejs/main.tf
terraform {
  required_version = ">= 0.12.0"
}
data "google_client_config" "default" {}
provider "kubernetes" {
  host = var.endpoint
  token = data.google_client_config.default.access_token
  cluster_ca_certificate = var.cluster_ca_certificate