In this codelab, you'll learn the differences between a network load balancer and an HTTP load balancer, and how to set them up for your applications running on Google Compute Engine virtual machines.
There are two types of load balancers in Google Cloud Platform:
This lab will take you through the steps to set up both types of load balancers.
If you see a "Request Account" button at the top of the main Codelabs window, click it to obtain a temporary account. Otherwise, ask one of the staff for a coupon with a username and password.
These temporary accounts have existing projects that are set up with billing, so there are no costs for you to run this codelab.
Note that all these accounts will be disabled soon after the codelab is over.
Use these credentials to log in to the machine or to open a new Google Cloud Console window: https://console.cloud.google.com/. Accept the new account's Terms of Service and any updates to the Terms of Service.
Here's what you should see once logged in:
When presented with this console landing page, please select the only project available. Alternatively, from the console home page, click "Select a project":
Very Important - Visit each of these pages to kick off some initial setup behind the scenes, such as enabling the Compute Engine API:
Compute → Compute Engine → VM Instances
Once the operation completes, you will do most of the work from Google Cloud Shell, a command-line environment running in the Cloud. This Debian-based virtual machine is loaded with all the development tools you'll need (gcloud, git, and others) and offers a persistent 5GB home directory. Open Google Cloud Shell by clicking the icon at the top right of the screen:
Finally, using Cloud Shell, set the default zone and project configuration:
$ gcloud config set compute/zone europe-west1-c
$ gcloud config set compute/region europe-west1
You can pick different zones too. Learn more about zones in the Regions & Zones documentation.
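If you want to confirm the defaults took effect, you can list your active gcloud configuration (an optional check; the exact output format varies between gcloud versions, but it should show the compute/zone and compute/region values you just set):

$ gcloud config list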
To simulate serving from a cluster of machines, we'll create a simple cluster of NGINX web servers that serve static content. We can achieve this easily by using Instance Templates and Managed Instance Groups. An Instance Template lets you define what every virtual machine in the cluster should look like (disk, CPUs, memory, etc.), and a Managed Instance Group instantiates a number of Google Compute Engine virtual machine instances for you from the Instance Template.
First, create a startup script that will be used by every virtual machine instance to set up an NGINX server upon startup:
$ cat << EOF > startup.sh
#! /bin/bash
apt-get update
apt-get install -y nginx
service nginx start
sed -i -- 's/nginx/Google Cloud Platform - '"\$HOSTNAME"'/' /var/www/html/index.nginx-debian.html
EOF
Next, create an instance template that will use the startup script:
$ gcloud compute instance-templates create nginx-template \
    --metadata-from-file startup-script=startup.sh
Created [...].
NAME            MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
nginx-template  n1-standard-1               2015-11-09T08:44:59.007-08:00
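If you'd like to double-check that the startup script was attached to the template, you can describe it (an optional check; look for the startup-script key under metadata in the output):

$ gcloud compute instance-templates describe nginx-template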
Next, create a target pool. A target pool gives us a single access point to all the instances in a group and is necessary for network load balancing in a later step.
$ gcloud compute target-pools create nginx-pool
Created [...].
NAME        REGION        SESSION_AFFINITY  BACKUP  HEALTH_CHECKS
nginx-pool  europe-west1
Finally, create a managed instance group using the instance template:
$ gcloud compute instance-groups managed create nginx-group \
    --base-instance-name nginx \
    --size 2 \
    --template nginx-template \
    --target-pool nginx-pool
Created [...].
NAME         ZONE            BASE_INSTANCE_NAME  SIZE  TARGET_SIZE  GROUP        INSTANCE_TEMPLATE  AUTOSCALED
nginx-group  europe-west1-c  nginx               2                  nginx-group  nginx-template
This will create 2 Compute Engine instances with names that are prefixed with nginx-. This may take a couple of minutes.
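To watch the group come up, you can optionally list the instances that the Managed Instance Group is managing (the instance names and statuses in your project will differ):

$ gcloud compute instance-groups managed list-instances nginx-group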
List the Compute Engine instances and you should see all of the instances that were created:
$ gcloud compute instances list
NAME        ZONE            MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP  STATUS
nginx-7wvi  europe-west1-c  n1-standard-1               10.240.X.X   X.X.X.X      RUNNING
nginx-9mwd  europe-west1-c  n1-standard-1               10.240.X.X   X.X.X.X      RUNNING
Finally, configure the firewall so that you can connect to the machines on port 80 via their EXTERNAL_IP addresses:
$ gcloud compute firewall-rules create www-firewall --allow tcp:80
Now you should be able to connect to each of the instances individually via its external IP address at http://EXTERNAL_IP/.
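If you'd rather stay in Cloud Shell, a quick way to check an instance is to fetch its page with curl (a sketch; replace X.X.X.X with one of the EXTERNAL_IP values from the instance listing above). Because the startup script rewrites the default page, the response should contain that instance's hostname:

$ curl -s http://X.X.X.X/ | grep "Google Cloud Platform"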
Network load balancing allows you to balance the load of your systems based on incoming IP protocol data, such as address, port, and protocol type. Network load balancing offers some options that are not available with HTTP(S) load balancing. For example, you can load balance additional TCP/UDP-based protocols such as SMTP traffic. And if your application is interested in TCP-connection-related characteristics, network load balancing allows your app to inspect the packets, which you cannot do with HTTP(S) load balancing.
Let's create an L3 network load balancer targeting our target pool (and, through it, the instances in the group):
$ gcloud compute forwarding-rules create nginx-lb \
    --port-range 80 \
    --target-pool nginx-pool
NAME      REGION        IP_ADDRESS  IP_PROTOCOL  TARGET
nginx-lb  europe-west1  X.X.X.X     TCP          europe-west1/targetPools/nginx-pool
You can then visit the load balancer in the browser at http://IP_ADDRESS/, where IP_ADDRESS is the address shown in the output of the previous command.
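You can also exercise the load balancer from Cloud Shell by fetching the address a few times (a sketch; replace IP_ADDRESS with the forwarding rule's address). Because requests are spread across the pool, you should see both instance hostnames appear over several requests, though not necessarily in strict rotation:

$ for i in $(seq 1 10); do curl -s http://IP_ADDRESS/ | grep "Google Cloud Platform"; done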
HTTP(S) load balancing provides global load balancing for HTTP(S) requests destined for your instances. You can configure URL rules that route some URLs to one set of instances and route other URLs to other instances. Requests are always routed to the instance group that is closest to the user, provided that group has enough capacity and is appropriate for the request. If the closest group does not have enough capacity, the request is sent to the closest group that does have capacity.
First, create a health check. Health checks verify that the instance is responding to HTTP or HTTPS traffic:
$ gcloud compute http-health-checks create http-basic-check
Created [https://www.googleapis.com/compute/v1/projects/...].
NAME              HOST  PORT  REQUEST_PATH
http-basic-check        80    /
Define an HTTP service and map a port name to the relevant port for the instance group. Once configured, the load balancing service forwards traffic to the named port:
$ gcloud compute instance-groups managed \
    set-named-ports nginx-group \
    --named-ports http:80
Updated [https://www.googleapis.com/compute/v1/projects/...].
Create a backend service:
$ gcloud compute backend-services create nginx-backend \
    --protocol HTTP --http-health-check http-basic-check
Created [https://www.googleapis.com/compute/v1/projects/...].
NAME           BACKENDS  PROTOCOL
nginx-backend            HTTP
Add the instance group to the backend service:
$ gcloud compute backend-services add-backend nginx-backend \
    --instance-group nginx-group
Updated [https://www.googleapis.com/compute/v1/projects/...].
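Before wiring up the frontend, you can optionally confirm that the health check sees the backends (it can take a minute or two after the instances boot before they report as HEALTHY, and depending on your gcloud version you may also need to pass a --global flag):

$ gcloud compute backend-services get-health nginx-backend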
Create a default URL map that directs all incoming requests to all your instances. If you need to route traffic to different instances depending on the URL being requested, see content-based routing:
$ gcloud compute url-maps create web-map \
    --default-service nginx-backend
Created [https://www.googleapis.com/compute/v1/projects/...].
NAME     DEFAULT_SERVICE
web-map  nginx-backend
Create a target HTTP proxy to route requests to your URL map:
$ gcloud compute target-http-proxies create http-lb-proxy \
    --url-map web-map
Created [https://www.googleapis.com/compute/v1/projects/...].
NAME           URL_MAP
http-lb-proxy  web-map
Create a global forwarding rule to handle and route incoming requests. A forwarding rule sends traffic to a specific target HTTP or HTTPS proxy depending on the IP address, IP protocol, and port specified. The global forwarding rule does not support multiple ports.
$ gcloud compute forwarding-rules create http-content-rule \
    --global \
    --target-http-proxy http-lb-proxy \
    --port-range 80
Created [https://www.googleapis.com/compute/v1/projects/...].
NAME               REGION  IP_ADDRESS  IP_PROTOCOL  TARGET
http-content-rule          X.X.X.X     TCP          http-lb-proxy
After you create the global forwarding rule, it can take several minutes for your configuration to propagate. In the meantime, note down the IP_ADDRESS shown for the forwarding rule.
From the browser, check whether you can connect to http://IP_ADDRESS/.
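Because propagation takes a few minutes, you may initially get connection errors or HTTP error responses. One option is to poll the address from Cloud Shell until it starts serving (a sketch; replace IP_ADDRESS with the global forwarding rule's address):

$ while ! curl -sf http://IP_ADDRESS/ > /dev/null; do echo "waiting for the load balancer..."; sleep 10; done
$ curl -s http://IP_ADDRESS/ | grep "Google Cloud Platform"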
You are well on your way to serving traffic to your Compute Engine instances through both a network load balancer and an HTTP(S) load balancer on Google Cloud Platform.