In this codelab, the following lab environment will be automatically deployed to a Google Cloud Platform project by a provided script.
These exercises are ordered to reflect a common cloud developer experience as follows:
As you progress, you'll learn how to perform basic networking tasks on Google Cloud Platform (including Compute Engine instances) and how GCP might differ from an on-premises setup. As indicated above, we'll set up a demo environment with a network and 5 subnetworks that you will use throughout the lab.
The instructor will share temporary accounts with existing projects that are already set up, so you do not need to worry about enabling billing or about any cost associated with running this codelab. Note that all of these accounts will be disabled soon after the codelab is over.
Once you have received a temporary username and password from the instructor, log into the Google Cloud Console: https://console.cloud.google.com/.
Here's what you should see once logged in:
Note the project ID you were assigned ("codelab-test003" in the screenshot above). It will be referred to later in this codelab as PROJECT_ID.
To interact with Google Cloud Platform, we will use Google Cloud Shell throughout this codelab.
Google Cloud Shell is a Debian-based virtual machine pre-loaded with all the development tools you'll need that can be automatically provisioned from the Cloud Console. This means that all you will need for this codelab is a browser (yes, it works on a Chromebook).
To activate Google Cloud Shell, simply click the button on the top right-hand side of the Cloud Console (it should only take a few moments to provision and connect to the environment):
Once connected to the Cloud Shell, you should see that you are already authenticated and that the project is already set to your PROJECT_ID. Run the following commands and you should see output like this:
gcloud auth list
Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)
gcloud config list project
[core]
project = <PROJECT_ID>
If for some reason the project is not set, simply issue the following command:
gcloud config set project <PROJECT_ID>
Looking for your PROJECT_ID? It's the ID you used in the setup steps. You can find it in the console dashboard at any time:
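If you prefer the command line, Cloud Shell can also print the currently configured project at any time:

gcloud config get-value project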
First, we will set up the lab network through Google Cloud Shell by using a predefined Deployment Manager script.
Let's enable the Deployment Manager API on your project.
Navigate to https://console.developers.google.com/apis/api/deploymentmanager/overview in your browser, make sure your <PROJECT_ID> project is selected at the top, and click the Enable button to enable the Google Cloud Deployment Manager V2 API (if it is not already enabled).
To run the script, open Cloud Shell in your lab project and copy the scripts to your local environment with gsutil:
mkdir networking101
cd networking101
gsutil cp gs://networking101/* .
If you list the files in the networking101 folder, you should see a set of .yaml and .jinja files used for simpler deployment of resources. Now create the initial deployment with the following command:
gcloud deployment-manager deployments create networking101 \
    --config networking-lab.yaml
This command sets up the environment consisting of one network with five subnetworks in different regions and five Debian VMs in those subnetworks. Some basic networking tools are also pre-installed by the deployment manager script.
The following diagram shows the setup created:
Optional: While the deployment is completing, feel free to look at the downloaded Deployment Manager configuration (the .yaml and .jinja files) with the Cloud Shell Code Editor. Click the Files icon in the toolbar at the top of your Cloud Shell and choose "Launch code editor".
Once the deployment is finished, you should see output like this:
Waiting for create operation-1483456646603-545322a75e0f9-b7026fac-84fde2fd...done.
Create operation operation-1483456646603-545322a75e0f9-b7026fac-84fde2fd completed successfully.
NAME           TYPE                   STATE      ERRORS  INTENT
networking101  compute.v1.network     COMPLETED  []
asia-east1     compute.v1.subnetwork  COMPLETED  []
asia1-vm       compute.v1.instance    COMPLETED  []
e1-vm          compute.v1.instance    COMPLETED  []
eu1-vm         compute.v1.instance    COMPLETED  []
europe-west1   compute.v1.subnetwork  COMPLETED  []
us-east1       compute.v1.subnetwork  COMPLETED  []
us-west1-s1    compute.v1.subnetwork  COMPLETED  []
us-west1-s2    compute.v1.subnetwork  COMPLETED  []
w1-vm          compute.v1.instance    COMPLETED  []
w2-vm          compute.v1.instance    COMPLETED  []
To verify, once you return to the Cloud Console, navigate to Compute Engine:
Then hit refresh under VM instances:
You should see five VMs like this:
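You can also verify this from Cloud Shell; the following command lists the five lab VMs along with their internal and external IP addresses:

gcloud compute instances list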
Try to connect to one of the VMs by clicking the SSH button in the Console.
After a long wait, you should see that the connection fails with an error message like this:
❔Why is this? How can you fix it?❔
See the answer on the next page.
To find out the answer to the question, go to the Networking Page in the Cloud Console (either with the menu on the left or by searching for Networking).
You should see a table like this:
As you can see, the networking101 network with five subnetworks has been created, but it doesn't have any firewall rules attached to it. Because the default ingress policy inside a network is deny, all incoming traffic is currently blocked by the firewall.
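You can confirm this from Cloud Shell as well; at this point, listing the firewall rules for the network should return nothing (the --filter expression matches on the network name):

gcloud compute firewall-rules list --filter="network:networking101"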
Since we will manually create firewall rules later in the lab, for now we will adjust our Deployment Manager script to allow all traffic inside the network as well as SSH and ICMP from anywhere.
Edit the file networking-lab.yaml with your favourite editor (e.g. nano, vi, emacs) or the experimental Cloud Shell Code Editor (see screenshot):
Then add the following code snippet to the bottom of the file:
# Add this to the bottom of networking-lab.yaml to enable the
# firewalling configuration
- name: networking-firewall
  type: firewall-template.jinja
  properties:
    network: networking101
Make sure you save the file.
Alternatively, you can append the snippet as follows:
cat networking-lab-snippet.yaml >> networking-lab.yaml
Now, update the deployment with the following command:
gcloud deployment-manager deployments update networking101 \
    --config networking-lab.yaml
You should see an output like this:
Waiting for update operation-1483525494024-5454232162740-212dfa81-940209a5...done.
Update operation operation-1483525494024-5454232162740-212dfa81-940209a5 completed successfully.
NAME                             TYPE                   STATE      ERRORS  INTENT
networking101-firewall-internal  compute.v1.firewall    COMPLETED  []
networking101-firewall-ssh       compute.v1.firewall    COMPLETED  []
networking101                    compute.v1.network     COMPLETED  []
asia-east1                       compute.v1.subnetwork  COMPLETED  []
asia1-vm                         compute.v1.instance    COMPLETED  []
e1-vm                            compute.v1.instance    COMPLETED  []
eu1-vm                           compute.v1.instance    COMPLETED  []
europe-west1                     compute.v1.subnetwork  COMPLETED  []
us-east1                         compute.v1.subnetwork  COMPLETED  []
us-west1-s1                      compute.v1.subnetwork  COMPLETED  []
us-west1-s2                      compute.v1.subnetwork  COMPLETED  []
w1-vm                            compute.v1.instance    COMPLETED  []
w2-vm                            compute.v1.instance    COMPLETED  []
After the command runs, reload the networking page to see the firewall rules. Click the networking101 network name to inspect the rules.
Now go back to Compute Engine via the menu on the left (or by searching for "Compute Engine") and try to SSH to one of the VMs again. You should now succeed and get a command prompt!
Use ping to measure the latency between instances within a zone, within a region, and between all the regions.
For example, to observe the latency from the US East region to the Europe West region, open an SSH window on e1-vm and run the following command:
ping eu1-vm
Use Ctrl-C to exit the ping.
The latency you get back is the round-trip time (RTT): the time a packet takes to get from e1-vm to eu1-vm plus the time for the response to travel from eu1-vm back to e1-vm.
Ping uses ICMP Echo Request and Echo Reply messages to test connectivity.
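If you want to collect RTTs for the whole matrix quickly, a small shell loop over the lab VM hostnames (as created by the deployment script) works from any instance:

for host in e1-vm eu1-vm asia1-vm w1-vm w2-vm; do
  echo "--- $host ---"
  ping -c 5 -q "$host"   # -q prints only the summary with min/avg/max RTT
done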
❔What is the latency you see between regions? What would you expect under ideal conditions? What is special about the connection from eu1-vm to asia1-vm?❔
See the answer on the next page.
Under ideal conditions, the latency would be limited by the speed of light in fiber, which is roughly 202562 km/s or 125866 miles/s (the actually achievable speed is still a bit lower than that).
You can estimate the length of the fiber either by the distance as the crow flies (a straight line) or by the land-transport distance. Multiply the result by two to account for the round trip.
Between continents, as the crow flies is usually the only option. If you want to estimate latency for a customer before testing, road distance is usually the better estimate, since roads, like fibers, don't follow ideal paths. You can use any mapping tool, such as this one, to estimate the distance.
For the available GCE regions, we know the locations, so we can calculate the ideal latency as in the following example:
VM 1: e1-vm (Berkeley County, South Carolina)
VM 2: eu1-vm (St. Ghislain, Belgium)
Distance as the crow flies: 6837.20km
Ideal latency: 6837.20 km / 202562 km/s * 1000 ms/s * 2 = 67.51 ms
Observed latency: 93.40 ms (the minimum is what counts)
The difference is due to a non-ideal path (for example, transatlantic fibers all land in the NY/NJ area) as well as to active equipment in the path (a much smaller contribution).
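If you want to redo this calculation for other region pairs, a small awk one-liner (runnable in Cloud Shell or on any of the VMs) reproduces the number above; the distance is the only input you need to change:

awk 'BEGIN { km=6837.20; kms=202562; printf "ideal RTT: %.2f ms\n", km/kms*1000*2 }' #prints "ideal RTT: 67.51 ms"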
See this table for all ideal / observed latencies:
As you can see, the latency between the EU and Asia locations is very high. This is because Google Compute Engine does not currently have a direct link it can use between Europe and Asia.
From a networking point of view, if you run a service from only ONE global location, it is recommended that the location be in the Central US. Depending on how your user base is split, US East or West might also be recommended.
You can also ping any well known hosts (hosts where you know the physical location) to see how the latency compares to the ideal latency (for example, ping co.za in South Africa).
Ping can also be used to measure packet loss: at the end of a run it reports the number of lost packets and the packet loss as a percentage. You can use several flags to improve testing. For example:
ping -i0.2 w2-vm #(sends a ping every 200ms)
sudo ping -i0.05 w2-vm -c 1000 #(sends a ping every 50ms, 1000 times)
sudo ping -f -i0.05 w2-vm #(flood ping: prints a dot for every sent packet and removes one for every received packet; be careful with flood ping without an interval, as it sends packets as fast as possible, which within the same zone is very fast)
sudo ping -i0.05 w2-vm -c 100 -s 1400 #(send larger packets, does it get slower?)
Objective:
In this section of the lab we will set up a global load balancer (HTTP Load Balancer) and learn how load balancing can help scale your applications on Google Compute Engine.
We will again use gcloud in this section along with the Cloud Console. Feel free to use gcloud from your laptop if it is authorized for the project; otherwise, work in the Cloud Shell as before.
Here is a diagram of all of the elements of an HTTP/HTTPS load balancer setup:
We will build this system starting with the target backends (creating them and opening access to them) and working backwards towards the Global Forwarding Rule (following the arrows in reverse).
This is accomplished in 3 steps:
We will start by opening the firewall to allow HTTP requests from the Internet. We will apply the firewall rule to VM instances using a tag that is attached to the load-balanced VM instances.
To open the Network firewall, you need to supply the following information:
| Field | Value | Comments |
| --- | --- | --- |
| Source IP range or tags | 0.0.0.0/0 | We will open the firewall for any IP address from the Internet. |
| Destination protocol and port | tcp:80 | Only HTTP |
| Destination tags | http-server | The tag we created |
| Network | networking101 | Our network name |
Run the following command in the Google Cloud Shell, OR add the firewall rule through the Cloud Console directly, as shown in the screenshots below:
gcloud compute firewall-rules create nw101-allow-http \
    --allow tcp:80 --network networking101 --source-ranges 0.0.0.0/0 \
    --target-tags http-server
Navigate to Networking via the Menu (or by searching for Networking) and click on the networking101 network:
Click Add Firewall rule:
Enter the following info and click Create:
| Field | Value | Comments |
| --- | --- | --- |
| Name | nw101-allow-http | New rule name |
| Source IP ranges | 0.0.0.0/0 | We will open the firewall for any IP address from the Internet. |
| Allowed protocols and ports | tcp:80 | Only HTTP |
| Target tags | http-server | The tag we created |
Wait until the command succeeds.
Now that you have created the firewall rule, you can continue creating the globally load balanced web service.
We need to set up Managed Instance Groups which include the patterns for backend resources used by the HTTP Load Balancer. First we will create Instance Templates which define the configuration for VMs to be created for each region. Next, for a backend in each region, we will create a Managed Instance Group that references an Instance Template.
Managed Instance Groups can be Zonal or Regional in scope. For this lab exercise we will create two regional Managed Instance Groups, one in us-east1 and the other in europe-west1. We will use the gcloud command-line tool to create the templates.
In this section, you can see a pre-created startup script that will be referenced upon instance creation. This startup script installs and enables web server capabilities which we will use to simulate a web application. Feel free to explore this script.
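As a rough illustration only (this is not the lab's actual startup.sh; the package names and the page contents below are assumptions), a startup script for such a web backend typically looks something like this:

#!/bin/bash
# Hypothetical sketch, NOT the lab's real startup.sh.
# Installs Apache with PHP and serves a page that reports which
# backend served the request and the client's IP address.
apt-get update
apt-get install -y apache2 php libapache2-mod-php
rm -f /var/www/html/index.html   # let the PHP page be the default page
cat > /var/www/html/index.php <<'EOF'
<?php
echo "Served by: " . gethostname() . "<br>";
echo "Client IP: " . $_SERVER["REMOTE_ADDR"] . "<br>";
EOF
systemctl restart apache2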
We will create two instance templates in this lab: one in the us-east1 region, the other in europe-west1.
First, we will create the instance template us-east1-template with the following configuration:
| Field | Value | Comments |
| --- | --- | --- |
| Machine type | Default (1 vCPU) | |
| Boot disk | Default (10 GB standard persistent disk) | |
| Image | Default (Debian GNU/Linux 9) | |
| Identity and API access | Default | |
| Firewall | Allow HTTP traffic | By adding the previously opened http-server tag |
| Metadata | Key: startup-script-url, Value: gs://networking101-lab/startup.sh | |
| Networking | Network: networking101, Subnetwork: us-east1, External IP: Default (Ephemeral) | |
The following gcloud command will create this instance template:
gcloud compute instance-templates create "us-east1-template" \ --subnet "us-east1" \ --metadata "startup-script-url=gs://networking101-lab/startup.sh" \ --region "us-east1" \ --tags "http-server"
Next we will create a similar instance template for europe-west1, replacing the name, region, and subnetwork fields:
gcloud compute instance-templates create "europe-west1-template" \ --subnet "europe-west1" \ --metadata "startup-script-url=gs://networking101-lab/startup.sh" \ --region "europe-west1" \ --tags "http-server"
We can now verify that our instance templates were created successfully with the following gcloud command:
gcloud compute instance-templates list
Output should look like this:
NAME                   MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
europe-west1-template  n1-standard-1               2017-01-03T11:43:25.053-08:00
us-east1-template      n1-standard-1               2017-01-03T11:36:25.494-08:00
Now that we have our Instance Templates defined, it's time to create our regional Managed Instance Groups for the us-east1 and europe-west1 regions. We will use the Cloud Console for this configuration.
Navigate to Compute Engine -> Instance Groups and click "Create Instance Group" at the top of the work pane.
Configure your us-east1 instance group with the following configuration:
| Field | Value | Comments |
| --- | --- | --- |
| Name | us-east1-mig | |
| Location | Multi-zone | |
| Region | us-east1 | |
| Instance template | us-east1-template | Select the template created earlier |
| Autoscaling | On | |
| Autoscale based on | HTTP load balancing usage | |
| Target load balancing usage | 80% | |
| Minimum number of instances | 1 | Default |
| Maximum number of instances | 5 | Down from the default of 10 |
| Cool down period | 45 seconds | Down from the default of 60 |
| Autohealing | No health check | Leave default settings |
Click Create and accept the error. We will configure HTTP load balancing in a later step.
We will now create a second instance group in the europe-west1 region. It will serve as failover capacity for us-east1, and we will use slightly different settings, as outlined below:
Navigate again to Compute Engine -> Instance Groups and click "Create Instance Group" at the top of the work pane.
| Field | Value | Comments |
| --- | --- | --- |
| Name | europe-west1-mig | |
| Location | Multi-zone | |
| Region | europe-west1 | |
| Instance template | europe-west1-template | Select the template created earlier |
| Autoscaling | Off | |
| Number of instances | 3 | Default |
| Autohealing | No health check | Leave default settings |
Click Create.
We can verify our instance groups were successfully created with the following gcloud command:
gcloud compute instance-groups list
Example output below:
NAME              LOCATION      SCOPE   NETWORK        MANAGED  INSTANCES
europe-west1-mig  europe-west1  region  networking101  Yes      3
us-east1-mig      us-east1      region  networking101  Yes      1
Each instance is configured to run an Apache web server with a simple PHP script that renders:
To ensure your web servers are functioning correctly, navigate to Compute Engine -> VM instances. Ensure that your new instances (e.g. us-east1-mig-xxx) have been created according to their instance group definitions.
Now, make a web request to it in your browser to ensure the web server is running (it may take a minute to start). On the VM instances page under Compute Engine, select an instance created by your instance group and click its External (public) IP.
Or, in your browser, navigate to http://<IP_Address>. Make note of your client IP.
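From Cloud Shell (or any machine), you can make the same request with curl; substitute the external IP you noted for the placeholder:

curl http://<IP_Address>/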
Now that we have our Managed Instance Groups defined, it's time to create our HTTP Load Balancer. The HTTP Load Balancer is a distributed global solution capable of distributing load across several regions. In this lab, we will be leveraging our previously created instance groups as backends in us-east1 and europe-west1. We will be using the Cloud Console for our configuration.
In the Cloud Console, navigate to Networking -> Load Balancing and select "Create Load Balancer" in the work pane.
Select "Start configuration" for HTTP(S) Load Balancing
Name your HTTP Load Balancer. We will name it my-gclb.
Next, click on Backend Configuration > Create a backend service. This is where we associate our previously created instance groups with load balancing policies.
Now we will configure our backend service, which will include two backends (one for us-east1 and one for europe-west1).
For the common configuration use the defaults:
| Field | Value |
| --- | --- |
| Name | my-backend-service |
| Protocol | HTTP |
| Named port | http |
| Timeout | 30 seconds |
Next, under the New Backend section, configure:
| Field | Value |
| --- | --- |
| Instance group | us-east1-mig |
| Port number | 80 |
| Balancing mode | Rate |
| Maximum RPS | 50 RPS per instance |
| Capacity | 100% |
Verify your current settings, then select the "+ Add backend" button. In the next step we will add the europe-west1-mig backend to the same backend service.
After we click "+ Add backend", a new box appears where we can add europe-west1-mig to our backend service:
| Field | Value |
| --- | --- |
| Instance group | europe-west1-mig |
| Port number | 80 |
| Balancing mode | Utilization |
| Maximum CPU utilization | 80% |
| Capacity | 100% |
The last step of the backend configuration is to associate a health check. The health check actively polls instances to ensure they are healthy; if an instance fails the health check, it is removed from the pool of available servers. Since we are load balancing HTTP, let's create a simple new health check.
Under Health check, use the drop-down and select "Create another health check". A dialog box will appear. Configure the following:
| Field | Value | Comment |
| --- | --- | --- |
| Name | my-http-hc | |
| Protocol | HTTP | |
| Port | 80 | |
| Request path | / | Default |
| Health criteria | Check interval: 5 seconds, Healthy threshold: 2 consecutive successes, Timeout: 5 seconds, Unhealthy threshold: 2 consecutive failures | Leave defaults |
Verify your settings and click "Save and Continue"
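For reference, an equivalent health check could also be created from the command line; since we just created it in the console, treat this as a sketch rather than a step to run:

gcloud compute health-checks create http my-http-hc \
    --port 80 --request-path / \
    --check-interval 5s --timeout 5s \
    --healthy-threshold 2 --unhealthy-threshold 2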
Our Backend configuration is now complete. Note there is no save button in the Cloud Console for this step.
Your backend service configuration should look similar to the following. Verify that you can see two green check marks next to 'Backend configuration' and 'my-backend-service' and that you have the correct health check selected:
Host and path rules allow you to direct traffic to backends based on matching a host or path.
Select "Host and path rules". For this lab we will not add any host or path rules.
The Frontend configuration allows an administrator to specify how client traffic will be terminated at the HTTP(S) Load Balancer.
Select "Frontend Configuration". For this lab we will be leaving the defaults of:
| Field | Value |
| --- | --- |
| Protocol | HTTP |
| IP | Ephemeral |
| Port | 80 |
It's now time to review our configuration. Scroll left in the work pane and select "Review and finalize". Ensure you have 3 check marks next to your Backend configuration, Host and path rules, and Frontend configuration sections.
Once verified, click "Create".
It's now time to put our freshly created HTTP Load Balancer to work. In this exercise we will use the Siege load-testing tool to demonstrate several key features of the HTTP Load Balancer.
At this point, note the public IP address of your HTTP Load Balancer. You can find it by navigating to Networking -> Load Balancing and clicking the load balancer you just created.
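You can also look it up from Cloud Shell; the load balancer's frontend is exposed as a global forwarding rule:

gcloud compute forwarding-rules list --global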
To ensure your load balancer is up, point your browser at this IP address.
Note the Client IP address.
❔ Why is this different than when you connected directly to your instance? ❔
For this exercise we will SSH to w1-vm, located in the us-west1 region. This instance is preconfigured with Siege. Siege works by simulating HTTP requests against a target, which will be the public IP address of your HTTP Load Balancer.
SSH to w1-vm. You can do this by simply clicking "SSH" next to the w1-vm in the Cloud Console.
Once your SSH session is connected, navigate back to your HTTP(S) Load Balancer (Networking -> Load Balancing). Select your load balancer and open the "Monitoring" tab.
Now select the backend you created in the backend drop down and notice the monitoring view expands to include the backends:
Next, you will simulate 250 concurrent users in Siege by running the following command:
siege -c 250 http://<http-loadbalancer-ip>
You should see output like this:
** SIEGE 3.0.8
** Preparing 250 concurrent users for battle.
The server is now under siege...
Navigate back to the monitoring page for your load balancer in the Cloud Console. What do you expect to see? Watch for a few minutes while Siege continues to run.
Navigate back to your SSH session on w1-vm and stop Siege with Ctrl+C. Observe the output from the Siege program.
❔ Was the HTTP load balancer able to serve every request? ❔
The HTTP(S) Load Balancer provides detailed logging information in Stackdriver Logging. To view this, navigate to Stackdriver Logging in the Cloud Console. If time permits, check the type of logs you can see.
For this step, we are going back to using the pre-deployed VMs.
Traceroute is a tool to trace the path between two hosts.
As a traceroute can be a helpful first step to uncover many different network problems, support or network engineers often ask for a traceroute when diagnosing network issues.
Let's try it out.
From any VM (e.g. e1-vm) run a traceroute, for example:
traceroute www.icann.org
Now try a few other destinations and also from other sources:
traceroute -m 255 bad.horse #(-m 255 raises the maximum number of hops)
Use Ctrl-C if at any time you want to return to the command line.
❔ What do you notice with the different traceroutes? ❔
See the answer on the next page.
You might have noticed some of the following things:
You can also use the tool mtr ("Matt's traceroute") for a continuous traceroute to the destination that also captures occasional packet loss. It combines the functionality of traceroute and ping, and it uses ICMP Echo Request packets for the outgoing packets instead of UDP.
Try:
mtr www.icann.org
and any other hosts. Use q to quit.
Some important caveats when working with traceroute/mtr:
You can use iperf to test the performance between two hosts. One side needs to run as the iperf server to accept connections.
First do a very simple test:
On eu1-vm run:
iperf -s #run in server mode
On e1-vm run:
iperf -c eu1-vm #run in client mode, connecting to eu1-vm
You will see some output like this:
------------------------------------------------------------
Client connecting to eu1-vm, TCP port 5001
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[  3] local 10.20.0.2 port 35923 connected with 10.30.0.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   298 MBytes   249 Mbits/sec
On eu1-vm, use Ctrl-C to exit the server side when it is no longer needed.
Test this between different VMs. You will see that within a region, the bandwidth is limited by the 2 Gbit/s per core egress cap.
Between regions you reach much lower limits, mostly due to limits on TCP window size and single-stream performance. You can increase the bandwidth between hosts by using other parameters, e.g. by using UDP:
On eu1-vm run:
iperf -s -u #iperf server side
On e1-vm run:
iperf -c eu1-vm -u -b 2G #iperf client side - send 2 Gbit/s
This should be able to achieve a higher speed between EU and US.
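Another parameter worth experimenting with on the TCP side is the window size; with the server restarted in plain TCP mode (iperf -s) on eu1-vm, a larger window lets a single stream keep more data in flight on the high-RTT transatlantic path:

iperf -c eu1-vm -w 512K #request a larger TCP window on the client side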
Even higher speeds can be achieved by running several TCP iperf streams in parallel.
On eu1-vm run:
iperf -s #server side
On e1-vm run:
iperf -c eu1-vm -P 20 #client side: 20 parallel TCP streams
The combined bandwidth should be very close to the maximum achievable bandwidth.
Test a few more combinations; if you use Linux on your laptop, you can also test against your laptop. You can also try iperf3, which is available for many operating systems, but that is not part of this lab.
As you can see, a single TCP stream (for example, a file copy) is not sufficient to reach the maximum available bandwidth; you need several TCP sessions in parallel. The reasons are TCP parameters such as window size and mechanisms such as slow start (see TCP/IP Illustrated for excellent information on this and all other TCP/IP topics). Tools like bbcp can help you copy files as fast as possible by parallelizing transfers and using a configurable window size.
Let's release the resources created during the code lab. Please make sure you are in the Cloud Shell for these commands (not one of your VM instances).
First, let's delete our load balancer and associated configuration. Select Yes when prompted.
Delete HTTP Load Balancer
In the Cloud Console, navigate to Networking -> Load Balancing. Select "my-gclb" and click Delete. Check "my-backend-service" and "my-http-hc" to delete the related backend service and health check, then click "DELETE LOAD BALANCER AND THE SELECTED RESOURCES".
Delete Managed Instance Groups
gcloud compute instance-groups managed delete us-east1-mig \
    --region=us-east1
gcloud compute instance-groups managed delete europe-west1-mig \
    --region=europe-west1
Delete Instance Templates
gcloud compute instance-templates delete us-east1-template
gcloud compute instance-templates delete europe-west1-template
Delete manually created firewall rule:
gcloud compute firewall-rules delete nw101-allow-http
To delete the automatically created deployment (with the networks and subnetworks) run the following command in Cloud Shell:
gcloud deployment-manager deployments delete networking101
Congratulations, you have completed the Networking 101 codelab!