In this codelab, you'll learn how to perform basic networking tasks on Google Compute Engine and pick up some specifics of networking in the cloud. We'll set up a demo environment that you will use throughout the lab.

What you'll learn

What you'll need

How will you use this tutorial?

Read it through once / Read it and complete the exercises

How would you rate your experience with Google Compute Engine?

Novice / Intermediate / Proficient

Codelab-at-a-conference setup

The instructor will share with you temporary accounts with existing projects that are already set up, so you do not need to worry about enabling billing or any costs associated with running this codelab. Note that all of these accounts will be disabled soon after the codelab is over.

Once you have received a temporary username and password from the instructor, log in to the Google Cloud Console: https://console.cloud.google.com/.

Here's what you should see once logged in:

Note the project ID you were assigned ("codelab-test003" in the screenshot above). It will be referred to later in this codelab as PROJECT_ID.

This codelab can be completed in one of two ways:

  1. Using Google Cloud Shell: all you need is a browser.
  2. Using your own machine with the Google Cloud SDK (gcloud) installed.

This Debian-based virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory, and runs on the Google Cloud, greatly enhancing network performance and authentication. This means that all you will need for this codelab is a browser (yes, it works on a Chromebook).

To activate Google Cloud Shell, simply click the button on the top right-hand side of the developer console (it should only take a few moments to provision and connect to the environment):

Once connected to the cloud shell, you should see that you are already authenticated and that the project is already set to your PROJECT_ID:

gcloud auth list

Command output

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)
gcloud config list project

Command output

project = <PROJECT_ID>

If for some reason the project is not set, simply issue the following command:

gcloud config set project <PROJECT_ID>

Looking for your PROJECT_ID? Check which ID you used in the setup steps or look it up in the console dashboard:

To set up the project, we're going to create a "jump host." The jump host is an instance with access to all required tools and to the other VMs. The startup script for the jump host will set up the test environment and the codelab's network.

To create the full environment, run the following commands either from your local machine with gcloud installed or from Cloud Shell:

$ gcloud config set project <your-project-name>
$ gcloud compute instances create jumphost --scopes cloud-platform --zone us-central1-f --metadata startup-script-url=gs://nw101/startupscript.sh

The startup script on the jumphost will automatically create the other hosts we will use, so wait ~3-5 minutes for your environment to be set up. You WILL have to reload the developer console to see the other VMs.

After a reload, you should see the following VMs in the Compute Engine tab in the Developer Console: jumphost, us-vm1, us-vm2, us-vmc, eu-vm, and asia-vm. All of them except the jumphost are in the same network, so the jumphost acts like your own laptop outside that network.

To work with the VMs, you will need SSH connections to them. There are several ways to connect; it is your choice which one you use:

  1. If you have a Mac, Linux, or Windows laptop with gcloud installed, you can connect from your laptop directly:
$ gcloud config set project <your-project-name> #only if not done before
$ gcloud compute ssh --zone <vm-zone> [vm-name] (e.g. gcloud compute ssh --zone us-central1-f us-vmc)

See the VMs and zones in the following table:

  VM name    Zone
  jumphost   us-central1-f
  us-vm1     us-central1-f
  us-vm2     us-central1-f
  us-vmc     us-central1-c
  eu-vm      europe-west1-d
  asia-vm    asia-east1-c

  2. You can SSH to the jumphost via the SSH button in the Developer Console. There are pre-defined shortcuts to connect to the instances; e.g. type "eu" to connect to the eu-vm.
  3. You can also open separate SSH windows to all VMs directly from the Developer Console.

gcloud commands should always be run either from your laptop with the Cloud SDK installed or from the jumphost.

Open a connection to ALL VMs. Run the following commands on all of them except us-vm2 to install tools we need for the remainder of this module:

$ sudo apt-get -y update
$ sudo apt-get -y install traceroute mtr tcpdump iperf whois host dnsutils

us-vm2 is a CentOS instance. Use these alternative commands instead:

$ sudo yum check-update 
$ sudo yum -y install epel-release traceroute mtr tcpdump whois bind-utils
$ sudo yum -y install iperf

Use ping to find out the latency within a zone, within a region (us-central1), and between all the regions.

For example, to get the latency from the US to the EU, run the following command on any of the us-vms:

$ ping eu-vm

Use Ctrl-C to exit the ping.

The latency you get back is the "Round Trip Time" (RTT): the time a packet takes to get from A to B plus the time the response takes to get from B back to A.

Ping uses the ICMP Echo Request and Echo Reply Messages to test connectivity.
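As a small sketch of what ping's output gives you: the summary line printed after Ctrl-C contains the min/avg/max/mdev RTT values, and you can pull out individual fields with awk (the sample line below is a hypothetical summary in ping's standard format):

```shell
# A hypothetical ping summary line, in the format ping prints after Ctrl-C.
summary="rtt min/avg/max/mdev = 102.400/102.550/102.847/0.335 ms"

# Split on spaces and slashes; field 7 is the minimum RTT.
echo "$summary" | awk -F'[ /]+' '{ print "min RTT:", $7, "ms" }'
# prints: min RTT: 102.400 ms
```

The minimum is the most useful of the three values for latency comparisons, as it is least affected by transient queueing.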

❔What is the latency you see between regions? What would you expect under ideal conditions? What is special about eu-vm to asia-vm?❔

See the answer on the next page.

Answer to the question

Under ideal conditions, the latency would be limited by the speed of light in fiber, which is roughly 202562 km/s, or 125866 miles/s. (The actually achievable speed is still a bit lower than that.)

You can estimate the length of the fiber either by distance as the crow flies (straight line) or by land transport. You have to multiply the result by two to account for a round trip.

Between regions, distance as the crow flies is usually the only option. If you want to estimate latency for a customer before testing, road distance is usually the better estimate, as roads, like fibers, don't follow ideal paths. You can use any mapping tool to estimate the distance.

For the available GCE zones, we know the location. We can calculate the ideal latency as in the following example:

VM 1: us-vm1 (Council Bluffs, Iowa)

VM 2: eu-vm (St. Ghislain, Belgium)

Distance as the crow flies: 7197.57 km

Ideal latency: 7197.57 km / 202562 km/s * 1000 ms/s * 2 = 71.07 ms

Observed latency: 100.88 ms (the minimum is what counts)

The difference is due to a non-ideal path (for example, transatlantic fibers all landing in the NY/NJ area) as well as active equipment in the path (much smaller difference).
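The ideal-latency calculation above is easy to script if you want to check other city pairs. A minimal sketch using the numbers from this example (distance and fiber speed as given above):

```shell
# Ideal RTT = distance / speed of light in fiber * 2 (for the round trip).
distance_km=7197.57    # Council Bluffs <-> St. Ghislain, as the crow flies
fiber_kms=202562       # speed of light in fiber, km/s
awk -v d="$distance_km" -v c="$fiber_kms" \
    'BEGIN { printf "%.2f ms\n", d / c * 1000 * 2 }'
# prints: 71.07 ms
```

Swap in any other great-circle distance to get a lower bound to compare your measured RTTs against.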

See this table for all ideal / observed latencies:

As you can see, the latency between the EU and Asia locations is very high. This is because Google Compute Engine does not have any link it can use between Europe and Asia.

From a networking point of view, it is recommended that if you run a service from only ONE global location, that location be in the US.

Pinging external hosts

You can also ping any well known hosts (hosts where you know the physical location) to see how the latency compares to the ideal latency (for example, ping co.za in South Africa).

Ping can also be used to measure packet loss: at the end of a run it mentions the number of lost packets and the packet loss in percent. You can use several flags to improve testing. For example:

$ ping -i0.2 us-vm2 #(sends a ping every 200ms)
$ sudo ping -i0.05 us-vm2 -c 1000 #(sends a ping every 50ms, 1000 times)
$ sudo ping -f -i0.05 us-vm2 #(flood ping, adds a dot for every sent packet, and removes one for every received packet)
$ # careful with flood ping without interval, it will send packets as fast as possible, which within the same zone is very fast
$ sudo ping -i0.05 us-vm2 -c 100 -s 1400 #(send larger packets, does it get slower?)
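The loss figure ping reports at the end of a run is simply lost packets divided by packets sent. A quick sketch of the arithmetic (the counts below are made-up example values):

```shell
# Packet loss in percent = (sent - received) / sent * 100.
sent=1000
received=990    # hypothetical: 10 of 1000 pings went unanswered
awk -v s="$sent" -v r="$received" \
    'BEGIN { printf "%d%% packet loss\n", (s - r) / s * 100 }'
# prints: 1% packet loss
```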

Traceroute is a tool to trace the path between two hosts.

Because a traceroute can be one of the first steps to uncover many network problems, support and network engineers often ask for one as a first step when diagnosing network issues.

Let's try it out.

From any VM run a traceroute, for example:

$ traceroute www.icann.org

Now try a few other destinations, and from other source VMs:

❔What do you notice with the different traceroutes? Why do traceroutes from asia-vm fail? Can you fix this (don't spend too much time trying)? ❔

See the answer on the next page.

Answer to the question

Traceroute from asia-vm fails because packets to the high UDP ports used by traceroute are blocked by an iptables rule (visible via sudo iptables -L). You can fix this by removing the offending rule (sudo iptables -D OUTPUT -p udp --dport 33434:33523 -j DROP). In a real setting (for example, when YOU are behind a firewall), you will likely not have a chance to open the firewall for your traceroute, but you can change the port range with traceroute -p or use ICMP (traceroute -I or mtr) to have a better chance of getting your traceroutes through. Please note that traceroutes from instances are ALWAYS allowed by the Google Compute Engine firewall, as you cannot block outbound packets. Related inbound traffic, even for traceroute, is detected and allowed to pass.

Otherwise, you might have noticed some of the following things:


You can also use the tool "mtr" (Matt's traceroute) for a continuous traceroute to the destination, which also captures occasional packet loss. It combines the functionality of traceroute and ping, and uses ICMP echo request packets instead of UDP for the outgoing probes.


$ mtr www.icann.org

and any other hosts.

Some important caveats when working with traceroute/mtr:

You can use iperf to test the performance between two hosts. One side needs to be set up as the iperf server to accept connections.

First do a very simple test:

On eu-vm run:

$ iperf -s #run in server mode

On us-vm1 run:

$ iperf -c eu-vm #run in client mode, connecting to eu-vm

You will see some output like this:

user@us-vm1:~$ iperf -c eu-vm
Client connecting to eu-vm, TCP port 5001
TCP window size: 45.0 KByte (default)
[  3] local port 53740 connected with port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   270 MBytes   226 Mbits/sec

Use Ctrl-C to exit the server side when it is no longer needed.

Test this between different VMs. You will see that within a region, the bandwidth is limited by the 2 Gbit/s per core egress cap.

Between regions, you reach much lower throughput, mostly due to limits on TCP window size and single-stream performance. You can increase the bandwidth between hosts by using other parameters, e.g. by using UDP:

On eu-vm run:

$ iperf -s -u #iperf server side

On us-vm1 run:

$ iperf -c eu-vm -u -b 2G #iperf client side - send 2 Gbit/s

This should be able to achieve a higher speed between EU and US.

Even higher speeds can be achieved by running several TCP iperf streams in parallel.

On eu-vm run:

$ iperf -s

On us-vm1 run:

$ iperf -c eu-vm -P 20

The combined bandwidth should be really close to the maximum achievable bandwidth.

Test a few more combinations; if you run Linux on your laptop, you can test against your laptop as well. You can also try iperf3, which is available for many OSes, but this is not part of the lab.

As you can see, to reach the maximum bandwidth, just running a single TCP stream (for example, a file copy) is not sufficient; you need several TCP sessions in parallel. The reasons are TCP parameters such as window size and mechanisms such as slow start (see TCP/IP Illustrated for excellent information on this and all other TCP/IP topics). Tools like bbcp can help copy files as fast as possible by parallelizing transfers and using a configurable window size.
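You can estimate the single-stream ceiling with the bandwidth-delay product: one TCP stream can never move more than one window of data per round trip. A sketch using the default 45 KByte window from the iperf output above and the ~100 ms US-EU RTT measured earlier (Linux normally autotunes the window well beyond this initial value, which is why the observed throughput is higher):

```shell
# Max single-stream throughput ≈ window size / RTT.
window_bytes=$((45 * 1024))   # default 45 KByte TCP window from iperf
rtt_s=0.100                   # ~100 ms US-EU round-trip time
awk -v w="$window_bytes" -v t="$rtt_s" \
    'BEGIN { printf "%.1f Mbit/s\n", w * 8 / t / 1e6 }'
# prints: 3.7 Mbit/s
```

The same arithmetic explains why parallel streams help: each stream gets its own window in flight.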

Running TCPDump interactively

On us-vm1 (or any other VM) run:

$ sudo tcpdump -c 1000 -i eth0 not tcp port 22

Now on us-vm2 (or any other VM) run:

$ ping -c 100 us-vm1

Switch your window to us-vm1 and you should see the incoming ICMP packets (along with some organic traffic). You can exit the tcpdump via Ctrl-C.

Saving a packet capture file

Now let's collect a full packet capture for an HTTP request, similar to what a support engineer might request from you.

On us-vm1, install the Apache webserver:

$ sudo apt-get -y install apache2

Start collecting a packet capture of port 80 traffic (the -s 1460 flag tells tcpdump to capture the full packets, not just the headers):

$ sudo tcpdump -i eth0 -c 1000 -s 1460 -w webserver.pcap tcp port 80

In another window on us-vm2, make an HTTP request to the webserver for an existing page and another one for a non-existent page:

$ curl us-vm1
$ curl us-vm1/404.php

You should not see any output from tcpdump, as the capture is written to a file. Stop the tcpdump on us-vm1 by pressing Ctrl-C.

The webserver.pcap file includes a capture of the packets.

[Optional] Analyzing the packet capture file

First, we can "read" the packet capture by using tcpdump:

$ sudo tcpdump -nr webserver.pcap

This shows some details, but not much more than the basic protocol, source, and destination information.

To get more information you can use tools like Wireshark or Cloudshark.

But first, you need to copy the file to your own machine.

Laptop with Google Cloud SDK (gcloud) installed

If you have gcloud on your local laptop and are authenticated to your project, you can use the following commands to copy the webserver.pcap file to the current directory:

$ gcloud config set project <your-project-name> #only if not done before
$ gcloud compute copy-files us-vm1:~/webserver.pcap webserver.pcap --zone us-central1-f

No Google Cloud SDK (gcloud) on local laptop

If you don't have gcloud installed on your local laptop (for example, on a Chromebook), you can use the jumphost to copy the file to Google Cloud Storage. Open an SSH session to the jumphost via the Developer Console and run the following commands:

$ gcloud compute copy-files us-vm1:~/webserver.pcap webserver.pcap --zone us-central1-f
$ gsutil mb gs://username-lab #replace username with a unique username
$ gsutil cp webserver.pcap gs://username-lab/

You can now download the webserver.pcap file to your local machine with the Storage Browser. After copying the file to your local machine, delete the bucket again:

$ gsutil rm -r gs://username-lab/

If you have a Windows, Mac, or Linux laptop, you can use Wireshark to open and analyze the pcap file.

After you open the file, you can click on the different packets and inspect the headers and content in the lower pane (not shown in this lab; ask your facilitators for assistance if required).

If you don't want to install software and the packet capture does NOT contain confidential information (as in this case), you can use CloudShark instead, which is a simple cloud version of Wireshark.

  1. Sign up (you can use your Google account)
  2. Upload the packet capture by dragging it in the "Upload Files" box or clicking the "Upload Files" box and selecting the file.

  3. Select the webserver.pcap file to view it. You should see a screen like this:
  4. You can now see similar information as in tcpdump at the top, where each packet is listed. However, CloudShark (and Wireshark) are aware of some L7 protocols (for example, HTTP) and can decode them. When you select a packet, the middle pane lets you drill into the protocols from the outer (Ethernet frame) to the inner (HTTP) layer and expand the different sections and headers. At the bottom you see the Ethernet frame in hexadecimal, showing where exactly the information CloudShark/Wireshark is decoding is located.
  5. Play around with CloudShark/Wireshark a bit more. Please note that advanced functionality, like following TCP streams, is only available in the paid version of CloudShark.

Objective: In this section of the lab, we will set up very basic firewalling and load balancing (L3 and L7) and learn how those features work in Google Compute Engine. A more detailed codelab on routing and firewalling will be available later.

We will use gcloud in this section. Feel free to use gcloud from your laptop if it is authorized for the project. If not, you can run the commands from an SSH session connected to the jumphost.

On us-vmc, let's first install nginx and configure it to listen on port 81:

$ sudo su - 
# apt-get -y install nginx
# echo "server { listen 81; root /usr/share/nginx/html; }" > /etc/nginx/sites-enabled/default
# service nginx restart
# curl localhost:81
# exit

Find the external IP of us-vmc in the Developers Console or with this command from the jumphost or your local machine:

$ gcloud compute instances list

Point your browser at the EXTERNAL_IP on port 81 (http://xxx.xxx.xxx.xxx:81) and confirm that you cannot reach the site.

To open the GCE firewall, you need to give the following information:

  1. the network the rule applies to (codelab)
  2. the protocol and port to allow (tcp:81)
  3. the source ranges that are allowed to connect
  4. a target tag identifying the instances the rule applies to

So run the following commands on your jumphost or Developers Console:

$ gcloud compute instances add-tags us-vmc --tags nginx-81 --zone us-central1-c
$ gcloud compute firewall-rules create nginx-81 --allow tcp:81 --network codelab --source-ranges --target-tags nginx-81

Open the browser again and confirm you CAN reach the site.

Installing a webserver

Let's install apache2 serving on port 80 on all instances:

Run this on jumphost:

$ for a in us-vm1 us-vmc eu-vm asia-vm ; do gcloud compute ssh --command "sudo apt-get -y install apache2" --zone `gcloud compute instances list $a --format text | grep zone: | awk '{print $2}'` $a; done
$ # Press Enter when asked for a password for a key if you never ran gcloud before
$ #CentOS is different. Run the following command to install apache2 on us-vm2:
$ gcloud compute ssh --ssh-flag="-t" --command "sudo yum -y install httpd; sudo service httpd start" --zone us-central1-f us-vm2

Run this on any of the VMs in the codelab network (so NOT the jumphost) to test:

$ for a in us-vm1 us-vm2 us-vmc eu-vm asia-vm ; do curl $a; done

If everything is OK, the process should not hang, and you should see a bunch of output including the HTML of the default web pages returned by the Apache server running on each VM.

Create US loadbalancer

First, we need to open the firewall for port 80 to all instances. Run this on the jumphost or your machine:

$ gcloud compute firewall-rules create http --allow tcp:80 --network codelab

A network load balancer consists of a target pool of machines in one region (all US instances in this example), an HTTP health check to verify that those machines are healthy (the default one checks path / on port 80), and a forwarding rule that gives us an external IP pointing at this pool of instances.

Let's create all those resources.

$ gcloud compute http-health-checks create basic-check
$ gcloud compute target-pools create apaches --region us-central1 --health-check basic-check
$ gcloud compute target-pools add-instances apaches --instances us-vm1,us-vm2 --zone us-central1-f
$ gcloud compute target-pools add-instances apaches --instances us-vmc --zone us-central1-c
$ gcloud compute forwarding-rules create interwebs --region us-central1 --port-range 80 --target-pool apaches

Note the IP address of the forwarding rule, which is the load-balanced IP address. Try to reach it with your browser.

❔How can you know which instance you reached?❔

Answer is on the next page.

With the default behaviour of the load balancer, you cannot see which instance you reached unless you check the logs on the server side - except of course for the CentOS box, which has a very different default web page than the Debian installations of the Apache web server.

So the only other way you can tell is by changing the contents of the index.html file to include a hint about which host was reached, for example by running the following command on the jumphost (or just edit the files one at a time on each VM instance in the target pool):

$ for a in us-vm1 us-vmc eu-vm asia-vm ; do gcloud compute ssh --command "sudo hostname | sudo tee /var/www/html/index.html > /dev/null" --zone `gcloud compute instances list $a --format text | grep zone: | awk '{print $2}'` $a; done

Extra work

Now configure a Network Load Balancer the same way for the eu-vm and asia-vm instances. Since there is only one VM to be added to each target pool, you can either create a pool for a single VM or just point the forwarding rule straight at the instance via a target instance.

❔Can you find out why this works for the asia-vm but not for the eu-vm? This is very difficult to find, so don't spend too much time on it if you don't see it immediately.❔

See the answer on the next page.

First, to create the forwarding rules, you can use the following commands:

$ gcloud compute target-instances create eu-target --instance eu-vm --zone europe-west1-d
$ gcloud compute forwarding-rules create interwebs-eu --region europe-west1 --port-range 80 --target-instance eu-target --target-instance-zone europe-west1-d
$ gcloud compute target-instances create asia-target --instance asia-vm --zone asia-east1-c
$ gcloud compute forwarding-rules create interwebs-asia --region asia-east1 --port-range 80 --target-instance asia-target --target-instance-zone asia-east1-c

You will see that you can reach the IP address in Asia via your web browser, but not the EU VM.

The reason is that the Linux Guest Environment for Compute Engine (including the IP Forwarding Daemon), which is installed on the VM by default, was stopped on eu-vm. Since the traffic is forwarded to the VM without NAT, the VM needs to be configured with the addresses of all forwarding rules pointing to it. The google-ip-forwarding-daemon, which should be preinstalled on all Google Compute Engine images, takes care of this by reading all IP addresses from metadata and configuring them on an interface.

To make the VM work, you can start the address manager by running the following command on eu-vm:

$ sudo service google-ip-forwarding-daemon start

However, when you come across this problem in the real world, it is more likely that one of the following is the reason:

If for any reason you cannot make the address manager or equivalent part of the image, you need to add the target IP addresses of the forwarding rules manually.

On Linux you can also manually add the IP address to your system by running a command like:

$ sudo ip route add to local xxx.xxx.xxx.xxx/32 dev eth0 proto 66

Replace xxx.xxx.xxx.xxx with the external IP to be added.

You can see all added IPs with the command:

$ sudo ip route ls table local type local dev eth0 scope host proto 66

Set up global load balancing across all VMs

Here is a diagram of all elements of an HTTP/HTTPS load balancer:

We will need to build this system from the target to the beginning, following the arrows backwards.

First, we need to set up an instance group for every zone, as the HTTP load balancer balances between instance groups. Since we don't want to configure the autoscaler in this lab, we use an unmanaged (non-autoscaled) instance group. We also need to add a named port (http) to each newly created instance group. Run the following commands on the jumphost:

$ gcloud compute instance-groups unmanaged create us-f --zone us-central1-f
$ gcloud compute instance-groups unmanaged create us-c --zone us-central1-c
$ gcloud compute instance-groups unmanaged create eu --zone europe-west1-d
$ gcloud compute instance-groups unmanaged create asia --zone asia-east1-c
$ gcloud compute instance-groups unmanaged add-instances us-f --instances us-vm1,us-vm2 --zone us-central1-f
$ gcloud compute instance-groups unmanaged add-instances us-c --instances us-vmc --zone us-central1-c
$ gcloud compute instance-groups unmanaged add-instances eu --instances eu-vm --zone europe-west1-d
$ gcloud compute instance-groups unmanaged add-instances asia --instances asia-vm --zone asia-east1-c
$ gcloud compute instance-groups unmanaged set-named-ports us-f --named-ports http:80 --zone us-central1-f
$ gcloud compute instance-groups unmanaged set-named-ports us-c --named-ports http:80 --zone us-central1-c
$ gcloud compute instance-groups unmanaged set-named-ports eu --named-ports http:80 --zone europe-west1-d
$ gcloud compute instance-groups unmanaged set-named-ports asia --named-ports http:80 --zone asia-east1-c

Now create a backend service containing all those instance groups, and add a URL map pointing all URLs to this backend service:

$ gcloud compute backend-services create global-bs --protocol http --http-health-check basic-check
$ gcloud compute backend-services add-backend global-bs --instance-group us-f --zone us-central1-f
$ gcloud compute backend-services add-backend global-bs --instance-group us-c --zone us-central1-c
$ gcloud compute backend-services add-backend global-bs --instance-group eu --zone europe-west1-d
$ gcloud compute backend-services add-backend global-bs --instance-group asia --zone asia-east1-c
$ gcloud compute url-maps create global-map --default-service global-bs

Finally, we create a target proxy and point a forwarding rule with a GLOBAL IP at that target proxy:

$ gcloud compute target-http-proxies create global-proxy --url-map global-map
$ gcloud compute forwarding-rules create global-lb --global --target-http-proxy global-proxy --ports 80

You get an IP back, and if you point your web browser at this IP, you should be sent to the closest instance.

Note that in the Apache log file you only see the IP of the load balancer by default:

Run this on any Debian VM behind the load balancer (e.g. us-vm1):

$ tail -10 /var/log/apache2/access.log

shows lines like - - [30/Sep/2015:11:08:28 +0000] "GET / HTTP/1.1" 200 316 "-" "GoogleHC/1.0"

To surface the user IPs in the log file, run the following commands.

Run on all Debian VMs (all VMs but us-vm2):

$ sudo sed -i -e 's/%h/%{X-Forwarded-For}i/' /etc/apache2/apache2.conf
$ sudo service apache2 reload

On the CentOS VM (us-vm2) run the command:

$ sudo sed -i -e 's/%h/%{X-Forwarded-For}i/' /etc/httpd/conf/httpd.conf
$ sudo service httpd reload

From now on, you should see the client IP directly in the log file.

After setting up the HTTP load balancer (try it in the Developer Console for a simpler experience), you can convert this load balancer to HTTPS using a certificate.

Luckily, if you only want to speak HTTP to the backends, most of the work is already done.

For this test, we create a self-signed certificate. But if you have your own domain (for example, on Google Domains), you can get a certificate signed by an authority (for example, Let's Encrypt) and upload that instead.

To create a self-signed certificate, we will:

  1. Generate a private key.
  2. Create a certificate signing request (CSR).
  3. Self-sign the CSR with the private key.

Run the following on the jumphost:

$ mkdir ssl
$ cd ssl
$ openssl genrsa -out my.key 2048
$ openssl req -new -key my.key -out my.csr #Enter data at each prompt except PW
$ openssl x509 -req -days 365 -in my.csr -signkey my.key -out my.crt

Now create an SSL certificate resource from it:

$ gcloud compute ssl-certificates create ssl --certificate my.crt --private-key my.key

Now create a target HTTPS proxy and a forwarding rule:

$ gcloud compute target-https-proxies create ssl-proxy --url-map global-map --ssl-certificate ssl
$ gcloud compute forwarding-rules create ssl-lb --global --target-https-proxy ssl-proxy --ports 443

After a few minutes, you should be able to reach the service at https://yourip (replacing 'yourip' with the external IP address of the forwarding rule you just created). However, since your certificate is self-signed and the hostname does not match, you will get SSL errors and have to click "Advanced" to continue.

In the next section we will explore some specifics about networking on Google Compute Engine and how to achieve a similar experience compared to what you might know from other environments.

The GCP firewall keeps idle TCP connections in its connection table for only 10 minutes. That means that if no packets are sent or received within 10 minutes, the session gets removed from the firewall.

In practice, this means, for example, that if you connect via SSH to a Compute Engine instance and leave the terminal unattended, after 10 minutes the connection gets removed from the firewall, and the terminal then hangs because incoming responses no longer make it through the firewall.

To mitigate this, you can set the TCP keepalive settings so that the session is kept active periodically (every minute). You can set this up on any of your VMs by running the following command:

$ sudo /sbin/sysctl -w net.ipv4.tcp_keepalive_time=60 net.ipv4.tcp_keepalive_intvl=60 net.ipv4.tcp_keepalive_probes=5

Unfortunately there is no easy way to test this without leaving a session idle for over 10 minutes.
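Note that sysctl -w only lasts until the next reboot. If you want the keepalive settings to persist, you could write them to a sysctl drop-in file instead (a sketch; the file name here is an arbitrary choice, any file under /etc/sysctl.d/ is read at boot):

```shell
# Persist the keepalive settings across reboots (hypothetical file name).
sudo tee /etc/sysctl.d/99-tcp-keepalive.conf > /dev/null <<'EOF'
net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_keepalive_intvl = 60
net.ipv4.tcp_keepalive_probes = 5
EOF
sudo sysctl --system   # apply immediately without rebooting
```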

Google Compute Engine currently does not allow you to set an internal IP address statically by default; it is always assigned automatically. However, there is a workaround that uses GCE's routing functionality, documented here (and here for Windows, where it is much more complex), that allows additional IPs to be set statically.

We will now configure us-vm1 and eu-vm with addresses from an alternative network range.

On us-vm1:

$ sudo ip addr add dev eth0

On eu-vm:

$ sudo ip addr add dev eth0

Now you need to route these addresses statically. On the jumphost:

$ gcloud compute routes create us-new-ip --network codelab --destination-range --next-hop-instance us-vm1 --next-hop-instance-zone us-central1-f
$ gcloud compute routes create eu-new-ip --network codelab --destination-range --next-hop-instance eu-vm --next-hop-instance-zone europe-west1-d

Create a firewall rule to allow all internal traffic within the new network.

$ gcloud compute firewall-rules create allow-new-net --network codelab  --source-range --allow tcp:1-65535,udp:1-65535,icmp

You should now be able to talk between those IPs.

From us-vm1:

$ ping
PING ( 56(84) bytes of data.
64 bytes from icmp_req=1 ttl=64 time=102 ms
64 bytes from icmp_req=2 ttl=64 time=102 ms
64 bytes from icmp_req=3 ttl=64 time=102 ms
--- ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 102.400/102.550/102.847/0.335 ms

You can even remove the original IP from the eu-vm:

On eu-vm:

$ sudo ip addr ls #find address starting with 192.168.128
$ sudo ip addr del 192.168.128.x/32 dev eth0  #replace x with address found above

Let's release the compute resources created during the codelab. Run the following command on the jumphost:

$ source <(curl -s https://storage.googleapis.com/nw101/cleanup.sh)

Double-check in Developer Console that everything has been deleted.

The final cleanup step is to delete the project (and its associated billing account), which can be done from the Developers Console. Click the trash can icon next to the project to start the project deletion process.

You have passed the Networking 101 Codelab

What we've covered

Next Steps

Learn More

Give us your feedback