In this codelab, you'll learn how to perform some common routing and firewalling tasks on Google Compute Engine. We'll set up a demo environment that you'll use throughout the lab.

What you'll learn

What you'll need

How will you use this tutorial?

Read it through only
Read it and complete the exercises

How would you rate your experience with Google Compute Engine?

Novice
Intermediate
Proficient

Codelab-at-a-conference setup

The instructor will share temporary accounts with existing projects that are already set up, so you do not need to worry about enabling billing or any cost associated with running this codelab. Note that all these accounts will be disabled soon after the codelab is over.

Once you have received a temporary username / password to login from the instructor, log into Google Cloud Console:

Here's what you should see once logged in:

Note the project ID you were assigned ("codelab-test003" in the screenshot above). It will be referred to later in this codelab as PROJECT_ID.

Use Google Cloud Shell or your local computer

This codelab can be completed in one of two ways:


To activate Google Cloud Shell, from the developer console simply click the button on the top right-hand side (it should only take a few moments to provision and connect to the environment):

Once connected to the Cloud Shell, you should see that you are already authenticated and that the project is already set to your PROJECT_ID:

gcloud auth list
Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)
gcloud config list project
project = <PROJECT_ID>

If for some reason the project is not set, simply issue the following command:

gcloud config set project <PROJECT_ID>

Looking for your PROJECT_ID? Check what ID you used in the setup steps or look it up in the console dashboard:

IMPORTANT: Finally, set the default zone configuration:

gcloud config set compute/zone us-central1-f

You can pick and choose different zones too. Learn more about zones in the Regions & Zones documentation.
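If you'd like to see which zones are available (including the us-central1 and europe-west1 zones used in this lab), you can list them with gcloud:

```shell
# List Compute Engine zones, filtered to the two regions used in this lab.
gcloud compute zones list --filter="region:(us-central1 europe-west1)"
```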

Start by creating a new network with a us-central1 and a europe-west1 subnetwork, and allow all internal traffic in the network. In this lab we use 192.168.10.0/24 as the example range for the US subnetwork and 192.168.20.0/24 for the EU subnetwork.

Run the following commands either from your local machine with gcloud installed or from the Cloud Shell.

gcloud compute networks create nw102 --mode=custom
gcloud compute networks subnets create nw102-us --network nw102 --range 192.168.10.0/24 --region us-central1
gcloud compute networks subnets create nw102-eu --network nw102 --range 192.168.20.0/24 --region europe-west1
gcloud compute firewall-rules create nw102-allow-internal --network nw102 --source-ranges 192.168.10.0/24,192.168.20.0/24 --allow tcp,udp,icmp
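To confirm that the network, subnetworks, and firewall rule were created as intended, you can describe and list them:

```shell
# Show the custom network, its subnetworks, and its firewall rules.
gcloud compute networks describe nw102
gcloud compute networks subnets list --network nw102
gcloud compute firewall-rules list --filter="network:nw102"
```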

Over the next few pages, we will set up a NAT gateway. "NAT" means Network Address Translation.

Why would you want to set up a NAT gateway?

Things to keep in mind when creating a NAT gateway

Our final setup of the NAT gateway will look like this:

First, let's create the VMs necessary for this.

Create two VM instances as NAT gateways, one in US and one in EU.

Reserving static IPs: A NAT gateway's external IP address should be static, so that we don't accidentally lose the assigned external IP if we have to do maintenance on these VMs. It is generally a best practice to reserve external IP addresses before the VMs are created.

Run the following commands in Cloud Shell or locally

gcloud compute addresses create nat-gw-us-ip --region us-central1
gcloud compute addresses create nat-gw-eu-ip --region europe-west1

Create the NAT gateways: It is recommended that we use debian-8 and centos-7 for these instances, respectively, to familiarize ourselves with the subtle differences between Debian/Ubuntu and RedHat/CentOS, two major Linux distributions available on Compute Engine. In this lab we use Debian for all US VMs and CentOS for all EU VMs.

gcloud compute instances create nat-gw-us --network nw102 --subnet nw102-us --address nat-gw-us-ip --can-ip-forward --zone us-central1-f --image-family debian-8 --image-project debian-cloud
gcloud compute instances create nat-gw-eu --network nw102 --subnet nw102-eu --address nat-gw-eu-ip --can-ip-forward --zone europe-west1-c --image-family centos-7 --image-project centos-cloud

Create node instances: Create one VM instance in US with an external IP, and create the other VM instance in EU without an external IP. This is to see how these instances differ in Internet connectivity.

For these node instances, the --can-ip-forward option is not required.

gcloud compute instances create nat-node-us --network nw102 --subnet nw102-us --zone us-central1-f --image-family debian-8 --image-project debian-cloud
gcloud compute instances create nat-node-eu --network nw102  --subnet nw102-eu --zone europe-west1-c --image-family centos-7 --image-project centos-cloud --no-address

SSH to each instance: Create a firewall rule to allow SSH, then try to SSH into the gateways and the instances. Note that SSH fails for nat-node-eu because it has no external IP. For the other instances, exit the SSH session with the exit command.

gcloud compute firewall-rules create nw102-allow-ssh --network nw102 --source-ranges 0.0.0.0/0 --allow tcp:22
gcloud compute ssh nat-gw-us --zone us-central1-f
gcloud compute ssh nat-gw-eu --zone europe-west1-c
gcloud compute ssh nat-node-us --zone us-central1-f
gcloud compute ssh nat-node-eu --zone europe-west1-c

Now remove the external IP address from nat-node-us and try again. SSH should now fail for this instance as well.

gcloud compute instances delete-access-config nat-node-us --zone us-central1-f
gcloud compute ssh nat-node-us --zone us-central1-f
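If you want to confirm that the external IP is really gone, rather than inferring it from the SSH failure, you can inspect the instance; the natIP field below should come back empty:

```shell
# Print the instance's external NAT IP; empty output means no external IP.
gcloud compute instances describe nat-node-us --zone us-central1-f \
  --format="value(networkInterfaces[0].accessConfigs[0].natIP)"
```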

Configuring NAT egress: Now configure two NAT egress points for the network, so that VM instances in the US (such as nat-node-us) will egress to the Internet using nat-gw-us, while VM instances in EU (such as nat-node-eu) will egress to the Internet using nat-gw-eu. We will use two tags, nat-us and nat-eu, to separate VMs into these partitions.

gcloud compute instances add-tags nat-node-us --zone us-central1-f --tags nat-us
gcloud compute instances add-tags nat-node-eu --zone europe-west1-c --tags nat-eu
gcloud compute routes create nw102-nat-us --network nw102 --tags nat-us --destination-range 0.0.0.0/0 --next-hop-instance nat-gw-us --next-hop-instance-zone us-central1-f --priority 800
gcloud compute routes create nw102-nat-eu --network nw102 --tags nat-eu --destination-range 0.0.0.0/0 --next-hop-instance nat-gw-eu --next-hop-instance-zone europe-west1-c --priority 800

Configuring iptables: Now we need to configure iptables on both NAT gateway instances, nat-gw-us and nat-gw-eu, to make sure traffic through each gateway gets forwarded and NATed to the gateway's own IP.

gcloud compute ssh nat-gw-us --zone us-central1-f

Now run the following commands on nat-gw-us

echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

Now login to nat-gw-eu:

gcloud compute ssh nat-gw-eu --zone europe-west1-c

And run the same commands there:

echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
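On either gateway you can verify that both settings took effect:

```shell
# Should print 1, meaning IP forwarding is enabled.
cat /proc/sys/net/ipv4/ip_forward
# Should list the MASQUERADE rule that was just added.
sudo iptables -t nat -S POSTROUTING
```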

Verify NAT: Now let's verify that NAT is set up as expected. First, we need to SSH into the NATed instances. Since we can't SSH into them directly (they have no external IP addresses), we SSH into a NAT gateway first and then SSH into the instances from there.

gcloud compute ssh nat-gw-us --zone us-central1-f

Then try this.

ssh nat-node-us

Note that you get a "Permission denied" error. We need to run an SSH agent locally.

Make sure you are back on Google Cloud Shell or your local computer:

eval `ssh-agent`
ssh-add ~/.ssh/google_compute_engine
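You can check that the agent is running and now holds the key:

```shell
# List keys currently held by the SSH agent; the google_compute_engine key should appear.
ssh-add -l
```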

Try SSHing into the NAT gateway again; note the use of the SSH flag -A (agent forwarding).

gcloud compute ssh nat-gw-us --ssh-flag="-A" --zone us-central1-f

The following should now succeed from nat-gw-us and you should get a shell terminal.

ssh nat-node-us

Run traceroute on the NATed instance nat-node-us to verify that egress goes through the NAT gateway (we use google.com as an example destination).

traceroute google.com
Run the ping command from one VM instance to verify that it can reach the other VMs in the network, regardless of region.

ping -c 1 <instance-name>

Validate that the same tests work on nat-node-eu as well.

(You might have to install traceroute with sudo yum -y install traceroute, since it is not installed by default on CentOS.)

Exit the SSH session from the node and the gateway.


Restarting the NAT gateway: Before moving on, let's simulate an instance restart of the NAT gateway with the following commands:

gcloud compute instances stop nat-gw-us --zone us-central1-f
gcloud compute instances start nat-gw-us --zone us-central1-f

Try to run traceroute on NATed instance nat-node-us. This time, it should fail. The reason is that the two shell commands to enable NAT cannot survive an instance restart.

Now apply startup scripts before restarting the NAT gateways.

gcloud compute instances add-metadata nat-gw-us --zone us-central1-f --metadata startup-script=\
"#! /bin/bash 
sh -c \"echo 1 > /proc/sys/net/ipv4/ip_forward\" 
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE"
gcloud compute instances stop nat-gw-us --zone us-central1-f
gcloud compute instances start nat-gw-us --zone us-central1-f
gcloud compute instances add-metadata nat-gw-eu --zone europe-west1-c --metadata startup-script=\
"#! /bin/bash
sh -c \"echo 1 > /proc/sys/net/ipv4/ip_forward\"
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE"
gcloud compute instances stop nat-gw-eu --zone europe-west1-c
gcloud compute instances start nat-gw-eu --zone europe-west1-c

Run traceroute on NATed instance nat-node-us. The problem should be fixed, and the NAT will persist after an instance restart.

So far we have reduced the attack surface of the Compute Engine network to the NAT gateways. Notice that the firewall rule nw102-allow-ssh allows SSH from anywhere in the world, which can be highly undesirable. The Internet is constantly scanned for open ports, and you can find invalid SSH login attempts in the logs once a VM has been up for a while.

Let's further restrict the firewall rules so that SSH can only be reached from the current desktop.

First, determine the current egress IP from the local machine.

The easiest way is to use a third-party service, for example:

curl ifconfig.me
Then change the firewall rule below.

gcloud compute firewall-rules delete nw102-allow-ssh -q

gcloud compute firewall-rules create nw102-allow-ssh --network nw102 --source-ranges <current-egress-ip>/32 --allow tcp:22

Verify that SSH is possible from the local machine, but impossible from Developer Console's in-browser SSH (as this uses a proxy).

Next we will create a "tiered network", controlling access at the network layer between different types of servers.

We will create a sample multi-tier architecture with web servers and application servers, and then create logical partitions in the network. From the gateway, it should be possible to communicate over HTTP and SSH with both web and application servers, and servers within a tier should be able to communicate with each other, but HTTP and SSH traffic between the tiers should be restricted. This is done by tagging the instances and restricting traffic with firewall rules.

See the following illustrations for permitted and non-permitted traffic:

Create one more VM instance in each zone.

gcloud compute instances create nat-node-w-us --network nw102 --subnet nw102-us --zone us-central1-f  --image-family debian-8 --image-project debian-cloud --no-address --tags nat-us
gcloud compute instances create nat-node-w-eu --network nw102 --subnet nw102-eu --zone europe-west1-c --image-family centos-7 --image-project centos-cloud --no-address --tags nat-eu

Install Apache on all four NATed VM instances: nat-node-us, nat-node-w-us, nat-node-eu, and nat-node-w-eu. The steps differ depending on the OS; SSH into each instance via its NAT gateway and run the commands for its distribution.

The following are steps to install Apache on Debian (nat-node-us and nat-node-w-us).

sudo apt-get update
sudo apt-get install apache2 -y
echo `hostname` | sudo tee /var/www/html/index.html

The following are steps to install Apache on CentOS (nat-node-eu and nat-node-w-eu).

sudo yum install httpd -y
echo `hostname` | sudo tee /var/www/html/index.html
sudo service httpd start

Note that one may use curl to reach another node.

curl <instance-name>

Lock down the network: Update the internal firewall rule to allow only ICMP, so that ping still works within the network for troubleshooting purposes.

gcloud compute firewall-rules update nw102-allow-internal --source-ranges 192.168.10.0/24,192.168.20.0/24 --allow icmp

Assign tags: Assign tags gw, app and web to VM instances.

gcloud compute instances add-tags nat-gw-us --zone us-central1-f --tags gw
gcloud compute instances add-tags nat-gw-eu --zone europe-west1-c --tags gw
gcloud compute instances add-tags nat-node-us --zone us-central1-f --tags app
gcloud compute instances add-tags nat-node-eu --zone europe-west1-c --tags app
gcloud compute instances add-tags nat-node-w-us --zone us-central1-f --tags web
gcloud compute instances add-tags nat-node-w-eu --zone europe-west1-c --tags web

Permit HTTP and SSH: Create additional firewall rules to permit curl among VMs in the same tier or from the NAT gateways. These rules also allow SSH from the NAT gateways into each tier.

gcloud compute firewall-rules create nw102-allow-app --network nw102 --source-tags gw,app --target-tags app --allow tcp:22,tcp:80
gcloud compute firewall-rules create nw102-allow-web --network nw102 --source-tags gw,web --target-tags web --allow tcp:22,tcp:80

Allow Egress: Create a new firewall rule to allow egress traffic.

gcloud compute firewall-rules create nw102-allow-egress --network nw102  --source-tags app,web --target-tags gw --allow tcp:80,tcp:443

Allow traceroute: Create another new firewall rule to allow traceroute to flow through the NAT gateways.

gcloud compute firewall-rules create nw102-allow-traceroute --network nw102 --source-ranges 192.168.10.0/24,192.168.20.0/24 --target-tags gw --allow udp:33434-33534

Verify that connections are allowed within the same zone and from the NAT gateway instances. Verify that connections are refused between app and web. To test SSH, make sure to include the option -A.
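For example, assuming the tags and firewall rules above are in place, the following spot checks on the US side should behave as described in the comments (the EU side is analogous):

```shell
# On nat-gw-us (tag gw): both tiers should answer over HTTP.
curl -m 5 nat-node-us      # app tier, allowed by nw102-allow-app
curl -m 5 nat-node-w-us    # web tier, allowed by nw102-allow-web

# From the app tier to the web tier, HTTP should be blocked:
ssh nat-node-us 'curl -m 5 nat-node-w-us'   # expected to time out
```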

On the next few pages we will explore some alternative ways of providing connectivity to or from instances for some common scenarios.

One scenario we will explore is accessing internal services on an instance that has only a private IP, via an SSH tunnel. This keeps the instance unexposed to other users.

The following diagram shows the scenario.

The next scenario will be to expose external services via a forwarding rule instead of the standard external IP. Since you can easily reassign the forwarding rule to another VM, this enables different migration scenarios.

The following diagram shows the scenario.

Run the following command and map the web port 80 to a local port 7000.

gcloud compute ssh nat-gw-us --zone us-central1-f --ssh-flag="-nNTL 7000:nat-node-w-us:80"

Access the web site using a local browser via localhost at port 7000.

Press Ctrl-C to stop the SSH tunnel and exit.

Bonus question: If the internal web site uses SSL/TLS, how would the browser behave?

Answer: The browser would warn that the certificate is invalid, because the hostname (localhost) does not match the name in the certificate.

Add a new firewall rule to allow external access to web services in the tier web.

gcloud compute firewall-rules create nw102-allow-ext --network nw102 --source-ranges 0.0.0.0/0 --target-tags web --allow tcp:80

Create the forwarding rule to expose web service through another external IP.

gcloud compute addresses create web-ext-ip --region us-central1
gcloud compute target-instances create web-target --instance nat-node-w-us --zone us-central1-f
gcloud compute forwarding-rules create web-ext --address <web-ext-ip> --port-range 80 --region us-central1 --target-instance web-target --target-instance-zone us-central1-f

Access the web site using a local browser via the IP address from web-ext-ip.

When using a forwarding rule to expose the web service, note that the external IP <web-ext-ip> is not tied to the VM instance. This makes it easy to turn the service off or move it while you perform maintenance on the running VM.

If we had another VM instance like nat-node-w-us in us-central1, we could re-create the target instance to point to that alternate VM instance and modify the forwarding rule to point to the new target instance. That allows a server rotation from one VM to another.

The steps are skipped here as they are straightforward. Note that this manual server rotation solves a different problem than using a load balancer in front of a pool of VM instances.
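As a sketch (assuming a second web VM, here called nat-node-w2-us, already exists in us-central1; the name is hypothetical), the rotation could look like this:

```shell
# Create a target instance pointing at the alternate VM (hypothetical name).
gcloud compute target-instances create web-target-2 \
  --instance nat-node-w2-us --zone us-central1-f

# Repoint the existing forwarding rule at the new target instance.
gcloud compute forwarding-rules set-target web-ext \
  --region us-central1 \
  --target-instance web-target-2 --target-instance-zone us-central1-f
```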

The remainder of this optional lab demonstrates how to transfer web traffic to a different VM instance without appreciable disruption by changing DNS entries, a technique commonly used to drain traffic from one server to another.

On your local machine, edit /etc/hosts and append the following line:

<web-ext-ip> mywebtest 

Go to your browser and enter the host name mywebtest in the address bar. It should indicate that the server is nat-node-w-us. Or run curl mywebtest.

curl mywebtest

In preparation for the IP migration, create a new forwarding rule with a new IP address.

gcloud compute addresses create new-web-ext-ip --region europe-west1
gcloud compute target-instances create new-web-target --instance nat-node-w-eu --zone europe-west1-c
gcloud compute forwarding-rules create new-web-ext --address <new-web-ext-ip> --port-range 80 --region europe-west1 --target-instance new-web-target --target-instance-zone europe-west1-c

We should have both VM instances serving web traffic simultaneously.

Now, we simulate DNS records update via /etc/hosts. Edit the file and modify that line.

<new-web-ext-ip> mywebtest 

For a local demo, this is instant. For actual DNS records, wait until the record TTL has passed. Refresh the browser with the pseudo host name mywebtest. It should indicate the server is nat-node-w-eu. Or run curl mywebtest.

curl mywebtest

You may safely delete artifacts associated with web-ext and perform maintenance tasks on nat-node-w-us.

Sometimes you want to make a service that is external to your Google Compute Engine environment show up as if it was internal to the environment. In the next part of the codelab, we will explore how to set this up using the NAT gateway we created earlier.

First we will create a VM external to our lab network (in the default network) that stands in for an external resource. In your production environment this could be any resource on your on-premises network. Then we will expose this external resource via the NAT gateway. After that, in an optional step, we will expose it through an extra static IP attached to the NAT gateway.

See the following diagram for schematics:

First, let's simulate an external service.

Create a standalone VM in the default network with web serving capability and create a firewall to allow traffic to it via TCP and port 80.

gcloud compute instances create faux-on-prem-svc --network default --zone us-central1-f  --image-family debian-8 --image-project debian-cloud --tags http-server
gcloud compute firewall-rules create default-allow-http-server --network default --target-tags http-server --allow tcp:80

Install Apache on the standalone VM that was just created.

gcloud compute ssh faux-on-prem-svc --zone us-central1-f

On the VM, run:

sudo apt-get install -y apache2

Then, make sure the homepage can be loaded in a web browser using the external IP address of the standalone VM.

Log in to nat-gw-us and map the web service as an internal service using iptables with the following commands:

sudo iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j DNAT --to <faux-on-prem-svc-external-ip>:80
sudo iptables -A POSTROUTING -t nat -o eth0 -j SNAT --to-source <nat-gw-us-internal-ip>

Log out and create a new firewall rule to allow internal access to the internal web site mapped from the external web site.

gcloud compute firewall-rules create nw102-allow-on-prem --network nw102  --source-tags app,web --target-tags gw --allow tcp:80

From nat-node-us, run curl against nat-gw-us. It should show the website of the newly installed web server.

curl nat-gw-us

On nat-gw-us, choose a static IP outside of any existing network's address ranges in the current project. We will use 192.168.30.11 as an example below.

On nat-gw-us do the following:

sudo su -
cat <<EOF >>/etc/network/interfaces
auto eth0:0
iface eth0:0 inet static
address 192.168.30.11
netmask 255.255.255.255
EOF
service networking restart

Create a static route for this address.

gcloud compute routes create nw102-192-168-30-11 --network nw102 --destination-range 192.168.30.11/32 --next-hop-instance nat-gw-us --next-hop-instance-zone us-central1-f

Create a firewall rule to allow traffic to reach the web site. Note that the source range is expanded from the current subnetwork ranges (192.168.10.0/24 and 192.168.20.0/24) to 192.168.0.0/16 to cover the static IP's address.

gcloud compute firewall-rules create nw102-allow-on-prem-alt --network nw102 --source-ranges 192.168.0.0/16 --target-tags gw --allow tcp:80

From nat-gw-us, re-map the web service as an internal service.

sudo iptables -D POSTROUTING -t nat -o eth0 -j SNAT --to-source <nat-gw-us-internal-ip>
sudo iptables -A POSTROUTING -t nat -o eth0 -j SNAT --to-source 192.168.30.11

From nat-node-us, run curl against 192.168.30.11. You should see the new website on this IP too.

curl 192.168.30.11
As a last step of this lab we will explore the common need for a proxy to access external resources. This is needed if you want to restrict which resources can be accessed from the Virtual Machines in your Google Compute Engine environment. For example, if you want to allow access to Google APIs but limit access to the Internet otherwise.

We use the open source software Squid to implement this, but other software can achieve the same goals. SSL/TLS is possible and access otherwise is restricted through the service account scopes given on creation of the Google Compute Engine virtual machines.

We will use the NAT gateway to host the proxy, but we will not use its NAT functionality. Please note that the NAT functionality is not required to make the proxy work, so you could start with any instance with an external IP.

See the following diagram for schematics:

Create a new VM with the full Cloud Platform access scope.

gcloud compute instances create nat-node-gcp-eu --network nw102 --subnet nw102-eu --zone europe-west1-c --image-family centos-7 --image-project centos-cloud --scopes cloud-platform

Connect via SSH to its external IP.

gcloud compute ssh nat-node-gcp-eu --zone europe-west1-c

The scope of the VM allows its service account to access all Google Cloud Platform services. Let's first create a bucket using the service account. Please note that bucket names must be globally unique, so choose any unique bucket name.

gsutil mb gs://nw102-<any-unique-id>

Make sure the Cloud SDK works with Compute Engine resources within the current project.

gcloud compute instances list

From nat-node-gcp-eu, the following tests for Internet egress should all succeed.

curl -L google.com
curl -L www.googleapis.com
curl <faux-on-prem-svc-external-ip>

Log out and remove the instance's external IP and apply the tag app to allow SSH from the NAT gateway.

gcloud compute instances delete-access-config nat-node-gcp-eu --zone europe-west1-c
gcloud compute instances add-tags nat-node-gcp-eu --zone europe-west1-c --tags app

Connect via SSH to the NAT gateway.

gcloud compute ssh nat-gw-eu --ssh-flag="-A" --zone europe-west1-c

Connect via SSH to the VM instance from the NAT gateway.

ssh nat-node-gcp-eu

Try to access Google Cloud Platform resources or Internet sites, and note how the commands hang or throw errors.

Use Ctrl-C to abort, if necessary.

gsutil ls gs://
gcloud compute instances list
curl -L google.com

Exit the environment


First, we need to install Squid and configure an egress whitelist that includes .googleapis.com (so that the Cloud SDK keeps working) and possibly corporate-approved Internet addresses (using <faux-on-prem-svc-external-ip> as an example).

SSH into the NAT gateway.

gcloud compute ssh nat-gw-eu --ssh-flag="-A" --zone europe-west1-c

Run the following steps on nat-gw-eu:

sudo yum install squid -y
sudo su -
cat <<EOF >>/etc/squid/whitelisted-domains.txt
.googleapis.com
<faux-on-prem-svc-external-ip>
EOF
vi /etc/squid/squid.conf # or use your favourite editor - see below
sudo service squid restart

Note that we have whitelisted .googleapis.com, as well as the IP address of faux-on-prem-svc as an example of an additional site to be whitelisted.

Here is a sample squid.conf, the Squid configuration file. Starting from the default, change the commented-out Safe_ports (now only 80 and 443 are Safe_ports) and insert the two nw102-approved lines after "INSERT YOUR OWN RULE(S) HERE".


# Recommended minimum configuration:

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7       # RFC 4193 local private network range
acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80                # http
#acl Safe_ports port 21                # ftp
acl Safe_ports port 443                # https
#acl Safe_ports port 70                # gopher
#acl Safe_ports port 210                # wais
#acl Safe_ports port 1025-65535        # unregistered ports
#acl Safe_ports port 280                # http-mgmt
#acl Safe_ports port 488                # gss-http
#acl Safe_ports port 591                # filemaker
#acl Safe_ports port 777                # multiling http

# Recommended minimum Access Permission configuration:
# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
#http_access allow localhost manager
#http_access deny manager

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

acl nw102-approved dstdomain "/etc/squid/whitelisted-domains.txt"
http_access allow nw102-approved

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
http_port 3128

# Uncomment and adjust the following to add a disk cache directory.
#cache_dir ufs /var/spool/squid 100 16 256

# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
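Before restarting Squid, it can be worth validating the edited configuration; Squid's parse mode reports syntax errors without touching the running service:

```shell
# Check /etc/squid/squid.conf for syntax errors.
sudo squid -k parse
```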

Create a new firewall rule to permit use of the proxy server on the port configured in the Squid configuration.

gcloud compute firewall-rules create nw102-allow-proxy --network nw102 --source-ranges 192.168.10.0/24,192.168.20.0/24 --target-tags gw --allow tcp:3128

Next, connect via SSH through nat-gw-eu to nat-node-gcp-eu.

gcloud compute ssh nat-gw-eu --ssh-flag="-A" --zone europe-west1-c
ssh nat-node-gcp-eu

Make additional changes so that the proxy server is used for all connections.

sudo su -
cat <<EOF >>/etc/profile
export http_proxy=http://nat-gw-eu:3128
export https_proxy=http://nat-gw-eu:3128
EOF
exit

Log out of nat-node-gcp-eu and re-log in from nat-gw-eu so that the new profile takes effect.

ssh nat-node-gcp-eu

Repeat the following tests to determine whether Internet egress is working (use Ctrl-C to abort a hung command).

curl <faux-on-prem-svc-external-ip>

It should work for <faux-on-prem-svc-external-ip> only, as this is whitelisted.

Check GCS bucket. This should succeed.

gsutil ls gs://

Try a Compute Engine command. This fails with an error.

gcloud compute instances list

For the Cloud SDK to function in such a restrictive environment, it needs to access the Compute Engine metadata service (on 169.254.169.254). Requests to the metadata service should not go through the proxy.

To do this, add a proxy exception for those.

Run the following steps:

sudo su -
cat <<EOF >>/etc/profile
export no_proxy=".internal,localhost,127.0.0.1,metadata,169.254.169.254"
EOF
exit
exit
ssh nat-node-gcp-eu

Test the following again on nat-node-gcp-eu. This time, it should succeed.

gcloud compute instances list

We have successfully set up a proxy that allows access to Google Cloud Platform services and another chosen IP, but not the Internet as a whole.


Let's release compute resources created during the lab.

gcloud compute forwarding-rules delete new-web-ext --region europe-west1 -q
gcloud compute forwarding-rules delete web-ext --region us-central1 -q
gcloud compute target-instances delete new-web-target --zone europe-west1-c -q
gcloud compute addresses delete new-web-ext-ip --region europe-west1 -q
gcloud compute routes delete nw102-192-168-30-11 -q
gcloud compute firewall-rules delete nw102-allow-on-prem-alt -q
gcloud compute target-instances delete web-target --zone us-central1-f -q
gcloud compute instances delete nat-node-us nat-node-w-us nat-gw-us faux-on-prem-svc --zone us-central1-f --delete-disks all -q
gcloud compute instances delete nat-node-eu nat-node-w-eu nat-gw-eu nat-node-gcp-eu --zone europe-west1-c --delete-disks all -q
gcloud compute addresses delete nat-gw-us-ip web-ext-ip --region us-central1 -q
gcloud compute addresses delete nat-gw-eu-ip --region europe-west1 -q
gcloud compute routes delete nw102-nat-us nw102-nat-eu -q
gcloud compute firewall-rules delete nw102-allow-proxy nw102-allow-on-prem nw102-allow-ext nw102-allow-app nw102-allow-web nw102-allow-egress nw102-allow-traceroute nw102-allow-ssh nw102-allow-internal -q
gcloud compute networks subnets delete nw102-eu --region europe-west1 -q
gcloud compute networks subnets delete nw102-us --region us-central1 -q
gcloud compute networks delete nw102 -q
gsutil rb -f gs://nw102-<your-unique-id>

Double-check in Developer Console that everything has been deleted.

Delete the project (and its associated billing account) from the Developer Console, if no longer in need.

You have completed the Networking 102 Codelab!

What we've covered

Next Steps

Learn More

Give us your feedback