In this lab we will set up Cloud CDN behind a global Load Balancer and use it with Compute Engine and Cloud Storage.

Here is a diagram of all of the elements of an HTTP/HTTPS load balancer setup which we will use:

What you'll learn

What you'll need

Since an HTTP load balancer is needed to run Cloud CDN, create the basic network using Terraform. This will create the following setup:

Enable the Compute Engine API

Enable the Compute Engine API by following this link, selecting the right project and pressing continue.

From the Cloud Shell, clone the git repository:

git clone


Cloning into 'training'...
remote: Enumerating objects: 163, done.
remote: Counting objects: 100% (163/163), done.
remote: Compressing objects: 100% (113/113), done.
remote: Total 802 (delta 98), reused 107 (delta 46), pack-reused 639
Receiving objects: 100% (802/802), 99.74 KiB | 0 bytes/s, done.
Resolving deltas: 100% (469/469), done.

Change to the appropriate folder and install Terraform:

cd ~/training/codelab19v2/
source ~/.bashrc

To deploy the base configuration for the lab, navigate into the CDN directory and run the script.

cd ~/training/codelab19v2/labs/CDN/

Wait for terraform to finish creating the resources:

Apply complete! Resources: 13 added, 0 changed, 0 destroyed.


cdn-ip =

real    1m53.610s
user    0m3.762s
sys     0m0.783s

The cdn-ip listed in the output is the address of the load balancer that we will attach the CDN to.

Enter the address in your browser; when the setup is ready, you should see the web server you created.

First we activate Cloud CDN on the existing load balancer with Compute Engine backend.

In Cloud Shell, run:

gcloud compute backend-services update cdn-backend-service \
    --enable-cdn --global

Wait up to 5 minutes, then check if the CDN is working by running through the following steps.

First save the load balancer IP in a variable.

In Cloud Shell:

export LB_IP=`gcloud compute addresses describe cdn-ip --format="value(address)" --global`

Now run the following curl command:

for ((i=0;i<10;i++)); do curl -w  \
    "%{time_total}\n" -o /dev/null -s http://$LB_IP; done

Each line shows the number of seconds it took to receive an HTTP response from the load balancer, so the first three digits after the decimal point give the latency in milliseconds.

The first few queries might be slow, but after that you should receive an answer within a few milliseconds. If not, repeat the command as necessary; enabling the CDN might not have propagated yet.
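To make the cached-versus-uncached comparison easier, you can average the timings. A minimal sketch, feeding sample numbers instead of the live curl output so the arithmetic is visible:

```shell
# Average per-request latency from a list of timings (one per line).
# The three sample values stand in for the output of the curl loop above.
printf '0.412\n0.009\n0.008\n' \
  | awk '{ sum += $1; n++ } END { printf "avg %.3f s over %d requests\n", sum/n, n }'
```

In practice, you would pipe the curl loop from above directly into the awk command.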

In the Cloud Console, navigate to Network Services > Cloud CDN and select the cdn-map.

Click Monitoring, select the cdn-backend-service backend and set the graph to 1 hour for better visualization.

You should see the cache hit rate and the number of requests increasing. Run the curl command repeatedly if necessary, increasing the value 10 to get more repetitions.

Back in the Cloud Console, navigate to Logging > Logs. On the left side, select Cloud HTTP Load Balancer > cdn-rule > cdn-map and wait for the logs to load.

Select a recent log entry, and expand the httpRequest section. Note that the cacheHit option is true, which means that this request was served by the Cloud CDN cache.

You can also use the filters to limit the search to specific labels, as shown below.

For example, apply the filter -httpRequest.cacheFillBytes:0 to see all cache-fill entries for the start page.

Expand one of the log entries and also the httpRequest option. Note that the cacheFillBytes entry is present; this is the number of HTTP response bytes inserted into the cache.
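If you want to narrow things down to cache hits in general, a filter along these lines can help (the field and resource names are assumed from the HTTP load balancer log schema; adjust if your log viewer shows different names):

```
resource.type="http_load_balancer"
httpRequest.cacheHit=true
```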

Now we will use Cloud CDN in front of Cloud Storage (GCS).

First we create a bucket. In Cloud Shell:

Export your project ID

export PROJECT_ID=$(gcloud config list --format 'value(core.project)')

And create the bucket:

gsutil mb -c REGIONAL -l asia-east1 gs://cdn-$PROJECT_ID
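Bucket names must be globally unique, which is why the lab prefixes your project ID. They must also be 3-63 characters of lowercase letters, digits, dots, dashes and underscores. A quick local sanity check (the project ID below is a made-up example; yours comes from the export above):

```shell
PROJECT_ID='example-project-123'   # example value, not your real project ID
BUCKET="cdn-$PROJECT_ID"

# Check the name against the basic bucket-naming pattern before creating it.
if printf '%s' "$BUCKET" | grep -Eq '^[a-z0-9][a-z0-9._-]{1,61}[a-z0-9]$'; then
  echo "valid bucket name: $BUCKET"
fi
```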

Now copy a picture to the bucket:

gsutil cp gs://cloudnet19-cdn/cdn.png gs://cdn-$PROJECT_ID/static/

and make it publicly available:

gsutil acl ch -u allUsers:R gs://cdn-$PROJECT_ID/static/cdn.png

You now need to create a backend bucket based on your bucket:

gcloud compute backend-buckets create cdn-bucket --gcs-bucket-name cdn-$PROJECT_ID

Now add a path matcher to our existing load balancer:

gcloud compute url-maps add-path-matcher cdn-map \
--default-service cdn-backend-service \
--path-matcher-name bucket-matcher \
--backend-bucket-path-rules "/static/*=cdn-bucket"

Wait a bit (up to 5 minutes, so time for a snack, espresso or tea) for the URL map to become active. Or read some CDN details if you're more the reading type. Please wait the full five minutes: if you run the next command too early, you will have to do extra steps.

Then try to get the picture:

echo $LB_IP

Enter http://[ip-just-returned]/static/cdn.png in your browser, and you should see a picture.

If this doesn't work and you get an Apache error message, your URL map is unfortunately not up to date yet. You'll need to invalidate the CDN cache (follow the next chapter) before you can try again.

If you get the picture, now try timing this again:

for ((i=0;i<10;i++)); do curl -w  \
    "%{time_total}\n" -o /dev/null -s http://$LB_IP/static/cdn.png; done

Again, you should see subsequent requests coming back much faster. Repeat the command a few times if necessary.

However, if you navigate to Network Services > Cloud CDN, you will see a surprise:

Cloud CDN is Disabled on cdn-bucket.

So why did you still get fast responses?

Cloud Storage caches public content at the network edge by default.

However, if you serve frequently requested content this way, you are in for a bad surprise when the bill arrives, because standard network egress pricing applies.

You can only make use of Cloud CDN pricing and advanced features once you activate Cloud CDN on the bucket:

gcloud compute backend-buckets update cdn-bucket --enable-cdn

Go back to the Cloud Console page and refresh:

Crisis averted. Your boss won't be mad because of a massive bill now.

For good measure, see that the caching still works.

for ((i=0;i<10;i++)); do curl -w  \
    "%{time_total}\n" -o /dev/null -s http://$LB_IP/static/cdn.png; done

Yep - all good! You still see super fast results. Your boss must be happy now.

But is she really? See the next chapter of this codelab.

Remember that picture you're serving to the world now? Take a look again.

If you don't remember the URL, get the LB IP from Cloud Shell:

echo $LB_IP

and navigate to http://$LB_IP/static/cdn.png

You should see something like this:

Uh oh.... You used an outdated picture. Why did you do that???

Let's fix this quickly. Copy the correct picture to your bucket:

gsutil cp gs://cloudnet19-cdn/cdn-new.png gs://cdn-$PROJECT_ID/static/cdn.png

Now make the object public again:

gsutil acl ch -u allUsers:R gs://cdn-$PROJECT_ID/static/cdn.png

Load the page again.

Uh oh, no change? The original entry was cached, so unfortunately your users will be served stale content for a while.

That is - unless you invalidate the cache. Sounds like a great idea, let's do that.

gcloud compute url-maps invalidate-cdn-cache cdn-map --path "/static/cdn.png"

This sends off the request to invalidate cache entries so the command takes some time to run.

When it returns load the website again, and the new shiny version of the page should be loaded:

Great - that looks much better.

Signed URLs are a mechanism for query-string authentication for buckets and objects. They provide a way to give time-limited read or write access to anyone in possession of the URL, regardless of whether they have a Google account.

In the Cloud Shell, generate a new key file:

head -c 16 /dev/urandom | base64 | tr +/ -_ > cdn-signed-key
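The key you just generated is what will later sign URLs. Conceptually, per the Cloud CDN documentation, the signature is the base64url-encoded HMAC-SHA1 of the URL (including its Expires and KeyName parameters) under that key. A sketch with a made-up key and URL, not your real ones:

```shell
# Example key and URL for illustration only.
KEY_B64URL='nZtRohdNF9m3cKM24IcK4w=='
URL='http://203.0.113.1/static/cdn.png?Expires=1523380928&KeyName=cdn-key'

# Decode the base64url key to hex so openssl can use it as a binary HMAC key.
KEY_HEX=$(printf '%s' "$KEY_B64URL" | tr -- '-_' '+/' | base64 -d | od -An -tx1 | tr -d ' \n')

# HMAC-SHA1 the URL and base64url-encode the 20-byte digest.
SIG=$(printf '%s' "$URL" | openssl dgst -sha1 -mac HMAC -macopt "hexkey:$KEY_HEX" -binary \
  | base64 | tr -- '+/' '-_')

echo "$URL&Signature=$SIG"
```

The gcloud sign-url command used later in this lab does all of this for you; this sketch only shows what is happening under the hood.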

Create a new backend bucket key:

gcloud compute backend-buckets add-signed-url-key cdn-bucket --key-name cdn-key --key-file cdn-signed-key

You can verify that the new key was successfully created by using the command below:

gcloud compute backend-buckets describe cdn-bucket

Remove public reader rights from your bucket:

gsutil acl ch -d allUsers gs://cdn-$PROJECT_ID/static/cdn.png

Since we are using GCS and have restricted who can read the objects, we need to give Cloud CDN permission to read them by adding the Cloud CDN service account to their ACLs:

export PROJECT_NUM=`gcloud projects describe $PROJECT_ID  --format 'value(projectNumber)'`
gsutil iam ch \
serviceAccount:service-${PROJECT_NUM}@cloud-cdn-fill.iam.gserviceaccount.com:objectViewer \
gs://cdn-$PROJECT_ID

Now create the signed URL, adding your own bucket name:

gcloud compute sign-url \
--key-name cdn-key \
--expires-in 5m \
--key-file cdn-signed-key \
"http://$LB_IP/static/cdn.png"

A new URL will be generated, similar to the one below:

signedUrl: http://LB_IP/static/cdn.png?Expires=1523380928&KeyName=cdn-key&Signature=MOhpm6DwiDzt_CiCeuxn_Ns6qUw=
validationResponseCode: 200

This URL can be shared with users without Google credentials. If you would like to test it, open a new browser window using your personal profile and paste the link you generated.
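The Expires value in the generated URL is a Unix timestamp (seconds since the epoch). For instance, the 5-minute window requested with --expires-in 5m corresponds to the following, sketched for a POSIX shell:

```shell
NOW=$(date +%s)              # current time as seconds since the epoch
EXPIRES=$((NOW + 5 * 60))    # 5 minutes later, matching --expires-in 5m
echo "$EXPIRES"
```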

Signed URLs Logging

Back in the Cloud Console, navigate to Logging > Logs. On the left side, select Cloud HTTP Load Balancer > cdn-rule > cdn-map and wait for the logs to load.

Select the most recent entry that used the signed URL you generated, expand the log entry and also the httpRequest option. Note that the status should be 200, indicating that the request succeeded.

If you wait until the signed URL expires, you should get an error message in your browser. The log entry should show a 403 (Forbidden) status.

Let's clean up all resources we used.

Delete both the backend-bucket key and the key file we created:

gcloud compute backend-buckets delete-signed-url-key cdn-bucket --key-name cdn-key
rm cdn-signed-key

Delete the bucket you created with all contents:

gsutil rm -r gs://cdn-$PROJECT_ID/static/
gsutil rb gs://cdn-$PROJECT_ID/

Delete the backend bucket you created and reverse the URL map changes:

gcloud compute url-maps remove-path-matcher cdn-map \
--path-matcher-name bucket-matcher
gcloud compute backend-buckets delete cdn-bucket

Delete all the remaining resources using Terraform:

cd $HOME/training/codelab19v2/labs/CDN/

You have passed the CDN Codelab!

What we've covered

Next Steps