Google Kubernetes Engine makes it easy to run Docker containers in the cloud. It uses Kubernetes, an open source container orchestration system, to ensure that your cluster is running exactly the way you want it to at all times.
Follow along with this lab to learn how to launch a container on Google Kubernetes Engine.
The instructor will be sharing with you temporary accounts with existing projects that are already set up, so you do not need to worry about enabling billing or any cost associated with running this codelab. Note that all of these accounts will be disabled soon after the codelab is over.
Once you have received a temporary username / password to login from the instructor, log into Google Cloud Console: https://console.cloud.google.com/.
Once you log in, click Agree and Continue to accept the Terms of Service:
You will now be taken to the project selection screen. Dismiss the free trial popup (1) and select the precreated project (2). If your screen does not look like this, please inform a codelab proctor.
Note the project ID you were assigned ("webcrew16-tok-7015" in the screenshot above). It will be referred to later in this codelab as PROJECT_ID.
In this section you'll create a Google Kubernetes Engine cluster.
Navigate to the Google Cloud Console in another browser tab or window: https://console.cloud.google.com. Use the login credentials given to you by the lab proctor.
Search for "Kubernetes Engine" in the search box. Click on "Kubernetes Engine" in the results list that appears.
Then, wait for the API to be enabled.
Launch Cloud Shell by clicking on the terminal icon in the top toolbar.
Cloud Shell is a browser-based terminal to a virtual machine that has the Google Cloud Platform tools installed on it, along with some additional tools (such as editors and compilers) that are handy when you are developing or debugging your cloud application.
We'll be using the gcloud command to create the cluster. First, though, we need to set the compute zone so that the virtual machines in our cluster are created in the correct region. We can do this using gcloud config set compute/zone. Enter the following in Cloud Shell:
gcloud config set compute/zone us-central1-f
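If you want to double-check the setting, you can print the configured zone back out; this step is optional:
gcloud config get-value compute/zone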
You can create a new container cluster with the gcloud command like this:
gcloud container clusters create hello-node-cluster --num-nodes 3
gcloud container clusters get-credentials hello-node-cluster
gcloud config set container/cluster hello-node-cluster
These commands create a new cluster called hello-node-cluster with three nodes (VMs), get credentials for accessing the cluster before doing the deployment, and set the default cluster for the kubectl command. You can configure the create command with additional flags to change the number of nodes, the default permissions, and other variables. See the documentation for more details.
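For illustration only (do not run this during the lab), a create command using some of those flags might look like the following; the cluster name and machine type here are just example values:
# example only: a larger cluster with bigger machines
gcloud container clusters create example-cluster \
    --num-nodes 5 --machine-type n1-standard-2 --zone us-central1-f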
Launching the cluster may take a bit of time but once it is up you should see output in Cloud Shell that looks like this:
NAME                ZONE           MASTER_VERSION  MASTER_IP        MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
hello-node-cluster  us-central1-f  1.9.6           104.197.119.168  n1-standard-1  1.9.6         3          RUNNING
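Because get-credentials has already configured kubectl for this cluster, you can optionally confirm that all three nodes registered:
kubectl get nodes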
The next step is to build and publish a container that contains your code. We will be using Docker to build our container, and Google Container Registry to securely publish it.
You will be using the Google Cloud Project ID in many of the commands in this lab. The Project ID is conveniently stored in an environment variable in Cloud Shell. You can see it by running:
echo $DEVSHELL_PROJECT_ID
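If that variable happens to be empty in your shell, you can set it yourself from the active gcloud configuration; this is a workaround rather than an official lab step:
# fall back to the project configured in gcloud
export DEVSHELL_PROJECT_ID=$(gcloud config get-value project)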
Next, clone the sample application and change into the sample directory:
git clone https://github.com/GoogleCloudPlatform/nodejs-docs-samples.git
cd nodejs-docs-samples/containerengine/hello-world/
Docker containers are built using a Dockerfile. The sample code provides a basic Dockerfile that we can use. Here are the contents of the file:
FROM node:6-alpine
EXPOSE 8080
COPY server.js .
CMD node server.js
To build the container, run the following command:
docker build -t gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0 .
This builds a Docker container image and stores it locally.
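If you would like to test the image locally before publishing it, you can run it in Cloud Shell and send it a request. This optional check assumes the sample server listens on port 8080 (as the Dockerfile's EXPOSE line suggests); the container name hello-node-test is just a placeholder:
# run the image in the background and map port 8080
docker run -d -p 8080:8080 --name hello-node-test gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0
curl http://localhost:8080
# clean up the test container
docker rm -f hello-node-test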
In order for Kubernetes to access your image, you need to store it in a container registry. We are going to store it in the Google Container Registry.
First, set up Docker to push to Container Registry:
gcloud auth configure-docker
Run the following command to publish your container image:
docker push gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0
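You can optionally confirm that the image reached Container Registry by listing its tags:
gcloud container images list-tags gcr.io/$DEVSHELL_PROJECT_ID/hello-node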
Now that we have a cluster running and our application built, it is time to deploy it.
A deployment is a core component of Kubernetes that makes sure your application is always running. A deployment schedules and manages a set of pods on the cluster. A pod is one or more containers that "travel together". That might mean they are administered together or they have the same network requirements. For this example we only have one container in our pod.
Typically, you would create a yaml file with the configuration for the deployment. In this example, we are going to skip this step and instead directly create the deployment on the command line.
Create the deployment (and its pod) using kubectl:
kubectl create deployment hello-node --image=gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0
This command starts up one copy of the Docker image on one of the nodes in the cluster.
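If you are curious what the YAML file mentioned above would contain, you can ask kubectl to print the equivalent configuration without creating anything. On newer kubectl versions the flag is --dry-run=client instead of --dry-run:
kubectl create deployment hello-node \
    --image=gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0 --dry-run -o yaml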
You can see the deployment you created using kubectl.
kubectl get deployments
You should get back a result that looks something like:
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   1         1         1            1           30s
You can see the pod running using kubectl as well.
kubectl get pods
You should get back a result that looks something like:
NAME                          READY     STATUS    RESTARTS   AGE
hello-node-3375482827-7hs3q   1/1       Running   0          1m
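If your pod is not in the Running state, or you simply want a closer look, kubectl can show the pod's details and the application's logs. The -l selector below relies on the app=hello-node label that kubectl create deployment applies to the pods it creates:
kubectl describe pods -l app=hello-node
kubectl logs deployment/hello-node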
By default, a pod is only accessible to other machines inside the cluster. In order to use the Node.js container that was created, it needs to be exposed as a service.
Typically, you would create a yaml file with the configuration for the service. In this example, we are going to skip this step and instead directly create the service on the command line.
Expose the deployment with the kubectl expose command.
kubectl expose deployment hello-node --type LoadBalancer \
    --port 80 --target-port 8080
kubectl expose creates a service, the forwarding rules for the load balancer, and the firewall rules that allow external traffic to be sent to the pod. The --type LoadBalancer flag creates a Google Cloud Network Load Balancer that will accept external traffic.
To get the IP address for your service, run the following command:
kubectl get svc hello-node
You should get back a result that looks something like:
NAME         CLUSTER-IP    EXTERNAL-IP       PORT(S)   AGE
hello-node   10.3.247.85   104.198.151.208   80/TCP    8m
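If the EXTERNAL-IP column shows <pending>, the load balancer is still being provisioned; re-run the command after a minute or watch until the address appears, then test it from the terminal (replace [EXTERNAL-IP] with the address you see):
# watch the service until an external IP is assigned (Ctrl+C to stop)
kubectl get svc hello-node -w
curl http://[EXTERNAL-IP]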
Open a new browser window or tab and navigate to the external IP address from the previous step. You should see the sample code up and running!
Google Kubernetes Engine and Kubernetes provide a powerful and flexible way to run containers on Google Cloud Platform. Kubernetes can also be used on your own hardware or on other Cloud Providers.
This example only used a single container, but it is simple to set up multi-container environments or multiple instances of a single container as well.
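For example, scaling this deployment to several replicas of the same container is a single command (not a required step in this lab):
kubectl scale deployment hello-node --replicas=3
kubectl get pods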