The goal of this codelab is for you to turn your code (a simple Hello World Node.js app here) into a replicated application running on Kubernetes. We will show you how to take code that you have developed on your machine, turn it into a Docker container image, and then run that image on Google Container Engine.

Here's a diagram of the various parts in play in this codelab to help you understand how pieces fit with one another. Use this as a reference as we progress through the codelab; it should all make sense by the time we get to the end (but feel free to ignore this for now).

Kubernetes is an open source project (available on kubernetes.io) which can run on many different environments, from laptops to high-availability multi-node clusters, from public clouds to on-premises deployments, from virtual machines to bare metal.

For the purpose of this codelab, using a managed environment such as Google Container Engine (a Google-hosted version of Kubernetes running on Compute Engine) will allow you to focus more on experiencing Kubernetes rather than setting up the underlying infrastructure.

If you are interested in running Kubernetes on your local machine, say a development laptop, you should probably look into Minikube: http://kubernetes.io/docs/getting-started-guides/minikube/. This offers a simple setup of a single-node Kubernetes cluster for development and testing purposes. You can use Minikube to go through this codelab if you wish.
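If you do try Minikube, the basic flow looks like this (a sketch only; exact flags and behavior vary by Minikube release, and this codelab does not depend on it):

```shell
# Start a local single-node Kubernetes cluster
# (the first run downloads the VM image, so it can take a while)
minikube start

# kubectl is automatically pointed at the local cluster;
# verify that the single node is up and ready
kubectl get nodes

# When you are done, stop the local cluster
minikube stop
```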

Codelab-at-a-conference setup

The instructor will be sharing with you temporary accounts with existing projects that are already set up, so you do not need to worry about enabling billing or any cost associated with running this codelab. Note that all these accounts will be disabled soon after the codelab is over.

Once you have received a temporary username and password from the instructor, log into the Google Cloud Console: https://console.cloud.google.com/. Here's what you should see once logged in:

From there, click on the project name to get to the dashboard view:

Note the project ID you were assigned ("cgcpnext2016-sf-1234" in the screenshot above). It will be referred to later in this codelab as PROJECT_ID.

Google Cloud Shell

While Google Cloud and Kubernetes can be operated remotely from your laptop, in this codelab we will be using Google Cloud Shell, a command line environment running in the Cloud.

This Debian-based virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory, and runs on the Google Cloud, greatly enhancing network performance and authentication. This means that all you will need for this codelab is a browser (yes, it works on a Chromebook).

To activate Google Cloud Shell, from the developer console simply click the button on the top right-hand side (it should only take a few moments to provision and connect to the environment):

Once connected to the cloud shell, you should see that you are already authenticated and that the project is already set to your PROJECT_ID:

gcloud auth list

Command output

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)
gcloud config list project

Command output

[core]
project = <PROJECT_ID>

If for some reason the project is not set, simply issue the following command:

gcloud config set project <PROJECT_ID>

Looking for your PROJECT_ID? Check out what ID you used in the setup steps or look it up in the console dashboard:

IMPORTANT: Finally, set the default compute zone:

gcloud config set compute/zone us-central1-f

You can pick and choose different zones too. Learn more about zones in the Regions & Zones documentation.

The first step is to write the application that we want to deploy to Google Container Engine. Here is a simple Node.js server:

$ nano server.js
var http = require('http');

// Respond to every request with a 200 status and a short message
var handleRequest = function(request, response) {
  response.writeHead(200);
  response.end("Hello World!");
};

var www = http.createServer(handleRequest);
www.listen(8080);

From Cloud Shell, save the server.js file and exit the editor. Since Cloud Shell has the node executable installed, we can now run this simple command:

$ node server.js

and use the built-in Web preview feature of Cloud Shell to open a new browser tab and proxy a request to the instance you just started on port 8080.


Now, more importantly, let's package this application in a Docker container.

Before we continue, stop the running node server by pressing Ctrl-C in Cloud Shell.

Next, create a Dockerfile which describes the image that you want to build. Docker container images can extend from other existing images, so for this image we'll extend from an existing Node image.

$ nano Dockerfile
FROM node:4.4
EXPOSE 8080
COPY server.js .
CMD node server.js

This "recipe" for the Docker image will start from the node image found on Docker Hub, expose port 8080, copy our server.js file to the image, and start the node server as we previously did manually.

Save this Dockerfile, then build the image by running this command (make sure to replace PROJECT_ID with yours):

$ docker build -t gcr.io/PROJECT_ID/hello-node:v1 .

Once this completes (it'll take some time to download and extract everything), you can test the image locally with the following command, which will run a Docker container as a daemon on port 8080 from our newly created container image:

$ docker run -d -p 8080:8080 gcr.io/PROJECT_ID/hello-node:v1
325301e6b2bffd1d0049c621866831316d653c0b25a496d04ce0ec6854cb7998

And again, take advantage of the Web preview feature of Cloud Shell:

Or use curl or wget from your Cloud Shell prompt if you'd like:

$ curl http://localhost:8080
Hello World!

Let's now stop the running container. In this example, our app was running as Docker container 2c66d0efcbd4:

$ docker ps
CONTAINER ID        IMAGE                              COMMAND
2c66d0efcbd4        gcr.io/PROJECT_ID/hello-node:v1    "/bin/sh -c 'node    
$ docker stop 2c66d0efcbd4
2c66d0efcbd4

Now that the image works as intended, we can push it to the Google Container Registry, a private repository for your Docker images accessible from every Google Cloud project (but also from outside Google Cloud Platform):

$ gcloud docker push gcr.io/PROJECT_ID/hello-node:v1

If all goes well, after a little while you should see the container image listed in the console: Compute > Container Engine > Container Registry. At this point we have a project-wide Docker image available which Kubernetes can access and orchestrate, as we'll see in a few minutes.

If you're curious, you can navigate through the container images as they are stored in Google Cloud Storage by following this link: https://console.cloud.google.com/storage/browser/ (the full resulting link should be of this form: https://console.cloud.google.com/project/PROJECT_ID/storage/browser/).

OK, you are now ready to create your Container Engine cluster. But before that, navigate to the Google Container Engine section of the web console and wait for the system to initialize (it should only take a few seconds).

A cluster consists of a Kubernetes master API server hosted by Google and a set of worker nodes. The worker nodes are Compute Engine virtual machines. Let's use the gcloud CLI from your Cloud Shell session to create a cluster with two n1-standard-1 nodes (this will take a few minutes to complete):

$ gcloud container clusters create hello-world \
                --num-nodes 2 \
                --machine-type n1-standard-1 \
                --zone us-central1-f
Creating cluster hello-world...done.
Created [https://container.googleapis.com/v1/projects/kubernetes-codelab/zones/us-central1-f/clusters/hello-world].
kubeconfig entry generated for hello-world.
NAME         ZONE           MASTER_VERSION  MASTER_IP       MACHINE_TYPE   STATUS
hello-world  us-central1-f  1.3.2           146.148.46.124  n1-standard-1  RUNNING

You should now have a fully-functioning Kubernetes cluster powered by Google Container Engine:

It's now time to deploy your own containerized application to the Kubernetes cluster! From now on we'll use the kubectl command line tool (already set up in your Cloud Shell environment). The rest of this codelab requires both the Kubernetes client and server versions to be 1.2 or above. kubectl version will show you the current versions.

A Kubernetes pod is a group of containers tied together for the purposes of administration and networking. It can contain a single container or multiple containers. Here we'll simply use one container built with your Node.js image stored in our private container registry. It will serve content on port 8080.
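Although we'll create the pod with kubectl run below, it may help to see roughly what an equivalent pod manifest looks like. This is an illustrative sketch only (names and labels are assumptions, and you don't need to apply it in this codelab):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-node            # illustrative pod name
  labels:
    run: hello-node
spec:
  containers:
  - name: hello-node
    image: gcr.io/PROJECT_ID/hello-node:v1   # replace PROJECT_ID with yours
    ports:
    - containerPort: 8080     # the port our Node.js server listens on
```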

Let's now create a pod with the kubectl run command (replace PROJECT_ID with your own project ID):

$ kubectl run hello-node \
    --image=gcr.io/PROJECT_ID/hello-node:v1 \
    --port=8080
deployment "hello-node" created

As you can see, we've created a deployment object. Deployments are the recommended way to create and scale pods. Here, a new deployment manages a single pod replica running the hello-node:v1 image.

To view the deployment we just created, simply run:

$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   1         1         1            1           2m

To view the pod created by the deployment, run this command:

$ kubectl get pods
NAME                         READY     STATUS    RESTARTS   AGE
hello-node-714049816-ztzrb   1/1       Running   0          6m

Now is a good time to run through some interesting kubectl commands (none of these will change the state of the cluster; full documentation is available here):

$ kubectl get pods
$ kubectl cluster-info
$ kubectl config view
$ kubectl get events
$ kubectl logs <pod-name>

At this point you should have your container running under the control of Kubernetes, but we still have to make it accessible to the outside world.

By default, the pod is only accessible by its internal IP within the cluster. In order to make the hello-node container accessible from outside the Kubernetes virtual network, you have to expose the pod as a Kubernetes service.

From Cloud Shell we can expose the pod to the public internet with the kubectl expose command combined with the --type="LoadBalancer" flag. This flag is required for the creation of an externally accessible IP:

$ kubectl expose deployment hello-node --type="LoadBalancer"
service "hello-node" exposed

The flag used in this command specifies that we'll be using the load-balancer provided by the underlying infrastructure (in this case the Compute Engine load balancer). Note that we expose the deployment, and not the pod directly. This will cause the resulting service to load balance traffic across all pods managed by the deployment (in this case only 1 pod, but we will add more replicas later).

The Kubernetes master creates the load balancer and related Compute Engine forwarding rules, target pools, and firewall rules to make the service fully accessible from outside of Google Cloud Platform.
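For reference, the service that kubectl expose creates is roughly equivalent to the following manifest. This is a sketch under the assumption that the deployment's pods carry the run: hello-node label (which kubectl run sets); Kubernetes fills in defaults such as the cluster IP:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-node
spec:
  type: LoadBalancer          # ask the cloud provider for an external IP
  selector:
    run: hello-node           # route traffic to pods carrying this label
  ports:
  - port: 8080                # port exposed by the service
    targetPort: 8080          # port the container listens on
```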

To find the publicly-accessible IP address of the service, simply request kubectl to list all the cluster services:

$ kubectl get services
NAME         CLUSTER-IP     EXTERNAL-IP      PORT(S)    AGE
hello-node   10.3.250.149   104.154.90.147   8080/TCP   1m
kubernetes   10.3.240.1     <none>           443/TCP    5m

Note there are 2 IP addresses listed for our service, both serving port 8080. One is the internal IP that is only visible inside your cloud virtual network; the other is the external load-balanced IP. In this example, the external IP address is 104.154.90.147.

You should now be able to reach the service by pointing your browser to this address: http://<EXTERNAL_IP>:8080

At this point we've gained at least several features from moving to containers and Kubernetes - we do not need to specify on which host to run our workload and we also benefit from service monitoring and restart. Let's see what else we can gain from our new Kubernetes infrastructure.

One of the powerful features offered by Kubernetes is how easy it is to scale your application. Suppose you suddenly need more capacity for your application; you can simply tell the deployment to manage a new number of replicas for your pod:

$ kubectl scale deployment hello-node --replicas=4
deployment "hello-node" scaled
$ kubectl get deployment
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   4         4         4            3           16m
$ kubectl get pods
NAME                         READY     STATUS    RESTARTS   AGE
hello-node-714049816-g4azy   1/1       Running   0          1m
hello-node-714049816-rk0u6   1/1       Running   0          1m
hello-node-714049816-sh812   1/1       Running   0          1m
hello-node-714049816-ztzrb   1/1       Running   0          16m

Note the declarative approach here: rather than starting or stopping new instances, you declare how many instances should be running at all times. Kubernetes reconciliation loops simply make sure that reality matches what you requested, and take action if needed.
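You can watch the reconciliation loop in action by deleting one of the replicas yourself; the deployment notices a pod is missing and creates a replacement. The pod name below is illustrative, so substitute one from your own kubectl get pods output:

```shell
# Delete one replica by name (use a pod name from your own cluster)
kubectl delete pod hello-node-714049816-g4azy

# List the pods again: the deleted pod is gone, and a freshly
# started replacement brings the replica count back to 4
kubectl get pods
```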

Here's a diagram summarizing the state of our Kubernetes cluster:

At some point the application that you've deployed to production will require bug fixes or additional features. Kubernetes is here to help you deploy a new version to production without impacting your users.

First, let's modify the application. From Cloud Shell, edit server.js and update the response message:

  response.end("Hello Kubernetes World!");

We can now build and publish a new container image to the registry with an incremented tag (v2 in this case):

$ docker build -t gcr.io/PROJECT_ID/hello-node:v2 . 
$ gcloud docker push gcr.io/PROJECT_ID/hello-node:v2

We're now ready for Kubernetes to smoothly update our deployment to the new version of the application. In order to change the image tag for our running container, we will need to edit the existing hello-node deployment and change the image from gcr.io/PROJECT_ID/hello-node:v1 to gcr.io/PROJECT_ID/hello-node:v2.

To do this, we will use the kubectl edit command. This will open a text editor displaying the full deployment YAML configuration. It isn't necessary to understand the full YAML config right now; instead, just understand that by updating the spec.template.spec.containers.image field in the config we are telling the deployment to update its pods to use the new image.

$ kubectl edit deployment hello-node
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2016-03-24T17:55:28Z
  generation: 3
  labels:
    run: hello-node
  name: hello-node
  namespace: default
  resourceVersion: "151017"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/hello-node
  uid: 981fe302-f1e9-11e5-9a78-42010af00005
spec:
  replicas: 4
  selector:
    matchLabels:
      run: hello-node
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: hello-node
    spec:
      containers:
      - image: gcr.io/PROJECT_ID/hello-node:v1 # Update this line
        imagePullPolicy: IfNotPresent
        name: hello-node
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30

After making the change, save and close the file (this uses vi, so press "Esc" then :wq and "Enter").

deployment "hello-node" edited

This updates the deployment with the new image, causing new pods to be created with the new image and old pods to be deleted.

$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   4         5         4            3           1h

While this is happening, the users of the service should not see any interruption. After a little while they will start accessing the new version of your application. You can find more details on rolling updates in this documentation.
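As an aside, editing the full YAML is not the only way to trigger a rolling update; kubectl also offers a one-line alternative that updates the same image field (shown here as a sketch, replacing PROJECT_ID with yours as before):

```shell
# Update the image of the hello-node container in the hello-node
# deployment, triggering the same rolling update as kubectl edit
kubectl set image deployment/hello-node \
    hello-node=gcr.io/PROJECT_ID/hello-node:v2
```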

Hopefully, with these deployment, scaling, and update features, you'll agree that once you've set up your environment (your GKE/Kubernetes cluster here), Kubernetes helps you focus on the application rather than the infrastructure.

With recent versions of Kubernetes, a graphical web user interface (dashboard) has been introduced. This user interface allows you to get started quickly and enables some of the functionality found in the CLI as a more approachable and discoverable way of interacting with the system.

To configure access to the Kubernetes cluster dashboard, type these commands from the Cloud Shell window:

$ gcloud container clusters get-credentials hello-world \
    --zone us-central1-f --project <PROJECT_ID>
$ kubectl proxy --port 8081

And then use the Cloud Shell preview feature once again to head over to port 8081:

This should send you to the API endpoint. To get to the dashboard, simply append "/ui".

Enjoy the Kubernetes graphical dashboard and use it for deploying containerized applications, as well as for monitoring and managing your clusters!

Alternatively, you can access the dashboard from a development or local machine by following the instructions shown when you press the "Connect" button for the cluster you wish to monitor in the web console.

Learn more about the Kubernetes dashboard by taking the Dashboard tour.

Time to clean up the resources we used (to save on cost and to be a good cloud citizen).

Delete the Deployment (which also deletes the running pods) and the Service (which also deletes your external load balancer):

$ kubectl delete service,deployment hello-node
service "hello-node" deleted
deployment "hello-node" deleted

Delete your cluster:

$ gcloud container clusters delete hello-world --zone=us-central1-f
The following clusters will be deleted.
 - [hello-world] in [us-central1-f]
Do you want to continue (Y/n)?  Y
Deleting cluster hello-world...done.                                                                                                                                                                                            
Deleted [https://container.googleapis.com/v1/projects/codelab-test-003/zones/us-central1-f/clusters/hello-world].

This deletes all the Google Compute Engine instances that are running the cluster.

Finally, delete the Docker registry storage bucket hosting your image(s):

$ gsutil ls
gs://artifacts.<PROJECT_ID>.appspot.com/
$ gsutil rm -r gs://artifacts.<PROJECT_ID>.appspot.com/
Removing gs://artifacts.<PROJECT_ID>.appspot.com/...

Of course, you can also delete the entire project, but you would lose any billing setup you have done (disabling project billing first is required). Additionally, deleting a project will only stop all billing after the current billing cycle ends.

This concludes this simple getting started codelab with Kubernetes. We've only scratched the surface of this technology, and we encourage you to explore further with your own pods, replication controllers, and services, and also to check out liveness probes (health checks) and consider using the Kubernetes API directly.

Here are some follow-up steps:
