Istio is an open source framework for connecting, securing, and managing microservices, including services running on Google Kubernetes Engine (GKE). It lets you create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, without requiring any changes in service code.

You add Istio support to services by deploying a special sidecar, running the Envoy proxy, to each of your application's pods. The proxy intercepts all network communication between microservices and is configured and managed using Istio's control plane functionality.

This codelab shows you how to install and configure Istio on Kubernetes Engine, deploy an Istio-enabled multi-service application, and dynamically change request routing.

Codelab-at-a-conference setup

If you see a "Request account" button at the top of the main Codelabs window, click it to obtain a temporary account. Otherwise, ask one of the staff for a coupon with a username and password.

These temporary accounts have existing projects that are set up with billing, so there are no costs for you to run this codelab.

Note that all these accounts will be disabled soon after the codelab is over.

Use these credentials to log into the machine or to open a new Google Cloud Console window. Accept the new account Terms of Service and any updates to the Terms of Service.

Here's what you should see once logged in:

When presented with this console landing page, please select the only project available. Alternatively, from the console home page, click "Select a project":

Google Cloud Shell

While Google Cloud and Kubernetes can be operated remotely from your laptop, in this codelab we will be using Google Cloud Shell, a command line environment running in the Cloud.

This Debian-based virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory and runs in Google Cloud, greatly enhancing network performance and authentication. This means that all you will need for this codelab is a browser (yes, it works on a Chromebook).

  1. To activate Cloud Shell from the Cloud Console, simply click Activate Cloud Shell (it should only take a few moments to provision and connect to the environment).


Once connected to Cloud Shell, you should see that you are already authenticated and that the project is already set to your PROJECT_ID.

gcloud auth list

Command output

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)
gcloud config list project

Command output

project = <PROJECT_ID>

If, for some reason, the project is not set, simply issue the following command:

gcloud config set project <PROJECT_ID>

Looking for your PROJECT_ID? Check what ID you used in the setup steps, or look it up in the Cloud Console dashboard.

Cloud Shell also sets some environment variables by default, which may be useful as you run future commands. For example, GOOGLE_CLOUD_PROJECT holds the ID of the active project:

echo $GOOGLE_CLOUD_PROJECT

Command output

<PROJECT_ID>

  1. Finally, set the default compute zone:
gcloud config set compute/zone us-central1-f

You can choose a variety of different zones. For more information, see Regions & Zones.

You need to make sure that you have the Kubernetes Engine API enabled:

gcloud services enable container.googleapis.com

In this example we will install the latest version of Kubernetes and of Istio. At the time of last update, those were 1.18 and 1.7 respectively.

Create a Kubernetes cluster:

gcloud container clusters create hello-istio \
    --cluster-version=latest \
    --machine-type=e2-standard-2

Wait a few moments while your cluster is set up for you. Any warnings are safe to ignore. The cluster will then be visible in the Kubernetes Engine section of the Google Cloud Platform console.

Once the cluster is created, click the "Connect" button, copy the command shown, and run it in Cloud Shell. This ensures that kubectl is set up to access the cluster.

For this codelab, we will download and install Istio from the Istio release page. There are other installation options, including the Istio add-on for GKE and Anthos Service Mesh. The application steps after this one will work on any Istio installation.

Let's first download the Istio client and samples. The Istio release page offers download artifacts for several OSs. In our case, we can use a convenient command to download and extract the latest release for our current platform:

curl -L https://istio.io/downloadIstio | sh -

The script will tell you the version of Istio that has been downloaded:

Istio has been successfully downloaded into the istio-1.7.2 folder on your system.

The installation directory contains sample applications and the istioctl client binary. Change to that directory:

cd istio-1.7.2

Add the bin directory to your PATH so you can use istioctl (run this from inside the istio-1.7.2 directory):

export PATH="$PATH:$PWD/bin"

Verify that istioctl is available and that your cluster is ready for Istio:

istioctl x precheck

You should see a message saying Install Pre-Check passed! The cluster is ready for Istio installation.

Now install Istio using the demo configuration profile:

istioctl install --set profile=demo

Istio is now installed in your cluster. There is now an istio-system namespace with three components deployed:

Istio comes with three services: the istiod control plane, plus ingress and egress gateways (which you can think of as "sidecar proxies for the rest of the Internet"), named istio-ingressgateway and istio-egressgateway respectively.

kubectl get svc -n istio-system

Your output should look like this:

NAME                   TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)
istio-egressgateway    ClusterIP      <IP>         <none>          80/TCP,443/TCP,15443/TCP
istio-ingressgateway   LoadBalancer   <IP>         <external IP>   15021:31414/TCP,80:31462/TCP,443:30644/TCP,31400:30668/TCP,15443:31719/TCP
istiod                 ClusterIP      <IP>         <none>          15010/TCP,15012/TCP,443/TCP,15014/TCP,853/TCP

The Ingress Gateway has a type of LoadBalancer so it is accessible from the Internet; the others only need to be accessible from within the cluster.

Next, make sure that the corresponding Kubernetes pods are deployed and all containers are up and running:

kubectl get pods -n istio-system

When all the pods are running, you can proceed.

NAME                                    READY   STATUS 
istio-egressgateway-7ff598b98f-c486t    1/1     Running
istio-ingressgateway-5b95ff97b6-sqntl   1/1     Running
istiod-96bf8ddff-px2gc                  1/1     Running

Now that Istio is installed and verified, you can deploy one of the sample applications provided with the installation: BookInfo.

The installation directory contains sample applications in samples/. You will find the source code and all the other files used in this example in your Istio samples/bookinfo directory.

This is a simple mock bookstore application made up of four microservices - all managed using Istio. Each microservice is written in a different language, to demonstrate how you can use Istio in a multi-language environment, without any changes to code.

The microservices are:

  - productpage: calls the details and reviews microservices to populate the page.
  - details: contains book information.
  - reviews: contains book reviews, and calls the ratings microservice.
  - ratings: contains book ranking information that accompanies a book review.

There are 3 versions of the reviews microservice:

  - v1 doesn't call the ratings service.
  - v2 calls the ratings service and displays each rating as 1 to 5 black stars.
  - v3 calls the ratings service and displays each rating as 1 to 5 red stars.

End to end, productpage calls details and reviews, and reviews (v2 and v3) in turn calls ratings.

First, have a look at the YAML which describes the bookinfo application:

less samples/bookinfo/platform/kube/bookinfo.yaml

Note how there are standard Deployments and Services to deploy the Bookinfo application and nothing Istio-specific here at all. To start making use of Istio functionality, no application changes are needed. When we configure and run the services, Envoy sidecars will be automatically injected into each pod for the service.

For that to work, we need to enable sidecar injection for the namespace ('default') that we will use for our microservices. We do that by applying a label:

kubectl label namespace default istio-injection=enabled

You can verify that the label was successfully applied:

kubectl get namespace -L istio-injection

NAME              STATUS    AGE       ISTIO-INJECTION
default           Active    34m       enabled
istio-system      Active    32m       disabled
kube-node-lease   Active    34m
kube-public       Active    34m
kube-system       Active    34m

Now we can simply deploy the services to the default namespace with kubectl:

kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

Look at one of the pods. You will see that it now contains a second container, the Istio sidecar, along with all of the necessary configuration:

kubectl get pod

NAME                              READY     STATUS    RESTARTS   AGE
details-v1-64b86cd49-jqq4g        2/2       Running   0          46s
productpage-v1-84f77f8747-6vg6l   0/2       Pending   0          45s
ratings-v1-5f46655b57-h4zfw       2/2       Running   0          46s
reviews-v1-ff6bdb95b-hqm89        2/2       Running   0          46s
reviews-v2-5799558d68-6wsz6       0/2       Pending   0          45s
reviews-v3-58ff7d665b-rjpbn       0/2       Pending   0          45s

kubectl describe pod details-v1-64b86cd49-jqq4g

To allow 'ingress' traffic to reach the mesh we need to create a Gateway object (to configure our ingress gateway) and a VirtualService (which controls the forwarding of traffic from the gateway to our services). You can read more about gateways in the Istio documentation. To create a gateway:

kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml

Finally, confirm that the application has been deployed correctly by running the following commands:

kubectl get services
kubectl get pods

When all the pods have been created, you should see five services and six pods:

NAME          CLUSTER-IP   EXTERNAL-IP   PORT(S)
details       <IP>         <none>        9080/TCP
kubernetes    <IP>         <none>        443/TCP
productpage   <IP>         <none>        9080/TCP
ratings       <IP>         <none>        9080/TCP
reviews       <IP>         <none>        9080/TCP

NAME                              READY     STATUS    RESTARTS 
details-v1-1520924117-48z17       2/2       Running   0        
productpage-v1-560495357-jk1lz    2/2       Running   0        
ratings-v1-734492171-rnr5l        2/2       Running   0        
reviews-v1-874083890-f0qf0        2/2       Running   0        
reviews-v2-1343845940-b34q5       2/2       Running   0        
reviews-v3-1813607990-8ch52       2/2       Running   0        

Congratulations: you have deployed an Istio-enabled application. Next, let's see the application in use.

Now that it's deployed, let's see the BookInfo application in action. First, you need to get the external IP of the gateway:

kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)
istio-ingressgateway   LoadBalancer   <IP>         <external IP>   15021:31414/TCP,80:31462/TCP,443:30644/TCP,31400:30668/TCP,15443:31719/TCP

Copy the EXTERNAL-IP value and use it to set the GATEWAY_URL environment variable.

export GATEWAY_URL=<your gateway IP>
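If you'd rather not copy the IP by hand, a jsonpath query can pull it out of the Service status. This is a sketch, assuming the LoadBalancer IP has already been assigned:

```shell
# Read the external IP of the istio-ingressgateway Service from its status.
gateway_ip() {
  kubectl get svc istio-ingressgateway -n istio-system \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
}
# Usage: export GATEWAY_URL="$(gateway_ip)"
```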

Once you have the gateway address, check that the BookInfo app is running by using curl:

curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage

Check that you get an HTTP 200 response.

You can now point your browser to http://<your gateway IP>/productpage to view the BookInfo web page.

Refresh the page several times. Notice how the product page shows three different versions of the reviews? There are three different book review services, called in a round-robin style: showing black stars, red stars, or no stars at all. This is the normal Kubernetes load-balancing behavior.

We can use Istio to do something different — to control which users are routed to which version of the services.

The BookInfo sample deploys three versions of the reviews microservice. When you accessed the application several times, you will have noticed that the output sometimes contains star ratings and sometimes it does not. This is because without an explicit default version set, Istio will route requests to all available versions of a service, in a round-robin fashion.
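The round-robin behavior can be pictured with a toy shell function. This is an illustration only, not how Envoy actually implements load balancing:

```shell
# Pick the reviews subset for request number $1, cycling through v1, v2, v3.
round_robin() {
  local n=$1
  set -- v1 v2 v3
  shift $(( n % 3 ))
  echo "$1"
}

# Six successive requests cycle twice through the three versions.
for n in 0 1 2 3 4 5; do
  echo "request $n -> reviews-$(round_robin "$n")"
done
```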

Routing rules control how requests are routed within an Istio service mesh. Requests can be routed based on the source and destination, HTTP paths and header fields, and weights associated with individual service versions.

Before you can use Istio to control the Bookinfo version routing, you need to define the available versions, called subsets, in destination rules. Run the following command to create default destination rules for the Bookinfo services:

kubectl apply -f samples/bookinfo/networking/destination-rule-all-mtls.yaml

destinationrule.networking.istio.io/productpage created
destinationrule.networking.istio.io/reviews created
destinationrule.networking.istio.io/ratings created
destinationrule.networking.istio.io/details created

Static routing

First, let's add rules to make traffic go to v1 of each service.

Verify that you don't have any routes for the services yet apart from the one that allows the gateway to route to the top-level productpage service:

kubectl get virtualservices

NAME       GATEWAYS             HOSTS
bookinfo   [bookinfo-gateway]   [*]  

We will create a VirtualService for each microservice. A VirtualService defines the rules that control how requests for the service are routed. Each rule corresponds to one or more request destination hosts. In our case we are routing to other services within our mesh so we can use the internal mesh name (e.g. reviews) as the host.

Here's how a rule can route all traffic for a reviews virtual service to Pods running v1 of that service, as identified by Kubernetes labels.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1

The rule refers to a subset called v1, which is defined for the underlying reviews service instances as part of a DestinationRule:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1

As can be seen above, a subset specifies one or more labels that identify version-specific instances. Because the VirtualService above specifies the subset called v1, traffic will only be sent to Pods carrying the label version: v1.
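One way to see those version labels on the running Pods is a label-filtered kubectl query, wrapped in a small helper here (a sketch; app=reviews is the app label the Bookinfo deployments use):

```shell
# List the reviews Pods together with their labels, so you can see the
# version: v1/v2/v3 labels that the subsets select on.
show_review_labels() {
  kubectl get pods -l app=reviews --show-labels
}
# Usage: show_review_labels
```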

Bookinfo includes a sample with rules for all four services. Let's install it:

kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml

Note that we used the mtls version of the destination rules file because we are running Istio with mutual TLS enabled. The file includes traffic policies so that service-to-service communication between the Envoy sidecars is encrypted. This all happens without changes to application code.

Confirm that four new VirtualServices were created; together with the gateway's VirtualService, there should be five in total. You can add -o yaml to view the actual configuration.

kubectl get virtualservices

Similarly, you can check the corresponding DestinationRules and their subset definitions:

kubectl get destinationrules

Go back to the Bookinfo application (http://$GATEWAY_URL/productpage) in your browser. Refresh a few times. Do you see any stars? You should see the book review with no rating stars, as reviews:v1 does not access the ratings service.

Dynamic routing

As the mesh operates at Layer 7, we can use HTTP attributes (paths or cookies) to decide on how to route a request.

Istio doesn't have any special, built-in understanding of user identity. Our productpage service adds a custom end-user header to all outbound HTTP requests to the reviews service.

We can route certain users to a subset or service by matching a header like this:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1

Create the route:

kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml

View it in the list, or add -o yaml to see the full output.

kubectl get virtualservices reviews

We now have a way to route some requests to use the reviews:v2 service. Can you guess how? (Hint: no passwords are needed on this site) See how the page behaviour changes if you are logged in as no-one, 'jason', or 'kylie'.

Once the v2 version has been canary tested to our satisfaction by jason (or in a real example, a subset of your users), we can use Istio to progressively send more and more traffic to our new service.

Let's try that by sending 50% of the traffic to v3 by using weight based version routing. v3 of the service shows red stars. Replace the reviews route:

kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-50-v3.yaml

Confirm that the route was replaced:

kubectl get virtualservice reviews -o yaml

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v3
      weight: 50

The implementation of the routing in the Envoy proxy sidecar means that you may need to refresh your browser many times before seeing the results. With significant traffic there will be a 50% split. Send some extra traffic to the service like this:

watch -n 0.2 curl -o /dev/null -s -w "%{http_code}" http://$GATEWAY_URL/productpage

Now refresh the productpage in your browser and you should see red colored star ratings about 50% of the time.
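To measure the split from the command line rather than by eye, you could count how many of N responses contain star markup. This is a sketch: it assumes GATEWAY_URL is set, and it assumes the starred review versions render a "glyphicon-star" class in the HTML:

```shell
# Fetch /productpage $1 times and count responses containing star markup.
count_starred() {
  local n=$1 starred=0 i=0
  while [ "$i" -lt "$n" ]; do
    if curl -s "http://${GATEWAY_URL}/productpage" | grep -q "glyphicon-star"; then
      starred=$((starred + 1))
    fi
    i=$((i + 1))
  done
  echo "$starred"
}
# Usage: count_starred 20   # roughly half with a 50/50 v1/v3 split
```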

In a normal canary rollout you would want to use much smaller increments and then increase the amount of traffic gradually by progressively increasing the weighting for v3.

Now let's send 100% of the traffic to v3:

kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-v3.yaml

Now when you refresh your browser you should see the red stars 100% of the time.

Congratulations; you've reached the end of the Istio 'Hello World'.

The Istio site contains guides and samples with fully working examples for Istio that you can experiment with.