Actions on Google is a developer platform that lets you create software to extend the functionality of Google Assistant, Google's virtual personal assistant, across more than 500 million devices, including smart speakers, phones, cars, TVs, headphones, and more. Users engage Assistant in conversation to get things done, like buying groceries or booking a ride. (For a complete list of what's possible, see the Assistant directory.) As a developer, you can use Actions on Google to easily create and manage delightful and effective conversational experiences between users and your third-party service.
In this codelab, you'll refine a Conversational Action so that it:
The following screenshots show an example of the conversational flow with the Action that you'll build:
The following tools must be in your environment:
Familiarity with JavaScript (ES6) is also strongly recommended, although not required, to understand the webhook code that you'll use.
You can optionally get the full project code for this codelab from the GitHub repository.
The Firebase command-line interface allows you to deploy your Actions project to Cloud Functions.
To install or upgrade the command-line interface, run the following npm command:
npm install -g firebase-tools
To verify that the command-line interface has been installed correctly, open a terminal and run the following command:
firebase --version
Make sure the version of the Firebase command-line interface is above 3.5.0 so it has all the latest features required for Cloud Functions. If it's not, run npm install -g firebase-tools to upgrade.
Authorize the Firebase command-line interface by running the following command:
firebase login
For this codelab, you'll start where the Level 2 codelab ended.
If you don't have the codelab cloned locally, run the following command to clone the GitHub repository:
git clone https://github.com/actions-on-google/codelabs-nodejs
For the sake of clarity, rename the /level2-complete directory to /level3. You can do so by using the mv command in your terminal:

$ cd codelabs-nodejs
$ mv ./level2-complete ./level3
In order to test the Action that you'll build, you need to enable the necessary permissions.
Do the following:
codelab-level-two.zip file from the /level3 directory you created earlier.

Now that your Actions project and Dialogflow agent are ready, do the following to deploy your local index.js
file using the command-line interface:
In a terminal, navigate to the /level3/functions directory of your base files clone, then run the following commands:

firebase use <PROJECT_ID>
npm install
firebase deploy
After a few minutes, you should see a message that says, "Deploy complete!" It indicates that you deployed your webhook to Firebase.
You need to provide Dialogflow with the URL to the Cloud Function. To retrieve the URL, follow these steps:
Now you need to update your Dialogflow agent to use your webhook for fulfillment. To do so, follow these steps:
At this point, users can start a conversation by explicitly invoking your Action. Your fulfillment first uses the actions_intent_PERMISSION helper intent to ask for permission and obtain the user's display name. Once users are mid-conversation, they can trigger the "favorite color" intent by providing a color. Then, they receive a lucky number with a sound effect. Lastly, they can provide a "favorite fake color" that matches the "fakeColor" custom entity and receive a basic card in response.
To test out your Action in the Actions simulator, do the following:
Before going further, take a moment to consider the first step when building any Action—writing sample dialogs.
Before you start coding or even writing conversational flows, take the time to write (and say aloud) sample interactions between your user and Action. Write the interactions for "happy paths" when the conversation is progressing as expected, but also write them for when things go wrong, such as when the user provides some unexpected input. You can take a look at this blog post for more tips about writing sample dialogs.
Write the sample dialogs at the beginning of the development lifecycle (before you start coding), then iterate on your design as you add new conversational paths to your Action.
Here's an example of how you might represent the "happy path" for this codelab as a sample dialog:
User: | "Talk to my test app." |
Action: | "Hi there. To get to know you better, I'll need to get your name from Google. Is that ok?" |
User: | "Yes." |
Action: | "Thanks, <name>. What's your favorite color?" |
User: | "It's...umm...it's green!" |
Action: | "Your lucky number is 5. Would you like to hear some fake colors?" |
User: | "Yes." |
Action: | "Which color, indigo taco, pink unicorn, or blue grey coffee?" |
User: | "Pink unicorn." |
Action: | "Here's the color." <shows image of color> |
Here's a visual representation of that flow:
As a short exercise, take a minute to think about how you might write the sample dialog to model other interactions when the user provides unexpected responses to the Action's prompts or if the user is silent.
Here's an example of such a dialog:
User: | "Talk to my test app." |
Action: | "Hi there. To get to know you better, I'll need to get your name from Google. Is that ok?" |
User: | "Nope." |
Action: | "OK, no worries. What's your favorite color?" |
User: | "It's... umm...it's a boat!" |
Action: | "Sorry, what was that?" |
User: | "I mean green." |
Action: | "Your lucky number is 5. Would you like to hear some fake colors?" |
User: | "Yes." |
Action: | "Which color, indigo taco, pink unicorn, or blue grey coffee?" |
User: | <silence> |
Action: | "Which color would you like to hear about?" |
User: | "Pink unicorn." |
Action: | "Here's the color." <shows image of color> "Would you like to hear about another fake color?" |
User: | "No." |
Action: | "Let me know when you want to talk about colors again!" |
It's often helpful for your Action to store data between conversation sessions with the same user. Your Action can ask users for their preferences and remember them for later use, which lets you personalize future conversations with that user. For example, an Action that gives users a weather report based on a zip code could ask users whether they'd like the Action to remember their zip code for later conversations.
The conv.user.storage
object is a data structure provided by the Actions on Google Node.js client library for saving data across conversations for a particular user. In this section, you'll use the feature to cheerfully greet the user by name whenever they start a new conversation with your Action.
Open your index.js file in an editor and replace all instances of conv.data with conv.user.storage.
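For the name to persist across conversations, the handler that saves it must now write to conv.user.storage. If you followed the Level 2 codelab, that happens automatically once you complete the replacement above. As a rough sketch, the actions_intent_PERMISSION handler might then look like this (your exact handler may differ, for example if it also adds suggestion chips):

// Handle the Dialogflow intent named 'actions_intent_PERMISSION'.
// Sketch only; your handler from the Level 2 codelab may differ slightly.
app.intent('actions_intent_PERMISSION', (conv, params, permissionGranted) => {
  if (!permissionGranted) {
    conv.ask(`Ok, no worries. What's your favorite color?`);
  } else {
    // Save the name in cross-conversation storage instead of conv.data.
    conv.user.storage.userName = conv.user.name.display;
    conv.ask(`Thanks, ${conv.user.storage.userName}. What's your favorite color?`);
  }
});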
Update your default welcome intent handler to use the conv.user.storage object by replacing this code:
// Handle the Dialogflow intent named 'Default Welcome Intent'.
app.intent('Default Welcome Intent', (conv) => {
// Asks the user's permission to know their name, for personalization.
conv.ask(new Permission({
context: 'Hi there, to get to know you better',
permissions: 'NAME',
}));
});
with this code:
// Handle the Dialogflow intent named 'Default Welcome Intent'.
app.intent('Default Welcome Intent', (conv) => {
const name = conv.user.storage.userName;
if (!name) {
// Asks the user's permission to know their name, for personalization.
conv.ask(new Permission({
context: 'Hi there, to get to know you better',
permissions: 'NAME',
}));
} else {
conv.ask(`Hi again, ${name}. What's your favorite color?`);
}
});
In the terminal, run the following command to deploy your updated webhook code to Firebase:
firebase deploy
To test out your Action in the Actions simulator, do the following:
At the start of the second conversation, your Action should remember your name from the first time that you granted permission.
On smart speakers and other surfaces without a screen, there may be no obvious visual indicator that the device is waiting for a user response. Users may not realize that your Action expects them to say something, so it's an important design practice to implement no-input event handling that reminds them to respond.
Open your index.js file in an editor and add the following code:
// Handle the Dialogflow NO_INPUT intent.
// Triggered when the user doesn't provide input to the Action
app.intent('actions_intent_NO_INPUT', (conv) => {
// Use the number of reprompts to vary response
const repromptCount = parseInt(conv.arguments.get('REPROMPT_COUNT'));
if (repromptCount === 0) {
conv.ask('Which color would you like to hear about?');
} else if (repromptCount === 1) {
conv.ask(`Please say the name of a color.`);
} else if (conv.arguments.get('IS_FINAL_REPROMPT')) {
conv.close(`Sorry we're having trouble. Let's ` +
`try this again later. Goodbye.`);
}
});
Notice that you took advantage of a conversation argument called REPROMPT_COUNT. Its value lets you know how many times the user has been prompted, so you can vary your message each time. In the code snippet, the maximum reprompt count is set at two, at which point the conversation ends. That's a best practice, as prompting the user more than three times can increase frustration and stall the conversation.
In the terminal, run the following command to deploy your updated webhook code to Firebase:
firebase deploy
To test your custom reprompt in the Actions simulator, follow these steps:
Your Action should respond with a custom reprompt message every time that you simulate a nonresponse instead of entering a color, eventually exiting after the third reprompt.
Your Action should allow users to quickly bow out of conversations, even if they haven't followed the conversation path all the way through. By default, Actions on Google exits the conversation and plays an earcon whenever the user utters "exit," "cancel," "stop," "nevermind," or "goodbye."
You can customize that behavior by registering for the actions_intent_CANCEL
event in Dialogflow and defining a custom response.
In this section, you'll create a new cancel intent in Dialogflow and add a suitable final response message.
actions_intent_CANCEL.
actions_intent_CANCEL.
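This codelab handles the exit entirely in Dialogflow with a static response, so no webhook change is required. If you'd rather build the farewell message in your fulfillment instead, a minimal sketch might look like the following (assuming you create a Dialogflow intent for the actions_intent_CANCEL event and enable webhook fulfillment for it; the intent name below is just an example):

// Handle the Dialogflow intent registered for the actions_intent_CANCEL event.
// Example intent name; use whatever name you gave the intent in Dialogflow.
app.intent('actions_intent_CANCEL', (conv) => {
  // The conversation must end here, so use conv.close() rather than conv.ask().
  conv.close(`Let me know when you want to talk about colors again!`);
});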
To test your custom exit prompt in the Actions simulator, follow these steps:
Your Action should respond with your custom exit prompt and end the conversation.
In this section, you'll enhance your Action by adding the ability for users to view and select a fake color option on devices with screen output.
It's important to design conversational experiences to be multi-modal, which means that users can participate via voice and text, as well as other interaction modes that their devices support (for example, touchscreen).
Always start with designing the conversation and writing sample dialogs for the voice-only experience. Then, design the multi-modal experience, which involves adding visuals as enhancements where it makes sense.
For devices with screen output, the Actions on Google platform provides several types of visual components that you can optionally integrate into your Action to provide detailed information to users.
One common use case for adding multimodal support is when users need to make a choice between several available options during the conversation.
In your conversation design, there's a decision point in the flow where the user needs to pick a fake color. You'll enhance this interaction by adding a visual component.
A good candidate for representing choices visually is the carousel. The component lets your Action present a selection of items for users to pick, where each item is easily differentiated by an image.
Make the following changes in the Dialogflow console to add the carousel.
When your favorite color - yes
follow-up intent is matched, the user is provided with the carousel, which is a visual element. As a best practice, you should check that the user's current device has a screen before presenting visual elements. You'll update your favorite color - yes
follow-up intent to perform that check.
favorite color intent and select favorite color - yes.

You'll need to update the favorite fake color intent in the Dialogflow console to handle the user's selection. To do so, follow these steps:
favorite fake color intent.
actions_intent_OPTION. Dialogflow will look for that specific event when a user selects an option from the carousel.

To implement the fulfillment in your webhook, perform the following steps.
To support the multi-modal conversation experience, you need to provide variable responses based on the surface capabilities of the device. You do that by checking the conv.screen
property in your fulfillment.
In the index.js file, update the require() function to add the Carousel and Image dependencies from the actions-on-google package so that your imports look like this:
// Import the Dialogflow module and response creation dependencies
// from the Actions on Google client library.
const {
dialogflow,
BasicCard,
Permission,
Suggestions,
Carousel,
Image,
} = require('actions-on-google');
Next, define the fakeColorCarousel()
function to build the carousel.
In the index.js file, add a fakeColorCarousel() function with the following code:
// If the user is interacting with the Action on a device with a screen,
// the fake color carousel displays a carousel of color cards.
const fakeColorCarousel = () => {
const carousel = new Carousel({
items: {
'indigo taco': {
title: 'Indigo Taco',
synonyms: ['indigo', 'taco'],
image: new Image({
url: 'https://storage.googleapis.com/material-design/publish/material_v_12/assets/0BxFyKV4eeNjDN1JRbF9ZMHZsa1k/style-color-uiapplication-palette1.png',
alt: 'Indigo Taco Color',
}),
},
'pink unicorn': {
title: 'Pink Unicorn',
synonyms: ['pink', 'unicorn'],
image: new Image({
url: 'https://storage.googleapis.com/material-design/publish/material_v_12/assets/0BxFyKV4eeNjDbFVfTXpoaEE5Vzg/style-color-uiapplication-palette2.png',
alt: 'Pink Unicorn Color',
}),
},
'blue grey coffee': {
title: 'Blue Grey Coffee',
synonyms: ['blue', 'grey', 'coffee'],
image: new Image({
url: 'https://storage.googleapis.com/material-design/publish/material_v_12/assets/0BxFyKV4eeNjDZUdpeURtaTUwLUk/style-color-colorsystem-gray-secondary-161116.png',
alt: 'Blue Grey Coffee Color',
}),
},
}});
return carousel;
};
Notice that the carousel is built using an items object, where each item has several properties, including a title, synonyms, and an image. The Image type takes the URL of the image to display, as well as alternative text for accessibility.
To identify which carousel card the user selected, use the keys of the items object: "indigo taco," "pink unicorn," or "blue grey coffee."
Next, you need to add a handler for the favorite color - yes follow-up intent to check whether the conv.screen property is true. If so, that indicates that the device has a screen. You can then send a response asking the user to select a fake color from the carousel by calling the ask() function with the carousel returned by fakeColorCarousel().
In the index.js file, add a check for a screen on the current surface by adding the following code to your fulfillment:
// Handle the Dialogflow intent named 'favorite color - yes'
app.intent('favorite color - yes', (conv) => {
conv.ask('Which color, indigo taco, pink unicorn or blue grey coffee?');
// If the user is using a screened device, display the carousel
if (conv.screen) return conv.ask(fakeColorCarousel());
});
If the surface capability check returned false, then your user is interacting with your Action on a device that doesn't have a screen. You should support as many different users as possible with your Action, so you're now going to add an alternate response that reads the color's description instead of displaying a visual element.
In the index.js file, add a screen capability check and fallback by replacing the following code:
// Handle the Dialogflow intent named 'favorite fake color'.
// The intent collects a parameter named 'fakeColor'.
app.intent('favorite fake color', (conv, {fakeColor}) => {
// Present user with the corresponding basic card and end the conversation.
conv.close(`Here's the color`, new BasicCard(colorMap[fakeColor]));
});
with this code:
// Handle the Dialogflow intent named 'favorite fake color'.
// The intent collects a parameter named 'fakeColor'.
app.intent('favorite fake color', (conv, {fakeColor}) => {
fakeColor = conv.arguments.get('OPTION') || fakeColor;
// Present the user with the corresponding basic card.
conv.ask(`Here's the color.`, new BasicCard(colorMap[fakeColor]));
if (!conv.screen) {
conv.ask(colorMap[fakeColor].text);
}
});
In the terminal, run the following command to deploy your updated webhook code to Firebase:
firebase deploy
To test your carousel response in the Actions simulator, follow these steps:
You should see your carousel response appear under the Display tab on the right.
You can either type an option in the simulator or click on one of the carousel options to receive a card with more details about that color.
You should also test your response to see how it renders on a device without the screen capability. To test your response on a voice-only surface, follow these steps:
You should get a spoken response with a description corresponding to the color that you picked.
Your Action presents users with a multiple-choice question ("Which color, indigo taco, pink unicorn or blue grey coffee?") at the end of the conversation. Users should be able to see the other options they could have picked without having to invoke your Action again and navigate through your conversation to the decision point.
In this section, you'll create prompts that let a user choose to either pick another color or gracefully end the conversation.
Here's an example sample dialog for the interaction scenario where the user wants to pick another fake color:
Action: | "Would you like to hear some fake colors?" |
User: | "Yes." |
Action: | "Which color, indigo taco, pink unicorn, or blue grey coffee?" |
User: | "I like pink unicorn." |
Action: | "Here's the color. Do you want to hear about another fake color?" |
User: | "Yes please." |
Action: | "Which color, indigo taco, pink unicorn, or blue grey coffee?" |
Here's an example in which the user declines to pick another fake color:
Action: | "Would you like to hear some fake colors?" |
User: | "Yes" |
Action: | "Which color, indigo taco, pink unicorn, or blue grey coffee?" |
User: | "I like pink unicorn." |
Action: | "Here's the color. Do you want to hear about another fake color?" |
User: | "No thanks." |
Action: | "Goodbye, see you next time!" |
Here's a visual representation of those sample dialogs:
To implement that flow, use follow-up intents that Dialogflow matches based on the user's response after a particular intent. In your Action, you'll apply follow-up intents in the following way:
When using follow-up intents, your Action needs to be aware of the conversational context. That is, it needs to understand the statements leading up to a certain point in the conversation. Unless the user changes the subject, you can assume that the thread of conversation continues. Therefore, it's likely that your Action can use a user's previous utterances to resolve ambiguities and better understand their current utterances. For example, a flower ordering Action should understand that the user query "What about a half dozen?" is a follow-up to the user's previous utterance and interpret it as "How much does a bouquet of six roses cost?"
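In Dialogflow, follow-up intents track this kind of context automatically using input and output contexts, so this codelab doesn't manage contexts directly. For illustration only, here's a hypothetical sketch of how a webhook could read and set Dialogflow contexts with the client library, using the flower-ordering example above (the intent, context, and parameter names are made up):

// Hypothetical example; not part of this codelab's fulfillment.
app.intent('order flowers - followup', (conv) => {
  // Read a context (and its parameters) set earlier in the conversation, if any.
  const order = conv.contexts.get('flower-order');
  const bouquetSize = order ? order.parameters.bouquetSize : 6;
  // Keep the context active for two more turns so later intents can use it.
  conv.contexts.set('flower-order', 2, {bouquetSize: bouquetSize});
  conv.ask(`Ok, a bouquet of ${bouquetSize} roses. Anything else?`);
});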
To follow your carousel selection with additional prompts, do the following:
Click on the favorite fake color - no intent, and do the following:
Click on Intents in the left navigation bar and click on the favorite fake color - yes intent. Then, do the following:
Next, you'll need to add a handler for the favorite fake color - yes
follow-up intent.
In the index.js file, replace the following code:
// Handle the Dialogflow intent named 'favorite color - yes'
app.intent('favorite color - yes', (conv) => {
conv.ask('Which color, indigo taco, pink unicorn or blue grey coffee?');
// If the user is using a screened device, display the carousel
if (conv.screen) return conv.ask(fakeColorCarousel());
});
with this code:
// Handle the Dialogflow follow-up intents
app.intent(['favorite color - yes', 'favorite fake color - yes'], (conv) => {
conv.ask('Which color, indigo taco, pink unicorn or blue grey coffee?');
// If the user is using a screened device, display the carousel
if (conv.screen) return conv.ask(fakeColorCarousel());
});
Lastly, you'll add suggestion chips to the favorite fake color
intent handler that trigger your two new follow-up intents.
In the index.js file, update the favorite fake color intent handler with suggestion chips by replacing the following code:
// Handle the Dialogflow intent named 'favorite fake color'.
// The intent collects a parameter named 'fakeColor'.
app.intent('favorite fake color', (conv, {fakeColor}) => {
fakeColor = conv.arguments.get('OPTION') || fakeColor;
// Present the user with the corresponding basic card.
conv.ask(`Here's the color.`, new BasicCard(colorMap[fakeColor]));
if (!conv.screen) {
conv.ask(colorMap[fakeColor].text);
}
});
with this code:
// Handle the Dialogflow intent named 'favorite fake color'.
// The intent collects a parameter named 'fakeColor'.
app.intent('favorite fake color', (conv, {fakeColor}) => {
fakeColor = conv.arguments.get('OPTION') || fakeColor;
// Present the user with the corresponding basic card.
if (!conv.screen) {
conv.ask(colorMap[fakeColor].text);
} else {
conv.ask(`Here you go.`, new BasicCard(colorMap[fakeColor]));
}
conv.ask('Do you want to hear about another fake color?');
conv.ask(new Suggestions('Yes', 'No'));
});
In the terminal, run the following command to deploy your updated webhook code to Firebase:
firebase deploy
To test your follow-up prompt in the Actions simulator, do the following:
Clicking on the Yes chip should show you the carousel again and the No chip should exit the conversation with a friendly message.
You should also test your response to see how it handles being run on a device without the screen capability. To test your response on a different surface, do the following:
Responding with "yes" should prompt you with the three colors again and responding with "no" should exit the conversation with a friendly message.
You've covered the advanced skills necessary to build conversational user interfaces with Actions on Google!
You can explore the following resources for learning about Actions on Google:
Follow @ActionsOnGoogle on Twitter to stay up to date on the latest announcements, and tweet with #AoGDevs to share what you build!
Before you go, please fill out this form.