Actions on Google is a developer platform that lets you create software to extend the functionality of Google Assistant, Google's virtual personal assistant, across more than 1 billion devices, including smart speakers, phones, cars, TVs, headphones, and more. Users engage Assistant in conversation to get things done, like buying groceries or booking a ride. (For a complete list of what's possible, see the Actions directory.) As a developer, you can use Actions on Google to easily create and manage delightful and effective conversational experiences between users and your third-party service.
In this codelab, you'll build a sophisticated Conversational Action that does the following:
The following tools must be in your environment:
In addition, familiarity with JavaScript (ES6) is strongly recommended, although not required, to understand the webhook code that you'll use.
You can optionally get the full project code from the GitHub repository.
You're going to start with the Dialogflow intents from the Level 1 codelab, but locally develop and deploy the webhook on your machine using Cloud Functions for Firebase.
In contrast to using the Dialogflow inline editor, developing on a local machine gives you more control over your programming and deployment environment. That approach provides several advantages, including:
Your webhook's fulfillment code lives in index.js, the file you'll edit throughout this codelab. To get the base files, run the following command to clone the GitHub repository for the Level 1 codelab:
git clone https://github.com/actions-on-google/codelabs-nodejs
The repository contains the following important files:
level1-complete/functions/index.js, a JavaScript file, contains your webhook's fulfillment code. It's the main file that you'll edit to add additional Actions and functionality.

level1-complete/functions/package.json outlines dependencies and other metadata for the Node.js project, but you can ignore it. You should only need to edit that file if you want to use different versions of the Actions on Google client library or other Node.js modules.

For the sake of clarity, rename the /level1-complete directory to /level2. You can do so by using the mv command in your terminal.
$ cd codelabs-nodejs
$ mv ./level1-complete ./level2
In order to test the Action that you'll build for this codelab, you need to enable the necessary permissions.
Next, you'll need to set up the Actions project and the Dialogflow agent.
Do the following:
As part of the setup, restore your Dialogflow agent from the codelab-level-one.zip file.

Now that your Actions project and Dialogflow agent are ready, do the following to deploy your local index.js file using the Firebase command-line interface (CLI).
In a terminal, navigate to the /level2/functions directory of your base files clone, then run the following commands:

firebase use <PROJECT_ID>
npm install
firebase deploy --project <PROJECT_ID>
After a few minutes, you should see a message that says, "Deploy complete," indicating that you deployed your webhook to Firebase.
You need to provide Dialogflow with the URL to the Cloud Function. To retrieve the URL, follow these steps:
Now you need to update your Dialogflow agent to use your webhook for fulfillment. To do so, follow these steps:
At this point, users can start a conversation by explicitly invoking your Action. Once users are mid-conversation, they can trigger the "favorite color" custom intent by providing a color. Dialogflow parses the user's input to extract the information your fulfillment needs (namely, the color) and sends it to your fulfillment. Your fulfillment then generates a lucky number to send back to the user.
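For reference, the Level 1 fulfillment that implements this flow looks roughly like the following sketch (the same "favorite color" handler appears later in this codelab, where you'll extend it). Your cloned index.js may differ slightly in details such as comments:

// Rough sketch of the Level 1 webhook in index.js.
const functions = require('firebase-functions');
const {dialogflow} = require('actions-on-google');

// Instantiate the Dialogflow client.
const app = dialogflow({debug: true});

// Handle the Dialogflow intent named 'favorite color'.
// The intent collects a parameter named 'color'.
app.intent('favorite color', (conv, {color}) => {
  const luckyNumber = color.length;
  // Respond with the user's lucky number and end the conversation.
  conv.close('Your lucky number is ' + luckyNumber);
});

// Set the DialogflowApp object to handle the HTTPS POST request.
exports.dialogflowFirebaseFulfillment = functions.https.onRequest(app);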
To test out your Action in the Actions simulator, do the following:
Your Actions project always has an invocation name, like "Google IO 18." When users say "Talk to Google IO 18," they trigger the Dialogflow welcome intent. Every Dialogflow agent has one welcome intent, which acts as an entry point for users to start conversations.
Most of the time, users would rather jump to the specific task they want to accomplish than start at the beginning of the conversation every time. You can provide explicit deep links and implicit invocations as shortcuts into the conversation to help users get things done more efficiently.
Adding deep links and implicit invocations to your Actions is a simple, single-step process using the Assistant integration page in the Dialogflow console.
In your Actions project, you should have defined a custom Dialogflow intent called "favorite color" in an agent. The agent parses your training phrases—such as "I love yellow" and "Purple is my favorite"—extracts the color parameter from each phrase, and makes it available to your fulfillment.
For this codelab, you're going to add the "favorite color" intent as an implicit invocation, meaning that users can invoke that intent and skip the welcome intent. Doing so also enables users to explicitly invoke the "favorite color" intent as a deep link (for example, "Hey Google, talk to my test app about blue"). The training phrases and parameters you defined for the "favorite color" intent enable Dialogflow to extract the color parameter when users invoke that deep link.
To add your intent for deep linking and implicit invocation, do the following:
Google Assistant will now listen for users to provide a color in their invocation and extract the color parameter for your fulfillment.
To test out your deep link in the Actions simulator:
It's good practice to create a custom fallback intent to handle invocation phrases that don't provide the parameters you are looking for. For example, instead of saying a color, the user might say something unexpected like, "Talk to my test app about bananas." The term "bananas" would not fit into any of your Dialogflow intents, so you need to build a catch-all intent.
Given that Assistant now listens for any phrases that match the "favorite color" intent, you should provide a custom fallback intent specifically for catching anything else.
To set up your custom fallback intent, do the following:
Use the @sys.any entity to tell Dialogflow to generalize the expression to any grammar (not just "banana"). Double-click "banana" and filter for or select @sys.any. Dialogflow may warn you about using the @sys.any entity; you can safely ignore that warning for now. Click OK. (Generally, it's not advisable to use the @sys.any entity, because it can overpower any other intent's speech biasing, but this is a special case where you ensure that it'll only be triggered at invocation time when other intents have not been matched.)

To test your custom fallback intent in the Actions simulator, type "Talk to my test app about banana" in the Input field and hit Enter.
You can make your Actions more engaging and interactive by using personalized information from the user. To request access to user information, your webhook can use helper intents to obtain values with which to personalize your responses.
You can use the actions_intent_PERMISSION helper intent to obtain the user's display name with permission. To use the permission helper intent, do the following:
Navigate to the /level2/functions folder and open the index.js file in any text editor on your local machine. Replace the following line:

const {dialogflow} = require('actions-on-google');
with this code:
// Import the Dialogflow module and response creation dependencies from the
// Actions on Google client library.
const {
dialogflow,
Permission,
Suggestions,
} = require('actions-on-google');
Notice that you're importing the Suggestions dependency, which lets your webhook include suggestion chips in your responses.
In index.js, find the following line:

exports.dialogflowFirebaseFulfillment = functions.https.onRequest(app);

Add the following welcome intent handler above that line:
// Handle the Dialogflow intent named 'Default Welcome Intent'.
app.intent('Default Welcome Intent', (conv) => {
conv.ask(new Permission({
context: 'Hi there, to get to know you better',
permissions: 'NAME'
}));
});
Next, you need to update your webhook to handle the response. Use the user's information in your response if they granted permission and gracefully move the conversation forward regardless of whether permission was granted.
To respond to the user, do the following:
Add the following code to index.js:

// Handle the Dialogflow intent named 'actions_intent_PERMISSION'. If user
// agreed to PERMISSION prompt, then boolean value 'permissionGranted' is true.
app.intent('actions_intent_PERMISSION', (conv, params, permissionGranted) => {
if (!permissionGranted) {
conv.ask(`Ok, no worries. What's your favorite color?`);
conv.ask(new Suggestions('Blue', 'Red', 'Green'));
} else {
conv.data.userName = conv.user.name.display;
conv.ask(`Thanks, ${conv.data.userName}. What's your favorite color?`);
conv.ask(new Suggestions('Blue', 'Red', 'Green'));
}
});
You register a callback function to handle the actions_intent_PERMISSION intent you created earlier. In the callback, you first check whether the user granted permission to know their display name. The client library passes the result of the permission prompt to the callback function as the third parameter, here called permissionGranted.
The conv.user.name.display value represents the user's display name, sent to your webhook as part of the HTTP request body. If the user grants permission, you store the value of conv.user.name.display in a property called userName on the conv.data object.
To provide additional hints to the user on how to continue the conversation, you use the Suggestions class to create suggestion chips that recommend some example colors. If the user is on a device with a screen, they can provide input by tapping a chip rather than by saying or typing a response.
The conv.data object is a data structure provided by the client library for in-dialog storage. You can set and manipulate properties on that object throughout the duration of the conversation.
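As an aside, the following sketch (with hypothetical intent names, reusing the same app object) illustrates the pattern: a value written to conv.data in one turn can be read back in a later turn of the same conversation, and it is discarded when the conversation ends:

// Illustrative sketch only; 'remember snack' and 'recall snack' are
// hypothetical intents, not part of this codelab's agent.
app.intent('remember snack', (conv, {snack}) => {
  // Store the value for the remainder of this conversation.
  conv.data.favoriteSnack = snack;
  conv.ask('Got it. What else would you like to do?');
});

app.intent('recall snack', (conv) => {
  // Read back the value stored in an earlier turn, if it exists.
  const snack = conv.data.favoriteSnack;
  conv.close(snack ? `You said you like ${snack}.` : `You haven't told me a snack yet.`);
});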
Next, replace the following code in index.js:

// Handle the Dialogflow intent named 'favorite color'.
// The intent collects a parameter named 'color'.
app.intent('favorite color', (conv, {color}) => {
const luckyNumber = color.length;
// Respond with the user's lucky number and end the conversation.
conv.close('Your lucky number is ' + luckyNumber);
});
with this:
// Handle the Dialogflow intent named 'favorite color'.
// The intent collects a parameter named 'color'
app.intent('favorite color', (conv, {color}) => {
const luckyNumber = color.length;
if (conv.data.userName) {
conv.close(`${conv.data.userName}, your lucky number is ${luckyNumber}.`);
} else {
conv.close(`Your lucky number is ${luckyNumber}.`);
}
});
Here you modify the callback function for the "favorite color" intent to use the userName property to address the user by name. If the conv.data object doesn't have a property called userName (that is, the user previously denied permission to know their name, so the property was never set), then your webhook still responds, but without the user's name.
Run the following command again to redeploy your updated webhook to Firebase:

firebase deploy --project <PROJECT_ID>
To test your Action in the Actions simulator, do the following:
You can embed Speech Synthesis Markup Language (SSML) in your response strings to alter the sound of your spoken responses or even embed sound effects and other audio clips.
The following shows an example of SSML markup:
<speak>
Mandy, your lucky number is 5.
<audio src="https://actions.google.com/sounds/v1/cartoon/clang_and_wobble.ogg"></audio>
</speak>
You'll use a sound clip from the Actions on Google sound library.
To add a sound effect to the "favorite color" response, do the following:
Open index.js in an editor on your local machine and replace the following code:

// Handle the Dialogflow intent named 'favorite color'.
// The intent collects a parameter named 'color'
app.intent('favorite color', (conv, {color}) => {
const luckyNumber = color.length;
if (conv.data.userName) {
conv.close(`${conv.data.userName}, your lucky number is ${luckyNumber}.`);
} else {
conv.close(`Your lucky number is ${luckyNumber}.`);
}
});
with this code:
// Handle the Dialogflow intent named 'favorite color'.
// The intent collects a parameter named 'color'
app.intent('favorite color', (conv, {color}) => {
const luckyNumber = color.length;
const audioSound = 'https://actions.google.com/sounds/v1/cartoon/clang_and_wobble.ogg';
if (conv.data.userName) {
// If we collected user name previously, address them by name and use SSML
// to embed an audio snippet in the response.
conv.close(`<speak>${conv.data.userName}, your lucky number is ` +
`${luckyNumber}.<audio src="${audioSound}"></audio></speak>`);
} else {
conv.close(`<speak>Your lucky number is ${luckyNumber}.` +
`<audio src="${audioSound}"></audio></speak>`);
}
});
Here, you declare an audioSound variable containing the string URL for a statically hosted audio file on the web. You use <speak> SSML tags around the strings for the user response, indicating to Assistant that your response should be parsed as SSML.
The <audio> tag embedded in the string indicates that you want Assistant to play audio at that point in the response. The src attribute of that tag indicates where the audio is hosted.
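As a side note, the <audio> element can also contain fallback text, which Assistant speaks if the audio file can't be fetched or played. The following variation of the earlier markup is illustrative only:

<speak>
  Your lucky number is 5.
  <audio src="https://actions.google.com/sounds/v1/cartoon/clang_and_wobble.ogg">
    (a cheerful clang sound)
  </audio>
</speak>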
Redeploy your updated webhook to Firebase by running the following command:

firebase deploy --project <PROJECT_ID>
To test your Action in the Actions simulator, do the following:
If everything works correctly, then the user should hear this sound effect in the response.
To keep the conversation going, you can add follow-up intents that will trigger based on the user's response after a particular intent. To add follow-up intents to "favorite color," do the following:
As you expand your conversational app, you can use custom entities to deepen and personalize the conversation.
So far, you've only been using built-in entities (@sys.color, @sys.any) to match user input. You're going to create a custom entity (also called a developer entity) in Dialogflow so that when a user provides one of a few fake colors, you can follow up with a custom response from your webhook.
To create a custom entity:
You should see the "fakeColor" parameter show up under Actions and parameters now that Dialogflow recognizes your custom entity.
When a user selects one of the fake colors that you defined, your webhook will respond with basic cards that show each color.
To configure your webhook, do the following:
Navigate to the /level2/functions folder and open index.js in an editor. Replace the following code:

// Import the Dialogflow module and response creation dependencies from the
// Actions on Google client library.
const {
dialogflow,
Permission,
Suggestions,
} = require('actions-on-google');
with this code:
// Import the Dialogflow module and response creation dependencies from the
// Actions on Google client library.
const {
dialogflow,
Permission,
Suggestions,
BasicCard,
} = require('actions-on-google');
Next, replace the following code in index.js:

// Handle the Dialogflow intent named 'favorite color'.
// The intent collects a parameter named 'color'
app.intent('favorite color', (conv, {color}) => {
const luckyNumber = color.length;
const audioSound = 'https://actions.google.com/sounds/v1/cartoon/clang_and_wobble.ogg';
if (conv.data.userName) {
// If we collected user name previously, address them by name and use SSML
// to embed an audio snippet in the response.
conv.close(`<speak>${conv.data.userName}, your lucky number is ` +
`${luckyNumber}.<audio src="${audioSound}"></audio></speak>`);
} else {
conv.close(`<speak>Your lucky number is ${luckyNumber}.` +
`<audio src="${audioSound}"></audio></speak>`);
}
});
with this code:
// Handle the Dialogflow intent named 'favorite color'.
// The intent collects a parameter named 'color'.
app.intent('favorite color', (conv, {color}) => {
const luckyNumber = color.length;
const audioSound = 'https://actions.google.com/sounds/v1/cartoon/clang_and_wobble.ogg';
if (conv.data.userName) {
// If we collected user name previously, address them by name and use SSML
// to embed an audio snippet in the response.
conv.ask(`<speak>${conv.data.userName}, your lucky number is ` +
`${luckyNumber}.<audio src="${audioSound}"></audio> ` +
`Would you like to hear some fake colors?</speak>`);
conv.ask(new Suggestions('Yes', 'No'));
} else {
conv.ask(`<speak>Your lucky number is ${luckyNumber}.` +
`<audio src="${audioSound}"></audio> ` +
`Would you like to hear some fake colors?</speak>`);
conv.ask(new Suggestions('Yes', 'No'));
}
});
In index.js, find the following line again:

exports.dialogflowFirebaseFulfillment = functions.https.onRequest(app);

Add the following code above that line:
// Define a mapping of fake color strings to basic card objects.
const colorMap = {
'indigo taco': {
title: 'Indigo Taco',
text: 'Indigo Taco is a subtle bluish tone.',
image: {
url: 'https://storage.googleapis.com/material-design/publish/material_v_12/assets/0BxFyKV4eeNjDN1JRbF9ZMHZsa1k/style-color-uiapplication-palette1.png',
accessibilityText: 'Indigo Taco Color',
},
display: 'WHITE',
},
'pink unicorn': {
title: 'Pink Unicorn',
text: 'Pink Unicorn is an imaginative reddish hue.',
image: {
url: 'https://storage.googleapis.com/material-design/publish/material_v_12/assets/0BxFyKV4eeNjDbFVfTXpoaEE5Vzg/style-color-uiapplication-palette2.png',
accessibilityText: 'Pink Unicorn Color',
},
display: 'WHITE',
},
'blue grey coffee': {
title: 'Blue Grey Coffee',
text: 'Calling out to rainy days, Blue Grey Coffee brings to mind your favorite coffee shop.',
image: {
url: 'https://storage.googleapis.com/material-design/publish/material_v_12/assets/0BxFyKV4eeNjDZUdpeURtaTUwLUk/style-color-colorsystem-gray-secondary-161116.png',
accessibilityText: 'Blue Grey Coffee Color',
},
display: 'WHITE',
},
};
// Handle the Dialogflow intent named 'favorite fake color'.
// The intent collects a parameter named 'fakeColor'.
app.intent('favorite fake color', (conv, {fakeColor}) => {
// Present user with the corresponding basic card and end the conversation.
conv.close(`Here's the color`, new BasicCard(colorMap[fakeColor]));
});
This new code performs two main tasks:
First, it sets up a mapping (colorMap) of color strings (for example, "indigo taco," "pink unicorn," and "blue grey coffee") to the content needed for BasicCard objects. BasicCard is a client library class for constructing visual responses corresponding to the basic card type.
In the constructor calls, you pass configuration options relevant to each specific color, including the card's title, its descriptive text, an image (a URL plus accessibility text), and a display setting for the image.
Finally, you set a callback function for the "favorite fake color" intent, which uses the fakeColor parameter that the user selected to create a card corresponding to that fake color and present it to the user.
Save your changes, then redeploy your webhook to Firebase:

firebase deploy --project <PROJECT_ID>
To test out your Action in the Actions simulator, do the following:
When you select a fake color, you should receive a response that includes a basic card, similar to the following screenshot:
You now possess the intermediate skills necessary to build conversational user interfaces with Actions on Google.
Explore the following resources for learning about Actions on Google:
Follow @ActionsOnGoogle on Twitter to keep up with the latest announcements, and tweet with #AoGDevs to share what you build!
Before you go, please fill out this form.