Who am I? Alexa introduces Voice Profiles


Privacy, even within the privacy of your own home, is a concern for users of the Amazon Echo and of any other voice assistant, especially since skills that sync it with your personal accounts became available. I am okay with my spouse checking my calendar, but I would not be so happy to mistake her appointments for mine! Voice assistants also bring out an age-old problem that we techies detect very well, but others not so much: that of cardinality. An example: when you have one Echo (or multiple, linked Echos acting as one, if your home is bigger than mine!) but you don’t live alone, then it’s quite likely that more than one human will speak to Alexa. Why does this one-to-many relationship between humans and machines represent a cardinality problem?

Let’s continue with the example. At home, my spouse and I use our Amazon Echo. We’re both non-English speakers and have distinct accents in English (we learnt the language on different continents). Our Echo sometimes struggles to understand one or the other of us. The Machine Learning element of Alexa must be very confused about supposedly the same human saying the same thing in such different ways at random moments in time! I bet Alexa would be happier if we could let her know that we’re two humans, if we could teach her to tell us apart, and then teach her to understand us better one by one.

If, on top of having different voices and different accents, you wish to use information from individual services (personal calendars, mail accounts…), then you need to be able to somehow link those individual services with your Echo devices – again, a cardinality problem. Which one will Alexa use? Mine or my spouse’s? Why does it have to be only one? Can’t it be both?

Luckily, Amazon has just launched Voice Profiles to achieve this. You configure your Echo devices to pair with as many humans as needed, through the Alexa app on your Smartphone. Here’s how:

  • The person whose Amazon account is linked with the Echo device must launch the Alexa app on their Smartphone, visit Settings -> Accounts -> Voice, and follow the instructions.
  • The second adult in the household must do the following:
  1. When both of you are at home, launch the Alexa app on the primary user’s Smartphone.
  2. Go to Settings -> Accounts -> Household profile, and follow the instructions to set up this new user.
  3. With either of your Smartphones, log on to the Alexa app with the credentials of the second adult in the household.
  4. Follow the instructions below.
  • Anyone other than the primary account holder must do the following:
  1. Install the Alexa app on your Smartphone if you haven’t done so.
  2. Log in with your Amazon account (or create one if you’re not the second adult in the household).
  3. Provide the info that’s required to pair up with the Echo device.
  4. (You can skip Alexa calling and messaging if you don’t want to use that with your Echo.)
  5. Go to Settings -> Accounts -> Voice, and follow the instructions.

Here are the full instructions.

Creating the Interaction Model

As we said in the previous post, a Skill has two distinct parts: the Interaction Model and what I call “the functionality”. In this post we will describe the elements of the Interaction Model, the rationale behind them and behind the split, and the shortcomings or limitations of the model adopted by Amazon.

So, quoting what we said already:

The Interaction Model is everything related to speech. It’s where you specify the Invocation name, the slots that your Skill can understand, and, very importantly, examples of whole sentences that your Skill can process. These sentences are called “Sample Utterances” and you will spend many hours perfecting them. There’s also something called the “Intent Schema” and it’s very, very important, because it defines the different tasks that Alexa will be asking of “the functionality”, based on what the user has asked Alexa to do. It’s where you define the hooks between the two parts of the Skill.

We mentioned four elements:

  • Invocation name
  • Intent Schema
  • Slots
  • Sample Utterances

Let’s start from the beginning!

Invocation name

We saw the other day that this is not the name of the Skill, but you will probably decide that they are identical. The Invocation name is made of the words that you pronounce so that Alexa can figure out which Skill you want to use. You will always use it in conjunction with the wake word (Alexa! Echo! or Amazon! at the time of writing) and some verb: start, ask, etc. So, when you’re deciding on an invocation name, it’s worth trying it out. Just imagine how the Skill will be used:

  • “Alexa, start <<name of my skill>>”
  • “Alexa, ask <<name of my skill>> to…”

Make sure that the sentences above are easy to remember, easy to say, and easy for Alexa to recognize. My two golden rules would be:

  1. Make sure that the entire sentence is semantically and syntactically correct (i.e. makes sense). E.g. if you’re going to invoke your skill in the first way (“start”), it’s best that your Skill name represents a thing (saying “start the car” sounds okay, saying “start the driver” sounds really weird). If it’s going to be the second way (“ask… to…”), then you probably want the Skill to represent a profession or a person who carries out a task (e.g. Wine Helper, Dream Catcher, things like that). Also avoid falling into language ambiguity. More on this later when we talk about Utterances and the limitations of the model adopted by Amazon.
  2. Make sure that it’s easy to pronounce: you don’t want to end up with a tongue twister, or drive those with a particular accent crazy. Using the word “think” is probably a very bad idea. (There’s a quick trial run just below.)
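
Putting both rules together with the sentence patterns above, here is what I would say out loud before committing to a name for a hypothetical “Wine Helper” Skill (just an illustration, not a real Skill):

  • “Alexa, ask Wine Helper to suggest a wine for paella”
  • “Alexa, ask Wine Helper what goes well with oysters”

If either sentence feels clumsy to say, or Alexa keeps mishearing it, pick another name before you write a single Sample Utterance.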

Intent Schema

This is the most technical part of the Interaction Model because it’s the boundary between “the functionality” and speech. You could say it’s the “contract” between these two parts of the Skill. Once it’s defined, the voice interaction designer and the developer can part ways and do their thing. When they finish, if both have complied with the Intent Schema, everything will integrate nicely.

Without getting into code, and staying at the conceptual level, this is what happens.

The developer comes up with a list of different situations where the functionality will receive instructions from the user. Let’s call these “Intents”. Examples are: start, help, play, ask, quit, etc.

Some of those “Intents” will require a bit more info from the user. “Start” doesn’t require more information, but what about “Ask”? “Ask” what? So this must also be specified. This “what” is known as a “Slot”, and Slots are defined in their own section of the Interaction Model. They are used here in the Intent Schema, though, hence this little explanation.

So, the content of an Intent Schema will be something like this:

  • Start
  • Stop
  • Play
  • Quit
  • Ask “the time”
  • Ask “the weather” “location”
  • Ask “the weather” “location” “date”

The Intent Schema is written in JSON, a simple notation for writing down structured information. To learn more about it, w3schools has a good tutorial. But the whole point of using JSON is that it’s “user friendly”, so having a good Intent Schema example is typically enough: it isn’t hard to modify it with the actual Intents and Slots for your Skill. This is the one that would represent the “Ask” questions above:

{
  "intents": [
    {
      "intent": "GetWeatherIntent",
      "slots": [
        {
          "name": "Location",
          "type": "LIST_OF_LOCATIONS"
        },
        {
          "name": "Date",
          "type": "AMAZON.DATE"
        }
      ]
    },
    {
      "intent": "GetTimeIntent"
    }
  ]
}

Two things to notice here:

  1. The Intent names (GetTimeIntent and GetWeatherIntent) are not written in human or natural language. They are code. It’s the task of the interaction designer to define the “human language” that must be mapped to those Intent names. She will do that in the Utterances section. A piece of advice with Intent names: it’s good practice to add the word Intent as a suffix (i.e. GetWeatherIntent instead of GetWeather). Don’t get lazy, so that things don’t get confusing!!!
  2. The Slots have a name and a type. The type describes the kind of data that will be used when the Intent is invoked (e.g. 3rd April for the “Date” slot, Barcelona for the “Location” slot). And the name is… well… self explanatory. We’ll explain these in the Slots section. (There’s a sketch of what “the functionality” actually receives just below.)
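
To make the “contract” idea concrete: when a user triggers GetWeatherIntent, Alexa sends “the functionality” a JSON request naming the Intent and carrying the Slot values. This is a heavily trimmed sketch (the real request also carries session and metadata fields, and the values shown are just an illustration):

{
  "request": {
    "type": "IntentRequest",
    "intent": {
      "name": "GetWeatherIntent",
      "slots": {
        "Location": {
          "name": "Location",
          "value": "Barcelona"
        },
        "Date": {
          "name": "Date",
          "value": "2017-04-03"
        }
      }
    }
  }
}

“The functionality” only needs to look at the Intent name and the Slot values; how the user actually phrased the request is entirely the Interaction Model’s business.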

Slots

[If you’re a developer, saying that Slots are just “variables” will suffice for you to understand them]

They are the placeholders for the specific information that the user will provide to Alexa when using a Skill. The Slots section is where you define them. You have to define them BEFORE you use them in the Intent Schema, or you won’t be able to save your Skill in the developer console.

There are two kinds of Slots: built-in and custom. Built-in Slot types are those you would expect as basic data types in any programming language: numbers, dates, etc. Amazon is adding new ones all the time. This is the list as I type (obtained from here). Note they all have the prefix “AMAZON.” so that we can easily see that they are built-in:

  • AMAZON.DATE – converts words that indicate dates (“today”, “tomorrow”, or “july”) into a date format (such as “2015-07-05”).
  • AMAZON.DURATION – converts words that indicate durations (“five minutes”) into a numeric duration (“PT5M”).
  • AMAZON.FOUR_DIGIT_NUMBER – Provides recognition for four-digit numbers, such as years.
  • AMAZON.NUMBER – converts numeric words (“five”) into digits (such as “5”).
  • AMAZON.TIME – converts words that indicate time (“four in the morning”, “two p m”) into a time value (“04:00”, “14:00”).
  • AMAZON.US_CITY – provides recognition for major cities in the United States. All cities with a population over 100,000 are included. You can extend the type to include more cities if necessary.
  • AMAZON.US_FIRST_NAME – provides recognition for thousands of popular first names, based on census and social security data. You can extend the type to include more names if necessary.
  • AMAZON.US_STATE – provides recognition for US states, territories, and the District of Columbia. You can extend this type to include more states if necessary.

Custom slots are just lists of possible values (like the values in a drop-down list). That’s why they are typically called LIST_OF_WHATEVER. An example would be LIST_OF_WEEKDAYS, and the content would be:

Monday
Tuesday
Wednesday
Thursday
Friday
Saturday
Sunday
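
In the same way, the LIST_OF_LOCATIONS type referenced in the Intent Schema above would just be the list of place names you want your Skill to recognize. These values are only an illustration:

Barcelona
Paris
London
New York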

Sample Utterances

This is the heart of the Interaction design. I bet you will spend many hours polishing this!!!

So, here is what you do. Remember the Intents in the Intent Schema? Well, for each one of them you have to come up with all the real-life examples of speech you can think of. And you type them here. One by one. You will easily end up with hundreds of lines. I’ll discuss this in the shortcomings part of this post. A Sample Utterance looks like this:

NameOfIntent followed by a sentence in natural language that may contain no {SlotName} at all, one {SlotName}, or many {SlotNames}

In the example we’ve been following:

  • GetTimeIntent tell me the time
  • GetTimeIntent time please
  • GetTimeIntent what time is it
  • GetWeatherIntent tell me the weather for {Location} on the {Date}
  • GetWeatherIntent tell me the weather for {Location} {Date}

In the 4th example, the interaction designer is thinking of “tell me the weather for Paris on the 3rd April”. In the 5th example, the interaction designer is thinking of “tell me the weather for Paris tomorrow”. This is like Pokémon: you gotta catch them all (all the possible examples of speech by your users!)
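
Just to give a feel for how quickly these lines pile up, here are a few more variants I might add for GetWeatherIntent (purely illustrative; you keep going until you run out of phrasings):

  • GetWeatherIntent what is the weather in {Location} {Date}
  • GetWeatherIntent weather for {Location} on {Date}
  • GetWeatherIntent how will the weather be in {Location} {Date}
  • GetWeatherIntent is it going to rain in {Location} on {Date}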

That’s it!!! Now you know how to create your Interaction Model!!!

[The Amazon folks explain this interaction business here.]

Shortcomings or Limitations

You have to understand the big effort that Amazon is making here. The computing power that speech recognition uses is vast, and you have to avoid convoluted, complicated cases like the plague. From the days of Noam Chomsky and all his good work on grammars, we know that natural language is inherently ambiguous, and that when you’re defining a synthetic grammar it’s quite easy to introduce ambiguity and very hard (impossible in the general case, in fact) to make sure you don’t.

If you don’t know what I am talking about, let’s analyze this sentence:

“In the show, I liked everything but everything but the girl girl.” If you don’t know that there is a band called “Everything But The Girl” with a female lead singer, you would think that the sentence above is gibberish and discard it. Alexa would go crazy!

In order to avoid that, before AVS accepts the Interaction Model of your Skill, it runs some checks to make sure that you’re not introducing any ambiguity or loops that would drive Alexa crazy. In Computer Science terms, what you’re creating with the Interaction Model is a Context-Free Grammar, and the checks I mention are heuristics trying to detect whether the grammar is free of ambiguity. If you’re interested, there’s some heavy reading here.

So, Amazon set very strict rules for the definition of your Interaction Model, and these generate what are, in my opinion, the main limitations: both Custom Slots and Sample Utterances are static. You have to define them beforehand and you cannot change them on the go while the Skill is live. If you want to include an extra Utterance, no matter how innocent it looks, or if you need a new value in one of your Custom Slots, you have to change the Interaction Model AND submit the Skill for re-certification. Best case: it will take two full working days to introduce the change.

Imagine that your Skill deals with names of people (names of players, names of friends… whatever) as Slots. You have to provide the list with all the possible names BEFOREHAND. You cannot add Anaïs, or any other person with a name you wouldn’t have thought of, on the fly through usage of the Skill. You have to add it to the Interaction Model and re-submit for certification.

Managing the Sample Utterances as plain text is also very, very tricky. You will just lose track of what’s in there, and troubleshooting is kind of hard. My workaround is a little Access database tool with a simple but relational data model and some wonderful macros that “dump” the content of the database as a long string of text matching the Sample Utterances syntax that AVS expects in the developer console; then I copy & paste this super long string.

Everything else, I think, is super, and I am really grateful to Amazon for opening up the platform for all of us to explore and develop Skills.