If you're on the Brand plan or above, you can use the voice answer type to blend the power of AI with your own friendly face to build a "human chatbot".
How does it work?
A respondent records their question. After a little GPT-3 AI magic, they are sent to the step in your videoask that answers their question.
Note: Currently English is the only language supported by GPT-3 AI. If you would like to use this feature in another language, you can do so only with keyword matching.
How is this similar to a chatbot?
Chatbots allow you to answer site visitor queries at scale, regardless of whether you are online. They are super handy to help customers answer frequently asked questions or find resources on your site at any hour of the day. The downside is that, depending on their intelligence, they can send customers in frustrating loops and, by their nature, lack the human touch.
The voice answer type, or human chatbot, relies on the same elements as generic chatbots (i.e. matching customers' queries with answers based on keywords) but takes them to the next level through the use of cutting-edge GPT-3 AI technology. We also help you bring the human element back to these automated interactions by greeting customers with a real person instead of a bot and prompting them to speak their queries instead of typing them.
Oh, and yes, you can embed your human chatbot as a widget.
How does the voice answer type differ from other uses of conditional logic?
Voice and multiple-choice answer types use conditional logic almost identically. With each of them, you can map a prompt or choice to another step, end screen, or URL redirect.
The difference is that with multiple-choice (and NPS®) logic, respondents choose their answer for themselves. With the voice answer type, the respondent never sees the topics they're being matched with; the AI does this work for them.
While traditional logic is an awesome way to guide respondents down different paths, their options are constrained by the text on the multiple-choice or NPS® score buttons they can see.
With the voice answer type, respondents can ask any question they want in their own words. The AI technology we use will analyze their words to measure the intent of their question and match them with the appropriate answer. This gives greater flexibility in the scope and number of options you can cover (because using a lot of options will not overwhelm respondents).
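To make the matching idea concrete, here is a toy sketch of it in Python. This is purely conceptual: VideoAsk's real matching runs server-side with GPT-3, and the function name, threshold, and word-overlap scoring below are illustrative stand-ins, not the actual algorithm.

```python
def match_category(transcript, categories, fallback="fallback", threshold=0.2):
    """Toy stand-in for intent matching: word overlap instead of GPT-3."""
    words = set(transcript.lower().split())
    best, best_score = fallback, 0.0
    for category in categories:
        keywords = set(category.lower().split())
        # Fraction of this category's keywords that appear in the transcript.
        score = len(words & keywords) / len(keywords)
        if score > best_score:
            best, best_score = category, score
    # Queries that match nothing well enough go to the fallback answer.
    return best if best_score >= threshold else fallback

print(match_category(
    "I've been having a lot of migraines recently and my head hurts",
    ["headache migraines", "billing invoice", "shipping delivery"],
))  # prints "headache migraines"
```

The key idea the real AI improves on is the scoring step: instead of literal word overlap, GPT-3 gauges the intent behind the words, so "my head hurts" can still land on a headache category.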
Use the voice answer type
1. Once you've created a new step, click the answer type to change it.
Note: This answer type requires multiple steps to work properly.
2. Select Voice from the dropdown menu.
3. Type in the category field to add a category.
4. After adding your first category, click + Add a category or keyword to create more.
Note: A maximum of 30 categories may be used per step.
5. Click the - icon to delete a category.
6. Now comes the fun part. You'll need to map your categories to steps within your videoask using conditional logic. To do this, you can create a new step:
And follow the prompts:
Or match existing steps in the Logic tab:
7. To match respondent queries based solely on the keywords you have entered into the categories, toggle Disable AI, use keywords only on.
Note: Disabling the AI removes the "intelligence" from the matching process. This means respondents will need to say the exact word(s) in your category to be accurately matched with subsequent steps.
8. When we can't match a respondent question with any of the categories you have defined, we will send them to a fallback answer. To customize the fallback answer, toggle Use custom fallback on.
Then, in the Logic tab, be sure to select which step you'd like to use as a fallback.
That's it! Just repeat the category creation and logic process to have your human chatbot answer as many questions as needed. When you're finished adding steps and categories, we highly recommend giving it a good number of test runs to make sure your categories are behaving the way you want them to.
Once it's consistently giving correct answers to the questions asked, you're ready to share your human chatbot with the world.
View and analyze your responses
When someone responds to your human chatbot, you'll see their response transcript and the category they matched with.
Creating just the right categories for your customer enquiries may take a bit of testing. To assess what's working and what's not, you can tag and filter your responses or export them as a .csv or .xlsx file to run an analysis.
If you export your videoask results, you'll see three columns for each step using the voice answer type.
The first column will show what category the response matched with:
When GPT-3 receives a transcript, it compares it to each category in your videoask. Each category is given a confidence score, and the response is matched with the category that generates the highest score. The second column of your voice responses will display the confidence score of each matched category.
For example: On line 10 there is a 91% confidence score for the category Headache, from the transcript "I've been having a lot of migraines recently".
The third column of the voice answer results will display the response transcript:
From here you can analyze the frequency of correct vs incorrect matches, and see what questions you may be receiving that do not yet have a category. This information can then be used to optimize your human chatbot to deliver the best possible answers!
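If you'd like to script that analysis, here's a minimal Python sketch. It assumes the three columns described above (matched category, confidence score, transcript); the actual column headers in your export may differ, and the sample rows below are invented stand-ins for reading your real .csv file.

```python
from collections import Counter

# Stand-in rows for csv.DictReader over your exported file; replace with
# your real export, e.g. csv.DictReader(open("videoask_export.csv")).
rows = [
    {"category": "Headache", "confidence": 0.91,
     "transcript": "I've been having a lot of migraines recently"},
    {"category": "Headache", "confidence": 0.84,
     "transcript": "My head hurts after work"},
    {"category": "Fallback", "confidence": 0.00,
     "transcript": "Do you ship to Canada?"},
]

# Frequent fallbacks suggest a question with no matching category yet.
category_counts = Counter(row["category"] for row in rows)
print(category_counts.most_common())

# Low-confidence matches are the ones worth reviewing by hand.
needs_review = [r["transcript"] for r in rows if r["confidence"] < 0.8]
print(needs_review)
```

Sorting by fallback frequency and low confidence quickly surfaces the categories (or missing categories) that need attention.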
How many categories do I need?
The most important thing is to anticipate what types of questions your respondents will ask. Try to create a category for each main topic that may come up.
If the AI cannot match a respondent question with any category, it will give them a fallback response. So the more accurate the categories, the more likely a respondent will get to the right answer.
Things to watch out for:
AI technology is smart, but it's not perfect. If you have poorly worded categories or categories that overlap, it may get confused as to what the category is actually about or where to send respondents. Keep your categories simple (a few words max) and clearly defined.
Also, note that there is a maximum of 30 categories per voice step.
What do I do if queries are not landing on the step I want?
This is most likely due to wording issues, confusion between categories, or overly complicated categories. The best way to find the right keywords for your categories is to test them out.
Remember, the more the AI technology is used, the better it gets. It does its best to gauge the intent of the question asked, but small tweaks to the keywords you use in your category can really help it out (e.g. making a keyword one word instead of two, where appropriate).
💡 Pro tip: Because there's no guarantee respondents will be directed to the right answer, or in case they have multiple questions, adding an option for them to go back and ask another question can be helpful.
To do this, add a "Go back" or "Ask again" multiple choice option and set the logic to return to the voice step (in this case, step 1 of our videoask).
How are minutes counted with this answer type?
We count the minutes every time we transcribe audio or video, whether that's you creating a videoask, a respondent answering one, or subsequent back-and-forth replies.
Because a respondent's voice query needs to be transcribed for the GPT-3 technology to work, we count each query towards your usage for the month. Bear in mind that most queries should be short (seconds as opposed to minutes), so this feature can still support relatively heavy usage without burning through your minutes. Learn more about how video processing minutes are calculated.