Simulate the Conversation
The Simulator tab is where you can check what the conversation between the bot and the user will look like.
Simulated actions depend on the selected channel because capabilities differ across channels. You can check how your rich cards, carousels, and quick replies work in the simulator.
You can also check the behavior of Bot Receives and Bot action elements. Each element displays the specifics of what occurred (attributes gathered, whether the API call was successful, and so on) and lets you simulate input that is expected from the end user but is not yet available in the conversation (such as a destination or location).
Keep in mind that for carousels and rich cards, the cards appear one after the other in the simulator, not next to each other as they do in dialogs.
You can use the EXPIRE SESSION action in the simulator to check how expired session dialogs appear to end-users.
When you are satisfied with the conversation simulation, you can activate the chatbot.
If the simulation fails, it is likely because chatbot elements are not configured properly. The system notifies you when something is missing.
Once you start the simulation, you will see the chat logs on the right. If at any time you want to start over, click the RESTART SIMULATION button.
Check Natural Language Processing
Here is also where you can check how well the NLP (Natural Language Processing) part of Answers works. Use synonyms of the words you trained the bot with and see how well the bot recognizes the intent. For example, train the chatbot with the word 'baggage' and then enter 'luggage' as the end user.
Here are the training phrases that we used to train the Lost baggage intent:
Check how well Answers recognizes the intents in your chatbot.
In the simulator:
- Click Start Simulation.
- Enter a user message.
- Click Intent engine.
In the right-hand pane, you can view the following:
Intent matched: The intent with the highest score that was recognized by evaluating the tokenized text against training phrases.
If the intent match is not correct, check the Tokenized text section to understand which words were used to evaluate the intent score.
Example: Your chatbot uses the following intents: 'book appointment', 'cancel appointment', and 'reschedule appointment'. If Answers incorrectly recognizes 'book appointment' as the intent instead of 'cancel appointment', do one or more of the following:
- Add more training phrases to the correct intent (cancel appointment).
- Move some of the training phrases between intents (from 'book appointment' to 'cancel appointment').
- If intents are similar, merge them into a single intent, and use a different approach to distinguish between these intents. In this example, 'book appointment', 'cancel appointment', and 'reschedule appointment' may have similar training phrases. Merge these intents into one. Then, use a combination of named entity recognition (NER) and a custom attribute type to focus on the verbs 'book', 'cancel', and 'reschedule'.
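To see why intents with similar training phrases are easy to confuse, consider a toy token-overlap scorer. This is a hypothetical sketch, not the actual Answers engine; the intent names and phrases come from the example above, and the scoring function is an assumption for illustration only.

```python
# Toy illustration of scoring tokenized text against training phrases.
# This is NOT the actual Answers algorithm, just a sketch of why
# intents with similar training phrases produce close scores.

TRAINING = {
    "book appointment": ["book an appointment", "schedule a visit"],
    "cancel appointment": ["cancel my appointment", "call off my visit"],
    "reschedule appointment": ["reschedule my appointment", "move my visit"],
}

def tokenize(text):
    return set(text.lower().split())

def score(message, phrases):
    msg = tokenize(message)
    # Best overlap ratio against any training phrase of the intent.
    return max(len(msg & tokenize(p)) / len(msg | tokenize(p)) for p in phrases)

def match_intent(message):
    scores = {intent: score(message, ps) for intent, ps in TRAINING.items()}
    return max(scores, key=scores.get), scores

intent, scores = match_intent("I want to cancel my appointment")
print(intent)   # 'cancel appointment' wins here
print(scores)   # but all three intents score above zero
```

Because all three intents share the word 'appointment', every intent gets a non-zero score; the more overlapping vocabulary the intents share, the smaller the margin between the winner and the runners-up.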
Route to dialog: The dialog that was linked to the recognized intent (in the Intent matched section) when the chatbot was built.
Original text: The original user message without any changes, such as autocorrection and other post-processing actions.
Tokenized text: The text that was obtained after word autocorrection and removing stop words.
Stop words are the words in the Answers stop list. These words are insignificant and are filtered out (stopped) before processing natural language data.
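Stop-word filtering can be sketched as follows. The stop list here is a made-up example; Answers maintains its own internal list, which is not reproduced here.

```python
# Hypothetical stop list for illustration; Answers uses its own list.
STOP_WORDS = {"i", "a", "an", "the", "my", "to", "is", "was", "please"}

def remove_stop_words(text):
    # Keep only the tokens that carry meaning for intent evaluation.
    return [w for w in text.lower().split() if w not in STOP_WORDS]

print(remove_stop_words("I lost my baggage"))  # ['lost', 'baggage']
```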
Word autocorrection is applied on words that are not found in the training phrases. It is based on the following:
- synonym lookup
- replacement of characters as a part of typo-correction. If Answers does not recognize a user message, it assumes that the user message contains a typographical error, and changes a few characters in the word to see if the altered word is present in the training phrases.
If a word is autocorrected incorrectly, Answers may recognize the wrong intent. If you find such wrongly autocorrected words, add a training phrase that contains the original word (before autocorrection) to the correct intent. Then, Answers will no longer autocorrect this word.
To improve recognition accuracy, add more training phrases.
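The character-replacement step can be pictured as a similarity search against the vocabulary of the training phrases. The sketch below uses Python's difflib as a stand-in; the real correction logic in Answers is not documented here, and the vocabulary is a made-up example.

```python
import difflib

# Example vocabulary drawn from training phrases (an assumption).
TRAINING_VOCABULARY = {"baggage", "luggage", "lost", "flight", "booking"}

def autocorrect(word):
    """Return (word, corrected_flag). Known words pass through unchanged;
    unknown words are mapped to a close training-phrase word if one exists.
    A sketch of typo correction, not the actual Answers implementation."""
    if word in TRAINING_VOCABULARY:
        return word, False
    close = difflib.get_close_matches(word, TRAINING_VOCABULARY, n=1, cutoff=0.8)
    return (close[0], True) if close else (word, False)

print(autocorrect("bagage"))   # ('baggage', True) -- typo corrected
print(autocorrect("luggage"))  # ('luggage', False) -- already known
```

This also illustrates the failure mode described above: if a word the user genuinely meant is absent from the training phrases, the correction step may map it onto a similar but wrong word, which is why adding the original word as a training phrase prevents the unwanted correction.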
Autocorrected: Indicates whether autocorrection was applied. If autocorrection was applied, the field value is true.
Resolved attributes: Resolved NER attributes that are configured for the recognized intent.
Simulate User Interaction
Simulate what the conversation would look like when users are responding to your chatbots.
Simulate Incoming Files
You can also simulate receiving files from end users. Click Send message and select the type of file you expect the end user to send during the conversation.
Simulate a User Location
Select a location to simulate how your chatbots converse with users based on their geography.
Simulate Image Caption Display
Simulate the image caption display when sending images to chat apps which support image captions.
As well as simulating the interaction with the user, you can use the simulator to see which bot actions occur and when, giving you insight into the customer journey when interacting with the bot.
Here is what you can track in the simulator to see what is going on in the background:
- User Input - See which keywords from user responses the bot registers as matches
- Delay - See when a delay occurs during the chat
- Close Session - See when users are presented with the option to close the session
- New Dialog - See when the conversation enters a new dialog to get better insight into the user flow
- Switch Dialog - See when the conversation switches to another dialog to get better insight into the user flow
- Transfer to Agent - See when users are offered the option to transfer to an agent
- Webhook - See webhook request and response data
- CSAT - See details about CSAT