Transcription confidence scores are used to help ensure reliable slot filling when building voice-enabled chatbots with Amazon Lex. These scores provide a measure of confidence in Amazon Lex's conversion of speech to text for slot values.
These scores can be used to validate whether a spoken slot value was correctly understood, decide whether to ask for confirmation or re-prompt, and branch conversation flows based on recognition confidence.
Progressive confirmation, adaptive re-prompting, and branching logic are ways to leverage confidence scores for better slot handling.
These patterns help create more robust slot filling experiences, reduce errors in capturing critical information, improve containment rates for self-service, and enable smarter conversation flows.
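To make the branching pattern concrete, the following is a minimal sketch of a Lex V2 dialog code hook written as a Python Lambda function. It assumes the Lambda input event exposes a transcriptions list with a transcriptionConfidence value; the thresholds and the BookingId slot name are hypothetical and would need tuning for a real bot.

```python
# Sketch of a Lex V2 dialog code hook that branches on audio transcription confidence.
# Thresholds and the "BookingId" slot are illustrative assumptions, not prescribed values.
HIGH_CONFIDENCE = 0.80   # accept the value without an extra confirmation turn (assumed)
LOW_CONFIDENCE = 0.50    # below this, re-elicit the slot (assumed)


def lambda_handler(event, context):
    # The event may carry one or more alternative transcriptions with confidence scores.
    transcriptions = event.get("transcriptions", [])
    top = transcriptions[0] if transcriptions else {}
    raw = top.get("transcriptionConfidence", 0.0)
    # The confidence may be expressed as a bare number or as an object with a score field.
    confidence = raw.get("score", 0.0) if isinstance(raw, dict) else float(raw or 0.0)

    intent = event["sessionState"]["intent"]

    if confidence >= HIGH_CONFIDENCE:
        # High confidence: let Lex continue the dialog without confirming.
        dialog_action = {"type": "Delegate"}
    elif confidence >= LOW_CONFIDENCE:
        # Medium confidence: progressively confirm what was heard before moving on.
        dialog_action = {"type": "ConfirmIntent"}
    else:
        # Low confidence: adaptively re-prompt by eliciting the slot again.
        dialog_action = {"type": "ElicitSlot", "slotToElicit": "BookingId"}

    return {
        "sessionState": {
            "dialogAction": dialog_action,
            "intent": intent,
        }
    }
```

The same structure extends to branching logic: instead of re-prompting, a low-confidence result could route the conversation to a different intent or to an agent.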
An AWS CloudFormation template is included to demonstrate these patterns.
This approach optimizes the conversation flow based on the quality of the captured input and prevents erroneous or redundant slot capture, improving the user experience while increasing self-service containment rates.
Audio transcription confidence scores are available only in the English (GB) (en_GB) and English (US) (en_US) languages and are supported only for 8 kHz audio input.
To test the solution, try a conversation that includes words that might not be clearly understood. Associate the Amazon Lex bot with an Amazon Connect contact flow and place a call.
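If you script that setup, a sketch of the association step using boto3 might look like the following; the Connect instance ID and Lex bot alias ARN are placeholders you would replace with your own values.

```python
import boto3

connect = boto3.client("connect")

# Placeholder identifiers; substitute your Amazon Connect instance ID and Lex V2 bot alias ARN.
INSTANCE_ID = "your-connect-instance-id"
LEX_BOT_ALIAS_ARN = "arn:aws:lex:us-east-1:123456789012:bot-alias/BOTID12345/ALIASID123"

# Associate the Lex V2 bot alias with the Connect instance so a contact flow
# can route calls to the bot.
connect.associate_bot(
    InstanceId=INSTANCE_ID,
    LexV2Bot={"AliasArn": LEX_BOT_ALIAS_ARN},
)
```

After the association, you can reference the bot from a "Get customer input" block in your contact flow and place a test call.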
Optimizing the user experience is at the forefront of any Amazon Lex conversational designer's priority list, and so is capturing information accurately. This feature gives designers choices around confirmation routines that drive a more natural dialog between the customer and the bot.
This article was written by Alex Buckhurst, Kai Loreck, Neel Kapadia, and Anand Jumnani, whose careers focus on Amazon Web Services and customer-centric design.