Conversation Mining¶
The Conversation Mining feature allows you to drill down to interactions that are of interest to you or that have the most potential for improvement, eliminating the guesswork from manual evaluations so that you can focus your manual efforts solely on critical interactions.
You can access Conversation Mining by navigating to Contact Center AI > Quality Management > Analyze > Conversation Mining.
Conversation Mining has the following two sections:
- Interactions
- Audit Allocations
Note
Interactions are populated a few seconds after call termination.
Interactions¶
Users can see scored interactions or evaluation information at a glance from Conversation Mining. Users can apply filters to focus on specific interactions of interest or those with high potential for improvement, and save the filters for auditing purposes. Interactions visible on the Conversation Mining screen are limited to the user's assigned queues.
The Conversation Mining interaction listing option enables users to quickly identify specific interactions of interest, eliminating the need to sift through numerous individual interaction records. This provides valuable insights for informed decision-making, enhances operational efficiency, and reduces the time spent on manual reviews through targeted analysis, ultimately improving oversight quality. Additionally, users can customize the interaction listing page by adding extra metadata and columns.
The Conversation Mining Interactions view has the following key items:
- Agents: Shows the agent who last participated in the interaction and terminated the call. By hovering over an agent, users can view the tagged topics and tagged intents.
- Topic Tags: Each interaction shows all classified topics as tags. When topic filters are applied, relevant tags are highlighted in a different color. If filters are changed or cleared, highlighted tags are updated accordingly based on the new selections made.
- Intent Tags: Each interaction shows classified intents as tags. If filters are changed or cleared, the highlighted tags will update based on the new selections.
- Actions: Allows users to assign the interaction to the desired bookmark for later reference.
Note
Bookmarks must be created first from Settings. For more information, see Settings.
- Queues: Shows the queue in which the interaction was terminated.
Note
The evaluation form used to score the interaction always corresponds to the queue in which the interaction was terminated.
- Kore Evaluation Score: Shows the Kore Evaluation score (Auto QA Score) for the interaction based on the relevant evaluation form.
- Supervisor Auditor Score: Shows the Supervisor Audited score if the interaction has already been audited/manually evaluated.
- Sentiment Score: Shows the system-generated sentiment score for the interaction based on the context of what the customer said in the interaction.
- Moments: The Moments column shows the counts for adherences, violations, and omissions related to the configured metrics of the interaction.
When clicked, a drop-down shows the following three categories of metrics:
- Adherences: Metrics that were met during the conversation.
- Violations: Speech-based violations that occurred.
- Omissions: Metrics not adhered to, including playbook steps and dialog tasks.
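As an illustration of how these three counts relate to per-metric outcomes, the sketch below tallies hypothetical metric results into the Moments categories; the metric names and outcome labels are assumptions for illustration, not the product's data model.

```python
# Minimal sketch (not the product's API): tally per-metric outcomes for one
# interaction into the three Moments categories shown in the drop-down.
from collections import Counter

# Hypothetical per-metric outcomes for a single interaction.
metric_outcomes = {
    "greeting_used": "adherence",
    "profanity_check": "violation",
    "closing_statement": "omission",
    "playbook_step_identity_verification": "omission",
}

moments = Counter(metric_outcomes.values())
print(moments["adherence"], moments["violation"], moments["omission"])  # 1 1 2
```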
By clicking any of the interaction items, such as Agents, Actions, Kore Evaluation Score, and Moments, Conversation Mining provides filters (Interactions of interest with) to drill down.
Auditors can check the following near-miss scenarios by reviewing metrics on the audit screen:
- Evaluation Marking: If an agent's adherence is close to the required standard but not fully met, the system marks the evaluation as "No" and highlights similarities to fully adhered cases for easier comparison.
- Click-Through Navigation: The system provides clickable links (View) for near-miss agent utterances, similar to those for adhered cases, allowing for a more detailed review.
- Near-Miss Criteria: Near-miss criteria are based on predefined similarity thresholds. These thresholds help flag and navigate near-miss utterances close to adherence standards.
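As a rough sketch of the near-miss idea, the snippet below flags an agent utterance as a near miss when its similarity to a reference adherence phrase falls between two thresholds. The threshold values, embeddings, and function names are assumptions for illustration, not the product's actual criteria.

```python
# Minimal sketch, assuming precomputed utterance embeddings; thresholds are illustrative.
import numpy as np

ADHERENCE_THRESHOLD = 0.85   # assumed: at or above this, the utterance counts as adhered
NEAR_MISS_THRESHOLD = 0.70   # assumed: in [0.70, 0.85) the utterance is flagged as a near miss

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify_utterance(utterance_vec: np.ndarray, reference_vec: np.ndarray) -> str:
    score = cosine_similarity(utterance_vec, reference_vec)
    if score >= ADHERENCE_THRESHOLD:
        return "adhered"
    if score >= NEAR_MISS_THRESHOLD:
        return "near_miss"   # surfaced with a View link for detailed review
    return "not_adhered"

reference = np.array([1.0, 0.0, 0.0])   # embedding of the required phrase (illustrative)
utterance = np.array([0.8, 0.6, 0.0])   # embedding of the agent's utterance (illustrative)
print(classify_utterance(utterance, reference))  # near_miss (similarity = 0.8)
```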
Columns¶
Allows users to filter the following default fields:
- Supervisor Auditor Score: This shows the Supervisor Audited score if the interaction has already been audited/manually evaluated.
- Sentiment Score: This shows the system-generated sentiment score for the interaction based on the context of what the customer said in the interaction.
- Start Time: This shows the conversation start time in a specified time format on the interaction listing page (for example, 24th May, 2024, 1:17:10 PM).
- Duration: This shows the call duration (voice and chat), including talk time, hold time, and after-call work time, for example, 0h 6m 25s.
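As a small illustration of how the Duration value is composed from talk, hold, and after-call work time, consider the sketch below; the function and formatting are assumptions, not the product's implementation.

```python
# Minimal sketch: compose the Duration column from its parts and format it as "0h 6m 25s".
def format_duration(talk_sec: int, hold_sec: int, acw_sec: int) -> str:
    total = talk_sec + hold_sec + acw_sec
    hours, remainder = divmod(total, 3600)
    minutes, seconds = divmod(remainder, 60)
    return f"{hours}h {minutes}m {seconds}s"

print(format_duration(talk_sec=300, hold_sec=45, acw_sec=40))  # 0h 6m 25s
```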
Bookmarks¶
Allows users to assign the interaction to a bookmark and displays all the bookmarks that a given interaction has been assigned to.
Date Range Selection¶
Provides the option to select the date range for the conversation interactions. The default date range is always the last 7 days.
Chat History¶
This shows all the conversation history.
Filters¶
This provides the Filter options to filter the information based on your requirements.
Clicking any interaction will navigate you to the AI Assisted manual audit screen where you can review and evaluate the interaction.
Note
If you click any interaction that has not been assigned to you for audit, you will not be able to submit the evaluation.
Add New Filter¶
Adding a new filter lets you focus on areas of interest or areas with high potential for improvement, and allows users to save the filter for audit assignments. This helps users narrow down the options and identify which particular interactions have gone wrong.
Steps to Add New Filter:
- Click the Filters button in the upper-right corner. The following screen appears to add a new filter.
The New Filter provides the following three Filter categories of interest:
Filter by Efficiency¶
This provides an operational view of areas of interest where there is greater potential for improvement.
- Select the type of conversation interaction Channels, such as Voice or Chat.
- Choose the Audit Status: Audited, Assigned, or Not Assigned.
- From the Queues list, add the Queue names.
- From the Agent Groups list, add the agent group name based on the queue selected.
Note
The user can filter the Agents based on the interactions that are part of the queues that the user is part of.
- From the Agents list, add the agent name based on the queue selected.
Note
The user can filter the Agent Groups that are part of the queues; this is not based on agents in the agent group that are part of other queues.
- Enable either of the following options:
- Average Handling Time (AHT): Filters interactions based on a start and end handling time range for the interaction.
- Filter by deviation from AHT: Filters interactions by % deviation from the average handling time across all interactions for the selected date range, surfacing the interactions that are going wrong (see the sketch after these steps).
- Specify the Deviation % number.
- If No. of Transfers is selected, specify the number of transfers that occurred within each interaction.
- Click Apply to save the filter settings; they are stored as an Unsaved Filter in the Dashboard.
- If you do not intend to use this filter to assign an audit allocation, you can apply it without saving; however, to assign an audit allocation based on filters, you must save and name the filter so that you can reference it during audit allocation.
- Enable the Save Filter toggle to make the Unsaved Filter the default view in the Dashboard. All newly created Saved Filters and Unsaved Filters are tagged under the Saved Filters list.
Note
The filtered interactions count allows you to verify the interaction count based on the filter selections you make; this count is dynamically recalculated whenever you update filter selections. By default, the filtered interactions count is zero until you make the first filter selection.
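The sketch below illustrates the deviation-from-AHT idea referenced in the steps above: each interaction's handling time is compared against the average across the selected date range, and interactions deviating by more than the specified percentage are surfaced. The field names and the comparison direction are assumptions for illustration only.

```python
# Minimal sketch, assuming each interaction record carries a handling time in seconds.
from statistics import mean

def filter_by_aht_deviation(interactions, max_deviation_pct):
    """Return interactions whose handling time deviates from the average
    handling time (AHT) by more than max_deviation_pct percent."""
    aht = mean(i["handling_time_sec"] for i in interactions)
    flagged = []
    for interaction in interactions:
        deviation_pct = abs(interaction["handling_time_sec"] - aht) / aht * 100
        if deviation_pct > max_deviation_pct:
            flagged.append(interaction)
    return flagged

sample = [
    {"id": "int-1", "handling_time_sec": 380},
    {"id": "int-2", "handling_time_sec": 400},
    {"id": "int-3", "handling_time_sec": 900},  # well above the average
]
print([i["id"] for i in filter_by_aht_deviation(sample, max_deviation_pct=50)])  # ['int-3']
```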
Filter by Experience¶
Avg. Waiting Time¶
This provides filter drop-down range selection conditions in seconds.
Sentiment Score¶
This indicates interactions with a positive sentiment score (higher) and a negative sentiment score (lower).
Provides a slider bar to set the minimum and maximum range of interactions.
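A minimal sketch of the slider's effect follows, assuming each interaction carries a numeric sentiment score; the field name and scale are illustrative assumptions.

```python
# Minimal sketch: keep only interactions whose sentiment score falls within the slider range.
def filter_by_sentiment(interactions, min_score, max_score):
    return [i for i in interactions if min_score <= i["sentiment_score"] <= max_score]

sample = [
    {"id": "int-1", "sentiment_score": -0.6},
    {"id": "int-2", "sentiment_score": 0.1},
    {"id": "int-3", "sentiment_score": 0.8},
]
print([i["id"] for i in filter_by_sentiment(sample, 0.0, 1.0)])  # ['int-2', 'int-3']
```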
CSAT¶
This shows the distribution of interactions across the score range based on the customer's responses to the feedback survey, allowing you to drill down accordingly.
Intent¶
This indicates the underlying cause and customer intent that the conversation pertains to.
Topic¶
This indicates the subject that a conversation pertains to.
Churn Monitor¶
This provides the underlying cause and need that a conversation relates to. It indicates the loss of customers over a specific period.
This has the following two options to monitor churn:
Churn Risk¶
Provides the extent of customer churn in a given conversation. Here, the Supervisor can view the churn risk % for a given time period.
Note that customer churn is calculated once per interaction; it is not calculated as a score.
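Because churn is a per-interaction flag rather than a score, the churn risk % for a period can be thought of as the share of flagged interactions in that period, as in the sketch below; the field names are assumptions for illustration.

```python
# Minimal sketch: churn risk % over a date range from per-interaction churn flags.
from datetime import date

def churn_risk_pct(interactions, start: date, end: date) -> float:
    in_range = [i for i in interactions if start <= i["date"] <= end]
    if not in_range:
        return 0.0
    at_risk = sum(1 for i in in_range if i["churn_risk"])
    return 100.0 * at_risk / len(in_range)

sample = [
    {"date": date(2024, 5, 20), "churn_risk": True},
    {"date": date(2024, 5, 21), "churn_risk": False},
    {"date": date(2024, 5, 22), "churn_risk": False},
    {"date": date(2024, 5, 23), "churn_risk": True},
]
print(churn_risk_pct(sample, date(2024, 5, 20), date(2024, 5, 23)))  # 50.0
```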
Escalation¶
This detects the number of escalations raised to the Supervisor by a customer.
Filter by Behavior¶
Empathy Score¶
This measures the level of understanding and compassion shown by the agent towards the customer's situation, for example when the customer has shown frustration or displeasure (negative sentiment). A higher score indicates a more empathetic interaction.
Crutch Word Score¶
This indicates the extent of filler words (for example, umm, uh, and so on) used by the agent. A higher score indicates higher usage of crutch words.
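As an illustration, a crutch word score could be computed as the share of filler words in the agent's utterances, as in the sketch below; the word list and formula are assumptions, not the product's scoring.

```python
# Minimal sketch: percentage of the agent's words that are crutch (filler) words.
import re

CRUTCH_WORDS = {"umm", "uh", "er", "like", "basically"}  # illustrative list

def crutch_word_score(agent_utterances):
    words = []
    for utterance in agent_utterances:
        words.extend(re.findall(r"[a-z']+", utterance.lower()))
    if not words:
        return 0.0
    crutch_count = sum(1 for word in words if word in CRUTCH_WORDS)
    return 100.0 * crutch_count / len(words)

print(crutch_word_score(["Umm, let me check that for you", "It is, uh, basically resolved"]))  # 25.0
```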
Agent Playbook Adherence¶
This indicates the adherence percentage to the Agent AI playbook assigned to that interaction.
Kore Evaluation Score¶
This indicates the automated QA score associated with an interaction based on the evaluation form assigned to the interaction's queue.
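For intuition, the sketch below scores an interaction against an evaluation form with weighted yes/no questions; the form structure, weights, and answers are illustrative assumptions, not the actual Auto QA model.

```python
# Minimal sketch: weighted score from automated yes/no answers to form questions.
def evaluation_score(form_questions, auto_answers):
    """form_questions: {question_id: weight}; auto_answers: {question_id: bool}."""
    total_weight = sum(form_questions.values())
    earned = sum(weight for q, weight in form_questions.items() if auto_answers.get(q))
    return 100.0 * earned / total_weight if total_weight else 0.0

form = {"greeting": 20, "identity_check": 30, "resolution_offered": 50}
answers = {"greeting": True, "identity_check": True, "resolution_offered": False}
print(evaluation_score(form, answers))  # 50.0
```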
Once you Save Filter, the following filter options are available:
- Copy: Allows the user to create another saved copy of the filter.
- Mark as default: Allows the user to apply the newly created filter as the default filter whenever the Conversation Mining tab is opened.
- Edit Filter: Allows the user to edit a saved filter.
- Delete Filter: Allows the user to delete the saved filter.
Audit Allocations¶
This helps users to create and assign allocations to auditors for manual quality scoring.
Users can access Audit Allocations by going to Contact Center AI > Quality Management > Analyze > Conversation Mining > Audit Allocation.
Audit Allocations has the following options:
- Agent: Shows the name of the Auditor.
- Actions: Allows auditors to assign the allocation to the desired bookmark for later reference.
- Assigned Date: Shows the assigned date to start the audit.
- Name: Shows the audit name.
- Created By: Shows the name of the auditor who initiated the audit allocation.
- Evaluation Form: Shows the list of forms that are assigned to the QM auditors as assessments for reviewing compliance.
- Kore Evaluation Score: Shows the Kore Evaluation score.
- Filters: Shows the filter options to search and add the filters.
- New Audit Allocation: Allows users to create and assign the interactions for a new audit allocation.
New Audit Allocation¶
Settings¶
Steps to add New Audit Allocation in Settings:
- Click the New Audit Allocation button. The following Settings screen appears to assign the interactions for a new audit allocation.
- Under the Settings, enter a Name for the audit that needs to be done.
- Enter a short Description of the audit (optional).
- Select an Evaluation Form from the drop-down list to evaluate against.
- Select Agents to search for an agent from the drop-down list and assign specific agents to a Queue for audit allocation.
- Select Agent Groups to search for an agent group from the drop-down list and assign the agent group to a Queue for audit allocation.
- Click Next to move to the Allocation section.
Allocation¶
Steps to Add New Audit Allocation in Allocation:
- Select an Allocation Type (Random or Custom).
- Random allocation allows randomly sampled interactions to be assigned for audit.
- Custom allocation allows users to select saved filters from Conversation Mining to be assigned for audit, allowing focused evaluations.
- By default, the Random radio button is selected. If you choose Random, then select a Date range.
- Select the Channel by enabling the Voice toggle button, and specify the % Interactions per agent that you want to assign for audit. Based on this input, a random set of interactions is selected among the selected agents and the selected queue (based on the form selection). A sketch of this sampling follows these steps.
a. The number of interactions per agent count below the input box displays the average number of interactions across the selected agents, based on the % interactions per agent allocation input.
b. The total interactions count at the bottom of the slideout displays the total interactions selected through random sampling, based on the user input and the selected date range. The % interactions per agent input across channels determines the count of interactions that will be assigned for this audit, and it can be adjusted by altering the input in the fields mentioned.
- If you choose Custom, the following screen appears to select a saved filter for Custom Allocation and assign those interactions for audit.
- Select a required Filter option from the above search filter for audit.
The total interactions count displays the total number of interactions that will be assigned for this audit based on the evaluation form (queue), the agent group selection, and the filter selection.
- Click Next to move to the Assignment section.
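The sketch below illustrates the random allocation sampling referenced in the steps above: a percentage of each selected agent's interactions is sampled for audit. The field names, rounding rule, and seeding are assumptions for illustration, not the product's implementation.

```python
# Minimal sketch: sample a percentage of each agent's interactions for audit.
import random
from collections import defaultdict

def random_allocation(interactions, pct_per_agent, seed=None):
    rng = random.Random(seed)
    by_agent = defaultdict(list)
    for interaction in interactions:
        by_agent[interaction["agent_id"]].append(interaction)
    sampled = []
    for agent_id, items in by_agent.items():
        sample_size = max(1, round(len(items) * pct_per_agent / 100))
        sampled.extend(rng.sample(items, sample_size))
    return sampled

pool = [{"id": f"int-{n}", "agent_id": f"agent-{n % 3}"} for n in range(30)]
selected = random_allocation(pool, pct_per_agent=20, seed=7)
print(len(selected))  # 6 (roughly 20% of each agent's 10 interactions)
```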
Assignment¶
Steps to Add New Audit Allocation in Assignment:
- Select the Auditors from the Search filter to whom you want to assign interactions for manual evaluation.
- Enter the % allocation of interactions that you want to allocate to each selected auditor.
The Interactions column displays the number of interactions that will be assigned to each auditor based on the allocation % input, allowing you to adjust the input based on your preferences.
- The total allocation percentage across all auditors must sum to 100% to enable the Create button (see the sketch at the end of this section).
Note
The Create button is enabled only once the assignment configuration is complete and the total allocation percentage is 100%.
- Click Create to assign the interactions for evaluation to the selected auditors.
- The interactions that users see listed in the Audit Allocation tab are the interactions that have been assigned to them for audit.
Note that if this page is empty, it implies that there are no interactions to assign for evaluation.
- Upon completion of evaluation for each interaction, the corresponding interaction is removed from the audit allocation page.
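The sketch below illustrates the assignment step: the allocated interactions are split across auditors by percentage, with the 100% total check that gates the Create button. The names and the rounding approach are illustrative assumptions.

```python
# Minimal sketch: split allocated interactions across auditors by percentage.
def split_by_auditor(interactions, allocation_pct):
    """allocation_pct: {auditor_id: percent}; percentages must total 100."""
    if round(sum(allocation_pct.values()), 2) != 100:
        raise ValueError("Total allocation percentage across auditors must be 100%")
    assignments, start = {}, 0
    for auditor, pct in allocation_pct.items():
        count = round(len(interactions) * pct / 100)
        assignments[auditor] = interactions[start:start + count]
        start += count
    return assignments

pool = [f"int-{n}" for n in range(10)]
result = split_by_auditor(pool, {"auditor-a": 60, "auditor-b": 40})
print({auditor: len(items) for auditor, items in result.items()})  # {'auditor-a': 6, 'auditor-b': 4}
```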