
Evaluation Metrics

This phase is based entirely on individual questions. It allows you to create custom evaluation criteria across different measurement types. You create Evaluation Metrics in the Evaluation Forms section using these measurement types.

You can access the Evaluation Metrics by navigating to Contact Center AI > Quality AI > Configure > Evaluation Forms > Evaluation Metrics.

The Evaluation Metrics page provides the following options:

  • Name: Shows the name of the Evaluation Metrics.
  • Metric Type: Shows the Evaluation Metrics Type (Measurement Type) selected.
  • Evaluation Forms: Shows the Evaluation Forms used for configuring and assigning the evaluation metrics to different channels and queues.
  • Edit: Allows you to edit or update the existing Evaluation Metrics.
  • Delete: Allows you to select and delete any Evaluation Metrics shown on the page.
  • Search: Provides a quick search to view and update Evaluation Metrics by name only.

Add New Evaluation Metrics

You can access the New Evaluation Metric option by navigating to Contact Center AI > Quality AI > Configure > Evaluation Forms > Evaluation Metrics > New Evaluation Metric.

Steps to create New Evaluation Metrics:

  1. Click the New Evaluation Metric button in the upper-right corner. The following screen appears, allowing you to select a type of evaluation metrics measurement.
    Metric Measurement Type

  2. Select the type of Evaluation Metrics Measurement: By Question, By Speech, By Playbook Adherence, or By Dialog Task.

The following sections describe the Evaluation Metrics Measurement Types and their configuration fields:

By Question: Configures the metric and expected responses based on a specific question.

Name: Enter a name for future reference of the metric.
Language: Select a language from the drop-down list.
Note: For Dynamic adherence, you must configure at least one Trigger and at least one agent Answer utterance. For Static adherence, you must configure at least one agent Answer utterance.
Question: Enter the question against which the adherence check is performed. It serves as a reference for the supervisor during audits and interaction evaluations.
Adherence Type: Provides the following two types of adherence:
  • Static: Evaluates agent adherence across all conversations, with no conditional check required. You set up the acceptable agent utterances for a particular queue and then configure how closely the agent's answers must match them.
  • Dynamic: Evaluates agent adherence only when the configured trigger is detected, which allows a conditional check. You set up a trigger on either a customer or an agent utterance and then configure the answers appropriate to that scenario. For example, Greetings and Etiquette use cases can tolerate a lower adherence Similarity, close to 60% (Yellow), whereas Policy Privacy or Disclaimer use cases require a Similarity close to 100% (Green) because adherence there is critical.
  • Trigger: Provides two options, depending on whether the trigger is created from an Agent Utterance or a Customer Utterance. You can add more than one Trigger utterance, add Answers for each utterance, and delete any that are not required.
    • Customer Utterance: Select Customer Utterance if the adherence check is triggered by the customer. You can enter utterances yourself or select generative AI suggestions with the same meaning.
    • Agent Utterance: Select Agent Utterance if the adherence check is triggered by the agent. Enter utterances or use generative AI suggestions with the same meaning. You can add multiple utterances for the customer and the agent and delete them as needed.
Answer: Provides the expected answers relevant to your question. You can enter a few different utterances yourself or select generative AI suggestions with the same meaning, which reduces setup time. You can add more than one expected answer and delete any added answer. For Static adherence, you then define a Similarity percentage for the metric based on the use case and attribute.
  • Similarity: Set the expected Similarity percentage for the metric; it applies to both Static and Dynamic adherence. For example, Greetings and Etiquette use cases can tolerate a Similarity close to 60%, whereas Policy Privacy or Disclaimer use cases require a Similarity close to 100% because adherence there is critical. A minimal sketch of this check follows.
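
The exact similarity model Quality AI uses is not documented here, but a minimal Python sketch can illustrate the Static and Dynamic flows. It assumes a conversation is a list of (speaker, text) pairs and uses a plain string-overlap score as a stand-in for semantic similarity; all function and field names are illustrative, not the product's API.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Stand-in score: the platform's actual semantic-similarity model
    # is not documented here, so plain string overlap is used instead.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def static_adherence(agent_turns, answers, threshold):
    # Static: pass if any agent turn matches any configured Answer
    # utterance at or above the Similarity threshold.
    return any(similarity(turn, answer) >= threshold
               for turn in agent_turns for answer in answers)

def dynamic_adherence(turns, triggers, answers, threshold):
    # Dynamic: evaluate only after a configured Trigger utterance is
    # detected, then check the agent turns that follow it.
    for i, (speaker, text) in enumerate(turns):
        if any(similarity(text, trigger) >= threshold for trigger in triggers):
            later = [text for spk, text in turns[i + 1:] if spk == "agent"]
            return static_adherence(later, answers, threshold)
    return True  # the trigger never fired, so the check does not apply

# A Greetings-style check with a 60% threshold (thresholds vary per use case).
turns = [("customer", "Hi, I need help with my bill"),
         ("agent", "Hello, thank you for calling, how may I help you?")]
print(dynamic_adherence(turns,
                        triggers=["Hi, I need help"],
                        answers=["Hello, thank you for calling"],
                        threshold=0.6))  # True
```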
Count Type: Provides the following two options, based on the selected Adherence Type:

  • Entire Conversation: Checks for adherence at any point in the conversation; it does not matter where in the conversation the expected utterance occurs.
  • Time Bound: Checks adherence within a certain time range of the interaction: a number of seconds for voice, or a number of messages for chat, at the start or end of the conversation (see the sketch after this list).
    • Parameter: Select the section of the interaction to evaluate for this metric. If you select First Part of Conversation or Last Part of Conversation, enter the following details:
      • Voice: Enter the number of seconds from the start or end of the interaction within which this metric should be evaluated.
      • Chat: Enter the number of messages from the start or end of the interaction within which this metric should be evaluated.
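
As a rough mental model of how a Time Bound window restricts evaluation, here is a minimal sketch. It assumes chat interactions are lists of messages and voice interactions are lists of (offset-seconds, text) pairs; both are illustrative representations rather than the product's actual data model.

```python
def time_bound_window(turns, channel, parameter, first_part=True):
    # Select the slice of the interaction that a Time Bound metric
    # evaluates; everything outside the window is ignored.
    if channel == "chat":
        # Parameter is a number of messages from the start or end.
        return turns[:parameter] if first_part else turns[-parameter:]
    # Voice: parameter is a number of seconds from the start or end.
    total = turns[-1][0] if turns else 0
    if first_part:
        return [t for t in turns if t[0] <= parameter]
    return [t for t in turns if t[0] >= total - parameter]

# First 30 seconds of a voice call, or last 5 messages of a chat:
# time_bound_window(voice_turns, "voice", 30, first_part=True)
# time_bound_window(chat_msgs, "chat", 5, first_part=False)
```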
Agent Attribute (Optional): Lets you assign the evaluation metric to an agent attribute. A metric can be assigned to only one agent attribute.
By Speech: Configures a metric based on speech attributes such as cross talk, dead air, and speaking rate.

Name: Enter a name for future reference of the metric.
Speech Type: Provides the following options (the qualification logic is illustrated in the sketch after this list):
  • Cross Talk: If the Speech Type is Cross Talk, enter the maximum acceptable number of instances. If the number of instances exceeds the configured count, the metric fails. By default, the Cross Talk duration is two seconds, but you can customize the combination of instance limit and Cross Talk duration.
    • In the No. of Instances field, enter the maximum number of Cross Talk instances allowed.
    • Cross Talk Metric Qualification: An occurrence is evaluated as Cross Talk only if it is equal to or exceeds the configured Cross Talk duration. If the number of such instances is less than the number of instances configured for that Evaluation Form, the metric qualifies; if it exceeds the limit, the metric fails and the agent is penalized.

  • Dead Air: The period of silence during a contact center interaction when neither the customer nor the agent is speaking. By default, the minimum Dead Air duration is one second, with a maximum limit of 300 seconds; however, you can customize the combination of instance limit and Dead Air duration.
    • Dead Air Metric Qualification: An instance is counted as Dead Air only if it exceeds the specified Dead Air duration. The interaction qualifies for the metric if the number of Dead Air instances is within the acceptable limit set in the metric configuration; if the count exceeds the configured limit, the interaction fails the metric.
  • Avg. Speaking Rate: Evaluates the agent's average speaking rate over the interaction.
    • In the Words Per Minute (WPM) field, select the expected speaking rate; failure to adhere to the configured rate fails this metric.
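
The qualification rules above can be summarized in a short sketch. The default durations (two seconds for Cross Talk, one second for Dead Air) come from the descriptions above; the instance limits and the WPM tolerance band are illustrative assumptions, since the product lets you configure the limits and does not spell out the WPM comparison rule.

```python
def cross_talk_fails(overlap_durations, duration=2.0, max_instances=3):
    # An overlap is evaluated as Cross Talk only if it is equal to or
    # exceeds the configured duration (two seconds by default).
    instances = sum(1 for d in overlap_durations if d >= duration)
    return instances > max_instances  # True means the metric fails

def dead_air_fails(gap_durations, duration=1.0, max_instances=2):
    # A silence counts as Dead Air only if it exceeds the configured
    # duration (one second minimum, 300 seconds maximum).
    instances = sum(1 for g in gap_durations if g > duration)
    return instances > max_instances

def speaking_rate_fails(word_count, talk_seconds, expected_wpm=140,
                        tolerance=0.2):
    # Fail if the measured words-per-minute strays from the configured
    # rate by more than a tolerance band (the band is an assumption).
    wpm = word_count / (talk_seconds / 60.0)
    return abs(wpm - expected_wpm) > expected_wpm * tolerance
```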
By Dialog Task: Configures a metric based on adherence to the execution of dialog tasks.

Name: Enter a name for future reference of the metric.
Select Dialog Task: Select a Dialog Task from the drop-down list.
Count Type: Provides the following two options:
  • Entire Conversation: Checks adherence throughout the entire conversation.
  • Time Bound: Checks adherence within a certain time range of the interaction (see the sketch after this list).
    • Parameter: Select the section of the interaction to evaluate for this metric. If you select First Part of Conversation or Last Part of Conversation, enter the following details:
      • Voice: Enter the number of seconds from the start or end of the interaction within which this metric should be evaluated.
      • Chat: Enter the number of messages from the start or end of the interaction within which this metric should be evaluated.
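
A By Dialog Task check reduces to: was the selected dialog task executed within the evaluated window? A minimal sketch, assuming the interaction exposes its executed tasks as (offset-seconds, task-name) pairs, which is an illustrative representation:

```python
def dialog_task_adherent(executed_tasks, required_task, window=None):
    # window=None models Entire Conversation; a (start, end) pair in
    # seconds models a Time Bound check.
    if window is not None:
        start, end = window
        executed_tasks = [t for t in executed_tasks if start <= t[0] <= end]
    return any(name == required_task for _, name in executed_tasks)

# Entire conversation vs. the first 60 seconds of a voice call:
# dialog_task_adherent(tasks, "Verify Identity")
# dialog_task_adherent(tasks, "Verify Identity", window=(0, 60))
```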
Agent Attribute (Optional): Lets you assign the evaluation metric to an agent attribute. A metric can be assigned to only one agent attribute.
By Playbook Adherence: Configures a metric based on adherence to a playbook or to a specific playbook step.

Name: Enter a name for future reference of the metric.
Playbook Name: Select from the drop-down list the playbook against which the metric should evaluate adherence.
Adherence Type: Choose either Entire Playbook or Steps (see the sketch after this list):

  • Entire Playbook: Evaluates adherence across the entire playbook.
    • Adherence Percentage: Enter the minimum expected adherence percentage for the playbook. If adherence falls below the configured percentage, the metric fails.
  • Steps: Evaluates adherence to specific steps of the playbook. Selecting Steps provides the following options:
    • Stage: Select the stage under which the desired step for evaluation is configured.
    • Step: Select the desired step for adherence evaluation.
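
Putting the two Adherence Type options together, a minimal sketch of the pass/fail decision might look like the following; the step-adherence map and the 80% default are illustrative, with only the Adherence Percentage comparison taken from the description above.

```python
def playbook_metric_passes(step_adherence, adherence_type="Entire Playbook",
                           min_percentage=80.0, step=None):
    # step_adherence maps playbook step name -> adhered (bool); this is
    # an illustrative representation of the playbook evaluation result.
    if adherence_type == "Steps":
        return bool(step_adherence.get(step, False))
    # Entire Playbook: the share of adhered steps must meet the
    # configured Adherence Percentage, or the metric fails.
    percentage = 100.0 * sum(step_adherence.values()) / len(step_adherence)
    return percentage >= min_percentage

steps = {"Greet": True, "Verify identity": True, "Offer survey": False}
print(playbook_metric_passes(steps, min_percentage=80.0))  # 66.7% -> False
```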
Agent Attribute (Optional): Lets you assign the evaluation metric to an agent attribute. A metric can be assigned to only one agent attribute.

Edit Evaluation Metrics

Steps to edit existing Evaluation Metrics:

  1. Right-click any of the existing Evaluation Metrics (Name) to select it. The following screen appears with the Edit option.
    Edit Button

  2. Click Edit. The following dialog box appears, allowing you to update the required fields.
    Edit Metric Fields

  3. Edit the fields that you want to update.

    Note

    All the fields are editable except the Evaluation Metrics Measurement Type and Agent Attribute (Optional).

  4. Click Update to save the changes.