Teneo Developers

Class Performance

The options and functionalities described on this page currently only apply to Teneo Studio Desktop

Introduction

In the backstage of Teneo Studio Desktop, Class Performance provides developers with options to test and evaluate the performance of the solution's assigned intent classifier: Test Data Evaluation is available for both Learn and CLU intent models, while Cross Validation is available for Learn models only.

Read more about intent classification in Teneo, including Test Data Evaluation and Cross Validation, in the Conceptual Overview.

Class Performance window

The Class Performance is available in the backstage of Teneo Studio: Solution tab > Optimization > Class Performance.

When no tests have run previously, Teneo displays a message encouraging the developer to start a test evaluation by clicking Run in the upper-right area.
When at least one test has run previously, Teneo displays the results of the latest test evaluation in the Class Performance Table.

Class Performance allows the developer to:

  • Run, i.e., launch a new Test Data Evaluation
    Click the far-right part of the button to select between Run Test Data Evaluation and Run Cross Validation; when a new evaluation is launched, a progress bar indicates its progress
  • Last Evaluation provides details related to the latest test evaluation
  • View results either in the Class Performance Table or in the Confidence Threshold Graph
  • The test selector allows two successful tests of the same type to be compared (i.e., Cross Validation vs. Cross Validation or Test Data Evaluation vs. Test Data Evaluation):
    • the left-hand dropdown selects the results of a specific evaluation (defaulting, when the page is opened, to the latest successful evaluation)
    • the right-hand dropdown selects the results of another evaluation to compare against in the selected view
      Selecting a new test in the left-hand dropdown resets the right-hand dropdown automatically; it is not possible to select the same test results in the two dropdowns.

Class Performance

Class Performance Table

The Class Performance Table displays one row for each class, plus a single row with the average values across all classes at the bottom of the results (as shown in the image below).

For each row, the following columns are displayed:

  • Class Name: name of the class.
  • Precision / Recall / F1: the binary (one-vs-rest) classification metrics for the row's class, i.e., among all data examples whose ground-truth class is the row's class, predictions of that class are considered positive and predictions of any other class negative.
  • Examples: number of data examples of that class at the moment of execution of the Class Performance (i.e., either Test Data Evaluation or Cross Validation).
  • Conflicting Classes: shows the number of mistaken predictions for the row's class, which can be either False Positives (FP) or False Negatives (FN). The arrow at the end of the column unfolds a list of rows inside the cell; each row specifies a class that was confused with the row's class, the kind of error, and the percentage of classified data affected by that kind of error for that class.
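To make the one-vs-rest framing behind these metrics concrete, the table's per-class values can be reproduced from pairs of ground-truth and predicted labels. This is an illustrative sketch, not Teneo's internal implementation; the class names are invented:

```python
def class_metrics(pairs, cls):
    """One-vs-rest precision, recall, and F1 for a single class.

    pairs: iterable of (ground_truth, predicted) label pairs.
    cls:   the class treated as the positive label.
    """
    tp = sum(1 for truth, pred in pairs if truth == cls and pred == cls)
    fp = sum(1 for truth, pred in pairs if truth != cls and pred == cls)
    fn = sum(1 for truth, pred in pairs if truth == cls and pred != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical evaluation results: (ground truth, prediction)
pairs = [("Greeting", "Greeting"), ("Greeting", "Goodbye"),
         ("Goodbye", "Goodbye"), ("Goodbye", "Goodbye")]

print(class_metrics(pairs, "Greeting"))  # (1.0, 0.5, 0.6666666666666666)
```

The average row in the table corresponds to averaging these per-class values over all classes.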

More details are available in the conceptual overview of Intent Classification in Teneo.

Class Performance Table

Confidence Threshold Graph

The purpose of this view is to provide a tool for analyzing the estimated performance of the classes in the solution with regard to the confidence threshold setting; more details can be found in the conceptual overview of Intent Classification in Teneo.

The performance metrics can be interpreted in the following way:

  • Precision: measures the percentage of accepted inputs (classifications with a confidence above the threshold) that were rightfully accepted, i.e., correspond to data that was correctly classified by the model.
  • Recall: measures the percentage of correct inputs (data that was correctly classified by the model) that were accepted by the threshold.
  • F1: has the usual meaning as the harmonic mean of the other two metrics.
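The interplay between these metrics and the threshold can be sketched as follows. This is an illustrative example, not Teneo code; the confidence values are invented:

```python
def threshold_metrics(results, threshold):
    """Precision and recall at a given confidence threshold.

    results: list of (confidence, correct) pairs, where `correct` is True
    when the model's top prediction matched the ground truth.
    """
    accepted = [ok for conf, ok in results if conf >= threshold]
    correct_total = sum(1 for _, ok in results if ok)
    accepted_correct = sum(1 for ok in accepted if ok)
    precision = accepted_correct / len(accepted) if accepted else 0.0
    recall = accepted_correct / correct_total if correct_total else 0.0
    return precision, recall

# Hypothetical classifications: (confidence, was the prediction correct?)
results = [(0.95, True), (0.80, True), (0.60, False), (0.40, True)]

print(threshold_metrics(results, 0.7))  # (1.0, 0.6666666666666666)
```

Raising the threshold typically trades recall for precision: fewer inputs are accepted, but a larger share of the accepted ones are correct.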

Confidence Threshold Graph

Details

Test Data Evaluation

Test Data Evaluation validates the solution's intent model based on the available test data; the evaluation is more accurate when more test data is available, i.e., as test data in the Class Manager or as linked test examples.

  • Test Data Evaluation is available for Learn and CLU intent models
  • Each class should have at least one test data example (either directly added to the class or as linked test examples)
  • Only one Test Data Evaluation can run at a time
  • Duplicated test examples are removed from each class when running Test Data Evaluation.

Read more about Test Data Evaluation

Cross Validation

Cross Validation allows developers to test and estimate the performance of a Learn intent model directly in Teneo Studio without the need for test data. When performing cross validation, bear in mind:

  • Cross Validation is only available for Learn intent models
  • Each class must have a minimum of 5 training data examples
  • Only one Cross Validation can run at a time
  • The Cross Validation process is generally slow, on the order of minutes, and its duration depends mainly on the total number of training examples in the solution. This duration should never exceed 1 hour; if it does, the process is marked as failed. Note that this interval can be changed in the server settings.
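The general idea behind cross validation is to repeatedly train on one part of the training data and evaluate on the held-out remainder, so that every example is used for evaluation exactly once. The sketch below shows a generic k-fold split; it is purely illustrative and does not reflect how Teneo partitions the data internally:

```python
def k_fold_splits(examples, k=5):
    """Yield (train, held_out) splits for k-fold cross validation.

    Each example lands in the held-out fold exactly once; a model is
    trained on `train` and evaluated on `held_out` for every split.
    """
    folds = [examples[i::k] for i in range(k)]
    for i in range(k):
        held_out = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, held_out

# Hypothetical training examples for one class
examples = [f"example {n}" for n in range(10)]
for train, held_out in k_fold_splits(examples, k=5):
    print(len(train), len(held_out))  # 8 2, on every iteration
```

Because each of the k models is trained on only part of the data, the aggregated metrics are an estimate of how the full model generalizes, which is why no separate test data is needed.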

Read more about Cross Validation in Teneo

Troubleshooting

| Error type | Affected test | Error message | Implication | Resolution |
| --- | --- | --- | --- | --- |
| Warning | Test Data Evaluation, Cross Validation | Evaluation cannot be started with a single class | Test Data Evaluations and Cross Validations cannot run when only one class exists in the solution | Open the Class Manager to add more classes to the solution |
| Warning | Test Data Evaluation, Cross Validation | Failed to start the evaluation. An evaluation cannot be run on empty training data. | Test Data Evaluations and Cross Validations cannot run when no classes exist in the solution | Open the Class Manager to create classes |
| Warning | Test Data Evaluation | Failed to start the evaluation. An evaluation cannot be run on empty test data. | Test Data Evaluations cannot run when classes do not have test data (i.e., test examples or linked test examples) | Open the Class Manager to add test examples; alternatively, add positive examples of User Intent where Class Matches exist |
| Warning | Cross Validation | Evaluation failed to start. There are insufficient examples. A minimum of 5 per class is required. | Cross Validations cannot run when one or more classes have fewer than 5 training examples | Open the Class Manager to add more training examples to classes with fewer than 5 examples |