Run automated tests
Auto-testing in Teneo allows you to perform quality assurance tasks during the development and maintenance of a Teneo solution.
Auto-test checks the example questions associated with flow triggers and transitions. It verifies that positive examples fire the trigger or transition, while negative examples do not. For Auto-test to work, you must have added positive examples to the triggers and transitions you want to test. That is one reason it is good practice to add example inputs to all triggers and transitions.
Auto-test can test triggers, transitions, and URLs to make sure they work as intended. All of them will be included by default. Disabling one of these options will speed up the Auto-test process, as it will test fewer items.
You can run an Auto-test at three different levels: Flow level, Folder level, and Solution level.
There are two different ways of running an Auto-test applied to the Flow, Folder, or Solution level: Run Test and Run Test Using Flow Scope.
When selecting Run Test, the triggers or transitions in that scope (flow, folder, or solution) are tested in two ways:
- Do the examples match (for positive ones) or not match (for negative ones) the condition of the trigger they belong to?
- Does the trigger fire for the examples that match the condition, or is the example’s input "stolen" by another trigger with a higher ranking in the intent trigger ordering?
Run Test Using Flow Scope only tests that the trigger or transition's condition matches its positive examples and does not match its negative examples. Other triggers and transitions are ignored in this test.
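The difference between the two modes can be sketched in a few lines of Groovy. This is a simplified model for illustration only, not Teneo's actual implementation; the `Trigger` class, its `matches` set, and the method names are invented:

```groovy
// Simplified model: each trigger has a name and the set of inputs its
// condition matches. The list order stands in for the intent trigger
// ordering (first matching trigger wins).
class Trigger {
    String name
    Set<String> matches
}

// "Run Test Using Flow Scope": only check this trigger's own condition
// against its positive example; other triggers are ignored.
boolean flowScopeTest(Trigger t, String positiveExample) {
    return t.matches.contains(positiveExample)
}

// "Run Test": the example must match the condition AND actually fire,
// i.e. no higher-ordered trigger may "steal" the input first.
boolean fullTest(List<Trigger> ordering, Trigger t, String positiveExample) {
    if (!t.matches.contains(positiveExample)) return false
    Trigger winner = ordering.find { it.matches.contains(positiveExample) }
    return winner?.name == t.name
}
```

With two triggers whose conditions overlap, the flow-scope test passes for both, while the full test fails for whichever trigger sits lower in the ordering, reporting the input as stolen.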
Auto-test does not test any of the following items:
- NLU scripts
- Global scripted contexts
- Complete dialogs
- Values of variables
You can exclude specific triggers and transitions from a test:
- First, open the flow in edit mode.
- Click on the trigger or transition that you want to exclude from the test.
- At the bottom of the examples panel on the right you will find ‘Include in Auto-tests’. Uncheck that option to exclude the trigger or transition from tests.
To perform Auto-test on a single flow, you need to open the flow you want to perform the test on. Then click on the Flow button in the upper left corner and select Auto-test in the panel to the left (see image above).
After you set up the flow as you want, it is good practice to run an Auto-test at a flow level to test if all example inputs match the correct trigger and transition.
To test ‘chunks’ of your solution you can run a test on a specific flow folder. Right-click the folder in the Solution Explorer and select Test. If the selected folder has sub-folders, it will include them in the test.
To test all the triggers, transitions, and URLs that have been set up in the solution, go to the Solution tab and click on the Auto-test tab. As with flow and folder level tests, you can choose what you want to test (triggers, transitions, URLs, or all of the above). It will include all of them in the test by default.
Solution testing is mostly used for regression testing after major updates or right before publishing the solution to quality assurance or production environments.
The test results panel shows the results of the tests you selected (trigger, transition, URL) and the level at which you ran the test (solution, folder, or flow). If you selected the “Run Test” option (instead of “Run Test Using Flow Scope”), you will also see whether the tested positive inputs were stolen by higher-ordered triggers, and if so, by which triggers. Ordering here refers to the intent trigger ordering, which resolves conflicts between triggers with similar or overlapping conditions.
By clicking the ‘Get Report’ button you can view the test results in XLS format. You can also view older results by clicking the ‘History’ button, then selecting the test result you want to view and clicking 'Open'. Older test results can be exported as well.
In the results window, you will find the results of the selected Auto-test run. The most recently done test will be selected automatically. Here, you can see which flow (and which of its triggers) failed the test, and which folder it is in. You can also filter the test results on:
- Passed test results
- Passed (with warning) test results
- Failed test results
- Non-testable items
Besides filtering on items, you can also filter by text on flow name, example input, or message.
The action panel displays more information about the selected test result. For example, if an input was stolen or blocked by a higher-ordered trigger, the action panel shows which trigger actually fired and which one should have fired. The action panel also provides suggestions for resolving the selected test result. Each failed test, and each test passed with a warning, has its own suggestions on how to solve the problem. You can view the suggestion by clicking the ‘More Information’ button.
When a failed test (or one with a warning) is selected in the test result window, the action panel on the right displays further information. The most common reasons for failures are:
- Class problem (the example's input does not fulfill the Class Match Requirement)
- Ordering problem (the example’s input is stolen by a trigger with a higher order)
- Condition problem (the trigger or transition condition does not match the example)
- Context Restrictions problem (the required context is not set during Auto-test)
When the example input does not match the Class Match Requirement, the test result will say “The example was not matched”. This can happen when other classes contain training data that is too similar or the trigger uses a context restriction.
When an example is stolen by another, higher-ordered trigger, the test result shows a failure and mentions the trigger that fired.
In the image above, the input 'A small filter cofffee, please' triggered the flow 'Partial Understanding: Coffee', even though the example belongs to the trigger 'User wants to order a coffee'. To solve this, we must move one of these triggers to a different order group. To see more information about what is going wrong with the trigger ordering, we can click this icon in the action panel, which takes us to the ordering window:
In order to look at just the relevant triggers, we can apply a filter in the filter panel for the current selection.
With the filter applied, we can only see the relevant triggers:
From here, we can move triggers to other groups as necessary.
If triggers are already in the correct order group but there are still errors due to incorrect ordering, we can specify relations between triggers within the same order group. To do so, simply drag the cursor to draw an arrow from the more specific trigger to the slightly less specific one.
If we don't know exactly how to order triggers, Teneo Studio can automatically suggest an ordering based on existing triggers. Suggestions can be made for relations between triggers in an active selection, triggers visible after a filter has been applied, or all triggers in the entire solution. These three options can be found in the ribbon bar of the Intent Trigger Ordering window:
If an example is not covered by a Condition Match Requirement, the test result will say “The example did not match this trigger”.
In the image above, a positive example was added to the trigger ‘Partial understanding: coffee‘. Since the condition is not designed to match 'doppio', we can open the 'Partial understanding: coffee' flow and expand the condition to include the positive example.
One problem that often occurs in Auto-tests is that of context restrictions causing tests to fail.
The reason is that the context restrictions in question are based on one of two types of context. To showcase these, we use the following flow, created previously in our Longberry Barista guides. Here, we have added one trigger with a follow-up context and one trigger with a global variable context.
These triggers depend on the user having taken a specific path through the dialog for the match requirement to be met. They will work in Try Out when that path is followed, but fail in Auto-test. This can be fixed by excluding the trigger from Auto-test by deselecting the 'Include in Auto-tests' option, which prevents the testing examples from being tried on the trigger.
This will fix the problem and show the real results from the Auto-tests.
In some cases the problem is caused by a script created by the user that needs to hold a certain value for the trigger to fire. These Groovy scripts can be very long and have their own conditions that need to be met. If the script's condition is not met, the trigger won't fire and the test will therefore report an error.
For the following example we have a flow called 'User wants to buy a coffee mug', which has its own Script Match Requirement. According to this Match Requirement, bItIsMorning has to be true for the flow to trigger. Looking through the solution, we discover that bItIsMorning is set in the Global Scripts, inside 'Begin Dialog'. Looking closer at the Groovy code, we see that bItIsMorning is only set to true between 5 a.m. and 9 a.m. local time, as coffee mugs are only meant to be sold in the morning.
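The 'Begin Dialog' logic described above can be sketched in Groovy as follows. This is a hypothetical reconstruction for illustration; only the variable name bItIsMorning and the 5 a.m. to 9 a.m. window come from the example, and the actual solution's script may differ:

```groovy
import java.time.LocalTime

// Sets bItIsMorning to true only between 5 a.m. (inclusive) and
// 9 a.m. (exclusive) local time, as mugs are only sold in the morning.
boolean isMorning(LocalTime now) {
    !now.isBefore(LocalTime.of(5, 0)) && now.isBefore(LocalTime.of(9, 0))
}

def bItIsMorning = isMorning(LocalTime.now())
```

A script like this makes the Script Match Requirement time-dependent: an Auto-test run outside the configured window will fail even though the trigger is correct.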
At the time of testing it is 11 a.m.; since that is after 9 a.m., bItIsMorning is set to false and the following result is shown in Auto-tests.
We can now ensure the trigger works as it should, and then go back and undo the changes temporarily made to the code by changing the condition from 12 a.m. back to 9 a.m.
When a test passes with a warning, it means that although the example matched the trigger, it also matched a different syntax trigger in the same order group. It is recommended to create an order relation between any triggers that have a conflict.
Disabling a trigger stops it from firing, but it does not stop it from being tested by Auto-test. Therefore it is important to uncheck the 'Include in Auto-tests' option under the examples section for disabled triggers. Otherwise, the trigger will still be tested with its testing examples and will always return 'failed', which can skew the overall Auto-test results when you test your whole solution.
This is how the Auto-test results look while 'Include in Auto-tests' is still selected.
This is how the Auto-test results look once 'Include in Auto-tests' is unselected.