Create test cases

EARLY ACCESS

After creating a test group, add test cases to define the expected behavior of your agent. Each test case is one conversation or exchange between an end user and an AI agent.
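
Conceptually, a test case is an ordered list of conversation steps. The following TypeScript sketch shows one way to model that structure; the type and field names (`Step`, `TestCase`, `role`, and so on) are illustrative assumptions, not the platform's actual schema.

```typescript
// Illustrative model of a test case; all names are assumptions,
// not Infobip's actual schema.

// One step in the conversation: a user message, an expected agent
// response, or an expected tool/subagent call.
type Step =
  | { role: "user"; message: string }
  | { role: "agent"; message: string }
  | { role: "tool_call"; name: string }; // name of a tool or subagent

// A test case is one conversation: an ordered list of steps.
interface TestCase {
  name: string;
  steps: Step[];
}
```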

You can add test cases in the following ways:

  • Manually: Configure conversations step by step, defining each user message and expected agent response.
  • Saved conversations: Convert conversations saved during manual testing in the AI agent builder's Preview into reusable test cases.

Create test cases manually

Create test cases by configuring the conversations manually.

  1. On the test group page, select Add test case.
  2. In the right-hand panel, enter an example end user message and select the checkmark.
  3. Define the expected agent response by doing one or more of the following:
    • Enter the expected agent message. Select AI agent response and then select the checkmark.
    • Select an expected tool or subagent call. Select Tool call and choose either a tool or an agent from the list.
  4. Continue adding exchanges to define the entire conversation flow, or keep the test case to a single exchange.
    • Avoid adding multiple consecutive agent responses.
  5. (Optional) For each message, tool, or subagent in the conversation, you can do the following:
    • Modify the message or select a different tool or subagent: Select the pencil icon.
    • Remove the message, tool, or subagent: Select the delete icon.
    • Add a new message, tool, or subagent below the current one: Select the plus sign (+) and then select the required option.
  6. When the test case is complete, select Save.
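
Put together, a manually configured test case might look like the following sketch, which reuses the illustrative `TestCase` and `Step` types from the earlier example. The flight-booking scenario and all identifiers are hypothetical.

```typescript
// Hypothetical test case, built step by step as in the procedure above.
const bookingTest: TestCase = {
  name: "Book a flight - happy path",
  steps: [
    { role: "user", message: "I want to book a flight to Berlin next Friday." },
    { role: "tool_call", name: "searchFlights" }, // expected tool call
    {
      role: "agent",
      message:
        "I found three flights to Berlin on Friday. The earliest departs at 07:45. Should I book it?",
    },
    { role: "user", message: "Yes, book the earliest one." },
    { role: "tool_call", name: "createBooking" }, // expected tool call
    { role: "agent", message: "Done! Your flight is booked for Friday at 07:45." },
    // Note: no two consecutive agent responses, per step 4 above.
  ],
};
```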

Add from saved conversations

Use conversations that you saved during manual testing in the AI agent builder's Preview.

  1. On the test group page, select the menu icon (three dots).
  2. Select Add saved conversations.
  3. Select the conversations you want to add and select Add.

Each selected conversation is added as a separate test case to the test group.


Next steps

  • Run tests: Validate agent behavior by running the test cases in your test group.

Test case recommendations

Create comprehensive test cases that cover the following:

  • All expected user journeys: Successful scenarios and standard interactions that represent normal agent use.
  • Non-standard input: Unusual or unexpected input from end users to test agent resilience.
  • Negative scenarios: API failures or missing data to verify the agent handles errors gracefully.
  • Error-handling situations: How the agent responds when something goes wrong during a conversation.
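
As a rough illustration, the coverage categories above might translate into test cases like the following; the scenarios, tool names, and messages are hypothetical examples (again using the illustrative `TestCase` type), not prescribed content.

```typescript
// Hypothetical examples mapping coverage categories to concrete test cases.
const coverageExamples: TestCase[] = [
  {
    // Expected user journey: the standard, successful flow.
    name: "Order status - standard flow",
    steps: [
      { role: "user", message: "Where is my order 1234?" },
      { role: "tool_call", name: "getOrderStatus" },
      { role: "agent", message: "Order 1234 is out for delivery and should arrive today." },
    ],
  },
  {
    // Non-standard input: the agent should recover gracefully.
    name: "Gibberish input",
    steps: [
      { role: "user", message: "asdf qwerty ???" },
      { role: "agent", message: "Sorry, I didn't understand that. Could you rephrase?" },
    ],
  },
  {
    // Negative scenario / error handling: the lookup finds nothing.
    name: "Unknown order number",
    steps: [
      { role: "user", message: "Where is my order 9999?" },
      { role: "tool_call", name: "getOrderStatus" }, // assume this returns no data
      { role: "agent", message: "I couldn't find that order. Could you check the order number?" },
    ],
  },
];
```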

Use these test cases for:

  • Acceptance criteria: Show that the AI agent system works as defined.
  • Performance benchmarking: Measure the quality and efficiency of agent responses over time.
  • Regression testing: Ensure that updates do not break existing agent features or behavior.
