___

There are two ways to test an AI agent:

- **[Preview](https://www.infobip.com/docs/agentos-ai-agents/ai-agents/test-ai-agent#preview)** - Available directly in the [AI agent builder](https://www.infobip.com/docs/agentos-ai-agents/ai-agents/configure-ai-agent). Send messages to the agent and review responses in real time. Use this during development to quickly check behavior after making changes.
- **[Quality center](https://www.infobip.com/docs/agentos-ai-agents/quality-center/ai-agents/overview)** - A separate page for running predefined test cases in bulk. Create test groups and test cases that define expected agent behavior, then run them to verify consistency and catch regressions.

___

## Preview

Use **Preview** to interact with your agent in real time while you configure it.

To start a preview:

1. Open your agent in the [AI agent builder](https://www.infobip.com/docs/agentos-ai-agents/ai-agents/configure-ai-agent).
2. Select the **Preview** tab.
3. Type a message in the **Chat** panel and send it to the agent.

The **Chat** panel has the following options:

- **Restart** - Reset the conversation and start a new session.
- **Show logs panel** - Open a side panel that shows detailed logs for each agent response, including tool calls and subagent calls.
- **Options** menu:
  - **Save as test case** - Save the conversation to reuse as a test case in [Quality center](https://www.infobip.com/docs/agentos-ai-agents/quality-center/ai-agents/overview).
  - **Enter user destination** - Simulate a user coming from a specific destination.

### Save as test case [#save-as-test-case-preview]

You can save a preview conversation as a test case to reuse in [Quality center](https://www.infobip.com/docs/agentos-ai-agents/quality-center/ai-agents/overview).

1. After exchanging one or more messages with the agent, select **Options** > **Save as test case**.
2. Enter a name for the conversation.
3. Select **Save as test case**.

The saved conversation becomes a test case, with the messages you sent as the inputs and the agent's responses as the expected responses.
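Conceptually, a saved test case pairs each message you sent with the response the agent gave at save time. A minimal sketch in Python of that shape (the field names and structure here are illustrative assumptions, not Infobip's actual schema):

```python
# Illustrative only: field names are assumptions, not Infobip's schema.
test_case = {
    "name": "Order status inquiry",  # the name you enter when saving
    "turns": [
        {
            "input": "Where is my order?",              # message sent in Preview
            "expected_response": "Let me check that.",  # agent reply at save time
        },
    ],
}

# Each turn must pair an input with an expected response.
assert all({"input", "expected_response"} <= set(t) for t in test_case["turns"])
```

When the test case later runs in Quality center, the agent's new responses are compared against these saved expected responses, which is how regressions are caught after configuration changes.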

___

## Quality center

For structured, repeatable testing, use [Quality center](https://www.infobip.com/docs/agentos-ai-agents/quality-center/ai-agents/overview) to create and run test groups before publishing your agent.

- **[Run tests](https://www.infobip.com/docs/agentos-ai-agents/quality-center/ai-agents/run-tests)** - Send test messages manually or run automated test groups to validate agent behavior.
- **[Create test group](https://www.infobip.com/docs/agentos-ai-agents/quality-center/ai-agents/create-test-group)** - Organize test cases into groups for a specific AI agent or use case.
- **[Create test cases](https://www.infobip.com/docs/agentos-ai-agents/quality-center/ai-agents/create-test-cases)** - Define individual test scenarios with inputs and expected agent responses.
- **[Review test results](https://www.infobip.com/docs/agentos-ai-agents/quality-center/ai-agents/review-test-results)** - Analyze pass and fail outcomes and identify areas where the agent needs improvement.